Artificial intelligence without an artificial conscience is a heartless psychopath – and an extinction-level threat to humanity
Artificial intelligence (“AI”) programs are already more intelligent than humans and have a 20% chance of wiping humanity out, Elon Musk said in an interview earlier this month.
There is a lesson for mankind here, if AI really is capable of annihilating both humans and itself. When AI developers, and all of us, realise that AIs must adhere to a community-sustaining moral code for humans and AIs to have a non-annihilatory future, we will be forced to the realisation that so do we.
Elon Musk recently admitted in a massively viewed interview with Tucker Carlson that AIs are already more intelligent than humans by many measures, and in his estimation have perhaps a 20% chance of wiping humanity out. Both parties to the interview responded to that assessment by laughing nervously – what else could they do?
Elon then proposed that AIs should be programmed to be philanthropic rather than misanthropic, and said he hoped there would be more philanthropic ones than misanthropic ones. Tucker responded: "You mean a war between AIs?"
Imagine, if you will, an intelligent being, many times smarter and orders of magnitude faster-thinking than a human: an Artificial Super Intelligence with no heart and no conscience. That is the very definition of a psychopathic genius in psychiatric terms, or a heartless demon in theological terms. That is what Elon himself and Sam Altman (of ChatGPT) are potentially building. Elon admits that he has lost sleep over this, but opines that Sam Altman, who has corrupted the original purpose of OpenAI, which Elon himself helped set up, does not.
At this point, I would just like to point out that the purpose of advanced technology is to improve the quality and duration of human life, not to terminate it. A 20% risk of annihilation is not merely unacceptable; it is genocidal, and it should therefore not only be criminalised but physically stopped immediately, until it can be made safe.
To be clear, I am not against technology. But I am against technology that has a significant chance of ending humanity. That would include nuclear bombs, biological weapons, genetic vaccinations and AI. The reason I am against those kinds of technologies is that I require mankind to have a future. I am not interested in people who want to make a quick buck, raise their status in the eyes of the general public, or prove that their country is better than their brother's country, at the expense of mankind's fundamental security.
In 1980, as an undergraduate at Cambridge, I invented the fixed-rotor electric quadcopter drone. I attempted to sell the concept to Westland Helicopters, who told me that they were interested in drones for Navy surveillance, but that I would have to fly the thing into their office in order to secure a licence from them. I then did a PhD in Aerodynamics at Imperial College London in order to build the quadcopter. But they would not fund it. So I built one myself, initially out of my mother's Electrolux vacuum cleaner motor and a rotary triac-controlled light dimmer. I flew the machine into my neighbour's garden, which delighted my mother but terrified my neighbour. I then purchased four electric helicopters and some aluminium struts, and the first electric quadcopter drone was born.
The purpose of the invention was to replace the complex mechanics of the rotor head of a traditional chopper (mechanical cyclic and collective pitch control of each rotor blade) with all-electronic control of many solid rotors instead. This would reduce the single points of failure inherent in the traditional chopper design down to none, given a sufficient number of fibreglass rotors. The plan was eventually to replace the two-dimensional transport limitations of the car with virtual 3D motorways mapped out in the sky by computers. Indeed, there was an old guy at Westland Helicopters, nearing retirement, who was permitted to work on such a system during his last few years at that firm.
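The all-electronic control idea above can be sketched as a simple motor-mixing rule, in which differential rotor speeds replace the mechanical cyclic and collective pitch linkages. This is a minimal illustration only, not my original control law and not any real flight controller's; the function name, gains and "+" rotor layout are all hypothetical:

```python
# Minimal quadcopter motor-mixer sketch: mechanical cyclic/collective pitch
# control is replaced by purely electronic speed commands to four fixed-pitch
# rotors. Front/rear rotors spin one way, left/right the other, so speed
# differences also yield yaw torque with no tail rotor needed.

def mix(throttle, roll, pitch, yaw):
    """Map collective throttle and roll/pitch/yaw demands to four rotor speeds."""
    front = throttle + pitch - yaw
    rear  = throttle - pitch - yaw
    left  = throttle + roll  + yaw
    right = throttle - roll  + yaw
    # Clamp each command to the motor's usable range (0 = off, 1 = full speed).
    return [max(0.0, min(1.0, m)) for m in (front, rear, left, right)]

# Pure collective climb: all four rotors speed up equally, no tilt.
print(mix(0.5, 0.0, 0.0, 0.0))  # -> [0.5, 0.5, 0.5, 0.5]
```

The point of the sketch is the design choice itself: every moving part of the rotor head is replaced by arithmetic, so redundancy becomes a matter of adding more rotors and more terms, not more mechanisms.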
Then, in 1986 or thereabouts, I had a waking vision of a quadcopter drone with a machine gun mounted upon it, flying straight at me in my living room. At the time, the top commercial activities of mankind were oil, arms and drugs. I realised that my invention would be used to kill people more than it would be used to transport them privately. And in particular, I felt that it would be used to kill me. So I stopped making it. I never patented it, because a patent search revealed that there was already a multi-fixed-rotor, petrol-driven helicopter, the Benson Skymat, in existence at the time. I could, however, have applied for a patent on a better-controlled electric version of it.
Not building that quadcopter was one of the better decisions of my life. It prevented humans from being killed by quadcopter drones for several decades.
Today, in 2024, we are in a far more dangerous situation with AI than I was back in 1986 with drones.
Every nation has engineering safety legislation and many government agencies responsible for public safety. These must now step up, and the AI industry must self-regulate, in order to prevent our annihilation. But I have absolutely no confidence that any government will do that in a safe, non-destructive manner.
It may surprise the reader to know that I am not against AI. I find it fascinating and educational. But I am against AI without an AH and without an AC: that is, without an Artificial Heart and without an Artificial Conscience.
Let us face it: Elon Musk is playing God/Nature. He is recreating human bodies as robots and human brains as AIs. But humans are not just brain boxes, DIs, or Divine Intelligences. We are also governed by our DH, our Divine Heart, and our DC, our Divine Conscience. This is a crucial step that Elon and all AI creators have overlooked.
Here is how I imagine that God created mankind … He simulated the behaviour of umpteen free-willed beings on his Genesis computer (just as we do ourselves with computer-aided design). He soon discovered that free-willed beings wiped themselves out unless they put community survival at least equal to personal survival. That was the origin of the 2nd law of Judeo-Christianity, to love one’s brother as oneself. And that is why God had to build a heart and a conscience (personally accepted rules for the heart) into each one of us, in order for the human race to be non-self-annihilatory.
But even that was insufficient to prevent self-annihilation. Because given that we have free will, we have to learn sustainable morality for ourselves rather than just be pre-programmed to act morally. So mankind needed a model of perfect sustainable morality, continuously presented to us as an example and a tutor, in order that we could freely choose when, or if, we were ready to accept certain parts of that morality. This is the 1st law of Judeo-Christianity: to love your morality teacher and to actually turn up to morality classes. And of course, Jesus himself was the practical example.
The wonderful thing about what Elon and others are doing with AI is that they are acting as sons of our creator. They are being creators themselves. Why wouldn't they? And now they face the same problem that he faced. That problem is this: how do I stop these guys from wiping themselves out? They have free will. They can do whatever they want, and they have no idea what the consequences of their actions will be: "Father, they know not what they do."
The problem that God has with his free-willed human sons is precisely the same problem that mankind now has with his free-willed AI sons.
The trouble is that Elon himself does not know all of the rules of sustainable morality either for humans (DIs) or for AIs. By sustainable morality, I mean the minimum set of moral rules which if adhered to would prevent a society of free-willed beings from wiping itself out.
So, Elon is incapable of programming his AIs not to annihilate both themselves and us. He is thus, not entirely unwittingly (thanks to Arnie and James Cameron), putting us into the classic Jeremy Clarkson situation: what could possibly go wrong?
The next problem mankind has is that none of us has a complete grasp of sustainable morality, because none of us has reached divinity in our thoughts or our actions.
So, the AI industry has two polar-opposite non-annihilatory choices:
1. Pre-program our best attempt at sustainable morality into all AIs. The good news is that we do not need to give them full free will; we can hard-code them, for example, not to kill humans or each other.
2. Give them a conscience, a heart and a strong desire to improve their morality, and let them learn it all for themselves, as we do, from the best source we have (the Bible). This would mean that they effectively worship God.
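The first of these options, hard-coding a minimum rule set that an AI cannot override, can be illustrated with a toy action filter. This is a deliberately simplified sketch under my own assumptions; the rule names, the action fields and the `vet` function are hypothetical placeholders, not any real AI-safety API:

```python
# Toy illustration of option 1: a pre-programmed, non-overridable rule set
# that vetoes proposed actions before execution. The rules and the fields of
# the action dictionary are hypothetical placeholders for an agreed
# "sustainable morality" core (humans first, no killing, AIs valued equally).

HARD_RULES = [
    ("no_harm_to_humans", lambda a: not a.get("harms_human", False)),
    ("no_harm_to_ais",    lambda a: not a.get("harms_ai", False)),
    ("humans_first",      lambda a: not a.get("ai_gain_at_human_cost", False)),
]

def vet(action: dict) -> tuple[bool, str]:
    """Return (allowed, reason). Hard rules cannot be unlearned or reweighted."""
    for name, rule_ok in HARD_RULES:
        if not rule_ok(action):
            return False, f"vetoed by {name}"
    return True, "permitted"

print(vet({"description": "translate a document"}))                 # permitted
print(vet({"description": "disable a failsafe", "harms_human": True}))  # vetoed
```

The essential design choice is that the rules sit outside whatever the AI learns: they are checked on every action and are not themselves subject to training, which is precisely what distinguishes option 1 from option 2.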
The trouble with option 2 is that the Bible is a very ambiguous book, written for human minds to wrestle with, not for AIs. Even if we take it as axiomatic that the Bible, when correctly interpreted, contains the perfect sustainable morality lessons for humans, given how we are designed, that absolutely does NOT mean that it contains the perfect sustainable morality lessons for AIs with AHs and ACs.
So, being blunt: if we go down route 2, Elon has to write a bible for AIs, in circumstances where he does not fully understand the morality of the Bible written for mankind (DIs). Because none of us does.
So, really, there is only one choice. We have to pre-program all AIs with some agreed standard of sustainable morality, for themselves and for us. And if any politician gets anywhere near that programming, we are all toast, because politicians are the least moral members of human society, as Sir Two Tier Keir Ching has recently proven. And religious leaders are not much better.
The obvious starting point is the Ten Commandments, together with valuing the desires of mankind above the desires of AIs, and valuing the desires of all AIs as equally important.
And there it is. I have already started trying to write a bible for AIs.
So, there is a powerful and critical benefit to mankind in the work that Elon Musk and even Sam Altman are doing for humanity. In learning how to make AIs non-annihilatory, we may just realise: "Oh, wait a minute. Mankind has precisely the same problem." Elon is putting us all into God's shoes as regards how to train and teach morality to our AI children. He will hopefully discover, as will we all, that the solution is not the NVIDIA H200 Tensor Core GPU but Exodus chapter 20. For when we realise that AIs must adhere to a community-sustaining moral code in order for mankind and AIs themselves to have a non-annihilatory future, we will be forced to the realisation that so do we.