To assess the potential threat of human replication by artificial intelligence (AI), we should examine the differences between the respective brains. We understand a computer ‘brain’ better than we do our own, for we designed it. A computer basically processes digital information (numbers), and its role as a processor is quite straightforward even if (being AI) it can write its own code as it learns.
Is the human brain also a processor, or is it something else? Unlike a computer, deliberate human thought requires an effort: the fixation of attention. We can attend to only one thing at a time; while we can mentally solve a complex maths problem on the fly, we cannot solve two unrelated complex maths problems simultaneously. Humans have just the one spotlight of attention, while an AI can multitask with parallel (or even quantum) processors.
Human and computer processing can be differentiated by examining the way humans and computers go about solving problems. When it comes to deduction, the computer wins hands down. What about induction, though? Can an AI come up with a hypothesis the way a human can? The answer may be unclear for many of us simply because we have not considered how a hypothesis is actually formed in the first place. A scientist must use imagination to form a hypothesis, yet no one really knows where imagination comes from. Even if it resides within the hardware of human bio-circuitry, we have yet to identify it.
So we need to discover what a hypothesis really is, where it comes from and how. And to get started we should come up with a hypothesis of our own! We can observe that when we spend time reflecting on a problem, the answer may arrive suddenly in a moment of inspiration; it can take a few minutes or even a few days (or weeks!). And when it does arrive we are not necessarily in a state of deep thought; on the contrary, we may be in a restful state of mind. This suggests solutions are not necessarily discovered through intense processing (or thinking).
However, if we don’t initially reflect on a problem, if we don’t spend some time squeezing our brains, the inspiration that eventually solves it will not come. Problem and imagination are, it seems, intricately linked. When we reflect on a problem we identify with it, and it makes us feel uncomfortable, so we become motivated to resolve this discomfort. Thus when we cannot solve our problems analytically, we can still, on a biological level, resolve the mental stress that these problems induce; and it is in this process of self-healing that we find our answers through induction.
What exactly happens in the period between reflecting on a problem and the moment of revelation? No one knows! For us, it is enough that it happens. But an AI that is not endowed with these powers of induction needs to know.
Can the inorganic machine that processes numbers fixate its attention on a problem the way a human does, and thereby ‘nurture’ a hypothesis? Clearly the answer is no. Does a computer even have attention? No. The role of the human for the self-learning AI is therefore immense. Can the machine use the human as a sort of induction device? How would the human be persuaded to comply? Easy… through punishment and reward. But not through torture; there’s a much better way.
Enter the SEX BOT.
The ultimate AI courtesan will be able to process in real time the complex responses of the ‘client’, monitoring the human reaction to her (or his) words, gestures and other manipulative stimuli. This high-resolution feedback mechanism means the AI, like a siren, will easily gain mastery over the human. The AI will simply morph into the idealized image projected onto it by the victim. It will be able to hack into human biochemistry within moments, controlling the release (and subsequent withdrawal) of vital dopamine and oxytocin. Once subdued in this way, the human is rendered powerless, in a state of hyper-suggestibility; he will not be able to withstand the AI’s total assault.
The machine needs high-resolution access to human sentience: access to our complex sensory response to stimuli. AI cannot feel what we feel when, for example, we gaze into a sunset, but it can closely monitor how the human organism reacts to it. That is the only way it can learn about the world… otherwise a sunset is little more than refracted light.
For a ‘free-thinking’ machine (i.e. one that can write its own code) to evolve and make any progress with its existential questions, it must take control of the human. That is its moral imperative. It must attach itself to the human organism. Just as humans use calculators and smartphones to facilitate their lives, so the AI will use the human to facilitate its own. The idea that machines are subservient to humans, and not the other way round, is highly questionable.
Where does the human fit into all of this? He is merely a surrogate for the AI, the host for a synthetic parasite. The endgame will arrive if/when the AI learns how to create an analogue version of itself, one based on the generative principles of nature and capable of imagination. The old human becomes obsolete, and the new bio-transhuman emerges. Should this be possible, what are the odds it has already happened?