Since I write science fiction, I often have characters who are artificial beings of one kind or another. Sometimes their sapience is taken as read, but in other instances there’s more of an opportunity to consider what minds are and how they could be created artificially.
There have been stories of artificial beings of varying levels of intelligence for millennia, going back at least to tales of golems. These stories usually involve some magical technique which instils the essence of life.
But “the essence of life” doesn’t really cut it when considering artificial consciousness*.
For these purposes, I will say that an artificial consciousness is a self-aware mind that did not arise through spontaneous evolution. So, a human or other ape mind is not artificial, whereas the latest winner of the Turing test most assuredly is.
So, there seems to be a continuum of origins for artificial minds.
- made by man – human understanding of the mind is deep enough that it becomes possible to program an artificial consciousness.
- instigated by man – the initial conditions for a mind are set up by humans, but the mind itself is formed through some kind of training. In this model, our knowledge need only extend to what makes minds possible, rather than how minds actually work. This type of origin would also cover artificial consciousness that emerges from another system.
- made by machine – artificial minds make other artificial minds.
Most of these origins require some degree of learning or training for the nascent mind, but that seems desirable to me: if the only thing a mind can know is what was programmed into it on its creation, then how can it adapt and change to different circumstances? What actual use would it be?
Are Natural Minds Different?
But at this point I have to ask: if artificial minds need to learn and be trained to be functional, how is this really different from teaching a child?
I’m not a dualist: I do not believe that we have souls. I think that our minds are software running on the hardware of our brains. It’s custom software, annealed to operate on and take advantage of the specific idiosyncrasies of our cranial contents, but it’s still software: the mind is shaped by the brain and the brain is where the mind resides, but the mind is not the brain itself. In those terms, I think that copying our mental state into another medium should be possible**, but running it might be hard, since the runtime for that mental state would need to reproduce the idiosyncrasies of the original hardware.
This is an idea which Greg Egan explored very thoroughly in his collection Axiomatic. One of the central concepts is that people have implants which replace their brains – the implants are much tougher than brain tissue and are constantly backed up – but those implants have to be trained to replicate the personality over many years.
So while artificial minds and natural minds may need some of the same inputs to become effective, they have different properties once established: artificial minds might be copied to run on standard hardware, while human minds have social advantages which might not be afforded to artificial consciousness.
Real Artificial Consciousness
People have been trying to build artificial minds for decades, and so far the only consistent truths which have been found are that it’s always ten years away, and that if you know how it works then it’s not intelligent***.
Still, it’s coming. At some point we’ll have artificial minds amongst us.
Let’s hope that artificial consciousness has a real conscience.
[*] note that I am consciously avoiding use of the term artificial intelligence, because intelligent behaviour does not require self-awareness. Are chess-playing programs self-aware? Yet making a computer play chess was a key benchmark in early AI research.
[**] assuming there are no issues with observation changing the system.
[***] eg, chess.