Why are computers not sentient yet?
Computers have not been around for very long (although perhaps longer than one might think). They are now, however, ubiquitous and clearly here to stay. This has something to do with their universality, which is also why they keep getting more and more powerful. Given how powerful they already are, it is in a sense flattering that no machine (that we know of) has yet reached consciousness. But how long will this last? How will we know when a machine really becomes conscious? What does this word even mean? How will this new consciousness arise? And what will happen next?
The usual answers to these five questions are, in order:
1. not long;
2. with a Turing test or a variant of it;
3. we don't know, but for the Turing test we don't need to know;
4. we have no idea;
5. something big, hopefully nothing bad.
On the last two questions, however, screenwriters are not short of ideas.
My personal all-time favourite scenario comes from a mid-80s movie, "Electric Dreams", where a Commodore computer percolates to consciousness after champagne is spilt on its keyboard. What happens then? I can't recall exactly, but eventually the owner gets laid. Loads of movies share this simple plot of a machine that has become, or is becoming, sentient. We have of course Terminator's Skynet, 2001: A Space Odyssey's HAL (which, by the way, is IBM shifted by one letter), or obviously movies like I, Robot.
As for books, it seems you simply can't be a science fiction writer if you don't have your own story. Probably the best I've read is in Dan Simmons's Hyperion, where the so-called "TechnoCore" has evolved from the very real and ongoing Tierra project in the artificial life community. I like this scenario because we don't get to design the conscious program's architecture, and it doesn't occur by accident either. Rather, consciousness arises out of evolutionary dynamics; it creeps out of the cognitive night by gradual changes and mutations, each of which is beneficial to the artificial species. But it doesn't tell us what the architecture is, or what the critical mutation was, if there was only one.
In all these cases (with the honourable exception of Electric Dreams) we notice that the birth of a conscious machine is always a threat to humanity, while in the superior Electric Dreams it is only a threat to abstinence. It turns out that this concern is shared by some people working in a field called "Friendly AI" (friendly as in "how do we make this conscious machine friendly?", because no, Asimov's famous three laws probably won't fly). (Note that abstinence might be another shared concern in the Friendly AI community.)
According to Friendly AI, the problem is not really couched in terms of consciousness, but rather in terms of the ability to improve one's own cognitive algorithms. Essentially, the idea goes like this: the minute we create a machine that is able to learn how to improve its own programs, the thing will grow out of control exponentially fast. Before we know it, given its presumably considerable computational resources, it will reach IQs that no human has ever contemplated. This moment in our future has been called by some, rather pompously, the "Singularity". It has a leader, an institute, and generally makes many people think hard in Silicon Valley, and at least one person in Australia, a very smart and distinguished researcher on consciousness.
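The exponential worry above can be caricatured in a few lines of code. This is a toy sketch under made-up assumptions (the starting capability of 1.0 and the 50% per-generation gain are invented numbers, not anyone's actual model): each generation, the machine rewrites its own optimiser, and the size of the improvement is proportional to how capable it already is, which is exactly what makes the growth exponential rather than linear.

```python
# Toy model of the "recursive self-improvement" worry (purely
# illustrative: the 50% per-generation gain is an invented number).
# Each generation the machine rewrites its own optimiser, and the
# size of the improvement is proportional to its current capability.

def capability_after(generations, start=1.0, gain=0.5):
    """Capability grows multiplicatively: c -> c * (1 + gain)."""
    c = start
    for _ in range(generations):
        c *= 1.0 + gain  # a smarter machine makes a bigger next jump
    return c

# Exponential blow-up: a few extra generations dwarf everything before.
print(capability_after(10))  # roughly 58x the starting capability
print(capability_after(30))  # roughly 190,000x
```

The point of the caricature is only that "improvement proportional to current ability" compounds; whether any real machine ever behaves like this is, of course, the whole question.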
Can someone please tell these guys that the problem was solved back in 1984 by a bottle of champagne and a Commodore computer? Admittedly, any kind of liquid could work. In fact, by my own estimates, kitten pee would be sufficient for the powerful computers we have nowadays. And you even get a chance to rediscover the Welsh language. Genius? Yes.
|Studies show that kitten pee might be sufficient to make current laptops conscious, with possibly happy side effects that bear on Celtic languages. Courtesy of Dr Kim.|