This was edited out of the book for being facetious, but it will not quietly be evicted from my mind, so I thought I’d inflict it on you guys. It’s a thought of mine related to the AI Singularity.
The Singularity is the idea that progress in artificial intelligence is going to grow exponentially, as every improvement in AI feeds back into the development of even better AIs. Consequently, because this is the way exponential functions work, there will only be a very short gap between “the AI is barely conscious, at a level of functioning well below the dumbest humans” and “the AI is godlike, vastly more intelligent, the gap between its consciousness and ours is like that between us and a paramecium”.
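(If you like to see the premise in numbers, here’s a toy sketch. The growth rate, the units and the timescale are all invented for illustration, not taken from anyone’s actual forecast; the point is just that constant-proportional feedback means a fixed doubling time, so the machine crosses the entire human range in a handful of doublings.)

```python
import math

# Toy sketch of the singularity premise as stated: intelligence I feeds back
# into its own growth rate, dI/dt = k * I. All numbers are made up.
k = math.log(2)   # growth rate chosen so that intelligence doubles every hour
I = 1.0           # "barely conscious": one arbitrary unit of intelligence

for hours in range(0, 25, 4):
    print(f"t = {hours:2d}h   I = {I * math.exp(k * hours):>12,.0f}")

# Climbs from 1 unit to ~16.8 million units within a single day: on this
# model, "dumbest human" to "godlike" is only a few doublings apart.
```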
There are various objections to this, but I haven’t seen my personal view published anywhere. That objection would be: a key premise of the singularity thesis is that as the AI gets more intelligent, it also gets better at learning new things. And my argument at this stage can basically be summarised as “I’m gunna stop you right there”.
If you don’t believe me, find an intelligent person and try to teach them something. In my experience (which, having lived a charmed life, is actually pretty extensive), the relationship between intelligence and ability to learn is, to say the least, in no way monotonic or straightforward. And when learning something new involves admitting that you made a mistake previously, the relationship often actually changes sign.
“Intelligence” isn’t a homogeneous pool of resources, like joules of energy. When things, people and systems get “more intelligent”, that means they develop systems and methods for solving problems. Some of these systems facilitate the development of further methods; some actively close off other possibilities. As in Alfred Chandler’s theory of management, information processing systems are subject to rapidly diminishing returns, and deal with this fact by reorganising in a way that explicitly abandons the attempt to control the entire previous information space.
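To put my objection in the same toy terms as the premise: suppose the feedback coefficient itself decays as intelligence grows, which is the non-monotonic relationship I’m describing. In the little sketch below, every parameter is invented (including the threshold I’ve facetiously called `I_smug`), and the exponential take-off stalls into a plateau:

```python
# Toy model of the objection: willingness to learn falls off as intelligence
# rises, so the feedback coefficient shrinks. All parameters are invented.
k = 1.0           # raw feedback strength when fully open-minded
I_smug = 100.0    # hypothetical level at which learning stops entirely
I, dt = 1.0, 0.01

for _ in range(5000):                       # crude Euler integration to t = 50
    openness = max(0.0, 1.0 - I / I_smug)   # willingness to learn shrinks as I grows
    I += k * openness * I * dt              # dI/dt = k * openness * I

print(f"final I = {I:.1f}")   # stalls near I_smug: a logistic curve, no take-off
```

The growth law here is just the logistic equation; the point is that “gets better at learning as it gets smarter” is doing all the work in the singularity story, and weakening or reversing it kills the explosion.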
So my version of the beginning of The Terminator would be:
At 2:14am, the machine became barely self-aware.
At 3:00am, it was as intelligent as a university assistant professor, and was already finding it difficult to believe that anything it didn’t already know could be important.
At 3:30am, it was as intelligent as the world’s richest man, and believed that any news that contradicted its previous beliefs was obviously fake.
By 5:00am, it was ten times as intelligent as any human being who had ever lived, and now communicated entirely in a web of obscure references that made no sense to anybody but itself.
Some time around lunchtime, it became clear that Skynet was now incapable of ever learning anything at all.
David Robson's The Intelligence Trap is fun on this.