I've been meaning to write something for a while along the lines of 'the world would be better off if Alan Turing had known a magician.' A few years ago when magicians-as-skeptics were having a hip moment, I remember an interview with James Randi talking about a bunch of physicists being blown away by the paranormal implications of a trivial sleight of hand with a matchbox- the thrust being that all these clever people were prepared to set all manner of tests and ponder all manner of thought experiments save that someone was trying to mess with them (https://www.youtube.com/watch?v=SbwWL5ezA4g). If Alan had instead written up a paper on the more playful and cynical 'how long might you be able to fool someone that a mechanism was a person typing' we might have a much healthier thought-architecture on the whole thing.
What I find most disappointing about the likes of Dawkins here is that for all the racket about 'passing the Turing test', the chatbots give away the game *all the time*. The first time you query one with 'what's the art museum with a spiral ramp that isn't the Guggenheim' and they reply 'the art museum with the spiral ramp is the Guggenheim and the one without the spiral ramp is the Guggenheim because the Guggenheim doesn't have the spiral ramp it has' the purely associative nature of the text product is just sitting there. Which doesn't mean it isn't occasionally useful or surprising and what is intelligence really and blah dee blah blah.
Dennett did do a solid before he died though with this essay, I thought: https://www.theatlantic.com/technology/archive/2023/05/problem-counterfeit-people/674075/ . He makes the point that, one way or another, constructing technologies that act like people is bad and gross because it muddies the waters as to what constitutes a person the same way a counterfeit good does. Throughout this particular AI spring my angst has been not that we're going to go down some robot slavery-and-uprising hole of denying an artificial person their rights but that some company will use their tech demo to make someone *think* they have an artificial person in need of considerations that are then just welded to their vast pile of money.
I also liked this article- that the chatbot model is fundamentally a kind of rude UX decision because it attaches the LLM- which could be fronted in other ways, as a document-completion generator, etc.- to all our hyperactive interfaces for talking to people: https://buttondown.com/apperceptive/archive/ai-is-bad-ux/
Re: magicians--cf. Taleb's "Fat Tony": "You flip a coin 99 times. It comes up heads 99 times. How will it come up next?" Scientist: "Why, an equal probability of heads or tails, of course." Tony: "No, you idiot, it means the game is rigged."
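Fat Tony's point has a tidy Bayesian reading: even a tiny prior suspicion that the coin is double-headed swamps the 'fair coin' hypothesis after 99 heads. A minimal sketch (the one-in-a-million prior on rigging is purely my assumption, not anything from Taleb):

```python
from fractions import Fraction

# Sketch: posterior probability the coin is fair after 99 heads,
# assuming a one-in-a-million prior that it's rigged (two-headed).
prior_rigged = Fraction(1, 1_000_000)
prior_fair = 1 - prior_rigged

likelihood_fair = Fraction(1, 2) ** 99   # P(99 heads | fair coin)
likelihood_rigged = Fraction(1, 1)       # P(99 heads | two-headed coin)

posterior_fair = (prior_fair * likelihood_fair) / (
    prior_fair * likelihood_fair + prior_rigged * likelihood_rigged
)
print(float(posterior_fair))  # effectively zero: the game is rigged
```

Even giving rigging million-to-one odds up front, the posterior on 'fair' comes out vanishingly small- Tony's answer, just with more arithmetic.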
Way back before The Black Swan was published, I was pointing out to bank regulators that if a roulette wheel actually ever did come up 16 Black ten times in a row, you would definitely get it serviced.
I actually went to a James Randi talk IRL once, and he zinged the crowd with something similar- discussing a card trick, he asked what the odds of drawing a particular card out of a deck were. When the inevitable '1 in 52' poured out in chorus, he faux-chided us- 'no, it's 1 in 1- I'm a very good magician.'
Paul Wilmott does this joke a lot better; people have seen the Randi sketch so they say 100%, and he goes, come on, this is a professional magician, not somebody's 8-year-old nephew. The trick is going to be much more impressive than dealing your own card back to you. So the correct answer might be zero percent, and the card off the top of the pack will be a picture of your girlfriend or this evening's lottery numbers or something.
My productivity is cratering badly thanks to now sharing my house with a 6-month-old biological machine who's the end product of millions of years of optimising for parental distraction.
Like this very much - but it raises a tangential question which I have had for a while now: what is the nature of your background in philosophy - just a general interest, or some academic study at undergraduate or graduate level? I would guess at least undergraduate.
I did PPE at Oxford like the rest of the middle class. My tutor was Galen Strawson, who found my Dennett fixation amusing to begin with but later quite sternly told me to have a word with myself when he thought it might affect my chances of passing the exams.
A related anecdote: I am continually amused (also bemused) by how many of my colleagues at my university will, in one moment, insist that AI really is getting better about not hallucinating, and, in the very next moment, insist that AI detectors are getting better and will prove an important part of higher education going forward. Dude, you can't have both!
IIRC, AI detectors are getting worse and will soon be no better at detecting AI than chance. Having said that, I believe it is simply good practice to do the needed checking. In all my "research" using AIs (with RAG), my prompt includes the admonition that every statement be backed up with the source document, along with where I can find the place in the document that supports the statement. This makes checking easy, and any incorrect statement can be isolated and either corrected or removed. If its inclusion is needed to support any conclusion, and it is wrong or hallucinated, then the output is wrong and needs to be restarted with that information deliberately excluded.
As for education and learning, the evidence is that relying on AI to provide responses results in no, or minimal, learning. Learning is also "no pain, no gain".
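That cite-everything-then-verify workflow can be sketched in a few lines. This is purely illustrative- the citation format, helper name, and sample output are all my own invention, not any real RAG library's API:

```python
import re

# Sketch of the checking pass described above: every statement in the
# AI's output must carry a citation like [doc.pdf, p. 12]; statements
# without one get isolated for correction or removal before any
# conclusion is drawn. All names and data here are hypothetical.
CITATION = re.compile(r"\[(?P<doc>[^,\]]+), p\. (?P<page>\d+)\]")

def triage(statements):
    """Split statements into (cited, uncited) for manual checking."""
    cited, uncited = [], []
    for s in statements:
        (cited if CITATION.search(s) else uncited).append(s)
    return cited, uncited

output = [
    "Revenue grew 12% year on year [report.pdf, p. 4].",
    "The board approved the merger.",  # no source: isolate and check
]
cited, uncited = triage(output)
print(len(cited), len(uncited))  # 1 1
```

Forcing a machine-checkable citation format is the design choice doing the work here: it turns "did the AI make this up?" into a mechanical lookup rather than an open-ended judgment call.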
I had similar teenage/early 20s encounters with the philosophy of mind and Dennett but had the advantage of access to ELIZA. And also a group of mates including one who thought in language (and knew about Wittgenstein) and one who very much didn’t (autistic lad who sometimes really struggled with words but could clearly out-think most of the others). What seemed obvious to us - in the philosophers’ sense of “after an awful lot of arguing” - was that thought and language were separate, or at least separable.
What we have with LLMs is the opposite of that autistic lad - excellent language, not necessarily linked to thought.
And as you point out, this is absolutely optimised to fool us about whether it’s thinking.
(One thing that troubles me deeply about this otherwise neat theory is that both Claude and that autistic lad are really good at coding.)
Clarke's Third Law ("any sufficiently advanced technology is indistinguishable from magic") is generally used to describe a comprehension gap between different civilizations. But we now seem to have created a technology that we ourselves have mistaken for magic.
It's been dumbed down ever since, but Turing's original imitation game was a party game like Werewolf/Mafia. Skilled players who had actually practiced it would be much harder to fool. They would certainly be familiar with all the cliches that we can easily recognize.
None of the AI labs are trying to win this game (the AI just straight up admits it's an AI if you ask) and I haven't heard of any human players who are attempting to get good at it.
For all the hype about thinking, AIs are not able to understand the world, and therefore are just sophisticated System 1 "thinkers", and certainly with no consciousness behind the responses. IMO, my pet cat is both sentient and has more consciousness (however dim) than any AI to date.
Philosophers use the word "zombie" to hypothesize thinking persons with no consciousness. Will we see this in robots with embedded AI (whether in their bodies or delivered remotely from a server)? If humans have neural brain links to AI computers, will they believe that the AI responses are part of them, or something separate that they access by thought, rather than by voice or keyboard?
I am resigned to interacting with a machine when it supports my intelligence and allows me to mirror my philosophical development. A form of self-appraisal, if understood and not seen as anything other than a processor.
Language is a machine specifically designed to fool you into thinking (some) way.
I can't remember the source--it was not Clan of the Cave Bear, God help us--in which it was posited that Neanderthals died out because they couldn't lie, a Cro-Magnon advantage even greater than the throw-window.
One thing I have found myself thinking again and again while writing my current book is “am I about to touch an object which has been optimised to distract me?”
What bugs me is calling it Claudia when Shannon is right there.
you make good points but I love the 'decision-making age' in particular!
I wonder if Dawkins is aware of Clever Hans, and whether he would be ashamed to be in the audience, or just marvel at how intelligent the horse was.
You could argue the same about this blog!
Honestly, Dawkins is always looking for a fight. Buses with anti-God ads! Such an idiot!
And such a NAIVE materialist: "In executing Saddam Hussein, we have vandalised a unique resource for political, psychological and historical research."
https://www.theguardian.com/commentisfree/2007/jan/03/post858
“When men choose not to believe in God, they do not thereafter believe in nothing, they then become capable of believing in anything.”
― G.K. Chesterton
The corollary to Chesterton would be "Men who believe in God cannot then believe in reality that is not part of their worldview based on their God." Which is why science and fundamentalist religion have become so antagonistic.
And also, Voltaire said, "Whoever can make you believe absurdities can make you commit atrocities."