Happy new year to all. I’ve been thinking a bit about how people are actually using AI products. Partly from talking to them at the conferences I went to in December, partly from observing some of my family members, particularly the younger ones, incorporating them into their workflows. But also, I’m now overhearing people talking about having “asked ChatGPT” in coffee shops – it’s at that level of mainstream now.
What interests me is that nobody really seems to want the actual output. There seems to be a short initial phase of “wow this is amazing”, quickly followed by “but it’s not quite right”. Everyone seems to intuitively grasp that using AI tools to write your college essays is the equivalent of cheating at patience, while using them for any real-world application is a game of Russian roulette where you have no firm grasp of how many chambers are loaded with a career-destroying mistake.
But people seem to like including an AI stage in their process. There’s a sort of iterative stage – typing something in, getting a response, seeing what you like and dislike about it, then formulating your own ideas – that pushes things along. For some tasks – like programming ones – it’s euphemistically nicknamed a “copilot” relationship, but it looks more like a Socratic method: the computer finds the vector average of text strings in its dataset, which you interpret as leading questions to trigger a similar process of finding vector averages from the bigger and better-connected database in your head.
More interestingly, though, people seem to go through a similar process in areas where the computer isn’t actually any good; where the vector average of token strings doesn’t really correspond to anything substantial and it shows. It seems to still be valuable to type in a few prompts, get back a mediocre lump of semi-attached verbiage and start thinking about that.
In many ways, even “autocomplete” or “the motorised prayer wheel” seem to be giving the artificial intelligence too much credit. The way that people seem to be using it is more like the technology that used to be called “talking to a pillow”. We’ve created a cybernetic teddy bear; something that helps to sustain an illusion of conversation, which people can use to take advantage of the well-known psychological fact that putting your thoughts into words and trying to explain them to someone else is a good way to think and have ideas. (That this would be a big use case ought to have been obvious to anyone who knew the history of ELIZA.)
I genuinely don’t know how revolutionary this might be, even if this is all there is to it. A machine that doesn’t get bored listening to you could be an incredible boost to a lot of people. It’s actually quite hopeful in my view; although it is nowhere near as science-fictional and glam as “AGI”, this could be a very important use case.
We know that the human need for attention is almost insatiable. A lot of social problems have at their root the fact that some children learn that although negative attention isn’t as nice as positive attention, it’s still attention and it’s a lot easier to get. A low-quality substitute for human attention that’s much easier to produce could do a lot of good, although I feel like it might need to be carefully regulated in the way that most other low-quality mass-market products that mess around with your brain chemistry are.
I think it was Rob Pike who told the story of a professor who would not let you ask him a question until you’d asked the teddy bear on the chair outside his office. It’s like that, but the teddy bear consumes four litres of water and a bucket of coal per question.
> The way that people seem to be using it is more like the technology that used to be called “talking to a pillow”. We’ve created a cybernetic teddy bear; something that helps to sustain an illusion of conversation, which people can use to take advantage of the well-known psychological fact that putting your thoughts into words and trying to explain them to someone else is a good way to think and have ideas.
FWIW this is actually a thing in software engineering: https://en.wikipedia.org/wiki/Rubber_duck_debugging