This is a post in the series roughly entitled “Take Jokes Seriously And Metaphors Literally, You Will Be Surprised At What Is Revealed”. It’s something that crossed my radar last week when Tony Blair made a Rare Intervention and said that the public sector needed to “embrace” AI technology.
My immediate reaction was to make the joke which forms the title of this post, but then I got wondering – what do people actually mean when they say this sort of thing?
As far as I can tell, the usual intention is to find a nicer way of saying “stop being such a Luddite”[1]. It’s such a generic term that I can’t find a first-use etymology for it, but I think the phrase went mainstream after Microsoft adopted its “embrace and extend” strategy with respect to the Internet in the late 1990s.
Of course, that strategy was a bit controversial, as it turned out in later litigation that the full name was “embrace, extend and extinguish” – it was Microsoft’s plan to add proprietary bits and pieces to open standards, then make them so popular that nobody used the non-Microsoft versions any more. But “Embrace” was the crucial first step towards the evil plot. In order to kill the open standards, Microsoft strategists realised that they needed to get all their people to start using the internet and to reorient the business towards it, which was not an obvious or easy step for people who had been used to owning every bit of the IT environment.
So, to an extent, telling people to “embrace” a new tech thing does mean telling them to stop fearing and avoiding it. But as we can see from the Microsoft example, the intention is for the relationship to go significantly beyond a quick platonic hug. “Embrace” actually seems to have the meaning “reorganise everything you’re doing around the new technology”. In the specific context of Mr Blair’s Rare Intervention with respect to public services and AI, the subtext seems to be “the public sector should make a very large investment in data cleaning and organisation so as to make all its datasets visible and suitable for AI training”, with a further sub-sub-text of “and having done this expensive-looking work, it should make the training sets available to the private sector at significantly below fully allocated cost”.
That might be exactly the right thing to do, of course – this sort of foundational investment in a general purpose asset is exactly the kind of thing that it makes sense to pay for out of general taxation. Places that make these sorts of investments tend to do better than places which don’t, and that’s more important than worrying about how they might fit into any given system of accounting.
But it would be better, surely, to make the case out in the open – rather than “embrace the technology”, to be clear on the face of it that what we’re talking about is an investment, which might go wrong. There’s a natural tendency to ignore mundane but vital things like data cleaning, and it isn’t helpful to frame things in such a way as to pretend that the only reason the National Health Service isn’t training a large language model on fifty years of prescription data right now is that they’re being awkward. It really isn’t just a matter of giving the computer a cuddle; “embracing” a new technology is more like welcoming a newborn family member who’s going to need a lot of looking after and feeding before they make a contribution.
[1] Eric Hobsbawm fans, consider yourselves recognised and heard – I also read his essay and am aware that the Luddites were not really just ignorantly fearful of change, had a distinct economic motivation, can possibly be seen as the beginning of working class consciousness etc.
A different thing that embracing AI could mean is "purchase large contracts from AI companies". This might take the form of replacing customer service with AI chatbots, enabling all your employees to write their emails using LLMs, predicting diagnoses with AI, or many other things.
Ironically, I have just taken on a contract to do exactly that—data cleaning and dataset curation/release—for a public institution (in the US). I think they see it both as being about getting people in the AI/ML space to work on/with their data, and as a way of establishing the right kind of rules and good practice for their data around rights, bias, personal privacy, etc.