17 Comments
Z Giles:

“Basically, the only viable future is one in which AI agents have some means of identifying themselves, credibly establishing their provenance and certifying themselves as non-abusive; otherwise, no system can afford to take the risk of interacting with them.”

For a similar reason, I'd argue that companies providing identity verification software are going to become increasingly important to online functioning. As the cost of producing passable online content decreases, the ability to confirm that a piece of information did indeed come from a verifiable (and potentially non-AI) source is going to be essential to navigating the post-gAI environment.

TW:

Unless agents destroy that industry. Nobody really cares about privacy, not enough to give up stuff, but all of that resistance disappears if you're establishing rock-solid credentials for a robot that's completely untethered to any human identity. Privacy and security for people may well end up about where they are now (a few hot market segments, general consumer indifference), with DNA-strand-mapping levels of credentialing for shopping and planning agents.

Kalen:

Visions like that might be superficially tempting, but to me they always really illustrate the intellectual bankruptcy at the core of this whole wave of hype.

Like, let's just imagine that this agent could actually do all of this (a huge assumption; last-mile problems abound). How many of those tasks involve labor where figuring out what you want is very different from asking for it? The magical choice of twenty people to invite and sending them a note (as if this were a big ask when email address books exist); the choice of where to go on vacation (if you know what you want to see, this is irrelevant; if you don't, it's indistinguishable from a tour company); editing the memo that, if it needed to be sent at all, was presumably mostly information already in your head that had to be fed into the agent by writing it down; the notion that the 'negotiations with the bowling alley' weren't, again, a person needing to express their preferences between times and prices; and so on. These aren't hard things; they might also be *inevitable* things. The idea that tech utopia is tickets being one notch easier to get... I dunno, it reeks of an impoverished imagination.

And the idea that this means that ads just go away or something feels hilarious to me. We have ten years of hilarious and horrifying examples of classifiers being effectively 'hacked' by single-pixel changes to images and the like, and yet someone thinks that robots (whose primary challenge remains *making the right decision in a noisy human world*) will make some kind of hyper-rational, Mr. Spock, responsible-fiduciary choice for you instead of *being even easier to trick into clicking on garbage*. It's comical, especially because the clicking agent is probably available for scammers and advertisers to test against!

And that's even before we get to the fact that the current iteration of these technologies (cloud-hosted, as yet unprofitable, backed by enormous amounts of computation) is almost certain to be more-or-less ad-supported soon. Between Claude's Golden Gate obsession and Grok's white-genocide system prompt, it's clear that these, like any piece of software, aren't oracles descended from the heavens: they're boxes that say what their owners want them to say, and it seems certain that will soon include messaging that benefits their owners rather than their users.

John Harvey:

Excuse me, this really strikes a nerve.

Why is it that our "future" planners all think nothing of a $500 birthday party for a five-year-old, and booking a trans-Atlantic trip by jet like it was catching a bus?

Because they can.

But where is the planning for the people who are at risk of bankruptcy from medical bills, college debt, or A.I. taking their job, and whose vote means nothing... the other half or more of America?

How about making a difference by solving the "can't afford this country anymore" and the "nobody listens" and "I am being treated like a mere object to be used" problems?

Sounds like the Silicon Valley and "thought-leader" names are solving their own problems, again.

This is why Harris lost.

Out of touch, and tone deaf.

Reality check: a real birthday party isn't just an item to check off on a busy achiever's to-do list. A child is not a problem to be solved, or optimized.

A birthday party is about loving your child!

Make your own damn cake for your child, and have an intimate family birthday party at home for your child, and let them blow the candles out, and show them that you love them. And let them play and laugh, and not be rushed out of a venue for the next paying customers.

Isn't a child worth that much?

Do NOT be "efficient," just once.

And, please, don't embarrass your less affluent neighbors' children, who won't get the $500 parties your kids get, and can't. Those kids count too, even if they are not yours.

If you can't do these things, how can you invent our future? By what right would you?

This is just how it looks from where I live.

Forgottenville, USA.

splendric the wise:

I don’t get the assumption that credibility requires regulation, since reputation works just as well.

If everyone knows that Google Agents act unscrupulously, everyone can just refuse to deal with Google Agents. If everyone knows that Microsoft Agents are honorable, you can decide it’s worthwhile to deal with Microsoft Agents. It is trivial for Microsoft to set things up so that their Agents can identify themselves in a way that other Agents cannot easily fake.

Microsoft and Google have an incentive to make their Agents useful. Which means they have an incentive to make them honorable.

No real role for regulation that I can see.

Dan Davies:

well yeah, but think about the (broadly defined) regulatory apparatus which has to be there before you get to a sentence like

"If everyone knows that Google Agents act unscrupulously, everyone can just refuse to deal with Google Agents. If everyone knows that Microsoft Agents are honorable, you can decide it’s worthwhile to deal with Microsoft Agents."

First, I absolutely don't agree that it's trivial for Microsoft to solve the problem of certifying "Microsoft Agents" in a way that can't be spoofed. It's *possible* but it requires them to program, maintain and distribute a certification system and for all users to have their access set up. I can't accept that "something like HTTPS, but for much weirder and less standardised use cases" is "trivial" - it looks much more like a regulated system.

Second, the point of having an agent is that it's *your* agent. It does what you tell it to do (plus a bit of potential unintended variance), not what Google or Microsoft tell it to do. The concept of "Microsoft agents are honorable" has to mean "Microsoft has solved and implemented the problem of ensuring its agents can't do dishonorable things even if the customer tells them to".

Third, what about roll-your-own agents, or those released by start-ups? Either the overall system is permissive in terms of "innocent until proven guilty" or it generally won't deal with agents whose provenance it doesn't know. The first of these is exactly the failure mode I'm talking about - it's unsustainable.

So the system with "no real role for regulation" is a natural oligopoly in which only a small number of companies are allowed to produce agents, which they securely certify and approve the uses to which they can be put.

In other words, it's considerably more regulated than the Apple App Store. I think this kind of market-libertarian argument doesn't work in this sort of context because the concept of "reputation" is only something that can be applied to the kind of being that can be the proper location of accountability, not an AI agent.

splendric the wise:

Maybe it’s just coming at this from a CS background, but HTTPS didn’t require any government regulation. Neither did public key cryptography. It really is a solved problem for one computer to identify itself to another computer.

The bowling alley's Agent "picks up the phone"; if it's a disreputable Agent calling, you hang up, or just charge it an extra fee to make the reservation. If abuse is common, you default to mistrust of unknown Agents.

It’s not that it’s impossible to have a disreputable Agent. It’s that people get annoyed that their disreputable Agent gets them bad terms, so they replace it with a reputable one.

If Microsoft catches you violating the ToS with their Agent, they ban you from their service, since they don’t want customers like that.

gregory byshenk:

This seems a bit too simplistic. Yes, "it really is a solved problem for one computer to identify itself to another computer." But identifying is not the issue; the issue is determining the "reputation" of an as-yet-unknown computer/agent.

And though this may not require regulation, it does impose costs. For example, a basic cert from Digicert, which identifies one's site and connects it to Digicert's "reputation" (i.e., Digicert vouches for your site), costs $26 per month, or $312 per year, and the prices go up from there. (And yes, Let's Encrypt is free, but that is because it doesn't guarantee much.)

Which likely means that either a) one ends up paying significant monetary costs to somehow demonstrate the reputation of one's agent(s), or b) one signs up with an oligopoly (either MS, Google, or...) and depends on their reputation - and ends up with other (possibly non-monetary) costs (when your MS agent knows - and to some extent *needs* to know - everything about you).

splendric the wise:

Yeah, I’m thinking it’ll be the oligopoly one. Seems very valuable to own someone’s personal Agent. I’m guessing several big players in tech will eventually end up trying to sell people on their Agent.

But you can do personalized reputation as well, at least theoretically. If Agent #2BR02B has been trustworthy the past few times you’ve dealt with him, you can stop charging him a deposit to take up your time. As long as it’s Agents on the other end of the transaction, it’ll be easy for them to track this kind of thing.
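(A minimal sketch of that personalized tracking, with a hypothetical agent ID and assumed deposit amounts and thresholds:)

```python
from collections import defaultdict

# Assumed policy: unknown agents pay a deposit; after enough honored
# deals, the deposit is waived. One bad deal resets accumulated trust.
GOOD_DEALS_TO_TRUST = 3
DEPOSIT = 5.00  # dollars, an arbitrary illustrative amount

class ReputationLedger:
    def __init__(self) -> None:
        # Count of consecutive honored deals per counterparty agent.
        self.good_deals = defaultdict(int)

    def deposit_required(self, agent_id: str) -> float:
        if self.good_deals[agent_id] >= GOOD_DEALS_TO_TRUST:
            return 0.0
        return DEPOSIT

    def record_outcome(self, agent_id: str, honored: bool) -> None:
        if honored:
            self.good_deals[agent_id] += 1
        else:
            self.good_deals[agent_id] = 0

ledger = ReputationLedger()
assert ledger.deposit_required("#2BR02B") == DEPOSIT  # unknown: pays up
for _ in range(GOOD_DEALS_TO_TRUST):
    ledger.record_outcome("#2BR02B", honored=True)
assert ledger.deposit_required("#2BR02B") == 0.0      # now trusted
```

Which also makes the lock-in point below concrete: the counter is local to one counterparty, so trust built here transfers nowhere else.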

gregory byshenk:

Don't forget, though, that this is also a form of lock-in.

That is, if you can only (practically, without having to pay extra) interact with the other agents that you have already dealt with (and built a reputation), then there are added costs to opting for a different one.

John Quiggin:

I was just going to write about the provenance problem, making precisely the point that misinformation is a more intractable problem than hallucinations. Musk's Grok has already provided some spectacular examples, but the big problem will be the examples that can't be so easily detected.

Nothing specific to AI here: it is an information medium, and the same issues have been around since the first humans learned to lie. But new ways of detecting cheaters are needed.

John Smith:

Great take on this, but the original author (Mr Thompson) misses the whole point of being a person and having a life. Hell, the whole point of having a birthday party for a five-year-old is the actual logistics and planning. Which raises a larger question: what happens when all the "boring" things are outsourced? Automation, the internet: none of it has made us happier or more fulfilled. Mr Thompson would do well to ponder the aphorism "wherever you go, there you are." Once we can't run from ourselves (because all of the "boring" bits have been solved), it will be VERY interesting.

Sean Matthews:

'People keep telling me that various of the current problems of hallucination will soon be solved'.

That will be with the expected breakthroughs (as I heard Stuart Russell say).

Daniel Derrett:

A bit mean to fantasy fiction, most of which bases its imagined societies on periods of history when people absolutely did believe in magic.

Luis Villa:

It’s not just hallucinations, it’s also that security of agents is essentially completely unsolved. Highly recommend this from Simon Willison on the various unsolved (unsolvable?) hacks: https://simonwillison.net/2025/Apr/9/mcp-prompt-injection/

Andrew Smith:

But will the AI agents on which this discussion is based exist any time soon? It's one thing to have Alexa add something to the shopping list, but the envisaged capabilities are a completely different thing, and certainly not something current LLMs are designed to do. They are about patterns in language, not Natural Language Processing.

Alexander Harrowell:

Actually an interesting question: to what extent are existing AIs trained on the Common Crawl influenced by all the ads they ingested?
