Discussion about this post

Z Giles

“Basically, the only viable future is one in which AI agents have some means of identifying themselves, credibly establishing their provenance and certifying themselves as non-abusive; otherwise, no system can afford to take the risk of interacting with them.”

For a similar reason, I'd argue that companies providing identity verification software are going to become increasingly important to online functioning. As the cost of producing passable online content decreases, the ability to confirm that a piece of information did indeed come from a verifiable (and potentially non-AI) source is going to be essential for navigating the post-gAI environment.

Kalen

Visions like that might be superficially tempting, but to me they always illustrate the intellectual bankruptcy at the core of this whole wave of hype.

Like, let's just imagine that this agent could actually do all this (which is a huge assumption; the last-mile problems abound). How many of those tasks involve labor where figuring out what you want is really any different from asking for it? The magical choice of twenty people to invite and sending them a note (as if this were a big ask when email address books exist); the choice of where to go on vacation (if you know what you want to see, this is irrelevant, and if you don't, it's indistinguishable from a tour company); editing the memo that, if it needed to be sent at all, was presumably mostly information already in your head and had to be fed into the agent by writing it down; the notion that the 'negotiations with the bowling alley' weren't, again, a person needing to express their preferences between times and prices; and so on. These aren't hard things, and they might also be *inevitable* things. The idea that tech utopia is tickets being one notch easier to get... I dunno, it reeks of an impoverished imagination.

And the idea that this means ads just go away or something feels hilarious to me. We have ten years of hilarious and horrifying examples of classifiers being effectively 'hacked' by single-pixel changes to images and the like. The notion that robots (whose primary challenge remains *making the right decision in a noisy human world*) will make some kind of hyper-rational, Mr. Spock, responsible-fiduciary choice for you, instead of *being even easier to trick into clicking on garbage*, is comical, especially because the clicking agent is probably available for scammers and advertisers to test against!

And that's even before we get to the fact that the current iteration of these technologies (cloud-hosted, as yet unprofitable, backed by enormous amounts of computation) is almost certain to be more or less ad-supported soon. Between Claude's Golden Gate obsession and Grok's white-genocide system prompt, it's clear that these, like any piece of software, aren't oracles descended from the heavens; they're boxes that say what their owners want them to say, and it seems certain that will soon include messaging that benefits their owners and not their users.

