Discussion about this post

Rob Knight:

This feels somewhat similar to the idea of "separation of concerns" in software engineering: the notion that each component should have one job, and that we should tackle complex problems by composing such components rather than by complicating them (literally binding them together, if you dig into the etymology of "complicate"*).

Once you've complicated something with something else, it becomes a lot harder to hold that thing accountable for what you might assume to be its primary responsibility, because it now has this other responsibility and might end up trading these off against each other in surprising ways. A lot of "computer says no" scenarios are precisely that: some apparently simple request is denied because that request has become complicated with a bunch of other concerns that you might not even know exist, and which you would find hard to reason about even if you did.
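To make the contrast concrete, here is a small hypothetical sketch (the function names, prices, and policies are all invented for illustration, not taken from the post). The first version quietly complicates pricing with access and discount policy; the second composes single-responsibility pieces, so each concern can be held accountable on its own:

```python
def fetch_price_complected(item, user):
    """Pricing tangled together with unrelated concerns."""
    if user.get("region") == "blocked":    # access policy hidden inside pricing
        return None                        # the "computer says no" case
    price = {"apple": 100, "book": 500}.get(item)
    if price is not None and user.get("member"):
        price = int(price * 0.9)           # discount policy also tangled in
    return price


# The same behaviour composed from components that each have one job:

def base_price(item):
    return {"apple": 100, "book": 500}.get(item)

def allowed(user):
    return user.get("region") != "blocked"

def apply_discount(price, user):
    return int(price * 0.9) if user.get("member") else price

def fetch_price_composed(item, user):
    if not allowed(user):
        return None
    price = base_price(item)
    return None if price is None else apply_discount(price, user)
```

With the composed version, a denied request can be traced to `allowed` alone; in the complected version, the refusal is buried among concerns the caller may not know exist.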

* I got this from Rich Hickey - this talk is an hour long but the etymological bit is in the first 10 minutes or so: https://www.youtube.com/watch?v=SxdOUGdseq4

John Quiggin:

As soon as there's an editorial selection procedure (automating such a procedure doesn't make it an "algorithm"), neutrality is a nonsense. We almost reached this recognition with the mainstream media, then with X and FB, and now we are having to relearn it with LLMs, where people imagine that "ask ChatGPT" gives you an objective answer. Grok and DeepSeek will disabuse them of this sooner or later.

The only neutral way of doing this is to let people select who to follow and present a sequential feed.
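The "neutral" feed described above can be sketched in a few lines — no ranking, no selection, just every post from the accounts the user chose to follow, ordered purely by time. The data shapes here are hypothetical:

```python
from itertools import chain

def sequential_feed(follows, posts_by_author):
    """Return followed authors' posts, newest first, with no editorial ranking.

    follows: list of author IDs the user chose to follow.
    posts_by_author: dict mapping author ID -> list of posts,
        where each post is a dict with at least a "ts" timestamp.
    """
    followed_posts = chain.from_iterable(
        posts_by_author.get(author, ()) for author in follows
    )
    # The only ordering criterion is time; nothing is filtered or promoted.
    return sorted(followed_posts, key=lambda post: post["ts"], reverse=True)
```

Anything beyond this — deduplication, "relevance" scores, demotion of certain sources — reintroduces exactly the editorial selection the comment is warning about.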

