Discussion about this post

PointyShinyBurning:

Generally good, but worth pointing out that the "hallucination" issue in LLMs isn't closely related to the general problem of under- or overfitting. The system is trained to find statistically likely follow-up utterances rather than to produce statements that are true, unlike reinforcement-learning systems such as AlphaGo, which are trained directly on the ground truth of victory or defeat. What it says sometimes coincides with the truth, because true (or truthy) statements are preponderant in the training set, but there is very likely a hard ceiling on anything we might call accuracy, absent some components corresponding to the _rest_ of a mind, which we currently have no idea how to build or integrate.
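
To make the contrast concrete, here is a minimal sketch of the two objectives. This is illustrative only: PyTorch is assumed, and `model`, `tokens`, and both loss helpers are hypothetical names, not anyone's actual training code.

```python
import torch
import torch.nn.functional as F

def next_token_loss(model, tokens):
    """Standard LLM objective: cross-entropy on the next token.

    tokens: LongTensor of shape (batch, seq_len) holding token ids.
    """
    logits = model(tokens[:, :-1])            # predict a distribution over each next token
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),  # flatten to (N, vocab)
        tokens[:, 1:].reshape(-1),            # the tokens that actually followed
    )
    # Note what is absent: no term asks whether the emitted statement
    # is *true*, only whether it is a likely continuation.

def alphago_style_loss(move_log_probs, outcome):
    """REINFORCE-style objective: anchored to a ground-truth result.

    outcome: +1.0 for a win, -1.0 for a loss -- an external truth signal.
    """
    return -(move_log_probs * outcome).mean()
```

The first loss can be driven to zero by matching the training distribution, true or not; the second cannot improve without actually winning games. That asymmetry is the point.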

gregvp:

My opinion on AGI and surpassing human-level intelligence is not important. However, the AI boosters' claims of massive, sudden increases in GDP are extraordinary and need extraordinarily robust justification, not mere hand-waving. Suppose, say, that AI cracks fusion power. It'll still be two or three decades before we get a few pilot plants built, and several more decades before there's meaningful economic impact.

Yes, jobs dealing with information, particularly where mistakes don't result in explosions or collapses, are at risk. But the decay process will be slow. We may be in for a repeat of the Long Depression of 1873 to 1899-ish, back-to-back with the Great Depression of the 1930s.

(https://en.wikipedia.org/wiki/Long_Depression)

OK, can't resist. Why does no one talk about Moravec's Paradox anymore? (https://en.wikipedia.org/wiki/Moravec%27s_paradox) Doing things that humans find "skilled" does not impress me. Make me a sandwich. (https://xkcd.com/149/)

