Essentially agreed with the caveat that I’m not sure that stops the AI from blowing us all to kingdom come as a staging post on its argument with itself or indeed to substantiate one of its priors. I’m fairly sanguine either way tbh
A related point: as AI systems generate more and more of the content on the internet, they increasingly train on their own output, reinforcing what they already know and further closing off the possibility of learning "new" knowledge.
I am generally much more worried about “AI” being used for much more banally problematic purposes (disinformation campaigns, filling children’s entertainment with generated sludge, making the arts even less financially viable, etc.) than I am concerned about The Singularity.
It seems to me that the fear-mongering about super-intelligences has become a thought-terminating cliché. “The real impact of these AI systems is small potatoes compared to the coming apocalypse; ignore these current issues and focus on my particular flavor of end-times hysteria.” That’s not totally fair to the singularity folks, but they’ve sucked enough oxygen out of the AI debate that I’m not particularly inclined to be charitable.
This piece is a nice, succinct counterexample to the doom and gloom, and one that I hadn’t encountered previously. It may very well be that “exponentially smarter” was never a meaningful statement to begin with.
David Robson's The Intelligence Trap is fun on this.
What a savage attack on Plutocrats, Aristocrats, the Bourbons, The Royal Family, TechBros, and Elon.
Brill. The short way to do this is to spend some time at the University of Oxford (or Cambridge; I imagine MIT is similar).
Back in the day I had a near relationship with this topic: https://www.youtube.com/watch?v=9Lhu31X94V4&t=1837s
I found it via Brad DeLong.
I expect you are a geek like DeLong. At about minute 33 the talk turns to controlling aircraft and then to controlling reactors. Controlling AI may be a similar problem.