6 Comments

We organize society around trust instead of incentives and coercion. https://malorbooks.com/psychology-of-consciousness.php


I think Corey is right here. Also I think his argument is more general than yours: it's not just being able to find the core of an argument in a text, it's the ability to find the core element of what's going on in the world around us.

I am not an actual AI guy, at all, but I have digested some work on the subject, including some of the modern arguments, as well as quite a bit from pioneer Marvin Minsky. (Some of Minsky's later classes at MIT are viewable on YouTube, which is amazing as fuck.) Minsky said that one of the core difficulties of AI is taking an undifferentiated input and extracting the elements that matter for the task at hand -- in other words, what Corey said.

Notice what Corey says about psychoanalysis, at the end of that piece. The question is how do we order our own thinking vis-a-vis the world. I don't see how one could possibly summarize this as "skimming for content" ... or as something anyone could ever hope to do with their phone.

Are you sure that you're picking up what Corey is putting down?


I don't think I'm disagreeing with Corey - I do say that the thing he wants to measure exists, just that it might stand in a different relationship to "being able to write an essay" than the one we'd previously believed, just as it turned out that long multiplication and calculating square roots weren't really the essence of maths.


If I could be a bit pedantic, I think Corey isn't talking about "being able to write an essay": he's talking about actually writing specific essays.

I think he is saying that 'writing an essay (about a specific topic)' is one way, and perhaps the most important way, in which we (humans) learn whether or not we actually know what we think we know (about that topic). We very often believe that we understand our own thoughts, and understand the topic, but when we try to put our ideas down in text we find that we don't. This is definitely an experience that I am having, over and over, as I try to write up my own (strange) ideas.

I guess your calculator analogy would be suggesting that we could borrow some ideas from a future chatbot? We might borrow "off the shelf" shallow ideas in order to think deeper thoughts?

Maybe it's not such a crazy thought after all. I spend way too much time thinking about the limits of thinking already, so I'll probably sit with this one for a while too. Thanks for taking the time to reply.


This reminds me of similar arguments about spreadsheets. Prior to spreadsheets, some boss would ask a question and a week later get back an answer. They might ask about a different value for one of the variables and wait another week. With spreadsheets, they could get the results in seconds. Often they would try a dozen alternatives because it took so little time to see the results. (e.g. What if we reduced our error rate by 5%?)

Of course this only worked for certain kinds of problems, which now seemed trivial.
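The what-if workflow described above can be sketched in a few lines: once the model is a cheap-to-re-evaluate function rather than a week of manual work, trying a dozen scenarios costs nothing. This is a minimal illustration; the model, names, and numbers are all invented assumptions, not anything from the comment.

```python
def annual_error_cost(orders, error_rate, cost_per_error):
    """Toy spreadsheet-style model: yearly cost of handling order errors."""
    return orders * error_rate * cost_per_error

# Baseline answer the boss originally waited a week for.
baseline = annual_error_cost(orders=100_000, error_rate=0.04, cost_per_error=25.0)

# Now a dozen alternatives take seconds, e.g. "what if we cut the error rate by 5%?"
for cut in (0.05, 0.10, 0.25, 0.50):
    scenario = annual_error_cost(100_000, 0.04 * (1 - cut), 25.0)
    print(f"cut error rate by {cut:.0%}: save {baseline - scenario:,.0f}")
```

The point of the analogy survives in the loop: changing one variable and recomputing is so cheap that exploration, not calculation, becomes the activity.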


Yes, good point - sorry, I'm just catching up with comments right now. This was a very big step-change for lots of management, particularly in the financial sector; it made a lot of different organisational forms possible. Will return to this.
