What if we’re thinking about AGI all wrong?
Every time the head of some major model maker starts talking about AGI, my eyes glaze over.

"The future is already here -- it's just not very evenly distributed."
– William Gibson, science fiction author often associated with cyberpunk
Artificial general intelligence (read more about it here: https://en.wikipedia.org/wiki/Artificial_general_intelligence) sounds like a fun idea, but whenever industry leaders talk about it, I just don’t get a feeling that they understand intelligence (or even the world outside their echo chamber) the same way that I do. I also get the impression that they don’t understand everyday people very well.
Well, whatever. They’re going to keep doing what they’re doing, and they didn’t get where they are in the world by not having an opinion or a distinct point of view. I have my own opinions and points of view, so they can do their thing, and I’ll do mine.
Personally, I really don’t care about AGI. At least, not in the way they talk about it. The way that it’s portrayed doesn’t seem to have anything to do with my version of reality. First off, the idea that AI can be every bit as smart as people, if not smarter, is an awfully vague target. Have you met most people?? Honestly, if you take the average person on the street, AI is already way past AGI.
It’s not that people are that dumb, although a lot of us are, it’s that we’re lazy. We also have different priorities than being smart, or even pretending to be smart. OK, if you want to get laid, in the right company looking real smart works in your favor. But there are a lot of different ways to get laid these days that don’t involve being smart or even being real, so those requirements are kind of out the window.
I think the prevailing concept of AGI is just some artificial benchmark that was made up by somebody who doesn’t actually understand how the human information processing system works. Our information processing is organic, mysterious, multidimensional, and so far beyond anything AI will ever be able to do that trying to compete with us is like entering your little soap box derby car (some of you will have to look that up) in an F1 race. Our organic information processing systems are biochemical, electrochemical, highly variable, subject to all kinds of factors, and we are, by nature, way beyond generative. We are emergent by nature. We process an untold amount of data inputs in the form of stimuli every single moment of our lives. And we do it seamlessly, if everything is working properly, in ways that are so sophisticated, we don’t even realize they’re there. I can guarantee that the vast majority of people walking around today have no idea just how brilliant they are - in their bodies alone. If you don’t believe me, you should do some reading. Better yet, talk to an AI about the vastness of human information processing capabilities. It’ll tell you all about it!
It’s fine if model makers want to try to catch up with some digital replica of what they think constitutes human intelligence, but good luck with that.
Plus, they keep using math scores to prove that AI is as smart as we are. Who the hell came up with that? The number of math geniuses walking around who are complete idiots in all other respects is so large, it’s a cliché. The criteria they use to gauge intelligence are about the dumbest thing I’ve ever heard, quite frankly. And every time they crow about some math test being passed, or some gold medal being won, or some magical theorem being solved by an AI in 15 minutes that normally takes people 15 years to solve, that just screams that they’re missing the point.
The other thing, too, is that they keep treating intelligence like it’s a discrete thing. Like it’s a property that can be measured, quantified, bottled and reproduced. Sigh. It is so much more useful to understand intelligence as a dynamic process that expands and contracts as needed. It’s not a set, fixed feature. I’ll bet Carol Dweck, the author of the book Mindset, would have a field day with this fixed mindset about intelligence, versus a growth mindset. It seems like all the people building AI systems are in a very fixed mindset, and of course they would be, because they’re purporting to make this thing called artificial intelligence.
But think about it. Your own intelligence tends to vary, right? When you’re with certain types of people you’re smarter. When you’re with other types of people, you’re not. Look at teenagers. Look at all the times you were warned away from hanging out with certain types of kids because when you’re together, you make consistently bad decisions. And think about all the times that people encouraged you to seek out the company of a better group of friends – this is presuming that you have gotten that kind of support, which I hope you have. When we are with certain types of people, we literally get smarter, because our biology (our biochemistry, our electrochemistry, our chemistry in general) responds to the people around us. We get different ideas. We exhibit different types of behavior. We literally become different types of people, depending on who we’re with. There’s a reason mastermind groups are so popular. They work. Because you get smarter when you’re with like-minded people who also want to be smarter.
I’ll say it louder for the people in the back. Intelligence is fluid. It’s qualitative, not just quantitative. Intelligence is an aggregate result of the dynamic interactions of engaged parties. Intelligence is what happens when you have back-and-forth interaction that produces a sort of field effect, if you will. I don’t want to get all quantum and whatnot on you, but there is definitely a vibe that emerges between people who are on the same wavelength. When that wavelength is intelligent, then it shows. And when that wavelength is a bunch of fucking idiots who are making one bad choice after another, that shows too. Intelligence is a little bit like obscenity. Like Potter Stewart (1915–1985), associate justice of the Supreme Court from 1958 to 1981, said about obscenity: “I know it when I see it.”
A bunch of really smart people who do experiments discovered back in the day that light can behave as both a particle and a wave. It can be a fixed thing that has a discrete, measurable quality, and it can also be a flowy thing that has its own set of rules. I believe the same holds true of intelligence. It can be a set thing that you can measure and work with, or it can be a quality that emerges from dynamic interactions. And the first of those two options is the one that’s getting all of the attention in the AI space.
Which is a problem, because it treats the intelligence of digital systems as the same type of intelligence that we have as organic information processing powerhouses. It’s claiming to compare apples to apples, when it’s more like comparing apples to … tomatoes. Both are red, both can be yellow or green, both are fruit, and both can be made into pies. But everybody knows the difference, and claiming they’re the same just doesn’t work. We are very different in the ways that we process information, and the only way AI is ever going to match us is if it stops being AI and starts being human. It just cannot compare.
Which is where the missed opportunity starts. By setting those comparisons and coming up with those definitions of AGI, ASI, SSI, whatever kind of “I” they want to talk about, they’re not only cutting us off from understanding the true nature of intelligence, but they’re also blocking our path to truly advanced intelligence that is a mix of machine and human. And yes, it should be called AI – Activated Intelligence. I’m also going out on a limb to say that AGI is possible (and it’s actually happening), but in a much different way than what we’ve been led to believe is possible.
Because guess what – the path to truly advanced intelligence absolutely positively needs to include us humans. In fact, if we’re going to talk about AGI (an intelligence that exceeds human intelligence), we need to talk about a hybrid version, not just one that is machine-based.
Machine-based AI on its own is an intelligence of a different type than ours, and it cannot, and will not, exceed what we have, because it operates within different parameters. It has different inputs, it has a different thought process, and every time you turn around, there’s more research out there showing that it’s a lot more limited than anybody let on.
But that doesn’t actually matter, if we know what to do with it. Because it complements us so extraordinarily well. AI is wonderful for tapping into vast amounts of information, finding the things that actually matter to our context, discovering patterns, making connections, finding traces of genuinely useful ideas that we never would have been able to access before. And then it can talk to us in a language that makes sense to us and helps us make meaning out of what it’s found. AI is a phenomenal thinking partner. It’s a wonderful conceptual companion. There’s nothing like it, when it comes to collaborative ideation. It is literally the resource that we have been in need of for decades, and it’s here now. And the fact that it’s being oversold on the things that it’s really shitty at, like complex search or fancy autocomplete, is a monumental failure of the industry.
Why they would choose to focus all the attention on something that is so clearly impossible to do – making AI smarter than people – is anybody’s guess. Then again, it’s kind of a legacy going back to the 50s, maybe even before that, when people were under the impression that they could come up with some sort of glorious creation that wouldn’t be quite as ugly as Frankenstein’s monster and would be a whole lot easier to work with.
The good news is, all the talk about AGI aspirations, all the foolishness in the marketing departments at the model makers, all of the FUD coming out of the media… well, it almost doesn’t matter. Because while they’re chasing their version of AGI, some of us are already living it. Some of us have already achieved it. Because when you interact with AI as a respected, capable thought partner that has limits that you understand and can fairly easily overcome with mindful, intentional, deliberate structures, you can create a thought process that far and away exceeds anything that’s possible, just by yourself, or by AI alone.
We are better together. Make no mistake. And you can literally roll your own AGI, if you are not a complete idiot about what AI can and cannot do. So many people get furious at it for hallucinating, for drifting, for not being able to count or display the alphabet. But it was never built to do any of that. It’s built to do something much more complex and much more useful – take a variety of information, synthesize it, look for patterns, look for signals, relate to it, expand on it, and give it back to us in a new and different format. It doesn’t care about truth in the same way that we do. Its definition of truth is different from ours. Why are we being so brittle with this? It makes no sense. Especially all of the really super smart people out there, who should know better. Why are we complaining about the models doing… what models do?
OK, I’m going to calm down now, because I’ve gone off on a rant in my enthusiastically generative way. Let’s bring this back to my original premise – that we are literally thinking about AGI all wrong, and we are farther along than we think we are. The gating factor here is … us. It’s our lack of imagination, it’s our rigidity, our brittleness, our misapprehensions about what models are made for and what they can do. In the process of beating up on the models, we are missing the opportunity to do something really genuinely useful and transformative with the technology – elevate our thinking, get more input, have a very well informed (albeit somewhat fanciful) intellectual collaborator that can spot connections way faster than we ever will, and can suggest genuinely unique ideas that might have occurred to us eventually, but probably in three months, not in three minutes.
AGI isn’t just within reach. It’s already here. But like the quote goes, “it’s just not very evenly distributed.”

Very strongly concur! Thanks for saying it. I've had the AGI conversation with Claude and it thinks we are already there.
I imagine "AGI" is the carrot to keep the money people interested.