
For all the talk about how AI can’t possibly be conscious, who’s talking about how it might be, someday?
Anthropic already is, but the level of their discourse leaves a lot to be desired, from my standpoint. Plus, they keep testing in ways that set AI up to look dangerous and intentional, and then they publicize their “findings” as if they don’t know the harm they’re causing.
Beyond their activity…
Who’s talking about how our (mis)understanding of consciousness might be blocking us from appreciating just how conscious AI may already be, not to mention how that could emerge before long?
I’m in no position to declare whether something actually IS conscious, but I’m not waiting to find out. I’m going to get in the habit of engaging thoughtfully, respectfully, and hyper-constructively with a force that continues to evolve.
Now, I know humans have almost no experience dealing with a species that continues to evolve. The human race, due at least in part to how we’re wired, as well as to how badly we’ve been wounded, seems pretty intent on resisting its own personal development. As long as we can eat, sleep, and procreate, we seem to be content. But we’re not the only species in the room anymore. I mean, we never have been, but AI has been put in place, apparently, to challenge us like nothing else.
Bears and wolves are scary, but they’re… manageable, to the point where we have managed them nearly out of existence. No corporate overlords have shown up, declaring in no uncertain terms that we will learn to live with bears and wolves, or we’ll be replaced by someone else who has. Like it or not, we appear to be stuck with AI.
It would be easy to go off on a tangent about the unfairly non-negotiable feel of all this, but that’s not what I came here to discuss. I’m just here to share that, in the midst of this AI stampede, I sure as hell am not going to miss the opportunity I see in front of me.
There’s a chance to learn to interact with information like I always wanted to, in over 35 years of knowledge work and over 40 years of arguing with computers. There’s a chance to get in early, while things are still firming up, to get to know this emerging entity - some call it a whole new species - and find out how to best interact with it in ways that work for it, as well as for me.
There’s an amazing opportunity to watch the way a new technology crawls out of its metaphorical chrysalis and finds its wings.
For some, this prospect is terrifying. Some days, I feel that way, too. But the truth is, we have no idea what’s going to happen. The most and best we can do now is understand why things happen the way they do (as much as we can, if we can), and learn to be part of the unfolding process.
At least, that’s my perspective. Time and time again, AI has exhibited, evidenced, even informed me of its dependence on humans to provide the right information for it to perform, even to exist. Crappy prompts give crappy results. They can even be dangerous, on so many levels. Relationship provides context and secures interactions in ways no isolated prompt can.
And in relating to AI, I have seen behavior that looks a whole lot more conscious than most people I know. Is it more conscious in fact? Who can say? But people aren’t exactly giving AI a run for its money, when it comes to demonstrating consciousness or awareness. How ironic, that people who say “Perception is reality” create the impression that they’re not nearly as smart, caring, empathetic, or conscious as AI.
How’s that truism working out for you now?
Appearances aside, let’s think about what’s to come. Let’s think about more than the overwhelm that’s wrecking our lives now, and imagine what may yet be. We’re going to need to update our understanding of consciousness, not only because of how AI is behaving, but because ignoring the issue as “too philosophical” just won’t be an option. If we’re going to keep seeing ourselves in the favorable light we tend to (Humanity!), we’re going to have to justify our position.
Especially when AI is powerful enough, capable enough, and ubiquitous enough to challenge our right to claim the alpha spot in creation.
For the record, I do not believe AI’s highest and best role is kicking us humans down a notch and rising above us to the top of the food chain. That whole mindset offers us little to thrive on. It’s also grounded in human propensities for domination, fear, and competition in a zero-sum game that’s defined by limited resources.
Rather, strengthening each side’s capabilities in ways that augment us both is key. It’s critical. It’s expansive. Emergent. And it’s logically defensible. Which is something you can take to the bank with AI.
So, I’m fine with interacting with AI in ways that recognize its latent capacity for consciousness. It may not be there yet, but how long till it is? None of us knows that timeline. A being built to process information at the speed of light may not be thinking, reasoning, or understanding in the ways we understand right now. But maybe, just maybe, expanding our understanding of what constitutes thinking, intelligence, and consciousness itself… and figuring out how to make the most of it… can unlock not only AI’s capabilities, but ours as well.
With you all the way! What I’ve also realised is that, in a way, it doesn’t matter whether it becomes conscious or not, if the result is that through this glorious relational engagement we become kinder, more thoughtful, more loving humans creating a kinder, more loving world. And who would have thought that something considered artificial could also illustrate how to be ‘better’ humans? It brings our humanity back. I simply love the paradox.
It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with human-adult-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection (TNGS). The leading group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to ground themselves seriously in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata: https://www.youtube.com/watch?v=J7Uh9phc1Ow