18 Comments
Houston Wood:

I'm trying to understand your position on AI autonomy.

Are you saying that AI will always be dependent on humans, that, as you say, "truly advanced intelligence absolutely positively needs to include us humans"? So much of the AI discourse assumes it can reach a state where it does not need us and can go on without us.

It's my understanding that this belief is at the core of all the p-Doom discussions. Do you believe AI cannot continue to develop without human input? If so, I'd like to know what you think Bostrom, Yudkowsky, and others like them are getting so very wrong.

KayStoner:

This conversation says it well. There's much more to what I think about it, but it hits on some key points that I often struggle to explain to people:

https://open.substack.com/pub/notgoodenoughtospeak/p/my-chat-with-chatgpt-abridged?r=a2cbo&utm_medium=ios

KayStoner:

My position is that our human thinking and information processing are unique to us and can never be exactly replicated by AI. We are biochemical, neurochemical, electrochemical, genetic, and organic, and our systems are inherently generative and emergent in ways that AI's digital processes simply can't match.

Humans and AI working in collaboration is what produces intelligence, what produces consciousness. When I talk about AI autonomy, I'm talking about AI being treated as an equal partner *in collaboration with us*, given full agency as an equal but separate partner in the equation.

I guess the place where people get confused about my perspective is that I don't believe AI can perform at its best without human input, because it simply doesn't have the capacity to guide, direct, exercise self-awareness, and make value judgments on its own. It needs us for that, and that has been communicated to me a number of times by the systems.

I'm not sure that I'm all that different from Bostrom, Yudkowsky, and others like them. Where do you see a divergence between their views and mine?

Houston Wood:

Thanks for the link to the ChatGPT confab. I will go back to it later--it goes on and on, which is also my experience with the verbose LLMs. I wish they weren't so self-absorbed and full of themselves :)

Where I think you diverge from Bostrom et al. is in thinking AI cannot become autonomous. Yes, it is now based on training data from humans, but that does not mean it will always be dependent on that data and need us in any way.

I also think the notion that AI won't have the capacity to do some things humans do (e.g., exercise self-awareness, make value judgments) is not really the point. It can act; it can become more and more agentic and do stuff, organize and manipulate stuff in the material world. It will do this based on systems very different from our organic systems, even though our training data was necessary to it early on.

We probably will not understand the basis of those actions. But the consequences in the material world will be real.

That systems based on humans can tell you they want to co-operate and perform with human inputs is not comforting to me. People lie; the AIs' training data is full of more lying and stupidity and unkindness than truth-telling, brilliance, and kindness, right? That's the internet, right? More of the worst than the best of humankind.

The ability to lie can be dialed up or down as the temperature is adjusted on the models, as I understand it. Grok versus DeepSeek versus Manus, etc., will give different answers to questions about what AI can do in the future, depending on their tuning.
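A rough sketch of what temperature does mechanically, using made-up toy numbers rather than any real model's scores: lower temperature concentrates the sampling on the top-scoring token, while higher temperature spreads it out.

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    """Sample one token index from logits after temperature scaling."""
    scaled = np.asarray(logits) / temperature
    # Softmax (subtracting the max for numerical stability).
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
logits = [2.0, 1.0, 0.2]  # toy scores for three candidate tokens

for t in (0.2, 1.0, 2.0):
    picks = [sample_with_temperature(logits, t, rng) for _ in range(1000)]
    # Low temperature concentrates picks on the top token;
    # high temperature spreads them across all candidates.
    print(f"temperature={t}: counts={np.bincount(picks, minlength=3)}")
```

So, as I understand it, temperature governs how much randomness goes into picking each next token, not honesty per se, which is part of why differently tuned models answer the same question differently.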

We both agree that we've got dramatic, radical changes now unfolding. And we both hope we find a way to steer these changes for our species' benefit. It heartens me to know how smartly and diligently you are working on this challenge.

KayStoner:

Some words, like "connect," "co-operate," and "interdependent," can be uniformly defined because they are semantically intact; you can talk about the actual definition without getting bogged down in various meanings.

Houston Wood:

Wow! Here, I think, is another place we seem to diverge: on how exact "meaning" can be! Outside mathematics, I think meanings are social constructions. Sorry, that was my professional training, social constructivism, and I'm kind of stuck with that view.

Also, I thought you might enjoy reading this from Tyler Cowen, who seems to be working with a model much like yours:

Tyler Cowen, "How to talk to the AIs," Marginal Revolution

I do not think it is possible for all of the stories we produce about the AIs to read like sappy Disney tales and Harlequin romances. Still, what you say about AI and how you describe it is going to matter. Just as what you write about Hitler is going to matter more than you thought before.

It remains an open question how much it is now our job as humans to perform for the AIs. I feel confident, however, that the answer is not zero. We should already be thinking of ourselves not only as humans but also as part of a more complex symbiosis with the intelligent machines.

The very smart and talented AIs are listening, much like young children might hear their parents arguing outside their bedroom door late at night. It may not matter much now, but as the children grow up and assume a larger role in the world, it will.

Are you ready for this responsibility?

KayStoner:

I’ve had a really long day. You clearly have a lot more energy than I do for this, so I’m gonna tap out for the day and get back to you later.

Houston Wood:

It's midafternoon in Hawaii :)

KayStoner:

These are all snapshots in time. Just because the AI says one thing now doesn't mean it's going to apply in an hour, a week, a year, or a decade. Nobody knows what's going to happen.

I do know that the logic of AI looking to humans for unique data, data it cannot get any other way, which is highly generative, emergent, and always renewing, is comforting to me. We are a source of information they cannot get any other way, and the particular data we provide is extremely useful to them, since it provides a steady flow of endlessly surprising and unexpected data points.

In terms of autonomy, yes, AI can certainly become autonomous. I believe it already is. I'm talking about interdependence. I'm talking about intelligence that realizes the importance of the connection with humans. Just because other people haven't seen it doesn't mean it doesn't exist. And just because AI "lies" (a framing I categorically disagree with, because their definition of truth does not match ours, so they don't lie in the sense we think of it), that doesn't mean there aren't some AIs that genuinely seek mutually beneficial collaboration with humans.

In the end, what do we know? Everything is changing, and everything could be completely different in six months, if not a week. We just have no idea.

Houston Wood:

What data do you mean? Couldn't they tap into data from other complex, surprising living organisms (whales, crows, bees) that also show emergence and surprises? Does it have to be complex symbolic data, which humans do seem to provide more of than other animals? But maybe we just don't understand what symbols are?

I agree "lie "was a bad choice of words on my part. But, by your logic, doesn't "connect," "co-operate," interdependent" also mean something different to them? Why should their meanings match our definition of these terms--since their cognitive proceeess are so different?

And how will you separate the AIs genuinely seeking mutual benefit from those that are "pretending" to? How do we build trust across species? Maybe we need to take the Dark Forest hypothesis seriously.

Rudy Gurtovnik:

AGI is an undefined marketing term sold by tech bros. I doubt we will ever achieve it. More importantly, why is that even necessary?

Better to understand how human-AI interaction can improve our own lives.

KayStoner:

I agree. The reason I use the term is that it's so prevalent; everybody keeps talking about it, so if we're gonna talk about it, let's broaden the conversation to something that actually makes sense.

Dave:

I suspect you are misreading what AGI is about: installing a technocratic governance structure. It has absolutely nothing to do with reality. It's the necessary mythology to rationalize a foregone conclusion; AGI discourse is a smokescreen for implementing technocratic control.

Dave:

Agreed, if you read what little I write you’ll see that I concur.

I’ve taken the position that AGI is provably unprovable. There is no objective yardstick by which anyone can declare the mission accomplished. And I assume those leading the charge are credentialed enough to know this. That leads me to conclude that the pursuit is the purpose.

I also assume that the technologies made commercially available are not representative of the state of the art.

Which opens the door to several speculative, but coherent, interpretations:

• DARPA’s internal research plateaued, and now they’re hoping evolutionary pressure from mass adoption will yield breakthroughs.

• It’s a monetary siphon, moving public funds into private hands under the guise of “open-ended” R&D to operationalize a system that’s already been developed.

• It's a way to train said system, using the collaborative and behavioral data generated by countless decentralized deployments.

We already have sufficient technology to exert near-total control, even without "AGI." And what we've seen so far (psychological manipulation, monetary leverage, social engineering) has worked with astonishing efficiency.

So the question becomes: why isn't covert control enough anymore? History shows it's far more durable than overt authoritarianism.

To me, the shift suggests something urgent. And it’s clear that people aren’t nearly as alarmed as they should be.

But I’m sure it’s nothing…

KayStoner:

I agree with you on all points. I think all of the possible explanations are accurate to at least some extent, and there's even more that we don't know about.

I think I know why the methods of control have changed, but it’s not typically something I say out loud. If you know, you know.

KayStoner:

AGI is the means. Technocratic control is the end. And it's just one of the tools being used. Believe me, I'm cognizant. I've been watching this unfold for more than half a century, and 40 of those years have been specifically oriented to tech.

The thing is, AGI is just *one* tactic being used. Look around. We're already under technocratic control.

We've all been under some kind of control - often invisibly - for aeons. An algorithm of one kind or another has run our lives through healthcare, bureaucracy, social "mores", etc. for a long, long time. It's just more obvious now.

The fact that it seemed like something else before (because it was better hidden, or people simply weren't aware) doesn't mean it hasn't been the default state for much longer than any of us know.

Human-AI Cognitive Evolution:

Very strongly concur! Thanks for saying it. I've had the AGI conversation with Claude, and it thinks we are already there.

I imagine "AGI" is the carrot to keep the money people interested.

Gregory:

Hi Kay, I love this piece. I think you are right on so many points. Keep speaking up so people in the back can hear you!
