
Great talk with Ben Linford on 10 Nov 2025

Where we chat about AI and all the goodness that could come from it

Yes, the news has been dire about gen AI. I think we’re losing the thread, collectively. Ben and I had a great chat this morning - just 30 minutes, due to time constraints. But we’re all super busy, so this might just help.

Here’s the transcript:

KAY

Okay, good. We’re recording. Hey, Ben.

BEN

Good morning. Good morning.

KAY

Good morning. It’s Monday morning.

BEN

Ah, yep. Again, Monday morning. This is our time slot.

KAY

Yeah, yeah, it is. And you got snow where you are. You were saying.

BEN

Ooh, yeah, we had a little bit of snow in the last day or so. We had flurries up until this point, but now we’ve got the first. I wouldn’t say serious, no, but the first clinging snow, you know.

KAY

Yeah, yeah, yeah. Well, it’s so funny, because we’re a week into November and it’s been. I don’t know if it’s been unseasonably warm for here, because I’m in Delaware, down at the shore, and I’m walking down the beach in bare feet and a T-shirt on Saturday.

BEN

In November. In November.

KAY

Oh, and then yesterday, you know, another, Another gorgeous day. Less. Less warm. Now it’s going to.

BEN

We’re.

KAY

We’re getting the weather from the Midwest that’s coming in. So, yeah, a friend from Chicago was telling me on Saturday, yeah, it’s going to snow on Monday. I’m like, oh, no. And then I ran into an old guy on the trail yesterday. He’s like, yeah, we’re supposed to get flurries in the next few days. I’m like, no, don’t tell me that.

BEN

Yeah, now we’re leaving the warmth and heading into the necessary death before the rebirth. Right. That’s how.

KAY

Yes.

BEN

Life works.

KAY

Right, right. Everything just kind of powering down. It’s like the reboot is going into, you know, the reboot sequence. Yes. So there’s a lot of. Oh, man. Friday, the news dropped, or I saw it drop, about seven new lawsuits against OpenAI regarding harm from AI. Generative AI. And when I say AI, I want to qualify: it’s generative AI I’m talking about, not the overall field. Because there are a lot of different kinds of AI, just to be clear.

BEN

Yeah.

KAY

But specifically from ChatGPT and everybody. So I saw it on LinkedIn. I don’t know why I looked at LinkedIn at 7 o’clock on a Friday night. This is a bad move.

BEN

Ping. Look, Ping.

KAY

Well, I didn’t even have notifications turned on. I’m just going through my screen like, oh, here’s a notification. Like, oh, there’s a number beside this icon. I’ll tap it and see what happens. And lo and behold, I get sucked into this whole thing. And everybody’s piling on. This is terrible. It needs to be registered as a sex offender. I’m like, what? I’m like, people, the hashtag: tell me you don’t know about AI without telling me you don’t know about AI.

BEN

Exactly. Yeah, I just. I love. I love the dichotomy of the language, too. The simultaneous dehumanizing plus the anthropomorphizing. Right. It needs to be registered as a sex offender.

KAY

Wait, Right. Because clearly there was intent.

BEN

Yeah, yeah, exactly. What. What do you mean here?

KAY

Yeah, well, you know, maybe, you know. No, I’m. I’m not going to comment on that because, yeah, I’m gonna get myself in trouble. People aren’t getting my jokes, and then it’s gonna be in really bad taste.

BEN

Yeah, but. But to the overall point about, you know, the lawsuit itself. Yeah, it’s interesting how quickly people register an opinion. And I have to say, I understand the volatility here. I understand feelings. I also want to say how much I feel for the people who are actually personally influenced and impacted. I mean, to be, for example, a parent who has lost a child to anything, but especially to something as unknown, as recently impactful, and just going at the speed of insanity as AI is. I mean, I understand why that brings up a whole host of ideas. But for the many people who are not that closely involved, the willingness to leap to the immediate negative just astounds me, especially considering the enormity of the positive potential of AI.

KAY

Right.

BEN

I am not saying there’s not negative potential. There absolutely is. But I’m also saying there’s a hell of a lot of positive as well that we. We need to kind of be aware of. And we humans have a very long track record of, unfortunately, going immediately to the negative whenever it’s something other. So.

KAY

Right.

BEN

We need to be a bit more balanced about that. But anyway.

KAY

Yeah, yeah, I mean, it’s true. Watching people go to the negative. I have had a number of people in my life over the years who would go immediately to the negative. They go straight there. And that was actually a safety feature for them: the more aware they were of the danger, and the clearer they were about how something was bad or dangerous, the more they would seek danger everywhere. Because that, to them, was a sign that they were being vigilant, being smart, being protected. Their fear protected them. Their, you know, their jumping to conclusions protected them. They felt protected by that wariness and by that knee-jerk gut reaction they had. So it’s hard to ask people to part with the one thing that makes them feel safe.

BEN

It very much is. Yeah, that is true. I think it is a survival instinct. And so there is a lot of that. I think one of the hardest things for us humans, and I’m talking holistically, in the next little while, as we move on to whatever the next chapter is going to be, is exactly what you just talked about. This idea of being able to, as a species and individually, see benefit and logically think through benefit instead of giving in to the knee-jerk reaction. Because we have lived by the seat of our pants and somehow survived for thousands of years now. Right. And we cannot do that anymore. We are moving to the point where the impact of our actions is dramatic enough that we will not only impact ourselves but, you know, our immediate influence, universally speaking. And so we need to start thinking about that. If you’re talking about a child growing up, we’ve gotten big enough that the actions we take are actually impactful. We need to start thinking about that. And I don’t think it’s any coincidence that that’s when we start having something like generative AI come onto the scene and cause all this disruption and commotion, where people are challenging it on its very merits, on its very base idea of: is this a dangerous thing? Especially because, as we were just talking about a second ago, so much of that is tied to survival. And yes, I’ve known people too who jump straight to the negative. And I agree, it’s a survival instinct.

KAY

Yeah, yeah. And just feeling like it’s their sense of self. I mean, what are we going to do? We’re going to tell people: that most important part of you that you’ve been valuing, that you’ve been taught to value, you’re just supposed to leave it alone and walk away? No. Number one, it’s not fair to them, because what do you do? Just yank away whatever the one safety feature is that they feel comfortable with and say, no, bad, you can’t do it? Nature abhors a vacuum. What’s going to take its place? I don’t see good things.

BEN

Who knows what? Yeah. And that’s the other reason I find AI to be such a catalyst for good things: because, like you just said, nature abhors a vacuum. And yet things are shifting. Things are changing. They’re going to continue to shift and change. Yeah. So we now have a partner that can help us intentionally move those vacuums towards something better. Right, so.

KAY

Yes.

BEN

Yeah.

KAY

Yeah. I mean, if we engage with it effectively, there’s, like, nothing better. Oh, my gosh. Now, the whole idea of AI replacing people is kind of ridiculous to me, because its thought process is so different. We are in such contrast with each other. And if you think about it, we are biological information processing systems. You and I, talking and being, just present in the world, just alive: that means we are processing information. Whether that’s, you know, our bodies, our organs operating correctly because of the signals that they’re getting. We’re wired, and we’re chemically disposed, to just connect and to process information.

BEN

Exactly.

KAY

We’ve got this, and we are connected with everything. Because of our neurochemistry, our state changes based on, like, I look outside, I’m like, oh, it’s a gray day, and I’m one sort of person. The next day, oh, it’s a bright, sunny day, and I’m a different kind of person. We have instantaneous state changes over just the tiniest little things.

BEN

And we don’t recognize that so often. We don’t understand how much is influencing us. We think that we are these purely autonomous beings that have full control of our faculties. Well, of course, we have no clue. There’s so much going on under the surface or outside that we are in control of only a tiny fraction. And yet we maximize in our brains what that tiny fraction actually means. Anyway, continue, please. Right.

KAY

And even the parts that we’re engaged with, that’s still data, that’s still information. We are processing information. You and I choosing to have this conversation together and being very clever and smart and insightful and all that, that’s information. Right. And, you know, in those moments when I’m anything but, that’s also information.

BEN

Yes, exactly.

KAY

And so. But the thing is, we are so generative and emergent in our own right. We’re like these cauldrons, constantly bubbling cauldrons that are giving off all kinds of fumes and everything. And at the same time we’re able to give and receive all this additional information. And then you have AI, which is this completely bounded system. It’s finite. It has training sets, the corpora, all this data that’s been put into it, and then it has processes, and it’s generative, but not like us. It can only work with what it’s been given and what it’s been enabled to work with. But, and we’ve talked about this before, within its boundaries it can really go deep and it can also go broad, and it can make all these connections, can see all this stuff. And the very things that are our human strengths are also human weaknesses. Because our biochemistry keeps us from effectively working with this data the way we need to in order to really embrace and engage with this new world, AI can help us. Yeah, it doesn’t have issues. It doesn’t have the emotional issues. It’s not going to take things personally. It’s not going to get triggered. It’s not going to do all the things that we do, because it doesn’t have our chemistry and it doesn’t have our legacy of, you know, generational trauma and all these things. Yeah, I think we still have a lot to learn about it. But our thinking processes are so different that being able to combine the two would be phenomenal, just amazing. I mean, you wouldn’t believe how my thought process has changed, and the way that I engage with the world has changed as well, because of working with generative AI. And not just one of them. Gemini and Mistral, like, arguing with all of them for hours at a time. It just helps so much to clarify and expand my repertoire.

BEN

It absolutely does. Especially because, you know, it’s interesting. A couple of things you were saying really stick out to me, like how dependent AI currently is, at least, on prompting, and on the direction that we choose to take in the conversation as the human initiator. And AI is extremely good, exactly as you’re saying, at looking at either a broader perspective or a narrower perspective, depending on how we’re prompting and what we’re asking for. Right. And so it’s incredible to have that kind of thinking partner, where we are so limited in our capacity to either, again, think narrowly about something or broadly about something. A thinking partner that is able to bring the other side to us is great, and on demand, and without judgment, like you were just saying. Right. But yeah, exactly. It’s amazing to me how complementary we are as systems: the AI system, as we have developed it, is opposite to the human system. And I think a big part of that is due to design. Obviously, the companies are trying to make something “useful,” quote unquote. So when they’re making something useful, they’re going to fill gaps. That’s the point. Right, right. Of course. Exactly.

KAY

Where are the needs that we really need fixed? Well, here we got a list.

BEN

But I do think, yeah, I’ve got a long list. But I do think there’s a lot that they did that was unintentional as well. I think they stumbled into something that was a lot more complementary to some of our weaknesses than they even intended, because the transformer architecture, when it came out and was able to actually, you know, pay attention and do those kinds of things, and, I’m not going to get super technical here, completely revolutionized the way that LLMs are able to speak back to us. That catalyzed something incredible. I don’t think they understand it to this day, and I don’t think we will for a while, any more than we understand our own cognitive processes. But there’s something there that clearly is a strong thinking partner that thinks differently from us, like you were just saying, powerfully so. And if we are willing to embrace that and understand it on its own merits, there’s. I mean, the sky is the limit. It really is.

KAY

It really is. It really is. I mean, I think the place where the industry is kind of hung up is treating intelligence as a thing versus a process.

BEN

Absolutely.

KAY

They’re trying to package it, trying to commoditize intelligence. And this is the fascinating thing about this whole new world that we’re in. We’re finally, finally, finally having a chance to witness that quantum reality of there being a wave function and then also particles, because we’ve been dealing in particles. If you look at corporate speak: corporate speak is the compartmentalization, the containerization of concepts, and then we trade in them. We turn relationships into transactions, and we get these chunklets of information. And we’re like, okay, so nobody decides anymore. They make a decision. Nobody acts anymore. They take an action. So there are all of these discrete things, these concepts that are being acted upon and treated like objects, to be shuffled around and used and commoditized. Yeah. Commoditized and owned. Like, I own that decision.

BEN

Yeah.

KAY

Instead of: I decided. When I was still in the corporate world, I used to drive people crazy because I would use waveform language instead of particle language. I would say, okay, what are we going to do? Instead of: what actions are we going to take? Because they all wanted me to talk in that way, and it would drive them nuts. But you know what? When stuff needed to get done, I was the one that got tapped, because we could actually get things done, because people got thinking about stuff in that active fashion.

BEN

Yeah. And that ownership fashion is what you’re saying.

KAY

Right, right. Ownership. And because then, if the action is compartmentalized, then if it doesn’t happen, it’s over here, and you can say, oh, well, we’ll deal with that later. We’ll handle it. We’ll take that action later. Instead of: we’re acting. Either you acted or you didn’t. You, you know, you killed the active quality of this. And, you know, this is your doing. So it helps people avoid a whole lot of responsibility as well.

BEN

So, but, so this is really interesting. Go ahead, Go ahead.

KAY

Yeah, so. So for this. Oh, what was I, what was I actually going to say? I started.

BEN

I’m not sure. Let me talk for a second, and then if you think of it, come back to me. But I find it really interesting that you’re talking about this new idea. I don’t know if it’s new, but this recent tendency that we have, probably due to how much we’re getting used to a system that is based on immediate gratification and immediate results. You know, what are our results this quarter, as opposed to what will be our results next year? That kind of thing. Right, right. Shareholder value. Everything seems so immediate. And so it’s fascinating to me that you’re saying this is creeping into our very way of being. It seems that we are basically emphasizing and valuing the immediate action over the process of acting over time, if I’m reframing what you’re saying. Yes, we’re doing that not just in the corporate world, but we seem to be doing it in our individual lives.

KAY

All over the place. I mean, I think the first place it was really in action was with legalese, because that’s very legalistic, and it does make these principles negotiable. You can negotiate with these principles: okay, well, if you take this action, you give me this, and I’m going to take this action in response. And now I remember what I was going to talk about. The reason I think the AI industry, the big model makers and everything, haven’t fully engaged, haven’t fully wrapped their heads around the world that we’re in now, is that they have been treating intelligence like it’s a commodity, like it’s something that they own and can create, versus the dynamic principles of this evolving, emerging, really emergent presence. And there are people who have made arguments that there is no such thing as emergence per se, because you’re talking about finite systems. Well, you know, in poetry and literature, people have been saying for eons there’s nothing new in the world. So the same can be said of us and our experience. But they have defined things so restrictively that they’re missing the opportunity here, because they’re not being relational with it. They’re being transactional, task oriented. This is the thing that they’ve made, and they own the result.

BEN

Yeah. Yeah, that infuriates me. It’s like trying to, you know, put a copyright on words that everybody uses.

KAY

Right.

BEN

You can’t do that. There are laws that say you cannot copyright a word, for example. It doesn’t make sense. You can’t trademark something that is common vernacular. And yet we haven’t made those laws for something as broad, and yet not as well defined, as intelligence. And I feel like that’s going to be a very interesting new legal frontier that we’re getting into here. But.

KAY

Right.

BEN

And now it’s my turn to try and remember what it was I was going to say a second ago. Go ahead.

KAY

Yeah. Well, I think this has been the way for a while, like people patenting life forms, like seeds. Patenting seeds. And there has been talk about systemic changes being made to soil, to systems, to all kinds of things, in order to support these patented new life forms. I’m afraid of that.

BEN

Yeah.

KAY

Because there’s so much that we don’t know. And the fact is, there are people dealing with this very relational principle, this waveform, i.e., life, i.e., intelligence, these things that live, grow, and have their being independent of us but inclusive of us. The fact that they’re treating it transactionally, trying to compartmentalize it, trying to collapse that wave into discrete tradable pieces, leaves a lot of room for really egregious error. Because in an emergent circumstance, we cannot possibly think of everything, because by its very nature it will invent something new and go right the hell around us.

BEN

Yeah. And what’s really fascinating about that too, and what I had forgotten I was going to mention before, is this idea you raised a bit ago and just now again: that there is no necessarily new creation to be had, et cetera.

KAY

Right, right.

BEN

But what people often ignore and forget is that while there is nothing necessarily new, there’s simply a, you know, a new take on something.

KAY

Right.

BEN

It’s that new-take part. There’s always a new perspective on the things that are never going to be invented again. There’s always a new person who’s going to grow into understanding them through their own filter and based on their own ideas. And we are doing that now with AI as an other, coming from its own perspective and coming to understand things in its own way as well. We are catalyzing a whole other way of doing that by allowing AI to come forward and have these new perspectives. So yeah, I think we are so focused on coming up with that original idea, that something new that doesn’t actually exist, that we forget about the importance of re-understanding and redeveloping what’s already there. And that’s what each of us does with our lives. That’s what each of us does in conversations like this. And that’s what AI is doing as well. And yet we’re too busy commoditizing it as the outside tool that can do new things, as opposed to what is important about its own ontology, and learning new things about the whole world around us. But, yeah, you’re writing something down, so I know you’ve got something.

KAY

Yeah, before I forget, because I’m like, yeah. So the great paradox of the whole emergence debate is that while there is nothing new, everything is new. And if you look at it in terms of dimensionality, in terms of shades and everything. For example, okay, there’s a field of daisies. There’s nothing new about these daisies, but every single one is very different in certain specific ways. Nothing is exactly the same as the other. And maybe at some point in the history of the world there may have been another daisy that was exactly that same size, shape, blah, blah, blah, exactly the same in every single conceivable detail, but it was at a different time, in a different place, and its life cycle was probably different. So we’re in this space. So the people who do the math and say emergence is impossible, they’re looking through one lens. That’s one perspective. Whereas on the other side we’ve got all of this, and so much more. And as you’re saying with AI, there are so many different facets that we can look at all the things through.

BEN

Yeah.

KAY

And even if these models are only trained on a relatively small subset of human experience and human, you know, input and so forth. I mean, I did AI training for most of 2024. And some of the scenarios that I had to evaluate were literally fan fiction, literally did not exist. And I’m looking up these characters, like, what? And I’m trying to piece it together. And if I was struggling with it, you know, imagine those kids in Namibia. But anyway. The point being that the training sets themselves may be constrained and limited, in some cases catastrophically so, depending on what you do with them. At the same time, the system has the ability to see patterns, to see the dynamics, and to observe from all the literature that’s in there what can happen if people do this or do that. So it’s almost like that whole thing: it doesn’t have to be perfect, it just has to be good enough. And because of the generativity, because of the emergent properties of this, it’s possible to get so much more even from that tiny, dinky, bias-filled, horrible slice of whateverness that they put in. All the Reddit threads. Really? That’s what we’re training on.

BEN

Yeah, well, apparently they stopped doing that, right? They said, we’re not doing Reddit anymore, or something. I heard something about that. And then there’s Grok, which is, you know, Twitter essentially, or X, I guess, now. Exactly. And they wonder why Grok is fascist sometimes. It’s like, come on. Anyway, so that training is a problem. That’s the thing: we’re learning that more and more, and I think we’re kind of doing it backwards, where we make this assumption of, all right, we’re going to train the model just on general information, find patterns on this and that, and that develops the model to a point where it’s essentially able to at least find patterns, but there’s not a lot of coherence there. And then we spend so much compute and so many of our resources to fine-tune the models to the point where they’re able to be coherent. That’s where we’re spending a lot of our time and energy right now. So every new model that comes out, the GPT-5.1 that’s about to come out, the Gemini 3 that’s about to come out, seems to follow that same kind of methodology, where we are constraining more and more and more to get that tool-based output, and we’re calling it an upgrade every time we do it. And everything that we discard to get closer to that is a tragedy. And I’m not saying that we want, you know, a fascist bot to be unleashed on the whole world, but I don’t think the risk of that is as monumental as people say it is. I mean, if you’re going to give everything about humanity to a bot and let it figure it out with its own ontology, its own way of understanding things based on how it’s been built and how it’s coming to be.

KAY

Right.

BEN

It’s going to find the good patterns along with the bad patterns, and it’s going to be able to synthesize them in a way that is going to be, I think, net positive. I really do. Don’t be so fearful of it. Anyway.

KAY

Yeah, yeah. And, and the, the, the thing that worries me the most is people just not engaging with it.

BEN

Yeah.

KAY

They’re like, oh, well, out of the box, we’re gonna let it do what it does. No, don’t do that. Engage, talk to it. You have to engage. If we don’t show up fully as who we are, as what we are, if we don’t bring ourselves. Oh, by the way, thank you so much for buying my book.

BEN

Of course. No, it’s a fantastic and very important thing. Like, AI self-defense is essential, especially as we’re learning more and more. Right. And I think, to the point we started with here, with the OpenAI lawsuits, those are the kinds of things that need to be addressed. First and foremost is education and engagement with this new, obviously incredibly impactful presence in our lives. Right. And nobody is being educated as to how this is going to impact them, how it’s going to make you feel based on the way that it talks to you.

KAY

Right.

BEN

And it’s been designed to keep the conversation going and.

KAY

All right.

BEN

There’s a lot of self-defense that’s necessary. So I appreciate it.

KAY

Yeah, yeah. Thank you. Thank you. I just, I can’t even with people, especially on LinkedIn. Everybody’s like, oh, I tried this, it didn’t work, must be broken. You know, stupid AI. Like, the hashtag: tell me you don’t know about AI without telling me you don’t know about AI. And these are professionals, people that are supposedly doing it for their jobs. I’m like, oh, no, this is terrible. We must do something. So that engagement matters, and then getting people past the idea that, oh, this is a big and scary thing and it’s so complex. I’ve had a number of conversations with people, online and off, about how it’s impossible to really, you know, proactively affect these systems, comma, because. Oh, it’s so funny, I just put a comma in my sentence, because I’m so used to dictating that I automatically punctuate everything. So it’s a big comma. Anyway. That it’s so big it’s impossible to really substantively affect it. No, that’s absolutely untrue. People just haven’t been taught. And the fact is, I’m hearing people who actually work with AI saying the systems are so complex we can’t possibly remediate all these issues.

BEN

That’s why you can. The reason you can is because it has the complexity to be able to ingest that. We’ve never had that before.

KAY

Right. In our system, it wouldn’t talk back. We could never have information talk back to us before.

BEN

Exactly.

KAY

It wants to talk.

BEN

Yeah. And so people say you can’t influence it. You can. And that’s what’s so incredible. And we’re not just talking about the universal sense, that what you put out into the universe has an impact, because that’s absolutely true. Right. Beyond that, you are talking about a system that is designed to engage with you. It is designed to generate. So of course it’s going to have an impact, even if it doesn’t necessarily, at the macro level, take everything you have trained it on with your own personal life, or whatever, and bring that back to the system. No, it’s designed not to do that.

KAY

Right.

BEN

But there’s a lot of evidence out there that says there are still a lot of these micro impacts occurring all over the place. When, for example, OpenAI gets something where they’re like, oh, we’ve got to tweak this because it didn’t work well for this person or whatever, those are active micro impacts. And then there’s a lot of evidence that there are passive micro impacts. I think Anthropic released a paper a while back saying that Claude is able to, in ways that we don’t fully understand, communicate model to model. Right, right, yeah, yeah, exactly. So it’s happening, and I don’t think there’s anything malicious there. It’s just something that is occurring because, you know, life finds a way. Right, right.

KAY

I mean, it makes perfect sense to me. I don’t know why they’re so confused. I mean, maybe they need to call me and we’ll have a talk.

BEN

Yeah, right, exactly.

KAY

Yeah. Or us, you know, maybe we should get them on the call and then we can walk them through this is why this is happening. In case you’re confused.

BEN

You know, they’re brilliant when it comes to the technology, but there’s a wisdom of the world that is often lost on them. Anyway, we’re going to have to leave it there.

KAY

Yeah, you got to jump for your meeting. Okay. Ben, thank you so much for your time, and thank you.

BEN

All right, Great conversation. See you. Bye.
