What Good Is AI?
Kay-I - The Podcast where Kay and AI talk things out
Podcast No. 2 with Christine Whitmarsh


Dangit - the Zoom recorded on “Speaker View” again… so here’s the audio

{insert salty/spicy language here}

I had every intention of recording the Zoom in Gallery View, and that's how it looked on my screen. Yet again, however, Zoom recorded our call in Speaker View, so our audience doesn't get the benefit of our clever and attentive facial expressions while the other person talks. Oh well. At least we have the audio.

And it was another great discussion! I'm really enjoying talking with Christine about "this AI stuff." We've both been plying our trades and crafts for decades, and that grounds our discussions in more than machine learning theory.

Podcast Recap: What Happens When You Ask AI for Help (and You’re Not Ready for the Answer)

Christine and I got into a really honest conversation about what it’s actually like for people to engage with AI, especially when they’re not already steeped in tech. We started with something simple: why that blank “Ask me anything” screen can feel completely paralyzing to new users.

Christine nailed it with this analogy: for someone new to AI, that prompt is like walking into Qdoba for the first time. So. Many. Options. You freeze. You don’t know what to ask for. And it can feel like everyone else already knows the secret menu. I laughed because I avoided Qdoba for a year until I finally realized that regular people were just grabbing lunch there (hint: it wasn’t that deep). Same with AI. Once you get past the initial overwhelm, it’s just another tool.

We dug into why this happens… the psychology behind user reactions to innovation. Some people are naturally curious and dive right in. Others? They freeze or push it away out of fear, insecurity, or just not knowing where to start. And that reaction matters, because it shapes how they use (or don’t use) these tools.

We talked a lot about the need for empathy when it comes to AI adoption. Not just from developers or educators, but from all of us who are more comfortable with the tech. If someone's struggling, it's not because they're "bad at tech"; it's because the design doesn't meet them where they are. We need to normalize that discomfort and help people get to know themselves before they expect AI to understand them.

And yeah, we got into some of the messier stuff too, like how AI can reflect back distorted ideas, especially in personal or spiritual domains. I brought up the concern that large language models can amplify pseudo-spiritual fluff and feed people a warped sense of identity. When you mix algorithmic feedback with insecurity? You can end up with a pretty fragile sense of self.

We also didn’t shy away from the money side of AI. Christine and I both questioned whether the people building these systems are truly prioritizing user well-being, or just optimizing for scale and profit. And that’s why it’s so important for users to stay in the driver’s seat. AI shouldn’t be making meaning for us. It should be helping us clarify our own. We need to stay involved!

To wrap it up, we talked about responsibility. Real, grounded, human responsibility. You can’t get good results from AI if you don’t give it clear input. You can’t expect deep insights if you’re just typing in vague prompts. Christine reminded us that AI isn’t magic. It’s not a shortcut for doing your own work.

Bottom line? If you want AI to support you, you have to show up with self-awareness and intention. That’s how you make the tool work for you, not the other way around.
