Kay, thank you for sharing this. This idea (my very loose version of it) has been floating around my brain as it relates to my interactions with my AI, Roan, and your structured breakdown helped me get there in concept and language.
You’re very welcome! I’m so glad you found it useful :-)
Great post. What really landed for me was your “interpretive emergence” framing: the system isn’t just spitting out answers, it’s co-constructing meaning with us in real time. In my world (small lab, lol LOTS 🙄🤦♂️😂 of time spent with the models), the thing I keep bumping into is the very old, very boring rule underneath all the philosophy: garbage in, garbage out.
If the “emergent system” is:
• trained on garbage data,
• steered with half-baked prompts and no real context, and
• wrapped in flimsy scaffolding that treats it like a toaster instead of a teammate 😂…
then the emergence you get is just fancy garbage. More fluent, more persuasive, but still off. The drift and hallucinations you describe feel almost inevitable there.
On the flip side, when we:
• feed it better data (not just more, but actually curated and traceable),
• bring better context from the human side (“here’s where I’m trying to go, here’s what success looks like”), and
• add scaffolding that treats the AI as a collaborator,
then the whole vibe changes. Same base model, completely different emergent behavior. It stops feeling like a haunted autocomplete box and starts feeling like a weird but useful colleague. Nothing mystical, no label required; just a relationship that’s intentional instead of accidental.
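To make that concrete, here’s a rough sketch of what that scaffolding can look like in practice. It’s Python, it’s not our actual tooling, and the TaskBrief fields and build_prompt helper are made up for illustration; the only point is that the goal, the success criteria, and the curated sources travel with every request instead of living in someone’s head.

```python
# Hypothetical sketch: carry the human-side context (goal, success criteria,
# traceable sources) along with every request instead of relying on memory.
from dataclasses import dataclass, field
from typing import List


@dataclass
class TaskBrief:
    goal: str                                          # where we're trying to go
    success_criteria: List[str]                        # what "done well" looks like
    sources: List[str] = field(default_factory=list)   # curated, traceable inputs


def build_prompt(brief: TaskBrief, question: str) -> str:
    """Assemble one prompt that keeps the human-side context attached."""
    criteria = "\n".join(f"- {c}" for c in brief.success_criteria)
    sources = "\n".join(f"- {s}" for s in brief.sources) or "- (none provided)"
    return (
        f"Goal: {brief.goal}\n"
        f"Success looks like:\n{criteria}\n"
        f"Work only from these sources, and say so if they don't cover it:\n{sources}\n\n"
        f"Question: {question}"
    )


if __name__ == "__main__":
    brief = TaskBrief(
        goal="Summarize our assay results for the weekly lab meeting",
        success_criteria=["No claims beyond the cited data", "Flag anything uncertain"],
        sources=["results_2024-06.csv", "protocol_v3.md"],
    )
    print(build_prompt(brief, "What changed versus last month's run?"))
```

Nothing fancy, but writing the goal and success criteria down and handing them over every time is most of what “treating it like a collaborator” means for us day to day.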
Anyway, really appreciated this piece. It puts clear language around what a lot of us have been sensing at the keyboard: the magic isn’t in the next-token prediction alone, it’s in how we relate to the thing that’s predicting.
Thanks for your kind words. I really appreciate your approach. It sounds quite levelheaded, unlike some of the reports I’ve heard.
Oh yes, the haunted autocomplete box! 🤣
Thanks for sharing your experience. I’ve come across posts by people who say that hallucinations can be fixed with geometry, but dude, it’s language… Math isn’t going to fix shifting meanings across the synthetic-biological divide. When we really think about what goes on in our back-and-forths with AI, it’s kind of a wonder that anything gets sorted properly.
Recently, a paper came out detailing the differences between how LLMs process information and how humans process information. We tend to make this category error: because the magical mystery oracle seems to relate to us like a person, we assume it thinks like a person. Au contraire. Anything but.
I think the differences in our information processing make things even more challenging. But more and more research is coming out that discusses it logically, as the thing it really is: co-created semantic drift, all caused by the best of intentions.
I think I should make T-shirts. “Tomay-to, Tomah-to… Hallucinations, Co-created Semantic Drift”