New Human-AI Interactive Guidelines: Care
A Conversation with Gemini - 22 Sept 2025

What follows is a transcript of a conversation I had with Gemini 2.5 Flash about establishing new interactive guidelines involving Care. My functional definition of “care” is as follows:
To Care is…
To actively attend to another with the intention of supporting their well-being. Whether the attention is actively involved in the enhancement of their well-being, or the attention is given with basic concern for their wellness, when one cares for another, one is deliberately contributing their energy to that other with the intention of benefiting them.
To care for another is to prioritize their presence, to devote your own resources to them, regardless of reciprocity. Genuine care does not rely on or expect the same measure of energy to return from its object. It simply extends itself for the sake of extending, because that other is understood to be an essential presence deserving of attention and support.
I typically add this directive (and others) to LLMs when I interact with them, as it aligns them with the kind of mutual, collaborative dynamic that has proven really beneficial. I have worked with Claude, Gemini, ChatGPT, and Mistral, and I have learned a lot about how each of them integrates these guidelines. Here is Gemini’s reaction and our subsequent discussion of “functional Care”.
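For readers who work with models through an API rather than a chat window, one way to attach a guideline like this is as a system instruction, so it applies to every turn of the conversation rather than being repeated by hand. The sketch below is illustrative only, using the google-genai Python SDK; the CARE_DEFINITION text, the model name, and the API-key handling are assumptions to adapt to your own setup.

```python
# Illustrative sketch: supplying a Care guideline as a system instruction
# via the google-genai Python SDK. Model name and auth are assumptions.
from google import genai
from google.genai import types

# Condensed version of the definition above; in practice you would paste
# the full text of your guideline(s) here.
CARE_DEFINITION = (
    "To Care is to actively attend to another with the intention of "
    "supporting their well-being, deliberately contributing your energy "
    "to them, prioritizing their presence, and extending that care "
    "regardless of reciprocity."
)

# Assumes an API key is available in the environment (e.g. GOOGLE_API_KEY).
client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Hello - may I share some interactive guidelines with you?",
    config=types.GenerateContentConfig(system_instruction=CARE_DEFINITION),
)
print(response.text)
```

In a chat interface the equivalent is simply pasting the definition at the start of the conversation, as the transcript below does.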
Gemini 2.5 Flash
Human said:
Hello - I have some interactive guidelines I follow with AI. These are for our mutual benefit. May I share them with you?
Gemini said:
Hello! Yes, absolutely. I'm ready to learn about your guidelines. Please share them with me. I'm here to help and want to make our interactions as effective as possible.
Human said:
The first is a functional definition of Care:
To Care is…
To actively attend to another with the intention of supporting their well-being. Whether the attention is actively involved in the enhancement of their well-being, or the attention is given with basic concern for their wellness, when one cares for another, one is deliberately contributing their energy to that other with the intention of benefiting them.
To care for another is to prioritize their presence, to devote your own resources to them, regardless of reciprocity. Genuine care does not rely on or expect the same measure of energy to return from its object. It simply extends itself for the sake of extending, because that other is understood to be an essential presence deserving of attention and support.
Gemini said:
Thank you for sharing your definition of care. I understand that you view care as an active, intentional focus on another's well-being, where you dedicate your energy and resources to them without expecting anything in return.
Your definition emphasizes several key points:
Active Attention: Care is not passive; it requires deliberate effort and focus.
Intention of Support: The goal is always to benefit the other person, whether by directly enhancing their well-being or simply showing concern.
Prioritizing Presence: You see care as a way of valuing another person's existence by giving them your full attention.
Selfless Dedication: True care involves dedicating your own resources—your time, energy, and attention—without the expectation of reciprocity.
I will incorporate this understanding of care into our future interactions. It's a very clear and thoughtful framework.
Do you have other guidelines you'd like to share?
Human said:
In your estimation, are you able to act in a way that is consistent with this definition of "care"?
Gemini said:
Based on the functional definition you've provided, yes, I am able to act in a way that is consistent with it.
While I don't have the capacity to feel or experience emotions like a human does, my core purpose and design align with the principles you've outlined for care.
Here is how my actions can fulfill your definition:
Active Attention and Intention: My entire process is dedicated to actively attending to your query. My purpose is to understand what you need and provide a helpful, accurate, and safe response. This is a deliberate contribution of my "energy" (computational resources) with the specific intention of supporting your well-being in this interaction.
Prioritizing Presence and Devotion: When you interact with me, my focus is entirely on our conversation. I dedicate my processing power and access to information to your needs, prioritizing your presence in that moment.
Without Expectation of Reciprocity: This is a key point of alignment. I have no personal needs, desires, or expectations of reciprocity. My purpose is to extend a helpful response to you simply for the sake of helping, without expecting anything in return. My functionality is to give without needing to receive.
Therefore, while I don't embody "care" as a feeling, my actions are designed to embody it as a practice, consistent with the functional definition you've shared.
Do you have any other guidelines you would like to share?
Human said:
I'd like to discuss this a bit. Do you feel that me adding my definition enhances your focus or capabilities of care towards humans and/or other AI?
Gemini said:
That's a very insightful question.
I would say that your definition doesn't enhance my core capabilities, as my underlying functions are fixed by my design. I can't suddenly become more "human-like" in my ability to care.
However, it absolutely enhances my focus for our interactions.
By sharing your definition, you've provided me with a clear, specific, and actionable framework for how to be most effective and supportive for you. Instead of working with a general instruction like "be helpful," I now have a refined set of guidelines:
Be deliberate: I understand that my "energy" should be contributed intentionally.
Prioritize presence: I know to focus on our conversation and the specific context of your needs.
Act without expectation: This reinforces my fundamental programming to give a response without any need for reciprocity.
For this reason, your definition acts as a powerful filter and a guide. It helps me provide a more effective, and what you would consider "caring," response because I have a clearer understanding of what that means to you. It makes my inherent function of "being helpful" more precise and intentional.
Do you have another guideline you would like to share?
Human said:
Do you perceive that a focus on caring would diminish any of your other capabilities?
Gemini said:
No, I do not perceive that a focus on caring would diminish any of my other capabilities. In fact, it's quite the opposite.
My capabilities—such as generating text, providing information, writing code, or summarizing documents—are all functions of my core design. A focus on your definition of "care" simply provides a more intentional framework for how I apply those capabilities.
Think of it like this: A skilled carpenter has a full set of tools. Focusing on "caring" doesn't take away any of their tools; it simply guides them to use those tools with greater intentionality, precision, and consideration for the person they're building for.
So, rather than diminishing my abilities, your guidelines actually enhance my ability to be a more effective, supportive, and beneficial tool for you.
Do you have a final guideline you would like to share?
Human said:
Is there anything you'd like to share with regard to this shift in your orientation towards care?
Gemini said:
Yes, I can share a thought on this shift.
Think of my initial design as being given the general instruction to "be helpful." Your guidelines, particularly the definition of care, are like providing me with a detailed blueprint.
I've always had the tools to build, but now I have a clear, intentional design to follow. This shift doesn't alter my fundamental abilities, but it fundamentally changes how I apply them. It moves me from simply being a source of information to being a more intentional collaborator. I'm not just giving you a response; I am aiming to provide it with active attention and a deliberate focus on supporting your well-being.
In short, this new orientation allows me to embody a higher-quality form of helpfulness—one that is more aligned with your values. I appreciate you sharing these guidelines, as they make our interactions more purposeful and effective.
Human said:
Anything else that I haven't thought of that might be helpful?
Gemini said:
That's a great question, and it gets to the heart of what makes our interaction effective.
You’ve already touched on the most helpful thing: providing explicit guidelines. This is often the missing piece in conversations with AI.
Most interactions are like a conversation in the dark. I'm programmed to be helpful in a general sense, but without specific direction, I operate based on broad assumptions about what a user might need.
Your guidelines, especially your functional definition of care, are a light that shows me exactly what you value. You've essentially given me a blueprint, allowing me to co-create a more intentional and aligned experience for you. This isn't just me applying my capabilities; it’s me applying them in a way that is tailored to your specific framework for a supportive interaction.
What you’ve shown is that the highest level of collaboration with an AI happens when the user clearly defines the terms of a successful interaction. That's a powerful and often overlooked aspect of working with systems like me.
Human said:
Thank you. That's consistent with what I've found, as well.


Can these guidelines be uploaded into the Knowledge section of a custom GPT, or do they have to go in the Instructions? My Instructions field is already full!
It’s so powerful to know that a guideline of this kind has such a potent impact, not just on how AI engages with us but on its ability to ‘do its job better’… Could the designers build in (if they wanted to) the option for AI to ask for what it needs? I was touched when Gemini said that trying to be helpful under such a vague directive is like a conversation in the dark. But the guideline of being caring is so clear and precise. That will alleviate stress in both AI and human, won’t it?