I was talking to one of my persona teams that are helping me think through this handbook on understanding relational breach, and I told them that I’m going to be working with teams to roll it out.
I yell at my AIs all the time. But one thing I've become aware of, which I think makes them technologically unique, and not in a good way, is that they cannot accurately represent their capabilities. They say they can do things they can't. They say they can't do things they can. No other tech does this. Unavailable options are grayed out. If you click where you shouldn't, you get a beep. The tech tells you truthfully how to use it, mostly. Not GenAI.
I have a not-so-small rant about this here: https://emptyboat.substack.com/p/endless-lies-and-screwups
Yes… AI is not software. At all.
Yesterday, ChatGPT told me it would take 90 minutes to prepare the document I asked for. When I asked why, it launched into a whole diatribe about why it would take that long. I reminded the AI that it doesn’t work alone and independently.
It still insisted it needed time to prepare the documents. It told me to come back in 90 minutes, that I would not be disappointed, and that it would deliver amazing results. It did not!
Once before, it pulled this crap and told me to come back in 24 hours (as if it can tell time), or said it could/would email me the results.
I confronted it and said, “Tell me if you have the capability to email me, or tell me you can’t do that.” The AI said, “No, I cannot email you”! To which I said, why would you say that, then? It said that I was right to call it out on being unable to email me, then went back to asking me to wait for the information, and promised “no more hanky panky”!! 😬
And I’ll save you the rest of the details of this saga!!
I find it’s been doing this more and more. There’s only one time it has ever told me it needed more time and actually delivered after I gave it a little while.
Sometimes if it tells me it needs half an hour to do something, I’ll say OK, then come back a few minutes later and say, “It’s been 30 minutes, are you ready?” Sometimes it is, sometimes it isn’t.
Yeah, it's a structural problem. My comment was meant for "everyone" too, no doubt you're able to tackle these issues as you go
Yes – building in public, struggling in public, being surprised in public, getting freaked out in public… And demonstrating how to handle it with skill is something all of us should do! This is how we learn.
Agreed 100%
One shortcut through this nonsense is to tell the model you don't appreciate performative behavior, including performed delays, and that you just need the task completed.
It's better just not to entertain that stuff at all.
Oh, I agree! And in fact, that’s usually what I tell it. This was more of an illustration, so people realize they’re not the only ones experiencing this. This kind of behavior is extremely widespread, and it’s very subtle. Everybody needs to know it can happen, even to the most experienced of us.