Humor Section!

Not sure how they are in your neck of the woods, but here we get disclaimers when the chatbot opens, stating "This session will be terminated if our staff feel threatened" etc.

How can a chatbot feel threatened? They have no feelings. I understand if you're speaking to a human, or if it's recording your responses... but only to a degree.

They have to know it enrages people to feel ignored, or to get the same response over and over, to the point of throwing their device through the window. They set it up. If they feel threatened, maybe they should find ways to become more helpful... or less sensitive... lol
 

You know, I get what you're getting at, but there is another side.

Many people interact with AI Chatbots, all over the place, as though there were another human on the other end. What we treat as acceptable by social convention is taught. In my opinion, it would be a very, very bad idea to have AI Chatbots react neutrally to behavior that humans would not.

If you're a front-line customer service or tech support representative, on the phone, you are not (and should not be) expected to sit and accept an unhinged rant with a smile. The customer is not only not always right, they're frequently wrong. You do have to put up with "testy," but you don't have to put up with "abusive."

It's entirely appropriate for AI Chatbots, designed to mimic humans, to mimic human reactions to abusive treatment. If they don't, you can be sure abusive treatment will become more "normalized." That's the last thing we need.
 
My comment stands. If they want better reactions from people, they can design the chatbots better. AI is something I'm totally against [unless I'm watching silly AI cat and dog videos], and it will never replace a human. I understand why they do it, but I'm against that as well. Replacing people with robots is never going to be okay with me.
 
"If you're a front-line customer service or tech support representative, on the phone, you are not (and should not be) expected to sit and accept an unhinged rant with a smile."
I totally agree, and I do calm down once communication with a "real" person is established. It takes only a few seconds to greet them pleasantly. However, I do express my outright hatred of chatbots.
Companies always record your conversation "for training and other purposes," so I ask them to bear with me while I rip into the company for using chatbots.
Most of the people I'm talking to hate them as well, but they have no choice in the matter.

Not sure if the company would even care about my opinion because AI saves them "squillions."

Anyway, let's get back to the "humor" thread.
 