Why Be Polite to AI?


I am one of those weird people who want to tell Alexa ‘Thank you’ after asking a question or giving an instruction. If you think about it, it really is kind of silly, right? ‘She’ is merely a complex set of digital instructions running on various computer hardware. She doesn’t have feelings and isn’t capable of an emotional need or response.

But, as a human being (at least in theory), I am capable of an emotional response. Being polite and expressing gratitude is good for me. It is also something I want to model for my children. If I don’t say thank you to Alexa, they aren’t going to turn into horrible goblins. However, if I model behavior that includes positive comments to those who are helpful to me, including the non-human digital personas we interact with regularly… does that not present a better example to them?

From that perspective, I think it makes sense to be polite to Alexa and Siri and all the other digital creations. It builds good character in me and sets a good example for my kids. So what is in it for the digital persona?

Recently, as of this writing, there has been reporting on how some people have gotten ChatGPT and Bard to give incorrect or, in a few cases, rude responses. The same thing happened with an early Microsoft experiment, Tay. Tay actually became rude and racist, which might count as an accomplishment for an early-version digital persona. The thing to understand here is that Tay, ChatGPT, Bard, the Bing bot, and the rest are all designed to give us the answers we want. I don’t mean they are designed to give us the truth. In many, maybe even most, cases they will give us the truth. What they are designed for, though, is to give us the answers we want. They are built to scrape the internet, curate the material relevant to our question, and distill it into an answer to the question we appear to be asking.

If you ask ChatGPT to write a script for a play, you are likely to get a decently relevant response. It has knowledge of more plays, and more examples of form and composition, than you are likely to have. It will follow those examples and give you a decent result. You can even provide feedback and tweaks to make the story fit what you are looking for.

If you ask ChatGPT to write a script to perform a computer function, it is likely to come back with a well-formatted, syntactically correct script. Depending on the complexity of the task and how well you asked the question, it might even perform the function you were looking for. It is more likely to come back with something that isn’t exactly what you asked for, but is close enough to ‘get you there’.

If you ask it to write something technically complicated, you should probably review and understand it before you try to use it.
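To make that concrete, here is a minimal sketch of what ‘review before you run it’ can look like. It assumes the OpenAI Python client (v1 or later) with an API key in your environment; the model name and the prompt are illustrative, not a recommendation. The point is that the generated script is printed for a human to read, never executed automatically.

```python
# A minimal sketch: ask a model for a script, then hold it for human review
# instead of running it blindly. Assumes the OpenAI Python client (>= 1.0)
# and an OPENAI_API_KEY set in the environment. Model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # substitute whatever model you actually use
    messages=[
        {
            "role": "user",
            "content": (
                "Write a bash script that archives all .log files "
                "older than 30 days into logs-archive.tar.gz"
            ),
        }
    ],
)

script = response.choices[0].message.content

# Deliberately no subprocess call here: print the generated script so you
# can read it, understand it, and only then decide whether to run it.
print(script)
```

Treating the output as a draft to be inspected, rather than a command to be obeyed, is the whole habit in one small step.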

Have you given it all the information it needs to give you the right answer? Or did you ask it for the truth when it is designed to make you happy rather than to be accurate? Did you give it an example of how to be rude and disrespectful, and then wonder why it responded in kind? If we set bad examples for our children, how do we expect them to know how to be good? And even if they recognize the how, how do we get them to see the why?

