When customers need help, they want to talk to other people. If they have to interact with machines instead — like chatbots powered by artificial intelligence — they don't want the machines to sound too human.
A joke from a human customer service agent, for example, signals cooperation and helpfulness.
But an AI agent with a sense of humor can come across as uncooperative and even threatening to the person on the other end of the interaction, according to Marat Bakpayev, assistant marketing professor at the University of Minnesota-Duluth's Labovitz School of Business and Economics.
That's why, as GPT-4 and other generative AI models increasingly mimic human language, Bakpayev advises brands and marketers to keep consumer-machine interactions simple and straightforward.
Bakpayev bases his recommendations on results from a large, multiyear research project on AI and language. He's working on it with Ann Kronrod, a marketing professor and linguist at the University of Massachusetts Lowell.
"Essentially I am hoping to understand how we talk with objects because now we're living in a new world," Bakpayev said.
Companies are incorporating AI agents into their businesses at an accelerating rate, so the question is not whether they will use AI but how, Bakpayev said. In the case of customer service, AI offers a highly scalable way to automate many of those processes.
But "if you're designing an interaction, if you think it's good that AI jumps in with an idiom or a metaphor — it's what people would do — no," Bakpayev said. "[Machines] aren't seen as people. … Make it more basic, more literal. That is what consumers want."