AI CEO proud of chatbot that convinced woman to euthanize her dog

There are numerous reasons why we should not trust AI chatbots to provide health advice.

These chatty assistants tend to make things up with surprising confidence, which can cause all sorts of chaos. Case in point: a recent study found that OpenAI’s ChatGPT was remarkably bad at arriving at the correct diagnosis.

The same goes for the health of our four-legged companions. In an essay published this year in the literary magazine n+1, screenshots of which went viral on social media, author Laura Preston recounted an incredibly strange scene she witnessed at an AI conference in April.

During a talk at the event, Cal Lai, CEO of animal health startup AskVet, which launched a ChatGPT-based “animal health answering engine” called VERA in February 2023, recalled the bizarre “story” of a woman whose old dog had diarrhea.

The woman is said to have asked VERA for advice and received a disturbing answer.

“Your dog is at the end of his life,” the chatbot replied, as quoted by Preston. “I recommend euthanasia.”

The woman, according to Lai’s story, was clearly distressed and in denial about her dog’s condition. As Preston recalled the incident, VERA then sent her a “list of nearby clinics that could do the job” because it “knew the woman’s whereabouts.”

Although the woman did not respond to the chatbot at first, she eventually gave in and had her dog put down.

“The CEO looked out at us, satisfied that his chatbot, through a series of escalating tactics, had convinced a woman to end her dog’s life when she had never wanted to,” Preston wrote in her essay.

“The crux of this story is that the woman forgot she was talking to a bot,” Lai told the audience, as quoted by Preston. “The experience was so human.”

In other words, the CEO was celebrating the fact that his company’s chatbot had talked a woman into ending her dog’s life, an episode that raises several pressing ethical questions.

First, did the dog really need to be put down, given how unreliable these tools can be? And if euthanasia really was the best course of action, shouldn’t that advice have come from a veterinarian who knows what they’re doing?

Futurism asked AskVet for comment.

Lai’s story highlights a disturbing new trend: AI companies are racing to replace human workers, from programmers to customer service representatives, with AI assistants.

And as more and more AI healthcare startups enter the market, experts fear that generative AI in the sector could pose significant risks.

In a recent study, researchers found that chatbots still tended to “produce harmful or persuasive but inaccurate content,” which “requires ethical guidance and human oversight.”

“Furthermore, a critical review is needed to evaluate the necessity and justification of the current experimental use of LLMs,” they concluded.

The story in Preston’s essay is a particularly glaring example, especially considering that pet owners are responsible for the health of their beloved companions and, ultimately, for the life of another living being.

Meanwhile, screenshots of Preston’s essay sparked widespread outrage on social media.

“I’m signing off again, this is unironically one of the worst things I’ve ever read,” wrote one Bluesky user. “No words for the hate I’m feeling right now.”

More on AI chatbots in healthcare: ChatGPT is an absolutely horrible doctor
