By Dave Wright, Workflow contributor
When Alan Turing, the father of modern computer science, visited Zurich, he frequented the Café Bar ODEON. There he met anti-apartheid activist Nelson Mandela, who struck up a conversation with him about the potential of technology to bridge gaps or deepen divisions. The two became fast friends.
A lovely story, except none of that actually happened. The entire colorful account was conjured up by ChatGPT in its answer to the prompt: “Describe in a scene how Alan Turing met Nelson Mandela.”
Generative AI bots like ChatGPT make up stories, or “hallucinate,” all the time. But it’s easy to believe they’re telling the truth. After all, this is what generative AI excels at: giving us logical, coherent, and personable answers to questions, even if the answers aren’t true. As a result, most people say they trust the content that generative AI produces, according to a poll by Capgemini.
But it might not be long before the public’s trust wears thin. Already, we’ve seen high-profile mistakes and bad behavior, sometimes with serious consequences. Earlier this year, ChatGPT accused a law professor of sexual harassment, citing an article in The Washington Post that didn’t exist. In Australia, a mayor is threatening to file the first defamation lawsuit against OpenAI, ChatGPT’s creator, unless it corrects the bot’s false claims that he was imprisoned for bribery. And Google’s Bard AI generates false and harmful narratives more often than not, according to the Center for Countering Digital Hate.
To use generative AI productively at work, we need to be able to trust the bots. But have the bots earned it?