Using an AI chatbot or voice assistant makes it harder to spot errors


Voice assistants provide information in a casual way

Edwin Tan/Getty Images

The conversational tone of an AI chatbot or voice assistant seems like a good way to learn about and understand new concepts, but it may actually make us more willing to believe inaccuracies than when the same information is presented as a static, Wikipedia-style article.

To investigate how the format in which we receive information changes how we perceive it, Sonja Utz at the University of Tübingen, Germany, and her colleagues asked about 1200 participants to engage with one of three formats.
