© 2025 WYSO
As Iran and Israel fought, people turned to AI for facts. They didn't find many

AILSA CHANG, HOST:

It can be hard enough to figure out what's happening during an armed conflict. As NPR's Huo Jingnan and Lisa Hagen report, AI is adding more challenges. It's generating ever more realistic video, and AI chatbots are helping create and spread false information.

HUO JINGNAN, BYLINE: In the first days after the Israeli military's surprise airstrikes on Iran, this video started circulating on X.

(SOUNDBITE OF ARCHIVED RECORDING)

AI-GENERATED VOICE: (As unidentified newscaster, speaking Azeri).

HUO: The narration is in Azeri, and the video is made to look like a newscast. It shows what looks like drone footage of a bombed-out airport it says is in Israel.

LISA HAGEN, BYLINE: But it's not real. This video is AI-generated. On X, one post of the video got close to 7 million views. In the comments, users try to confirm whether the video was authentic by asking Grok, the integrated AI chatbot on X. Emerson Brooking is with the Digital Forensic Research Lab. It's part of the nonpartisan policy group the Atlantic Council.

EMERSON BROOKING: What we're seeing is an AI mediating the experience of warfare.

HUO: He says since its invention, mass media has been shaping public opinion about war, conflict and politics. Today, hyperrealistic AI content and chatbots are adding new dimensions to these older dynamics. And Brooking says it makes sense that people are drawn to it.

BROOKING: There is a difference between experiencing conflict just on a social media platform and experiencing it with a conversational companion who's endlessly patient, who you can ask about the different sides of the conflict, what really happened, etc.

HAGEN: But just because you can ask a chatbot about a war doesn't mean you should trust its response.

HANY FARID: I don't know why I have to tell people this, but you don't get reliable information on social media or an AI bot.

HAGEN: Hany Farid is a professor who specializes in media forensics at the University of California, Berkeley.

FARID: First of all, it's not what it was designed for. Chatbots aren't designed to analyze images for real versus fake.

HAGEN: As attacks between Israel and Iran picked up, users on X who asked Grok if photos and videos in their feeds were authentic got contradictory answers.

HUO: At NPR, we tried running similar queries on OpenAI's and Google's chatbots, sending them images to fact-check. While they got some things right, they still made mistakes. Another chatbot, from the company Anthropic, said it couldn't authenticate the images one way or the other.

HAGEN: Chatbots can be helpful. Farid says they're part of his toolkit, but he's a forensics professional who always double-checks what they spit out.

FARID: You have to understand - what can it do? What can it not do? And if you don't know, you're just asking to be lied to.

HAGEN: Which can be one way to satisfy the appetite for information during a war or a fast-moving news event. It's the same urge that invites content creators to flood timelines with sensational media in these moments.

HUO: Now, tech companies don't share much data about how often people use AI chatbots to seek out news. Darren Linvill is a Clemson University professor who studies how states like China, Iran or Russia use digital tools for propaganda.

DARREN LINVILL: They're not doing anything that they weren't doing before. They're just doing it with fewer people and doing it at a higher volume.

HUO: Linvill says decades ago, foreign influence campaigns could take years to bear fruit. Today, he says, generative AI can do this in days or even hours.

HAGEN: This technology helps with all kinds of things - coming up with false narratives, memes or even filling out an entire fake news website. But the most effective way these messages actually reach a willing audience is still through prominent people or influencers that these state actors sometimes pay for.

LINVILL: They spread a lot of bread on the water, and some percentage of that picks up and becomes a prominent part of the conversation.

HAGEN: Whether it comes to propaganda or looking for information in uncertain moments, the researchers we spoke to said the most potent ideas are the ones that confirm what people already want to believe.

Lisa Hagen...

HUO: ...And Huo Jingnan, NPR News. Transcript provided by NPR, Copyright NPR.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.

Lisa Hagen
Lisa Hagen is a reporter at NPR, covering conspiracism and the mainstreaming of extreme or unconventional beliefs. She's interested in how people form and maintain deeply held worldviews, and decide who to trust.