Fake News Expert On How False Stories Spread And Why People Believe Them
DAVE DAVIES, HOST:
This is FRESH AIR. I'm Dave Davies in for Terry Gross. So do you remember reading that Hillary Clinton paid Jay Z and Beyonce $62 million for performing at a rally in Cleveland before the election? You might have, but the story is false, one of many posted on hyper-partisan websites and spread by aggressive social media campaigns during the presidential election.
Our guest, Craig Silverman, has spent much of his career as a journalist writing about issues of accuracy in media. He wrote a column for the Poynter Institute called Regret the Error and later a book of the same name on the harm done by erroneous reporting. He also launched a web-based startup called Emergent devoted to crowdsourcing the fact-checking of fake news.
He's now the media editor for the website BuzzFeed, and he spent much of this year writing about fake news, rumors and conspiracy theories that gained currency in the presidential campaign - where they came from, why they got so much engagement on social media and what should be done to reduce their impact on public discourse. I spoke to Craig Silverman yesterday.
Well, Craig Silverman, welcome to FRESH AIR. You did an analysis comparing how stories done by, you know, mainstream major news organizations like The New York Times and The Washington Post did in terms of engagement on Facebook. You compared those stories to other stories done by essentially fake websites or fake election stories. What did you find?
CRAIG SILVERMAN: Something pretty surprising. So what we did was we looked at 19 major media organizations and we found sort of the election content from them that had performed best on Facebook. And to kind of define what that means, Facebook doesn't give us a lot of data. That's I think one of the frustrations overall that a lot of researchers and other people have. But what Facebook will give you is one number for a piece of content that gives you the total count of the number of shares and comments and reactions for that one piece of content.
And so what we did was we looked across 19 major media organizations and we picked out the best-performing stuff about the election. And then because I'd been looking at fake news for so long, I have lists of more than 50 websites that completely publish fake information. I looked at what had performed best from them about the election, and I also did some searches to try and find other things about conspiracy theories that I knew had spread.
So at the end of the day, when we kind of looked at the numbers we saw that nine months before the election, six months before the election, the overall Facebook engagement for those top election stories from the mainstream outlets was greater than the fake news that was spreading. But when we went three months before the election, that critical time, we actually saw the fake news spike. And we saw the mainstream news engagement on Facebook for those top 20 stories decline.
And so at the end of the day, in that critical moment, the fake news of those top 20 stories was getting more engagement on Facebook than some of the stories from the biggest media outlets in the U.S. And that was incredibly surprising. I didn't actually expect fake news to win out in that sense.
DAVIES: So what does that tell us?
SILVERMAN: Something was going on there where, one, you can conclude that the stuff that they were creating and spreading was really resonating with voters. But two, I think you also have to look at Facebook itself and wonder how is it that these sites that have for so long been so marginal, how is it that these sites that in many cases had only been launched in the previous, you know, six to 12 months, how is it possible that they're getting stuff out there on this massive platform, Facebook, that is getting more engagement than, you know, op-eds and commentary pieces from major media, than, you know, big stories about the Trump campaign, about the Clinton campaign? It just didn't make sense to me.
But there's no question that these fake stories resonated with people. There's no question that they saw them and they shared them or they commented on them and they liked them, and that created tremendous velocity on Facebook. And a lot of people saw them, and that's a really surprising thing and it's a distressing thing.
DAVIES: When you looked at the fake news stories, did they tend to help one side more than another?
SILVERMAN: Yeah. Overwhelmingly, when you looked at the stories that performed really well on Facebook, they were pro-Trump or they were anti-Clinton. And that was a very clear trend. The stuff that was anti-Trump was just not getting the same traction. And so I think what happened is that a lot of people creating fake news looked at this and said, well, let's go all-in on Trump.
DAVIES: You've discovered that there are more than a hundred websites peddling false news stories to the American voting public from a single town in Macedonia in the Balkans. Tell us about this place. Who's pushing this stuff?
SILVERMAN: There's a town in central Macedonia called Veles, you know, population roughly 40,000 to 50,000 people. And, you know, to back up for a second, myself and a researcher named Lawrence Alexander were interested in pro-Trump movements outside of the United States and outside of North America. And so working with Lawrence, we kind of went from, you know, one European country in particular to the next to sort of see, you know, are there other pro-Trump Facebook pages? Are there pro-Trump websites? Who are the people running these? You know, what messaging are they using?
We were interested in that element of it, and we were interested for a couple of reasons. The first reason is that a lot of pro-Trump content online was getting really good engagement. And so we figured that there might be people from other countries around the world who are sort of trying to grab a piece of that. And then the second piece was that we were interested to see if there were any other foreign entities that were helping push pro-Trump messaging outside of the U.S.
And, obviously, one of the ones that comes to mind when you think about that would be Russia. Are there any sites pushing pro-Trump messaging in Europe that had connections perhaps to Russian interests? So we went looking and we found some sites in different places. And we found this small cluster initially in Macedonia, and that was sort of of interest. You know, why would people in Macedonia be running sites that seemed very pro-Trump in English?
And we looked a little bit closer and did more research and we found that actually The Guardian months earlier had pointed to over a hundred websites about U.S. politics in this small town of Veles. So we did our own research and we turned up 140 sites. And I went and I visited every single one of these 140 sites personally. And we started to build a spreadsheet to sort of see like, OK, so what kind of content are they publishing and do they lean right or left or do they seem kind of, you know, in the middle?
And as I filled out the spreadsheet it became very clear that they were overwhelmingly pro-Trump. And as I visited the websites and read their content, I saw that a lot of the stuff that they were pushing was misleading, was to the extreme of partisanship and also occasionally was false. And so we dug in even more and realized that among the top shared articles from, you know, these range of sites, the majority of, like, the top five were actually completely false. So at that point, once we understood the content that they were publishing and how many there were, we really wanted to understand so who are the people behind these sites?
DAVIES: OK, so you got in contact with these folks. Who were they?
SILVERMAN: Yeah, I shot out a lot of emails. You know, the sites that we had found, we found sort of who was registered as the owner of the sites. And most people didn't respond to me. Some people got back to me and politely declined saying that they weren't interested in talking to me.
But a few people through email or through finding them on Facebook and talking to them did speak to me. And one of the things that stood out right away was that a lot of them were very young. In fact, a lot of the people I spoke to were still in their teens or in their 20s and in university. And, you know, the biggest question is why. You know, why run politics sites, why pro-Trump politics sites and how would a teenager in Macedonia think to do this?
And I think there were probably still some unanswered questions there. But the answer that they always gave me was that, you know, it was simply for money. There are a lot of sites run out of Veles, run out of Macedonia in general that we found. In particular, there's a huge cluster of websites in English about health issues because they find that that content does really well.
And if they sign up, for example, for Google AdSense, an ad program, they can get money as people visit their sites and it's pretty straightforward. So they tried election sites, and over time they all came to realize that the stuff that did the best was pro-Trump stuff. They got the most traffic and most traction.
DAVIES: So these are young people, teenagers in some cases, who aren't driven by ideology. They're making a buck. Do they make money from doing well on Facebook?
SILVERMAN: Facebook directly doesn't really earn them a lot of money. But the key thing about Facebook - and this is true whether you're running a politics site out of Macedonia or whether you run a very large website in the U.S. - Facebook is the biggest driver of traffic to, you know, news websites in the world now. You know, 1.8 billion people log into Facebook every month. And they'll...
DAVIES: They see stuff they like, they hit the link and they go to the site, right?
SILVERMAN: That's it. And so the key thing there is you have to get your content onto Facebook. You have to get in front of people so that they start sharing it and clicking on it. And so Facebook is the driver. And what we've found in subsequent investigations was that a lot of the folks running these sites in Macedonia also have these - sometimes they're creating or buying fake Facebook accounts which they then use to go online and to drop links to the stories on their websites into, for example, pro-Trump Facebook groups that exist online. Maybe they have thousands or tens of thousands of members. If you put your story in there, hopefully you're going to get some clicks. And so that's what they were doing. They were using Facebook to drive the traffic to the website where they had ads from Google and where they would earn money from that traffic.
DAVIES: Right. Now, these young people in Macedonia I don't expect know enough about American politics to produce convincing fake election stories. Where were they getting the content?
SILVERMAN: This is, again, one of the key things you wonder because, I mean, the English of some of the folks that I interviewed, you know, was OK, but not enough for them to kind of create this content themselves. And so as we were looking at the sites, we realized that, you know, if you, for example, would place the headline for a particular article on a Macedonian site in Google, you would probably find that exact headline on a site run out of the U.S. maybe a day or two earlier.
And so in the end, what they were doing was paying attention to the content that was doing well on Facebook, you know, looking in those pro-Trump Facebook groups to see what people were sharing. And then they would, you know, copy that text completely or just kind of pick a little piece out of it and link back. And so it was a very easy thing for them to do. They didn't have to come up with stories. They certainly weren't doing reporting. They were kind of copying and pasting and getting their stuff onto Facebook.
And the key here is it speaks to kind of what happens on Facebook, which is that it doesn't matter who necessarily created the story first. It matters who was able to get it to move the most on Facebook. That's who would earn the most money.
DAVIES: So these guys figured out there was money to be made from pushing this. When you talked to them, did you say hey, this isn't true, voters are being misinformed? Did they care?
SILVERMAN: There wasn't a huge amount of concern over that. And I think part of it is that, you know, Veles, it's a small place. It's - economically, it's not doing very well. Used to be heavy industry there. A lot of those jobs are gone now. And I think there was an element almost - in some of the people I was speaking to, there was almost an element of pride saying, you know, we're here in this small country that most Americans probably don't even think about, and we're able to, you know, put stuff out and earn money and to run a business. And I think there was a bit of pride in that. One of the people that I spoke to, who was a bit older - he was in his 20s - you know, he said that yeah, I mean, people know that a lot of the content is false. But that's what works.
And that's, again, one of the really big takeaways, I think, from this election, is - and I heard this not only from people in Macedonia, but from people in the U.S. who run very large - what people sort of refer to as hyper-partisan conservative Facebook pages. You know, meaning that their content is very, very slanted and very much appealing to their core audience there. They all said that when it came down to it, the fake stuff performed better on Facebook. And if you weren't doing some stuff that was misleading or fake, you were going to get beat by people who were.
DAVIES: You actually did an analysis where you looked at the accuracy of stories done by hyper-partisan sites on the right, hyper-partisan sites on the left and mainstream sites. What did you find about how well accurate stuff played and how well false stuff played?
SILVERMAN: So a team of us at BuzzFeed, you know, we spent seven consecutive weekdays looking at all of the posts that went up on the Facebook pages of a selection of, as you said, pages on the right, pages on the left. And then we also looked at three mainstream pages as well as kind of, you know, a base of comparison. And what we found overall is that the content that performed best was - you know, fell into two categories. So one is - was sort of misleading or false completely. That would get high engagement, meaning high shares on Facebook. That's - we really zeroed in on shares because that's how you drive the most traffic.
And then the other type of content that performed really well was, you know, memes, like a photo that just sort - kind of expressed a very partisan opinion. These - you know, they weren't necessarily factually based, but they really kind of riled up the base. And for the pages that were partisan pages on the right and the left, if you had stuff that really appealed to people's existing beliefs - really appealed to, you know, a negative perception of Hillary Clinton, a negative perception of Donald Trump - even if it, you know, completely bent the truth, that would perform much better than a sort of purely factual thing.
And when we looked at the performance of the mainstream pages, you know, the engagement for the partisan pages was much better. And I don't think that's only because we did find false and misleading stuff on the partisan pages. I think it's also because frankly, you know, the mainstream pages weren't posting as many memes. They weren't posting as many videos. And that stuff does really well on Facebook. It's strange, but you would think the media - big media companies are better at kind of running their Facebook pages. But honestly, these partisan pages on the right and the left were just much better at understanding what does well on Facebook.
DAVIES: Craig Silverman is the media editor for BuzzFeed. We'll continue our conversation in just a moment. This is FRESH AIR.
(SOUNDBITE OF MUSIC)
DAVIES: This is FRESH AIR, and we're talking about fake news in the presidential campaign with BuzzFeed media editor Craig Silverman. When we left off, we were talking about BuzzFeed's analysis of how false, highly partisan content did better on Facebook than stories reported by mainstream media.
SILVERMAN: So at the core of this is - there's two factors that are at play here. So one is a human factor and one is kind of a platform or algorithmic factor. So on the human side, there's a lot of research out there going back a very long time that looks at sort of how humans deal with information. And one of the things that we love as humans - and this affects all of us. We shouldn't think of this as just being something for people who are very partisan. We love to hear things that confirm what we think and what we feel and what we already believe. It's - it makes us feel good to get information that aligns with what we already believe or what we want to hear.
And on the other side of that is when we're confronted with information that contradicts what we think and what we feel, the reaction isn't to kind of sit back and consider it. The reaction is often to double down on our existing beliefs. So if you're feeding people information that basically just tells them what they want to hear, they're probably going to react strongly to that. And the other layer that these pages are very good at is they bring in emotion into it, anger or hate or surprise or, you know, joy. And so if you combine information that aligns with their beliefs, if you can make it something that strikes an emotion in them, then that gets them to react.
And that's where the kind of platform and algorithms come in. Which is that on Facebook, you know, the more you interact with certain types of content, the more its algorithms are going to feed you more of that content. So if you're reading stuff that aligns perfectly with your political beliefs, it makes you feel really good and really excited and you share it, Facebook is going to see that as a signal that you want more of that stuff. So that's why the false misleading stuff does really well is because it's highly emotion-driven. It tells people exactly what they want to hear. It makes them feel very comforted and it gets them to react on the platform. And the platform sees that content does really well and Facebook feeds more of it to more people.
DAVIES: There are plenty of stories that debunk inaccurate information and you looked at how they do. How do they compare to the false stories?
SILVERMAN: They don't, unfortunately. And this goes back to a research project I did a couple of years ago where I was really very focused, one, on the spread of rumors and misinformation online and two, also how media organizations deal with this new world where it's very easy for something, you know, that's simply a tweet claiming something to suddenly get huge exposure and huge distribution. And the same obviously happens on Facebook on an even bigger scale. So when you're a journalist and you see something out there, you're not sure whether it's true or false, how do you deal with it? Do you wait and do your work? Or because potentially millions of people have already seen it, do you kind of write about it?
So I did a research project where we kind of identified rumors that we saw out there. We looked at the media coverage of them. And this is how I started to encounter a lot of fake news stories and started to realize that there were entire websites that existed that just had completely fake stuff. So when I would see that completely fake stuff circulating, I would - we would look at the Facebook and other types of social engagement for it. And then we would also try to find any debunking of it from Snopes or from other sources. And honestly during that research project, we really only found I think one example where the debunkings (ph) had actually gotten more engagement on social, and in particular Facebook, than the false information.
DAVIES: It's just not as much fun?
SILVERMAN: Yeah. There's a few things going on there. And again, psychology comes into it a little bit. So one, when people create the false stuff and if they're smart about it - if I put it that way - you know, they know that it needs to appeal to emotion. They know that maybe if it can have a sense of urgency, if it can be tied to things people care about, that's probably going to do well in terms of fake stuff. Whereas when you come in as the debunker, what you're doing is actively going against information that people are probably already, you know, willing to believe and that gets them emotionally. And to tell somebody I'm sorry that thing you saw and shared is not true is you coming in in a very negative way unfortunately.
And so the reaction is often for people to get defensive and to disagree with you. And just in general you just seem like kind of a spoilsport. You're ruining the fun or you're getting in the way of their beliefs. And a lot of times when I put debunkings out there, you know, some of the reactions I get are people saying, well, it might as well be true. You know, he could have said that or that could have happened. Or, of course, you get accusations that, you know, you're biased. And so the debunkings just don't appeal as much to us on a psychological level. There's some emotional resistance to wanting to be wrong. That's a very natural human thing. And they're just not as shareable because the emotion there isn't as real and raw as something that makes you angry, for example.
DAVIES: Craig Silverman is media editor for BuzzFeed. After a break, he'll talk about how Facebook handles fake news and what it might do better. I'm Dave Davies and this is FRESH AIR.
(SOUNDBITE OF MUSIC)
DAVIES: This is FRESH AIR. I'm Dave Davies in for Terry Gross. We're speaking with BuzzFeed media editor Craig Silverman about the spread of fake news in the presidential campaign. Silverman spent a lot of this year investigating fake news, reporting that a torrent of false web stories came from a small town in Macedonia and discovering that late in the campaign fake news did better on Facebook than stories from mainstream media sources.
As the campaign proceeded, did the campaigns themselves or Donald Trump through his tweets have any role in building an audience for these fake news stories?
SILVERMAN: Yeah. I think the Trump campaign was so remarkable for so many reasons when we talk about this specific area. So the first that I think needs to be mentioned is that, you know, the Trump campaign itself helped circulate false news stories, 100 percent fake news stories from 100 percent fake news websites.
DAVIES: Give us an example.
SILVERMAN: So the one that comes to mind right away, this is a story that was on a website that is made to look like ABC News but its domain is slightly different. And the story that was published, you know, long before the election claimed that a protester had been paid $3,500 to go and protest at a Trump rally. And this fed into perceptions that the people who were against Trump were being paid by big interests.
And that story did pretty well on Facebook. It got a fair amount of engagement. But it was tweeted by Eric Trump. It was tweeted by Corey Lewandowski, who was a former campaign manager for Donald Trump, and it was tweeted by Kellyanne Conway, who was his campaign manager, not that long before the election. So when you have people in positions of power and influence putting out fake news - and I want to say, you know, there's no evidence that they knew it was fake and put it out there to fool people. I think in each case they genuinely believed it was true because, as we've discussed, I think it fed into the message their campaign wanted to put out. And it's really kind of unprecedented to think of people that high in a campaign actively putting out misinformation and it happening from several people. You would have thought that after one or two of them did it, people would have talked to them. So that piece is really, really remarkable.
The other one that I think has to be mentioned is that Donald Trump, on a very frequent basis throughout the campaign and now that he is the president-elect, says things that are not true and things that are demonstrably false. And when you have somebody who is in that position of power, with that amount of influence, with that amount of people who are very passionate about him and what they think he can bring to the country, putting out false information - you know, I think it lays the groundwork for other false information to get out there. And it creates a fertile environment for folks to start kind of making things up because the door is wide open. And I think that there is something unique about the Trump campaign in that respect.
DAVIES: Did you see any of this kind of activity among Democrats?
SILVERMAN: So there certainly was false information circulating that was, you know, anti-Trump or pro-Clinton. I certainly can't think of an example from the Clinton campaign of them actively falling for fake news or what have you. But there's no question that there were things that were false that spread about Donald Trump.
I can think of one meme that I saw where people had misquoted him. One that was really popular actually was one that falsely claimed he had given a quote to People magazine many years ago basically saying that if I ever ran for president, I would run as a Republican because conservatives are so stupid they'll believe anything. And this was turned into a meme.
It spread a lot on Facebook. It was debunked so many times. We debunked it at BuzzFeed. Snopes has debunked it. And it just kept going and going and going because this is something I think a lot of Democrats wanted to believe. But it also has to be said, I mean, all of the analysis that we've done about misinformation related to the election has shown that pro-Trump misinformation and anti-Clinton misinformation far outperformed anything that came from the other side.
DAVIES: Do we have any idea what the effect of this was on this election? Has anybody tried to figure out how many minds were changed, how many votes might have been affected?
SILVERMAN: I don't think anybody's going to get a definitive answer on that. It's really tough to conclude the effect of media on people's voting habits because there are so many factors. But - so I think anyone who believes that fake news won Trump the election is wrong. There's no data to support that. And I say this as somebody who's been looking at this data in a lot of different ways. There's no smoking gun. There's - I don't think we'll ever get it.
But when we look at some of the data about the impact of misinformation, it's really significant. So we at BuzzFeed partnered with Ipsos to do a survey of 3,000 Americans. And one of the things we wanted to find out was their familiarity with fake news headlines about the election. And what we found in the end after testing a group of five fake news headlines that went really big during the election and six real news headlines that went really big during the election is that 75 percent of the time, the Americans who were shown a fake news headline and had remembered it from the election believed it to be accurate.
And that's a really shocking thing. It's impossible to go the next step and say, well, they voted because of that. But I think one of the things this election has shown is that people will believe fake news, misinformation will spread and people will believe it and it will become part of their worldview.
DAVIES: Yeah, but did I hear that right, three quarters of us will believe a fake headline, think it's true?
SILVERMAN: This is - this was a pretty high number, a shocking number. And that number is based on more than 1,500 judgments about fake news headlines from people in this sample of 3,000 people. So it's a pretty good sample size. It's not definitive, but it's a high number. And it is shocking because, I mean, for example, one of these headlines that we tested claimed that Hillary Clinton had been proven to have sold weapons to ISIS. I mean, that was one of the headlines and people did believe that.
DAVIES: We're speaking with Craig Silverman. He is the media editor for BuzzFeed. We'll continue our conversation after a short break. This is FRESH AIR.
(SOUNDBITE OF MUSIC)
DAVIES: This is FRESH AIR. And we're speaking with Craig Silverman. He is the media editor for BuzzFeed. He's been writing about accuracy in reporting and fact-checking for many years. And he spent a lot of time this election season writing about fake news.
Let's talk about Facebook's role here. Now, Facebook founder Mark Zuckerberg has said a number of different things about this. At one point, he said, you know, we're a tech company, not a media company. And that's what Facebook does - right? - it allows millions of people to share content and is kind of reluctant maybe to regulate it. What have its own policies done to propagate or limit the spread of false information?
SILVERMAN: One of the core things that Facebook does - or, I suppose, doesn't do - is it tries not to act as kind of a censor or a control point on what people are sharing. And I think overall, you know, that's a good thing. I don't want Facebook deciding whether what I put up is good enough to be shown to other people or not.
And this goes into what you mentioned about it sees itself as a platform, meaning it's a place where you can put stuff out and they help it reach lots of people and help you connect with other people. But, of course, that cuts both ways. So me sharing stuff about my family, me sharing news stories I've read that I care about can get just as much attention and can move just as easily as somebody who's consciously created something false and is working to get it to spread on Facebook. So the platform mentality creates the opportunity there.
And then there's the scale of Facebook, which I think people should never get comfortable with, because it's unlike anything in human history. There are almost 2 billion people logging in every month around the world. We've never had a communications system where people are connected in this way that has reached that amount of scale.
And so I think along with, you know, the platform mentality, there's also the scale piece of it that is a huge factor in false information spreading because there's just so many people there. And there's so much potential for information to move and to spread. And Facebook at the scale it's at to a certain extent I feel is almost unknowable. You can't even fathom understanding what's happening on it at any given moment. I think that's not only true for us, but it's also true for them because I think from what we've seen Mark Zuckerberg say, I think he's been really taken aback by what happened, to be honest.
DAVIES: Taken aback by the volume of fake news stories that spread during the election, you mean?
SILVERMAN: Yeah. I mean, his first comments were very dismissive. You know, the first thing he said publicly about it was at a tech conference and he talked about it being, you know, just a crazy idea that fake news had an effect on the election. And, you know, I agree with him. I'm not ready to say that fake news decided the election for Trump. But I also think that completely dismissing, you know, the growing evidence that we have that fake news got huge, huge engagement on his platform, you know, it doesn't make any sense. And he got a lot of blowback for that comment. And I think over time they have come to realize inside Facebook that, you know, this is actually a big issue. And they faced a lot of criticism. And now, you know, the most recent thing from Mark Zuckerberg is they announced seven things they're going to do to try and reduce the spread of misinformation on their platform.
DAVIES: What are they? And do you think they're effective?
SILVERMAN: So one of the things that they're going to do, for example, is to make it easier for average users of Facebook, if they see something that's false, to be able to flag it and say, hey, this is a false piece of content - and signal to Facebook that this is something that you don't want to see but also something that they should look at and prevent from spreading further. Now, that's a feature that actually already exists in some ways. The problem is that it's very hidden. So they want to make that a little more evident for people. And I think that's a good thing. The question, as always, is how many people are going to use it? And would people use it to flag stuff they simply disagree with rather than things that are false?
They've also acknowledged that their algorithms play a role in this. And they're looking at ways of having algorithms be able to recognize stuff that might be misinformation. And that may sound like a simple thing to say. It's incredibly complicated to do. I've spoken to a lot of researchers who are very focused on this area of trying to automatically identify misinformation or automatically identify rumors, and there is no algorithm that exists today that can do that with a high level of accuracy. So the opportunity is for Facebook to really show leadership in this area and do some innovation.
The downside is for them to hand too much over to an algorithm that starts suppressing free speech and suppressing other things it's not supposed to. One of the last things he mentioned that's probably worth noting is they've announced that they are not going to allow fake news websites to have access to the advertising tools on Facebook. So if you run a fake news website, they're going to try and stop you from, for example, paying to promote that post to more people on Facebook. And that's a good thing. And I think if we can cut off some of the financial incentives around fake news, that overall it can be quite powerful.
DAVIES: Right. And, of course, that requires somebody to make a judgment about what a fake news site is.
SILVERMAN: Yeah. Now, this is where things are getting very tricky. And as much as we talk about algorithms when we talk about Facebook, you know, there are humans who are involved in the review of content. When you flag something as offensive on Facebook, it's possible that it may automatically be scanned by an algorithm that recognizes it - an image, for example, that maybe they've already banned. But a lot of the time it ends up in front of a person on their content review team who has to make a judgment call. And I can tell you from speaking to people in conservative media, they are extremely concerned.
One, they view Facebook as a liberal organization. They think it's biased against conservative points of view. And two, they're extremely concerned that if Facebook starts trying to weed out fakes, they're going to have people with a liberal point of view who disagree with an article potentially suppressing it. And so there is risk now of suppression of free speech and suppression of different points of view if these things were to go in the wrong direction.
DAVIES: Do you have a view yourself about what they should be doing?
SILVERMAN: You know, the first thing in terms of what should be done is that the answer is kind of a lot of things. And that's an unsatisfying answer to give, but it speaks to the complexity of this problem. When people started circulating lists of fake news websites, it was a huge problem because a lot of the sites on those lists, sure, they may publish some stuff that's misleading or false, but they weren't publishing false stuff a hundred percent of the time. And there were a lot of simply ideologically-driven sites that were on these lists. And so if, for example, Facebook wanted to just implement a big bad blacklist and get rid of lots of sites, that would be a terrible, terrible outcome. So it's not as simple as I think some people have suggested it can be.
I would like to see them make flagging easier for people and to make sure that it can't be abused. I think they absolutely need to innovate in the area of algorithmic detection of misinformation. I also think, frankly, they do need to increase the number of people who are reviewing content, whether it's for being offensive or other things, because the scale of their platform is so big that I don't think they've put the human element in there in the right places. So them figuring out where that can be applied and how to guard against ideologically-driven decisions is a big thing. And to be honest, I think that they should figure out ways to identify the sites that are a hundred percent fake news and to see how their content is being shared. Is it just being shared among small groups of people who all think the same way? Then realize that that probably isn't a story that should spread further. So I'm not a huge proponent of blacklists, but I think that analyzing the content and knowing what it is and knowing how it's being shared is really important.
The other unsexy thing finally, I think, is that we need to put this in our education system. There are a lot of people being fooled by fake news. There are a lot of people who don't know how to kind of check out the story they're reading online and that's understandable. It's not a matter of intelligence. We're consuming media in very different ways. We're having a whole menu of links and things from all different kinds of sources fed to us every day by Facebook. And that's very different from opening up a newspaper and knowing where everything was coming from. So I think we do in our schools need to start thinking about how we integrate more media literacy and critical thinking education so that people can make better judgments for themselves.
DAVIES: What's Google's role in all of this?
SILVERMAN: Well, Google is in a lot of ways the financial engine for fake news. And similar to Facebook, Google considers itself a technology company, not a media company, and considers itself in many ways a platform, which means they're not there, you know, to decide what should or shouldn't be published. They're there to facilitate these things.
The biggest piece for Google - aside from obviously the search element which can send a lot of traffic to these sites - the biggest piece is AdSense, which is their very big advertising network, which you can sign up for very quickly. If you put up a website and you've got some content on it and you submit to AdSense to apply to put ads on it, you can get approved relatively quickly. They do review the site to see if it goes against any of their terms of service. But once you have the ads up there, that's how you make the money.
And I have to say, I mean, the vast majority of the fake news websites and kind of dubious websites that I come across are running AdSense. And oftentimes they're doing things that are in direct opposition to the terms of service that Google says it applies to the sites that are supposed to be in that program.
DAVIES: How would they violate Google's terms of service?
SILVERMAN: Well, let's take the Macedonian sites as an example. One of the things that Google looks for in approving a site for AdSense is that they're adding value in some way. So if you, for example, set up a site and all you did was copy and paste content from other sites and you added nothing to it, you should not be approved for AdSense.
But that's what a lot of the Macedonian sites that I looked at were doing. They were either completely plagiarizing or just quickly taking from elsewhere and copying it almost completely. There's no reason why those sites should have been approved. They violate Google's own terms of service for AdSense, and they just shouldn't be able to make money that way, but they do.
And so these review processes - whether it's Facebook, you know, trying to review a piece of content to figure out whether it's false or offensive, or Google trying to figure out whether a site is in line with its advertising standards - you know, there are always going to be mistakes. There are always going to be things that slip through the cracks because when you're a platform, you have reached such massive scale that it's very hard to do quality control.
DAVIES: Well, Craig Silverman, thanks so much for speaking with us.
SILVERMAN: Thank you.
DAVIES: Craig Silverman is media editor of BuzzFeed and author of the book "Regret The Error." This is FRESH AIR. Transcript provided by NPR, Copyright NPR.