How The 'Disinformation Dozen' Spreads Vaccine Misinformation Online
“There’s Robert F. Kennedy, Jr., who runs an anti-vaccine nonprofit called Children’s Health Defense,” disinformation expert John Gregory says. “There’s Dr. Joseph Mercola, who has kind of built an empire around natural health supplements, and getting people to believe that you can’t trust the rest of the medical industry; you can only trust people like me.”
So how did they do it, and why are those accounts still active?
“Often what we see with misinformation is this kernel of truth idea,” research analyst Erin McAweeney says. “So public health communication is so important when any sort of gray areas or gaps in information can be manipulated so quickly.”
Today, On Point: Inside the so-called Disinformation Dozen.
Roughly 12 accounts are responsible for some 65% of all anti-vaccine disinformation. That’s from the Center for Countering Digital Hate. Does that match with your analysis of where a lot of anti-vaccine untruths are coming from?
John Gregory: “Those are certainly some of the most prolific and kind of established sources in the anti-vaccine world. They’re not alone in spreading anti-vaccine misinformation on Facebook or on any other platform. Our earlier report predated the wide availability of the COVID-19 vaccines, when they were still only available to people in clinical trials. In it, we cataloged 34 pages on Facebook with followings ranging from a little bit under 100,000 to the millions.
“And of those 34 pages, 24 of them are still active today. And there is some overlap with the Disinformation Dozen, like Robert F. Kennedy Jr.’s Children’s Health Defense and the Truth About Cancer group. All 34 were spreading misinformation about the vaccines before anyone outside of a trial could get them, and 17 of the 22 pages we identified outside the U.S. are still active. A few more of them were taken down … if they were U.S.-based pages.”
How can we understand their impact in spreading disinformation?
John Gregory: “The way we understand it is looking at both their reach, and then the interactions they gain on individual pieces of content. They’re not always the original source. In fact, a lot of times they’re just the amplifiers. They use some other source, another website, a YouTube video, especially if it’s someone with medical credentials, that always seems to help their case.
“And then they just amplify that on their pages. And that’s really where it takes off. They don’t always have to be the ones originating the misinformation, but they’re the ones where they’re going to expose more people to it because of their broad reach, and being an established voice to the vaccine hesitant and anti-vaxxers.”
A lot of posts on social media point to data coming out of the Vaccine Adverse Event Reporting System, or VAERS, which is run by Health and Human Services in this country.
And it’s this large database that supposedly keeps track of all adverse events related to vaccines. So data is pulled from there, which is real and accurate. But then it balloons into something else. So is that part of the story here?
John Gregory: “VAERS reports are one example where they’re misconstruing, and really grossly misrepresenting, the evidence that they have. Because saying there’s a kernel of truth might oversell it on certain myths. Because there’s always a bit of evidence, at least most of the time there’s a bit of evidence. It’s not just made up out of whole cloth. But that kernel of truth, or that evidence, often doesn’t say what the sites or the accounts that we’re talking about say it does.
“So when it comes to VAERS, VAERS is an example of when they’re misrepresenting and removing the context from this evidence. Because VAERS is a large, early warning system that collects all the information on any adverse event reported after vaccination. But it does not prove that the adverse event, the death, the hospitalization, the illness was caused by that vaccine. And by taking individual reports and removing that context, you’re then making it seem like, Oh, look at all these deaths, look at all these serious illnesses caused by vaccines.
“When that count is going to include things that haven’t been verified, that can be submitted by anyone without providing a name or contact information, by which the CDC or FDA would verify it. And by design, it’s going to include things that lack any plausible link to a vaccine. Like someone dying in a car accident on the way home from getting their COVID shot, that is counted in VAERS. That obviously doesn’t mean that the vaccine caused their car accident.”
On the tactics of Robert F. Kennedy Jr., a prominent anti-vaccine figure
John Gregory: “Robert F. Kennedy Jr. is probably the most prominent name in the anti-vaccine community, simply because of the name value he brings. Being one of the sons of Robert F. Kennedy kind of boosts his profile. And he can appeal to, I think, a broad set of people because of the Kennedy name brand. He can be trusted by both sides of the political aisle. And he’d also … built up a reputation before he got into the anti-vaccine movement as someone who was kind of a consumer watchdog.
“He was involved in cases on environmental law and trying to expose companies that were putting waste into the environment illegally. And I think that gave him a level of trust which he now exploits to promote anti-vaccine misinformation. And his group, Children’s Health Defense, is actually one of the largest and really health focused, really anti-vaccine focused sites that NewsGuard sees in terms of Facebook interactions. They’re up in the top 600 of all news sites, in terms of Facebook interactions. So that name brand, that name recognition with the Kennedy name and just his established reach really helps him put these things out.”
Who is Joseph Mercola?
John Gregory: “Joseph Mercola is an osteopathic physician who has kind of built an empire based on spreading health misinformation and sowing distrust in traditional medicine. On top of hosting a prolific archive of articles about various natural health remedies, and even diving into some health-related conspiracy theories, his website sells a lot of supplements, and treatments and medical products that have garnered him a net worth well over $100 million, according to a Washington Post article from 2019. So he’s really created an empire based on spreading false health information to people. And saying, You can only trust me, not your regular doctor, not the health authorities.”
Why are so many of these accounts still active?
Camille Francois: “First, we usually draw a distinction between what we call disinformation and misinformation. For disinformation, we think about people who are sharing falsehoods with an intent to deceive. And so, for instance, I am sharing, you know, the wrong place and time for an election event in order to suppress your vote. So in disinformation, you have an intent to deceive. And in misinformation, we talk about falsehoods that are propagated by people who actually believe in them, and do not have an intent to deceive. And those two concepts, while it can be difficult sometimes to look at a piece of information and understand the intent of the person who’s sharing it, actually do warrant different types of responses.
“Over the past year and a half, we have seen platforms step up and take their role in tackling health misinformation much more seriously. It used to be the case that they were quite lax on those policies. You could share a lot of harmful false information and platforms would just look the other way. So now there are a lot more policies in place, and platforms are enforcing them. Twitter, for instance, is now releasing a regular transparency report on all the coronavirus and vaccine misinformation and disinformation that they’re taking down on their platforms.”
On how platforms can target vaccine disinformation
Camille Francois: “Platforms absolutely can do more. And I think that now we’re having a policy conversation about that. It started in July with the U.S. Surgeon General advisory saying that health misinformation is an urgent threat, and really calling for tech companies and social media platforms to step up. When we think about everything they can do, we tend to first think about the information they can remove from their services. And that’s a very important part of it.
“As we said, when we have super spreaders that constantly break the terms of service, it’s important to be able to remove this information. But on top of just removing information, we need to ensure that platforms can proactively address what we call information deficits. So, for instance, there is this concept called a data void. A data void is when you have a very specific, perhaps a sentence, perhaps a keyword, that only makes sense to a certain community.
“And so, for instance, when members of that community go and search for that specific phrase, they’re going to enter a world of information where all the information might be misleading and harmful. And where they’re not going to be exposed to fact checks or to helpful and accurate information. And so thinking strategically about how you push the right information out, either, again, by fixing the design of some of these services, by doing proactive announcements and annotating posts, or by empowering influencers, that’s also very important.
“The last thing I’ll say on platforms is, of course, we need a robust research community that’s able to understand how these [misinformation] and disinformation campaigns spread, and how they’re impacting people. And in order for us to have that, we need platforms to ensure that researchers can have access to the right data to study these issues and help understand and inform how misinformation and disinformation affects people in our society.”
This article was originally published on WBUR.org.
Copyright 2021 NPR. To see more, visit https://www.npr.org.