It is the never-ending battle for YouTube.
Every minute, YouTube is bombarded with videos that run afoul of its many guidelines, whether pornography or copyrighted material or violent extremism or dangerous misinformation. The company has refined its artificially intelligent computer systems in recent years to prevent most of these so-called violative videos from being uploaded to the site, but continues to come under scrutiny for its failure to curb the spread of dangerous content.
In an effort to demonstrate its effectiveness in finding and removing rule-breaking videos, YouTube on Tuesday disclosed a new metric: the Violative View Rate. It is the percentage of total views on YouTube that go to videos violating its guidelines, counted before those videos are removed.
In a blog post, YouTube said violative videos had accounted for 0.16 percent to 0.18 percent of all views on the platform in the fourth quarter of 2020. Or, put another way, out of every 10,000 views on YouTube, 16 to 18 were for content that broke YouTube’s rules and was eventually removed.
“We’ve made a ton of progress, and it’s a very, very low number, but of course we want it to be lower,” said Jennifer O’Connor, a director at YouTube’s trust and safety team.
The company said its violative view rate had improved from three years earlier: 0.63 percent to 0.72 percent in the fourth quarter of 2017.
YouTube said it was not disclosing the total number of times that problematic videos had been watched before they were removed. That reluctance highlights the challenges facing platforms, like YouTube and Facebook, that rely on user-generated content. Even if YouTube makes progress in catching and removing banned content — computers detect 94 percent of problematic videos before they are even viewed, the company said — total views remain an eye-popping figure because the platform is so big.
YouTube decided to disclose a percentage instead of a total number because it helps contextualize how meaningful the problematic content is to the overall platform, Ms. O’Connor said.
YouTube released the metric, which the company has tracked for years and expects to fluctuate over time, as part of a quarterly report that outlines how it is enforcing its guidelines. In the report, YouTube did offer totals for the number of objectionable videos (83 million) and comments (seven billion) that it had removed since 2018.
While YouTube points to such reports as a form of accountability, the underlying data is based on YouTube’s own rulings for which videos violate its guidelines. If YouTube finds fewer videos to be violative — and therefore removes fewer of them — the percentage of violative video views may decrease. And none of the data is subject to an independent audit, although the company did not rule that out in the future.
“We’re starting by simply publishing these numbers, and we make a lot of data available,” Ms. O’Connor said. “But I wouldn’t take that off the table just yet.”
YouTube also said it was counting views liberally. For example, a view counts even if the user stopped watching before reaching the objectionable part of the video, the company said.
QAnon, the right-wing conspiracy theory community, had another bad day on Thursday.
Following the letdown of Jan. 20 — when, contrary to QAnon belief, former President Donald J. Trump did not declare martial law, announce mass arrests of satanic pedophiles and stop President Biden from taking office — some QAnon believers revised their predictions.
They told themselves that “the storm” — the day of reckoning, in QAnon lore, when the global cabal would be brought to justice — would take place on March 4. That is the day that U.S. presidents were inaugurated until 1933, when the 20th Amendment was ratified and the date was moved to January. Some QAnon believers thought that it would be the day that Mr. Trump would make a triumphal return as the nation’s legitimate president, based on their false interpretation of an obscure 19th century law.
Law enforcement agencies, worried about a repeat of the Jan. 6 riot at the Capitol, took note of QAnon’s revised deadline and prepared for the worst. The Department of Homeland Security and the F.B.I. sent intelligence bulletins to local police departments warning that domestic extremist groups had “discussed plans to take control of the U.S. Capitol and remove Democratic lawmakers.” And the House of Representatives canceled plans to be in session on Thursday, after the Capitol Police warned of a possible QAnon-inspired plot to stage a second assault on the Capitol.
But the Capitol was quiet on Thursday, and QAnon supporters did not erupt in violence. Mr. Trump remains a former president, and no mass arrests of pedophiles have been made.
Even before their latest prophecy failed, QAnon believers were divided about the movement’s future. Some movement influencers who originally promoted the March 4 conspiracy theory had walked back their support for it in recent days, insisting it was a “false flag” operation staged by antifa or other left-wing extremists in order to make QAnon look bad.
On Thursday, as it became clear that no storm was underway, some QAnon believers defiantly maintained that there was still time for Mr. Trump to stage a coup and take office. One Telegram channel devoted to QAnon chatter lit up with false claims that Bill Gates, Dr. Anthony S. Fauci, Representative Alexandria Ocasio-Cortez and other prominent officials had been arrested or executed for treason already, and that “doubles and A.I. clones” had been activated to preserve the illusion that they were still alive.
But other believers contested those claims and appeared resigned to postponing their day of reckoning yet again.
“It may not happen today,” one poster on a QAnon message board wrote. “But when it happens, everyone will see it! As Q predicted. And yes, it will be much much sooner than in four years. We are talking about days (weeks max).”
Twitter said on Monday that it would start applying labels to tweets that contained misleading information about Covid-19 vaccines, and would enforce its coronavirus misinformation policies with a new five-tier “strike” system.
Tweets that violate the policy will get labels with links to official public health information or the Twitter Rules, the company said in a blog post. Twitter said these labels would increase its ability to deploy automated tools to identify and label similar content across the platform. The company’s goal is to eventually use both automated and human review to address Covid-19 misinformation, the post said, but it added that it would take time for the system to be effective.
Twitter will notify people when it applies a label to one of their tweets, and repeated violations of the Covid-19 policy will result in stricter enforcement, the company said. Two or three strikes result in a 12-hour account lock, while four strikes result in a seven-day account lock. After five strikes, Twitter said, the company will permanently suspend the account. (Twitter allows users to submit appeals if accounts are locked or suspended in error.)
The company said it was making these changes to encourage healthy conversation on the platform and help people find reliable information. Since introducing its Covid-19 guidance last March, Twitter said, it had removed more than 8,400 tweets and notified 11.5 million accounts of possible violations worldwide.
Two years ago, YouTube changed its recommendation algorithm to reduce the visibility of so-called borderline content — videos that brush up against its rules but do not explicitly violate them — in an effort to curb the spread of misinformation and conspiracy theories on the site.
But those changes did not stop the rapid spread of videos about QAnon, a debunked internet conspiracy theory, according to a research report on Tuesday from Pendulum, a company that tracks misinformation on YouTube.
Online video channels with QAnon content generated more than one billion views in 2020, with 910 million on YouTube alone, up 38 percent from 2019, the report said. When YouTube began to directly crack down on people posting the QAnon conspiracy theories in October, the largest channels moved to smaller platforms, BitChute and Rumble.
Sam Clark, a co-founder of Pendulum, said the research “indicates that moderation done by YouTube has not been enough to stop the growth of overall viewership of this content.”
The report demonstrated the critical role that YouTube, a subsidiary of Google, played in helping to move QAnon from a fringe phenomenon into the mainstream with violent offline consequences.
In a recent national poll, 17 percent of respondents said they believed in one of the core tenets of QAnon — that a group of devil-worshiping elites who run a child sex ring are trying to control politics and the media. And QAnon believers were involved in the deadly Capitol riot in January as well as other offline violence.
“While we welcome more peer-reviewed research, our data contradicts Pendulum’s findings, and just over the past months alone, we have terminated many prominent QAnon channels and removed thousands of videos for violating our policies,” Farshad Shadloo, a YouTube spokesman, said in a statement.
Mr. Shadloo said Pendulum’s sampling was not comprehensive and did not accurately reflect what was popular or what was watched on YouTube. He added that a number of factors could drive an increase in views, including a sudden increase in media coverage, attention from public figures and sharing outside YouTube.
After YouTube changed its algorithm in January 2019, it said views from recommendations among a set of pro-QAnon channels fell more than 80 percent. The updated policy in October said YouTube would no longer allow “content that targets an individual or group with conspiracy theories that have been used to justify real-world violence.”
Pendulum said YouTube had removed 91,000 videos from 285 of the largest QAnon channels and removed about half of those channels altogether. YouTube has not disclosed the full impact of its policy change, but said the majority of its prominent QAnon channels had been terminated.
But YouTube’s actions did not stop the biggest creators of QAnon content. They simply moved to smaller video platforms with less restrictive moderation policies, such as BitChute and Rumble.
When YouTube took action in October, the number of daily views of QAnon channels on all three platforms fell to 1.3 million from 2.7 million. As followers of those top creators moved to the smaller platforms, daily views rose again, to 2.2 million in December.
And after the attack on the Capitol, QAnon channels had their highest-viewed month ever — topping their previous record by 30 percent, with most of the views on BitChute and Rumble.
Pendulum labeled a channel as a QAnon channel when 30 percent or more of its most-viewed videos discussed the conspiracy theory in a supportive way or indicated that the content creator was a believer.
On Monday, Facebook announced that it was banning vaccine misinformation. It followed up on Wednesday by removing the Instagram account of Robert F. Kennedy Jr., one of the most prominent anti-vaccine activists on social media.
Facebook has become increasingly aggressive in recent months at combating a deluge of false health claims, conspiracy theories and rumors. The company is acting at a critical moment, as vaccinations against the coronavirus roll out across the globe. Facebook has said it consulted with the World Health Organization and other leading health institutes to determine a list of false or misleading claims around Covid-19 and vaccines in general.
Even so, dozens of prominent anti-vaccine activists remained active on Facebook and Instagram on Thursday, according to an analysis by The New York Times. Some of the accounts had large followings, including the Instagram account for Children’s Health Defense, the nonprofit organization that Mr. Kennedy runs, which has over 172,000 followers.
A search for the word “vaccine” on Instagram on Thursday showed that four of the top 10 accounts took strong anti-vaccine positions. A search for the hashtag #vaccine returned three results, one of which was #vaccinetruthadvocate, a term that anti-vaccine activists often use to spread their message. The hashtag was appended to more than 12,000 posts.
“This is going to take some time, however, but we are working to address what you raise,” a Facebook spokeswoman said in a statement.
Researchers who study misinformation said Facebook continued to struggle to contain Covid-19 falsehoods.
“Months after they promised to crack down on Covid misinformation, we reported hundreds of posts containing dangerous misinformation to Facebook, but just one in 10 of those posts were removed,” said Imran Ahmed, chief executive of the nonprofit Center for Countering Digital Hate. “Millions of people are being fed dangerous lies which lead them to doubt government guidance on Covid and on vaccines, prolonging the pandemic. These lies cost lives.”
Here’s a look at some of the prominent accounts still spreading anti-vaccine misinformation on Instagram.
Children’s Health Defense
The nonprofit regularly promotes seminars and webinars with vaccine skeptics through its Instagram account, and posts misleading accounts of death and injury associated with the Covid vaccine. Many of its posts receive tens of thousands of likes. The organization did not return a request for comment.
Erin Elizabeth
An author and public speaker who has campaigned for years against vaccines, Ms. Elizabeth has over 122,000 Instagram followers on her Health Nut News page and 23,700 on another page she runs. She regularly shares content that argues against “mandatory vaccination.” She did not return a request for comment.
Shiva Ayyadurai
Mr. Ayyadurai, an Indian-American politician, has over 299,000 followers on Instagram. He has spread the false claim that Covid-19 can be treated with vitamin C. He has also blamed the spread of Covid-19 on the “deep state,” a conspiracy theory that holds that a secret cabal runs the government. He did not return a request for comment.
Misinformation about the second impeachment trial of former President Donald J. Trump is swirling online at a much slower clip than it did during his first trial — at least so far.
The media insights company Zignal Labs collected misinformation narratives around the impeachment proceedings from Jan. 25 to Feb. 9, and found three emerging falsehoods that had gotten thousands of mentions on social media and cable television and in print and online news outlets.
The falsehoods, though, had not gained as much traction as misinformation about Mr. Trump’s first impeachment trial or the outcome of the 2020 election. Still, the data shows how virtually any news event is an opportunity to spread lies and push divisive rumors, helped along by social media algorithms, eager audiences and a broken fact-checking system.
Here are the three most popular misinformation narratives about the impeachment proceedings.
Nancy Pelosi is responsible for the Capitol attack: 30,300 mentions
The falsehood that Speaker Nancy Pelosi somehow knew that a mob would storm the Capitol and is using the impeachment trial as a “diversion” was amplified by Senator Ron Johnson on Fox News on Feb. 7.
“We now know that 45 Republican senators believe it’s unconstitutional,” Mr. Johnson said on Fox News, referring to the impeachment proceedings. “Is this another diversion operation? Is this meant to deflect away from what the speaker knew and when she knew it? I don’t know, but I’m suspicious.”
A video clip of the interview was viewed at least 2.1 million times on Twitter.
The attack on the Capitol was preplanned, undercutting the basis of the impeachment trial: 8,135 mentions
The falsehood that the Capitol attack was preplanned and “undercuts Trump impeachment premise” gained traction on Feb. 8 when a conservative outlet called Just the News published an article detailing the claim. The article was shared 7,400 times on Twitter and at least 3,000 times on Facebook.
The founder of Just the News, John Solomon — a Washington-based media personality who was instrumental in pushing falsehoods about the Bidens and Ukraine — shared the falsehood from his own Twitter account, collecting thousands of likes and retweets. Other Twitter users then picked up the rumor, further amplifying the false narrative.
Whether the attack was planned in advance should have no bearing on the impeachment trial itself, according to 144 constitutional law scholars who submitted a written analysis of the case against Mr. Trump. They said many of them believe that “President Trump can be convicted and disqualified because he is accused of violating his oath through an ‘extraordinary, unprecedented repudiation of the president’s duties to protect the government’ through his ‘further acts and omissions after he incited the crowd to attack the Capitol.’”
Renewed calls to impeach Obama over ‘spying on Trump’: 5,017 mentions
The narrative that it is not too late to impeach former President Barack Obama started to gain traction on Jan. 26 on Twitter. Thousands of Twitter users shared an old suggestion from Representative Matt Gaetz, a Florida Republican, that if a former president can be impeached, Mr. Obama should be tried for spying on Mr. Trump.
The false narrative was a revival of “Spygate” — a labyrinthine conspiracy theory involving unproven allegations about a clandestine Democratic plot to spy on Mr. Trump’s 2016 campaign. But the theory fizzled as the past four years saw none of Mr. Trump’s political enemies charged with crimes. And in 2019, a highly anticipated Justice Department inspector general’s report found no evidence of a politicized plot to spy on the Trump campaign.
Facebook said on Monday that it planned to remove posts with erroneous claims about vaccines from across its platform, including assertions that vaccines cause autism or that it is safer for people to contract the coronavirus than to receive the vaccinations.
Facebook has repeatedly tightened its content policies over the past year as the coronavirus has surged. In October, the social network prohibited people and companies from purchasing advertising that included false or misleading information about vaccines. In December, Facebook said it would remove posts with claims that had been debunked by the World Health Organization or government agencies.
Monday’s move goes further by targeting unpaid posts to the site and particularly Facebook pages and groups. Instead of targeting only misinformation around Covid-19 vaccines, the update encompasses false claims around all vaccines. Facebook said it had consulted with the World Health Organization and other leading health institutes to determine a list of false or misleading claims around Covid-19 and vaccines in general.
In the past, Facebook had said it would only “downrank,” or push lower down in people’s news feeds, misleading or false claims about vaccines, making it more difficult to find such groups or posts. Now posts, pages and groups containing such falsehoods will be removed from the platform entirely.
“Building trust and confidence in these vaccines is critical, so we’re launching the largest worldwide campaign to help public health organizations share accurate information about Covid-19 vaccines and encourage people to get vaccinated as vaccines become available to them,” Kang-Xing Jin, head of health at Facebook, said in a company blog post.
The company said the changes were in response to a recent ruling from the Facebook Oversight Board, an independent body that reviews decisions made by the company’s policy team and rules on whether they were just. In one ruling, the board said that Facebook needed to create a new standard for health-related misinformation because its current rules were “inappropriately vague.”
Facebook also said it would give $120 million in advertising credits to health ministries, nongovernmental organizations and United Nations agencies to aid in spreading reliable Covid-19 vaccine and preventive health information. As vaccination centers roll out more widely, Facebook said it would help point people to locations where they can receive the vaccine.
Mark Zuckerberg, Facebook’s founder and chief executive, has been proactive against false information related to the coronavirus. He has frequently hosted Dr. Anthony Fauci, the nation’s top infectious disease expert, on Facebook to give live video updates on the American response to the coronavirus. In his private philanthropy, Mr. Zuckerberg has also vowed to “eradicate all disease,” pledging billions to fighting viruses and other diseases.
Yet Mr. Zuckerberg has also been a staunch proponent of free speech across Facebook and was previously reluctant to rein in most falsehoods, even if they were potentially dangerous. The exception has been Facebook’s policy to not tolerate statements that could lead to “immediate, direct physical harm” to people on or off the platform.
Facebook has been criticized for that stance, including for allowing President Donald J. Trump to remain on the platform until after the Jan. 6 riot at the U.S. Capitol.
For years, public health advocates and outside critics took issue with Facebook’s refusal to remove false or misleading claims about vaccines. That led to a surge in false vaccine information, often from people or groups who spread other harmful misinformation across the site. Even when Facebook tried updating its policies, it often left loopholes that were exploited by misinformation spreaders.
Facebook on Monday said it would also change its search tools to promote relevant, authoritative results on the coronavirus and vaccine-related information, while making it more difficult to find accounts that discourage people from getting vaccinated.
Since Representative Alexandria Ocasio-Cortez, the New York Democrat, took to Instagram Live on Monday to describe what the Jan. 6 riot was like from inside the Capitol complex, critics have claimed that she wasn’t where she said she was, or that she couldn’t have experienced what she described from her location.
These claims are false.
While Ms. Ocasio-Cortez was not in the main, domed Capitol building when the rioters breached it, she never said she was. She accurately described being in the Cannon House Office Building, which is part of the Capitol complex and is connected to the main building by tunnels.
In her livestream, Ms. Ocasio-Cortez recalled hiding in a bathroom and thinking she was going to die as unknown people entered her office and shouted, “Where is she?” They turned out to be Capitol Police officers who had not clearly identified themselves, and Ms. Ocasio-Cortez said so on Instagram. She did not claim that they were rioters — only that, from her hiding spot, she initially thought they were.
During the riot, reporters wrote on Twitter that the Cannon building was being evacuated because of credible threats, and that Capitol Police officers were running through the hallways and entering offices just as Ms. Ocasio-Cortez described.
The false claims about her statements have spread widely online, much of the backlash stemming from an article on the conservative RedState blog and a livestream from the right-wing commentator Steven Crowder. On Thursday, Representative Nancy Mace, Republican of South Carolina, tweeted, “I’m two doors down from @aoc and no insurrectionists stormed our hallway.”
But Ms. Ocasio-Cortez never said insurrectionists had stormed that hallway, and Ms. Mace herself has described being frightened enough to barricade her own door. A spokeswoman for Ms. Mace said on Friday that the congresswoman’s tweet had been intended as “an indictment of the media for reporting there were insurrectionists in our hallway when in fact there were not,” and that it “was not at all directed at Ocasio-Cortez.”
“As the Capitol complex was stormed and people were being killed, none of us knew in the moment what areas were compromised,” Ms. Ocasio-Cortez tweeted in response to Ms. Mace’s post. (A spokeswoman for Ms. Ocasio-Cortez said the lawmaker had no additional comment.)
Others have corroborated Ms. Ocasio-Cortez’s account and confirmed that the Cannon building was threatened, even though the rioters did not ultimately breach it.
Ari Rabin-Havt, a deputy manager for Senator Bernie Sanders’s 2020 presidential campaign, tweeted that he was in the Capitol tunnels during the attack. As Mr. Rabin-Havt moved toward the Cannon building, he wrote, members of a SWAT team yelled at him to find a hiding place.
And Representative Katie Porter, Democrat of California, said on MSNBC that after the Cannon building was evacuated, she and Ms. Ocasio-Cortez sheltered in Ms. Porter’s office in another building. She said Ms. Ocasio-Cortez was clearly terrified, opening closets to try to find hiding places and wishing aloud that she had worn flats instead of heels in case she had to run.
Jacob Silver contributed reporting.
Dominion Voting Systems, one of the largest voting machine vendors in the United States, filed a defamation lawsuit against Rudolph W. Giuliani on Monday, accusing him of spreading a litany of falsehoods about the company in his efforts on behalf of former President Donald J. Trump to subvert the election.
The lawsuit chronicles more than 50 inaccurate statements made by Mr. Giuliani in the weeks after the election, and issues a point-by-point rebuttal of each falsehood. Here are four of the most common false statements Mr. Giuliani made about Dominion Voting Systems.
1. The Company’s Origin
Mr. Giuliani regularly stated, falsely, that Dominion “really is a Venezuelan company” and that it “depends completely on the software of Smartmatic,” a company “developed in about 2004, 2005 to help Chavez steal elections.”
As Dominion writes in its lawsuit: “Dominion was not founded in Venezuela to fix elections for Hugo Chávez. It was founded in 2002 in John Poulos’s basement in Toronto to help blind people vote on paper ballots.” The suit later adds that the headquarters for the company’s United States subsidiary are in Denver.
2. Programming Votes
Another often-repeated claim was that Dominion had programmed its machines to flip votes: “In other words when you pressed down Biden, you got Trump, and when you pressed down Trump you got Biden.”
This has been proved false by numerous government and law enforcement officials, including former Attorney General William P. Barr, who said in December: “There’s been one assertion that would be systemic fraud, and that would be the claim that machines were programmed essentially to skew the election results. And the D.H.S. and D.O.J. have looked into that, and so far, we haven’t seen anything to substantiate that.”
Similarly, a joint statement by numerous government and elections officials and agencies, including the National Association of State Election Directors, the National Association of Secretaries of State, and the Cybersecurity and Infrastructure Security Agency, stated that there was “no evidence that any voting system deleted or lost votes, changed votes, or was in any way compromised.”
The hand recount in Georgia also affirmed that the machine recounts were accurate in that state.
3. Antrim County, Mich.
Mr. Giuliani zeroed in on Antrim County, Mich., falsely claiming that a “Dominion machine flipped 6,000 votes from Trump to Biden” there, and that machines in the county were “62 percent inaccurate,” had a “68 percent error rate” and had an “81.9 percent rejection rate.”
Mr. Giuliani’s focus on Antrim County stems from human errors made by the county clerk on election night. According to the lawsuit, the clerk “mistakenly failed to update all of the voting machines’ tabulator memory cards.” But the suit says that “her mistakes were promptly caught as part of the normal canvass process before the election result was made official.” The Michigan secretary of state’s office also conducted a hand audit of all presidential votes in Antrim County that found the machines were accurate.
4. A Problematic Expert
Mr. Giuliani claimed that his accusations, particularly in Antrim County, were backed up by experts. But he largely relied on one man, Russell Ramsland Jr., a former Republican congressional candidate from Texas, who, according to the lawsuit filed by Dominion, had also publicly favored false conspiracy theories.
Dominion spent more than five pages on Mr. Ramsland’s lack of credentials to properly examine equipment, noting that he had a “fundamental misunderstanding of election software.” The suit also quotes the former acting director of the U.S. Election Assistance Commission’s Voting System Testing and Certification program as saying the report produced by Mr. Ramsland showed a “grave misunderstanding” of Antrim County’s voting system and “a lack of knowledge of election technology and process.”
Twitter said on Monday it would allow some users to fact-check misleading tweets, the latest effort by the company to combat misinformation.
Users who join the program, called Birdwatch, can add notes to rebut false or misleading posts and rate the reliability of the fact-checking annotations made by other users. Users in the United States who verify their email addresses and phone numbers with Twitter, and have not violated Twitter’s rules in recent months, can apply to join Birdwatch.
Twitter will start Birdwatch as a small pilot program with 1,000 users, and the fact-checking they produce will not be visible on Twitter but will appear on a separate site. If the experiment is successful, Twitter plans to expand the program to more than 100,000 people in the coming months and will make their contributions visible to all users.
Twitter continues to grapple with misinformation on the platform. In the months before the U.S. presidential election, Twitter added fact-check labels written by its own employees to tweets from prominent accounts, temporarily disabled its recommendation algorithm, and added more context to trending topics. Still, false claims about the coronavirus and elections have proliferated on Twitter despite the company’s efforts to remove them. But Twitter has also faced backlash from some users who have argued that the company removes too much information.
Giving some control over moderation directly to users could help restore trust and allow the company to move more quickly to address false claims, Twitter said.
“We apply labels and add context to tweets, but we don’t want to limit efforts to circumstances where something breaks our rules or receives widespread public attention,” Keith Coleman, a vice president of product at Twitter, wrote in a blog post announcing the program. “We also want to broaden the range of voices that are part of tackling this problem, and we believe a community-driven approach can help.”
Followers of QAnon, the pro-Trump conspiracy theory, have spent weeks anticipating that Wednesday would be the “Great Awakening” — a day, long foretold in QAnon prophecy, when top Democrats would be arrested for running a global sex trafficking ring and President Trump would seize a second term in office.
But as President Biden took office and Mr. Trump landed in Florida, with no mass arrests in sight, some believers struggled to harmonize the falsehoods with the inauguration on their TVs.
Some QAnon believers tried to rejigger their theories to accommodate a transfer of power to Mr. Biden. Several large QAnon groups discussed on Wednesday the possibility that they had been wrong about Mr. Biden, and that the incoming president was actually part of Mr. Trump’s effort to take down the global cabal.
“The more I think about it, I do think it’s very possible that Biden will be the one who pulls the trigger,” one account wrote in a QAnon channel on the messaging app Telegram.
Others expressed anger with QAnon influencers who had told believers to expect a dramatic culmination on Inauguration Day.
“A lot of YouTube journalists have just lost one hell of a lot of credibility,” wrote a commenter in one QAnon chat room.
Still others attempted to shift the goal posts, and simply told their fellow “anons” to hang on and wait for future, unspecified developments.
“Don’t worry about what happens at 12 p.m.,” wrote one QAnon influencer. “Watch what happens after that.”
And some appeared to realize that they’d been duped.
“It’s over,” one QAnon chat room participant wrote, just after Mr. Biden’s swearing-in.
“Wake up,” another wrote. “We’ve been had.”
Followers hoping for guidance from “Q,” the pseudonymous message board user whose posts power the movement, were bound to be disappointed. The account had been silent for weeks and did not post on Wednesday.
Ron Watkins, a major QAnon booster whom some have suspected of being “Q” himself, posted a note of resignation on his Telegram channel on Wednesday afternoon.
“We have a new president sworn in and it is our responsibility as citizens to respect the Constitution,” he wrote. “As we enter into the next administration please remember all the friends and happy memories we made together over the past few years.”
Some of the people who stormed the Capitol last week haven’t been solely focused on the election. They have also been prominent purveyors of coronavirus falsehoods.
There was Mikki Willis, a video producer who helped make “Plandemic,” a slickly produced 26-minute video, viewed by millions in May, that falsely claimed a shadowy cabal of Democratic elites was using the virus and a potential vaccine to profit and gain power.
Then there was Simone Gold, a doctor who appeared in a viral video in July in which a group of doctors, standing on the steps of the Supreme Court, shared multiple misleading claims about the coronavirus.
Both appeared in videos of the Capitol siege.
Their presence demonstrates how the disinformation networks that drove the spread of Covid-19 falsehoods are integrated with the networks spreading voter fraud disinformation, said Kate Starbird, a University of Washington associate professor studying online disinformation.
Several prominent anti-vaccination activist groups, including the Natural News website as well as several large groups and influencers on Facebook and Instagram with hundreds of thousands of followers, reveled in the events at the Capitol and posted prolifically about them. “Grab some 🍿 and enjoy the show!” said one Instagram post with images of the Capitol being raided. The post collected 2,700 likes.
People connected to those networks, Ms. Starbird said, “are saturated in disinformation and experiencing a very different, grievance-based reality than the rest of us.”
Mr. Willis entered the Capitol building, but said in a Facebook post that he did not go in far and left quickly. In a speech right after the siege, he was captured in a video speaking to a crowd of people and referring to those who had pushed into the Capitol as “compatriots.” He also railed against the left and the “diabolical” and “corrupt tyrants” of the mainstream media. “This is psychological warfare,” Mr. Willis said in the video. “This is what war looks like today.”
In an email, Mr. Willis said he had made plans to attend the rally at the Capitol because he was “deeply concerned about the loss of our civil liberties.” He added that he found out too late that the rally he was meant to attend, called Health Freedom DC, included the “Make America Great Again” tagline associated with President Trump. Mr. Willis said he did not support any political party and his presence in videos had been “terribly distorted.”
In his email, Mr. Willis said, “I’ve only seen the violence on TV and social media.”
Ms. Gold, the doctor who shared misleading information about the coronavirus, appeared in a video on the tiled Rotunda floor at the Capitol, reciting a speech from a sheaf of papers protesting a “massive medical establishment,” according to The Washington Post.
In a separate video that went viral after “Plandemic,” collecting tens of millions of views in July, Ms. Gold appeared with a group of doctors calling themselves America’s Frontline Doctors, which she founded. The group was sponsored by conservative activists called the Tea Party Patriots Action, and the video spread the misleading message that hydroxychloroquine was an effective coronavirus treatment and that masks did not slow the spread of the virus.
Ms. Gold later said, “I do regret being there.” She did not respond to requests for comment.
Facebook said on Thursday that it was identifying people involved in storming the Capitol last week and disabling their accounts.
An unsubstantiated claim that Ginni Thomas, the wife of Supreme Court Justice Clarence Thomas and a prominent conservative activist, “paid” for dozens of buses to ferry demonstrators to Washington has proliferated online after a pro-Trump mob breached the Capitol last week.
Just three tweets making the claim amassed more than 420,000 retweets and shares. Ms. Thomas did endorse the protests in Facebook posts on Wednesday (she appears to have since deleted her Facebook page) and has previously spread conspiracy theories.
Ms. Thomas did not immediately respond to emails and a phone call for comment, but there is no evidence that she funded transportation for the rioters.
The rumors may have originated from — and mischaracterized — a popular tweet from the writer Anne Nelson pointing out that Ms. Thomas is on the advisory board of Turning Point USA, a conservative student group.
The founder of Turning Point USA, Charlie Kirk, said in a since-deleted tweet that Turning Point’s political action arm and an affiliated group, Students for Trump, were sending more than 80 “buses of patriots to D.C. to fight for the president” on Jan. 6.
While 80 buses was the number that Turning Point Action had committed to funding, Mr. Kirk’s tweet was “ultimately inaccurate,” as the groups ended up sending just seven buses from New Jersey, North Carolina, and other locations, according to a spokesman for Turning Point.
Ms. Thomas did not fund any buses herself, the spokesman said. Ms. Nelson, the author of a book about an influential conservative group whose members include Ms. Thomas, also told The New York Times that the claim that Ms. Thomas “paid for buses” is “far beyond any of the documentation I’ve presented.”
An itinerary provided to The New York Times by Turning Point noted that the buses would arrive at the South Lawn of the White House at 9 a.m. on Jan. 6, and that there was no exact time for the buses to depart because the duration of Mr. Trump’s speech was unclear. It did not provide any instructions about joining the march to the Capitol, and Brian Caviness, a student who traveled with the group, was quoted by The Fort Worth Star-Telegram as saying that he did not do so as “that wasn’t part of the plan.”
The Federal Bureau of Investigation said on Friday that there was no evidence that supporters of the antifa movement — a loose collective of antifascist activists — had participated in the pro-Trump mob that breached the Capitol building on Wednesday.
Steven D’Antuono, an assistant director at the agency, said in a call with reporters that there was “no indication” of the group’s involvement among the rioters who stormed the Capitol.
Since Wednesday, far-right activists and allies of the president have made the claim, often while presenting easily disproved evidence, that the rioters were made up of antifa supporters, not backers of President Trump.
Among those pushing the falsehood were Representative Matt Gaetz, a Florida Republican, who said while objecting to the electoral votes for Mr. Biden that people in the mob were “in fact members of the violent terrorist group antifa.” Ken Paxton, the attorney general of Texas, also said antifa was involved.
But even President Trump acknowledged that the people who supported him — not liberal activists — had invaded the Capitol. At one point on Wednesday, he told the mob, “We love you.”
An analysis by the media insights company Zignal Labs found that the unfounded rumor had been mentioned 411,099 times on cable television and social media and in print and online news outlets on Wednesday and Thursday. It was by far the most widely shared false or misleading claim about the Capitol Hill mob, Zignal said.
Adam Goldman contributed reporting.
Misinformation and distortions of the truth have run rampant on social media in the days after a mob of Trump loyalists stormed the Capitol on Wednesday, disrupting lawmakers counting electoral votes to certify President-elect Joseph R. Biden Jr.’s win.
A conservative outlet, The Washington Times, claimed that facial recognition showed evidence that the mob was made up of members of antifa, a loose network of anti-fascist activists. The article has since been corrected. Other misleading and false articles and posts claimed that the mob’s work was a “setup” or an “inside job.” And still others said President Trump would soon declassify information on how the election was stolen.
The media insights company Zignal Labs compiled a list of the most popular false and misleading narratives on social media about Wednesday’s events, counting their mentions on cable television and social media and in print and online news outlets on Wednesday and Thursday. Here is the list.
1. Rioters on the Capitol were actually antifa: 411,099 mentions
The false narrative that antifa supporters were actually behind the unrest at the Capitol peaked at 66,122 mentions on Wednesday evening, according to Zignal’s data. Rep. Matt Gaetz even referenced the false Washington Times article as proof that the mob was “in fact members of the violent terrorist group antifa.”
On Thursday, The Washington Times published a new version of its article, reporting that it was actually “neo-Nazis and other extremists” who were identified in photos of the mob, after BuzzFeed News challenged the outlet’s reporting.
2. The mob’s actions were a “setup” and an “inside job”: 122,287 mentions
The idea that the mob’s work was an inside job spread widely on social media, even though there was no evidence to support the conspiracy theory. People said the setup had been planned by the “deep state,” which is shorthand for the conspiracy theory about Democratic elites secretly exercising political control over the public. The narrative peaked at 12,593 mentions from 6 p.m. to 7 p.m. on Wednesday, according to Zignal’s data.
3. President Trump knew that the mob would happen, and people should “trust the plan” and “hold the line”: 83,990 mentions
The distorted idea that President Trump knew about the mob’s actions in advance and that people should “trust the plan” and “hold the line” was especially widespread among supporters of the conspiracy movement QAnon, which is based on the false premise that the country is run by a Democrat-led cabal of pedophiles whom President Trump is bringing down.
4. The mob at the Capitol was made up of people “posing as MAGA”: 64,258 mentions
A popular false narrative that people in the mob were simply “posing as MAGA” peaked early on Wednesday, before accusations specifically zeroed in on antifa.
5. President Trump will “declassify” information on how the election was stolen: 63,190 mentions
Some supporters of the president pushed the falsehood that he would soon “declassify” information on how the election was stolen, in spite of overwhelming evidence — and a host of court rulings — that no widespread fraud was found in the election.
In some versions of the baseless rumor, people stated that this was the real reason that Mr. Trump’s opponents in Congress were calling for the president to be stripped of his powers under the disability clause of the 25th Amendment.