
Lord of the Rings, 2020 and Stuffed Oreos: Read the Andrew Bosworth Memo

On Dec. 30, Andrew Bosworth, a longtime Facebook executive and confidant of Mark Zuckerberg, wrote a long memo on the company’s internal network.

In the post, titled “Thoughts for 2020,” Mr. Bosworth — who oversaw Facebook’s advertising efforts during the 2016 election and is now in charge of the company’s virtual and augmented reality division — admitted that President Trump’s savvy use of Facebook’s advertising tools “very well may lead to” his re-election. But he maintained that the company should not change its policies on political advertising, saying that doing so in order to avert a victory by Mr. Trump would be a misuse of power, comparing it to a scene from “The Lord of the Rings.”

Mr. Bosworth, who is seen by some inside Facebook as a proxy of sorts for Mr. Zuckerberg, also weighed in on a variety of issues that have vexed Facebook for the past few years, including data privacy scandals, Russian interference, political polarization and the debate over whether Facebook is healthy for society.

Here is the full post as written:

The election of Donald Trump immediately put a spotlight on Facebook. While the intensity and focus of that spotlight may be unfair, I believe it isn't unjust. Scrutiny is warranted given our position in society as the most prominent of a new medium. I think most of the criticisms that have come to light have been valid and represent real areas for us to serve our community better. I don't enjoy having our flaws exposed, but I consider it far better than the alternative, where we remain ignorant of our shortcomings.

One trap I sometimes see people falling into is to dismiss all feedback when they can invalidate one part of it. I see that with personal feedback and I see it happening with media coverage. The press often gets so many details wrong that it can be hard to trust the veracity of their conclusions. But dismissing the whole because of flaws in parts is a mistake. The media has limited information to work with (by our own design!) and sometimes gets it entirely wrong, but there is almost always some critical issue that motivated them to write, which we need to understand.

It is worth looking at the 2016 election, which set this chain of events in motion. I was running our ads organization at the time of the election and had been for the four years prior (and for one year after). It is worth reminding everyone that Russian interference was real but was mostly not done through advertising. $100,000 in ads on Facebook can be a powerful tool, but it can't buy you an American election, especially when the candidates themselves are putting up several orders of magnitude more money on the same platform (not to mention other platforms).

Instead, the Russians worked to exploit existing divisions in the American public, for example by hosting Black Lives Matter and Blue Lives Matter protest events in the same city on the same day. The people who showed up to those events were real even if the event coordinator was not. Likewise, the groups of Americans being fed partisan content were real even if those feeding them were not. The organic reach they managed sounds very big in absolute terms, and unfortunately humans are bad at contextualizing big numbers. Whatever reach they managed represents an infinitesimal fraction of the overall content people saw in the same period of time, and certainly over the course of an election across all media.

So most of the widely believed information floating around isn't accurate. But who cares? It is certainly true that we should have been more mindful of the role both paid and organic content played in democracy and been more protective of it. On foreign interference, Facebook has made material progress, and while we may never be able to fully eliminate it, I don't expect it to be a major issue for 2020.

Misinformation was also real and related, but not the same as Russian interference. The Russians may have used misinformation alongside real partisan messaging in their campaigns, but the primary source of misinformation was economically motivated. People with no political interest whatsoever realized they could drive traffic to ad-laden websites by creating fake headlines, and did so to make money. These might be more accurately described as hoaxes that play on confirmation bias or conspiracy theories. In my opinion this is another area where the criticism is merited. This is also an area where we have made dramatic progress, and I don't expect it to be a major issue for 2020.

It is worth noting, as it is relevant at the current moment, that misinformation from the candidates themselves was not considered a major shortcoming of political advertising on FB in 2016, even though our policy then was the same as it is now. These policies are often covered by the press in the context of a profit motive. That's one area where I can confidently assure you the critics are wrong. Having run our ads business for some time, I can tell you it just isn't a factor when we discuss the right thing to do. However, given that those conversations are private, I think we can all agree the press can be forgiven for jumping to that conclusion. Perhaps we could do a better job exposing the real cost of these mistakes to make it clear that revenue maximization would have called for a different strategy entirely.

Cambridge Analytica is one of the more acute cases I can think of where the details are almost all wrong but I think the scrutiny is broadly right. Facebook very publicly launched our developer platform in 2012 in an environment that primarily scrutinized us for keeping data to ourselves. Everyone who added an application got a prompt explaining what information it would have access to, which at the time included information from friends. This may sound crazy in a 2020 context, but it received widespread praise at the time. However, the only mechanism we had for keeping data secure once it was shared was legal threats, which ultimately didn't amount to much for companies that had very little to lose. The platform didn't build the value we had hoped for our consumers, and we shut this form of it down in 2014.

The company Cambridge Analytica started by running surveys on Facebook to get information about people. It later pivoted to be an advertising company, part of our Facebook Marketing Partner program, which other companies could hire to run their ads. Their claim to fame was psychographic targeting. This was pure snake oil and we knew it; their ads performed no better than those of any other marketing partner (and in many cases performed worse). I personally regret letting them stay on the FMP program for that reason alone. However, at the time we thought they were just another company trying to find an angle to promote themselves and assumed poor performance would eventually lose them their clients. We had no idea they were shopping an old Facebook dataset that they were supposed to have deleted (and had certified to us in writing that they had).

When Trump won, Cambridge Analytica tried to take credit, so they were back on our radar, but just for making [expletive] claims about their own importance. I was glad when the Trump campaign manager Brad Parscale called them out for it. Later on, we found out from journalists that they had never deleted the database and had instead made elaborate promises about its power for advertising. Our comms team decided it would be best to get ahead of the journalists and pull them from the platform. This was a huge mistake. It was not only bad form (justifiably angering the journalists), but we were also fighting the wrong battle. We wanted to be clear this had not been a data breach (which, to be fair to us, it absolutely was not), but the real concern was the existence of the dataset, no matter how it happened. We also sent the journalists legal letters advising them not to use the term "breach," which was received normally by the NYT (who agreed) and aggressively by The Guardian (who forged ahead with the wrong terminology, furious about the letter), in spite of it being a relatively common practice, I am told.

In practical terms, Cambridge Analytica is a total non-event. They were snake oil salespeople. The tools they used didn't work, and the scale they used them at wasn't meaningful. Every claim they have made about themselves is garbage. Data of the kind they had isn't that valuable to begin with and, worse, it degrades quickly, so much so as to be effectively useless in 12-18 months. In fact, the United Kingdom Information Commissioner's Office (ICO) seized all the equipment at Cambridge Analytica and found that there was zero data from any UK citizens! So surely this is one where we can ignore the press, right? Nope. The platform was such a poor move that the risks associated with it were bound to come to light. That we shut it down in 2014 and never paid the piper on how bad it was makes this scrutiny justified in my opinion, even if it is narrowly misguided.

So was Facebook responsible for Donald Trump getting elected? I think the answer is yes, but not for the reasons anyone thinks. He didn’t get elected because of Russia or misinformation or Cambridge Analytica. He got elected because he ran the single best digital ad campaign I’ve ever seen from any advertiser. Period.

To be clear, I'm no fan of Trump. I donated the max to Hillary. After his election I wrote a post about Trump supporters that, I'm told, caused colleagues who had supported him to feel unsafe around me (I regret that post and deleted it shortly after).

But Parscale and Trump just did unbelievable work. They weren't running misinformation or hoaxes. They weren't microtargeting or saying different things to different people. They just used the tools we had to show the right creative to each person. Their use of custom audiences, video, e-commerce, and fresh creative remains the high-water mark of digital ad campaigns in my opinion.

That brings me to the present moment, where we have maintained the same ad policies. It occurs to me that they very well may lead to the same result. As a committed liberal, I find myself desperately wanting to pull any lever at my disposal to avoid that outcome. So what stays my hand?

I find myself thinking of The Lord of the Rings at this moment. Specifically, when Frodo offers the Ring to Galadriel and she imagines using its power righteously, at first, but knows it will eventually corrupt her. As tempting as it is to use the tools available to us to change the outcome, I am confident we must never do that, or we will become that which we fear.

The philosopher John Rawls reasoned that the only moral way to decide something is to remove yourself entirely from the specifics of any one person involved, behind a so-called "veil of ignorance." That is the tool that leads me to believe in liberal government programs like universal healthcare, expanded housing programs, and civil rights protections. It is also the tool that prevents me from limiting the reach of publications that have earned their audience, as distasteful as their content may be to me and even to the moral philosophy I hold so dear.

That doesn't mean there is no line. Things like incitement of violence, voter suppression, and more are things that same moral philosophy would safely allow me to rule out. But I think my fellow liberals are a bit too, well, liberal when it comes to calling people Nazis.

If we don't want hate-mongering politicians, then we must not elect them. If they are getting elected, then we have to win hearts and minds. If we change the outcomes without winning the minds of the people who will be ruled, then we have a democracy in name only. If we limit what information people have access to and what they can say, then we have no democracy at all.

This conversation often raises the alarm around filter bubbles, but that is a myth that is easy to dispel. Ask yourself how many newspapers and news programs people read/watched before the internet. If you guessed “one and one” on average you are right, and if you guessed those were ideologically aligned with them you are right again. The internet exposes them to far more content from other sources (26% more on Facebook, according to our research). This is one that everyone just gets wrong.

The focus on filter bubbles causes people to miss the real disaster, which is polarization. What happens when you see 26% more content from people you don't agree with? Does it help you empathize with them, as everyone has been suggesting? Nope. It makes you dislike them even more. This is also easy to prove with a thought experiment: whatever your political leaning, think of a publication from the other side that you despise. When you read an article from that outlet, perhaps shared by an uncle or nephew, does it make you rethink your values? Or does it make you retreat further into the conviction of your own correctness? If you answered the former, congratulations, you are a better person than I am. Every time I read something from Breitbart I get 10% more liberal.

What does all of this say about the nature of the algorithmic rewards? Everyone points to top 0.1% content as being acutely polarized but how steep are the curves? What does the top 1% or 5% look like? And what is the real reach across those curves when compared to other content? I think the call for algorithmic transparency can sometimes be overblown but being more transparent about this type of data would likely be healthy.
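One way to make those questions concrete is to rank posts by reach and measure how much of total viewership the top slices capture. Below is a minimal, purely illustrative sketch in Python; the heavy-tailed sample data and every name in it are invented for this example and are not drawn from Facebook.

```python
import numpy as np

def reach_share(views, top_fracs=(0.001, 0.01, 0.05)):
    """Share of total views captured by the top 0.1%, 1% and 5% of posts."""
    ranked = np.sort(np.asarray(views, dtype=float))[::-1]  # most-viewed first
    total = ranked.sum()
    return {f: ranked[: max(1, int(len(ranked) * f))].sum() / total
            for f in top_fracs}

# Hypothetical heavy-tailed distribution of per-post views, for illustration only.
rng = np.random.default_rng(0)
views = rng.pareto(1.5, size=100_000) * 100
print(reach_share(views))  # the steeper the curve, the larger the top slices' share
```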

What I expect people will find is that the algorithms are primarily exposing the desires of humanity itself, for better or worse. This is a Sugar, Salt, Fat problem. The book of that name tells a story ostensibly about food but in reality about the limited effectiveness of corporate paternalism. A while ago, Kraft Foods had a leader who tried to reduce the sugar it sold in the interest of consumer health. But customers wanted sugar, so he just ended up reducing Kraft's market share. Health outcomes didn't improve. That CEO lost his job. The new CEO introduced quadruple-stuffed Oreos and the company returned to grace. Giving people tools to make their own decisions is good, but trying to force decisions upon them rarely works (for them or for you).

In these moments people like to suggest that our consumers don’t really have free will. People compare social media to nicotine. I find that wildly offensive, not to me but to addicts. I have seen family members struggle with alcoholism and classmates struggle with opioids. I know there is a battle for the terminology of addiction but I side firmly with the neuroscientists. Still, while Facebook may not be nicotine I think it is probably like sugar. Sugar is delicious and for most of us there is a special place for it in our lives. But like all things it benefits from moderation.

At the end of the day we are forced to ask what responsibility individuals have for themselves. Set aside substances that directly alter our neurochemistry unnaturally. Make costs and trade-offs as transparent as possible. But beyond that each of us must take responsibility for ourselves. If I want to eat sugar and die an early death that is a valid position. My grandfather took such a stance towards bacon and I admired him for it. And social media is likely much less fatal than bacon.

To bring this uncharacteristically long and winding essay full circle, I wanted to start a discussion about what lessons people are taking away from the press coverage. My takeaway is that we were late on data security, misinformation, and foreign interference. We need to get ahead of polarization and algorithmic transparency. What are the other big topics people are seeing and where are we on those?


Facebook Says It Will Ban ‘Deepfakes’

WASHINGTON — Facebook said on Monday that it would ban videos that are heavily manipulated by artificial intelligence, known as deepfakes, from its platform.

In a blog post, a company executive said Monday evening that the social network would remove videos altered by artificial intelligence in ways that “would likely mislead someone into thinking that a subject of the video said words that they did not actually say.”

The policy will not extend to parody or satire, the executive, Monika Bickert, said, nor will it apply to videos edited to omit or change the order of words.

Ms. Bickert said all videos posted would still be subject to Facebook's system for fact-checking potentially deceptive content. Content found to be factually incorrect appears less prominently in the site's news feed and is labeled false.

The company’s new policy was first reported by The Washington Post.

Facebook was heavily criticized last year for refusing to take down an altered video of Speaker Nancy Pelosi that had been edited to make it appear as though she was slurring her words. At the time, the company defended its decision, saying it had subjected the video to its fact-checking process and had reduced its reach on the social network.

It did not appear that the new policy would have changed the company's handling of the video of Ms. Pelosi.

The announcement comes ahead of a hearing before the House Energy & Commerce Committee on Wednesday morning, during which Ms. Bickert, Facebook’s vice president of global policy management, is expected to testify on “manipulation and deception in the digital age,” alongside other experts.

Because Facebook is still the No. 1 platform for sharing false political stories, according to disinformation researchers, spotting and halting novel forms of digital manipulation before they spread is an urgent priority.

Computer scientists have long warned that new techniques used by machines to generate images and sounds that are indistinguishable from the real thing can vastly increase the volume of false and misleading information online. And false political information is circulating rapidly online ahead of the 2020 presidential elections in the United States.

In late December, Facebook announced it had removed hundreds of accounts, including pages, groups and Instagram feeds, meant to fool users in the United States and Vietnam with fake profile photos generated with the help of artificial intelligence.

David McCabe reported from Washington, and Davey Alba from New York.


Awash in Disinformation Before Vote, Taiwan Points Finger at China

TAIPEI, Taiwan — At first glance, the bespectacled YouTuber railing against Taiwan’s president, Tsai Ing-wen, just seems like a concerned citizen making an appeal to his fellow Taiwanese.

He speaks Taiwanese-accented Mandarin, with the occasional phrase in Taiwanese dialect. His captions are written with the traditional Chinese characters used in Taiwan, not the simplified ones used in China. With outrage in his voice, he accuses Ms. Tsai of selling out “our beloved land of Taiwan” to Japan and the United States.

The man, Zhang Xida, does not say in his videos whom he works for. But other websites and videos make it clear: He is a host for China National Radio, the Beijing-run broadcaster.

As Taiwan gears up for a major election this week, officials and researchers worry that China is experimenting with social media manipulation to sway the vote. Doing so would be easy, they fear, in the island’s rowdy democracy, where the news cycle is fast and voters are already awash in false or highly partisan information.

China has been upfront about its dislike for President Tsai, who opposes closer ties with Beijing. The Communist Party claims Taiwan as part of China’s territory, and it has long deployed propaganda and intimidation to try to influence elections here.

Polls suggest, however, that Beijing’s heavy-handed ways might be backfiring and driving voters to embrace Ms. Tsai. Thousands of Taiwan citizens marched last month against “red media,” or local news organizations supposedly influenced by the Chinese government.

That is why Beijing may be turning to subtler, digital-age methods to inflame and divide.

Recently, there have been Facebook posts saying falsely that Joshua Wong, a Hong Kong democracy activist who has fans in Taiwan, had attacked an old man. There were posts about nonexistent protests outside Taiwan’s presidential house, and hoax messages warning that ballots for the opposition Kuomintang, or Chinese Nationalist Party, would be automatically invalidated.

So many rumors and falsehoods circulate on Taiwanese social media that it can be hard to tell whether they originate in Taiwan or in China, and whether they are the work of private provocateurs or of state agents.

Taiwan’s National Security Bureau in May issued a downbeat assessment of Chinese-backed disinformation on the island, urging a “‘whole of government’ and ‘whole of society’ response.”

“False information is the last step in an information war,” the bureau’s report said. “If you find false information, that means you have already been thoroughly infiltrated.”

Taiwanese society has woken up to the threat. The government has strengthened laws against spreading harmful rumors. Companies including Facebook, Google and the messaging service Line have agreed to police their platforms more stringently. Government departments and civil society groups now race to debunk hoaxes as quickly as they appear.

The election will put these efforts — and the resilience of Taiwan’s democracy — to the test.

“The ultimate goal, just like what Russia tried to do in the United States, is to crush people’s confidence in the democratic system,” said Tzeng Yi-suo of the Institute for National Defense and Security Research, a think tank funded by the government of Taiwan.

Fears of Chinese meddling became acute in recent months after a man named Wang Liqiang sought asylum in Australia claiming he had worked for Chinese intelligence to fund pro-Beijing candidates in Taiwan, buy off media groups and conduct social media attacks.

Mr. Wang’s account remains largely unverified. But there are other signs that Beijing is working to upgrade its techniques of information warfare.

Twitter, which is blocked in mainland China, recently took down a vast network of accounts that it described as Chinese state-backed trolls trying to discredit Hong Kong’s protesters.

A 2018 paper in a journal linked to the United Front Work Department, a Communist Party organ that organizes overseas political networking, argued that Beijing had failed to shape Taiwanese public discourse in favor of unification with China.

In November, the United Front Work Department held a conference in Beijing on internet influence activities, according to an official social media account. The department’s head, You Quan, said the United Front would help people such as social media influencers, live-streamers and professional e-sports players to “play an active role in guiding public opinion.”

“We understand that the people who are sowing discord are also building a community, that they are also learning from each other’s playbooks,” said Audrey Tang, Taiwan’s digital minister. “There are new innovations happening literally every day.”

In Taiwan, Chinese internet trolls were once easily spotted because they posted using the simplified Chinese characters found only on the mainland.

That happens less these days, though there are still linguistic slip-ups.

In one of the YouTube videos from Mr. Zhang, the China National Radio employee, a character in the description is incorrectly translated into traditional Chinese from simplified Chinese. Mr. Zhang did not respond to a message seeking comment.
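The character-level tell described here is simple enough to automate. Below is a minimal sketch, assuming the open-source OpenCC converter is available (for example via the third-party opencc-python-reimplemented package); checking one character at a time is a rough heuristic that misses multi-character mappings.

```python
from opencc import OpenCC  # third-party simplified/traditional converter (assumption)

s2t = OpenCC("s2t")  # simplified-to-traditional conversion table

def simplified_chars(text):
    """Flag characters that change under s2t conversion, i.e. simplified forms."""
    return [ch for ch in text if s2t.convert(ch) != ch]

print(simplified_chars("台湾的汉字"))  # likely flags '湾' and '汉' as simplified
```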

Puma Shen, an assistant professor at National Taipei University who studies Chinese influence efforts, does not believe that disinformation from China is always guided by some central authority as it spreads around the internet.

“It’s not an order from Beijing,” Mr. Shen said. Much of the activity seems to be scattered groups of troublemakers, paid or not, who feed off one another’s trolling. “People are enthusiastic about doing this kind of stuff there in China,” he said.

In December, Taiwan’s justice ministry warned about a fake government notice saying Taiwan was deporting protesters who had fled Hong Kong. The hoax first appeared on the Chinese social platform Weibo, the ministry said, before spreading to a Chinese nationalist Facebook group.

Sometimes, Chinese trolls amplify rumors already floating around in Taiwan, Mr. Shen said. He is also on the lookout for Taiwanese social media accounts that may be bought or supported by Chinese operatives.

Ahead of midterm elections in 2018, his team had been monitoring several YouTube channels that discussed Taiwanese politics. The day after voting ended, the channels disappeared.

After Yu Hsin-Hsien was elected that year to the city council in Taoyuan, a city near Taipei, mysterious strangers began inquiring about buying his Facebook page, which had around 280,000 followers. Mr. Yu, 30, immediately suspected China.

His suspicions grew after he demanded an extravagantly high price and the buyers accepted. Mr. Yu, who represents Ms. Tsai’s party, the Democratic Progressive Party, did not sell.

“Someone approaches a just-elected legislator and offers to buy his oldest weapon,” Mr. Yu said. “What’s his motive? To serve the public? It can’t be.”

Recently, internet users in Taiwan noticed a group of influencers, many of them pretty young women, posting messages on Facebook and Instagram with the hashtag #DeclareMyDeterminationToVote. The posts did not mention candidates or parties, but the people included selfies with a fist at their chest, a gesture often used by Han Kuo-yu, the Kuomintang’s presidential candidate.

Mr. Han’s campaign denied involvement. But some have speculated that China’s United Front might be responsible. The United Front Work Department did not respond to a fax requesting comment.

One line of attack against Ms. Tsai has added to the atmosphere of mistrust and conspiracy theorizing ahead of this week's vote.

Politicians and media outlets have questioned whether Ms. Tsai’s doctoral dissertation is authentic, even though her alma mater, the London School of Economics, has confirmed that it is.

Dennis Peng hosts a daily YouTube show dedicated to proving otherwise. His channel has 173,000 subscribers. Theories about Ms. Tsai’s dissertation have circulated in China, too, with the help of the Chinese news media.

Mr. Peng, a former television anchor, once supported Ms. Tsai. He was proud that Taiwan elected a female president. Now he says he is not being paid by anyone, including China, to crusade against her.

He is not worried about being smeared as fake news.

“Let news and fake news compete against each other,” Mr. Peng said. “I trust that most people aren’t so stupid. Everybody eventually figures it out.”

Steven Lee Myers contributed reporting. Wang Yiwei contributed research from Beijing.


2020 Campaigns Throw Their Hands Up on Disinformation

In 2018, Lisa Kaplan assembled a small team inside the re-election campaign of Senator Angus King, an independent from Maine. Wary of how Russia had interfered in the 2016 presidential election, the team set out to find and respond to political disinformation online.

The team noticed some false statements shared by voters, and traced the language back to Facebook pages with names like “Boycott The NFL 2018.” It alerted Facebook, and some pages were removed. The people behind the posts, operating from places like Israel and Nigeria, had misled the company about their identity.

Today, Ms. Kaplan said, she knows of no campaigns, including among the 2020 presidential candidates, that have similar teams dedicated to spotting and pushing back on disinformation.

They may “wake up the day after the election and say, ‘Oh, no, the Russians stole another one,’” she said.

The examples are numerous: A hoax version of the Green New Deal legislation went viral online. Millions of people saw unsubstantiated rumors about the relationship between Ukraine and the family of former Vice President Joseph R. Biden Jr. A canard about the ties between a Ukrainian oil company and a son of Senator Mitt Romney, the Utah Republican, spread widely, too.

Still, few politicians or their staffs are prepared to quickly notice and combat incorrect stories about them, according to dozens of campaign staff members and researchers who study online disinformation. Several of the researchers said they were surprised by how little outreach they had received from politicians.

Campaigns and political parties say their hands are tied, because big online companies like Facebook and YouTube have few restrictions on what users can say or share, as long as they do not lie about who they are.

But campaigns should not just be throwing their hands up, said some researchers and campaign veterans like Ms. Kaplan, who now runs a start-up that helps fight disinformation. Instead, they said, there should be a concerted effort to counter falsehoods.

“Politicians must play some defense by understanding what information is out there that may be manipulated,” said Joan Donovan, a research director at Harvard University’s Shorenstein Center. Even more important for politicians, she said, is pushing “high-profile and consistent informational campaigns.”

Too many campaigns are now left on their heels, said Simon Rosenberg, who tried to thwart disinformation for the Democratic Congressional Campaign Committee before the 2018 midterm election.

“The idea of counterdisinformation doesn’t really exist as a strategic objective,” he said.

Political groups are not ignoring false information. Bob Lord, the chief security officer of the Democratic National Committee, encourages campaigns to alert his organization when they see it online.

The committee also gives advice on when and how to respond. He said campaigns must decide when the costs of ignoring a falsehood outweigh the risk of drawing additional attention to it by speaking out.

But he said his reach was limited.

“The amount of disinformation that is floating around can cover almost any possible topic,” Mr. Lord said, and his team cannot look into each reported piece. If campaigns need connections to social media companies, he said, “we’re happy to make some.”

In September, President Trump’s re-election campaign released an ad that included an incorrect statement about Mr. Biden’s dealings with Ukraine. The campaign posted the ad on Facebook and the president’s Twitter account. Between the two services, the ad has been viewed more than eight million times.

Mr. Biden’s campaign publicized letters that it had written to Facebook, Twitter, YouTube and Fox News, asking the companies to ban the ad. But it remained up. In mid-November, the Biden campaign released a website called Just the Facts, Folks.

Jamal Brown, a spokesman for the Biden campaign, said it was not the campaign’s responsibility alone to push back on all falsehoods. But, he said, “it is incumbent upon all of us, both public- and private-sector companies, users, and elected officials and leaders, to be more vigilant in the kinds of content we engage and reshare on social media.”

Several months ago, a team at the Democratic Congressional Campaign Committee flagged some ads on Facebook to the office of Representative Ilhan Omar, a Minnesota Democrat. The ads called for an investigation into unfounded accusations that she had violated several laws.

After the committee and Ms. Omar’s campaign contacted Facebook, the company said it would limit the prominence of the ads in people’s feeds. But the ads, which have now reached over one million views, remain active.

Facebook does not remove false news, though it does label some stories as false through a partnership with several fact-checking organizations. It has said politicians like Mr. Trump can run ads that feature their “own claim or statement — even if the substance of that claim has been debunked elsewhere.”

In October, Twitter announced plans to forbid all political ads. But the company does not screen for false accusations. Twitter said it did not want to set a precedent for deciding what is and is not truthful online.

In an email, Ms. Omar said it was “not enough” to rely on private companies alone.

“We as a nation need to think seriously about ways to address online threats to our safety and our democracy while protecting core values like free speech,” she said.

Academics and researchers said it was surprising how little outreach there had been from campaigns that faced disinformation operations. Many of the researchers can dissect when a false idea first appeared online, and how it spread.

Graham Brookie, the director of the Atlantic Council’s Digital Forensic Research Lab, said there needed to be “more ingrained information sharing” among politicians, campaign staff, social media companies, civil society groups and, in some cases, law enforcement to counteract the increasing volume of election disinformation.

But when disinformation is used as a tool in partisan politics, Mr. Brookie said, the discussion becomes “a Rorschach test to reaffirm each audience’s existing beliefs, regardless of the facts.”

“One side will accuse the other, and then disinformation itself is weaponized,” he said.

Chris Pack, communications director of the National Republican Congressional Committee, said the disinformation that his party fought was “perpetuated by a liberal press corps that is still incapable of wrapping their heads around the fact that President Trump won the 2016 election.”

That leaves some in the research community wary of wading in at all, said Renee DiResta, the technical research manager for the Stanford Internet Observatory, which studies disinformation.

“I think this is a concern for a lot of academics who don’t want to work directly with a campaign,” Ms. DiResta said, “because that would be problematic for their neutrality.”

Nick Corasaniti contributed reporting.