Facebook Says It Won’t Back Down From Allowing Lies in Political Ads

SAN FRANCISCO — Defying pressure from Congress, Facebook said on Thursday that it would continue to allow political campaigns to use the site to target advertisements to particular slices of the electorate and that it would not police the truthfulness of the messages sent out.

The stance put Facebook, the most important digital platform for political ads, at odds with some of the other large tech companies, which have begun to put new limits on political ads.

Facebook’s decision, telegraphed in recent months by executives, is likely to harden criticism of the company heading into this year’s presidential election.

Political advertising cuts to the heart of Facebook’s outsize role in society, and the company has found itself squeezed between liberal critics, who want it to do a better job of policing its various social media platforms, and conservatives, who say their views are being unfairly muzzled.

The issue has raised important questions about how heavy a hand technology companies like Facebook — which also owns Instagram and the messaging app WhatsApp — and Google should use in deciding what types of political content they will and will not permit.

By maintaining the status quo, Facebook executives are essentially saying they are doing the best they can without government guidance and see little benefit to the company or the public in changing.

In a blog post, a company official echoed Facebook’s earlier calls for lawmakers to set firm rules.

“In the absence of regulation, Facebook and other companies are left to design their own policies,” Rob Leathern, Facebook’s director of product management overseeing the advertising integrity division, said in the post. “We have based ours on the principle that people should be able to hear from those who wish to lead them, warts and all, and that what they say should be scrutinized and debated in public.”

Other social media companies have decided otherwise, and some had hoped Facebook would quietly follow their lead. In late October, Twitter’s chief executive, Jack Dorsey, banned all political advertising from his network, citing the challenges that novel digital systems present to civic discourse. Google quickly followed suit, placing limits on political ads across some of its properties, though its restrictions were narrower in scope.

Reaction to Facebook’s policy broke down largely along party lines.

The Trump campaign, which has been highly critical of any attempts by technology companies to regulate political advertising and has already spent more than $27 million on the platform, largely supported Facebook’s decision not to restrict the targeting of ads or to set fact-checking standards.

“Our ads are always accurate so it’s good that Facebook won’t limit political messages because it encourages more Americans to be involved in the process,” said Tim Murtaugh, a spokesman for the Trump campaign. “This is much better than the approaches from Twitter and Google, which will lead to voter suppression.”

Democratic presidential candidates and outside groups decried the decision.

“Facebook is paying for its own glowing fake news coverage, so it’s not surprising they’re standing their ground on letting political figures lie to you,” Senator Elizabeth Warren said on Twitter.

Ms. Warren, who has been among the most critical of Facebook and regularly calls for major tech companies to be broken up, reiterated her stance that the social media company should face tougher policies.

The Biden campaign was similarly critical. The campaign has confronted Facebook over an ad run by President Trump’s campaign that attacked Joseph R. Biden Jr.’s record on Ukraine.

“Donald Trump’s campaign can (and will) still lie in political ads,” Bill Russo, the deputy communications director for Mr. Biden, said in a statement. “Facebook can (and will) still profit off it. Today’s announcement is more window dressing around their decision to allow paid misinformation.”

But many Democratic groups willing to criticize Facebook have had to walk a fine line: they have pushed for more regulation when it comes to fact-checking political ads, while adamantly opposing any changes to the ad-targeting features.

On Thursday, some Democratic outside groups welcomed Facebook’s decision not to limit microtargeting, but still thought the policy fell short.

“These changes read to us mostly as a cover for not making the change that is most vital: ensuring politicians are not allowed to use Facebook as a tool to lie to and manipulate voters,” said Madeline Kriger, who oversees digital ad buying at Priorities USA, a Democratic super PAC.

Facebook has played down the business opportunity in political ads, saying the vast majority of its revenue comes from commercial, not political, ads. But lawmakers have noted that Facebook ads could be a focal point of Mr. Trump’s campaign as well as those of top Democrats.

Facebook’s hands-off ad policy has already allowed for misleading advertisements. In October, a Facebook ad from the Trump campaign made false accusations about Mr. Biden and his son, Hunter Biden. The ad quickly went viral and was viewed by millions. After the Biden campaign asked Facebook to take down the ad, the company refused.

“Our approach is grounded in Facebook’s fundamental belief in free expression, respect for the democratic process and the belief that, in mature democracies with a free press, political speech is already arguably the most scrutinized speech there is,” Facebook’s head of global elections policy, Katie Harbath, wrote in a letter to the Biden campaign.

In an attempt to provoke Facebook, Ms. Warren’s presidential campaign ran an ad falsely claiming that the company’s chief executive, Mark Zuckerberg, was backing the re-election of Mr. Trump. Facebook did not take the ad down.

Criticism seemed to stiffen Mr. Zuckerberg’s resolve. Company officials said he and Sheryl Sandberg, Facebook’s chief operating officer, had ultimately made the decision to stand firm.

In a strongly worded speech at Georgetown University in October, Mr. Zuckerberg said he believed in the power of unfettered speech, including in paid advertising, and did not want to be in the position to police what politicians could and could not say to constituents. Facebook’s users, he said, should be allowed to make those decisions for themselves.

“People having the power to express themselves at scale is a new kind of force in the world — a Fifth Estate alongside the other power structures of society,” he said.

Facebook officials have repeatedly said significant changes to its rules for political or issue ads could harm the ability of smaller, less well-funded organizations to raise money and organize across the network.

Instead of overhauling its policies, Facebook has made small tweaks. Mr. Leathern said Facebook would add greater transparency features to its library of political advertising in the coming months, a resource for journalists and outside researchers to scrutinize the types of ads run by the campaigns.

Facebook will also add a feature that allows users to see fewer campaign and political issue ads in their news feeds, something the company has said many users have requested.

There was considerable debate inside Facebook about whether it should change. Late last year, hundreds of employees supported an internal memo that called on Mr. Zuckerberg to limit the abilities of Facebook’s political advertising products.

On Dec. 30, Andrew Bosworth, the head of Facebook’s virtual and augmented reality division, wrote on his internal Facebook page that, as a liberal, he found himself wanting to use the social network’s powerful platform against Mr. Trump.

But Mr. Bosworth said that even though keeping the current policies in place “very well may lead to” Mr. Trump’s re-election, it was the right decision. Dozens of Facebook employees pushed back on Mr. Bosworth’s conclusions, arguing in the comments section below his post that politicians should be held to the same standard that applies to other Facebook users.

For now, Facebook appears willing to risk disinformation in support of unfettered speech.

“Ultimately, we don’t think decisions about political ads should be made by private companies,” Mr. Leathern said. “Frankly, we believe the sooner Facebook and other companies are subject to democratically accountable rules on this, the better.”

Mike Isaac reported from San Francisco, and Cecilia Kang from Washington. Nick Corasaniti contributed reporting from New York.

Don’t Tilt Scales Against Trump, Facebook Executive Warns

SAN FRANCISCO — Since the 2016 election, when Russian trolls and a tsunami of misinformation turned social media into a partisan battlefield, Facebook has wrestled with the role it played in President Trump’s victory.

Now, according to a memo obtained by The New York Times, a longtime Facebook executive has told employees that the company had a moral duty not to tilt the scales against Mr. Trump as he seeks re-election.

On Dec. 30, Andrew Bosworth, the head of Facebook’s virtual and augmented reality division, wrote on his internal Facebook page that, as a liberal, he found himself wanting to use the social network’s powerful platform against Mr. Trump. But citing the “Lord of the Rings” franchise and the philosopher John Rawls, Mr. Bosworth said that doing so would eventually backfire.

“I find myself desperately wanting to pull any lever at my disposal to avoid the same result,” he wrote. “So what stays my hand? I find myself thinking of the Lord of the Rings at this moment.

“Specifically when Frodo offers the ring to Galadrial and she imagines using the power righteously, at first, but knows it will eventually corrupt her,” he said, misspelling the name of the character Galadriel. “As tempting as it is to use the tools available to us to change the outcome, I am confident we must never do that or we will become that which we fear.”

In a meandering 2,500-word post, titled “Thoughts for 2020,” Mr. Bosworth weighed in on issues including political polarization, Russian interference and the news media’s treatment of Facebook. He gave a frank assessment of Facebook’s shortcomings in recent years, saying that the company had been “late” to address the issues of data security, misinformation and foreign interference. And he accused the left of overreach, saying that when it came to calling people Nazis, “I think my fellow liberals are a bit too, well, liberal.”

Mr. Bosworth also waded into the debate over the health effects of social media, rejecting what he called “wildly offensive” comparisons of Facebook to addictive substances like nicotine. He instead compared Facebook to sugar, and said users were responsible for moderating their own intake.

“If I want to eat sugar and die an early death that is a valid position,” Mr. Bosworth wrote. “My grandfather took such a stance towards bacon and I admired him for it. And social media is likely much less fatal than bacon.”

The post by Mr. Bosworth, a former head of Facebook’s advertising team, provides an unusually candid glimpse of the debates raging within Facebook about the platform’s responsibilities as it heads into the 2020 election.

The biggest of those debates is whether Facebook should change its rules governing political speech. Posts by politicians are exempt from many of Facebook’s current rules, and their ads are not submitted for fact-checking, giving them license to mislead voters with partisan misinformation.

Last year, platforms like Twitter and Google announced restrictions to their political advertising tools ahead of the 2020 election.

Facebook and its chief executive, Mark Zuckerberg, have faced heavy pressure from Democrats and Republicans, including Mr. Trump’s campaign, not to restrict its own powerful ad platform, which allows political campaigns to reach targeted audiences and raise money from supporters. But other politicians, and some Facebook employees, including a group that petitioned Mr. Zuckerberg in October, have argued that the social network has a responsibility to stamp out misinformation on its platform, including in posts by politicians.

Mr. Bosworth said that even though keeping the current policies in place “very well may lead to” Mr. Trump’s re-election, it was the right decision.

Dozens of Facebook employees pushed back on Mr. Bosworth’s conclusions, arguing in the comments section below his post that politicians should be held to the same standard as other Facebook users. They debated whether Facebook should ban or remove posts by politicians, including Mr. Trump, that included hate speech or forms of misinformation.

One Facebook employee warned that if the company continued to take its current approach, it risked promoting populist leaders around the world, including in the United States.

A Facebook spokeswoman provided a statement from Mr. Bosworth in which he said that the post “wasn’t written for public consumption,” but that he “hoped this post would encourage my co-workers to continue to accept criticism with grace as we accept the responsibility we have overseeing our platform.”

Ultimately, the decision on whether to allow politicians to spread misinformation on Facebook rests with Mr. Zuckerberg. In recent months, he has appeared to stand firm on the decision to keep the existing ad policies in place, saying that he believes Facebook should not become an arbiter of truth. But he has also left himself room to change his mind. In November, a Facebook spokesman said the company was “looking at different ways we might refine our approach to political ads.”

Among those lobbying Mr. Zuckerberg is President Trump himself, who claimed on a radio show on Monday that Mr. Zuckerberg had congratulated him on being “No. 1” on Facebook during a private dinner.

Mr. Bosworth said he believed Facebook was responsible for Mr. Trump’s 2016 election victory, but not because of Russian interference or the Cambridge Analytica scandal, in which millions of Facebook users’ data was leaked to a political strategy firm that worked with the Trump campaign. Mr. Bosworth said the fallout from the Cambridge Analytica revelations — uncovered by The Times, working with The Observer of London and The Guardian — rightly changed the conversation around how Facebook should handle user data, and which companies should be given access to that data.

But, he said, Mr. Trump simply used Facebook’s advertising tools effectively.

“He didn’t get elected because of Russia or misinformation or Cambridge Analytica,” Mr. Bosworth wrote. “He got elected because he ran the single best digital ad campaign I’ve ever seen from any advertiser. Period.”

Mr. Bosworth, a longtime confidant of Mr. Zuckerberg’s who is viewed by some inside Facebook as a proxy for the chief executive, has been an outspoken defender of the company’s positions in the past.

In 2018, BuzzFeed News published a memo Mr. Bosworth wrote in 2016 justifying the company’s growth-at-all-costs ethos, in which he said the company’s mission of connecting people was “de facto good,” even if it resulted in deaths.

After the memo’s publication, a Facebook executive said the company wished it could “go back and hit delete” on Mr. Bosworth’s 2016 post.

Lord of the Rings, 2020 and Stuffed Oreos: Read the Andrew Bosworth Memo

On Dec. 30, Andrew Bosworth, a longtime Facebook executive and confidant of Mark Zuckerberg, wrote a long memo on the company’s internal network.

In the post, titled “Thoughts for 2020,” Mr. Bosworth — who oversaw Facebook’s advertising efforts during the 2016 election and is now in charge of the company’s virtual and augmented reality division — admitted that President Trump’s savvy use of Facebook’s advertising tools “very well may lead to” his re-election. But he maintained that the company should not change its policies on political advertising, saying that doing so in order to avert a victory by Mr. Trump would be a misuse of power, comparing it to a scene from “The Lord of the Rings.”

Mr. Bosworth, who is seen by some inside Facebook as a proxy of sorts for Mr. Zuckerberg, also weighed in on a variety of issues that have vexed Facebook for the past few years, including data privacy scandals, Russian interference, political polarization and the debate over whether Facebook is healthy for society.

Here is the full post as written:

The election of Donald Trump immediately put a spotlight on Facebook. While the intensity and focus of that spotlight may be unfair I believe it isn’t unjust. Scrutiny is warranted given our position in society as the most prominent of a new medium. I think most of the criticisms that have come to light have been valid and represent real areas for us to serve our community better. I don’t enjoy having our flaws exposed, but I consider it far better than the alternative where we remain ignorant of our shortcomings.

One trap I sometimes see people falling into is to dismiss all feedback when they can invalidate one part of it. I see that with personal feedback and I see it happening with media coverage. The press often gets so many details wrong it can be hard to trust the veracity of their conclusions. Dismissing the whole because of flaws in parts is a mistake. The media has limited information to work with (by our own design!) and they sometimes get it entirely wrong but there is almost always some critical issue that motivated them to write which we need to understand.

It is worth looking at the 2016 Election which set this chain of events in motion. I was running our ads organization at the time of the election and had been for the four years prior (and for one year after). It is worth reminding everyone that Russian Interference was real but it was mostly not done through advertising. $100,000 in ads on Facebook can be a powerful tool but it can’t buy you an American election, especially when the candidates themselves are putting up several orders of magnitude more money on the same platform (not to mention other platforms).

Instead, the Russians worked to exploit existing divisions in the American public for example by hosting Black Lives Matter and Blue Lives Matter protest events in the same city on the same day. The people who shows up to those events were real even if the event coordinator was not. Likewise the groups of Americans being fed partisan content was real even if those feeding them were not. The organic reach they managed sounds very big in absolute terms and unfortunately humans are bad at contextualizing big numbers. Whatever reach they managed represents an infinitesimal fraction of the overall content people saw in the same period of time and certainly over the course of an election across all media.

So most of the information floating around that is widely believed isn’t accurate. But who cares? It is certainly true that we should have been more mindful of the role both paid and organic content played in democracy and been more protective of it. On foreign interference, Facebook has made material progress and while we may never be able to fully eliminate it I don’t expect it to be a major issue for 2020.

Misinformation was also real and related but not the same as Russian interference. The Russians may have used misinformation alongside real partisan messaging in their campaigns, but the primary source of misinformation was economically motivated. People with no political interest whatsoever realized they could drive traffic to ad-laden websites by creating fake headlines and did so to make money. These might be more adequately described as hoaxes that play on confirmation bias or conspiracy theory. In my opinion this is another area where the criticism is merited. This is also an area where we have made dramatic progress and don’t expect it to be a major issue for 2020.

It is worth noting, as it is relevant at the current moment, that misinformation from the candidates themselves was not considered a major shortcoming of political advertising on FB in 2016 even though our policy then was the same as it is now. These policies are often covered by the press in the context of a profit motive. That’s one area I can confidently assure you the critics are wrong. Having run our ads business for some time it just isn’t a factor when we discuss the right thing to do. However, given that those conversations are private I think we can all agree the press can be forgiven for jumping to that conclusion. Perhaps we could do a better job exposing the real cost of these mistakes to make it clear that revenue maximization would have called for a different strategy entirely.

Cambridge Analytica is one of the more acute cases I can think of where the details are almost all wrong but I think the scrutiny is broadly right. Facebook very publicly launched our developer platform in 2012 in an environment primarily scrutinizing us for keeping data to ourselves. Everyone who added an application got a prompt explaining what information it would have access to and at the time it included information from friends. This may sound crazy in a 2020 context but it received widespread praise at the time. However the only mechanism we had for keeping data secure once it was shared was legal threats which ultimately didn’t amount to much for companies which had very little to lose. The platform didn’t build the value we had hoped for our consumers and we shut this form of it down in 2014.

The company Cambridge Analytica started by running surveys on Facebook to get information about people. It later pivoted to be an advertising company, part of our Facebook Marketing Partner program, who other companies could hire to run their ads. Their claim to fame was psychographic targeting. This was pure snake oil and we knew it; their ads performed no better than any other marketing partner (and in many cases performed worse). I personally regret letting them stay on the FMP program for that reason alone. However at the time we thought they were just another company trying to find an angle to promote themselves and assumed poor performance would eventually lose them their clients. We had no idea they were shopping an old Facebook dataset that they were supposed to have deleted (and certified to us in writing that they had).

When Trump won, Cambridge Analytica tried to take credit so they were back on our radar but just for making [expletive] claims about their own importance. I was glad when the Trump campaign manager Brad Parscale called them out for it. Later on, we found out from journalists that they had never deleted the database and had instead made elaborate promises about its power for advertising. Our comms team decided it would be best to get ahead of the journalists and pull them from the platform. This was a huge mistake. It was not only bad form (justifiably angering the journalists) but we were also fighting the wrong battle. We wanted to be clear this had not been a data breach (which, to be fair to us, it absolutely was not) but the real concern was the existence of the dataset no matter how it happened. We also sent the journalists legal letters advising them not to use the term “breech” which was received normally by the NYT (who agreed) and aggressively by The Guardian (who forged ahead with the wrong terminology, furious about the letter) in spite of it being a relatively common practice I am told.

In practical terms, Cambridge Analytica is a total non-event. They were snake oil salespeople. The tools they used didn’t work, and the scale they used them at wasn’t meaningful. Every claim they have made about themselves is garbage. Data of the kind they had isn’t that valuable to being with and worse it degrades quickly, so much so as to be effectively useless in 12-18 months. In fact the United Kingdom Information Commissioner’s Office (ICO) seized all the equipment at Cambridge Analytica and found that there was zero data from any UK citizens! So surely, this is one where we can ignore the press, right? Nope. The platform was such a poor move that the risks associated were bound to come to light. That we shut it down in 2014 and never paid the piper on how bad it was makes this scrutiny justified in my opinion, even if it is narrowly misguided.

So was Facebook responsible for Donald Trump getting elected? I think the answer is yes, but not for the reasons anyone thinks. He didn’t get elected because of Russia or misinformation or Cambridge Analytica. He got elected because he ran the single best digital ad campaign I’ve ever seen from any advertiser. Period.

To be clear, I’m no fan of Trump. I donated the max to Hillary. After his election I wrote a post about Trump supporters that I’m told caused colleagues who had supported him to feel unsafe around me (I regret that post and deleted shortly after).

But Parscale and Trump just did unbelievable work. They weren’t running misinformation or hoaxes. They weren’t microtargeting or saying different things to different people. They just used the tools we had to show the right creative to each person. The use of custom audiences, video, ecommerce, and fresh creative remains the high water mark of digital ad campaigns in my opinion.

That brings me to the present moment, where we have maintained the same ad policies. It occurs to me that it very well may lead to the same result. As a committed liberal I find myself desperately wanting to pull any lever at my disposal to avoid the same result. So what stays my hand?

I find myself thinking of the Lord of the Rings at this moment. Specifically when Frodo offers the ring to Galadrial and she imagines using the power righteously, at first, but knows it will eventually corrupt her. As tempting as it is to use the tools available to us to change the outcome, I am confident we must never do that or we will become that which we fear.

The philosopher John Rawls reasoned that the only moral way to decide something is to remove yourself entirely from the specifics of any one person involved, behind a so called “Veil of Ignorance.” That is the tool that leads me to believe in liberal government programs like universal healthcare, expanding housing programs, and promoting civil rights. It is also the tool that prevents me from limiting the reach of publications who have earned their audience, as distasteful as their content may be to me and even to the moral philosophy I hold so dear.

That doesn’t mean there is no line. Things like incitement of violence, voter suppression, and more are things that same moral philosophy would safely allow me to rule out. But I think my fellow liberals are a bit too, well, liberal when it comes to calling people Nazi’s.

If we don’t want hate mongering politicians then we must not elect them. If they are getting elected then we have to win hearts and minds. If we change the outcomes without winning the minds of the people who will be ruled then we have a democracy in name only. If we limit what information people have access to and what they can say then we have no democracy at all.

This conversation often raises the alarm around filter bubbles, but that is a myth that is easy to dispel. Ask yourself how many newspapers and news programs people read/watched before the internet. If you guessed “one and one” on average you are right, and if you guessed those were ideologically aligned with them you are right again. The internet exposes them to far more content from other sources (26% more on Facebook, according to our research). This is one that everyone just gets wrong.

The focus on filter bubbles causes people to miss the real disaster which is polarization. What happens when you see 26% more content from people you don’t agree with? Does it help you empathize with them as everyone has been suggesting? Nope. It makes you dislike them even more. This is also easy to prove with a thought experiment: whatever your political leaning, think of a publication from the other side that you despise. When you read an article from that outlet, perhaps shared by an uncle or nephew, does it make you rethink your values? Or does it make you retreat further into the conviction of your own correctness? If you answered the former, congratulations you are a better person than I am. Every time I read something from Breitbart I get 10% more liberal.

What does all of this say about the nature of the algorithmic rewards? Everyone points to top 0.1% content as being acutely polarized but how steep are the curves? What does the top 1% or 5% look like? And what is the real reach across those curves when compared to other content? I think the call for algorithmic transparency can sometimes be overblown but being more transparent about this type of data would likely be healthy.

What I expect people will find is that the algorithms are primarily exposing the desires of humanity itself, for better or worse. This is a Sugar, Salt, Fat problem. The book of that name tells a story ostensibly about food but in reality about the limited effectiveness of corporate paternalism. A while ago Kraft foods had a leader who tried to reduce the sugar they sold in the interest of consumer health. But customers wanted sugar. So instead he just ended up reducing Kraft market share. Health outcomes didn’t improve. That CEO lost his job. The new CEO introduced quadruple stuffed Oreos and the company returned to grace. Giving people tools to make their own decisions is good but trying to force decisions upon them rarely works (for them or for you).

In these moments people like to suggest that our consumers don’t really have free will. People compare social media to nicotine. I find that wildly offensive, not to me but to addicts. I have seen family members struggle with alcoholism and classmates struggle with opioids. I know there is a battle for the terminology of addiction but I side firmly with the neuroscientists. Still, while Facebook may not be nicotine I think it is probably like sugar. Sugar is delicious and for most of us there is a special place for it in our lives. But like all things it benefits from moderation.

At the end of the day we are forced to ask what responsibility individuals have for themselves. Set aside substances that directly alter our neurochemistry unnaturally. Make costs and trade-offs as transparent as possible. But beyond that each of us must take responsibility for ourselves. If I want to eat sugar and die an early death that is a valid position. My grandfather took such a stance towards bacon and I admired him for it. And social media is likely much less fatal than bacon.

To bring this uncharacteristically long and winding essay full circle, I wanted to start a discussion about what lessons people are taking away from the press coverage. My takeaway is that we were late on data security, misinformation, and foreign interference. We need to get ahead of polarization and algorithmic transparency. What are the other big topics people are seeing and where are we on those?

Facebook Says It Will Ban ‘Deepfakes’

WASHINGTON — Facebook said on Monday that it would ban videos that are heavily manipulated by artificial intelligence, known as deepfakes, from its platform.

In a blog post, a company executive said Monday evening that the social network would remove videos altered by artificial intelligence in ways that “would likely mislead someone into thinking that a subject of the video said words that they did not actually say.”

The policy will not extend to parody or satire, the executive, Monika Bickert, said, nor will it apply to videos edited to omit or change the order of words.

Ms. Bickert said all videos posted would still be subject to Facebook’s system for fact-checking potentially deceptive content. Content that is found to be factually incorrect appears less prominently in the site’s news feed and is labeled false.

The company’s new policy was first reported by The Washington Post.

Facebook was heavily criticized last year for refusing to take down an altered video of Speaker Nancy Pelosi that had been edited to make it appear as though she was slurring her words. At the time, the company defended its decision, saying it had subjected the video to its fact-checking process and had reduced its reach on the social network.

It did not appear that the new policy would have changed the company’s handling of the video of Ms. Pelosi.

The announcement comes ahead of a hearing before the House Energy and Commerce Committee on Wednesday morning, during which Ms. Bickert, Facebook’s vice president of global policy management, is expected to testify on “manipulation and deception in the digital age,” alongside other experts.

Facebook is still the No. 1 platform for sharing false political stories, according to disinformation researchers, which makes it all the more urgent to spot and halt novel forms of digital manipulation before they spread.

Computer scientists have long warned that new techniques used by machines to generate images and sounds that are indistinguishable from the real thing can vastly increase the volume of false and misleading information online. And false political information is circulating rapidly online ahead of the 2020 presidential elections in the United States.

In late December, Facebook announced it had removed hundreds of accounts, including pages, groups and Instagram feeds, meant to fool users in the United States and Vietnam with fake profile photos generated with the help of artificial intelligence.

David McCabe reported from Washington, and Davey Alba from New York.

2020 Campaigns Throw Their Hands Up on Disinformation

In 2018, Lisa Kaplan assembled a small team inside the re-election campaign of Senator Angus King, an independent from Maine. Wary of how Russia had interfered in the 2016 presidential election, the team set out to find and respond to political disinformation online.

The team noticed some false statements shared by voters, and traced the language back to Facebook pages with names like “Boycott The NFL 2018.” It alerted Facebook, and some pages were removed. The people behind the posts, operating from places like Israel and Nigeria, had misled the company about their identities.

Today, Ms. Kaplan said, she knows of no campaigns, including among the 2020 presidential candidates, that have similar teams dedicated to spotting and pushing back on disinformation.

They may “wake up the day after the election and say, ‘Oh, no, the Russians stole another one,’” she said.

The examples are numerous: A hoax version of the Green New Deal legislation went viral online. Millions of people saw unsubstantiated rumors about the relationship between Ukraine and the family of former Vice President Joseph R. Biden Jr. A canard about the ties between a Ukrainian oil company and a son of Senator Mitt Romney, the Utah Republican, spread widely, too.

Still, few politicians or their staffs are prepared to quickly notice and combat incorrect stories about them, according to dozens of campaign staff members and researchers who study online disinformation. Several of the researchers said they were surprised by how little outreach they had received from politicians.

Campaigns and political parties say their hands are tied, because big online companies like Facebook and YouTube have few restrictions on what users can say or share, as long as they do not lie about who they are.

But campaigns should not just be throwing their hands up, said some researchers and campaign veterans like Ms. Kaplan, who now runs a start-up that helps fight disinformation. Instead, they said, there should be a concerted effort to counter falsehoods.

“Politicians must play some defense by understanding what information is out there that may be manipulated,” said Joan Donovan, a research director at Harvard University’s Shorenstein Center. Even more important for politicians, she said, is pushing “high-profile and consistent informational campaigns.”

Too many campaigns are now left on their heels, said Simon Rosenberg, who tried to thwart disinformation for the Democratic Congressional Campaign Committee before the 2018 midterm election.

“The idea of counterdisinformation doesn’t really exist as a strategic objective,” he said.

Political groups are not ignoring false information. Bob Lord, the chief security officer of the Democratic National Committee, encourages campaigns to alert his organization when they see it online.

The committee also gives advice on when and how to respond. Mr. Lord said campaigns must weigh the cost of ignoring a falsehood against the risk of drawing additional attention to it by speaking out.

But he said his reach was limited.

“The amount of disinformation that is floating around can cover almost any possible topic,” Mr. Lord said, and his team cannot look into each reported piece. If campaigns need connections to social media companies, he said, “we’re happy to make some.”

In September, President Trump’s re-election campaign released an ad that included an incorrect statement about Mr. Biden’s dealings with Ukraine. The campaign posted the ad on Facebook and the president’s Twitter account. Between the two services, the ad has been viewed more than eight million times.

Mr. Biden’s campaign publicized letters that it had written to Facebook, Twitter, YouTube and Fox News, asking the companies to ban the ad. But it remained up. In mid-November, the Biden campaign released a website called Just the Facts, Folks.

Jamal Brown, a spokesman for the Biden campaign, said it was not the campaign’s responsibility alone to push back on all falsehoods. But, he said, “it is incumbent upon all of us, both public- and private-sector companies, users, and elected officials and leaders, to be more vigilant in the kinds of content we engage and reshare on social media.”

Several months ago, a team at the Democratic Congressional Campaign Committee flagged some ads on Facebook to the office of Representative Ilhan Omar, a Minnesota Democrat. The ads called for an investigation into unfounded accusations that she had violated several laws.

After the committee and Ms. Omar’s campaign contacted Facebook, the company said it would limit the prominence of the ads in people’s feeds. But the ads, which have now reached over one million views, remain active.

Facebook does not remove false news, though it does label some stories as false through a partnership with several fact-checking organizations. It has said politicians like Mr. Trump can run ads that feature their “own claim or statement — even if the substance of that claim has been debunked elsewhere.”

In October, Twitter announced plans to forbid all political ads. But the company does not screen for false accusations. Twitter said it did not want to set a precedent for deciding what is and is not truthful online.

In an email, Ms. Omar said it was “not enough” to rely on private companies alone.

“We as a nation need to think seriously about ways to address online threats to our safety and our democracy while protecting core values like free speech,” she said.

Academics and researchers said it was surprising how little outreach there had been from campaigns that faced disinformation operations. Many of the researchers can dissect when a false idea first appeared online, and how it spread.

Graham Brookie, the director of the Atlantic Council’s Digital Forensic Research Lab, said there needed to be “more ingrained information sharing” among politicians, campaign staff, social media companies, civil society groups and, in some cases, law enforcement to counteract the increasing volume of election disinformation.

But when disinformation is used as a tool in partisan politics, Mr. Brookie said, the discussion becomes “a Rorschach test to reaffirm each audience’s existing beliefs, regardless of the facts.”

“One side will accuse the other, and then disinformation itself is weaponized,” he said.

Chris Pack, communications director of the National Republican Congressional Committee, said the disinformation that his party fought was “perpetuated by a liberal press corps that is still incapable of wrapping their heads around the fact that President Trump won the 2016 election.”

That leaves some in the research community wary of wading in at all, said Renee DiResta, the technical research manager for the Stanford Internet Observatory, which studies disinformation.

“I think this is a concern for a lot of academics who don’t want to work directly with a campaign,” Ms. DiResta said, “because that would be problematic for their neutrality.”

Nick Corasaniti contributed reporting.