Last Wednesday, on May 6, 2020, Facebook announced the members of its new Oversight Board. The board, which will eventually grow to 40 members, currently comprises 20 legal scholars, journalists, former politicians and even a Nobel Peace Prize laureate, all of them independent of Facebook.
Zuckerberg announced the initiative in November 2018 in an attempt to bring more oversight to Facebook’s controversial content moderation: board members will have the final say on what content Facebook deletes and what it leaves on the site. From the body’s initial conception to last week’s member announcement, more than 2,000 experts from around the world have weighed in on the board’s development. Members plan to begin operations this fall. Bound by its own bylaws, the board will act as a Supreme Court of sorts for Facebook’s content moderation, and Mark Zuckerberg himself will be unable to contest its final decisions.
In recent years the social media giants Twitter and Facebook have come under increasing governmental and public pressure to be more transparent about their internal practices. Outcry began in March 2018, when whistleblower Christopher Wylie exposed the Facebook-Cambridge Analytica scandal to The New York Times, laying Facebook’s mishandling of user data and breaches of privacy bare before the world. Wylie had previously worked at Cambridge Analytica, the now-defunct data analytics firm, and he blew the whistle on the company’s purchase of personal Facebook data on more than 70 million unwitting Americans. The firm used the harvested data to build psychological profiles of users, essentially an accumulation of each user’s digital footprint (their likes and dislikes, for instance) that modeled a personality from their online activity, and then sold that curated data to political campaigns in the form of targeted political advertising. Donald Trump’s 2016 presidential campaign, for example, used the data to micro-target US voters on various digital platforms based on their psychographic profiles. Until this point, the political significance of social media had not been widely recognized.
In the years since Facebook’s Cambridge Analytica scandal, the US government has scrambled to understand how a breach like this could have occurred right under its nose. Mark Zuckerberg was called to testify on Capitol Hill, where in his opening remarks he admitted that his company "didn’t do enough" to prevent the site from being used for harmful purposes. Since March 2019 he has called for governments and regulators to play a "more active role" in the judgments the site must make on a daily basis. In an op-ed for The Washington Post he argued that new regulation was needed in four areas: "harmful content, election integrity, privacy and data portability." His new Oversight Board may be only part of the answer.
Zuckerberg’s decision to implement a Facebook Oversight Board may have broader implications for other social media behemoths. Content deletion and policing have long been contentious issues for social media companies, and if the board proves effective, it could offer companies a new strategy for forgoing government regulation. Lawyer Kate Klonick told WIRED’s Steven Levy that this is a historic moment because it’s “the first time a private transnational company has voluntarily assigned a part of its policies to an external body like this.”
the weaponization of social media
Shortly thereafter, the world learned of the havoc that millions of propaganda ‘bot’ accounts could wreak on a democratic nation like the United States. Seemingly harmless social media sites, it became clear, could be weaponized by bad-faith state and non-state actors to disrupt public discourse and potentially compromise election integrity.
To lawmakers around the country, it has become increasingly apparent that allowing social media sites to continue operating unregulated has produced a web that more closely resembles the Wild West than the “vast democratic forums of cyberspace” that Justice Kennedy envisioned before he retired.
In his opinion for Packingham v. North Carolina, a 2017 First Amendment Supreme Court case, Justice Kennedy argued that cyberspace, and social media in particular, is now the most important place for the exchange of views, and that First Amendment protections for speech should extend to social media.
Such a place would formerly have been described as a public forum: somewhere the general public could openly debate, express their views, address grievances to their government and, in general, say anything protected by the First Amendment.
Since the advent of the Internet, and specifically of social media sites, speech has migrated from sidewalks and parks to Facebook newsfeeds and Twitter timelines.
A 2019 Pew Research Center study of social media use in the United States found that seven in ten Americans now connect with each other on social media sites, and that the share of American adults using social media rose from 5 percent to 72 percent in only 14 years.
This mass migration raises the question of whether an individual’s speech is protected in the digital sphere as it was in the physical space of a public forum.
That question has yet to be addressed at the federal level, and in light of recent scandals exposing big tech’s lack of regulation across a myriad of issues - privacy, data and speech control, to name a few - a fire has been lit under the big tech regulation debate.
Jonathan Kotler, avid privacy advocate and media law professor at the University of Southern California, proposed a less traditional approach to social media regulation. The first step? Not calling social media sites the press.
“Just stop calling these guys the press then we can regulate them,” he said. “They’re masquerading as the press because that way they remain unregulated, but they’re not. No more than a guy who goes out to buy a scalpel declares he’s a doctor.”
He voiced support for state rather than federal regulation, arguing that state governments would be more efficient and that “they are not afraid to try what works.”
Kotler also noted a caveat that would make governmental regulation far more effective, though he concedes it is unlikely to transpire.
“We have to overrule Citizens United,” Kotler said, in reference to the landmark 2010 Citizens United v. Federal Election Commission decision, which struck down limits on corporations’ independent political spending. He added, “If I know one thing, I know that money is not speech, it’s money, and Congress is in big tech’s pocket.”
Whatever the approach, Kotler holds steadfast that regulation is necessary. “I think [social media giants] abuse their power, I think their power is tremendous and needs to be curtailed,” he said.
“I think the lack of regulation online is the biggest threat to democracy in my lifetime. In high school I spent the whole week under my wooden desk in San Francisco because we were afraid that Khrushchev was going to nuke the United States during the Cuban Missile Crisis. That was much less a threat to American society than social media are,” he added.
the proliferation of disinformation
One of the greatest concerns to government and public alike since the Cambridge Analytica scandal has been the threat of bad-faith actors weaponizing social media sites to undermine democratic practices around the world. One instance of this phenomenon has been the widespread dissemination of disinformation on Twitter and Facebook, which together host a great majority of public discourse, particularly political discourse. This has presented social media users with the unprecedented threat of constant, and sometimes targeted, exposure to false news intended primarily to sow tension, distrust and polarization.
These bad-faith actors have harnessed propaganda bots and ‘sockpuppet’ accounts to infiltrate these sites, and they have done so pervasively because the fake accounts are designed to masquerade as real social media users. The sheer number of these accounts, and their continuously improving sophistication, means that users and platforms alike have struggled to identify and combat the problem in any meaningful way.
In a 2017 study of human-bot interaction on social media, researchers at USC and Indiana University estimated that between 9 and 15 percent of active Twitter accounts are bots, a figure the researchers themselves called conservative. Facebook, meanwhile, reportedly hosts anywhere between 67 million and 140 million bot accounts.