is it time for the government to regulate social media?

Social Media Giants Face Public Backlash

Speech has migrated from sidewalks and parks to Facebook newsfeeds and Twitter timelines

On Wednesday, May 6, 2020, Facebook announced the members of the company’s new Facebook Oversight Board. The board, which will eventually grow to 40 members, is currently composed of 20 Facebook-independent legal scholars, journalists, former politicians and even a Nobel Peace Prize laureate.

Zuckerberg first announced the move in November 2018 as an attempt to provide more oversight of Facebook’s controversial content moderation. Board members will have final say on what content Facebook should delete and what it should leave on the site. From the body’s inception to last week’s member announcement, over 2,000 experts from around the world have weighed in on the development of the oversight board. Members plan to begin operations this fall. Operating under its own bylaws, the board will act as a Supreme Court of sorts for Facebook’s content moderation, with Mark Zuckerberg himself unable to overturn its final decisions.

In recent years the social media giants Twitter and Facebook have come under increasing governmental and public pressure to be more transparent about their internal practices. Outcry began in March 2018, when whistleblower Christopher Wylie exposed the Facebook-Cambridge Analytica scandal to The New York Times, laying bare Facebook’s mishandling of user data and breaches of privacy. Wylie had previously worked at Cambridge Analytica, the now-shutdown data analytics firm, and he blew the whistle on the company’s purchase of personal Facebook data on over 70 million unwitting Americans. The firm used this data to build psychological profiles of users, essentially accumulations of each user’s digital footprint, their likes and dislikes for instance, and then sold that curated data to political campaigns in the form of targeted political advertising. Donald Trump’s 2016 presidential campaign, for example, used the data to micro-target US voters on various digital platforms based on their psychographic profiles. Until this point, the political significance of social media had not been widely recognized.

In the years since Facebook’s Cambridge Analytica scandal, the US government has scrambled to understand how a breach like this could have occurred right under its nose. Mark Zuckerberg was called to testify on Capitol Hill, where he admitted in his opening remarks that his company “didn’t do enough” to prevent the site from being used for harmful purposes. Since March 2019 he has called for governments and regulators to play a “more active role” in the judgements that the site must make on a daily basis. In an op-ed for The Washington Post he argued that new regulation was needed in four areas: “harmful content, election integrity, privacy and data portability.” His new Oversight Board may be only part of the answer.

Zuckerberg’s decision to implement a Facebook Oversight Board may have broader implications for other social media behemoths. Content deletion and policing have long been contentious issues for social media companies, and if the board proves effective, it could offer companies a new strategy for staving off government regulation. Lawyer Kate Klonick told WIRED’s Steven Levy that this is a historic moment because it’s “the first time a private transnational company has voluntarily assigned a part of its policies to an external body like this.”

the weaponization of social media

In the wake of the Cambridge Analytica revelations, the world learned of things like the havoc that millions of propaganda ‘bot’ accounts could wreak on a democratic nation like the United States. It became clear that seemingly harmless social media sites could be weaponized by bad-faith state and non-state actors to disrupt public discourse and undermine election integrity.

To lawmakers around the country, it has become increasingly apparent that allowing social media sites to continue operating unregulated has resulted in a web that more closely resembles the Wild West than the “vast democratic forums of cyberspace” that Justice Kennedy had in mind before he retired.

In the majority opinion for Packingham v. North Carolina, a 2017 First Amendment Supreme Court case, Justice Kennedy argued that cyberspace, and social media in particular, is now the most important place for the exchange of views. He argued that the First Amendment should protect speech on social media just as it protects speech in traditional public spaces.

Traditionally, such a space would have been called a public forum: a place where the general public could openly debate, express their views, address grievances to their government and, in general, say anything protected by the First Amendment.

Since the advent of the Internet, and specifically of social media sites, speech has migrated from sidewalks and parks to Facebook newsfeeds and Twitter timelines.

A 2019 Pew Research Center study of social media use in the United States found that seven in ten Americans now connect with each other on social media sites, and that the share of American adults using social media rose from 5 percent to 72 percent in only 14 years.

This mass migration raises the question of whether an individual’s speech is protected in the digital sphere as it was in the physical space of a public forum.

Cartoon by Ben Garrison

The answer has yet to be addressed at the federal level. In light of recent scandals exposing big tech’s lack of regulation across a myriad of issues, privacy, data and speech control to name a few, a fire has been lit under the big tech regulation debate.

Jonathan Kotler, avid privacy advocate and media law professor at the University of Southern California, proposed a less traditional approach to social media regulation. The first step? Not calling social media sites the press.

“Just stop calling these guys the press then we can regulate them,” he said. “They’re masquerading as the press because that way they remain unregulated, but they’re not. No more than a guy who goes out to buy a scalpel declares he’s a doctor."

Image Credit: Joe Flood / Flickr

He voiced his support for state rather than federal regulation, arguing that state governments would be more efficient and that “they are not afraid to try what works.”

Kotler also noted a caveat that would make governmental regulation far more effective, though he knows it is unlikely to transpire.

“We have to overrule Citizens United,” Kotler said, referring to Citizens United v. Federal Election Commission, the landmark 2010 Supreme Court case that struck down limits on independent campaign spending by corporations. He added, “If I know one thing, I know that money is not speech, it’s money, and Congress is in big tech’s pocket.”

Despite the nuance of his approach, Kotler remains steadfast that regulation is necessary. “I think [social media giants] abuse their power, I think their power is tremendous and needs to be curtailed,” he said.

“I think the lack of regulation online is the biggest threat to democracy in my lifetime. In high school I spent the whole week under my wooden desk in San Francisco because we were afraid that Khrushchev was going to nuke the United States during the Cuban Missile Crisis. That was much less a threat to American society than social media are,” he added.

the proliferation of disinformation

One of the greatest concerns to the government and public alike since the Cambridge Analytica scandal has been the threat of bad-faith actors weaponizing social media sites and using them to undermine democratic practices around the world. One such instance of this phenomenon has been the widespread dissemination of disinformation on Twitter and Facebook, both of which host a great majority of public discourse, particularly political discourse. This has presented social media users with the unprecedented threat of constant, and sometimes targeted, exposure to false news primarily intended to sow tension, distrust and polarization.

These bad-faith actors have harnessed propaganda bots and 'sockpuppet' accounts to infiltrate these sites. They have been able to do so pervasively because these fake accounts are designed to masquerade as real social media users. The sheer number of these accounts and their continuously improving sophistication have meant that users and sites alike have struggled to identify and combat the problem in any meaningful way.

In a 2017 study of human-bot interaction on social media, researchers from USC and Indiana University estimated that 9 to 15 percent of active Twitter accounts are bots, a figure they called a conservative estimate. Facebook, meanwhile, reportedly hosts anywhere between 67 million and 140 million bot accounts.

Should Twitter and Facebook be Regulated?

This animated video considers some of the reasons why social media giants might soon need to be regulated.

Audio Clips

This is a compilation of soundbites from interviews. Included are interview snippets from media law professor Jonathan Kotler, UK media law expert Lord Anthony St John, cyber security researcher Daniel Kats and NewsGuard broadcast editor Joe Danielewicz. The clips are ordered from top left to bottom right.

the propaganda bots problem

The number of real people these bot accounts can influence was widely underestimated until the extent of their impact on the 2016 presidential election became clear. Research by two economics professors on the impact of social media and bot-produced fake news on the 2016 election showed that in some cases a single bot account was able to “reach as many readers as Fox News, CNN, or The New York Times.” Moreover, they showed that “the most popular fake news stories were more widely shared on Facebook than the most popular mainstream news stories.”

Daniel Kats, principal researcher at NortonLifeLock Inc., has spent the last year delving into the quantifiable data on social media disinformation and the role of ‘propaganda bots’ in spreading it. NortonLifeLock is a prominent American company that provides cybersecurity software and products to Internet users. Kats previously specialized in the intersection of machine learning and cybersecurity.

BRENDAN SMIALOWSKI/Getty Images

In defining propaganda bots and sockpuppet accounts, Kats emphasized the fundamental key to their influence: coordination. Sockpuppet, or “imposter,” accounts are controlled by a “central puppet master”; during the 2016 presidential election, for example, the Russian company known as the Internet Research Agency played this role. The puppet master’s job is to control large groups of sockpuppet accounts and get them to act in concert, something Kats said Facebook calls “coordinated inauthentic activity” or “mass campaigns.” A handful of accounts pick a news story or a tweet and “push it out to all their phones,” while the rest act as “amplifiers” for that particular message. The goal is to get the message trending in order to reach as many people as possible.
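The coordinated pattern Kats describes can be illustrated with a toy sketch: flag any message that a suspiciously large number of distinct accounts push out within a short time window. This is only a minimal illustration of the idea; the account names, thresholds and data here are hypothetical, not drawn from Facebook's or Twitter's actual detection systems.

```python
from collections import defaultdict

# Each post is (account_id, unix_timestamp, text).
# A message is flagged as "coordinated" if at least min_accounts
# distinct accounts post identical text within window_secs of each other.
def find_coordinated_messages(posts, min_accounts=3, window_secs=300):
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text].append((ts, account))

    flagged = []
    for text, events in by_text.items():
        events.sort()
        # Slide a window over the timestamps, counting distinct accounts.
        for i in range(len(events)):
            accounts = {acct for ts, acct in events
                        if 0 <= ts - events[i][0] <= window_secs}
            if len(accounts) >= min_accounts:
                flagged.append(text)
                break
    return flagged

posts = [
    ("bot_1", 100, "Share this story now!"),
    ("bot_2", 160, "Share this story now!"),
    ("bot_3", 220, "Share this story now!"),
    ("user_9", 100, "Lovely weather today."),
]
print(find_coordinated_messages(posts))  # flags only the amplified message
```

Real detection systems work at vastly larger scale and use fuzzier matching (near-duplicate text, shared links, retweet graphs), but the core signal is the same: many ostensibly independent accounts acting in concert.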

Kats said that companies the size of Facebook and Twitter had eclipsed the realm of an ordinary private enterprise and should therefore have to contend with different rules. “If you are a platform over a certain size, it is your responsibility to police content on your platform with respect to disinformation,” he said. “There has to be special rules that allow independent auditing of whether they’re doing a good job.” In effect, Facebook’s new Oversight Board will do just this.

“Facebook has just done an abysmal job and is doing worse than the bare minimum of having any sort of transparency,” Kats added.

As for the possibility that users might learn to recognize bots themselves, Kats is not too hopeful. Even though he looks at bot accounts every day in his research, he admitted he has “real trouble” separating a real person from a bot. He thinks the metadata associated with posts is the most accurate detection signal, but that kind of information is not available to users. “The truth is they kind of look like you and me,” he said. “They even make spelling mistakes just like the average person.”
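To make Kats's point about metadata concrete, here is a deliberately simplified scoring sketch. The features and weights are illustrative assumptions, not a real bot-detection model: the idea is only that account metadata (age, posting rate, follower ratios) carries signals that the text of a post does not.

```python
# A toy metadata-based bot score. Features and weights are
# hypothetical; real classifiers use hundreds of such signals.
def bot_score(account):
    score = 0.0
    # Very young accounts posting at high volume are suspicious.
    if account["age_days"] < 30:
        score += 0.3
    if account["posts_per_day"] > 100:
        score += 0.4
    # Following many accounts while having few followers is a weak signal.
    followers = max(account["followers"], 1)
    if account["following"] / followers > 20:
        score += 0.3
    return round(score, 2)  # 0.0 (human-like) .. 1.0 (bot-like)

human = {"age_days": 2000, "posts_per_day": 3,
         "followers": 250, "following": 300}
bot = {"age_days": 5, "posts_per_day": 400,
       "followers": 4, "following": 900}
print(bot_score(human), bot_score(bot))  # prints: 0.0 1.0
```

None of these fields are visible at a glance to an ordinary user scrolling a feed, which is why, as Kats notes, the accounts "kind of look like you and me."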

In the future, he said the “nightmare scenario” is “disinformation that fundamentally alters how we perceive truth, how we perceive consensus and how we perceive society and each other.”

lessons from regulation in the United Kingdom

In the United Kingdom, however, this kind of nightmare scenario might already have been curtailed. As part of his 2019 election manifesto, Prime Minister Boris Johnson included the Communications Bill, which addressed a wide range of issues including tighter regulation of online content.

The government has announced that it plans to grant new powers to Ofcom, the Office of Communications, the UK’s government-approved media regulator. Ofcom would be able to hold social media platforms to stricter account for protecting people from harmful content, such as child abuse and terrorist material, and to have such content pulled down from these sites without delay.

Lord Anthony St. John served on the House of Lords Select Committee for Communications and Social Media from 2014 to 2017. He said that Parliament’s previous approach was to rely on social media giants to self-regulate, much as in the United States, and that, as in the U.S., the “feeling of freedom of speech is of paramount importance.”

However, Lord St. John said that due to increasing public pressure to protect people from illicit content on these sites, “regulations will soon be introduced” and “this will tighten up the scrutiny of content.” Still, he does not see this regulation ever reaching the point of sweeping censorship or “draconian legislation.”

It seems unlikely that the U.S. will follow the more hands-on approach of its European and British counterparts in the foreseeable future. Doing so would require sweeping federal legislation to regulate big tech that, on the face of it, might run against America’s natural free-enterprise inclination. After all, social media sites are private companies.

However, Kats warned that if the government does not put pressure on these companies to do better and “have some sort of stick to threaten them with” then “nothing is going to change.” He said, “When the thing at stake is democracy and truth, it seems extremely wrong that there is so little accountability.”

He also added that the issue of disinformation online is a “phenomenon that doesn’t just affect politics.” He said that this is an important point for social media users to understand.

“It really shapes the world around you in very weird and hard to anticipate ways,” Kats said.

He added, “It can create protests and riots in places where there were none before. It can sow ethnic hatred in places where there were tensions between ethnic groups. It can spread disinformation about the efficacy of vaccines and about what you should do during a pandemic, like whether you should stay home or not, whether you should trust your government or not.”

He said that while fellow students or colleagues may be able to recognize this issue, “do you trust everyone else to be able to differentiate [between disinformation and real information]?” And if that answer is no, “you have to really think that there need to be guardrails around this.”

The companies are not deaf to the public’s concerns. On May 11, Twitter announced that it would begin adding labels to disputed COVID-19 tweets in an effort to combat the proliferation of misinformation on its platform. And Facebook’s implementation of its new Oversight Board may have ushered in a new era for the social media giants, one marked by control and regulation rather than growth and freedom.

The Internet Research Agency

The location shown on Google Maps is one of the known addresses of a Russian company called the Internet Research Agency, also referred to in Russian internet slang as 'the Trolls from Olgino'. The company engages in the online manipulation tactic known as 'trolling': it is a troll farm that employs people to disseminate propaganda and false information using fake accounts. The company began advocating for then-candidate Donald Trump using these methods as early as December 2015.
