The Morningside Post


Should social media self-regulate with a self-created supervising group? Give it a shot!

By: Yizhu Yuan 

Last month, Facebook’s ‘Oversight Board’ issued its first rulings, overturning four of the five removal decisions it reviewed: Facebook had originally taken down those four posts, and the Board restored them. Social media companies face shrinking trust from the public and waning patience from governments. Controversial as the Board may be, we should give it a chance.

After years of self-regulation through warning labels and, more recently, account suspensions, social media platforms are still blamed for responding ‘too little and too late’ to hate speech and disinformation. Yet other critics accuse them of deciding what people can and cannot read. If they regulate aggressively, they are berated for policing speech; if they hold back, accusations of laziness follow.

Even after decades of debate among scholars and politicians, the threshold for deciding which kinds of political speech should be taken down remains blurry. Because of the tremendous number of posts generated and the speed at which they circulate, it is difficult, if not impossible, for governments to detect and respond to inappropriate content in a timely manner. It was estimated in 2018 that Facebook’s 3 billion users sent and shared 9 million messages and 3 million links per hour. To regulate these posts, Facebook touted algorithms as one solution, with human moderators stepping in when automation goes wrong. In this way, the platform can catch problematic content early and limit its influence.

The Oversight Board serves as a source of confidence in platforms’ self-regulation. Composed of 20 journalists, lawyers, human rights activists, and academics, the Board is tasked with reviewing Facebook’s removal decisions, checking whether those decisions are consistent with the company’s own policies, and recommending policy changes. The Board is designed to operate independently of the company. Facebook appointed the first Board members in May 2020 and made its consultation process public; in October 2020, the Board began accepting cases from users who had exhausted Facebook’s own appeal process.

In its first rulings, the Board addressed issues including hate speech, sexual imagery, speech by dangerous individuals, and disinformation. One principle is clear: in making its decisions, the Board takes context and users’ subjective intent into account, with the aim of overcoming the built-in shortcomings of algorithms. In one case, an Instagram post meant to raise public awareness of breast cancer had been removed because it contained images showing uncovered female nipples. The Board concluded that Facebook’s removal was inconsistent with its own policies, arguing that the post fell under the exception to Facebook’s Community Standard on Adult Nudity and Sexual Activity, since its intent was to ‘raise awareness about a cause or educational or medical reasons’. The Board seemed to ‘err on the side of free speech’, as described by its member Alan Rusbridger, the former editor-in-chief of The Guardian, since it begins its review ‘with the supremacy of free speech’ and then considers ‘the cause in this particular case why free speech should be curtailed’.

The Board’s decisions are binding. Monika Bickert, Facebook’s vice president of content policy, said in a statement that Facebook would comply with the Board’s decisions to restore content and respond to the Board’s policy recommendations within 30 days.

Yes, a 20-member Oversight Board cannot stop a company as powerful as Facebook from over-regulating users’ speech. But it provides a clear and transparent appeal process, and therefore a possible check on the company’s controversial decisions. Its rulings also sharpen the standards that will guide future moderation. While it is impossible to regulate content once and for all, that very complexity is exactly why we need ongoing experiments to find a way forward.

It is also clear that pressure for content regulation will only mount. Surveys from the Pew Research Center indicate that after the riot at the U.S. Capitol, majorities in both parties recognized the link between political figures’ language and the likelihood of violence, and believed that inciting content should be removed. Governments are rethinking these companies’ immunity from legal consequences for user-generated content, as shown in the ongoing debate over reforming Section 230 of the Communications Decency Act.

Predictably, the future work of internet companies’ self-regulators will be criticized along the way, as some already see the recent decisions to ban individuals’ social media accounts as a ‘dangerous precedent’. While governments struggle to regulate these platforms, the ongoing regulatory efforts and experiments by these self-regulators are necessary.