The Morningside Post

OPINION: To fight disinformation on digital platforms, policymakers should promote credibility labels

By Zachey Kliger (MPA ’22)

“Canceling voices is a slippery slope,” Spotify CEO Daniel Ek wrote two Sundays ago in an internal memo to staffers.

Ek’s comments came in the wake of an evolving saga surrounding “The Joe Rogan Experience,” a controversial podcast hosted by comedian Joe Rogan. Spotify struck a $200 million deal in 2020 to host the podcast exclusively on its platform.

The growing backlash against Rogan, whose podcast draws an estimated 11 million listeners per episode, began in January 2022, when more than 200 public health experts sent an open letter to Spotify raising concerns that Rogan was using his platform to spread COVID-19 misinformation. Since then, artists Neil Young and Joni Mitchell have asked Spotify to remove their music from the platform. More recently, a viral video posted by musician India.Arie, which shows Rogan using a racial slur dozens of times, has intensified calls for Spotify to act.

To be fair, Spotify’s response hasn’t been awful. The platform updated its rules for contributors and will add a content advisory warning to podcasts that feature discussion about COVID-19. And, for what it’s worth, Ek seems genuinely contrite and intent on doing better — a low bar, but still a refreshing contrast to the likes of Google and Facebook, whose executives often recycle callous talking points in response to similar criticisms over content moderation practices.

But the episode raises an important question: how can public policy effectively combat the dissemination of harmful disinformation on Spotify, Facebook, YouTube, and other digital platforms?

It’s a problem that lawmakers on Capitol Hill have been grappling with seriously for at least three years. Dozens of bills have been drafted in Congress to reduce the spread of harmful content online. The vast majority of proposals have focused on amending Section 230, the provision of the 1996 Communications Decency Act that shields online platforms from liability for user-generated content. Many in the tech policy community caution against this approach, warning that rolling back Section 230 could lead companies to over-censor content and would hurt smaller platforms the most. Some proposals go further, updating competition law to limit market concentration, mandating disclosure in political ads, or expanding data privacy protections.

But there is an alternative approach that has received less attention. It’s a path that prioritizes empowering social media users with a contextual understanding of the content they engage with, rather than merely penalizing the tech giants.

Take a moment to imagine the following: you are scrolling through your Facebook feed (or, soon, navigating a virtual environment in the metaverse), and every piece of news you encounter carries a “credibility score,” a color-coded indicator that aggregates signals about the quality of the source, the author’s expertise and tone, and other markers of the information’s authenticity. Clicking on the score brings up more context about the information’s trustworthiness, along with links to coverage of the same topic from other political perspectives.
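To make the mechanics concrete, here is a minimal sketch of how such a score might be assembled. Everything in it is illustrative: the sub-scores, weights, and color thresholds are hypothetical assumptions, not a description of any existing rating system.

```python
from dataclasses import dataclass

@dataclass
class CredibilitySignals:
    """Hypothetical per-article signals, each normalized to the range 0.0-1.0."""
    source_quality: float    # e.g., the outlet's track record and corrections policy
    author_expertise: float  # e.g., the author's credentials on the topic
    tone: float              # e.g., measured versus inflammatory language

# Illustrative weights; a real system would need validated, publicly documented values.
WEIGHTS = {"source_quality": 0.5, "author_expertise": 0.3, "tone": 0.2}

def credibility_score(signals: CredibilitySignals) -> float:
    """Aggregate the sub-scores into a single 0-100 credibility score."""
    weighted = (
        WEIGHTS["source_quality"] * signals.source_quality
        + WEIGHTS["author_expertise"] * signals.author_expertise
        + WEIGHTS["tone"] * signals.tone
    )
    return round(100 * weighted, 1)

def color_band(score: float) -> str:
    """Map a numeric score to the color-coded indicator described above."""
    if score >= 70:
        return "green"
    if score >= 40:
        return "yellow"
    return "red"

article = CredibilitySignals(source_quality=0.8, author_expertise=0.6, tone=0.9)
score = credibility_score(article)
print(score, color_band(score))  # 76.0 green
```

The arithmetic is the easy part; the hard problems are producing trustworthy sub-scores at scale and agreeing on transparent weights, which is where the partnerships described below come in.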

To make this vision a reality, policymakers should partner with the private sector, academia, and other civil society groups to leverage emerging AI technologies to promote credibility labels on social media.

There is precedent for this kind of action. In November 1990, President George H.W. Bush signed into law the Nutrition Labeling and Education Act (NLEA), the legislation that introduced the now-iconic black-and-white Nutrition Facts panel on packaged foods. The NLEA’s initial mandate was narrow. Over time, food manufacturers voluntarily added more information to their labels, and restaurants throughout the country began displaying nutrition information on their menus.

Studies have shown that nutrition facts help consumers make informed decisions about food. The Union of Concerned Scientists found that 76% of adults read the label when purchasing packaged foods and that labeling increases vegetable consumption. Researchers at the Harvard T.H. Chan School of Public Health found that restaurant customers tend to order lower-calorie items when menus include calorie information.

So, what’s the connection? The NLEA is a prime example of the government taking a “soft” approach to educate consumers about healthy eating and shift the food industry toward making healthier products. And it is precisely the approach needed to address the present dysfunction in the social media information ecosystem.

Of course, there are important questions to consider: how could policymakers incentivize platforms to adopt credibility labels, given that the First Amendment constrains the government’s ability to mandate them? Which platforms would be required to comply? And should AI algorithms, which have played a role in the coarsening of civic discourse, be trusted to judge the credibility of news articles?

Skeptics would also be right to point out that credibility labels wouldn’t address the platforms’ underlying ad-based business models, which incentivize them to design their systems in a way that accelerates the viral distribution of divisive content. 

These are fair criticisms, and they should be addressed. But as Renee DiResta, a disinformation researcher at Stanford University, has often argued, there are no silver bullets, and platforms will redesign the user experience only if public regulation pushes them to. While social media platforms have experimented with features like fact-checks and downranking of content, these tools are often opt-in, requiring initiative on the user’s part, and no universal standards guide the language and design of the interventions. The companies are experimenting in isolation, with little motivation to make lasting changes.

Furthermore, there are reasons to expect that a standardized credibility label would be more effective at combating misinformation than other proposals. A number of studies provide evidence that fact-checking labels and other “friction” points prompt users to think more critically about the accuracy of content and reduce sharing of misinformation. Expanding credibility labels also avoids the pitfalls of more ambitious content moderation proposals: rather than force a tech company, or a government, to determine what constitutes misinformation or censor certain speech, credibility labels equip users with information they can use to become savvier news consumers. 

As a starting point, Congress could help fund private-sector start-ups working to build automated detection systems that can rate the credibility of news articles online. And lawmakers could work with Facebook and Google to encourage this form of self-regulation.

Thirty years ago, the government passed the NLEA to nudge Americans to consume healthier foods and the food industry to make healthier products. Today, we must act similarly to empower people to consume healthier information.