Office Hours @ TMP: Episode #1
(Photo/Maria Ressa/Library of Congress)
By Varun Thotli and Celia Saada
Office Hours @ TMP is a monthly column where we bring your questions on current events to Columbia professors. This month, we delve into disinformation, social media, and AI—from Meta's fact-checking rollback to the Paris AI Action Summit—with Professor Maria Ressa. This February, SIPA students submitted their questions to Professor Ressa, exploring how journalists and policymakers can evaluate threats to public discourse, combat AI-driven disinformation, and reclaim the truth.
A 2021 Nobel Peace Prize Laureate, Maria Ressa is an investigative journalist, press freedom advocate, and the co-founder and CEO of Rappler, a Philippines-based news outlet known for exposing disinformation networks, government corruption, and human rights abuses. At SIPA, she serves as a professor and IGP Distinguished Fellow, co-teaching “Policy Solutions for Online Mis- and Disinformation” alongside Dr. Anya Schiffrin. Her expertise lies in digital disinformation, social media manipulation, and press freedom, making her a leading voice in the fight against authoritarian threats to democracy.
Below is an abridged transcript of The Morningside Post’s podcast series, Office Hours. To hear the full interview, listen here.
Q: On Meta’s decision to end its fact-checking program, how well was fact-checking working in the first place, and what purpose did it serve?
A: Fact-checking was a band-aid for Facebook. I'll talk about Facebook specifically because, for six years in a row until 2021, Filipinos spent the most time online and on social media globally. 100% of Filipinos on the internet are on Facebook. At the beginning, when [truth was] attacked, I would say 2016 was when the political dominoes began to fall. 2014 was when the experimentation with the geopolitical power of Russian disinformation really began. They seeded the meta-narrative that allowed Russia to annex Crimea, and then eight years later, Putin used the same meta-narrative to invade Ukraine itself.
The fact-checking program that Facebook put in place around the time of the 2017–2018 Cambridge Analytica scandal patched things up, but there were several of us—Ukraine and the Philippines—who were going to Facebook and saying, "You have a problem." Rappler, the company that I co-founded, was one of the first fact-checkers in the Philippines. It is like trying to stop a dam from bursting by plugging the hole with your finger. The structure is leaky. In short, we're now living in a world where facts are not just debatable but are being shunted aside. Because of that, the world is being transformed.
Q: With Meta rolling back its fact-checking efforts, how should journalists and civil society respond? What roles do you foresee for alternative platforms or independent verification initiatives?
A: I wrote a book called How to Stand Up to a Dictator that was released in 2022, the same week that ChatGPT came out. What you will see increasingly in America, I think, are the very same things that we saw in the Philippines. Journalists will come under attack. When you make facts debatable, you make every single person living in the democracy you're in more vulnerable, because the whole concept of democracy is based on trust. The breakdown of trust breaks down democracy. The only [form of] government that survives the breakdown of trust is a dictatorship. [What I want to tell] your generation [is to] be aware. None of the tech that we are using is anchored in facts; that includes ChatGPT and generative AI. At best, generative AI has a 16% error rate. It can be life-threatening or life-saving. It's the reason why journalists are legally accountable. The question I asked in How to Stand Up to a Dictator is the question you now have to ask everyone around the world: What are you willing to sacrifice for the truth? So, think about the community. You are talking to your community. So, what are you going to do? How will you influence or shape the information received by your community?
Q: In the current context, do you see an all-of-society effort to do this? Do you see governments at the forefront of this? Who is the leading actor when it comes to fighting for ethical AI?
A: This is creative destruction of the world we live in, because the tech isn't removed from governance or geopolitical power at all. I'm saying it is up to you. It is up to us, right? This feels like watching a train wreck in slow motion, and we know where it's headed. The question is: will you accept it? What can you do? I would say, if you are a software engineer, what are the ethical and moral guidelines for you? There are a whole lot of other legislative measures that can be put in place, but it's moving too fast. This is the world you live in today, and as you study the world you are living in, remember that you are also an actor in it. What will you do? What will you accept? There's no clear answer because there isn't a magic wand. The very people who should be protecting us are the ones who are benefiting from this. It goes back to the question: What are you willing to sacrifice for the truth?
Q: What can SIPA students do to help overcome this problem?
A: Please understand that the ability to write and to think with clarity—to have clarity of thought—is very different from editing something the machine spouts at you. You guys are already at the age where you do know how to write. Be careful of letting the machine do your writing for you because, when you do, your mind can atrophy. Think about it like this: You cannot outsource going to the gym. If you want to be physically in shape, you've got to do the hard work. If you want your mind to be sharp, to have clarity of thought, and to learn to write and communicate your ideas, do the work yourself.
Q: Do you have any key takeaways from the Paris AI Action Summit?
A: If you look at the SIPA website, you'll see that for about a year, the Institute of Global Politics had a technology initiative that incubated something we called the Trust and Safety Tools Consortium. What [Camille François and I] did for a year was produce two papers and hold two convenings that looked at how we can still put safety measures in place. Because even as people talk with learned helplessness, we still need to keep acting. That's what the Trust and Safety Tools Consortium did: they were acting without the help of a law from the government. They created a baseline for safety—child safety, safety from pornography. Then, because they were in the big companies, tech companies actually gave funding to create the NGO that was rolled out at the AI Action Summit. It's now called ROOST—Robust Open Online Safety Tools. What I like about what we're doing at the Institute of Global Politics at SIPA is that we're embracing technology, but technology that, in the end, should have the same principles as the Universal Declaration of Human Rights. SIPA is going after and looking at safety, which is different from other universities that rolled out manipulative systems. We won't name names.
Q: Do you have any parting messages for our audience, the students here, before we head out?
A: I'm really glad to be here. I've taught two classes so far, and the students are incredible. You're incredible. Let me end with this: It's funny because, in the Philippines, when I decided to hold the line for Rappler, our lawyers told me, “You're crazy. Go negotiate.” I've spent my career living by these standards and ethics. When it matters, you either live by them or your whole life is a lie. You are in a formative stage of your life, and you can make a choice. It's hard because you're ultimately going to want to look for a job, but [choose to] feed your soul and feed the future. Create the future that you want. These are all your choices. This is the beauty of it.