In the past year, content moderation notifications like the one above have become ubiquitous across platforms. From the 2020 presidential election to the COVID-19 pandemic, misinformation has emerged in almost every corner of the internet. Technologists and researchers alike now find themselves navigating increasingly difficult questions about both the spread of this misinformation and the best methods to combat it. Earlier this month, I had the privilege of speaking with Chase Small and Isabella Garcia-Camargo, two student researchers (and social media users) at Stanford’s Internet Observatory, about their work confronting mis- and disinformation and understanding the broader contours of information flow on social media. Ultimately, Chase and Isabella both agree that the complexity of misinformation necessitates an interdisciplinary and cross-industry solution.
Chase Small is a junior studying International Relations. Prior to his work at the Internet Observatory, Chase served as a research assistant to Dr. Colin Kahl, the former National Security Advisor for then-Vice President Biden. He also has experience working for Paper Airplanes, a tutoring and cultural exchange organization engaging students from the Middle East. Next year, Chase will serve as the Chair for Stanford in Government.
Isabella Garcia-Camargo serves as the Project Manager for the Internet Observatory’s Election Integrity Partnership. Isabella’s work has spanned both the public and private sectors, including internships at Facebook, Google, CISA, and Schmidt Futures. She holds a B.S. in computer science from Stanford and previously served as a research assistant for Professors Rob Reich, Mehran Sahami, and Jeremy Weinstein in Computer Science and Public Policy. Outside of the classroom, Isabella is a national champion rower.
When did you join the Internet Observatory?
Chase Small: I joined last August as a research assistant for the Observatory’s election integrity partnership efforts for the 2020 election.
Isabella Garcia-Camargo: I joined the Observatory after taking the first iteration of Alex Stamos’s class in the fall of 2019, my senior year. It was a guinea pig class with only twelve students. I really enjoyed the class and became passionate about the problems that we were discussing. Ultimately, I approached him and asked if I could work for him. He had just started the Observatory’s tech team, so I joined the team in the winter of 2020 and have worked in a number of capacities since.
Where did your interest in disinformation/misinformation originate?
CS: I think it was twofold. One, I was learning a lot of really interesting stuff in my classes. Professor Herb Lin at Stanford really animated my interest in cyber-enabled information warfare, and Professor Michael McFaul added this really interesting reflection on the 2016 election and the role that Russia’s information campaigns played in it. Ultimately, I was getting really interested in the geopolitics and political ramifications of tech and social media, and being in the heart of Silicon Valley, I feel like Stanford sits right at that intersection. The other side of it was more personal. My own grandma, who has always had some affinity for conspiracies, has in recent years spent hours and hours of her day on Twitter and YouTube. She’s very deep into the QAnon world.
IGC: I’ve always been really interested in online speech and content moderation. CS 182 with Prof. Rob Reich definitely sparked this interest in an academic sense. I’m from a more conservative household, so the class really addressed my questions about how we talk online about problems where people disagree. I ended up taking Prof. Reich’s policy practicum, where we wrote a report about Facebook’s Oversight Board and presented it to Facebook. From there, I found Alex and continued my work with the Observatory.
What are some of the projects you’ve taken on since starting at the Observatory?
CS: I’ve been involved in two projects that are closely related. In the fall, we were working on the 2020 election as the Election Integrity Partnership (EIP), a collaboration between the Internet Observatory and a number of other partner research organizations. We engaged in a lot of rapid response research and examined this phenomenal data set of all the misinformation that we received in preparation for the election. Post-election, we have shifted gears to the Virality Project (VP), which is focused on vaccine misinformation. I’ve taken on a little more responsibility in this project, helping lead the analysts’ workflow, which has been really fun.
IGC: I started on tech infrastructure, where I worked on several takedown reports. I then assisted on a few research projects and then, in the summer, I helped launch the Election Integrity Partnership. That has probably been the biggest thing I’ve done with the Observatory. I now work on the Virality Project which is focused on vaccine disinformation. I’m really excited to continue to learn about how to translate a long-term academic research project into real-time impact.
Is most of the Observatory’s work mitigatory or analytical? What are the aims of analytical projects?
CS: I haven’t had a ton of visibility into all the other awesome work that the Internet Observatory does, as these two projects are just a sliver of the IO’s work. We’ve engaged in some fascinating direct collaboration with social media platforms where we share the emerging narratives that we’re finding so they can take appropriate action. We also do analytical work. In fact, we just published a lengthy report on the information dynamics of the election, informed by the EIP. The Virality Project hopes to eventually do the same, and we’re currently putting out blog posts, including a recent post on emerging narratives around vaccine passports. Ultimately, it’s a combination of both mitigation and analysis.
IGC: I wouldn’t say we’re mitigating at the moment. We spend a lot of time putting out fires. While every day we move closer towards a capacity for mitigation, there’s an incredible amount of technical and organizational infrastructure needed to do this well. Platforms throw millions of dollars at this problem every year. We are doing rapid analysis of platform takedowns, as well as interesting security analyses like the recent report on Clubhouse data. The EIP was probably the closest we’ve come to actually preempting misinformation before it goes viral. We worked to identify leads on emerging misinformation and predict how those narratives would trend, which I found incredibly exciting. The majority of our work, though, has been observing what’s already happening online.
Let’s dive a little deeper into misinformation and disinformation, specifically. How would you encapsulate the difference between misinformation and disinformation? Are these differences based on the intentions of the informants? The information itself?
CS: I would start by saying that, as I’ve explored academia, I’ve learned that definitions are at the heart of many debates between academics. But in my understanding as a student of this field, disinformation has intention and often coordination: you have an actor deliberately pushing out false information. Misinformation, by contrast, is the umbrella term. It becomes especially complicated when disinformation inadvertently spreads beyond the initial source, which sparks a debate about whether the account that shared the post engaged in misinformation or disinformation.
IGC: Misinformation is information that’s false, but not necessarily intentionally false. It’s the larger category for false rumors and other types of misleading information. Disinformation is purposely seeded and spread. It might be distributed for political or financial objectives, and it can mislead through its content or by deceiving an audience about its true origin or the identity of the informant. For instance, it could even be true content spread by someone pretending to be an American. These definitions remain thorny, but intent is the primary driver in determining disinformation.
How has your perception of content management changed since you started your research? Do you feel that digital platforms are doing enough to regulate the flow of information?
CS: My perception has definitely changed. I think reading more of the international relations and security literature about Russia’s role in the 2016 election sparked an initial question for me as to why Facebook didn’t do more. But now, seeing misinformation narratives emerge in real time, there are countless gray areas about whether the content will have a significant impact or whether it even constitutes disinformation. This is especially true in the vaccination work, where there is some scientific debate. Vaccine passports are a great and topical example. Whether to issue vaccine passports is a legitimate ethical and public health question about how we should record people’s vaccination history and use it to further public health. But anti-vaccine and health freedom groups have exploited that space to push misinformation about the safety of vaccines and stoke fears about government overreach.
In the fall, Facebook and a lot of the other platforms engaged in a huge effort to label content that was questionable or in a gray area, and we’re still waiting for data on the impact of that labeling on individual users. In some ways, content management becomes a behavioral science question. Ultimately, I don’t think platforms are doing enough, but I also don’t think platforms are designed to do enough, both in terms of their internal structure and their incentives. That’s why I think projects like the EIP and VP are important: they’re exploring ways these platforms can build partnerships with civil society and academia around content management.
IGC: I’ve really swung back and forth on this. I used to work at the platforms through internships at large tech companies and was definitely very enthusiastic. I then swung the other way after learning more about content regulation and concluded that platforms weren’t doing enough. Now, after working in this field for a longer period of time, I’ve realized that content moderation is an incredibly hard problem. There are thousands of people at these companies, from policy to tech, working full-time on these issues with very good intentions. If there were a big button to press that could cure disinformation, these companies would definitely have the incentive to press it. I think it’s especially difficult to determine the qualities of specific content and identify the narratives that people weave with that content. These platforms must now contend with over three billion users, which creates an unbounded problem that must constantly be chased down.
How do you think about trust in content management? Should platforms be concerned that crackdowns on misinformation may further radicalize the misinformants? Conversely, how can platforms build trust with passive viewers that they are moderating fairly?
CS: That’s a fantastic question, which I think gets back to my earlier thoughts on platforms’ efforts to label content with fact checks. Removing anti-vax content tends to agitate the users in those communities and contribute to the narrative that “big tech is against us. Clearly, we have the truth, and they’re trying to censor us.” They then try to circumvent these fact checks by moving to other platforms to continue spreading their messaging. As to the second part of the question, we don’t have a great understanding of the long-term effects of content moderation on users’ behavior. Anecdotally, it doesn’t seem like my friends who aren’t engaged with this work necessarily notice or care about content moderation. There are a lot of fantastic initiatives on digital literacy that encourage all types of users to personally research and fact-check the information they encounter online.
IGC: The radicalized population problem is especially difficult to answer. When I first became curious about this work, I would ask every single guest lecturer about this, and many said that it would take serious interventions and therapy on an individual level to actually de-radicalize someone. Individuals who fall into ‘rabbit holes’ lose faith in what many would consider “authoritative” sources of information and transfer that faith to increasingly fringe sources. As they place more trust in these fringe sources, it becomes increasingly difficult to engage with them on a shared set of facts, resulting in a vicious cycle. I think we have to play a more offensive game with the middle of the population. This became especially salient during the EIP, as I spent a lot of time thinking about the efficacy of our work. We could write countless blog posts, but if someone genuinely wanted to believe false information, or came in with a pre-existing scaffolding that the election would be illegitimate, there was very little to be done. We have to work to preempt narratives targeting the questioning portions of the population, as radicalized communities online consistently try to recruit more people.
Where are the other trouble zones for mis- and disinformation? Vaccination & Health? Identity and Sectarian Conflict? Do technologists take a different approach to moderation in different content areas?
CS: I think this is the core issue of content moderation at the moment, and it’s a more sociological than technological question. I’ve been super interested in the parallels between election misinformation and vaccine misinformation. My initial intuition is that there’s something more clear-cut about election misinformation, as claims about the distribution of ballots or the types of machines used by County Boards of Elections could be easily verified or disproven. With vaccines and scientific discourse, there is still a level of uncertainty, especially when there’s conflicting research being published. There are gray areas surrounding how long vaccine immunity lasts or the deaths of certain people after they were vaccinated. But if you were to high-five 100 million people, some of them would also die two weeks after you high-fived them. It doesn’t mean your high-five killed them. These gray areas make vaccine content moderation a particular nightmare.
IGC: It’s incredibly hard to predict what the next large-scale disinformation event will be. We live in a really interesting time: what once looked like a one-off event in 2016 might turn out to be a continuous, large-scale societal problem that we must learn to live with. I think of this kind of like wildfire season in California – wildfires have been around forever, but the past couple of years, wildfire season has become more destructive with fewer, more intense blazes. At first, this was shocking, but now we are trying to learn how to live alongside these patterns.
We’re unsure whether the 2022 election will encourage disinformation on the same scale as 2020: will these attacks on our democracy happen just every four years, or more often? Will we have another large-scale event like the COVID pandemic, and the subsequent COVID vaccine rollout, that will push us to the limits of societal trust in both elected officials and scientific institutions? Or was 2020 just a really, really weird year? That’s the million-dollar question. Are we okay continuing to tackle these disinformation events one by one, or do we need to build a more robust infrastructure to meet the challenge of large-scale disinformation response?
Can platforms ever be wholly fair and neutral arbiters when it comes to content moderation? Does the government need to take a more active role in regulation?
CS: This is difficult, as I think you can evaluate this from so many lenses. If we think about the value of democracy and distribution of power, for example, there is a catch-22. In the status quo, we are putting significant power in the hands of platforms and individuals within these platforms who aren’t elected. Yet, in countries where there is no premium on free speech, government-led regulation would censor dissenting political opinions or coverage of human rights violations. We’re definitely pondering this question in our work, and I think academia has some insights to offer these platforms, but that’s a difficult model to scale. Researchers like Carly Miller at Stanford work extensively on how effectively platforms implement their own content moderation policies, and I think that we should be pushing platforms for more transparency to assess the need for regulation.
IGC: I don’t know about the regulation issue here, but I do think that government needs the ability to better understand the conversations happening in online spaces when they relate to matters of national security or critical infrastructure. Last summer I had the opportunity to work at CISA (the Cybersecurity and Infrastructure Security Agency) within DHS, where I realized that election officials had no way to deal with the massive amount of disinformation attacking election infrastructure. I believe we need a clearer through-line from the authorities we have collectively agreed are in charge of making sure our infrastructure is safe to their ability to understand how people are discussing that infrastructure. Obviously, there is clear red tape here given the importance of the First Amendment to our democracy, but that doesn’t excuse the uneven playing field election officials found themselves on. Mobs organized on the internet to protest outside of polling stations, and we’ve already seen the same dynamics happening with regard to vaccination sites. As a result, I’m especially interested in exploring how the government, which moves incredibly slowly (often for good reason), can navigate the fast-moving dynamics of misinformation response.
What’s one tool or tactic that platforms could implement now to combat mis- and disinformation?
CS: I might be stealing from some of the main academics in this field, but I would encourage platforms to increase their data flows to researchers and set up robust systems for researchers to review their data. Prof. Persily recently released a report titled “Social Media and Democracy” that offered more concrete suggestions on the mechanisms platforms could implement to provide researchers with adequate data. When we have robust data, we can recognize trends in disinformation and misinformation and build more nuanced policies or tools to combat this messaging.
IGC: I would explore how they can do a better job of tracking narratives and their growth over time. Oftentimes, we’re playing whack-a-mole with each new incident that arises. This leaves questions about how we show the sizes of different narratives over time and how they may merge or branch from each other. It would be especially valuable to track these narratives across different populations and platforms. For example, we could warn TikTok and Twitter that the “mark of the beast” narrative related to vaccination was spreading on Facebook among certain populations and that they should check for similar trends. What I’d like to see is cross-platform sharing coupled with better narrative analysis, and the narrative analysis is really the backbone here, because you obviously can’t track specific pieces of content across platforms. You can track URLs, but a Facebook post will look dramatically different from a Tweet. I think this cross-platform information sharing would really improve the platforms’ ability to understand how people engage with these large political and social issues, and when that conversation skews into content that might violate their policies.
One final question: many technologists have said that combatting mis- and disinformation is a uniquely interdisciplinary problem. What disciplines and perspectives should be considered in this fight? How do you see such interdisciplinarity in the IO?
CS: I think this is an awesomely interdisciplinary field. There’s obviously the technical dimension, but there’s also a philosophical dilemma about how we approach the digital “public square” for these discussions. Finally, there’s a behavioral and psychological aspect to how posts impact viewers and the broader feeds of their connections. I hadn’t taken a lot of CS classes at Stanford before joining the IO, so I was definitely a bit intimidated at first. But I think my experience in international relations has served me well. In the same way I analyze intricate political conflicts, I try to think about the nuances underlying a lot of this digital work.
IGC: I absolutely agree that this is an interdisciplinary problem. When I had the ability to hire thirty students for the EIP, I immediately thought we would just need students who knew how to code or do data analysis. I hired a bunch of CS students, but I quickly realized that they didn’t always have the critical thinking skills to bring together all of the different streams of information involved in this work. They need to understand the narrative arcs of why five different URLs, spreading in five different communities, should all be considered one cohesive disinformation incident. I found that students from the international relations department and other non-technical backgrounds actually did super well at this. They could effectively navigate the ambiguity between those five URLs and track the narrative arcs created by that information. Those soft skills were equally valuable as, if not more valuable than, the technical skills. After all, much of our work requires the same skill set it takes to produce a two-hundred-page report for an international relations class.