Rosen Transcript Following Questioning of Facebook, Twitter, and Google CEOs

WASHINGTON, D.C. – Today, during a hearing of the Senate Commerce, Science, and Transportation Committee, U.S. Senator Jacky Rosen (D-NV), co-chair and co-founder of the Senate Bipartisan Task Force for Combating Anti-Semitism, questioned Mark Zuckerberg, CEO of Facebook, Jack Dorsey, CEO of Twitter, and Sundar Pichai, CEO of Google, on instances of anti-Semitism, white supremacy, and foreign disinformation on their platforms. A transcript of the Senator’s full exchange can be found below, and a video of the Senator’s full exchange can be found here.

ROSEN: My colleagues in the majority have called this hearing in order to argue that you are doing too much to stop the spread of disinformation, conspiracy theories, and hate speech on your platforms. I’m here to tell you that you are not doing enough.

Your platforms’ recommendation algorithms can drive people who show an interest in conspiracy theories far deeper into hate, and only you have the ability to change this. What I really want to say is this: on these platforms, the important thing to realize is that people, or users, are the initiators and the algorithms are the potentiators of this content.

I was doing a little cleaning in my garage, like a lot of people during COVID. I’m a former computer programmer – and I actually found my old hexadecimal calculator and my little RadioShack owner’s manual here, so I know a little bit about the power of algorithms and what they can and can’t do, having done that myself. I know that you have the capability to remove bigoted, hateful, and incendiary content that will lead – and has led – to violence.

So I want to be clear, this is really not about what you can or cannot do; it’s about what you will or will not do.

Adversaries like Russia continue to amplify propaganda – on everything from the election to the coronavirus to anti-Semitic conspiracy theories – and they do it on your platforms, weaponizing division and hate to destroy our democracy and our communities.

The U.S. intelligence community warned us earlier this year that Russia is now actively inciting white supremacist violence, which the FBI and Department of Homeland Security say poses the most lethal threat to America. In recent years, we have seen white supremacy and anti-Semitism on the rise, much of it spreading online. What enables these bad actors to disseminate their hateful messaging to the American public are the algorithms on your platforms, effectively rewarding efforts by foreign powers to exploit divisions in our country.

To be sure, I want to acknowledge the work that you are already doing in this space. I’m relieved to see Facebook has taken long-overdue action in banning Holocaust denial content. But while you’ve made some policy changes, what we have seen time and time again is that what starts online doesn’t end online. Hateful words can morph into deadly actions, which are then amplified again and again; it’s a vicious cycle.

Just yesterday, we commemorated the two-year anniversary of the Tree of Life shooting in Pittsburgh, the deadliest targeted attack on the Jewish community in American history. The shooter, in this case, had a long history of posting anti-Semitic content on social media sites, and what started online became very real for families who will now never again see their loved ones. So, there has to be accountability when algorithms actively contribute to radicalization and hate.

So, Mr. Zuckerberg and then Mr. Dorsey: when you implement a policy banning hate or disinformation content, how quickly can you adjust your algorithms to reduce this content, and, perhaps even more importantly, to reduce or remove the recommendation of “hate or disinformation” content so that it doesn’t continue to spread? We know those recommendation algorithms continue to drive someone toward ever more specific content. Great when you want to buy a new sweater, not so great when you’re driving them toward hate. Can you talk to us about that, please?

DORSEY: As you know, these algorithms, machine learning, and deep learning are complex, they’re complicated, and they require testing and training. As we learn about their effectiveness, we can shift them, and we can iterate on them. It does require experience. It does require a little bit of time. The most important thing that we need to build into the organization is a fast learning mindset and agility around updating these algorithms. We try to focus the urgency of updates on the severity of harm, as you mentioned, specifically content that leads to offline harm or dangerous speech that spills over into offline harm.

ROSEN: Mr. Zuckerberg, I’ll ask you to answer that, and then I have some questions about the nimbleness of your algorithms.

ZUCKERBERG: Senator, I think you’re focused on exactly the right thing in terms of how many people see harmful content. As we talk about putting in place regulation, and reforming section 230 in terms of what we want to hold companies accountable for, I think what we should judge companies on is how many people see harmful content before the companies act on it. Being able to act on it quickly, and being able to act on content that is potentially going viral or going to be seen by more people before it reaches a lot of people, is critical. This is what we report in our quarterly transparency reports: what percent of the content that a person sees is harmful, in any of the categories of harm that we track. We try to hold ourselves accountable for basically driving the prevalence of that content down. I think good content regulation here would create a standard across the whole industry.

ROSEN: I like what you said. Your recommendation algorithms need to learn to drive the prevalence of this harmful content down. I would like to see some of the information about how nimble you are at driving down that prevalence when you do see it trending, when you do see an uptick, whether it’s by bots, by human beings, whatever that is. We need to drive the prevalence down.

Can you talk a little bit more specifically, then, about things you might be doing to address anti-Semitism and white supremacy? We know that it is the biggest domestic threat. In the Homeland Security Committee, they [the FBI and DHS] have testified that this is the largest threat to our nation. I want to be sure that violence is not celebrated and amplified further on your platforms.

ZUCKERBERG: There’s a lot of nuance here. In general, for each category of harmful content, whether it’s terrorist propaganda or incitement of violence or hate speech, we have to build specific AI systems. One of the benefits of transparency, and of transparency reports into how companies are doing, is that we have to report on a quarterly basis how effectively we’re finding those types of content, so you can hold us accountable for how nimble we are. Hate speech is hard to train an AI system to get good at identifying because it’s linguistically nuanced. We operate in 150 languages around the world. But what our transparency reports show is that over the last few years, we’ve gone from proactively identifying and taking down about 20% of the hate speech on the service to now proactively identifying, I think it’s about 94% of the hate speech we end up taking down, the vast majority of that before people even have to report it to us. By having this kind of transparency requirement, which is part of what I’m advocating for in section 230 reform, I think we will be able to have a broader sense across the industry of how companies are improving in each of these areas.

ROSEN: I look forward to working with everyone on this.

###