This article is cross-posted from Berkeley News.
As online harms surge, “Our Better Web” initiative advances at UC Berkeley
September 19 | Berkeley News
With the U.S. midterm elections approaching and political disinformation posing a continued threat to democracy, UC Berkeley’s ambitious new Our Better Web initiative is advancing efforts to study and combat online harms.
The initiative, launched on a small scale in April, is releasing its first set of projects, including a look at the prevalence of content that promotes deception, discrimination and child exploitation. Other new projects are assessing laws that pertain to harmful online content and the content-moderation strategies used by online platforms.
“Our Better Web explores the online accountability challenges that our nation — and the world — face today,” said Janet Napolitano, the former U.S. secretary of Homeland Security who now heads Berkeley’s Center for Security in Politics. “UC Berkeley recognizes that the Internet has evolved to pose profound threats to the health of our communities — and to U.S. democracy. This initiative will focus its energy to address real-world challenges with real-world solutions.”
To support the development of effective technology and policy solutions for reducing online harms, Our Better Web brings together some of the nation’s top experts in a range of Internet-related disciplines — many with close ties to Silicon Valley and Washington, D.C.
In addition to Napolitano, leadership of the initiative includes Geeta Anand, dean of the Graduate School of Journalism; Jennifer Chayes, associate provost of the Division of Computing, Data Science, and Society; Erwin Chemerinsky, dean of Berkeley Law; Hany Farid, a globally influential expert on digital forensics; and Brandie Nonnecke, director of the CITRIS Policy Lab, part of the Center for Information Technology Research in the Interest of Society and the Banatao Institute (CITRIS).
The Berkeley project aligns with new White House goals for reducing the harms posed by tech platforms. In a Sept. 8 statement, the administration of President Joe Biden detailed a range of goals to protect children online; strengthen privacy protections; and stop the spread of illegal content, including sexual exploitation. The White House also proposes to target discrimination driven by algorithms, for example, via programs that fail to share job opportunities equally.
In the space of just a few decades, Internet platforms have brought enormous benefits to society, helping to support the free exchange of ideas and news while driving historic economic shifts that benefit virtually every community in the country.
At the same time, however, these platforms have posed extraordinary unintended risks: Toxic online cultures and extremism have been linked to real-world violence. They have been used to propel virtually unchecked racism, misogyny and anti-LGBTQIA+ hate. Social media undermines mental health and well-being, especially among young people. And as vehicles for misinformation and disinformation, these platforms have weakened even long-established democracies.
“We’re witnessing a sharp rise in disinformation, extremism, and harmful content online,” said Nonnecke, director of Our Better Web. “In order to effectively address these challenges, we must support rigorous training and research that can lead to effective technology and policy strategies to support a trustworthy Internet.”
Our Better Web’s first set of projects focuses on Section 230 of the Communications Decency Act, a federal law that largely protects online platforms from liability for user-generated content.
Our Better Web today published a report, written by Amy Benziger and Chelsea Magnant, master’s degree graduates of the Goldman School of Public Policy, on the effects of Section 230 and proposed reforms on platforms’ content moderation strategies. In collaboration with the CITRIS Policy Lab, Our Better Web now maintains a public database of all Section 230-related federal legislation.
The School of Journalism this year launched J276 Digital Accountability: Exploring Section 230 to train students from across disciplines. The course was co-taught by Queena Kim, adjunct professor of audio at Berkeley Journalism, and Aaron Glantz, an award-winning investigative reporter.
The students produced audio stories on a wide range of issues — from the spread of misinformation in Spanish, to gaps in content moderation to protect children, to the persistence of housing discrimination on Facebook. Their stories aired on National Public Radio stations, KGO radio (San Francisco's ABC News affiliate) and Hecho en California, the most listened-to Spanish-language news program in Northern California. This fall, two additional stories are set to air on the Latino Rebels podcast, produced by Pulitzer Prize-winning Futuro Media.
“For our democracy to work, we need local and national journalists to be trained to perform critically important roles as we enter the 2022 midterm elections and begin covering the 2024 presidential election,” said Anand. “They need to learn how to effectively identify and report on harmful disinformation.”
This academic year, Our Better Web is expanding its journalism training, executive education, and action-oriented research by focusing its work on strategies to mitigate and neutralize harmful disinformation and “deepfakes,” which use artificial intelligence to create seemingly real, but deeply deceptive, videos of public figures and others. The initiative also will launch several multidisciplinary research projects, public events and trainings on effective technology and policy strategies.
Farid, a world-renowned expert on deepfakes, has raised serious concerns about their effects on democracy — and has urged effective interventions. “In order to stop harmful deepfakes,” he said, “we must take stock of the underlying technical and social mechanisms that drive their effectiveness. Our Better Web is well-positioned to guide impactful interventions.”
Our Better Web supports development of evidence-based technology and policy strategies to harness the power of the Internet for good, while countering its harms. Doing this effectively requires careful consideration of the impacts of interventions on issues such as free speech.
“The issue is enormously difficult: how to preserve the Internet and social media as uniquely important media for speech, while also combating harmful online content,” said Chemerinsky. “I am hopeful that this multidisciplinary group can make a real difference in the debates, policymaking and legislation.”