The Center for Long-Term Cybersecurity (CLTC) was established in 2015 as a research and collaboration hub in UC Berkeley’s School of Information. Both the CLTC and the I School are part of the Division of Computing, Data Science, and Society (CDSS). Founded with a grant from the Hewlett Foundation, the CLTC serves as a hub for industry, academia, policy experts, and practitioners; its research and programs focus on a future-oriented conceptualization of cybersecurity and the evolving intersection of people and digital technologies, with the aim of informing today’s decision-makers. CLTC’s Cybersecurity Futures 2025 scenarios aim to shape a forward-looking research and policy agenda that is intellectually and practically robust—and broadly applicable across countries and regions.
Research is central to CLTC’s mission. In 2019 alone, the center funded more than 30 research projects, supporting roughly 50 graduate student and faculty researchers with $1.3 million in funding. CLTC-supported researchers are tackling important questions that will shape the future of cybersecurity, including: How can organizations better detect spear-phishing cyberattacks? How could neural signals be used for online authentication? What types of defenses could help protect at-risk activists and NGOs from state-level surveillance? And how are different nations addressing artificial intelligence governance and security?
Prof. Steve Weber has been the Faculty Director of the CLTC since it was launched six years ago. In addition to cybersecurity, his research focuses on international politics, international business and the information economy, and behavioral economics within information systems. He was the Director of the Institute of International Studies at UC Berkeley from 2003 to 2009.
Ann Cleaveland joined CLTC as executive director in 2018, responsible for growing key partnerships, managing day-to-day operations, and stewarding a strategy to fulfill the CLTC mission. She brought to the position 15 years of experience in philanthropy, non-profit management, and industry, including the ClimateWorks Foundation and start-ups such as Lucid Energy Technologies and TCHO Ventures, a chocolate manufacturer in Berkeley.
In conjunction with National Cybersecurity Awareness Month, Weber and Cleaveland share their thoughts on their work and the CLTC’s mission.
Q: You both bring interesting and diverse experiences to your positions. Ann, you've worked for a non-profit striving to improve our response to climate change, and several start-ups. Steve, your expertise is in international relations, with a focus on international and national security and the impact of technology on national systems of innovation, defense, and deterrence. How do these experiences shape the perspective you bring to the job?
Cleaveland: I think there is a lot that the cybersecurity world can learn from the climate advocacy world. Both are big complex problems that are partly about people, partly about technology, and partly about geopolitics. For example, one thing the climate advocacy world knows is that durable solutions to these kinds of problems have to have public support. So climate advocates have invested in constituency building, imagery and communications, and other grassroots organizing to push companies and governments to act. There’s very little of that work going on yet in cybersecurity. Where is the 350.org for cybersecurity, a grassroots effort that would change the narrative and help us see cybersecurity as a right we should demand?
Weber: I come to digital security issues from a more conventional angle than does Ann, though my experience points in some of the same directions. One of the understated realities of national security, at least in the U.S., is that politics absolutely does not “stop at the water’s edge” anymore as it did in the early Cold War days. Digital security is a “whole of society” concept. There is no silver bullet in technology, or government policy, or human behavior. Cybersecurity then is always a story about risk and risk management, which makes it about the intersection between software and hardware, people and institutions, incentives and beliefs. What does it mean to be “secure,” how much of it do we want, and what are we willing to sacrifice to get it? Those are the same questions that I used to ask about more traditional national security issues--but now they evolve much faster in the more dynamic digital environment.
Q: It seems most people think of cybersecurity in the short-term, like is my device safe? Will my vote be counted? Are my online accounts secure? How do you describe the issue of long-term cybersecurity?
Cleaveland: We agree. We use an emergency room analogy to describe much of the current state of cybersecurity. A problem comes in on the ambulance, you patch it up in the ER, and you send it on its way. Most industry professionals, policymakers, and the public are dealing with “emergency” cybersecurity problems and it’s hard to lift our sights up out of them. In fact, it’s very few people’s jobs to lift up out of them--when a piece of ransomware lands on your list of problems, you have to deal with it urgently.
As a research center, we are trying to help understand and direct more effort and attention to the longer-term challenges, by looking over the horizon at what’s coming next. The key to our work is that the insights from looking over the horizon help people with the strategic decisions they need to make to address cybersecurity problems today. Our work also aims to reduce “surprises” and address the dilemma that so many cyber-defenders experience, which is that they are reactive rather than proactive, and always feel like they are a step behind the attackers.
Weber: A good example is a body of research we did several years back that pointed to digital fragmentation, what people today call the “Splinternet,” long before it was in the headlines. Although people have long thought of the internet as a global platform, it has in fact now splintered along national lines as technology, commerce, politics, nationalism, religion, and divergent national interests have overcome the “free and open” model that the 1990s digital idealists so revered. But no one really knows how fast this trend is moving, or where it is having the most impact. So we started a research project that aims to measure internet fragmentation as precisely as possible--to put metrics around a phenomenon that is widely discussed but not widely understood, and that will shape technology supply chains and a whole host of other cybersecurity issues for the next decade.
Q: It seems like we often don't even know of cybersecurity problems until after the fact, such as foreign meddling in elections or organized hacking of accounts or critical infrastructure. How can you identify or prepare for threats five or 10 years out?
Cleaveland: It’s important to say first that there’s still much more to do in the basic blocking and tackling/defensive work of cybersecurity. Many people and organizations really haven’t caught up to the present, and are still not very good at basic “cyber hygiene” like two-factor authentication, strong passwords, and the like. You simply must do these things to take away the easy attacks, and at the same time develop capabilities to recover from incidents that will happen regardless.
But security in the digital environment is like everything else digital--it’s moving faster than most people and organizations are prepared for. Speed generally favors offense (cyberattackers) over defense. And so to level the playing field defenders have to look over the horizon, anticipate what kinds of threats will evolve, and start preparing for them in advance--much like we now see is necessary for global public health.
Weber: We’ve used scenario methodologies to help us do that work. Scenario thinking was developed in the energy industry when it became clear that it was simply impossible to predict the price of oil, and so it made sense instead to develop disciplined and imaginative models that explain how the underlying causes that shape the demand for energy could change over longer periods of time. We’ve developed this methodology so that it can be used in research settings, which means, among other things, building explicit causal arguments linked to testable hypotheses with observable implications.
We use these scenarios not for prediction (that’s impossible), but to enable better foresight, which means seeing around corners to non-obvious changes and asking ourselves, “What incoming data would we need to see in order to make the next set of decisions about cybersecurity?”--whether it be what technologies to develop, what policies and regulations to promote, or what human behaviors to try to change.
For example, one of the big insights from our Cybersecurity Futures 2025 project, which we did in 2018, was that if data protection was the concern of the 2010s, data manipulation would quickly become the principal concern of the 2020s. That shift has profound implications and all of us are living with them right now. For example, the downside risks associated with data from contact tracing apps, or for that matter intellectual property or clinical trial results in COVID vaccine development, are much greater if data were to be changed or falsified than if data were simply stolen.
Q: One of your initiatives focuses on AI Security, noting that for all their potential benefits, AI systems introduce new vulnerabilities and can yield dangerous outcomes—from the automation of cyberattacks to disinformation campaigns and new forms of warfare. How do we defend against such efforts? Can AI help in the defense?
Cleaveland: It’s a timely question because in some AI applications a similar offense/defense imbalance applies. In other words, it’s easier right now to build and deploy some AI-enabled attacks, think of deep fake videos, than it is to recognize and defend against them. Obviously, automation can help with some aspects of defense. But another part of defending against machine attacks is to mobilize not just other machines but also people--which means “democratizing” training for human defenders. Our view is that you shouldn’t have to have a Ph.D. in artificial intelligence/machine learning to be able to recognize at least some adversarial machine learning attacks. And defense shouldn’t be siloed in security operations centers--it’s much bigger than that. In collaboration with our AI Security initiative, which looks at factors that will influence the trajectory of AI security, our Daylight Security Lab is developing games, curriculum, labs, and other practical means to expand who participates in cybersecurity, from product managers to activists.
Q: What do each of you see as the most pressing need in long-term cybersecurity? Is it in hardware? Software? Awareness?
Weber: And policy… and business models… and regulation… and... it’s all of those things. But fundamentally, the most pressing need is to understand and grapple with the consequences that digital insecurity has had, and will have, for economic growth, social cohesion, democracy, and the quality of human life. Trust in digital systems is lower than it needs to be, and that’s largely because those systems--including but not limited to their technology foundations--aren’t as trustworthy as they need to be. At CLTC we see digital security as a way to amplify the upside of the digital revolution. We want to reduce the opportunity costs--the things we as a society choose not to do because we don’t trust digital technologies adequately. The relative failure of contact tracing apps is a poignant contemporary example, but it’s not nearly the only one. Secure and appropriately trusted digital technologies can do so much to improve our lives. But only if we do a much better job on cybersecurity.
Q: As the center marks its first five years, what do you see as its most valuable contribution to date?
Cleaveland: The people. There is a growing CLTC “diaspora” of dozens of CLTC-affiliated student researchers, alumni, faculty, and collaborators who are occupying leadership positions everywhere from Intel to the Anti-Defamation League to universities, and in disciplines ranging from law, to design, to data science and beyond. We’re especially proud of the students and alumni from our Citizen Clinic, who are shaping the field of public-interest technology.
Weber: Yes. This is the next generation of talent who will shape cybersecurity policy and practice around the world for the next several decades. We intend to feed that diaspora and keep it growing, connected, and influential for a lot longer than five years.
Learn more about CLTC research programs by watching presentations from the 2020 CLTC Research Exchange, the center’s annual showcase of research supported through its grants program. The second day of the 2020 CLTC Research Exchange will be held from 10 a.m. - 12 p.m. Thursday, Nov. 12, and focus on “Protecting and Securing a More Inclusive Society Online.”