The Pacific Council recently hosted a teleconference with John Villasenor of the UCLA Luskin School of Public Affairs and Danielle Tarraf of the Pardee RAND Graduate School, moderated by Benjamin Boudreaux of the Pardee RAND Graduate School. They discussed artificial intelligence's potential to disrupt geopolitics.
Here are key takeaways from the discussion:
- Villasenor pointed out that artificial intelligence impacts sectors such as national security, cybersecurity, robotics, information dissemination, agriculture, pharmaceuticals, and weather and natural disaster modeling and forecasting, among others. However, he added, “As profoundly transformative and important as AI is, it’s important not to ‘over-hype’ it. There are still limits to advanced AI. We should invest in it, understand it, mitigate risks, and seek opportunities to improve it, but it can be overhyped as an abstract resource that can solve all our problems.”
- Tarraf argued that since everyone is interconnected globally, there likely won’t be one “winner” that emerges in the race for AI dominance. “AI companies have a culture of openness and sharing, crowdsourcing efforts, competitions open to the public,” she said. “Besides research, money trails are global and investments are interconnected, as is the AI talent pool. So one nation won’t rise to the top.”
- However, Villasenor added that the United States should continue to increase its pace of innovation in AI. “It doesn’t need to be a win or lose scenario, but we should make progress because China and Russia are very ambitious in AI and value it.”
- Villasenor also said that there are benefits from applying AI to COVID-19, but warned against the pervasive spread of misinformation. “Misinformation is extremely important in the AI ecosystem,” he said. “The people creating the disinformation have the advantage since they can usually stay one or two steps ahead of those trying to defend against it. Even if 10 percent of misinformation gets traction, that can have a massive negative impact.”
- Tarraf said that in any military application of AI, the Defense Department, academia, and other industries should all work together. Villasenor added that there is an enormous range of potential applications of AI for the military. “AI creates profound ethical concerns,” he said. “But on the other end of the spectrum, there are innocuous and necessary forms of AI, such as defending computer networks or blocking cyber-attacks. So much of military conflict hinges on information, and AI can help us turn enormous amounts of data into concise information for military decision-makers. We should move full speed ahead on developing AI off the battlefield.”
- Tarraf said she worries that the general public is not aware of how fragile AI systems are. “Trying something out in a well-controlled environment is one thing, but taking it out in the uncontrolled real world, out of the lab and into daily life is a completely different challenge,” she said. “It’s an incorrect assumption that AI will solve all our problems.”
- Villasenor pointed out the lack of diversity in the AI field. “That results in products and results that reflect the creator’s biases,” he said. “It’s very important to pursue diversity in the AI workforce because technology can reflect patterns of inequality and injustice in outcomes. I’d also like to see sufficient oversight to keep some guardrails on, yet not so much that we then dramatically impede the ability of people to innovate. Innovation is so central to the tech ecosystem in this country and globally.”
The views and opinions expressed here are those of the speakers and do not necessarily reflect the official policy or position of the Pacific Council.