In a recent conversation hosted by AI LA and Women in Cybersecurity (WiCyS) SoCal, Malia Mason, CEO and co-founder of Integrum and president and co-founder of WiCyS SoCal, moderated a panel on artificial intelligence, cybersecurity, and privacy. The event featured Brad Hayes, chief technology officer at Circadence and assistant professor at the University of Colorado, Boulder, Lily Li, owner of Metaverse Law, Tamer Azmy, deputy CISO at Cedars-Sinai, and Kevin Roundy, technical director at Symantec Research Labs.
The following are key takeaways from the conversation:
- Mason explored a variety of topics, including how AI can best be utilized to help systems defend against attacks, user reliance on AI, and the privacy implications of services like 23andMe.
- Azmy stated that one of the challenges cybersecurity professionals face as the field advances is that it is impossible to defend everything. “You have to look for the person within the attack, and AI should be used to see what is not normal in security logs,” he said.
- In response to a question about how to distinguish what is real and safe as AI spreads through technology, Li noted that with AI and hacking tools growing more and more sophisticated, “we are delegating our lives to many products and we think they will be a safe barrier. For now they are free, but soon they will probably cost money.” Deep-fakes, she added, will become a major problem because recordings and video can be edited, forcing forensic professionals to verify that footage is genuine before treating it as evidence.
- On the question of privacy and the right to be “forgotten digitally,” Roundy pointed out that “people can tell Google or any company to delete anything it has on you, but it is really challenging because you have to know where and how it is located and if it is located in a machine-learning program.” For example, one could tell Facebook to delete all of one’s photos, but the company will continue to employ facial recognition, raising a different type of privacy concern.
- Hayes responded to a question about the potential offensive uses of AI, explaining that “if AI can do offensive stuff, it can do defensive stuff, but the problem with that is the time scale, and this is a hard problem to solve.”
Stacey Scolinos is the Fall 2019 Communications Junior Fellow at the Pacific Council.
Artificial Intelligence Los Angeles (AI LA) is a volunteer-led community of cross-disciplinary stakeholders with more than 6,000 members representing the science, technology, engineering, arts, and mathematics (STEAM) communities of the Greater Los Angeles area. Through regular activities throughout the year, they explore artificial intelligence, machine learning, and other frontier technologies, and the impacts these will have on humanity. Their mission is to catalyze innovation through education, conversation, and collaboration.
The views and opinions expressed here are those of the speakers and do not necessarily reflect the official policy or position of the Pacific Council.