Many feel Artificial Intelligence (AI) is the panacea for cybersecurity. Once we get AI, we will defeat the cybercriminals!
Not so fast. While AI can certainly enhance cybersecurity if appropriately applied, it is not as foolproof as one might think.
Jason Matheny, founding director of Georgetown's Center for Security and Emerging Technology (CSET) and currently a commissioner on the National Security Commission on Artificial Intelligence, said: "AI systems are not being developed with a healthy focus on evolving threats, despite increased funding by the Pentagon and the private sector. A lot of the techniques that are used today were built without intelligent adversaries in mind. They were sort of innocent."
There are three types of known threats to AI:

- Adversarial examples: inputs deliberately crafted to fool a trained model into making the wrong decision.
- Data poisoning: corrupting or injecting training data so the model learns the wrong behavior in the first place.
- Model theft: stealing or reverse-engineering a model, often by repeatedly querying it, which exposes both intellectual property and new avenues of attack.
Despite these three vulnerabilities, less than one percent of AI research and development funding goes toward AI security. That means that, as with other legacy IT, security will have to be retrofitted into AI models after they are built, an approach that has repeatedly proven complicated and flawed.
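To make one of these threats concrete, consider data poisoning. The sketch below is purely illustrative (the classifier, data, and numbers are all invented for this example): it trains a toy nearest-centroid classifier twice, once on clean data and once after an attacker slips mislabeled points into the training set, and shows how badly accuracy degrades.

```python
import random

random.seed(0)

def make_data(n=200):
    """Two well-separated 2-D clusters: class 0 near (0, 0), class 1 near (4, 4)."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        point = (random.gauss(4 * label, 1.0), random.gauss(4 * label, 1.0))
        data.append((point, label))
    return data

def train(data):
    """'Training' here is just computing one centroid per class."""
    def centroid(points):
        return (sum(x for x, _ in points) / len(points),
                sum(y for _, y in points) / len(points))
    c0 = centroid([p for p, lbl in data if lbl == 0])
    c1 = centroid([p for p, lbl in data if lbl == 1])
    return c0, c1

def accuracy(model, data):
    """Fraction of points assigned to their true class's nearest centroid."""
    c0, c1 = model
    correct = 0
    for (x, y), lbl in data:
        d0 = (x - c0[0]) ** 2 + (y - c0[1]) ** 2
        d1 = (x - c1[0]) ** 2 + (y - c1[1]) ** 2
        correct += (0 if d0 < d1 else 1) == lbl
    return correct / len(data)

train_set = make_data()
test_set = make_data()
clean_model = train(train_set)

# Data poisoning: the attacker injects 50 far-away points falsely labeled
# "class 0", dragging that class's centroid well off target.
poison = [((random.gauss(20, 1.0), random.gauss(20, 1.0)), 0) for _ in range(50)]
poisoned_model = train(train_set + poison)

print("clean accuracy:   ", accuracy(clean_model, test_set))
print("poisoned accuracy:", accuracy(poisoned_model, test_set))
```

Real-world poisoning is far subtler than this, but the principle is the same: an attacker who can influence the training data can steer the model's behavior without ever touching the deployed system.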
AI and machine learning will enhance security, but we must never forget that intelligent humans with malicious intent are lurking behind these attacks. We cannot afford to get complacent and assume a tool alone will protect our assets.