Although we’re a very long way from putting artificial intelligence (AI) in charge of national defense, the use of AI in cybersecurity isn’t science fiction. The ability of machines to rapidly analyze and respond to unprecedented quantities of data is becoming indispensable as the frequency, scale and sophistication of cyberattacks continue to increase.
The research being done today shows that automated cybersecurity systems can do many things with only limited human oversight. Through neural networks, heuristics, data science and other techniques, systems are being designed to identify cyberattacks, to spot and remove malware, and to find ways to fix bugs faster than any human could. In some respects, this work is simply an extension of the principles that people have grown accustomed to in their mail filters or firewalls. That said, there is something qualitatively different about AI’s “end game”: cybersecurity decisions taken by technology without human intermediation.
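To make the idea of automated detection concrete, here is a minimal sketch of the kind of heuristic building block such systems rest on: flagging traffic windows whose volume deviates sharply from the baseline. The function name, data and threshold are illustrative assumptions for this post, not any specific product’s method.

```python
# A toy statistical heuristic: flag time windows whose request volume
# deviates sharply from the historical mean (a crude z-score test).
# Real automated defenses layer many such signals with learned models.
import statistics

def flag_anomalies(request_counts, z_threshold=2.0):
    """Return indices of windows whose count is more than
    z_threshold standard deviations from the mean."""
    mean = statistics.mean(request_counts)
    stdev = statistics.stdev(request_counts)
    if stdev == 0:
        return []  # perfectly uniform traffic: nothing stands out
    return [i for i, count in enumerate(request_counts)
            if abs(count - mean) / stdev > z_threshold]

# A traffic spike at index 6 stands out against steady baseline load.
traffic = [100, 102, 98, 101, 99, 103, 500, 100]
print(flag_anomalies(traffic))  # → [6]
```

A system acting “without human intermediation” would go one step further and, for example, throttle or quarantine the flagged source automatically rather than alert an analyst.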
This novelty brings with it entirely new challenges. For example, what would legal frameworks around such cybersecurity look like? How would we regulate their creation and their use? What would we in fact regulate? There has already been some insightful writing and research done on this (see Potential AI Regulatory Problems and Regulating AI systems, for example), but for policy-makers the fundamental challenge of defining what an AI is and what it is not remains. Without such fundamentals, even outcomes-oriented approaches could fall short, as there would be no certainty about when they must be applied.
“If our brains were simple enough for us to understand them, we’d be so simple that we couldn’t.” (Ian Stewart, The Collapse of Chaos: Discovering Simplicity in a Complex World)
In fact, AI technologies will be complex. Many government policymakers may struggle to understand them and how best to oversee their integration and evolution in government, society and key economic sectors. This is further complicated by the likelihood that the development of AI will be a globally distributed effort, operating across jurisdictions with potentially distinct approaches to regulation. Smart cars, digital assistants, and algorithmic trading on financial markets are already pushing us towards AI. How can we improve understanding of the technology, transparency about its decision-making, integrity of its development and ethics, and actual control of the technology in practical terms?
But it is also critical to understand the role AI can and will play in cybersecurity and resilience. The technology is initially likely to be “white hat”, enabling critical infrastructures to protect themselves, and the essential services they provide to the economy, society and public safety, in novel ways. AI may enable systems to anticipate and rapidly mitigate security incidents or advanced persistent threats. But, as we have seen across cybersecurity, criminal organizations and nation states will likely seek to exploit AI to evade defenses or even to mount attacks of their own. This makes reaching consensus on cybersecurity norms more important and more urgent, and the work on those norms will need greater public and private sector cooperation globally.
In conclusion, it is worth noting that despite the challenges AI poses for cybersecurity, there are also interesting and positive implications for the balance between cybersecurity and cyber-resilience. If cybersecurity teams can rely on smart systems to play defense, their focus can turn to preparing for the consequences of a successful attack. The ability to reinvent processes, to adapt to “black swan” events and to respond to developments that violate the fundamental assumptions on which an AI is built should remain distinctly human for some time to come.
from Paul Nicholas