AI in cybersecurity

Why we need governance to support the development of AI in cybersecurity

Artificial Intelligence is set to play an increasingly important role in our lives. And while the general public may still consider it the stuff of science fiction, it is already all around us, if only in discourse, and it is here to stay. But we must equip ourselves if we are to truly make the most of this progress and master its by-products.

How the media understands AI

Artificial Intelligence is on everyone’s lips, but few know where it comes from or what it represents, and no one can predict the impact it will have in twenty years’ time. That doesn’t stop some from expressing their feelings on the matter, however. On one side, the founder of Facebook, Mark Zuckerberg, is rather optimistic about AI and has spoken of how it will eliminate car accidents. Yet AI systems have bugs, and some of those bugs will themselves cause accidents. It is understandable that he keeps that part to himself, given that barely a month goes by without new AI-based features appearing on his social network. On the other side stands Elon Musk, the visionary entrepreneur behind Tesla Motors and SpaceX, who sees Artificial Intelligence as a genuine danger, with robots taking to the streets and killing people. These two world-famous entrepreneurs appraise AI only through the prism of innovation (i.e. what it could bring), without necessarily considering its present reality.

In our ever more connected world, and with interest in AI growing, it is striking how poorly informed people are about Artificial Intelligence. Yet authority figures in the field do exist; they just don’t make themselves known. One such individual is Rodney Brooks, former director of MIT’s AI laboratory, a go-to expert who has dedicated his life to the subject and witnessed the technology’s evolution. Brooks notes that while Musk may criticise Zuckerberg for having no idea what he’s talking about, neither of them actually works in the field. And from his point of view, there is still a long road ahead before AI becomes a profitable venture: the technology is promising, but it lacks the maturity to be deployed operationally. Its true impact is therefore difficult to measure.

Artificial Intelligence and cybersecurity: the present and future

Although progress continues to be made, Artificial Intelligence remains immature, and its real-world application still falls far short of decision-makers’ expectations. Initial results are encouraging: we are able to detect unknown anomalies in IT logs that traditional methods would have missed. Precision, however, is still well below industry standards: there are too many false alerts, which would eventually wear operators down. Nonetheless, our knowledge of AI’s realities on the ground enables us to provide specialist advice on the subject to our clients. We are now in a position to know where to head, and in which direction to move, to operationalise AI.
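To make the kind of anomaly detection described above concrete, here is a minimal sketch of one common unsupervised approach, using scikit-learn’s Isolation Forest on log-derived session features. The feature names, the synthetic data and the contamination rate are illustrative assumptions, not a description of any actual pipeline.

```python
# Minimal sketch: unsupervised anomaly detection over log-derived features.
# The features and thresholds below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-session features extracted from IT logs:
# [requests_per_minute, bytes_transferred, distinct_endpoints]
normal = rng.normal(loc=[60, 5e5, 8], scale=[10, 1e5, 2], size=(1000, 3))
anomalous = rng.normal(loc=[600, 5e7, 90], scale=[50, 1e6, 10], size=(5, 3))
sessions = np.vstack([normal, anomalous])

# Isolation Forest isolates outliers without needing labelled attack data.
# 'contamination' encodes our guess at the anomaly rate; set it too high
# and operators are flooded with exactly the false alerts discussed above.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(sessions)  # -1 = anomaly, 1 = normal

print(f"Flagged {np.sum(labels == -1)} of {len(sessions)} sessions as anomalous")
```

Tuning that contamination parameter is precisely the precision trade-off mentioned above: flag too much and analysts drown in false alerts; flag too little and genuine attacks slip through.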

When it comes to solutions in the pipeline, we are at the start of a promising and burgeoning market. The obvious example is IBM’s Watson, which supports Security Operations Center analysts: here, AI is used to build a kind of encyclopaedist that reads, analyses and, to some extent, understands the cyber-threat literature. Another is Vectra, whose Machine Learning techniques make it a major ally in detecting network attacks. These are not the only examples, and the list grows daily.

So how is cybersecurity going to change in light of AI? Our current vision is that of a chain of components linked together to fence off a perimeter, ranging from infrastructure, access and identification through to governance. AI, however, is making machines more intelligent, capable of learning and adapting. Since most of the chain’s links are machines, the chain itself is destined to become intelligent. We therefore predict that tomorrow’s discussions will centre on a group of components that analyse, communicate and adapt to local conditions.

The need for a framework, a form of governance, for Artificial Intelligence

AI is unavoidable, because it provides the technological answer to the information overload we currently suffer. It will allow us to collect all the knowledge humans have digitised and present it in context (Watson, for example). We believe it should be used to let humans and machines solve problems together, rather than one exploiting the other: a machine, after all, only follows the orders a human gives it. In the future, it could take on our chores and ease our work. So yes, socially speaking, AI is going to change the workplace, and probably affect how companies are organised at their core. Some jobs will be replaced, others will evolve: machines will always need humans to operate and maintain them. And new positions will be created. The fantasy of fully autonomous machines is just that: a fantasy.

If we are serious about governing AI and want to be able to anticipate, we must refocus the debate and stop speaking in terms of good or bad. AI, like every technology before it, will become what we make of it. There is currently a widespread misconception of what AI represents, what it is and what it could be, among politicians and decision-makers as much as entrepreneurs and the general public. The French government’s recent announcement of a National Strategy Plan for Artificial Intelligence is therefore great news, providing a framework to align AI’s development with public policy. Because what we humans need in relation to AI, at every level from government through business down to the individual, and before any technical solution, is education, familiarisation and a genuine culture that will allow us to make informed decisions.
