Should we be scared of AI?
Artificial Intelligence, or AI, has certainly been making waves over the past few years. In 2011 IBM's Watson system won the game show Jeopardy!, and in 2016 Google's AlphaGo beat the world's best Go player. Not only are these results impressive, but experts had deemed such feats unthinkable for at least another decade.
These breakthroughs are the result of learning systems, which accumulate evidence across a mass of different examples. Such systems reproduce what they have learned, searching for the best answers through a well-balanced exploration of the viable pathways in a given context. What these feats now call for is a real boost in the power of the systems that produced them. We can but welcome these achievements, a bit like a new lunar conquest (although, 50 years after our own moon programme, we're not planning a trip back there anytime soon).
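The "well-balanced exploration of viable pathways" mentioned above is the exploration-exploitation trade-off at the heart of game-playing systems like AlphaGo. A minimal sketch of the idea, using the standard UCB1 rule on a toy multi-armed bandit (the three "pathways" and their reward probabilities are invented for illustration):

```python
import math
import random

def ucb1_bandit(reward_probs, n_rounds=5000, seed=0):
    """Choose repeatedly among 'arms' (candidate pathways), balancing
    trying under-explored arms against exploiting the best-looking one."""
    rng = random.Random(seed)
    counts = [0] * len(reward_probs)    # times each arm was tried
    totals = [0.0] * len(reward_probs)  # total reward per arm
    for t in range(1, n_rounds + 1):
        if 0 in counts:
            # Try every arm at least once before scoring them.
            arm = counts.index(0)
        else:
            # UCB1 score: average reward plus an exploration bonus
            # that shrinks as an arm gets tried more often.
            arm = max(range(len(counts)),
                      key=lambda a: totals[a] / counts[a]
                                    + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < reward_probs[arm] else 0.0
        counts[arm] += 1
        totals[arm] += reward
    return counts

# Three candidate pathways; the last is genuinely the best.
counts = ucb1_bandit([0.2, 0.5, 0.8])
```

After a few thousand rounds, the best pathway has been visited far more often than the others, without the weaker ones ever being entirely abandoned: that is the "well-balanced" part.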
As industrial robots already replace us in tiresome and repetitive jobs, we might ask ourselves whether they are going to replace us in all domains.
When we're dealing with Big Data, learning algorithms can be set to work, their results improving over time until they surpass human ones. For example, Google's recognition of house numbers from images of house fronts covered the whole of France in less than an hour, with a smaller error rate than the human control team. We don't know the exact amount of computing power used, but spinning up several thousand learning machines is no longer a daunting prospect.
We can't really speak of "intelligent systems"; we should rather call them "knowledge systems". Cognitive approaches don't invent anything, and that leaves us with a fighting chance.
At the very least, these systems will become useful partners, like some of the tools we already use daily to assist our memory (who's calling, when and where a meeting is, how to get there…) or to bring us closer to others (who's buying what, who has a similar profile, is the room captivated, etc.).
For example, one of these AI companions can immediately find – better than any human – the case law relevant to a court case it is listening to, but it won't be able to argue the counter-case; that will remain the lawyer's prerogative. Another AI companion will be able to "diagnose" a broken-down cable TV set from a set of observed "symptoms", but it won't have the age-old reflex of giving it a gentle thump to see if that helps!
Of course, the results aren't perfect yet, but the path is wide open and just needs that little bit more to reach perfection:
- More examples: whether text banks, image banks, videos or personal expression, digitisation is getting there. The more training data, the more reliable the results.
- More power: Today, the cloud offers the required processing means, and the future also lies in specialist hardware. GPUs already allow massively parallel computation, and new neural chips such as IBM's TrueNorth are emerging. This power is coming, and we can even bet that a good chunk of it will end up in our smartphones.
- More interest: The big players' commitment, seen through the widely-known achievements already mentioned, is having the expected effect. So as not to be left behind, companies are approving budgets, mobilising teams and authorising the exploration of new pathways. The digital world is already being relabelled as AI.
- More professionals: AI features on college and university programmes, several open teaching resources exist, and the large players are lowering the bar with easy-to-use, high-level services (speech, image, emotion and personality recognition, etc.).
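The first point above, "the more training data, the more reliable the results", can be seen on a toy problem. A minimal sketch with invented synthetic data, not any production pipeline: a nearest-centroid classifier trained on two point clouds, whose accuracy on a fixed test set improves as the training set grows.

```python
import random
import statistics

def make_points(n, center, spread, rng):
    """Generate n synthetic 2-D points scattered around a class centre."""
    return [(rng.gauss(center[0], spread), rng.gauss(center[1], spread))
            for _ in range(n)]

def centroid(points):
    xs, ys = zip(*points)
    return (statistics.fmean(xs), statistics.fmean(ys))

def accuracy(n_train, seed=0):
    """Train a nearest-centroid classifier on n_train examples per class
    and measure its accuracy on a fixed 400-point test set."""
    rng = random.Random(seed)
    train_a = make_points(n_train, (0, 0), 2.0, rng)
    train_b = make_points(n_train, (3, 3), 2.0, rng)
    ca, cb = centroid(train_a), centroid(train_b)

    test_rng = random.Random(42)  # same test set for every n_train
    test = ([(p, "a") for p in make_points(200, (0, 0), 2.0, test_rng)] +
            [(p, "b") for p in make_points(200, (3, 3), 2.0, test_rng)])

    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    correct = sum(1 for p, label in test
                  if ("a" if dist2(p, ca) < dist2(p, cb) else "b") == label)
    return correct / len(test)
```

With a handful of training points the estimated class centres are noisy and accuracy wobbles; with hundreds of points per class it settles near the best achievable for these overlapping clouds. Real vision systems are vastly more complex, but the principle scales the same way.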
But to come back to the fear in this article's title, a little more concern is also warranted: we don't really know what's going to be pulled out of the hat.
Take, for example, the open group OpenAI, which could be described as ethical. OpenAI raised one billion dollars in not-for-profit funding, initially so that everyone can benefit from AI breakthroughs, but above all to prevent this currently uncontrollable force from being used for dominating ends, whatever those may be.
AI is well and truly a revolution in industry and computing (learning directly from examples, rather than from conceptualising and programming).
AI isn’t creative: it learns from our experiences.
It will assist us and replace us in all recognition tasks. AI will come to master knowledge just as it can master factory work.
We need to remain vigilant about the uses and evolution of AI, and perhaps even prepare ourselves for a new world in which a good part of routine information-retrieval work will die out. Let's hope that, should this happen, it will be to the benefit of the creative arts, which remain entirely ours.
And to finish, some could imagine that the will to exist might emerge spontaneously from complexity. We may already be in the midst of creating a conscious entity of a whole new, "utterly inhuman" kind (to borrow the title of Jean-Michel Truong's novel Totalement inhumaine, whose story would be better off remaining fiction). Now that would be scary.