Held on Thursday, December 13 in Paris, the new ICT and Geopolitics conference at EPITA examined the question of artificial intelligence (AI) in the company of Amal El Fallah Seghrouchni, professor at Sorbonne University and researcher at the Laboratoire d’informatique de Paris 6 (LIP6), and Cédric Villani, member of Parliament and renowned mathematician. Questioned by journalist Nicolas Arpagian as well as by the students and professionals in the audience, the two experts provided a few keys to understanding the stakes behind a technology that is no longer confined to the realm of science fiction.
What is AI?
Amal El Fallah Seghrouchni: “It’s a reality, not a fantasy. It is also a true technological breakthrough, one that extends far beyond the previous industrial revolutions. We are confronted with AI daily in our telephones, cars, and homes, and it has brought about real change in the way we use objects! Pervasive and diffuse, AI challenges our senses. It will replace vision, play a role in speech, writing, and so on. It will bring answers and solutions to highly intimate human functions. It will also be able to generate knowledge and awaken consciousness, shaking up paradigms above and beyond technologies. This is why AI requires safeguards.”
Cédric Villani: “AI is also a misleading name, one that researchers have debated since the 1950s. The term is so vague, and so suggestive of fantasy rather than reality, that it is naturally disconcerting for a scientist. Today, there are three distinct approaches to AI. First, there is the systems approach carried out by experts in symbolic representation, aimed at understanding what is happening. Then there is the approach that consists of using a large data set to make an algorithm reproduce an action. Finally, there is the approach that seeks to explore all possibilities in order to systematize curiosity. In the industrial sector, the highest stakes clearly concern the second approach: collecting massive amounts of data to produce correlations. AI is also a subject at the heart of a major international issue.”
Data and AI, two sides of the same coin?
Amal El Fallah Seghrouchni: “Although there has long been very serious research in symbolic AI, the field really took off when GAFA began to show interest, thanks to data. In my opinion, data, rather than intelligence, is the driving force behind AI. A good example is when an algorithm was asked to classify chihuahuas: after a few pixels were modified, the AI began confusing the dogs with muffins! Nonetheless, data brings many things to light. Yet, in order to ensure responsible, acceptable and socially accessible AI, we need safeguards, through calculations, symbolic representations and laws.”
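The fragility described here — a few modified pixels flipping a classification — can be illustrated with a toy linear classifier. This is a hypothetical sketch, not the actual chihuahua/muffin model: nudging each input value slightly against the weight vector is enough to flip the decision.

```python
# Toy illustration of adversarial fragility: a tiny, targeted
# perturbation flips a linear classifier's decision.
# (Hypothetical sketch -- not the real chihuahua/muffin classifier.)

def classify(weights, x, bias=0.0):
    """Return 'chihuahua' if w.x + b > 0, else 'muffin'."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return "chihuahua" if score > 0 else "muffin"

weights = [0.5, -0.3, 0.8]   # an assumed, already-trained linear model
image = [0.2, 0.1, 0.0]      # a "chihuahua" input: score = 0.07 > 0

# Shift every "pixel" by only 0.05, against the sign of its weight.
eps = 0.05
sign = lambda w: 1.0 if w > 0 else -1.0
adversarial = [xi - eps * sign(w) for w, xi in zip(weights, image)]

print(classify(weights, image))        # chihuahua
print(classify(weights, adversarial))  # muffin (score drops to -0.01)
```

The perturbation lowers the score by eps times the sum of the absolute weights, so even a visually negligible change can cross the decision boundary — the same effect exploited by adversarial examples on image classifiers.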
Cédric Villani: “We cannot really believe that symbolic representations and correlations alone can lead to something truly intelligent.”
Are GAFA beyond reach in Europe?
Cédric Villani: “This is a complex subject, as GAFA began with sheer volume, via platforms and usage. They became more influential with the arrival of new, more powerful computing and cloud infrastructure. Given the amount of data they possess, they finally had everything they needed at their fingertips. All that was left was to attract talented individuals from leading universities around the world, create laboratories and recruit the best engineers worldwide. That is the reason we are lagging behind today. It is important to attract AI researchers, who need to work on large machinery to be fulfilled. We must improve our appeal and develop European research laboratories. Above all, the more data you have, the more you can improve your algorithms. The Google search engine, its autonomous car and Google Translate function more accurately because of the thousands of searches undertaken, miles driven, and translations completed.”
Amal El Fallah Seghrouchni: “Artificial intelligence has become a service that is nearly free. Providing solutions to third-party developers who have their own data sets helps generate AI algorithms, and the more frequently AI is used by third parties, the better it becomes. Not everyone has understood that the data retrieved by GAFA comes from a large number of citizens who, simply by using the systems, help generate algorithms at a reduced cost, nearly free of charge. The strength of GAFA lies in the fact that society as a whole contributes. Nothing is free, because you help Google and other businesses improve. Hence, in addition to the turnover generated – which, over time, will be far from negligible – the opening of these services will ensure their optimization in the long term.”
Cédric Villani: “The slogan is well known: ‘If it’s free, you’re the product.’ In Europe, the strategy is different because the situation differs from country to country. In the United Kingdom, there is long-standing expertise in information technology, particularly in entities such as the Turing Institute – proficiency that we will soon strive to build in France. Our British neighbors have developed significant skills in certain areas, such as cybersecurity, cyber intelligence, etc. And they contribute to European knowledge in the field of AI because they do not want to be isolated, particularly with the looming prospect of Brexit. France, which was the next country to examine the question, also has a part to play. Several years ago, public authorities were unaware that certain internationally recognized AI experts were French, and they were not familiar with the subject of data. Since then, particularly thanks to the report by Axelle Lemaire on the subject, followed by my own, an awareness has emerged that has encouraged reflection spanning the social sciences and the exact sciences. Now, the main unknown is not the technology itself, but how humans will adapt to it and vice versa. Germany announced its strategy in November. Yet it remains very German, built on research and industry – the latter nationally based. We will have to make an effort to forge real ties. Then there is Northern Europe, composed of highly proactive countries endowed with a large capital of trust, which differ from France and Germany in their technological practices. This difference can be explained by our common history: on both sides of the Rhine, the compilation of data is a touchy issue. These questions are much less sensitive in the Nordic countries, however. And finally, Europe must also be built together with the Eastern European countries: treasure troves of geeks! This is obvious from these nations’ yearly results at the International Olympiad in Informatics.
Essentially, the main question is how we can bring all of these countries together. For legal matters, the community-based approach works well. However, it is more delicate for matters concerning research cooperation on a European scale.”
Amal El Fallah Seghrouchni: “Nonetheless, we did not have to wait for the rise of AI to create European research programs, which have long existed! In France, there are already many excellent collaborations and networks, particularly with Germany, as well as with the Netherlands and several Eastern European countries, such as Romania and Poland, which historically boast very good schools of mathematics. More recently, there have been several initiatives for responsible and ethical AI, with platforms designed to create open algorithms in service of the common good.”
Cédric Villani: “A war with GAFA would not only be useless but doomed to be a losing battle. We must think differently, create collaborations, and so on. Two European networks of international researchers are currently being set up – one on machine learning, the other on symbolic AI – and they are trying to develop an alliance to address the economic and industrial issues in order to continue their research efforts.”
Are we moving towards a Hippocratic oath for AI?
Amal El Fallah Seghrouchni: “People are concerned about the development of AI. Last year, for example, over 2,350 researchers signed a document containing the 23 Asilomar principles (reminiscent of Asimov’s laws of robotics). These principles are intended to act as safeguards overseeing AI research, ethics and forecasting. The principles make sense, even though they are sometimes a bit utopian. In fact, although they are simple, they are difficult to apply in a concrete manner, because AI and its concepts are constantly evolving: how can we audit an algorithm that is able to modify itself?”
Cédric Villani: “Let’s take the example of fraud in the food industry. Obviously, not everything can be controlled. It will certainly be necessary to control AI in the same way, with occasional inspections to make sure that the algorithms do not discriminate, that they are reliable, etc.”
Amal El Fallah Seghrouchni: “Algorithms that learn will be difficult to formally audit or validate.”
Cédric Villani: “In any case, if there is no policing function, it is useless to have rules. Code and data must be tested at time T.”
Amal El Fallah Seghrouchni: “It also depends on the approach. Consider the work of Jacques Pitrat, one of the pioneers of AI, who designed an artificial AI scientist, CAIA. The idea was that rather than create algorithms endowed with AI, it is better to build AI with AI. CAIA is capable of solving problems. Jacques Pitrat’s approach rests on bootstrapping, intelligence and the generation of meta-knowledge. To date, CAIA has generated 44,414 constraints – of which only 12,120 are useful – and 3,470 pages explaining its proofs for the problem of N×N quasigroups (Latin squares: each element appears exactly once in each row and each column). What is also interesting is that Jacques Pitrat showed a generic program can be more efficient than a dedicated program specially designed to solve the problem. This may sound counter-intuitive, but the reason is that the generic program has useful methods for solving other problems, methods we would not think to include in the specific program.”
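The Latin-square property mentioned above — every symbol occurring exactly once in each row and each column — is simple to state as a check. This is a minimal sketch of the definition only; CAIA's own generated constraints and proofs are of course far more elaborate.

```python
def is_latin_square(grid):
    """True if the N x N grid contains each of the N symbols
    exactly once in every row and every column."""
    n = len(grid)
    symbols = set(range(n))
    rows_ok = all(set(row) == symbols for row in grid)
    # zip(*grid) transposes the grid, yielding its columns.
    cols_ok = all(set(col) == symbols for col in zip(*grid))
    return rows_ok and cols_ok

valid = [[0, 1, 2],
         [1, 2, 0],
         [2, 0, 1]]
invalid = [[0, 1, 2],
           [1, 1, 0],     # '1' repeated in this row
           [2, 0, 1]]

print(is_latin_square(valid))    # True
print(is_latin_square(invalid))  # False
```

Checking a candidate grid is trivial; the hard part, which CAIA addresses, is searching the combinatorial space of such grids under additional quasigroup constraints.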