Okay, let's get started. Good afternoon to all of you. I'm Lydia Kavraki, and I'm a professor of computer science at Rice University in Houston, Texas. I will be coordinating this next session on artificial intelligence and computing together with Professor Theodora Varvarigou, who is with the School of Electrical and Computer Engineering of the National Technical University of Athens. It is now my distinct pleasure to introduce our keynote speaker, Dr. Joseph Sifakis. We'll start with the keynote and we'll proceed with the panel as the other sessions did. Dr. Joseph Sifakis is Emeritus Research Director of Verimag in Grenoble, France. Verimag is a leading research center in the area of embedded and safety-critical systems and is affiliated with the National Center for Scientific Research of France, the Université Grenoble Alpes, and the Grenoble Institute of Technology. Dr. Sifakis is the founder of Verimag and served as its director for 13 years. He was a full professor at the École Polytechnique Fédérale de Lausanne for the period of 2011 to 2016. Dr. Sifakis is a leading figure in the fields of model checking and embedded systems. His recent research focuses on rigorous component-based design and the design of trustworthy autonomous systems, self-driving cars in particular. Dr. Sifakis' work over the years is characterized by an unusual recurrent pattern: the problem is first studied from an abstract, foundational point of view, which leads to methods and techniques for its solution, which in turn lead to an effective implementation that is successfully used in multiple industrial applications. In 2007, Dr. Sifakis received the Turing Award for his contributions to the theory and application of model checking, the most widely used system verification technique. The Turing Award is the highest distinction in computer science and is often referred to as the Nobel Prize of computing.
In his career, he has received numerous distinctions. I'm just going to say that he is a member of six different academies. He has served Greece in multiple capacities, including being president of the Greek Council for Research and Technology from 2014 to 2016. We are honored that Dr. Sifakis has agreed to discuss his view on the topic of computing and artificial intelligence, the topic of our session, and we will follow with a discussion on how this topic is being shaped in the Greek landscape. Dr. Sifakis. Thank you for the introduction. It's a great pleasure to give this talk about computing and artificial intelligence. I will say nothing specific about Greece; hopefully, we will have some discussion about that in the panel. I would like to start by saying that computing and AI evolved in parallel, and both have undergone transformations. Computing was driven by technological convergences with different application areas, and you see here a list. AI has studied systems that mimic human cognitive abilities. Initially it focused more on symbolic approaches, and then connectionist approaches prevailed, and you see a list of sub-areas of AI. But today, when we talk about AI, we mean mainly neural networks and machine learning. I would also like to emphasize that there is currently a lot of confusion about what intelligence is and how it is achieved. The spectacular rise of AI is fueled by the optimism of the media and of large technology companies that suggest that human-level AI is only a matter of time. We know also the mythology that has been developed around ultra-intelligence, and some believe that machine learning and its developments will enable us to meet the intelligence challenge. Of course, this is not my opinion. I think that despite all the spectacular achievements due to neural networks, today we have only weak AI.
This gives us the building blocks for building smart systems, but a lot needs to be done before we can build intelligent systems the way we build bridges and buildings, if this will ever be possible. I don't know; I doubt it. But a big step toward general AI is to develop autonomous systems, that is, systems capable of replacing human agents that work in complex organizations. This is something very central in the vision of the IoT. And, as I am going to explain, building trustworthy autonomous systems requires a convergence between conventional computing and AI, because we have to integrate data-based techniques and model-based systems engineering. So this is an outline of my talk, and without delay I will start by comparing human and machine intelligence. You probably know that this is a problem that Alan Turing addressed with his famous test. How does the Turing test go? You have a computer A and a human B in a room, and an experimenter who sends written questions and receives back answers. Turing says that if the experimenter cannot tell which is the computer and which is the person, then the machine is as intelligent as the person. The reason I'm talking about this is that we quite often hear claims that some system has just passed the Turing test, and people say, hallelujah, this is a great achievement. I think that this is technically nonsense. In any case, the Turing test is not appropriate for comparing human and machine intelligence, because success depends on human judgment and is purely subjective. And of course you can introduce a lot of bias by choosing the set of test cases. Another argument is that the test cannot be a question-and-answer game only, because human intelligence is much more than conversation: it is interaction with the environment, speech, movement, social behavior. So instead, in a paper, we proposed another kind of test that I believe is more useful for comparing human and machine intelligence.
I will say that a machine is as intelligent as a human performing a task in a system if I can replace the human by the machine in a seamless manner. So for instance, I can say this machine is as intelligent as a driver if I can put the machine in the place of the driver and I get the same overall behavior. For this kind of test, success can be based only on technical criteria. And, and this is a problem for some people, this requires not only computational intelligence, because in order to build such a system that replaces humans we should also implement sensorimotor functions, and this is a very important problem. Now, another way to compare human and machine intelligence is to think about how we produce, develop and apply knowledge. A very well-known fact is that our mind combines two types of thinking, fast thinking and slow thinking. Here I'm using Kahneman's terminology from his famous book. Fast thinking is non-conscious and automatic; this is the kind of thinking we use when we walk or we speak. We don't understand how it works, but it works. Slow thinking is the source of any reasoned knowledge. Now, it is important to emphasize that there is a very interesting analogy between the two types of thinking and the two types of computers we have today. On one hand we have conventional computers that we program using algorithms, and this is model-based knowledge. On the other hand we use neural networks: these are circuits that we train to distinguish, say, cats from dogs, exactly as children do, but the problem with these systems is that they cannot be verified. So they produce a type of knowledge that is different from scientific and technical knowledge, and here I would like to explain that as engineers we deal with different types of knowledge, and it's important to understand the differences.
This is illustrated by this pyramid here. The blue area is non-empirical knowledge, knowledge that you produce by reasoning, and below you have empirical knowledge, knowledge about the external world. You have very simple knowledge, like events and conditions: the temperature today is 35 degrees. Then you have common empirical knowledge, which is like System 1 knowledge, and which is data-based. This allows prediction, but there is a big difference between this type of knowledge, and I include machine learning and data analytics there, and scientific and technical knowledge. Why? Because for scientific and technical knowledge we require explainability using mathematical models. Remember, when Newton proposed his laws, he also developed differential calculus to justify his theory, to explain his laws. Okay, so in this slide I would like to illustrate the difference between the type of knowledge generated by neural nets and scientific knowledge. Consider the standard experiment of Galileo: he performs some experiments and, through generalization and abstraction, he guesses the model that describes the relationship between force and acceleration. Now, if I want to train a neural network to distinguish between images of cats and dogs, I take images, I have a human experimenter who labels the images, and I train the network. But now the important question is how you can characterize this input-output relation of the neural network, and this is the issue of explainability, which is a very, very important issue in AI. Okay, now a more pragmatic argument regarding the comparison between human and machine intelligence. You've probably seen reports of this kind: an autopilot mistakes the moon for a yellow traffic light, for instance. This can happen to neural networks, but it will never happen to humans. Why?
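The contrast between the two kinds of knowledge can be made concrete with a small sketch (an editorial illustration, not the speaker's example): if we assume the model F = m·a, a least-squares fit of noisy measurements yields a single explainable parameter, the mass, whereas a trained network would only give us an opaque input-output map.

```python
import numpy as np

# Illustrative only: a synthetic "Galileo" experiment for an object of
# true mass 2.0 kg, with small measurement noise on the acceleration.
rng = np.random.default_rng(0)
force = np.linspace(1.0, 10.0, 50)               # applied force (N)
accel = force / 2.0 + rng.normal(0.0, 0.05, 50)  # measured acceleration (m/s^2)

# Model-based knowledge: assume Newton's law F = m * a and estimate m
# by least squares. The fitted parameter is explainable: it is the mass.
mass_est = np.sum(force * accel) / np.sum(accel ** 2)
print(f"estimated mass: {mass_est:.2f} kg")
```

A neural network fitted to the same data could predict acceleration just as well, but it would offer no parameter with a physical meaning.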
Because humans contextualize sensory information and understand that a traffic light cannot be in the sky. The reason for this is that the human mind is equipped with a semantic model of the external world. We don't understand how this semantic model is built, but it is the model we use to interpret sensory information and natural language. Human understanding in fact goes bottom-up, from the sensory level to the semantic level, and top-down, from reasoning on the semantic model back to perception. To give you another example: if you see this photo here, you'll say, oh, this is a stop sign. Perhaps you've never seen a stop sign covered by snow, but you know what snow is and what a stop sign is. Of course, if you wanted to train neural networks to recognize this situation, you would have to train them for the whole variety of weather conditions. And this is another example. It shows a sequence of images. If you see it, you say, oh, this is probably an aircraft accident. And this is something very, very hard for neural networks to infer. Why? Neural networks can analyze an image but cannot find the causal relationships. They need some model of the external world. So, to summarize: humans are much superior to machines in situational awareness, and the challenge here, and I think this is a big challenge of AI today, is how to combine learning and reasoning: how to build a semantic model of the external world and be able to interpret the world. Okay, I'll stop here. Now let me talk about autonomous systems. Why are autonomous systems important? Because in my opinion they are the result of the convergence between computing and AI, and they are an important step toward general AI. I take self-driving cars as an example because it is an example that is easy to understand. So here I'm showing the behavior of an agent that controls a car. The external environment, you see here, is like that: this is an electromechanical system, and you have sensors and actuators.
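The idea of a semantic model constraining perception can be caricatured in a few lines. This is a purely hypothetical sketch, not a real perception stack: a trivial rule encoding the world knowledge that traffic lights hang over roads, not high in the sky, vetoes implausible detections. All names and the horizon value are assumptions made for illustration.

```python
# Hypothetical semantic sanity check layered on top of a perception module.
HORIZON_Y = 240  # assumed pixel row of the horizon in a 480-pixel-tall frame

def plausible(label, y_top, height):
    """Veto detections that contradict simple world knowledge."""
    if label == "traffic_light" and y_top + height < HORIZON_Y // 2:
        return False  # the whole box sits high in the sky: likely the moon
    return True

detections = [("traffic_light", 10, 40),    # box near the top of the frame
              ("traffic_light", 200, 40)]   # box near road level
kept = [d for d in detections if plausible(*d)]
print(kept)  # only the road-level detection survives
```

The point is not the rule itself, which is laughably crude, but that top-down knowledge filters bottom-up perception, which is exactly what pure end-to-end networks lack.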
It's obvious how it works. And of course you need some module for situational awareness, for understanding what happens, and for decision making. Situational awareness means that I can analyze the sensory information I perceive: I find obstacles, I know the kinetic state, I build a model of the external world. Based on this model I will see which goals are applicable, I will pick a set of applicable goals, and I will generate the corresponding plans. Okay, these are standard ideas that come also from robotics, I think. Another idea is to have here this module for self-learning, to be able to learn from what we observe and create knowledge that will allow us to increase predictability and also to make better decisions. I have no time to discuss this in detail. All I would like to emphasize is that we don't know how to build these systems today, because there are complexity problems: the complexity of perception, which is well understood; the complexity of uncertainty, because we cannot predict the behavior of the environment; and the complexity of decision, because you deal with many different types of goals, and the goals in fact have different time constants. It's a very complex problem, and the complexity of plan generation can also be huge. But what I would like to emphasize is that in addition to this intrinsic complexity, in order to build autonomous systems you have to solve some very hard systems engineering problems. I have a concept called system complexity: it is the product of component complexity and architectural complexity. My point of view is that autonomous systems are the hardest systems you can imagine to build, because the components are cyber-physical systems and the architectures involve dynamism in time, space, and organization.
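The loop just described, perception updating a world model, goal management selecting among applicable goals, and plan generation producing a command, can be sketched as follows. Every class, rule, and name here is an illustrative assumption, not a real autonomous-driving API.

```python
from dataclasses import dataclass

@dataclass
class WorldModel:
    obstacle_ahead: bool = False
    def update(self, percept):
        self.obstacle_ahead = percept["obstacle_ahead"]

@dataclass
class Goal:
    name: str
    priority: int
    def applicable(self, wm):
        # cruising is not an applicable goal when an obstacle is ahead
        return self.name != "cruise" or not wm.obstacle_ahead
    def plan(self, wm):
        return "brake" if self.name == "avoid" and wm.obstacle_ahead else "accelerate"

def agent_step(percept, wm, goals):
    wm.update(percept)                                   # situational awareness
    applicable = [g for g in goals if g.applicable(wm)]  # goal management
    chosen = max(applicable, key=lambda g: g.priority)   # decision
    return chosen.plan(wm)                               # plan generation -> command

wm = WorldModel()
goals = [Goal("cruise", 1), Goal("avoid", 2)]
print(agent_step({"obstacle_ahead": True}, wm, goals))
```

In a real system each of these one-liners hides the complexity the talk enumerates: perception, uncertainty about the environment, and goals with very different time constants.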
Now, something that is very important for autonomous systems, and not so important for intelligent systems like language translators or personal assistants, is that you should guarantee their safety, their trustworthiness. For safety-critical systems, we have theory, we have standards and methods for how to do that. I will not explain this in full detail. What I would like to show is that we follow design flows with predefined steps, and at each step we use models to justify our decisions. Based on the analyses we make along the design flow, you can then say, for instance, that the flight controller has no more than 10 to the minus 9 failures per hour of flight. And this approach does not work for neural networks, for obvious reasons. This also explains the fact that some big companies, like Google and Nvidia, develop solutions for self-driving cars based exclusively on neural networks: they bypass all this, they forget about it. So what's the situation today? On one hand, we know how to apply the classical, conventional approach to build components that are reliable. On the other hand, you have these companies that come with a self-driving platform. The self-driving platform here is a huge neural network that receives frames, is trained by simulation, and generates a steering angle and braking, acceleration and deceleration signals. Of course you can buy such a platform, but nobody believes that it is reliable enough, and additionally you have the whole problem of its integration into an electromechanical system. So I think that, in my opinion, we should start to define design flows that I call hybrid. The idea would be to try to mix components that use neural networks with other, model-based components. Of course this is not an easy problem, and I don't know exactly how to deal with it, but I think this is the way to go, and if you have questions I can elaborate. Now, the last slides are about the future and the impact of intelligent systems.
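One way such a hybrid design flow is often imagined, and this is an editorial sketch of one possible architecture, not the speaker's proposal, is a simplex-style arrangement: a learned controller proposes commands, and a simple model-based safety monitor overrides them whenever a verifiable stopping-distance condition is violated. The `neural_policy` stub and all numbers are placeholders.

```python
def neural_policy(frame):
    """Stand-in for a trained network mapping a camera frame to a command."""
    return {"throttle": 0.8, "brake": 0.0}

def stopping_distance(speed, decel=6.0, reaction=0.5):
    """Model-based bound: v * t_react + v**2 / (2 * a_max)."""
    return speed * reaction + speed ** 2 / (2.0 * decel)

def hybrid_step(frame, speed, gap_to_obstacle):
    cmd = neural_policy(frame)                      # data-based component
    if gap_to_obstacle < stopping_distance(speed):  # model-based guard
        cmd = {"throttle": 0.0, "brake": 1.0}       # verifiable fallback
    return cmd

print(hybrid_step(frame=None, speed=20.0, gap_to_obstacle=30.0))
```

The attraction of this split is that the guard, unlike the network, is small enough to be verified with classical techniques, which is precisely the property the classical design flows above rely on.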
I said that we should break with the traditional techniques and still have guarantees of trustworthiness. Design cannot be entirely model-based, and design correctness cannot be achieved entirely at design time. Today, if we design a critical system, it is checked and then it is closed; you cannot modify a line of code. But you know that for Tesla cars, for instance, you have updates of critical software over the air. So this is a problem, I don't know how to solve it, but I think this is an important trend you should take into account. I talked already about hybrid design. Something else very important in a new systems engineering practice would be to find global system validation techniques. You've probably also seen that some companies say, we've driven so many billions of simulated miles and our systems are safe. This is not a technically valid argument, simply because we don't know how the simulated miles are related to real miles; you need some theory about that, and I don't have time to discuss it. To finish about autonomous systems: there is a big gap between automated and autonomous systems, and we will need some time, if we can ever get there, to reach full autonomy. Something else I would like to discuss, in concluding, is that intelligent systems could help us overcome some limitations we have in the development of knowledge.
You know that the human mind is limited by what we call cognitive complexity: all the scientific theories we have involve a small number of independent variables, elements, et cetera, and often we study complex phenomena, like economic phenomena, by doing some simplification. This is a very well-known phenomenon in economics: they don't model the human factor, just because they cannot, they don't know how to do it. So I think that by using supercomputers and AI we can build what I call neural oracles: neural networks with millions of parameters that we can train to predict complex phenomena. I know, for instance, of a project, supported by Google, for predicting earthquakes. I think that with little theory and a lot of data, perhaps they can do better than the theoreticians; we will see. Of course, using this kind of science will pose some problems, and in particular we understand that knowledge production is not a privilege of humans, and the question then is the division of work between machines and humans in this process.
Finally, and this is the last slide, I would like to say that our progress in building autonomous systems will ultimately depend on our ability to deepen our understanding of the mind-brain relationship. I know that some people say the mind is not important, or that the mind does not exist; you know the philosophical debates around the body-mind problem. I don't care whether the mind exists or not: mental phenomena are very important for understanding human behavior, just as software is very important for understanding what your computer does. So I hope that in the future we'll have more projects that focus not only on the study of the circuits of the brain, but on the relationship between the brain and the mental system of humans. I know of projects that have promised a lot about, say, unraveling consciousness and things like that, and they did not give anything concrete; I have written a text about that. So I would like to finish with this: I think that we should invest more money to create interdisciplinary projects to explore what I call the big bang of consciousness, just as we explore the big bang of cosmology. Why should the big bang of cosmology be more important than the big bang of consciousness? I think this can be discussed. And here we have a list of questions, so I think I'll stop here. If we understand how intelligence and consciousness have been created during evolution, this will be something very, very important for humanity. I would like to thank you for your attention. Thank you.