Good evening. Kia ora tātou. Welcome to the first of this year's Gibbons Lectures. My name's John Hosking. I'm the Dean of Science here at the University of Auckland. It's my very great pleasure to welcome you here this evening. The Gibbons Lectures are an annual series of talks hosted by the Department of Computer Science in association with ITPNZ. The goal of each lecture is to describe developments in a particular research area in detail, to a general but technical audience, from computer science students at all levels to IT practitioners in other departments and outside the university. The Gibbons Lectures are named in memory of Associate Professor Peter Gibbons, a former head of the Computer Science Department and a very good friend and mentor to many of us who hail from that department. I'd like to acknowledge Peter's sister Sally, who's here this evening. Welcome, Sally. This year's Gibbons Lectures are on artificial intelligence and its impact. This is, of course, a topic which has been receiving a lot of attention in the popular press, with remarkable successes of software such as machine translation systems, driverless cars and voice response systems, as well as corresponding concerns over job losses through automation. Our lead speaker for this year is Professor Nikola Kasabov from Auckland University of Technology. He will discuss the research progress of AI from its deepest roots to the current frontier: applying AI to the big data of medicine. Nick is Director of the KEDRI research institute at Auckland University of Technology. Originally from Bulgaria, Nick has a PhD from the Technical University of Sofia, has worked at the University of Essex and the University of Otago, and since 2002 at AUT. He has what I can only regard as a phenomenal publication history, with over 600 works. As you can see from the title up there, he's a collector of prestigious fellowships: he's a fellow of the IEEE, the IITP, and the Royal Society of New Zealand, of course.
He has research interests in neurocomputation, artificial intelligence, machine learning, data mining and knowledge engineering, neuroinformatics, bioinformatics, and signal, speech and image processing. This combination makes him eminently qualified to talk to us about the topic in hand. So Nick, I now invite you to deliver the first of this year's Gibbons Lectures, on AI from Aristotle to deep learning machines. Good evening, ladies and gentlemen, colleagues and friends. It is my great pleasure to give the first lecture of this series, organised by the Computer Science Department at the University of Auckland and the Institute of IT Professionals. Thank you very much for organising it. It is a very timely series, having in mind the AI revolution that is going on in the world. There will be four lectures, and if you expect me to cover all aspects of AI, that is not going to happen. What is going to happen is that I am going to talk a little bit about the evolution of AI methods, and I will be a little bit more technical, to explain what is behind this symbol of AI. Is it something we should be frightened of? If we understand it better, we will be more familiar with the developments and we can have more vision about the future of artificial intelligence. What is AI? Well, probably the simplest definition is: it is part of the interdisciplinary information sciences area that develops and implements methods and systems that manifest cognitive behaviour. The main features of current AI, to mention only some of them, are learning, adaptation, generalisation, inductive and deductive reasoning, human-like communication, natural language processing and many more. Some features that we will see in the future are consciousness, self-assembly, self-reproduction and AI social networks, and these features are coming now to current AI systems.
I will talk first about the evolution of the AI methods, and then I will cover a little bit about the computer platforms that artificial intelligence inspired us to build. Then I will talk about applications of AI in New Zealand. We have to be aware of where we are and what we are going to do in the future. The future of AI, of course, is very, very difficult to predict, but I have my view, other lecturers will have their view, and that is a matter of discussion worldwide. The evolution of AI methods: many philosophers consider Aristotle to be the originator of deductive reasoning. Aristotle was a very prominent philosopher and scientist, a pupil of Plato and a teacher of Alexander the Great. If I illustrate what deductive reasoning is, I would use the example that is in all logic books and AI books. We have deductive reasoning if we have a statement like "all humans are mortal", or a rule that says: if human, then mortal. We have a new fact, "Socrates is human", and the deduced inference is "Socrates is mortal". Well, that is the simplest possible example of deductive reasoning, but Aristotle went further. He introduced epistemology, which is based on the study of particular phenomena leading to the articulation of knowledge: rules and formulas across the sciences. So he worked in botany, zoology, physics, astronomy, chemistry, meteorology, psychology, et cetera. According to Aristotle, this knowledge was not supposed to change; it became dogma. In places Aristotle goes too far in deriving general laws of the universe from simple observations, and overstretches the reasoning and conclusions. Because he was perhaps the philosopher most respected in Europe, European thinkers sometimes accepted his erroneous positions, such as the inferior role of women, which held back science and social progress for a long time.
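As a small aside for the technically minded reader: the syllogism above (all humans are mortal; Socrates is human; therefore Socrates is mortal) can be sketched as a tiny forward-chaining program. This is my own illustration, not something presented in the lecture:

```python
# Minimal forward-chaining sketch of Aristotle's syllogism (illustrative only).
# A rule (premise, conclusion) means: if X is <premise>, then X is <conclusion>.
def infer(facts, rules):
    """Apply every rule to every matching fact until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for subject, predicate in list(derived):
                if predicate == premise and (subject, conclusion) not in derived:
                    derived.add((subject, conclusion))
                    changed = True
    return derived

rules = [("human", "mortal")]          # all humans are mortal
facts = {("Socrates", "human")}        # Socrates is human
print(infer(facts, rules))             # includes ('Socrates', 'mortal')
```

The same deduction is what a Prolog interpreter or a rule-based expert system performs, only at a much larger scale.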
But this first deductive logic theory inspired the development of the so-called symbolic AI, where logic rules and deductive reasoning started to appear in the 18th and 19th centuries, including relations and implications, propositional logic, Boolean logic (the basis of our contemporary computers), predicate logic with the language Prolog, probabilistic logic, rule-based systems and expert systems. We should say that logic systems and rules are too rigid to represent the uncertainty in natural phenomena. They are difficult to articulate and not adaptive to change. So a step further, to account for uncertainty in human-like reasoning, was introduced by Lotfi Zadeh with his fuzzy logic. Fuzzy logic deals with so-called fuzzy propositions. Here we have fuzzy membership functions representing a variable called washing time, and the time is represented as short, medium or long through the membership functions of fuzzy terms. The propositions can be fuzzy: "washing time is short". If the washing time is 4.9 minutes, it is short to a degree of 0.8 and medium to a degree of 0.2. So fuzzy rules like "if wash load is small, then washing time is short" can be articulated and implemented, and that was actually a very, very important development of artificial intelligence in Japan, especially with the rice cookers and other fuzzy logic devices. However, fuzzy rules need to be articulated in the first instance. They need to change, adapt and evolve through learning, to reflect the way human knowledge evolves. Further challenges appeared as turning points in artificial intelligence, and one of them was the Turing test for artificial intelligence. The Turing test was initially proposed by Turing as a question: can machines think? It was a test that he later called the imitation game, where an observer is communicating from behind a curtain with a machine and a person, and has to judge whether they are communicating with the machine or with the person.
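Returning to the fuzzy washing-time example for a moment, the degrees 0.8 and 0.2 come straight from the membership functions. A minimal sketch (my own illustration; the breakpoints 4.5 and 6.5 minutes are assumptions chosen so that 4.9 minutes comes out short to degree 0.8 and medium to degree 0.2, as in the lecture):

```python
# Fuzzy membership sketch for "washing time" (breakpoints are assumed for illustration).
def clamp(x, lo=0.0, hi=1.0):
    return max(lo, min(hi, x))

def short(t):
    # fully "short" below 4.5 min, falling linearly to 0 at 6.5 min
    return clamp((6.5 - t) / 2.0)

def medium(t):
    # rising linearly from 0 at 4.5 min to fully "medium" at 6.5 min
    return clamp((t - 4.5) / 2.0)

t = 4.9
print(short(t), medium(t))  # approximately 0.8 and 0.2
```

A fuzzy rule such as "if wash load is small then washing time is short" then fires to the degree that its condition is satisfied, rather than all-or-nothing as in classical logic.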
If the observer cannot tell the difference, that machine, in this view, manifests artificial intelligence. The Turing test has been highly influential and widely criticised; however, it has become an important concept in the philosophy of artificial intelligence. The test, though, was too difficult to achieve without machine learning in an adaptive, incremental way. Machine learning is something that we humans do all the time, every minute, and learning from data inspired by the human brain was one of the directions taken to develop machine learning systems. The human brain is the most sophisticated product of evolution as an information processing machine. Why is that? Well, the human brain consists of billions of neurons, 80 to 100 billion, and trillions of connections, and it has evolved through millions of years of evolution. The brain can deal with different memory types: short-term, in the membrane potentials of the neurons; long-term, in the synaptic weights; and genetic. It deals with different scales of time: nanoseconds, milliseconds, minutes, hours, many years, like the evolution. So, what could be a more inspirational source for machine learning than the brain? And a single neuron, if we look at a single neuron, is a very sophisticated information processing machine. It deals with time, frequency and phase information, has thousands of genes expressed in the nucleus, ten to twenty thousand inputs to each neuron, and just one output. And the question is: can we make artificial intelligence learn from data like the brain? This question was addressed early, in 1943, by McCulloch and Pitts. They introduced the so-called artificial neuron, with inputs, connection weights that represent synaptic weights and are subject to learning, and an output that is calculated through an output function. Then Rosenblatt introduced the so-called perceptron, the first neural network, a very simple one. And then it was further developed into multilayer perceptrons and large-scale neural networks.
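As a minimal sketch of Rosenblatt's idea (my own illustration, not code from the lecture): a single artificial neuron with weighted inputs and a threshold output, trained with the classic error-correction rule, can learn a simple function such as logical AND:

```python
# Perceptron sketch (illustrative): weighted inputs, threshold output,
# and Rosenblatt's error-correction rule  w <- w + lr * error * x.
def predict(w, b, x):
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s > 0 else 0

def train(data, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            error = target - predict(w, b, x)       # -1, 0 or +1
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(AND)
print([predict(w, b, x) for x, _ in AND])  # [0, 0, 0, 1]
```

The later multilayer perceptrons stack many such neurons in layers and train all the weights together, which is what lets them learn functions a single neuron cannot.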
But the early neural networks were so-called black boxes, and also, once trained, difficult to adapt to new data without much forgetting. And there was quite a lot of thinking about neural networks that they cannot do much, and they are black boxes, and therefore they are not very useful. That is why a new development of neural networks was made, in terms of neural networks that can not only learn from data but can also be used to extract rules, to extract patterns, to extract so-called knowledge from these data. The first of these neural networks were called neuro-fuzzy systems. So no more the black box curse. Neural networks can be trained, with inputs and outputs and neurons here, and after analysis of the neural network, rules can be extracted that explain the essence of the data. For example: if input one is high and input two is low, then the output is very high. So they also use fuzzy terms, which was the combination between neural networks and fuzzy logic, to make neural networks represent more human-like thinking. This is one example of using neural networks to extract rules from data that relate to the evaluation of renal function. The gold standard is using one function for everybody, anywhere in the world, at any time. Instead, we can train a system that clusters patients' data into different clusters, and it extracts a rule for each of these clusters. For example, this cluster is defined as age about 21 for some of the people, and this is the membership function: female, creatinine, et cetera, et cetera. And this is the function that was derived for this particular cluster. Unfortunately, I don't belong to this cluster, because my age is not about 21. But I have another cluster here that will tell me what function can be used to evaluate my renal function.
And this was also a very important development in neural networks. I should say that 24 centuries after Aristotle, we now have artificial intelligence systems that can automate rule and knowledge extraction from data. It doesn't mean that humans do not have to do anything. No, they have to observe these rules and they have to make sense of them. But these rules can change. They can vary from group to group, and they can be updated all the time with data. I'm sure Aristotle would have been very happy to see that. Now, deep neural networks are the current development in neural networks. Why deep? Because, first of all, they have many layers of neurons connected to each other. And second, they have neurons that can actually look deep into the data and extract features from smaller sections of the data. This is a multi-layer deep network, a so-called convolutional network. We have neurons that extract features from the palm, and then these features are combined at the next level to recognise the hand, and then of course they will recognise the leg and the body, and then it is recognised that this robot is sitting, and the robot has yellow eyes and it has big feet. So this is the approach that is currently used in many pattern recognition systems, and the so-called convolutional networks indeed do a deep analysis of features of smaller sections. For example, this neuron can calculate the maximum value within its receptive field, which here was six. Deep neural networks are nothing new, of course. They are inspired by the brain, and human vision was used as inspiration to develop the first deep neural networks, the Cognitron and Neocognitron, by Fukushima. These neural networks have many layers, layers that capture different features, for example contrast and edge detection. They are combined, combined, combined in different layers, and these layers correspond to the visual cortex, until the recognition is done.
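The "maximum value within a receptive field" operation mentioned above is max pooling, and it can be sketched in a few lines. This is my own illustration; the 4×4 input values are made up, but the principle is the one described, with each pooling neuron outputting the maximum of its 2×2 field:

```python
# Max-pooling sketch: each output value is the maximum over a 2x2 receptive field.
def max_pool_2x2(grid):
    n = len(grid)  # assumes a square grid with even side length
    return [[max(grid[r][c], grid[r][c + 1], grid[r + 1][c], grid[r + 1][c + 1])
             for c in range(0, n, 2)]
            for r in range(0, n, 2)]

feature_map = [
    [1, 3, 2, 0],
    [4, 6, 1, 2],   # the top-left 2x2 field, which contains 6, pools to 6
    [0, 1, 5, 1],
    [2, 3, 1, 4],
]
print(max_pool_2x2(feature_map))  # [[6, 2], [3, 5]]
```

Pooling shrinks the feature map while keeping the strongest responses, which is part of how each successive layer sees a larger region of the original image.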
Well, deep neural networks are excellent for vector, frame-based data, but not so much for temporal or spatio- and spectro-temporal data. There is no time or asynchronous events learned in the model, they are difficult to adapt to new data, and the structures are not flexible. Ask the question: how do we humans learn pieces of music? For example, a performer who plays Mozart makes about 10,000 strikes on the piano within one hour, without looking at the notes. Those are deep patterns that are learned in the human brain. Still, the deep neural networks that exist at the moment cannot do that, and they are very limited. So now we would like to move further, to develop systems that are not limited in terms of the number of layers. We need systems that are as deep as the data require. One way to do that is to use the third generation of neural networks, the so-called spiking neural networks. Spiking neural networks represent information as trains of spikes, binary units emitted at certain times. So time is part of the information representation here. A neuron receives many spikes from many inputs, and if its membrane potential grows above a certain threshold, the neuron emits a spike. Spiking neural networks have the ability to capture time, to learn temporal patterns. They are also very easy to implement in hardware and in software, because they deal with spikes only, which are binary events. They are very economical in their information processing. Well, the question is: can we use these neural networks to develop large-scale deep learning machines? As the IBM fellow Dharmendra Modha, who is the chief scientist of brain-inspired computing at IBM Research, says: the goal of brain-inspired computing is to deliver a scalable neural network substrate while approaching fundamental limits of time, space and energy.
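The threshold-and-fire behaviour described above can be sketched with a simple leaky integrate-and-fire neuron. This is my own minimal illustration, not the lecture's model; the decay, weight and threshold values are assumptions:

```python
# Leaky integrate-and-fire sketch: the membrane potential leaks each time step,
# accumulates weighted input spikes, and the neuron fires (and resets)
# when the potential crosses the threshold.
def lif(input_spikes, weight=0.3, decay=0.9, threshold=1.0):
    v, output = 0.0, []
    for s in input_spikes:          # s is 1 (spike) or 0 (no spike) at this step
        v = v * decay + weight * s  # leak, then integrate the input
        if v >= threshold:
            output.append(1)        # fire
            v = 0.0                 # reset after firing
        else:
            output.append(0)
    return output

print(lif([1, 1, 1, 1, 1, 1, 1, 1]))  # [0, 0, 0, 1, 0, 0, 0, 1]
```

Note how the output encodes timing: a steady input stream produces an output spike only every fourth step with these parameters, which is exactly the kind of temporal information a frame-based network has no natural way to represent.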
And indeed, spiking neurons can deal with spatio-temporal data, they can integrate different modalities, they can deal with time, space and synchronisation, and they can evolve. Well, how can we use these phenomena to develop deep learning machines? One example, developed in my lab, is a deep learning machine called NeuCube. NeuCube, whose architecture we have here, consists of a three-dimensional cube based on spiking neurons. The cube is scalable: you can have a cube of 100 neurons or 100 million neurons, and it is still possible to simulate on different platforms. And this cube can learn patterns the way the brain learns deep patterns, in different areas of the brain, at different time scales. So this deep learning machine has similar learning methods, like spike-time dependent plasticity. When the neurons receive inputs, they spike, and that causes the creation of connections between the neurons. And these connections are meaningful, because they capture temporal associations between the spatially distributed input data. That is what the brain does. We don't have any limitation in terms of how deep these patterns are. They can be as deep as needed. In my lab we developed this as a development system that everybody can use: they can download the software in a simplified form, develop their own deep artificial intelligence machines on their own data, and visualise the models in a three-dimensional virtual reality space to understand what the data is about. Unfortunately we can't do this with the brain, but we can do it with the models that we create, to analyse how they learn in an online manner: the learning is online, data come in, and connections are being created. So this is the principle that we use as an example of deep learning machines.
Now, we know that along with the life-long learning in our brains, there is quite a lot of learning happening in nature as evolution. Another area of artificial intelligence, called evolutionary computation, uses principles of learning from natural evolution. This is Charles Darwin: species learn and adapt through genetic evolution, through crossover and mutation in populations over generations. Genes are carriers of information: stability versus plasticity. Evolutionary computation is also a learning paradigm in artificial intelligence, and it can be used to optimise the parameters, the genes, of learning systems. Now, the development of the methods of artificial intelligence triggered the development of new computational hardware platforms. The beginning was the von Neumann computer architecture, which uses memory and a control unit as separate units. This architecture is still with us, realised in our laptops and computers, and it is realised also in general-purpose computers and in specialised fast computers such as GPUs and TPUs, tensor processing units, and in cloud-based computing platforms. An alternative computer architecture that evolved due to the development of brain-like artificial intelligence was the neuromorphic computational architecture. A neuromorphic architecture integrates data, programs and computation in one structure, similar to how the brain does it. We don't have a separate memory; we have the memory, the computation, the rules and the learning together in one brain. A third type of architecture developed was the so-called quantum-inspired architecture, using quantum bits, which are in a superposition of 1 and 0. The common thing in all these architectures is that they use binary representation by bits, but the bits in the von Neumann architecture are a static representation of data; in neuromorphic computation, bits are associated with time; and in the quantum architecture, bits are in a superposition of states.
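Going back for a moment to the evolutionary computation principles mentioned above: selection, crossover and mutation over generations. Here is a toy sketch of those principles on the simplest possible problem, maximising the number of 1s in a bit string. This is my own illustration; all parameter values are assumptions, and keeping the best individuals (elitism) plays the "stability" role while mutation provides the "plasticity":

```python
import random

# Toy genetic algorithm sketch (OneMax): fitness = number of 1s in the string.
def evolve(length=12, pop_size=20, generations=40, seed=7):
    rng = random.Random(seed)                      # fixed seed for repeatability
    fitness = lambda ind: sum(ind)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    history = []
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        history.append(fitness(pop[0]))            # best fitness this generation
        elite = pop[: pop_size // 2]               # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            mum, dad = rng.sample(elite, 2)
            cut = rng.randrange(1, length)         # one-point crossover
            child = mum[:cut] + dad[cut:]
            i = rng.randrange(length)              # one-bit mutation
            child[i] = 1 - child[i]
            children.append(child)
        pop = elite + children
    return history

history = evolve()
print(history[0], "->", history[-1])  # best fitness never decreases with elitism
```

Real applications replace the toy fitness function with, for example, the prediction error of a learning system whose parameters are encoded in the "genes".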
AI models can be simulated using any of these architectures, if available, but with varying efficiency. If we look at the cloud computing platforms that are massively available now for AI applications, we should say that they make it possible to rapidly build cognitive, cloud-based, exploratory applications of data. Such systems have been released by competing rivals for world domination, and of course we all know about the cloud computing facilities by Google, Facebook, Microsoft, IBM, Baidu, Amazon and many more. This is one example, the IBM Watson discovery services, where people can upload their data and do some preprocessing; they can do some modelling and they get the output delivered. This is data, and here we can have the same with text that can be entered into the system, et cetera. I should say that the cloud-based computing platforms are useful, but they have a limited set of methods, mainly for offline data analysis, and they are not very suitable for processing streaming data. Neuromorphic hardware systems, developed to meet the requirements of brain-like computation, started with the Hodgkin-Huxley model. Carver Mead at Caltech developed the first electronic circuit that realises a neuron. The first silicon retina was developed by Misha Mahowald, who unfortunately had a very short life. At the moment there is quite a lot of competition, I would say, in the development of neuromorphic hardware, from the IBM TrueNorth, with 1 million neurons and 256 million synapses, to the Stanford Neurogrid, and also SpiNNaker, developed at the University of Manchester under Steve Furber's leadership. What is the next step in the computer platforms that will support AI? Well, maybe quantum-inspired computation. Quantum-inspired computation doesn't mean quantum computers at this stage.
We have quantum principles used in quantum-inspired evolutionary algorithms, quantum-inspired neural networks, and quantum-inspired optimisation of deep learning machines, but they are still in their infancy. This is Ernest Rutherford; he and many other people contributed to quantum physics. Some of its principles are now being used, like superposition and quantum gates; we use quantum bit vectors, and of course there are other principles like interference, parallelism and entanglement. Well, that was all about AI methods and AI platforms. Now let us look at the applications. I'm not going to read all of this. I just want to say that on one hand we have the technology stack, and this is actually a diagram that was developed by Bloomberg. It is probably not quite complete and needs some updating, but we have here the techniques and the platforms that can enable AI applications. So we have machine learning, we have natural language methods and systems, we have development platforms, we have data capture, we have open source libraries, and we can see Weka here, actually, the Waikato-developed machine learning workbench. So these are the technological platforms and methods that enable the development of a lot of machine learning and artificial intelligence applications in areas such as healthcare. I won't read the companies; there is only a short list of companies which deal with that. We talk about industrial applications, agriculture, education; we talk about autonomous systems, vehicles and aerial systems; we talk about enterprise functions, customer support, et cetera. We talk about visual, audio, sensory and other information processing, and this is only a short list of applications of artificial intelligence, not to mention radio astronomy and other large projects that are using it.
Now, I'm going to talk about only some of them. Some were developed here in New Zealand, some in my lab, some in other universities in New Zealand and in other places, but I will just select a few of these applications to talk about; I can't talk about all of them. Well, let's start with AI applications in medicine. Modelling and understanding the brain is a very important part of science and of research in artificial intelligence, because the brain is a complex information processing machine. We would like to understand it for several reasons: not only to develop new artificial intelligence, but also to improve, to protect and to understand our brains for the future. This is an example of how EEG or fMRI data can be modelled over time, and the model can be used to try to understand some processes in the data. Here we have computational models based on NeuCube that are trained on fMRI, functional magnetic resonance imaging, data. Not only can we train the system to recognise certain fMRI patterns, to classify and predict them, but we can use it to understand the functional connectivity of the model, in order to explain better what the data is about. We can also model electroencephalogram data that are collected from the human brain, from the scalp. These data can be used for many applications at the moment; I will show only a few of them. One application which is very, very important at the moment, with the ageing population, is to predict the progression of mild cognitive impairment to Alzheimer's disease. So here we have the brain model of a mild cognitive impairment patient, and here we have the brain model of the same patient who developed Alzheimer's disease. And we can see that there is not much happening here, under the same conditions. These models can be used to predict the progression of the disease.
Here this is in months, but of course we can also use such models to predict brain states in seconds. This is the brain of a driver before activity, and this is the same brain before microsleep. We can see that two seconds before microsleep the brain shuts down. And then a computer system that is measuring the brain signals can recognise whether the person is going into microsleep or not. That was done with the University of Canterbury. Brain-computer interfaces are a fascinating area: interfaces that allow humans to communicate directly with computers or external devices through their brains, for example through EEG data. Brain-computer interfaces can be used by paralysed people to navigate and to move cursors, to move wheelchairs, or for people to communicate with each other. They don't have to sit next to each other; they can communicate through their brain signals via a computer. There are a lot of applications of brain-computer interfaces for neuro-rehabilitation through exoskeletons, and for robot control; that was also done in my lab. Brain-computer interfaces are also used in applications in virtual reality: navigation in virtual reality for entertainment, or for rehabilitation after stroke. This is a virtual reality system that helps people to move their hands by observing a virtual hand in virtual reality, using the so-called mirror neurons in their brain to activate the part of the brain that is damaged. Computer systems can now recognise emotional faces very well, discriminating all these types of emotions with 94.3% accuracy. They can also recognise the emotion of the person who makes the facial expression. Well, emotional computing is now coming: computer systems that can learn and express attitudes and emotions. A motivation for this research is the ability to simulate empathy.
The machine should interpret the emotional state of humans and adapt its behaviour to them, giving an appropriate response to their emotions. Computer systems can now have a human face, and what Mark Sagar, from the Auckland Bioengineering Institute at the University of Auckland, has chosen is the face of his baby. So this is the face of an emotional computing system, and this is Mark Sagar. Precision medicine and precision health is a very rich area of research and funding. Precision medicine means that everybody deserves a model that will predict the outcome for that person in the best possible way, rather than using one model, one function or one formula for everybody, anywhere in the world, at any time. Precision medicine and precision health are based on building a model for a person using their personal data, and using a data set that contains many other people's data, to select the best neighbourhood and the best model for the best prediction for this person. Personalised modelling devices and individual risk-of-event prediction is also a very active area, where we are seeing portable devices that can predict certain events for individuals. Here we have a prediction of stroke for an individual, one day to eleven days ahead; that is a personalised model built on both personal data and environmental data, including solar eruptions. That was done with the NISAN Institute at AUT, and this model predicts stroke one day ahead with 95% accuracy on a population in Auckland. Understanding human decision-making is the subject of so-called neuromarketing and neuroeconomics. This is current research that I do with my students, where we measure brain data to understand how people react to familiar versus unfamiliar objects. And we can see that very deep learning is happening in the brain, with different areas activated, when people see familiar objects. And that can tell us about the familiarity of an object to a person even before 300 milliseconds, which is considered to be the perception time of the stimulus.
So here we have persons who perceive unfamiliar objects. You can see that there is not much happening in their brains, and that could be classified early, very early after the stimulus is presented. Applied AI in bioinformatics is a very active area, where we have systems that extract patterns from gene expression and protein data to define the pattern that can discriminate people with a good prognosis versus people with a bad prognosis of cancer. Computational neurogenetic modelling is a part of bioinformatics where we have gene information included in the computational brain models. That is also a very active area of research. AI for audio-visual information processing is indeed now having a boom with deep learning machines, where we have systems for fast-moving object recognition, used in autonomous vehicles, surveillance systems, cybersecurity and military applications. We have systems that can very quickly recognise movement on the road and classify this movement with very good accuracy. Enhancing human prosthetics? Well, prosthetics sometimes give some information to the person, but it is not precise information. We can enhance the information given to the person by the prosthetics, for example eye prosthetics, with some artificial intelligence that analyses the information and gives verbal information to the person. So prosthetics are, of course, a very active area. And robotics, of course, is used in many laboratories to demonstrate algorithms. Here we have a drone, an autonomous system that also has AI for image recognition and for decision-making. Driver assistance? Well, the IBM Watson conversation services offer one example of driver assistance, and I know that many groups work on these autonomous drivers. I did an experiment with this system. I said, well, stop the car. And the system gave me a map of the gas stations nearby.
And the system said, which one would you like to drive to? But I didn't want to drive anywhere. I said, stop the car. And the system said, I don't understand. So, well, we have to be careful, of course, using these driver assistance systems. But a lot of progress is being made in this respect. AI in finance: now we have automated trading systems, which are autonomous robots on the Internet. There are quite a lot of autonomous trading agents. We don't have people sitting in a big room trading; no, these are autonomous robots that do the trading. And this is a very fast-growing area at the moment. AI for ecological data modelling and event prediction: well, there is quite a lot to be done to take data from ecology and be able to predict some events, like the establishment of harmful species, that was done with the University of France. Or to use multisensory streaming data to evaluate the pollution of an area; this is the area of Vancouver, and the research was done with the group of David Williams from the University of Auckland. Predictive modelling of streaming data is now becoming very useful in telecommunications, in milk volume prediction, in wind energy prediction. You can't imagine how much the wind energy systems and farms lose through a bad prediction of future energy. So this is something that has a good place in New Zealand. Seismic data modelling: this is New Zealand with all the seismic sites, and you can see what is happening as seismic activity at the moment. A spike means that there is a seismic change at this particular seismic site, and a connection means that after that there was seismic activity at another one. So there is a temporal relationship, and there is some good progress in this respect in New Zealand. AI in New Zealand: well, this research started early. Here we have John Andreae, who is still in Canterbury.
Andreae is retired, but he published one of the first books, Thinking with the Teachable Machine (Academic Press, 1977). Then we had developments in computer interfaces, computer animation, neural networks, and machine learning software. And now we have very active research in New Zealand in computer vision, natural language processing, evolutionary computation, robotics, emotional computing, and distributed AI. And I should say that this development of AI did not happen in empty space. It was built on the general computing and information sciences capabilities of New Zealand, due to some pioneers in this area like Brandcourt, Bob Doran, who is here, Sallis, and others. In New Zealand at the moment we have quite a lot of applications across the areas of medical devices, health care, transportation, and precision agriculture. And I have a list of applications that I could find, but it is not a long list; it's a short list. I think there are many other companies which use artificial intelligence at the moment. There is an AI forum in New Zealand to discuss the future of AI in New Zealand. And Interactive is just one project on artificial intelligence for big data technologies in New Zealand that consists of generic technologies, along with domain-specific technologies in different areas, and projects and products that are planned to be developed. The future of AI? Well, that's a big question. Is it artificial general intelligence, meaning machines that can perform any intellectual task that humans can? Is it technological singularity: machines become super-intelligent, take over from humans, and develop on their own, beyond which point human societies collapse in their present form, which may ultimately lead to the demise of humanity? Or is it tremendous technological progress: early disease prognosis and diagnosis, robots improving productivity?
Well, Stephen Hawking said: I believe there is no real difference between what can be achieved by a biological brain and what can be achieved by a computer. AI will be able to redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded. AI could be either the best or the worst thing ever to happen to humanity. Well, do we accept that or not? It is, of course, a matter of discussion. My view is that the future is in the symbiosis between human intelligence and artificial intelligence, for the benefit of humanity, while being aware at the same time of the potential risk of devastating consequences if AI is misused. And there will be another lecture, lecture number four, by Ian Watson, talking about the ethics of AI. Well, now we should talk about natural intelligence. And the questions are: would AI help to improve our human intelligence? Will reading books improve our intelligence, our IQ, as Jim Flynn suggested? Will mindfulness help? Will brain prosthetics help? Or do we need to listen more often to Mozart's music? Because there are some studies that Mozart's music is very similar to the alpha brain waves and stimulates human creativity. Maybe we need to listen to Mozart's music. But I don't have much time, so I will stop it here. Sorry; you can hear it at home, I'm sure you have some. And I should say that this work in artificial intelligence, and my work personally, has been supported by AUT. I would also say that Marie Curie was one of the first women in science, and I was funded by this European Union Marie Curie funding, and by the Royal Academy of Engineering, the University of Auckland, and the Institute of IT Professionals. I would like to acknowledge them. And I would like to acknowledge my lab, most of whom are here. And I would like to give my love and thanks to my family: my wife, who has been with me for a long time.
And in this particular case, she helped me to reduce significantly the number of slides. And also my daughters, Stapka in Scotland and Asia in Switzerland, who are, I believe, watching this live presentation. Thank you very much. Thank you, Nick. That was a great way to start off the series. We've got enough time for a few questions, so if you've got a question, stick your hand up. Thank you very much for impressing on us the enormous extent to which human abilities can be extended by machines. But I'm wondering, is there any overall limitation? We have two sorts of knowledge in the world: objective knowledge of the way the external world works, science, let's say. But we also have subjective knowledge: our own reactions, feelings, touches, tastes, personal experience, and so forth. Now, I think Bertrand Russell held the view that all objective knowledge is dependent on our subjective experience. There seems to be enormous difficulty in using our objective science approach to understand our subjective knowledge. Do you have a response to that? Well, the question is: there is objective knowledge and subjective knowledge in our brain; how does the subjective knowledge match the objective knowledge, and is artificial intelligence helping in this respect? Well, I should say that this is a philosophical question. I'm not a philosopher of AI, I'm more of a scientist. My approach to this is that knowledge is only subjective: it resides in the human brain. Our knowledge about objective nature, the so-called objective knowledge, changes all the time, evolves all the time, and it has to be that way. We are no longer in the time of Aristotle, when knowledge was fixed forever; that is what he taught. Now we talk about knowledge that is evolving, that we never stop improving and adapting, to make life better based on this knowledge.
So there is always improvement and adaptation of our subjective knowledge of the objective environment, of objective nature. That is my modest response to this large philosophical question, which somebody else can perhaps answer better. The slides? Yes, of course, yes. They will be available online. You can also approach me at this email address here and I can send you a personal copy. Would you like to give a better response to that? Thank you for helping me to elaborate the answer. Thank you. Of course, symbiosis. The question is: does symbiosis mean that humans and machines work together? Yes, it does. And it is up to us to make it happen. That is how I see the future of AI: in a symbiosis with our human intelligence, enhancing it, helping us with cognitive tasks, as has been said by many people, so that we have more time to develop our human intelligence, to develop new technologies, and to improve life. Yes, definitely, symbiosis is a co-working system of the human and the AI. Well, I wouldn't say at the same level. I think we are the drivers of this symbiosis, we the humans. That is my belief, and it is the opposite of what other people say about the technological singularity, that the robots will be driving it. I think it's up to us to make it happen or to lose the game. Yes. The question is: how different is artificial intelligence from the process of making computers faster? Making a fast computer doesn't mean that it is artificial intelligence; that is the question. Well, the answer is no. If we look at faster machines, they work fast at the level of searching data and data crunching, rather than learning, generalisation, making hypotheses, planning, and prediction. Of course, you can say, well, the chess machine defeated Kasparov, and it was not quite artificial intelligence.
It was a fast machine which had some elements of artificial intelligence, but mainly it checked many, many steps ahead what the game would be. So having fast machines can help artificial intelligence, but it doesn't constitute artificial intelligence. That's probably a good point to stop. Nick, we've got something here that isn't artificial intelligence, but it will change your mind state. Well, it excites my brain. Thank you. Thanks very much. Next week's lecture is... Robert? He's going to be talking on Home Smart Home. Come along. See you then.