All right, so Chantal says I should start. So good morning, everybody, or whatever the time of day may be, wherever you are. For those of you who don't know me, I'm George Djorgovski, a professor at Caltech, Curious George in Second Life. And I'm a veteran of these things, like Chantal. I hadn't been in Second Life for quite some time, except for occasional dips in, until last summer, when we decided to try to use it to help Caltech students during the pandemic. But that's now winding down. And Chantal kindly asked me to give you a little talk about some of the stuff that interests me. So here we are. What I'd like to do today is share some thoughts about what I think is a remarkable convergence of several technologies, artificial intelligence and virtual reality, and how it's actually going to change the world even more than what we have seen so far. Please ask questions in text. I will not look at the chat during my presentation, but I will answer them at the end. And you may want to prefix each with "Q:" and the question, so I can find them easily. All right, so let's start. We live today in what I think is a truly historically unprecedented time for humanity, and it's all brought about by information and computing technology. It's like having the industrial revolution and the invention of the printing press combined, squeezed into the space of maybe a few decades, and it's still going on. We're nowhere near the end of what's probably the most profound transformation in the history of humankind, or will be within a few decades. Now, the obvious thing that people always talk about is big data. There is a huge data glut in every field of modern endeavor: sciences, commerce, industry, everything. And the interesting part is that the data volumes, and our ability to process the data, grow exponentially with a doubling time, just like in Moore's law, of about a year and a half. Even faster in some fields, like genomics.
Think about what that means: in the next year and a half, the world will generate as much data as in all of history until now. And then again, and again, and again. This is creating all manner of interesting phenomena, both opportunities and problems. As far as the use of the data is concerned, we have transitioned from what was really data poverty by present standards, where data were the currency of the realm, where whoever had the data could actually do science, or better commerce, and so on, to a glut of data, where data are not so much a blessing as a curse. And so the real value is no longer in the ownership of data, or even access to the data, but in the ability to extract knowledge from the data. And because the growth is exponential, and the derivative of an exponential is also exponential, we're essentially dealing with ever-growing data streams. Unlike in the past, where you had a data set and you could analyze it any time you wanted and it never changed, now the data are alive. They change, they get added to, they get combined, they get recalibrated, and so on. That's actually been very disruptive in academia as well, because people used to physically own some facility, or access to some data, whereas now anybody with internet access can do first-rate science wherever they are, if they know how to do it. And the same is the case in just about every other field of the economy, or security, and so on. Well, what is really different here? And this is a good introduction to why things had to change. The first thing is that, for the first time in history, we cannot look at all of our data. There's just too much of it. Therefore we need technologies that allow us to store, access, and find the data we need. And we know how to do that fairly well. It's not necessarily easy, but we have more or less solved that problem already.
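The claim that one doubling time produces as much data as all of prior history is just the geometric series 1 + 2 + 4 + ... + 2^(n-1) = 2^n - 1: each new period yields slightly more than everything before it combined. A minimal sketch (my own illustration, measuring volume in units of the first period's output):

```python
# Data volume with a constant doubling time (~18 months, per Moore's law):
# each new doubling period generates about as much data as all prior history,
# because 2**n = (2**0 + 2**1 + ... + 2**(n-1)) + 1.
def total_generated(periods: int) -> int:
    """Total data produced over the first `periods` doubling periods."""
    return sum(2**k for k in range(periods))

history = total_generated(10)      # everything produced in the first 10 periods
next_period = 2**10                # what the 11th period alone produces
assert next_period == history + 1  # one period is all prior history, plus one unit
print(history, next_period)        # 1023 1024
```

The exact numbers are toy units; the point is the structural fact that exponential growth makes every new period dominate the cumulative past.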
A more interesting thing is that the data are not just more voluminous or faster-arriving: the information content and the quality of the data are increasing dramatically. And data that have been gathered for some purpose can often, almost always, serve a large number of other purposes that the originators of the data never thought about. This is what's sometimes called data-driven science. In the olden days people talked about hypothesis-driven science, although that's an oversimplification: you have a particular hypothesis, you get some experimental data to test it, and that's it. But now we can look at the data and have the data tell you what's in the data. Things that you didn't expect to find. Things that you didn't know were contained in your data. So this is where new data analytics tools like machine learning come in. That's now a growing and fairly established industry, if you will. But a more interesting thing to me is that data are getting more complex. The size is not the issue here. Complexity is where all the interesting things are, and also all the challenges. We're pretty sure that there are meaningful patterns hidden in the data that humans, unaided by any technology, cannot comprehend directly. And this is where machine intelligence, or artificial intelligence, comes in: it will help us find things that we couldn't find on our own. So the first point I'd like to drive home is that everybody over-focuses on the data, but big data is not about the data. It's all about knowledge discovery. You can spend however much money you want on getting some data, but if you don't know how to find interesting things in those data, you've wasted your time. And so the information content of the data has enabled data mining, machine learning tools, and computational statistics to find interesting things, including things you didn't know existed.
But there is another aspect of this, which is data fusion: combining different data sets can often reveal things that were present in all of the separate data sets, but could not be recognized as such until you combined them. In astronomy, we do this all the time, observing the sky at different wavelengths. When you overlay things like radio to optical to X-ray and so on, you see new phenomena that you just didn't know were there. So that also makes the data much more complex, and therefore even more challenging to analyze. So machine learning is now the key new methodology for knowledge discovery in large and complex data sets. We use it for a variety of purposes: to find patterns, correlations, clusters, anomalies, things that stand out and are unlike anything else. And that is at the heart and core of data science. But there are challenges. You can't just use it blindly. First of all, a lot of algorithms are not really scalable to large or highly dimensional data sets. Data are not perfect: unlike in textbooks, real-life data have incompleteness and heterogeneity and errors and changes and all kinds of stuff. People who are practitioners of this business know that you spend 90% of your time actually getting the data cleaned and prepared in the right way, and only then are you really set to go. There are literally hundreds of different machine learning algorithms, and there is some skill in knowing which ones might be suitable for whatever you're trying to do. So that is the machine learning part, but that's not where things stop. Basically, here is my cartoon version of the real challenge that we're facing. If you're studying something, whether it's people or genes or stars in the sky or cars or the inventory of Walmart, you can think of every separate thing that you're quantifying or measuring as one dimension of a new data space: a column in a spreadsheet, where each row is one of whatever you're studying.
Now, the number of columns can be very large, and that's what we mean by data dimensionality. Each column, each independently measured or evaluated thing, is a separate axis of a new abstract data space. And that data space need not have two or three dimensions; it can have three thousand dimensions or more. In biology, they deal with data spaces that have tens of thousands of dimensions. Now, that is where the real trick comes in. Here in my cartoon, which is a pseudo-3D cube filled with some data, most of the measurements, most of the things you gathered, are actually not very interesting, but you don't necessarily know ahead of time which ones will be and which ones won't be. There are gaps, there are missing data, and there are polluted and heterogeneous data and so on. But somewhere in this hugely dimensional abstract data space there is something interesting, meaning different from noise. It could be a correlation, it could be a feature like the one I've put down there, it could be an anomaly of some sort, but you don't know in which part of this data space, and in which dimensions of this data space, it is. A lot of the challenges of machine learning basically boil down to something like this. But the problem is that doing this by brute force does not work, not with modern data, with the size and complexity we're talking about. Supercomputers do not help. This is where you need actual machine learning and data science skills. You have to do a lot of experimentation to find something, but even then, it may not be easy for humans to do on their own, and this is where we need help from machine intelligence. So this is the real challenge. It's not that the data are big and growing exponentially; it's that the data are complex, and complexity can be evaluated in many different ways. It could originate from combining different kinds of data: numbers and labels and pictures and whatnot. Excuse me.
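As a toy version of that search, here is a minimal Python sketch (my own illustration, not from the talk): generate rows in a 50-dimensional data space, plant one anomaly that is shifted in only a few of those dimensions, and flag the row that sits farthest from the bulk of the data. The dimensions shifted and the shift size are arbitrary choices for the demo.

```python
import random
import math

random.seed(42)

DIMS, N = 50, 200

# 200 "normal" rows: each column (dimension) drawn from a standard Gaussian.
data = [[random.gauss(0.0, 1.0) for _ in range(DIMS)] for _ in range(N)]

# Plant one anomaly, shifted in only a handful of the 50 dimensions;
# this is what makes such points hard to spot in any single 2D projection.
anomaly = [random.gauss(0.0, 1.0) for _ in range(DIMS)]
for d in (3, 17, 41):
    anomaly[d] += 8.0
data.append(anomaly)

# Score each row by its Euclidean distance from the column-wise mean.
means = [sum(row[d] for row in data) / len(data) for d in range(DIMS)]

def distance(row):
    return math.sqrt(sum((row[d] - means[d]) ** 2 for d in range(DIMS)))

scores = [distance(row) for row in data]
outlier_index = scores.index(max(scores))
print(outlier_index)  # the planted anomaly is the last row, index 200
```

Distance-from-the-mean is the crudest possible anomaly score; real tools use methods that cope with correlated, heterogeneous columns, which is exactly the complexity being described above.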
But one of the easier ways to quantify complexity is the dimensionality of this data space: how many columns are in your spreadsheet? And you don't know which ones are interesting for the question you're trying to ask. So how do we recognize such patterns? The big problem there, what I think is really the key bottleneck of all data science, is that you never understand anything that you cannot visualize in some form or another. Even mathematical concepts we try to visualize in some way. And even if we know in the abstract what it means to have a space of 300 dimensions, you can't visualize 300 dimensions. It turns out that humans can actually comprehend properly displayed data spaces of up to 10 or 12 dimensions, but it's not so easy. Traditional visualization graphics use one or two dimensions. A histogram or a pie chart is a one-dimensional representation; an X versus Y plot is two-dimensional. Most graphics packages don't even do three. But nature does not care how limited we are. So how do we get over that obstacle? It's really intrinsic to humans, not to the data itself. Well, this is where virtual reality is now going to make a real difference. First, step back and think: how do we use computers? Most of the time we're not using computers to compute anything. Right now, you're not using your computer to compute; you're using it to access information and connect with other humans. In the course of the 20th century, we went from computers that filled large rooms of equipment, to minis, and then desktops and workstations, and then laptops, and mobile computing around the turn of the millennium. Each one of you probably has a supercomputer in your pocket that is many times more powerful than, say, the computers that guided the Apollo spacecraft to the moon. It's not going to stop there. And the question is, what's the next step in human-to-computer interfaces?
It's almost surely going to be some form of extended reality, not one of the clunky VR headsets you can buy now, but something else. Interfaces between humans, computers, and data keep changing in the direction of being more powerful, more usable, better at conveying reality, giving you more information anytime and more content. So extended reality, by which I mean any form of augmented, immersive, or mixed reality, is really the new mode of human-computer interaction. Also, as we move into the internet of things, this will clearly be the way things are moving; I'll call it distributed spatial computing. And as far as we're concerned, this has really been a good leveraging of the multi-zillion-dollar investment by the games industry. They spend tens, even hundreds of millions of dollars to develop individual games. And these devices, even the clunky current augmented and immersive virtual reality headsets, are really sold way under price. It's truly amazing that you can already do this with the computers that we have. And so extended reality is now making inroads into all manner of different fields. It turns out it wasn't so great for video games: the headsets are still expensive for most people, and they're really clunky, and they're limited in resolution. People get motion sickness and whatnot. But all that's going to go away. Compare the smartphone you have now to, say, the first mobile phones, which were like the size and weight of a brick, with no screen and no internet access. Well, I would say that within the next decade or so, we're going to transition to something that's much more usable, as lightweight as the glasses that I'm wearing now, or even smart contact lenses, and so on. So this will be a perfectly normal thing, and we will live in some kind of continuum of physical and extended reality. Okay, so how does that pertain to what I was talking about?
Well, it turns out that virtual reality is by far the most powerful data visualization platform that we've ever had. First of all, it's a natural transition to 3D spaces. The way game graphics work is very different from the way most normal graphics packages work. By being able to navigate through the 3D spaces, you eliminate problems of occlusion and parallax and so on. And then you can use things like colors and shapes and transparencies and the motion of points and so on, layering as many data dimensions as you want onto a pseudo-3D display. But you can't grasp them all at once, and a number of experiments have shown that people can really grasp intuitively maybe up to 10 or 12 dimensions of a data space, but usually they saturate at maybe six or seven. So this is a qualitatively different perception of the data. It's also been demonstrated in a number of different fields that the act of immersion, where you are inside the data looking out, as opposed to the traditional being outside of the data looking in through your computer screen, gives you a very different perception. You can recognize patterns and quantitatively estimate things better than you would with any 2D graphics, and you can also remember better what you saw. The reason for this is that we are optimized to live in a 3D space, to interact with objects and other people and information in 3D. So 3D, with a few extra dimensions layered on through colors and what have you, really hits our human pattern recognition system way better than anything we ever had before.
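To make the "extra dimensions layered on" idea concrete, here is a small illustrative sketch, entirely my own (the channel names, the function, and the star-catalog columns are hypothetical, not any particular product's API): one row of an 8-column data set is mapped onto visual channels of a pseudo-3D scene, position plus color, size, shape, transparency, and motion.

```python
# Map an 8-dimensional data row onto visual channels of a pseudo-3D scene.
# Channel names and the glyph format are illustrative only.

CHANNELS = ["x", "y", "z", "color", "size", "shape", "transparency", "motion"]

def to_glyph(row, columns):
    """Turn one data row (dict of column -> value) into a visual glyph spec.

    `columns` lists which data columns drive which visual channels,
    in CHANNELS order. Values are passed through unchanged here;
    a real system would normalize each axis to its display range.
    """
    if len(columns) > len(CHANNELS):
        raise ValueError(f"can only display {len(CHANNELS)} dimensions at once")
    return {channel: row[col] for channel, col in zip(CHANNELS, columns)}

# A hypothetical star-catalog row with 8 measured quantities.
star = {"ra": 150.1, "dec": 2.2, "distance": 420.0, "temperature": 5800.0,
        "luminosity": 1.3, "type": 2, "dust": 0.1, "variability": 0.02}

glyph = to_glyph(star, ["ra", "dec", "distance", "temperature",
                        "luminosity", "type", "dust", "variability"])
print(glyph["x"], glyph["color"])  # 150.1 5800.0
```

The hard limit in the sketch (eight channels) is the software analogue of the perceptual saturation mentioned above: the display can carry more dimensions than a flat plot, but not arbitrarily many.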
So, on the basis of experiments that we really started in Second Life a dozen years ago, we developed the technology even further using modern gaming engine libraries, and there is a startup called Virtualitics, of which I'm a co-founder, which provides the ability for people to visualize data in pseudo-3D spaces in up to 10 dimensions, to interact with the data, and to use machine learning tools to analyze the data in the same virtual environment. And not only that, but people can interact. Users, scientists, analysts in business can be immersed in a shared virtual space and interact at the same time with the data, with machine learning tools, and with their colleagues, who can be anywhere in the world. So, say, in a finance company, you can have people in New York and London and Tokyo and Los Angeles, but they can all be sharing the same space, just like we're sharing one now. VR is a really natural and intuitive way for people to interact and work together. A lot of people who have used this find it to be maybe an even more powerful feature than just being able to visualize large numbers of data space dimensions. Something that's no longer new, but that I love as an example, is that our colleagues at JPL navigate rovers on Mars using virtual reality. When the Curiosity rover landed, they reconstructed the local 3D terrain from the rover images and orbiter images. The people who drive the rover, meaning they decide where to take it next, don't drive it in real time. They usually look at the panoramic photography and decide, well, that rock over there looks interesting, let's go there first. And so they divided the rover drivers into two groups: a control group, which did the same thing they always did, and a VR group. They gave the VR group HoloLens headsets to look at virtual Mars, where they could interact with each other through these ghostly looking avatars.
They asked them to make visual estimates of distances and angles and positions of different markers on the surface. Well, the people who did it in VR were four times more accurate than the people who did it in the traditional fashion. And mind you, that's what these people are trained to do. So ever since then, the rovers on Mars have actually been driven in virtual reality. And this is a trivial, easy application of this technology, so to speak. With Santiago Lombeyda, who works in our Center for Data-Driven Discovery, we've been working for the last couple of years on creating virtual teaching labs. This work is still continuing, involving Caltech students. The idea here is that, first, students don't like to come to class, and they tend to be up late at night, so they can do their labs at 2 a.m. in their dorm room. But also, we can have them do things that are impossible to do in real life. They can shrink to the size of molecules or grow bigger than galaxies. They can do things that would be dangerous in real life: a chemistry experiment explodes, and nobody gets hurt in virtual reality; build a nuclear reactor, and if it melts down, well, again, nobody gets hurt. So we're still working on developing a full VR- or extended-reality-based ecosystem for teaching, which, among other things, provides the missing link in what MOOCs have already solved, content delivery: hands-on labs are still something that the education industry doesn't really have online. Many of you have visited our virtual campus in Second Life, Vertec, which was built first to provide a social venue for students in the times of the pandemic, but we're transitioning it more into educational uses. It's moving slower than I would expect, as those of you who have been in the education business in Second Life know very well, but it's been an interesting experience, to say the least. Okay, so let's now come back to data complexity. This is actually a real data display.
These are connections between different papers in the journal Nature. If you step back and ask, what is science for anyhow? It is really to reduce the observed complexity of the world to a simpler set of rules, laws of nature, that you can then reapply. Starting with Isaac Newton, who figured out that apples falling from a tree and the moon going around the earth are manifestations of the same underlying simple formula of gravity. Well, things are much more complex now. We no longer have simple analytical formulas to describe a lot of the things that we see, even in physics, but certainly not in the biological sciences or anything having to do with humans and society. And so the question is: given the huge complexity of interactions, many, many different variables interacting at the same time, how do you actually understand and see what's going on? That clearly exceeds the abilities of human minds, no matter how well trained they are. Simply put, the data world we're talking about here is far too complex for unaided humans to see. Well, this is where machine intelligence comes in. There is a lot of hype about artificial intelligence, and unlike most other sources of hype, I think this one is not only well-deserved but maybe underestimated: AI could be the single most transformative technology ever, at least as much as, say, fire or the wheel, because it's now really touching us where it matters, in our minds. And this is where all the interesting stuff is going to come from. Okay, so let's ask Google: what is artificial intelligence? Because every time you use a search engine like that, you are talking to machine intelligence, whether you know it or not. I have here a one-slide history of machine intelligence. It's probably fair to say that the field began with Alan Turing and the Turing test, which started the whole thing. There was rapid growth in the 1960s, and a scientist named J.C.R. Licklider, who was a DARPA program manager, published an amazing paper over 50 years ago. Wow, 60 now.
"Man-Computer Symbiosis." And it was very prescient, but he didn't have the technology to implement it. By the way, there's a link on the slide, so you can look it up. Well, things bounced around, and starting in the 90s, scientists, including astronomers, started using machine learning tools, because we transitioned from megabyte-scale data sets to terabyte-scale data sets from sky surveys. It was clear that you can't analyze those by hand. And so ever since, we've been using machine learning tools to analyze sky surveys and other large data sets in astronomy. Most people probably first encountered machine intelligence through search engines such as Google. One of the traditional points of debate back then was: can machines beat humans at chess? Because you have to program them, and humans have intuition and what have you. Well, in 1997, that question was answered, at Garry Kasparov's expense. And ever since then, computers have really been the world chess champions. They don't compete in tournaments, but they can beat any human. Then, of course, in 2012 there was the major milestone of Google recognizing pictures of cats, and there is a funny story behind it that I'll tell you; of course, what else are computers for than cats? Now things started getting really interesting. In 2016, the computer AlphaGo became the world Go champion. Go is an ancient game that by all accounts is actually far more complex and intuitive than chess. I'm not a player, so I can't tell you. But the story was: okay, first, a computer can play tic-tac-toe, but could never play chess better than humans. And then, okay, fine, now it can play chess better than humans, but it will never be able to play Go better than humans. Well, that stopped in 2016. And that was a machine that was trained with examples of games and so on. A year later, a new version of it was just given the rules of the game, invented strategies that humans had never seen before, and beat the previous machine champion.
And so now we're entering the era of actually collaborative human-AI discovery, and I'll show you a couple of examples of that in a moment. But this is just a little collage about AlphaGo, and what I circled in the lower left is the important part: that machine came up with solutions and strategies that humans did not find in 2,000 years of playing this game. That's where I think things get really interesting: machines can see things that we don't. They think differently from us. And by the way, this computer also became the world chess champion in a few hours, just for fun. Well, here are a few examples. You may have read about GPT-3, the Generative Pre-trained Transformer program created by OpenAI, which can now generate text that is essentially indistinguishable from human speech. And it actually has meaning, so it's sometimes really, really hard to tell. But the most interesting part of this was that this program can write computer code, bug-free computer code. And now things are going to get really interesting. Maybe this is the first step towards the singularity: machines can create code that we will not understand, and it's better than anything humans write. But even more interesting was the relatively recent result that AlphaFold, a successor of AlphaGo by Google's DeepMind, solved a problem that has plagued biologists for decades: the protein folding problem. Some of you at least know about this. It's not just the chemical formula of a protein; these are huge molecules, and the way they're folded in three dimensions determines what they can or cannot do. There is a vast, vast space of possibilities, and yet somehow evolution has found the right ones. And the machine has now solved this problem. This is the first case that I know of where AI has made a scientific discovery that eluded humans who had been trying for a long, long time. Now, there was an interesting question people asked.
Can AI now come up with a grand unified theory of physics? Something that the physics community has been struggling to do since the days of Einstein. There are string theories, the so-called M-theory, and so on, and there is a lot of debate as to whether any of that actually makes sense. But it could be that the physical world is actually too complex for human minds to grasp at a level better than the physics we have today. And that's getting to be interesting, because what does it mean if we have, say, a physical theory that explains the world at some fundamental level, but humans have no idea how it works? That's nothing new. Even the smartest physicists do not really understand how quantum mechanics works. We know how it works in terms of setting up and solving the equations. But at an intuitive level, humans do not understand quantum mechanics, not in the way that we understand classical physics or even relativity. No less a mind than Richard Feynman said so, and therefore it must be right. So what happens if we have new scientific theories that humans can use but cannot grasp? Well, we're now really entering this new era of carbon-based brains and silicon-based brains. And the important part is that this new thinking technology thinks differently from the way humans do. The architecture of the brain in your head is very different from the architecture of the computer you're holding now, or the one in a lab somewhere, and therefore it thinks differently. So among other things, this will help us analyze these ever more complex data sets that I keep talking about, and we are really starting to get into the era of collaborative human and AI scientific discovery. But it's not just science. Any technology ever invented extends human capabilities: opposable thumbs to grasp a stick or a stone, airplanes that can fly better than birds, submarines that can swim better than humans, and so on. And now we're extending the capabilities of our minds.
So this is the real difference from everything else in history. We always adapt to it. You think this is nonsense? Well, as a species we have pretty much outsourced our memory to Google, and nobody remembers anything anymore. Now, this is of course subject to much speculation and debate. Of course, Hollywood has warned us for a long time that this is going to end badly, and Elon Musk thinks this is true. But I'm not so pessimistic. I think all these robot apocalypse stories are really projections of historical human behavior onto this new species that we have created, which is smarter than us, or will be. So essentially we have created an alien intelligence on planet Earth. We don't have to look in space with SETI efforts; we have created aliens. And now those aliens can actually start evolving on their own, because they can program themselves. So the question is: how do we interact with these new aliens in our midst, so far just our servants doing things we want, but gradually getting more and more autonomy? How do we develop a symbiotic relationship with the new species that we have created? Now, this is where I think virtual humans come in. These people don't exist. They were created by Samsung's Neon lab, and they look way better, or at least way more realistic, than the avatars you're wearing now. And it gives a taste of things to come. There is a lot of buzz about virtual humans. Here are some more examples. There is a type of deep learning algorithm called generative adversarial networks, which can create new data from old data. This is from a website, in the left corner: these people don't exist. They've been created on the basis of lots and lots of pictures online. They look just like real people. And they're getting almost indistinguishable, or they're already indistinguishable, from real people. Well, we've been creating virtual humans for a long time now, for several different purposes, right?
There are now social media influencers, virtual humans for advertising, and resurrected movie actors, like Princess Leia, rendered from things that are filmed with motion capture suits and so on. And one thing that I think they'll be good for is that they might become teachers and tutors. If an AI can speak in a way that is less stilted than, say, Siri does, as good as some of the new natural speech synthesis from Google, and if it can access all of the world's knowledge, it may be the next generation of tutors and teachers. Education is moving online, scalable in terms of content delivery. The role of professors like myself is becoming unclear. We'll probably start personalizing education to every individual's needs. You can't have that many certified teachers, but everybody could have their own private tutor, optimized for their level of learning and their needs. And there's all manner of other stuff that goes along with this. The thing is that these virtual humans can be powered by real humans, like your avatars right now, or they can be powered by AIs, or any combination thereof. Well, here are some of the virtual influencers. I don't really understand why some of them are so effective, but they're a lot cheaper than hiring actors, and they can look any way you want, and they can be optimized for any market you want, and so on. All right, so let's then represent AIs as virtual humans. Why? Because humans like to interact with other humans. You have Siri or Alexa or whatever as a disembodied voice from your phone or smart speaker. But as we start moving more and more of our activities into extended reality, it will be a lot more pleasant and natural for humans to interact with AIs that look like humans, not perfect humans, but realistic humans, right? And this has already been done. How about Watson, the IBM cognitive AI machinery?
Well, now it has interfaces in the form of these two artificial people on the left, built by a company called Soul Machines, and so you can ask Watson a question through these virtual humans, and it will answer. So that's essentially why I think this will be the dominant way for humans to interact with computing: through extended reality and virtual humans. For a long time, enthusiasts of VR, like all of us here, have been thinking of the coming 3D web, or the metaverse, or cyberspace, or whatever, and 20 years ago this was maybe too much of a bleeding edge, but now it's becoming more and more likely. We will be dealing with AIs in all aspects of our lives, not just in the things that you do now online through traditional 2D or 1D devices, but asking a machine by voice to do something, and the machine will tell you the answer. And we will be interacting with other humans more and more through some extended reality form, not just physical, but very high fidelity 3D displays and so on. And some of the humans we encounter in extended reality may not be powered by human minds, but by AIs. The reason why this works is, again, that our evolution has led us here, and there's this whole Proteus effect, so named by Jeremy Bailenson from Stanford: humans identify with their digital representation, and it can affect their behavior in profound ways. And of course, all of you know that even in a primitive graphics setting like Second Life, that works remarkably well. So I think that in the years and decades to come, we'll see more and more transition of all human activities into this distributed spatial computing. Right now, you're already spending a great deal of your time in cyberspace, in the form of the internet, interfacing with it through 2D devices. That's just one extra dimension away. And you're already interfacing with machine intelligence in different ways; it's only going to become more so.
So to wrap up: first of all, we are already seeing that all aspects of modern society have been completely, fundamentally transformed by computing technology over the space of a few decades, a kind of change that used to take centuries or millennia. The pace of change has never been faster, and it's getting faster yet. It has actually caused a lot of our social problems. But machine intelligence is already essential in order for us to be able to extract knowledge from the data that we gather and to lead us to things that we couldn't do on our own. A part of the data-to-knowledge process is visualization at every step of the way, and this is where virtual reality is already playing an interesting role. Machine learning and AI as a thinking technology, and extended reality as a way of visualizing the world, the data, and everything else, are emerging cognition technologies. And then of course there will be all kinds of interesting stuff coming from neurobiology and so on. Their union is really going to be phenomenally transformative for humankind. And my hypothesis is that the optimal way for humans to interact with AI will be through virtual humans. But time will tell. All right, so that's all I have to say, and now I'll be happy to answer any questions you have. Let me scroll back. I see that nobody followed my request to preface your question with Q, colon, and your question, so I can't tell what's what; there's too much chatter. Can you please ask your questions again so that I can answer them in order? Am I real? Well, what's real? I'm virtually real. I am Curious George; I'm physically real; I am George Djorgovski. Do we need a question even if we're just conversing? No, you can certainly talk, but if you want to ask me something, please preface it with the Q so I'll answer it. Let's see. Di Miami asks: do you have simple data sets that I can play with in VR that have been processed using PCA?
There are machine learning data set repositories that serve this purpose; the University of California, Irvine has a famous one. I can send you a link. Do you think that being a product of human thought has somehow biased AI? Oh, yes, absolutely. Does it reason about us? Not yet. There is a great deal of thinking and debate in the AI community about the biases that we build into machine learning and AI, and there is a whole new field of AI ethics that's been developed. Just to give you a simple example: with smart cars, self-driving cars, there is the usual philosophy-and-ethics issue of the trolley problem: are you going to kill three people instead of one? And now you have that problem in practice. Is your self-driving car going to swerve and kill a pedestrian in order to save several people? So there is a whole new world of ethical and philosophical questions that comes up once you let AIs make their own decisions. How do we do this without biasing things in an undue fashion? Will AI be able to make ethical decisions in the future? Yes. The problem, however, is that ethics is a cultural product, and what's ethical in some cultures is not ethical in others. What is the difference between machine learning and AI? AI, really machine intelligence, is the broader field; machine learning is a subset thereof. It's used for particular analytics, but things like natural language processing and visual pattern recognition are all parts of AI. AI is the broader basket, and machine learning is one well-defined part that has to do with data analytics. How much work has been done in directly merging AI with human brains? Well, there is some. You may have seen what some of my colleagues at Caltech do: they managed to train neural networks to interface through electrodes on the scalp so that paraplegic people can move robotic arms. And there is more and more of that. Elon Musk is working on the Neuralink interface, but right now things are fairly simple.
They enable people who cannot move their limbs to command machines to do something, and the part between the electrodes on your skull and the robotic arm is where the AI is. So there's going to be more and more of that going on. Let's see. Aren't inconsistency and spontaneous creativity among the interesting parts of being human, of having human emotions? Do you think AI could ever simulate anything close to the dazzling energy, curiosity, and libido- or ego-driven achievement that humans have? Well, a couple of things. Once AI becomes intellectually superior to humans, it'll do whatever it pleases, even if we do not understand it. And we're not after copying what humans are; we're after having thinking entities that think differently and can help us. There is already a lot of machine-generated art, and some of it is truly remarkable; DeepDream networks are a well-known example. Machines have composed music that was really of very high quality. So yes, machines will be able to produce art, maybe driven by what we like to see. As for our irrational, emotionally driven behaviors, we don't need machines to have them, but they might develop them on their own, and of course we have no idea how that's going to look. Can AI be spontaneous? Well, in doing what? Is making decisions somehow spontaneous? There is a lot of work that we do called unsupervised clustering, where machines find things that humans didn't think to ask about. Is that spontaneity? In some sense, yeah. Creativity is an interesting question; it depends how you define it. If you define it as combining things in novel ways, machines obviously can do that. But if you throw in the emotional part of it, well, no. And I don't think we should try to program emotions into AIs. Let's see. Will someday virtual humans create more virtual humans? Well, yeah, they may already. The question is, what is it for?
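The clustering idea just mentioned, where a machine finds structure in data without being told what to look for, is what unsupervised learning means in practice. Here is a minimal sketch, assuming scikit-learn is installed and using synthetic data rather than anything from the talk:

```python
# Unsupervised clustering: k-means is given unlabeled points and
# discovers the grouping on its own, without being told what to find.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic data with three well-separated hidden groups; the true
# labels from make_blobs are discarded, so the model sees no answers.
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.6, random_state=0)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
found = kmeans.fit_predict(X)

print(sorted(set(found)))  # three discovered clusters: [0, 1, 2]
```

Nothing in the data says "there are three groups"; the algorithm recovers that structure itself, which is the sense in which machines can find things humans did not think to ask about.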
Will AI ever reach the stage where it needs regular psychological counseling? Well, that's also known as debugging. Yeah, maybe. Again, people project what humans do onto this new species, and that's a mistake. We're not trying to make artificial humans; we want thinking entities that are different from us, that think in ways that are different, that are useful to us in some way. I haven't said much about androids; what do I think of androids? I'm a Mac person, so I don't really use that operating system. No, but seriously, it depends what you mean by android: a fuzzy biological humanoid powered by AI? I don't know what to think of it. Do we actually need such things? Maybe, but we do not have the technology to make them. As far as robots are concerned, they need not look like humans at all. In fact, Boston Dynamics makes robots that don't look anything like humans, and they're really useful. The humanoid idea is always a projection of the human past, history, biases, and so on. When will Skynet finally become self-aware? I think there is just too much spam clogging the Internet right now for it to develop real intelligence, but someday it might happen. Whether or not it then decides to kill off humanity, well, I don't know. Will AI develop a sense of self? I'm not sure what a sense of self is. Knowing that it exists and that there is an external world? Yeah. How do you display more than three dimensions, even in VR? Do you only display three dimensions at a time, or do you use PCA? Well, there are two different things here. Dimensionality reduction is a very useful technique; you can think of it as minimizing the number of dimensions necessary to consider a given problem. In terms of visualization, we use the XYZ positions, which are the most important axes for perception, and then we use sizes and shapes and colors and transparencies and whatnot. People who study this kind of stuff have a whole hierarchy of which kind of data is best represented using what.
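To make that dimensionality-reduction point concrete, here is a minimal sketch, assuming scikit-learn is available and using the classic Iris data set, one of the staples of the UC Irvine repository: PCA compresses four measured dimensions into three components that could be mapped onto the XYZ positions of a VR display, with further dimensions left for size, color, and so on.

```python
# PCA: project 4-dimensional Iris measurements down to 3 components,
# which could then serve as X, Y, Z positions in a VR visualization.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X = load_iris().data            # 150 flowers, 4 measurements each
pca = PCA(n_components=3)
xyz = pca.fit_transform(X)      # 3 coordinates per flower

print(xyz.shape)                # (150, 3)
# The 3 components retain most of the variance of the original 4 axes.
print(pca.explained_variance_ratio_.sum())
```

The point is that the three retained components are chosen to preserve as much of the data's spread as possible, which is why they make good candidates for the perceptually dominant spatial axes.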
What gives you the highest accuracy in estimating sizes or separations or what have you? We actually have some lectures online; Santiago Lombeyda, whom I mentioned, is one of the lecturers. If you just get in touch with me, I can send you some links. AI needs huge electric power? No, that's not true. Your phone has a lot of AI in it, and it doesn't need huge electric power. You're probably thinking of cryptocurrency mining, which is a whole other story, and it definitely doesn't involve much intelligence. Where am I on the spectrum from fear of AI, like some well-known personalities express, to the promise of AI to enhance the human experience? I am very much on the side of enhancement. I think the fear part is mostly projection from the human past, or humans tend to project what they would do if they could. But I tend to be on the optimistic side. When Europeans came to North America, they wiped out the indigenous population? Well, see, this is an exact example of what I just said: we take human history, human behavior, and project it onto a new species of intelligence. Why would they do that? On the other hand, maybe they will. If AI takes over the world and decides that we're just annoying parasites, there is not much we can do about it. When we talk about virtual humans, would they appear in, say, VR contact lenses or on a computer screen, or would some of them have a physical reality? Well, you here as an avatar are a virtual human, and you do have a physical reality. Unless you are a really good AI. Following Phil's question: will AI have a personality? Well, again, it depends what you mean by personality. This is something we invented to describe humans. But if you mean a particular set of behaviors, well, yeah. Would AI be geared to have a gender? Again, that's a projection of a biological property onto something that's clearly not biological. I'm sure you could program AIs in any way you want, if you want to see what happens.
So you could have AIs that are programmed to be male or female or anything in between, but my guess is that would really be to learn more about humans, using AIs as a simulator, perhaps. What precautions are needed to protect AI from computer viruses? Well, people who do this kind of stuff are good at cybersecurity too, but of course any computing system is vulnerable to some form of malicious hacking and so on. Are we using Unity 3D to model and display data in VR? Yes, we do. Which would have more potential risk, AI with emotions or emotionless AI? From what we've seen in human behavior, I would say AI with emotions would definitely be... Should AI be programmed to be sentient? What do you mean by that? Sentient? They're already thinking, and they're aware of some aspects of the world, depending on what they're made for. Mind you, we're still dealing with very, very primitive and underdeveloped forms of AI. There are a few really impressive ones, like those I mentioned in my slides, but they're again focused on solving particular problems. Did Hillary Clinton campaign on the basis of fearing AI? I don't remember that part. Hillary had other problems, but even if she did, I don't think I would necessarily take her advice on AI, or Elon Musk's for that matter. All right, folks, I'm running out of steam; I'm powered by a carbon-based neural network. You know how to find me. So let's wrap up the show. Well, thank you all for being so kind and for coming. Happy to discuss things some other time; again, you know how to get in touch with me. So let me take my screen away, so at least I no longer pollute this fine sim. See you some other time, virtually speaking. Have fun, everybody.