What's up, everyone? Welcome to Simulation. I'm your host, Allen Saakyan. Super pumped to be talking about rebooting AI. We have Dr. Gary Marcus joining us on the show. Hi, Gary.

Great to be here.

Thank you so much for coming on the show. Really appreciate it.

Thank you.

So pumped for this. It's been a long time in the making, almost three years of aiming to make something like this happen, our conversation. And it's good timing, because he has just published Rebooting AI: Building Artificial Intelligence We Can Trust. For those that don't know Gary's background: he's a scientist, bestselling author, and entrepreneur; founder and CEO of Robust AI, which is building a new foundation for the future of robotics; was founder and CEO of Geometric Intelligence, an ML company acquired by Uber in 2016; and is the author of five books, most recently Rebooting AI, which is about building computer systems with a conceptual framework of our world, time, space, causality, so we can trust them and not fear their operation. You can find all Gary's links below: garymarcus.com, robust.ai, as well as his Twitter and LinkedIn profiles, and the newest book, Rebooting AI. Gary, I love how you take things from this perspective of: we are birthed into the world and we start building a conceptual framework. And if, in the same style of how a child begins building a conceptual framework, we can start helping our computer systems have a conceptual framework that's somewhat similar to that, the result can be a much more robust AI that we can trust. Is that about the general essence?

Exactly right. And it goes back to Plato and Kant, talking about innate ideas. So there's a long-standing debate, nature versus nurture. I think anybody who's ever seriously studied the problem realizes it's both nature and nurture if you're talking about biology. We have genes built in that help build rough drafts of our brains. I wrote another book about that some time ago called The Birth of the Mind. It's pretty clear that that's how biology works. We don't know all the details, but it's clear we're not born blank slates. Politically, people might wish that we were, but we're not. And nor are we born with every detail of our brain complete. We learn a lot. If you look at machine learning, though, in AI, it's really moved to one side of that spectrum, which is the nurture side. So people are trying to learn everything from raw, unanalyzed data. So how do you play Atari games? You feed in pixels and joystick motions and make the machine figure everything out without any conception of what a brick is, or what gravity is, or the goal of the game, or anything like that. And you get results that sort of look good, but they're really just approximations. They don't really work when you change things. So if you're playing Breakout and you move the paddle three pixels, the whole thing breaks down. So what I'm saying is we need some nature along with the nurture. We need to keep using clever ideas about machine learning, but we have to bring in some of what biology has, which is good starting points. Like, for example, Kant, in the Critique of Pure Reason, talked about space and time and causality as critical. I think they're critical for machines, too. We're trying to build machines without that, hoping for the best from vast troves of big data. It's just not going to work.

I like the way that you frame it as this biological starting point, a nature. And what would you say would be that ideal framework to start things off with?
I mean, I would start by saying we're not trying to build replicas of human beings here, right? I mean, I have two children, and they're miniature humans with human brains. We're not in the lab trying to do the same thing as biology. But I would say that there are some things biology does really well and some really poorly. So I wrote a whole book, another book, about all the things that the human mind does that are kind of substandard. Our memory systems, for example, are lousy. And in that book, Kluge, I talk about how it is that we could have evolved things that aren't optimal. Our memories are very associative, but we have poor recall for the vast log of experience that we've had. We don't have buffers, for example. Where's the last place I put my keys? That's trivial to program in a computer. Where did I put my car? You go out into a lot that you park in every day and you're like, I don't know. You have a conflation between all the different memories. You don't have what a computer has, which is called garbage collection, or location-addressable memory. These are things that are, like, first day of computer science, and some of them are not built into the brain. So our brains are not perfect. We don't want to copy them. We don't want to do arithmetic like people do. Like, hmm, did I remember to carry the one? I can't remember. I mean, we don't want that. But there are things that human brains are really, really good at, in particular, reasoning flexibly and understanding natural language. And those go hand in hand. So if somebody's sitting here understanding this conversation, they're connecting it with things that they might have heard about before in linguistics or psychology or AI. And they're very rapidly, as we speak, building up ideas about what we're talking about. And then they can flexibly use that information: they could use it to decide whether they're gonna believe the next news story they hear about AI, or they'll bring it into a conversation with their friends, or maybe make decisions about how to use it in their business. So people are really flexible with the information that we use, and we can talk about it in very flexible ways. Machines can't do that. There's no AI system that can read a book like mine and come away explaining the main ideas, or say why the examples were used the way they were, or do anything you'd expect of a sophisticated reader. So there are things that humans can do much better than machines, and vice versa. Ultimately, the synthesis is gonna be some of both. So the ultimate, like, Star Trek computer should understand language as well as a person, but be able to read through vast tables in order to synthesize a new answer in a way that no human could do.

Okay, so a starting point that takes some of the best principles of computational capacities as well as human biological capacities.

That's right. And it occurs to me, I didn't fully answer your first question, so let me come back to it: what are those starting points? So one starting point I gave you was kind of conceptual, which is, you know, you want to take the best of humans and the best of computation. Another starting point is how do humans understand the world? And I would say it starts by having a framework, and that framework has some things in it. Like we know that there are objects, and those objects continue to exist in space and time.
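[Aside on Gary's earlier point that "where did I put my keys" is trivial to program: a computer's location-addressable memory is essentially a key-value store, with exact recall and clean overwriting. A minimal sketch in Python; the names and items are illustrative, not from the conversation.]

```python
# A computer's memory is location-addressable: store a fact under a known
# key and recall it exactly, with no interference from the thousands of
# other times you parked.
last_seen = {}

def put(item, location):
    last_seen[item] = location  # overwrites cleanly, no conflation

def where(item):
    return last_seen.get(item, "unknown")

put("keys", "kitchen counter")
put("car", "lot B, row 4")
put("car", "lot B, row 7")   # today's spot simply replaces yesterday's

print(where("keys"))  # kitchen counter
print(where("car"))   # lot B, row 7
```

[Human memory, by contrast, is cue-driven and associative, so yesterday's and today's parking memories blur together, which is exactly the conflation Gary describes.]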
There was some confusion, I think, from Piaget about object permanence, but my best reading of the developmental psychology literature is that kids know it from the beginning. And then you learn about the specific objects that are in the world. You know that there are psychological agents; you don't know everything about them, but you realize very early in life that another biological creature is different from a chair, right? You can have a conversation with a person; you can't have a conversation with a chair. And you learn lots of details, like what chairs are like and what people are like, but you start with a framework where you can learn information about particular individuals, for example. So we know that it's not just that there's Allen-ness in the world, but that Allen is a particular person, and he kind of travels on a space-time continuum. He's only gonna be at one point at one time. There are not gonna be four Allens at the same time. We learn all of this stuff, but we have this basis of what objects are, what people are, and then we learn more information about that. As opposed to, if you can just imagine, you just have light on your eyeballs and it dances around. Then the world would be, as William James said, a blooming, buzzing confusion. What allows us to learn about the world is that we know enough from the beginning to structure the kind of information that we get in. The machines we're building right now don't have that basic structure, except in some very limited ways, and I think they suffer for it. So, like, you can talk to the GPT-2 system that's very popular, that OpenAI released, and it will produce coherent-sounding sentences, or I should say grammatical, fluent-sounding sentences, but it has no idea what it's actually talking about. If you make it talk for a little while, it will constantly contradict itself. It doesn't really understand the premises of what it's saying. And so what you get are the correlations. It knows which nouns and verbs follow each other in which contexts, but it knows nothing about the thing that it's actually talking about, because there's no conception of space or time or objects or people. All these basic things, I think, are part of humans, and not just humans. I mean, think about the baby ibex. It climbs down the side of a mountain a couple hours after it's born. It has a basic understanding of three-dimensional geometry and things like that. It's not conscious of it, but it has evolved to do that. We need AI systems, if they're gonna work, to have a similarly strong starting point rather than just being blank slates. It doesn't mean we toss away learning, but you don't learn much if you don't have a strong starting point.

Let's go back, back, back, and then we'll try and build up: what would be the first sort of framework that a child is born into the world with? You took an innatist stance versus the blank-slate stance; what, then, are those fundamental frameworks?

We don't know for sure. I mean, you can do different kinds of experiments to get at it. So you can look at babies and see how they respond to things. I did an experiment, published in Science in 1999, that's been replicated, that showed that babies at the age of seven months old are recognizing kind of abstract patterns. We gave them sentences like "la ta ta" and "ga na na." They were able to recognize whether other things followed the same patterns.
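[Aside: a toy sketch of what "recognizing the same abstract pattern" means here. The experiment Gary describes habituated infants to ABB-shaped sentences and tested whether they could tell new ABB strings from ABA strings, even with syllables they had never heard. The code is ours, purely illustrative.]

```python
def pattern(syllables):
    """Map each syllable to a letter by order of first appearance,
    so ['wo','fe','fe'] -> 'ABB' and ['wo','fe','wo'] -> 'ABA'."""
    labels = {}
    out = []
    for s in syllables:
        if s not in labels:
            labels[s] = chr(ord("A") + len(labels))
        out.append(labels[s])
    return "".join(out)

training = [["la", "ta", "ta"], ["ga", "na", "na"]]
learned = {pattern(s) for s in training}          # {'ABB'}

print(pattern(["wo", "fe", "fe"]) in learned)     # True: same rule, novel syllables
print(pattern(["wo", "fe", "wo"]) in learned)     # False: ABA violates the rule
```

[The point is that the infants behave as if they extract the algebraic rule and apply it to syllables they have never heard, which pure co-occurrence statistics over the training syllables would not give you.]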
And then other people showed that even newborns could do the same thing. And so that set of experiments, for example, shows that babies are looking for rules and patterns. That's one of the most basic things we're doing. And a lot of current neural networks aren't looking for rules in the same sense. That's just one example. There are other sets of experiments that show that babies seem to understand something about causality very early in life. They understand that when one billiard ball strikes another, the second one is likely to move. So those are kind of the roots of causality. Babies, at least pretty early in life, seem to understand something about objects, and that even if two objects look identical, they're not identical. So they can do something that at least looks a little bit like arithmetic. I don't think it's literally that. But early in life, there's some calculus of objects and what they do. That's another thing that is presumably there very early in life. And I'll just pause to say that innate does not actually literally mean at birth. So, like, my ability to grow a beard was built into my biology, but it didn't emerge until I was 14 or whatever years old. So innate and early aren't the same. But when we see something very early, as early as we can test in newborns or infants, then we tend to think that it's probably built in. It's hard to perfectly prove these things. The best studies that we could do from a scientific perspective are not the best we could do from an ethical perspective. To really answer some of these questions, you would have to do what we call deprivation studies.

Or simulations.

Or, if you want to know what kids know about gravity, you raise a child on a space station and see what happens, or something. But we can't ethically just assign kids to different conditions in those experiments. So some things remain unknown. But the best guesses, from my own work and work from people like Elizabeth Spelke and Renée Baillargeon and so forth, are that kids have some conception of the world to get started that centers around things like space, time, objects, causality, personhood or agenthood, and so forth.

And could that even likely be from the parents, the parental experiences of their parents and their parents, just this transgenerational...?

I think it's generational, but genetically transmitted. I think there's a lot of information that is transmitted by teaching and by imitation and just by observing parents. But some of it, the most basic foundations, I think, are part of the mammalian brain plan, or even the vertebrate brain plan.

That's had a long history of evolving to see things like fire and language, and the basic ways that we go and get food, and that we love each other and have conversations, these types of things.

Yeah, I mean, language itself is a complicated case and we could come back to it, but I would say that for a billion years, evolution has been evolving creatures that have some basic comprehension of their world and understand about obstacles and objects and predators and prey. And you look at the so-called precocial animals, like the ibex, that are born basically able to walk, and it's clear that genes can do that. There's still calibration, so you still have to figure out how strong your legs are. And the vervet monkey, for example, is born with three calls, which basically amount to aerial predator, land predator, and so on. And they have to learn exactly what those look like.
They have to learn what an eagle looks like, but they are born essentially knowing that there are things up in the air that you need to worry about, and that you should make a particular call if you see them.

Yes, yes. Now then, let's move to that next step, which is: okay, what about the conceptual frameworks for building artificial intelligence that we can trust, that we don't need to fear? How do we embed time, space, causality into computers?

It doesn't really exist yet. People have thought about it some, but for the last seven years, people have spent almost all of their effort on systems that don't have much of a conceptual framework but are really good at picking up on statistics, and those have been very fruitful for things like speech recognition. So with speech recognition, you hear a bunch of syllables, and you can correlate the auditory stream that you hear with labels for what's going on. And so the dominant paradigm for the last seven years is really about labeling. You see a picture of a dog, and you tell the machine this is a dog. You see a picture of a chair? You tell the machine that this is a chair. And the technique that people are using is pretty good for that. So if you show it another chair, there's a good chance, if it's not too different from the other chairs it's seen before, that it will recognize the chair, or recognize the dog if it's not too different from the dogs it's seen before. It doesn't mean the system has the slightest clue what a chair is for. It doesn't mean that the system understands that some chairs have cushions and others don't, and that some are made of wood and some are not. It doesn't mean that the machine understands anything about the properties of those objects. But we've built these systems that are really good at categorizing, and people got really excited about it. And there are some very nice commercial applications: the most life-changing one might be speech recognition, but automatic photo tagging is another example. And so people have gotten kind of obsessed with the tool that they have. You know, the famous saying is, to a man with a hammer, everything is a nail. There's a lot of hammer-and-nail-itis right now. And so I think people are working really hard to make their hammers just a little bit more efficient. How can we make the metal of the hammer strike a little bit better, and stuff like that.

You said something along the lines of: machine learning is critical for building robust AI, and deep learning is pretty good for machine learning. And with deep learning, we're doing these things with big data, with statistical models, with convolutional neural networks. This is kind of the paradigm for image recognition, for...

That's right. And it's great stuff. Like, it's really genuinely useful. I don't want to say that it's not. But it's like this hammer-and-nail thing. So it's really great for the problem of categorizing things. If you have labeled examples of things you want to categorize, it's great. It's not great for reasoning, and it's not great for language. So systems have been built on this, and they can pick up all kinds of subtle statistical detail, but at the end of the day, they don't really understand the things that they're talking about. It's just not the right tool for that. The brain has many different brain regions that do different things. We don't know everything about it.
But we know the occipital cortex and Broca's area are doing really different things. And deep learning is kind of like the occipital cortex. It's there for some part of vision. It's really not doing anything like what we think Broca's area is doing, where you take a sentence and you break it down into its parts and then you understand the meaning by putting together those parts. Deep learning doesn't really do that. It doesn't do what Broca's area does. It doesn't do what prefrontal areas do in terms of making rational calculations about complex ideas. It just doesn't do that. And so it's also like the blind men and the elephant. People have discovered a trunk, and they think that it's the whole elephant, and it's not. It's a piece of it.

I wonder what, then, would be beyond just the trunk, which is this deep learning passion that we have for image recognition and natural language. What would be an interesting way for you to hypothesize the construction of computer systems that have understandings of conceptual time and space?

The first thing I think the field needs to do is to get past the kind of holy war that has gone on for sixty-some years, which is between two approaches. One is the approach from which deep learning emerged. People call them neural networks; they're vaguely like brains, but fundamentally they're statistical approximators that look at a lot of data. That's one tradition, and it goes back to the fifties, and arguably the forties. And then there's this other approach, which is...

The statistical approximators for big data, was that there from the first?

Yeah, I mean, originally it was even with little data, and now it works because it has big data. If you have small amounts of data, your statistical approximations aren't that good. So people have been taking one approach which is basically about statistical approximation, and machine learning is kind of a part of that; I'm being a little bit sloppy here. And then there's another approach which has really been about knowledge, and it comes from the tradition of Bertrand Russell and Gottlob Frege and so forth, which is about representing things in formal languages, or things close to that. And computer programming comes from that tradition, and it works great. So if you want a web browser, you don't want to machine-learn how the web browser works by giving labeled examples of people clicking at pages and what images they show. You want to write software for that in the traditional sense. You want to have if-thens: if the user presses this key, then do this thing. Load the contents of this buffer, copy it over to this other buffer. And so there's a whole approach to AI that is built around what we call symbol manipulation, which looks a lot like traditional computer programming, for the many people in your audience who have done that. And the two approaches have not liked each other for a very long time. They've each wanted to say, do it our way or the highway. And the modern representative of the do-it-our-way-or-the-highway camp on the machine learning side is Geoff Hinton, who has gone around saying, basically, symbol manipulation is like gasoline engines and deep learning is like electric engines: stop using the symbol manipulation stuff, it's antiquated, and you should just use my electric power. I mean, it's very hostile to symbol manipulation. And the reality is, we need both. The reality is, his metaphor is not really right.
It's not really like a choice between gasoline engines and electric engines. It's really that we have different techniques. It's more like we need power screwdrivers, hammers, nails. We need all kinds of different techniques, because the fundamental problem we're trying to solve is itself what a psychologist would call multi-dimensional. There are many different aspects of intelligence, and it's absurd to expect one silver bullet. And I know he's enamored of his silver bullet of this kind of machine learning from raw statistical data. But the reality is that what you do as a thinking creature ranges from recognizing patterns you've seen before at a kind of concrete perceptual level, which deep learning is good for, to making inferences about complex ideas that you've only just been exposed to, and asking, does this fit with these other ideas? And deep learning is just not a good tool for that. And symbol manipulation is a better way to represent, for example, things that are compositional: you put the parts together to make larger and larger parts. So we say "the book on the couch," and then "the book that's on the couch that's in the room," and "the book that's on the couch that's in the room that's in the house that's in this particular suburb of California," and so forth. And you can understand how all the parts fit together. You know, the cat that chased the rat that chased the mouse, whatever. We can build more and more complicated ideas, and that's part of what civilization is based on. We need different tools for that. Symbols are really good tools for that. We use them, for example, in mathematics. That's how we do physics: we put together symbols that represent abstract ideas. The thought that we're gonna build AI with a hand tied behind its back because some person who's famous happens not to work in that domain is crazy.

Wow, okay. And then, would it be maybe fair to hear your thoughts on this: if we had an ongoing, let's say, clock of all time, and we turned on a computer system, and the first thing it did was calibrate itself to this big moving time that's happening with all the humans who know what date it is and that we're meeting at this time, and the computer systems began also working on that same idea of time. Would that then enable them to talk to each other and talk to other humans on the same idea of time, to start?

I would say the first thing you wanna do is just build a variable into your system which is time, or moment, or something like that, so that the machine doesn't have to induce the very fact that there's time. And I would wanna build in some procedures and some basic things. It should realize that time flows linearly, and you might wanna build in things like: people have birth dates and death dates, and they're not here before they're born, and they're not here after they die. I would build in some basic things so you can interpret the rest. The alternative is, I don't know, you have a video camera on the world and you somehow have to induce that things follow in temporal sequence, and nobody has ever built a machine that does anything like that, that actually induces from just raw data that time exists or that space exists. The closest is that these neural networks now build in a notion of space that handles some aspects of space. They build in the notion of what we call spatial invariance, or translational invariance. So if I see my hand here, and then I see the same image over there, it's probably the same thing.
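[Aside: the "same thing, shifted over" idea Gary describes, and the convolution he names next, can be shown in a few lines. The same little filter is slid across every position, so a pattern produces the same peak response wherever it sits. Toy numbers, ours.]

```python
# Toy 1-D convolution: one shared "pattern detector" applied at every position.
def conv1d(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

kernel = [1, -1, 1]                     # a tiny pattern detector
here   = [1, -1, 1, 0, 0, 0, 0, 0, 0]   # pattern at the left
there  = [0, 0, 0, 0, 0, 0, 1, -1, 1]   # same pattern shifted right

print(max(conv1d(here,  kernel)))   # 3
print(max(conv1d(there, kernel)))   # 3, identical: translation invariance
```

[The weight sharing is the built-in prior: the network doesn't have to learn separately, at every location, that a hand is a hand.]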
So convolution is a way of building that in. It's a clever technique that's built in, and then people are like, I don't wanna build anything else in, I do machine learning, and they stress the word learning. They don't want anything else to be built in, but I think it's too hard to induce the rest. And, you know, it took a long time to evolve, let's say, the mammalian brain, but you won't find a mammal that doesn't understand time intuitively, more or less from the moment it's born.

Yes.

Right? It's not like some pygmy marmoset or something like that is so smart that it's able to extract the basic temporal nature of the universe by hanging out. That's built into the marmoset's brain, and it's built into our brains, through a long period of evolution.

So then, what about the other functionalities? We want to see all of these new upcoming successes with AI helping us with medical diagnostics. We wanna see it succeeding with autonomous vehicles. We wanna see it succeeding with all different aspects of making our lives better. Are we gonna be building specific narrow AIs for those applications? How do we figure out how to invest time and resources into that, versus building from this really first-principled framework of time, space, and causality?

It's a great question. So we didn't quite make explicit before this distinction that you just alluded to, between narrow AI and general AI. Narrow AI works on a very specific problem, and general AI doesn't exist yet, but the notion is that it would be able to solve a wide range of problems. And it turns out that with existing techniques, which I would call narrow AI, we're able to, for example, build machines that play Go and chess extremely well, but not to read very well. Nobody's been able to adapt the same techniques that work for Go and chess to a more open-ended problem like reading. With Go, you know, the rules haven't changed in 2,000 years. The board's always the same size. It's a very closed problem. Whereas reading is an open-ended problem. You never know what you might see next. Maybe there'll be a cartoon. Maybe there'll be a joke about a favorite television show. You have to be able to kind of roll with the punches and integrate all kinds of different information. So that's open-ended. And the more open-ended something is, the more it seems like we need a general approach to AI. Then you get into some of the specific problems, and it's interesting. So, like, we don't know for sure whether driverless cars can be built with narrow AI in a way that's reliable. People are trying to do that: trying to build driverless cars that don't really have a conception of what a construction site is, or what a police officer is, or what a highway is, but know enough narrow rules to get by. But what we're finding is that doesn't really work. So you have, for example, at least five times in the last 18 months, a Tesla has run into a stopped vehicle on a highway. Emergency vehicles, tow trucks, fire trucks, police cars. The systems don't have a conception of what a stopped vehicle on the side of the road is, and they're not able to cope with it. So it might be, empirically, that to really deal with the whole range of cases like that, we need a general intelligence that is able to reason about the things that it sees and how the world fits together. Or maybe we'll be lucky and we don't have to, and we can just gather enough data from enough cases.
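[Aside: a back-of-envelope on what "enough data from enough cases" means for rare events. The once-per-100,000-hours rate below is an illustrative assumption, not a real statistic.]

```python
# If a scenario (say, a tow truck stopped on the shoulder) shows up once
# per 100,000 driving hours on average, how likely is a dataset of H hours
# to contain even one example?
p = 1 / 100_000  # assumed chance per recorded hour

for hours in (10_000, 100_000, 1_000_000):
    p_seen = 1 - (1 - p) ** hours
    print(f"{hours:>9,} hours -> P(at least one example) = {p_seen:.0%}")
# roughly 10%, 63%, 100% respectively, and one example is nowhere near
# enough to *learn* the case, which is why rarer scenarios (the electric
# skateboard) get hopeless fast.
```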
My feeling is we're not doing that well when we try to do it without some kind of general intelligence, and what happens is there are always some core cases we do very well, and then a periphery around them where the current techniques don't work that well. So the core case is driving on the highway when traffic is moving smoothly and the weather is good, and the system is great at that. It's better than a person, because it pays more attention. And you might think about level two autonomous driving, where it helps you, sort of like cruise control. That's all well and good. But fully trusting the system (the subtitle of the book is about building AI that we can trust) means not that you're always still paying attention, but that you can actually read a book in the back. And before you could do that, it would have to deal with the periphery cases too. And the periphery is, like, there's not a lot of data recorded about tow trucks stopped on the side of the road. You might put a camera in a car for a hundred thousand hours and not see that.

You call these the bizarre long tail.

Exactly, it's exactly about the bizarre long tail. Outliers, edge cases, people have different names for them; I was using periphery just now. The outliers are these cases in the long tail, which means there are not that many of them, right? The fat part of the distribution is the many, many cases of being four meters behind a Tesla or something like that. There's just so much data about that. But there's not a lot of data about what happens when there are tow trucks, and there's even less data about, like, electric skateboards in the streets of Manhattan. And so you want a system that's flexible enough to deal with an electric skateboard even if it wasn't in the sample of data that it's looking at. You can also think of this in terms of politics: when you do polls, you need a big sample. In order to cover all the possibilities with dumb techniques that don't really understand the world, you need so much data to get a representative enough sample. With people, it's not that they have that vast a database, though they have a lot; it's that they have reasoning techniques and a kind of knowledge about how the world works. So if some police officer comes by with a hand-lettered sign saying "please don't go here," you're like, well, a police officer must have hand-written the sign because they didn't have time; there must be a recent emergency, and I'm gonna take his word for it, or her word for it, and I'm gonna go around. So you can reason about that, whereas these systems don't have anything like that in the database, and so they just drive right ahead, or whatever. So there's this infinite range of cases in driving that you just haven't seen before, and that makes driving really hard to solve with narrow AI. Natural language is even worse. Every sentence that you hear that's interesting, you've never heard before.

I was about to ask you about this with IBM Watson. That seems to be the big issue: it was scanning through all these new medical papers and trying to find something from them. It didn't work, right?

So, IBM Watson, I mean, I should be careful lest I get sued. But IBM Watson won at Jeopardy, and that was very impressive, but it turned out to be a narrower job than we initially thought.
So it looked like, wow, it's understanding natural language. But it turns out that almost every answer in Jeopardy is the title of a Wikipedia page. And so then your job is not really to understand the question; it's just to find which Wikipedia page comes closest to the question. And that turns out to be a lot easier. I think it's 94 and a half percent or something where the answers are like that. And if you combine that with the speed of the computer hitting the buzzer, you're good to go. But it doesn't mean the system actually understands things. So then some of the top brass at IBM, not necessarily the people working on Watson, said, well, can you make this do medicine? And I think some of the people working on Watson were like, well, that's actually a harder problem. They're like, no, you're gonna make this do medicine. And they kind of scaled back over the years, from we're gonna make it a doctor, to we're gonna teach students, to we're gonna help with animal care, or something like that. They really backed down. The original idea was it's gonna do great medical diagnosis, but even easy cases it sometimes made mistakes on. It sometimes failed to diagnose a heart attack, and it didn't really understand the stuff that it was working with. So there are all these tricks that you might call text processing, as distinct from actually understanding language. Text processing includes things like keyword search: you can see that this word often occurs in this context, and so you can guess that it's affiliated with this thing. But that's not the same thing as knowing the causal mechanism. Knowing that the heart is a pump: if you know that the heart is a pump, you can reason about what happens if the pump doesn't work. But if you don't actually understand what a mechanical pump is, you don't really have a concept of what the circulatory system is about. And Watson doesn't really understand what a pump is, and nor does any other AI system, really. I mean, someone may have built a narrow AI system for that; there are simulations of the heart. But nobody has a general system that can read a biology textbook and come away with the conceptual underpinning such that it could use that information in different ways. The closest we have is a system now that can do multiple-choice questions, but it's still using the statistical things; it's not that hard to break. My co-author just saw this new system, and a day later found an example to break it. It's something like: a certain pig died last Tuesday, and then, multiple choice, will it come back to life next week, the week after, or never? And somehow "next week" and "back to life" are correlated, so the system comes up with "next week." It doesn't really understand what death is, and if you don't have the conceptual underpinnings of death, you don't really understand biology.

You gave a couple other interesting ones, like: did George Washington own a computer?

It's like, okay, the computer was invented, you know, sixty, seventy years ago or whatever. You can count it in different ways, and some of the ways you count, you can go back to Babbage or whatever. But there were no computers in George Washington's time, and people should know that and should be able to figure it out. The point of that example is that a lot of things you can look up by keyword search, but you can't look that one up. Or you couldn't, before we wrote about it in the Times.
You couldn't look it up; nobody had written a sentence saying "George Washington was alive before computers were invented." If you can keyword-search it, then great, you're done. But if nobody happened to write that sentence, then in order to answer the question, was George Washington alive when there were computers, or did he have a computer, you have to understand that he couldn't have had a computer if he lived before computers were invented. It's trivial for any ordinary adult human being, probably most kids, but the systems don't have a conceptual framework of life, death, the span of a life, the introduction of an invention, and so forth. And so if it's not in the keyword search, they can't put it together.

I'm really interested to see where our global resources are going to go, for building this first-principles conceptual framework of general intelligence, as well as where we're gonna put resources toward narrow intelligences. And I really appreciate how you're pushing us to see things from a first-principles conceptual framework, just like the one a human forms when they're born. I love that aspect of what you're teaching.

I took a lot of philosophy as an undergrad, and I think that taught me to look at the big picture, and not just the thing that you're pounding away on right now. And I think the field in general is kind of pounding away with the tool that's right there, and there's money to be made, and it's not totally crazy, but it doesn't put the field as a whole in the position to do the right thing, to solve the harder problem. So if we really wanna solve medicine and have machines integrate all of the stuff they read, then machines have to be able to read, and that's not on the agenda of what Google and Facebook are necessarily trying to do, if what they're trying to do is sell ads. I mean, there's some research in those companies, but if the goal was really the long term, you might set things up differently. One proposal I made was to build something like CERN for AI, where you could have a large multi-disciplinary, multinational collaboration. There are problems with that idea: you wanna make sure that it doesn't just become a bunch of academics finding funding for their own particular research. It has to be coordinated, with a goal, and so forth. But I think that's one possibility. Another possibility, which I'm following right now, is that I've built my own company. I can't take on all of this, but we're trying to take on some of it with respect to robotics. The company's called Robust.AI, and we're trying to go after some of the harder problems that we think other people are running away from because their tools don't work, and we're trying to build a new set of tools. So it's also possible, at least in some circumstances, to do it in a corporate environment.

Now, you started hinting at this a little bit. I love this idea of some sort of global multi-disciplinary effort with the goal of building a general intelligence.

Or even just a system that can read. Forget general intelligence; just a system that can really read. Of course there's an interaction between the two. But people forget that current computers are illiterate. People are like, is AI gonna be here in 10 years? Like, not if it can't read. If you can't read the unstructured part of Wikipedia, the stuff that's not in boxes, then how are you gonna bootstrap your system?
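[Aside: returning to the George Washington example, here is a minimal sketch of what "time as an explicit variable," as Gary proposed earlier, could buy you. The dates are approximate, and the representation is ours, purely illustrative.]

```python
# With lifespans and invention dates represented explicitly, the George
# Washington question is a date comparison, no keyword search required.
# The rule "you can't own what doesn't exist yet" is the built-in
# temporal axiom; the dates are the only world knowledge.
lifespans = {"George Washington": (1732, 1799)}
invented  = {"computer": 1945}   # rough date; Babbage arguments aside

def could_have_owned(person, thing):
    born, died = lifespans[person]
    return invented[thing] <= died  # the thing existed during their lifetime

print(could_have_owned("George Washington", "computer"))  # False
```

[Nothing here is retrieval: the answer falls out of comparing dates, which is the kind of "putting it together" Gary says keyword-driven systems can't do.]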
So then, what would you say... we're moving into this information technology, exponential technology age. There are eight billion of us, and the democratization of these powers is happening across so many aspects: biotech, neurotech, AI, all these types of things. How do you foresee us geopolitically harmonizing more? Is this a process of, like, self-work that we need to go through?

I'm not currently super optimistic on that side. I mean, one coda to what you just said is: there are ways in which AI is becoming more democratic and less democratic. So what's great is a lot of stuff is being published openly. Some of it's being patented, and not everybody's talking about that. Places like Google do a lot of patenting in the AI domain. So there are some questions there. But the bigger question is, the techniques that people are building are very computationally expensive. Sometimes one simulation run in one of these neural networks can cost $100,000 or $1 million. So just because everybody and their brother or sister can watch a Coursera course and learn how these techniques work doesn't mean they can do it at scale, and the techniques don't necessarily work if they're not at scale. One thing I think a lot of corporations find out is that what works for Google, with the massive amount of data that Google has, doesn't necessarily work for a smaller company where there's not as much data. So there are techniques that now anybody can use, but to use them well you need Google-scale compute and Google-scale data, and not everybody has that available. Now, the techniques that the human mind uses are not so driven by data, and your brain uses like 20 watts, not 20,000 watts. So it is possible in principle, with the brain as an existence proof, to have AI techniques that don't demand a lot of data and don't demand an enormous amount of compute, but those aren't the ones that are being developed right now. The ones that are being democratized are the ones that Google and Facebook and Amazon and so forth are best positioned to use, and not necessarily ones that you at home can use that well. You might be able to build something for your doorbell to detect whether the UPS truck has arrived; there are some things that even at home people can do. But people at home are not gonna be using these tools to build real natural language understanding.

Gary, what has been your connection with source, or with the divine? And how do you see our connections with that higher power being relevant to our global harmony?

I'm not a divine-power kind of guy. I do take a lot of inspiration from nature. I moved to the Pacific Northwest because I just love the beauty of it all. So I guess my connection is, I'm pretty amazed with what nature has come up with, and I like to surround myself with many of its products.

Do you think we're in a simulation?

I do not. I understand the arguments that people like Elon Musk have given, but again, I don't see any direct evidence for it. Do you wanna give me an argument for it?

We're here to just feature how you feel about it. We've taken a lot of interesting perspectives.

I don't see any direct evidence for the hypothesis. I guess another thing I'll say is, it's much harder to build really good simulations than I think a lot of people realize.
So it's one thing to build a simulation like Grand Theft Auto, but to build a really detailed simulation of the world, particularly of human behavior, is really, really hard, and nobody really knows how to do that. There are also more mundane things: we don't really know how to simulate liquids that well, for example. I mean, you could imagine telling some science-fiction story about how the year is actually 20,200 and you're just toying with me right now, brain-in-a-vat style, and you've decided to immerse me in an environment in which the simulation tools themselves are lousy in order to make the whole thing work. You can make up a story, but if you're talking about the technology that is available today, it's certainly not good enough to make simulations that have the richness of the real world. You can look on YouTube for, like, physics-engine fails, and you'll find things like a car that's sitting there on the side of the road and then just starts jumping up and down for no reason, because there's some instability in the simulator. We don't see that stuff in the real world. So it would have to be a very elaborate story to make sense of the facts, and the more coincidences you rest on, the lower the probability is. It's not zero, but it's not the best explanation of the data I have in front of me.

What would you say is the most important skill for young kids, like your children, as well as for adults to learn, as we go into this exponential technology age?

I mean, first I would say that not everything is exponential. The ability of machines to read has not grown exponentially. It's grown hardly at all in 50 years. So not everything is exponential, but I'll presume for the sake of argument that...

Moore's law, Carlson curve stuff.

Yeah, I mean, a lot of stuff follows that, but general intelligence hasn't, right? There's not actually been growth in general intelligence. So one thing to say is that if general intelligence isn't rising that fast, but narrow intelligence is, then it's not a good way to make your living to do something a narrow intelligence can do, because it's gonna be replaced. You don't wanna be the guy scooping ice cream cones, because machines are gonna be doing that pretty soon. But creativity is gonna be valued for a long time, because machines are not really that good at it. And for the moment, machines are not that good at careful reading of things, so learning to read well may distinguish us from machines for quite some time. The last people to be replaced will be entertainers, because we will like the fact that they're human beings. So kids growing up to be entertainers is an interesting possibility. I think scientists are gonna be around for a while, not forever. If I really do my job with AI, scientists will get replaced sooner. But I think the problems are hard enough that they won't get replaced immediately. If you take a long perspective, like 10,000 years from now, I don't think there'll be many jobs. You'll have a different kind of economy, where people have to derive their value from their creative pursuits and not from their work, not from paid work. There are just not gonna be that many paid jobs 10,000 years from now. I think you can extrapolate, or rather interpolate, in between: maybe 30 years from now there'll be fewer paid jobs than there are now, and a thousand years from now, fewer still. We're just not gonna be able to have a society that is organized around finding meaning from paid work, because there just won't be enough of it.
Yeah, even the permutation potential of creativity for computation 10,000 years from now... it just feels like, what creative output could still even be left that's undiscovered? How about: what is the most beautiful thing in the world?

I'd say nature in all of its forms: the way the species are so exquisitely adapted to their niches, and the way in which the systems that are built, while not always perfect, are often, from an engineering standpoint, pretty spectacular. And we're just catching up to nature in some things and not others, right? I mean, AI is a case where we have not caught up entirely with nature. We've caught up in little corners of it. So we can build AI systems that play Go better than people, but in terms of general intelligence, which is part of nature, we still have not caught up.

This has been super fun, Gary. I really appreciate you coming on our show and talking to us. Thank you.

Thank you very much. It's been very much fun.

Thank you so much, everyone, for tuning in. We greatly appreciate it. We'd love to hear your thoughts on the episode in the comments below. Let us know what you're thinking. Also, have more conversations with your friends, family, coworkers, and people online about rebooting AI, about building a new conceptual framework for time, space, and causality versus narrow intelligences. Have more conversations about it, and get building, everyone. Also, do check out the links in the bio below: garymarcus.com, robust.ai, his Twitter and LinkedIn profiles, and the book, Rebooting AI, all down there. Support the artists, the entrepreneurs, the organizations, the leaders around the world that you believe in; support them, help them grow; support Simulation. Our links are below to our PayPal, cryptocurrency, Patreon, and some cool merch. Thank you, Ori Shapiro, our co-producer, for cutting this episode; we greatly appreciate it. And go and build the future, everyone, and manifest your dreams into the world. We love you very much. Thanks for tuning in, and we'll see you soon. Peace.