Okay, we're back. We're live with Likable Science here on a given Friday, and my co-host, who is really the host, actually, is Ethan Allen. Hi, Ethan. Hey, Jay. How are you doing? Good. I really enjoy these conversations with you. It's like Mr. Science, you know, except you're the scientist, not me. Anyway, the thing we decided to talk about today is artificial intelligence, what we're calling new miracles with artificial intelligence. And what provoked this whole discussion is a piece in this morning's MIT newsletter about artificial intelligence with the capability of taking large volumes of text of any kind and summarizing them in short paragraphs to tell you what they're really saying. I remember in years past in my law firm they would take the junior associate and tell them, here's a book of text, you read the whole thing, make little notes, and then tell us in a paragraph what it says. Or the West Publishing Company, or any of the legal publishing companies, would give you one sentence about what a case says. And now it can be automatic. It's quite remarkable. How do they do that, Ethan? Yeah, I know. It's sort of this quantum leap from computers simply understanding words to actually understanding the meaning of whole sentences and whole paragraphs, right? Which is really a very different thing, to grasp the point of something. It's not just "go do this," you know; it's much, much more sophisticated to be able to extract meaning and put something back out. How do you do that? You know, artificial intelligence is just like a miracle. It's like magic. Well, again, it's what they call this deep learning business, where you don't try to program every possible scenario into it. You set the machine up and let it start extracting meaning out of the world as it finds it. Imagine, you know, how powerful that is.
You have a loan document, could be anything, and the machine is looking through it, looking for words, looking for phrases, trying to, you know, find a pattern, count this word and that word and the other thing, sort of like what the NSA does with our email, trying to make sense of it. I'm sure the NSA is using artificial intelligence in the same way. I'm sure they must be, right? But if you think about it, then this gets into the whole question of who has written the software, to extract what from it and to give it what kind of slant, right? I mean, if these machines start reading the bills that are going through the House or the Senate and summarizing them, are they going to give the summary a conservative or a liberal slant? Yeah. Well, you know, it's really mind-boggling. So the thing has to go through. It's going to find a word. We're going to put the word over here; let's hold that word. It's going to find that word again over there in a different context. Then we see what the context is, we compare the two, and we learn something from that. And it's doing this with every word in the whole, you know, thousands of pages, whatever it is, and then it's sort of making sense of which words are more important, which words are repeated in what context. It's like learning. It's making rules about what this is all about, and then ultimately it's coming up with a conclusion. It's brilliant to be able to do that. And they talk about reinforcement learning now in these machines, where the machines are basically set up to reward themselves, I guess. I don't know what a machine gives itself as a reward for finding a pattern, for being able to make sense of something and seeing how that repeats and can be used again. And again, we've talked about this before, about the AlphaGo system, the software that beat the world champions in a Go tournament.
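The word-counting, context-comparing process described here can be sketched in miniature as a frequency-based extractive summarizer. This is a toy illustration of the general idea only, not how any real system (let alone the NSA's) works; the stop-word list and the scoring rule are invented for the example.

```python
from collections import Counter
import re

# Toy extractive summarizer: count how often each content word appears,
# then keep the sentences whose words carry the most total frequency.
STOP = {'the', 'a', 'an', 'is', 'it', 'and', 'of', 'to', 'in', 'that',
        'on', 'has'}  # invented minimal stop-word list

def summarize(text, num_sentences=2):
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    words = re.findall(r'[a-z]+', text.lower())
    freq = Counter(w for w in words if w not in STOP)

    # Score a sentence by the total document frequency of its words.
    def score(s):
        return sum(freq[w] for w in re.findall(r'[a-z]+', s.lower()))

    top = set(sorted(sentences, key=score, reverse=True)[:num_sentences])
    # Emit the chosen sentences in their original order.
    return ' '.join(s for s in sentences if s in top)
```

Deep-learning summarizers go far beyond this, composing new sentences rather than copying old ones, but the "which words matter, in which context" bookkeeping is the same spirit.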
You know, it didn't really know; no one ever taught it anything in particular about Go other than the very basic rules, which are incredibly simple. But it basically learned very subtle, very sophisticated things. They point out in one of these articles that AlphaGo, 37 moves into the game, made a move that the observers, expert Go players, looked at and thought the machine was crazy, that it had made some fluky, weird mistake, because it had just placed a stone somewhere that made no sense to anyone. And later, in deep analysis, they found out that this machine had calculated two things: one, that virtually no person would ever make this move, there's like a one in 10,000 chance that anyone would ever make that move, so no one would think about it; and two, that it was a very powerful move to make. And therefore, of course, it completely blindsided its human opponent with it and blew the guy out of the game. So it was very, very creative. And it's not like that move was programmed into its inventory of moves. It invented the... Exactly. It saw an interesting opportunity. And yeah, so this was apparently a good thing to do on a lot of levels for it to win the game. Think about that as a method, a pathway to deal with problems of all kinds, in all human endeavors. Then you realize the applicability. I want to talk to you about that. My thought about this is that it's going to change humanity. It can write, for example. When it goes into a document and produces a summary for you, it writes it. It's writing the English language, or whatever language. It's actually composing sentences. That's another element. It's talking to you in English. It's not talking to you in the words and the terms that it has just analyzed. It has created a third-party observation, and it's giving you an explanation of what it has learned. The learning and the explanation are two separate things, so that's another incredible function.
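The reinforcement-learning "reward" mentioned above is not a treat the machine enjoys; it is just a number the learner is built to maximize. Here is a minimal sketch of the idea, tabular Q-learning on a five-state walk, nothing like AlphaGo's actual training, and with every parameter invented for the example. The point it illustrates is the one made here: nobody tells the learner which moves are good; it discovers them from the reward signal alone.

```python
import random

# Five states on a line; reaching state 4 pays reward 1, all else 0.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left or step right

def train(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # Explore occasionally; otherwise take the best-known action.
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == GOAL else 0.0
            # Nudge the estimate toward reward plus discounted future value.
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# The learned policy: the best action from each non-goal state.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)]
```

After training, the policy is "step right" from every state, not because anyone programmed that in, but because rightward steps accumulated more reward.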
Right. And presumably it could give you a summary in Chinese or Arabic or Hindi or whatever. Language is no barrier. I wonder what the code looks like. It must be incredible. So let's talk about the implications and the other ways this kind of thing could be used. One thing, for example: we just had a show about the congressional process in Washington. Oh, that's so hard. It's really hard now. We have legislation that is special-interest legislation. We have legislation that is based on animosity, based on all the wrong reasons, emotional, low, bad reasons. Why can't we do legislation with artificial intelligence, Ethan? Can you think of a good reason why we can't do that? It couldn't do much worse than what I can remember. That's true. It could only be better. At least it would look at evidence, presumably, and say, yes, there are reasons to believe that this is a good policy and that we should do this, and no, there aren't any reasons to do the other. Or, there are reasons for it, and it's going to benefit these groups but hurt those groups. And that's not to say we couldn't get input. People could come and testify. They could testify as long as they want. They could write tomes about whether to do it or not do it, and this artificial intelligence machine would be able to summarize, in its own learning process, what's the good, what's the bad, make some judgment calls, and incorporate a huge amount of data. It could go do research, extend the inquiry on its own. But again, then you're placing a fair amount of trust in whoever built that machine. That's another issue. That their political agenda is neutral, or positive in some sense. That's the scary part. I mean, I'm sure this will come within the next decade or two, because there are a lot of smart guys, and you know how computing is sort of geometric; it keeps on getting smarter and smarter and faster and faster. So let's assume we can do this. But as you said, you raised this question earlier, and it pervades the whole discussion.
What's the agenda here? It's the agenda built into the code. And hopefully that should become clearer as we move ahead, because they're now asking that the machines be able to explain why they make certain decisions. You can see this, and we've talked before about self-driving cars. If your car suddenly chooses to stop, presumably it's got some reason it's chosen to stop in the middle of a highway. It's either seen something or it's sensed some approaching danger, right? And it should be able to tell you that, right? I mean, it's not doing it randomly, like, oh gee, I feel like stopping right now. It should be able to tell you. In English. It could say it to you in English. And I mean, I just saw recently the earliest nuances of this. My wife and I were driving, and she was driving and thought the cruise control on this rental car was broken, because the car would slow down, sometimes below the set speed. And then what she realized was that the car would slow down only when her vehicle was coming closer to a vehicle in front of it. And as soon as she would move to an open lane, her car would resume its cruise-control speed immediately. But again, it knew, and presumably it could have told us this. Yes, you're approaching the car ahead. It's no effort at all; the screen comes up and says, you know, you're approaching another car, silly. Why are you doing that? We're slowing you down. And it tells you why. So you're involved in the learning process. You learn too. These cars beep at you as you cross white lines now, like, oh, you've moved into a different lane. And then if you are doing this too often, it begins to wonder, are you drifting off to sleep, bouncing back and forth on the road? Yeah. And it reports you. Yes, it begins to alert you. Soon the police will be involved; they'll want a report of this. And your insurance company too, which will raise your rates.
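The rental-car behavior described here, hold the set speed, slow to match a car that gets too close ahead, resume when the lane clears, reduces to a very simple rule. This is a toy sketch, not any manufacturer's actual controller; the 40-meter gap threshold is invented for the example.

```python
def cruise_target_speed(set_speed, gap_m, lead_speed, min_gap_m=40.0):
    """Toy adaptive-cruise rule.

    set_speed:  the driver's chosen cruise speed
    gap_m:      distance to the car ahead, or None if the lane is clear
    lead_speed: speed of the car ahead (ignored when the lane is clear)
    """
    if gap_m is None or gap_m >= min_gap_m:
        return set_speed               # lane clear: resume the set speed
    return min(set_speed, lead_speed)  # too close: match the slower car
```

A real system also has to explain itself, which is the point being made here: the same rule that chooses the speed could put "slowing: car ahead within 40 m" on the screen.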
But that's all sort of primitive, in the sense that you're talking about a car driven by a human person. Soon enough, in not many years, the very AI you're talking about will enable us to have automated cars, which is near term. And there was a wonderful study done recently where they actually put something like 20 human drivers and one automated car onto a track. The human drivers' cars would tend to bunch up and get these sort of traffic-jam effects, and simply putting one AI car in there actually smoothed the whole traffic flow out. The drivers wouldn't bunch up in the same way, because this car understood. It's the same kind of thing as when you watch a flock of birds, right? They move as if they're one organism. The flock will swirl and stretch out. And all these birds, they're not each plotting an elaborate course, right? They're actually running off fairly simple rules about how they behave relative to their neighbors, to the next bird. Right, yeah. And it's the same sort of thing our cars are starting to do now; our cars are starting to be flock members. The swarm, the flock. They've already got this kind of thing for drones, right? The drones fly in a swarm. All kinds of drones are doing this now; you've seen them. And it's the same sort of rules. They all connect up in some way to watch the guy next to them, and before you know it the swarm is moving in exactly the right place, coordinated. Right. But you know, this sort of coordination suggests more about my legislative initiative. It means that the legislature could compare notes with other legislatures. In fact, let's assume the country stays in the same general legal configuration. So Congress is doing its thing: taking testimony, making decisions, giving detailed reasons why it made the decisions.
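The flock analogy can be made concrete. In a boids-style model, each member follows a couple of purely local rules, and coordinated global motion emerges. This one-lane sketch illustrates the principle only; it is not the model from the study mentioned here, and all the parameters are invented.

```python
# One lane of cars, positions sorted rear to front; the last car leads.
# Each follower obeys two local rules: match the speed of the car just
# ahead, and slow down further when the gap becomes unsafe. No car plots
# a global course, yet the whole line of traffic smooths out.
def step(positions, speeds, dt=1.0, safe_gap=10.0, k=0.5):
    new_speeds = []
    for i, (x, v) in enumerate(zip(positions, speeds)):
        if i == len(positions) - 1:   # lead car just cruises
            new_speeds.append(v)
            continue
        gap = positions[i + 1] - x
        v += k * (speeds[i + 1] - v)           # rule 1: match the leader
        if gap < safe_gap:                     # rule 2: keep a safe gap
            v = max(0.0, v - k * (safe_gap - gap))
        new_speeds.append(v)
    positions = [x + v * dt for x, v in zip(positions, new_speeds)]
    return positions, new_speeds
```

Run from a bunched-up start, the followers' speeds settle to the leader's speed and the gaps stop oscillating, analogous to the smoothing effect the study attributes to the one automated car.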
It's comparing notes with the state legislatures, who will likewise have a little box about this big, right, doing AI on what's going on in the state. I guess it's the old federal-state system, but query: do you need multiple boxes? Or could you have one big box? And query: what is the agenda of that one big box? What if some wise-guy programmer decided he was going to put something in there to favor this or that or the other thing? And now you have the whole country run on some kind of corrupt agenda. Right. And once you're going to do it on that level, why bother respecting national boundaries? Let's just go up and put AI in charge of the whole world. Tell us how we should be behaving. By the way, we didn't talk about the president yet. I don't know why you need a human being to be president. Why can't you have an AI president? I think it'd be a lot more rational than what we've got now, and a lot safer, and a lot more in tune with the people. The machine would listen to the people. It would summarize all the testimony. It would come to some reasonable conclusion, if programmed well. Presumably it would not send out nasty tweets. Right. It would send out nice tweets. Explanation tweets. See, this is a great idea. I like it. I like it. I hope it happens soon. But wait, we haven't talked about the courts yet. I've always thought, you know, the law is mechanized in a certain sense. You go to court. Everybody's got the precedents out. You can find them very easily now on the web. You can not only find the case that supports you, you can find all the cases that oppose you. And you can argue them out in front of a judge. Well, why can't that judge be a little box this big, artificial intelligence? And he listens. He listens dutifully to everybody. He summarizes in his own mind. He's really smart, just a little box. And, you know, he makes a decision. And then he explains his decision. And you can see that he's in the right place.
You can see he's made good judgments. Why can't we do that, Ethan? Well, there's actually virtually no reason we couldn't do that. I presume we'd want to put a system like that in place with some sort of human oversight, where the judge box would make a recommendation, and a human panel, at least for the first few thousand of them, would decide whether that was reasonable or not reasonable, right? But, you know, I really think this is coming. And the question is whether it looks like the government we have. Do you have a president box? Do you have a Congress box? Do you have a state box in every state? Or is it just one box, where all the checks and balances are already inside the one box? You know, it's hard to imagine, in the human condition, a box that would be smart enough to say, now, now, boys, just relax. We're not going to do any wars. Everybody has to act properly. And we're going to make all those value judgments for you. Just listen. And if you don't listen, we'll make you listen. There we go. Scary. So much so, let's take a break. Okay. We're in an exciting discussion about AI. We're calling this Likable Science, with Ethan Allen, my co-host. We're talking about new miracles with AI. And it's really dazzling. It's miraculous, but it's also a little scary. We talked about the swarm effect and how an object run by AI would know what the other objects are doing, even if they weren't the same kind of objects. Everybody would have its place. You could order society this way, not only the cars on the highway. Everybody knows its place. And so, you know, the power of this to govern, to make political decisions, legal decisions, regulatory decisions, engineering decisions, it would be nearly perfect. Well, I don't know. Did you ever read a book many years ago? As a small child I read it: A Wrinkle in Time, Madeleine L'Engle's rather dystopian view. Why don't you tell us about it.
All the kids are bouncing their balls in sync with one another. I mean, yeah, very much like that Jim Carrey movie, what was it, with the white picket fences? The Truman Show. Everybody behaves just a certain way. Don't diverge. So, you know, that's the problem. But between now and then, before you even get to that problem: will people accept the replacement of human judgment, however flawed it may be, by the machine? This is exactly why I brought up that thing earlier with the cruise control in the car. People are accepting it. You're not hearing an outcry about, you know, cruise control taking over my life. People may not want to give up driving their cars, maybe, but they're perfectly willing to have the car tell them things, and indeed act in their best interest and make their life safer and or easier. Now, maybe there are some drivers who won't like it; a driver who wants to tailgate may find it's very hard to tailgate. I don't know. I'm smiling because I'm thinking of two millennials who are coding this thing. And one says to the other, we need a very authoritative voice on this box. Let's see if we can get an Edward R. Murrow voice here, so that when we tell people what they need to do, they'll believe us and accept it right away. I think that conversation would probably take place, because you have to be authoritative about it if you want people to follow it. And not only the voice; everything it does has to be credible, you know? What was it, HAL, in 2001? Space Odyssey, right? Yes. I forget the fellow's name who was in the spaceship with him, but it says something like, I'm sorry, Dave, I can't do that. And, of course, HAL went crazy. And that's really the old fear about all of this: however you program it, whatever the programmer puts in, however it teaches itself, you know, because the problem is, yes, you could put an agenda in.
Remember, though, this is AI, and AI learns, and AI will make its own agenda, and you don't know what that will be. And some decision it makes that looks crazy to you may actually be a very smart decision, as, again, we just saw with AlphaGo, where it made a crazy move according to all the observers, and it turned out to be the move that shifted the whole game into winning mode. So, you know, it's all science fiction, so you say, well, okay, you can make any decision you want, but don't kill anybody. Okay, that's a special hard-coded thing. Asimov's three laws of robotics or whatever, right? Right. And I suppose you'd need to do that, because otherwise you wouldn't know. The point is that the guy who might write in an agenda doesn't really have the last word, because the program learns and could find its own way to do whatever it wants, like that special move in Go. So I don't know how we fix that, and I don't know how people could not worry about that. Would you worry about that? Yeah, I mean, I'd be concerned about it. I'd want to see the system in operation in what we call low-risk situations, and have it working really well for a good long while, before I'd be willing to turn over my whole life and accept it unquestioningly, right? But it's fun. I don't feel that way about cars. I'd happily have a self-driving car and sit back. Yeah. But you know what surprised me, actually: I think Apple got a permit or something recently from California to test their cars, and the same with Google, and not one of the major car companies. And what surprised me is they actually gave them a permit, because I think a lot of people in government, with the bureaucratic way of looking at things, will want to retain control. They will not want a machine, arguably controlled by the programmer, doing things on their turf.
So they will resist. And I suggest to you that this question I asked you, would you agree with this, is a question a lot of people would ask. And when government, who holds the keys, is asked that question, they say, I don't want to give up my authority here, because I know in the end I'll be out of a job. Yeah. And there's an interesting thing: we're actually in the midst of a struggle right now. Are you going to let government control these things, or are you going to let private industry control them? The marketplace, as it were. And I would certainly argue there are functions that are not well controlled by a marketplace-type situation, health care and education being two obvious ones, where I don't really want a competitive market. I want somebody with oversight saying what's best for the patients or what's best for the kids, while still considering the economic issues, yes. Well, as you say that, I think to myself, industry can be very influential and very crafty. They can get the government to be an extension of their self-interest. They can get the government invested in their view of things. And so government, at least in our recent lifetimes, has gotten maybe too sensitive to that and has adopted self-interest as a motivating feature rather than doing the common good. I wish we could go back to a time when Mr. Smith went to Washington and we did the common good. But you could program a machine to do the common good. The machine would have no self-interest. The machine would be programmed to act in the interest of the community in general. That would be infinitely better than the problem of humanity. Yeah, absolutely.
I mean, we are sitting here replaying the tragedy of the commons every day, in terms of throwing plastic and crud into the oceans, because, hey, big ocean, it disappears as soon as you throw it in there and it's gone, right? But of course it really isn't gone. It just adds to more crud and more crud. And a sensible, reasonable overseer, as it were, would say, hey, you can't keep doing this indefinitely. This is bad for the ocean. It's bad for all the life forms in there. It's ultimately going to hurt your fisheries. It's going to deprive 3 billion people a day of the protein they need. It's a stupid lose-lose situation. So let's not do it. Right, right. So we don't do it. It becomes a priority. It reminds me of the comparison of the American democratic system, and for that matter the European democratic systems, with China, which is more totalitarian. If Xi Jinping, and his Politburo and his Central Committee, find that a given thing, like, you know, damaging the environment, is not a good thing, he can come down and say, no, we're not doing that. And my word counts. No big discussion. My word counts. That's the end of the conversation. All done. Same thing with, say, sea-level rise. We're going to protect this harbor, this island, this shore from sea-level rise. That's a high priority, because we look ahead. We, Xi Jinping and the Politburo and the Central Committee, and for that matter the black box, have the capability of looking ahead and making a decision based on what we see in the future, not within the election cycle. Relative to that, our system here is not nearly as efficient, don't you think? Right, right. Our system, yeah, takes longer to change. It's very sluggish in responding to changes. People do get their vested self-interests dug into place.
Right. So when we, for instance, face issues of rising sea level, it's going to be a thousand, ten thousand, a hundred thousand lawsuits about being pushed off of your nice coastal land, whereas in China, yes, they're going to just say, tomorrow you guys are all leaving your houses and you're moving 20 miles inland. End of discussion. Boom. Yeah. And it's done. Yeah. And, you know, both approaches have their place. I mean, this is going to happen, you know, and the only question is when it's going to happen, and what resistance it meets before it does happen. Right. But we start seeing things like sea-level rise and other environmental issues, and this country, we can't see into the future. We just have blinders on about that. Some day we're going to have to have a machine tell us, no, you've got to do this. Yeah. Well, I mean, there's a recent slew of evidence accumulating that points out that a lot of common air pollutants, particularly the tiny particulates that are ubiquitous now, are associated with the onset of dementia. And clearly it's not in anyone's interest to have a whole population of people getting dementia earlier than they otherwise would. And yet people are still, you know, building new coal plants and spewing the stuff out into the air, even though five years, ten years down the road there are going to be huge human health costs. Public health is huge. And you could save a population, give them much better health care. And you can also, what was it, in 1984, where they all marched down a road at the end? Reaching a certain age or life condition, that was the end of you. And I mean, that may sound hard, but, you know, maybe not too hard, sort of a death-with-dignity kind of thing. At the end, that was the end.
And in the meantime, the population in general, it's like the herd, has to get culled, right? Sure. You can say that's immoral, but I'm not sure it is immoral. No. I mean, look at this, from some years ago; I heard this figure, and I don't know what it is today. But at that point, they pointed out that something like 60 cents of every dollar in our health care system went to care in the last 30 days of people's lives, whereas less than two cents at that point was going to prenatal care or something, which is like insane. And what I saw recently: in the last six months of a person's life in this country, that person's health care costs will be six times the average health care cost. It's just not efficient. And it doesn't help them very much either. And all of the long-term studies in public health show that indeed giving young children, particularly neonatal and in the early years, good care, good healthy food, and a good environment pays off big time. It pays off in education, it pays off in kids' health, it pays off in their social adjustment. And it continues to pay off for decades down the road. Yeah. I mean, you know, just take a grocery store. There's all kinds of junk food in there that is damaging the health of the population. And we haven't gotten to the point where we actually make rational choices, I mean, as a government, about what should be on those shelves and what shouldn't. Because a lot of it is damaging our children and ourselves. So, you know, I do hearken back to the notion that in some cases a good machine with lots of authority would actually be, you know, a better deal. And furthermore, that a good machine could make complex decisions without the baggage of emotional, hostile, aggravated, imperfect human decisions.
You know, for example, and I leave you with this rhetorical question, you can respond or not: would a machine do better with the health care bill? You bet it would. Yeah, could it be worse? And so many things are pending in Congress right now. And how long would it take? Not long. Exactly, exactly. Thank you, Ethan. Thank you, James. Always great. Let's do it again. Indeed.