From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. Hello everyone, welcome to the CUBE Studio here in Palo Alto. I'm John Furrier, host of theCUBE. We're here introducing a new format for theCUBE, a panel discussion called Around theCUBE, and we have a special segment called Get Smart, unpacking AI with some great guests from the industry: Gene Santos, professor of engineering at Dartmouth College; Bob Friday, vice president and CTO at Mist, a Juniper Networks company; and Ed Henry, senior scientist and distinguished member of technical staff, machine learning, at Dell EMC. Guys, in this format we're going to keep score, and we're going to throw out some interesting conversations around unpacking AI. Thanks for joining us here; appreciate your time. Glad to be here. Okay, first question. As we all know, AI is on the rise; we're seeing AI everywhere. You can't go to a show or read marketing literature from any company, consumer or tech, without AI something in it. So AI is on the rise. The question is, is it real AI? Is AI relevant from a reality standpoint? What is really going on with AI? Gene, is AI real? I think a good chunk of AI is real. It depends on what you apply it to. If it's making some sort of decision for you, that's AI coming into play. But there's also a lot of AI out there that is potentially just a script. So one of the challenges you always have is: if it's scripted, is it scripted because somebody already developed the AI, pulled out all the answers, and is just using those answers straight? Or is it actually learning and changing on its own? I would tend to say that anything learning and changing on its own, that's where you have evolving AI, and that's where you get the most power. Bob, what's your take on this? Is AI real?
Yeah, if you look at Google and the world, what you see is that AI really became real in 2014. That's when AI and ML really became a thing in the industry. And when you look at why it became a thing in 2014, that's when we actually saw TensorFlow open source technology become available, along with the Amazon compute story. Look at what we're doing here at Mist: I really don't have to worry about compute or storage, except for the Amazon bill I get every month. So I think you're seeing AI become real because of some key turning points in the industry. Ed, your take, is AI real? Yeah, so it depends on what lens you want to look at it through. The notion of intelligence is ill-defined, and how you choose to interpret it will guide whether or not you think it's real. I tend to call things AI if they have a notion of agency, if they can navigate their problem space without human intervention. So it really depends on what lens you look at it through. It's a set of moving goalposts, right? If you took your smartphone back to Turing when he was coming up with the Turing test and asked him whether this device, intelligent for some value of intelligent, was AI, would it be AI to him back then? So it really depends on how you want to look at it. Is AI the same as it was in 1988, or has it changed? What's the change point with AI? Because some are saying AI's been around for a while, but there's more AI now than ever before. Ed, we'll start with you. What's different with AI now versus, say, the late 80s and early 90s? See, what's funny is that some of the methods we're using aren't different. I think the big push in the last decade or so has been the ability to store as much data as we can, along with having as much compute readily at our disposal as we have today.
Some of the methodologies — I mean, there was a great Wired article that referenced a method called eigenvector decomposition, which they said came out of quantum mechanics back in 1888, right? So a lot of the methodologies we're using aren't much different. It's the amount of data we have available to us that represents reality, and the amount of compute we have. Bob? Yeah, so for me, back in the 80s I actually did my master's on neural networks. So yes, it's been around for a while. But when I started at Mist, a couple of things had really changed. One is this modern cloud stack, right? If you're going to build an AI solution, you have to have all the pieces to take in tons of data and process it in real time. That's the one big thing that's changed that we didn't have 20 years ago. The other big thing is that we have access to all this open source TensorFlow stuff now. People like Google and Facebook have made it so easy for the average person to do an AI project. Anyone here, anyone in the audience, could train a machine learning model over the weekend right now. You just have to go to Google; they have the data sets. Say you want to build a model to recognize letters and numbers — those data sets are on the internet right now, and you personally could go become a data scientist over the weekend. Gene, your take? Yeah, I think on top of that, because of all that availability of open software, anybody can come in and start playing with AI. It's also building a really large experience base of what works and what doesn't work. And because of that, you can now better define the problem you're shooting for. And when you do that, you know what's going to work and what's not going to work.
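Bob's weekend-project claim is easy to make concrete. As an editorial illustration (nothing the panel actually built), the sketch below trains a 1958-style perceptron, the ancestor of the deep networks discussed here, to tell two 3x3 pixel patterns apart using only the Python standard library. The patterns and labels are hypothetical stand-ins; a real weekend project would swap in TensorFlow and a public letters-and-numbers dataset like the ones Bob mentions.

```python
# Toy patterns: 3x3 pixel grids flattened to 9 features.
# An "X" shape labeled +1 and an "O" shape labeled -1
# (hypothetical stand-ins for real handwriting datasets).
X_SHAPE = [1, 0, 1, 0, 1, 0, 1, 0, 1]
O_SHAPE = [1, 1, 1, 1, 0, 1, 1, 1, 1]
data = [(X_SHAPE, 1), (O_SHAPE, -1)]

weights = [0.0] * 9
bias = 0.0

def predict(pixels):
    # Linear score followed by a threshold: the whole "model".
    score = bias + sum(w * p for w, p in zip(weights, pixels))
    return 1 if score >= 0 else -1

# Classic perceptron update rule: nudge the weights toward
# any misclassified example until the data is separated.
for _ in range(20):
    for pixels, label in data:
        if predict(pixels) != label:
            for i in range(9):
                weights[i] += label * pixels[i]
            bias += label

print([predict(p) for p, _ in data])  # → [1, -1]
```

The learning loop converges in a few passes because the two patterns are linearly separable; deep learning stacks many such units and trains them with gradients instead of this rule, but the "learn from labeled data" shape is the same.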
And for the part that's not going to work, people can also tell you how it might be extended. But overall, this comes back to the question people ask: what is AI? And a lot of that is just focused on machine learning. If it's just machine learning, that's a little limited in terms of what you're classifying or not. Back in the early 80s, AI was really what people nowadays call artificial general intelligence. It's that all-encompassing capability: all the things that we humans can do, that we humans can reason about, all the decision sequences that we make. That's the part we haven't quite gotten to, but that's why the applications of AI with machine learning classification have gotten us this far. Okay, machine learning is certainly relevant. It's been one of the hottest topics in computer science. And with AI becoming much more democratized — you guys mentioned TensorFlow and a variety of other open source initiatives — there's been a great wave of innovation. And for younger generations, it's easier to code now than ever before. But machine learning seems to be at the heart of AI, and there are really two schools of thought in the machine learning world: is it just math, or is there more of a cognition, learning-machine kind of thing going on? This has been a big debate in the industry, and I want to get your takes on it. Gene, is machine learning just math and running algorithms, or is there more to it, like cognition? Where do you fall on this? What's real? If I look at the applications and what people are using it for, it's mostly just algorithms. It's mostly that you've managed to do the pattern recognition, you've managed to compute things out and find something interesting from it.
But then on the other side, the folks working in, say, neuroscience, and the people working in cognitive science, are interested in asking: when we look at machine learning, does it correspond to what we're doing as human beings? The reason I fall more on the algorithm side is that a lot of those algorithms don't match what we're actually thinking. And if they're not matching that, it's like, okay, something else is coming up, but then what do we do with it? You can get an answer and work from it, but if we want to build true human intelligence, how does that all stack together to get to human intelligence? I think that's the challenge at this point. Bob, machine learning: math, cognition, is there more to do there? What's your take? Yeah, I think right now you look at machine learning and the algorithms we use — I think the big thing that happened in machine learning is the neural network and deep learning. That was a milestone, a stepping stone toward actually building these AI behaviors. And look at what's really happening out there, look at the self-driving car. What we don't realize is, and it's kind of scary, you can go to Vegas right now and actually get on a self-driving bus. So this AI and machine learning stuff is starting to happen right before our eyes. When you go to healthcare now and get your diagnosis for cancer, we're starting to see AI and image recognition really change how we get our diagnoses, and that's starting to affect people's lives. So those are cases where this AI and machine learning stuff is starting to make a difference. Then there's the AI singularity discussion: when are we finally going to build something that really has human behavior? Right now we can build AI that can play Jeopardy, right?
And that was one of the inspirations for my company, Mist: hey, if they can build something to play Jeopardy, we should be able to build something that answers questions on par with a network domain expert. So I think we're seeing people build solutions now that mimic a lot of human behaviors. I do think we're probably on the path to building something that is truly on par with human thinking. Whether it's 50 years or 1,000 years out, I think it's inevitable, given how humanity is progressing, if you look at the exponential technological growth we're seeing in human evolution. Well, we're going to get to that in the next question, so you jumped ahead; hold that thought. Ed, machine learning: just math, pattern recognition, or is there more cognition to be had? Where do you fall on this? Right now, I mean, it's all math. We collect some data set about the world, and then we use algorithms and some representation in mathematics to find some pattern, which is new and interesting, don't get me wrong. When you say the word cognition, though, we have to understand that we have a fundamentally flawed perspective, because the one guiding light we have on what intelligence could be is ourselves. Computers don't work like brains, and brains are the thing we've used to define intelligence. Brains don't have a clock; there's no state kept between discrete clock cycles the way there is in a computer. So when you start using words like cognition, we end up trying to measure with ourselves, to use ourselves as the ruler, and most of the methodologies we have today don't necessarily head down that path. So yeah, that's how I view it. Yeah, I mean, state, stateless, those are API kinds of mindsets. You can't run Kubernetes in the brain. Maybe we will in the future. Stateful applications are always harder than stateless, as we all know.
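Ed's "it's all math" point can be made concrete with a sketch: "learning" a linear pattern from data reduces to the closed-form least-squares equations, plain arithmetic with no cognition anywhere. The data points below are invented purely for illustration.

```python
# Ordinary least squares in pure Python: fitting y ≈ a*x + b
# to a data set is just arithmetic over that data set.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]  # roughly y = 2x, with noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form solution: slope = cov(x, y) / var(x),
# intercept chosen so the line passes through the means.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

print(round(a, 2), round(b, 2))  # slope ≈ 1.99, intercept ≈ 0.09
```

The "pattern" the model finds is just these two numbers; larger models find millions of such numbers, but by the same kind of numerical optimization.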
But again, when I'm sleeping, I'm still dreaming. So, cognition and the question of human replacement. This has been a huge conversation — the singularity conversation, the fear many average people, and some technical people as well, have on the job front: will AI replace my job? Will it take over the world? Is there going to be a Skynet Terminator moment? This is a big conversation point because it teases out what could be — tech for good, tech for bad. Some say tech is neutral, but it can be shaped. So the question is: will AI replace humans, and where does that line get drawn? We'll start with Ed on this one. Where do you see this singularity discussion, where humans are going to be replaced by AI? So, replace is an interesting term. Look at the last industrial revolution, and I think people are most worried about the potential for job loss. When you look at what happened during the industrial revolution, this concept of creative destruction came about. The idea is that, yes, technology took some jobs out of the market in some way, shape, or form, but more jobs were created because of that technology. That's the one lighthouse we have with respect to measuring that. The singularity in and of itself — again, there's the ill-defined notion of intelligence that we have today. When you go back and read some of the early papers from psychologists in the early 1900s, the psychologist who came up with this idea of intelligence used the term general intelligence. It's kind of the first time all of civilization tried to assign a definition to this thing called intelligence, right? And it's only been roughly a hundred years, maybe a little longer, since we've had that kind of understanding, normalized at least within Western culture, of what this notion of intelligence is.
So this idea of the singularity is interesting, because we just don't understand enough about the one measuring ruler, the yardstick, that we have — what we consider intelligence, ourselves — to be able to go in and embed that inside of a thing. Gene, what are your thoughts on this? Reasoning is a big part of your research; you're doing a lot of research around intent and context, all these cool behavioral things. This is where machines are there to augment or replace. That's the conversation — your view on this. I think that's where the challenge still lies. If we can capture intentions and actually start communicating them, then we can start getting to general intelligence — sort of like what Ed was referring to with how people have been trying to define it. But one of the problems that comes up is that computers don't really capture that at this point. The intentions they have are still at a low level. If we tie this to the question of the Terminator moment, the singularity, one of the issues is autonomy: how much autonomy do we give to the algorithm? How much does the algorithm have access to? At one extreme, there could be a disaster situation where we weren't very careful and provided an API that gives full autonomy to whatever AI we have to run it. And you could start seeing elements of Skynet come from that. But I also tend to come down and say, even with APIs — and while an API is not AI — a lot of it comes back to intentions: you decide what you're going to give it control over. Then you have the AI itself, where if you've defined the intentions of what it is supposed to do, you can avoid that Terminator moment; that would be more of an accident than an inevitability, as I'm seeing it at this point. So overall, on the singularity, I still think we're a ways off.
And when people worry about job loss, probably the closest thing in recent history that matches it is the whole story of automation. I grew up in Ohio at the time the steel industry was collapsing, and that was a trade-off between automation and the existing jobs. If you have something like that, that's one thing we have to deal with going forward. And I think this is something state governments or national governments should be considering: if you're going to have that job loss, what better planning can you do for it? I've heard different proposals from different people — for example, if we need to retrain people, where do we get the resources? It could be something like an AI job tax. So there's a lot to discuss. We're not there yet, but I do believe that the lower-level repetitive jobs, how should I say, the things we can easily define, could be replaceable. But that's still close to the automation side. Yeah, and there's a lot of opportunity there. Bob, you mentioned in the last segment the singularity, cognition, learning machines. You mentioned deep learning. As the machines learn, they need more data, and data informs. Whether it's biased data or real data, how do you become cognitive? How do you become human if you don't have the data or the algorithms? The data is the data. I mean, I think that's one of the big ethical debates going on right now, right? Are we basically going to take our human biases and train them into our next generation of AI devices? But from my point of view, I think it's inevitable that we will build something as complex as the brain eventually. I don't know if it's 50 years or 500 years from now, but if you look at the evolution of man over the last 100,000 years or so, you see this exponential rise in technology, right?
For thousands of years, our technology was relatively flat. Then, in the last 200 years, we've seen this exponential growth in technology take off. And what's amazing is quantum computing. I always thought of quantum computing as a research-lab thing, but when you start to see VCs investing in quantum computing startups, we're going from university research discussions to actually commercializing quantum computing. When you look at the complexity of what a brain does, it's inevitable that we will build something with basically that complexity. And if you look at how neuroscience studies the brain, we really don't understand how it encodes memories, but it's clear that it does — which is very similar to what we're doing right now with our AI machines, right? We're building things that take data and memories and encode them in some way. So I'm convinced we'll start to see more AI cognition really happen over the next 100 years. Guys, this is a great conversation. AI is real, based on this Around theCUBE conversation — I mean, you're seeing the evidence; you guys pointed it out. And I think cloud computing has been a real accelerant, in combination with machine learning and open source, as you've illustrated. So that brings up the final question I'd love to get each of your thoughts on, because Bob just brought up quantum computing. As the race to quantum supremacy goes on around the world, quantum becomes maybe that next step function, kind of what cloud computing did in revitalizing, or creating a renaissance in, AI. What does quantum do? That begs the question, five, ten years out, if machine learning is just the beginning.
As quantum comes in, with more compute and effectively unlimited resources applied with software, it starts to solve some of these problems. Where does that go in five, ten years? We'll start with Gene, then Bob, then Ed. Let's wrap this up. Yeah, I think if quantum becomes a reality, then where you already have exponential growth, this is going to be exponential on top of exponential. Quantum is going to address a lot of the harder AI problems that come from complexity. With regular search, regular approaches to looking things up, quantum is what potentially allows you to take something that was exponential and make it tractable. So that's going to be a big driver, a big enabler. A lot of the problems I look at in trying to infer intent, I have an exponential number of possible intentions to choose from as an explanation. Quantum would allow me to narrow down to the right one, if that technology can work. Of course, the real challenge is whether I can rephrase the problem as, say, a quantum program. But I think the advance is beyond a step function. Beyond a step function, you say. Okay, Bob, your take on this, because you brought it up: quantum, step function, revolution. What's your view? I mean, quantum computing changes the whole paradigm, right? Because it moves away from the paradigm we know, this binary, if-this-then-that type of computing. So I think quantum computing is more than just a step function; it's going to take a whole paradigm shift. It's going to be another decade or two before we actually get all the tools we need to start leveraging quantum computing. But I think it is going to be one of those step functions that takes our AI efforts into a whole different realm, and it'll let us solve a whole other set of classic problems. And that's why they're working on it right now.
Because it starts to let you crack encryption codes, right? Where you have millions or billions of choices and you have to find that one needle in the haystack, quantum computing is going to open that piece of the puzzle up. And when you look at these AI solutions, it's really a collection of different things going on underneath the hood; it's not just one algorithm trying to mimic human behavior. So quantum computing is going to be one more tool in the AI toolbox that moves the whole industry forward. Ed, you're up. Quantum. Cool. Yeah, so I think it'll, as Gene and Bob alluded to, fundamentally change the way we approach these problems. And the reason is the combinatorial problems everybody's talking about. If I want to evaluate the state space of anything using modern binary-based computers, we have to iteratively make that search over the search space, whereas quantum computing allows you to evaluate the entire search space at once. When you talk about games like Go, the game AlphaGo plays, there are more configurations of a blank 19x19 Go board than you'd have if you put a thousand universes on every proton of our universe. So the state space is absolutely massive; searching it exhaustively is impossible using today's binary-based computers. But quantum computing lets you evaluate search spaces like that in one big chunk, to simplify the picture a bit. So I think it will change how we approach these problems, to Bob and Gene's point. Once we crack that quantum nut, I don't think the technology will look anything like what we have today. Okay, thank you guys. Looks like we have a winner. Bob, you're up by one point, and we have a tie for second between Ed and Gene. Of course I'm the arbiter, but I decided, Bob, you nailed this one, so you're the winner. Gene, you did a great job coming in second place. Ed, good job.
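Ed's state-space numbers and Bob's needle-in-a-haystack framing can both be sanity-checked with a few lines of arithmetic. The figures below are back-of-the-envelope illustrations, not exact game-theoretic counts: a simple upper bound on 19x19 Go board configurations, a conventional order-of-magnitude guess for the proton count of the observable universe, and the classical-versus-Grover query counts for an unstructured search (Grover's algorithm needs on the order of the square root of N oracle queries, versus N/2 expected classically).

```python
import math

# Upper bound on 19x19 Go board configurations: each of the
# 361 points is empty, black, or white. (Legal positions are
# fewer, but still astronomically many.)
board_points = 19 * 19
configs = 3 ** board_points
print(len(str(configs)))        # → 173 (decimal digits)

# Conventional order-of-magnitude figure for protons in the
# observable universe, assumed here for illustration only.
protons = 10 ** 80
print(configs > protons)        # → True: the board count dwarfs it

# Needle in a haystack over N unstructured items.
N = 2 ** 40
classical_expected = N // 2     # ~half the haystack on average
grover_queries = math.isqrt(N)  # Grover: O(sqrt(N)) oracle queries
print(classical_expected // grover_queries)
```

The last ratio shows the square-root speedup Bob's encryption example rests on; it is a quadratic, not exponential, advantage, which is why unstructured search is only one of the tools in the quantum toolbox.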
Bob, you get the last word. Unpacking AI: what's the summary from your perspective as the winner of Around theCUBE? Yeah, from a societal point of view, I think AI is going to be on par with the internet. It's going to be one of these next big technology things. It's starting to impact our lives, and when you look around, it's kind of sneaking up on us, whether it's the self-driving car, the healthcare cancer diagnosis, or the self-driving bus. So I think it's here, and I think we're just at the beginning of it. I think it's going to be one of these technologies that impacts our lives over the next one or two decades; in the next 20 years this is going to be growing exponentially, everywhere, in all of our segments. Thanks so much for playing, guys. Really appreciate it. We have Bob, inventor and entrepreneur; Gene, doing great research at Dartmouth — check him out, Gene Santos at Dartmouth; and Ed, technical genius at Dell, figuring out how to make those machines smarter, and with the software abstractions growing, you guys are doing some good work over there as well. Gentlemen, thank you for joining us on this inaugural Around theCUBE, Unpacking AI, Get Smart series. Thanks for joining us. Thank you. Thank you. That's a wrap, everyone. This is theCUBE in Palo Alto. I'm John Furrier. Thanks for watching.