from Las Vegas. It's theCUBE. Covering NXTWORK 2019 Americas. Brought to you by Juniper Networks. Welcome back, everybody. Jeff Frick here with theCUBE. We're in Las Vegas at Caesars at the Juniper NXTWORK event. About a thousand people going over a lot of new, cool things. 400 gig, who knew that was coming? New information for me. But that's not why we're here today. We're here for the fourth installment of Around theCUBE, unpacking AI. And we're happy to have all the winners of the three previous rounds here in the same place. We didn't have to do it over the phone. So we're happy to have them, and let's jump into it. So the winner of Round One was Bob Friday. He is the VP and CTO at Mist, a Juniper company. Bob, great to see you. Yeah, good to be back. Absolutely. All the way from Seattle, Sharna Parky. She's a VP and applied scientist at Textio. Good to see you, Sharna. Good to see you. And from Google, we know a lot of AI happens at Google, Rajan Sheth. He is the VP of AI Product Management at Google. Welcome. Thank you. Great to be here. All right, so let's jump into it. Just to warm everybody up, and we'll start with you, Bob: when you're talking to someone at a cocktail party Friday night, talking to your mom, and they say, what is AI? What do you give them as an example of where AI is impacting our lives today? Well, I think we all know the examples of the self-driving car, AI starting to help our healthcare industry diagnose cancer. For me personally, I had kind of a weird experience last week at a retail technology event, where basically we had these new digital mirrors doing facial recognition, right? And you start to have digital mirrors that are basically guessing, hey, you have a beard, you have some glasses, and they start calling me old. So this is where AI gets kind of personal.
It's one thing for you to call me old, but an AI, you know, walking down a mall with a bunch of mirrors calling me old, that's a little annoying. Did it bring up, like, a cane or a walker? No, no, no. But they started giving me some advertisements that were like, okay, guys, this is a little bit over the top. All right, Sharna, what about you? What's your favorite example to share with people? Yeah, I think one of my favorite examples of AI is kind of accessible and on your phone, where the photos you take on an iPhone, the photos you put in Google Photos, it's automatically detecting the faces and labeling them for you. It's like, here are selfies, here's your family, here are your children. That's the most accessible one. The ones that I think people don't really think about a lot are things like loan applications, right? We actually have AI deciding whether or not we get loans, and that one is probably the most interesting one to me right now. Rajan? So I think the photos example is probably my favorite as well, and what's interesting to me is that really, AI is actually not about the AI, it's about the user experience that you can create as a result of AI. What's cool about Google Photos is that my entire family uses Google Photos, and they don't even know that the underlying technology is some of the most powerful AI in the world. But what they know is they can find every picture of our kids on the beach whenever they want to. Or, we had a great example: with our kids, every time they like something in the store, we take a picture of it, and we can look up "toy" and actually find everything that they've taken a picture of. It's interesting, because I think most people don't even know the power that they have. If you search for "beach" in your Google Photos; I was looking for an old Bug picture from my high school, and it came right up. So until you kind of explore, it's pretty tricky.
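The photo-search experience Rajan describes comes down to two pieces: a model that predicts labels for each photo, and an index from labels back to photos, so a query like "beach" is a cheap lookup. Here is a minimal sketch of the index half; the photo IDs and labels are invented, and in a real system the labels would come from an image-recognition model rather than being hand-written:

```python
def build_label_index(photo_labels):
    """Invert {photo_id: [labels]} into {label: [photo_ids]} so a
    search like 'beach' becomes a simple dictionary lookup."""
    index = {}
    for photo, labels in photo_labels.items():
        for label in labels:
            index.setdefault(label, []).append(photo)
    return index

# Invented labels standing in for a classifier's predictions
photos = {
    "img_001.jpg": ["beach", "kids"],
    "img_002.jpg": ["toy", "store"],
    "img_003.jpg": ["beach", "sunset"],
}
index = build_label_index(photos)
print(index["beach"])  # ['img_001.jpg', 'img_003.jpg']
```

The hard part, of course, is the classifier that produces the labels; the lookup side is deliberately simple so search stays fast.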
So Rajan, a lot of the conversation about AI focuses on general purpose machines and robots and computers, but people don't really talk about the applied AI that's happening all around us. Why do you think that is? It's a good question. There's a lot more talk about general purpose, but the reality of where this has an impact right now is those specific use cases. So for example, things like personalizing customer interactions, or spotting trends that you wouldn't have spotted before, or turning unstructured data like documents into structured data, that's where AI is actually having an impact right now. And I think it really boils down to getting to the right use cases where AI can add value. Sharna, I wanted to ask you, there's always a lot of conversation about whether AI will replace people or is an augmentation for people. We had Garry Kasparov on a couple of years ago, and he talked about how it was the combination of he plus the computer that made the best chess player, but that quickly went away. Now the computer is actually better than Garry Kasparov plus the computer. How should people think about AI as an augmentation tool versus a replacement tool? And is it just going to be specific to the application? How do you think about those? Yeah, I would say that any application where you're making life-and-death decisions, where you're making financial decisions that disadvantage people, anything where you've got UAVs and you're deciding whether or not to actually drop the bomb, you need a human in the loop. If you're trying to change the words that you are using to get a different group of people to apply for jobs, you need a human in the loop. Because it turns out that, like the example of the beach, you can type "sheep" into your phone and you might get just a field, a green field. And the AI doesn't know that, if it's always seen sheep in a field, then when the sheep aren't there, the field isn't a sheep.
Like, it doesn't have that kind of recognition to it. So anything where we're making decisions about parole or finances, anything like that, needs to have a human in the loop, because those types of decisions are fundamentally changing the way we live. Great. So let's shift gears; or did you have something to add? No, no, I didn't. Okay. The team reminded me I've been delinquent on my bell, so I'll be more active on the bell, sorry about that. So everyone's even, we're starting at zero again. So I want to shift gears and talk about data sets. Bob, you were up on stage demoing some of your technology, the Mist technology, and really it's an interesting combination of data sets. AI in its current form needs a lot of data. Again, like the classic chihuahua-and-blueberry-muffin photos, you've got to run a lot of them through. How do you think about data sets, in terms of having the right data and a complete data set to drive an algorithm? Yeah, I think we all know data sets were one of the tipping points for AI becoming more real, right? Along with cloud computing and storage. But data is really one of the key points of making AI real, right? And my example on stage was wine, right? Great wine starts with great grapes; AI starts with great data. For us personally, the LSTM is an example in our networking space, where we have data for the last three months from our customers, and we're really using the last 30 days to train these LSTM algorithms, to get this anomaly detection to a point where we don't have false positives. How much of the training is done once you've gone through the data a couple of times and adjusted, versus when you first start and you're not really sure how it's going to shake out in the algorithm? Yeah, so in our case right now, training happens every night. So every night we're basically retraining those models to be able to predict if there's going to be an anomaly on the network.
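Bob's nightly-retraining loop can be caricatured in a few lines: refit a baseline from the last 30 days of data, then flag new readings that deviate too far from it. This is only a toy stand-in; Mist's actual pipeline uses LSTMs over real telemetry, and the metric, numbers, and threshold below are invented for illustration:

```python
import statistics

def retrain(history):
    """Nightly 'retraining': fit a simple baseline (mean and spread)
    from the last 30 days of per-day metric samples."""
    return statistics.mean(history), statistics.pstdev(history)

def is_anomaly(model, value, k=3.0):
    """Flag a reading more than k standard deviations from the
    baseline -- a crude stand-in for the LSTM's anomaly score."""
    mean, stdev = model
    return abs(value - mean) > k * stdev

# 30 days of invented throughput readings, in Mbps
history = [100, 102, 98, 101, 99, 100, 103, 97, 100, 101,
           99, 100, 102, 98, 100, 101, 99, 100, 102, 98,
           100, 101, 99, 100, 103, 97, 100, 101, 99, 100]
model = retrain(history)
print(is_anomaly(model, 100))  # typical reading -> False
print(is_anomaly(model, 40))   # severe drop -> True
```

Because `retrain` runs on a fresh window every night, the baseline drifts with the network, which is the point Bob makes about keeping models current.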
You know, and this is really an example where, when you look at all these cat-image things, these neural networks are really one of the transformational things that moved AI into the reality column. And it's starting to impact all our different industries, whether it's text or imaging. And the networking world is an example where AI and deep learning are really starting to impact our networking customers. Sharna, I want to go to you. What do you do if you don't have a big data set? You don't have a lot of pictures of chihuahuas and blueberry muffins, and I want to apply some machine intelligence to the problem. I mean, so you need to have the right data set. You know, "big" is a relative term, and it depends on what you're using it for, right? So you can have a massive amount of data that represents solar flares, and then you're trying to detect some anomaly, right? If you train an AI on what normal is based upon a massive amount of data, and you don't have enough examples of that anomaly you're trying to detect, then it's never going to say there's an anomaly there. So you actually need to oversample. You have to create a population of data that allows you to detect those images. You can't say, oh, I'm going to reflect in my data set the percentage of black women in Seattle, which is something below six percent, and say it's fair. It's not, right? You have to be able to oversample the things that you need, and in some ways you can get this through surveys, you can get it through actually going to different sources, but you have to bootstrap it in some way. And then you have to refresh it, because if you leave that data set static, like Bob mentioned, people are changing the way they do attacks on networks all the time. And so you may have been able to find the one yesterday, but today it's a completely different ball game. Rajan, to you: which comes first, the chicken or the egg?
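Sharna's oversampling point, before the handoff to Rajan, can be sketched like this: randomly duplicate minority-class examples until the class balance is workable. The labels and counts here are invented, and real pipelines often use more sophisticated techniques (such as synthesizing new minority examples rather than duplicating), but the mechanics are the same:

```python
import random

def oversample_minority(samples, labels, target_ratio=1.0):
    """Randomly duplicate minority-class examples until each class
    reaches target_ratio times the majority-class count."""
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    majority = max(by_class.values(), key=len)
    out_s, out_y = list(samples), list(labels)
    for y, group in by_class.items():
        need = int(target_ratio * len(majority)) - len(group)
        for _ in range(max(0, need)):
            out_s.append(random.choice(group))
            out_y.append(y)
    return out_s, out_y

random.seed(0)
# 95 "normal" readings and 5 "anomaly" readings -- invented data
samples = list(range(100))
labels = ["normal"] * 95 + ["anomaly"] * 5
xs, ys = oversample_minority(samples, labels)
print(ys.count("normal"), ys.count("anomaly"))  # 95 95
```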
Do you start with the data and say, this is the right opportunity to apply some AI? Or do you have some AI objectives that you want to achieve, and now you've got to go out and find the data? So actually, I think where it starts is the business problem you're trying to solve, and then from there you need to have the right data. What's interesting about this is that you can actually have starting points. So for example, there are techniques around transfer learning, where you're able to take an algorithm that's already been trained on a bunch of data and train it a little bit further with your data. And so we've seen that such that people that may have, for example, only 100 images of something can use a model that's been trained on millions of images, and use only those 100 to create something that's actually quite accurate. So that's a great segue; I'll give you a ring on that one. It's a great segue into talking about applying an algorithm that was built around one data set to a different data set. Is that appropriate? Is that correct? Are you risking all kinds of interesting problems by taking that and applying it here, especially in light of people going to algorithm marketplaces? Because, hey, I don't have a data scientist, I can go get an algorithm in a marketplace and apply it to my data. How should people be careful not to make a bad decision based on that? So I think it really depends; it depends on the type of machine learning that you're doing and what type of data you're talking about. So for example, with images there are well-known techniques to be able to do this, but with other things there aren't really, and so it really depends. But then the other really important thing is that, no matter what, at the end you need to test and iterate based on your data sets and based on sample data, to see if it's accurate or not. And then that's going to guide everything, ultimately. Sharna, let me go to you.
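The transfer-learning idea Rajan describes can be sketched in miniature before we move on: keep a "pretrained" feature extractor frozen, and train only a tiny head on your handful of examples. Everything here, the extractor, the features, and the data, is invented for illustration; real transfer learning reuses the lower layers of a network trained on millions of images, not a hand-written summary function:

```python
def pretrained_features(image):
    """Stand-in for a frozen, pretrained feature extractor: it
    summarizes an 'image' (a list of pixel values) as a
    (brightness, contrast) pair and is never updated."""
    mean = sum(image) / len(image)
    spread = max(image) - min(image)
    return (mean, spread)

def train_head(examples, labels, epochs=50, lr=0.1):
    """Train only a tiny linear head (a perceptron) on top of the
    frozen features -- the transfer-learning step."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for img, y in zip(examples, labels):
            f = pretrained_features(img)
            pred = 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0
            err = y - pred
            w[0] += lr * err * f[0]
            w[1] += lr * err * f[1]
            b += lr * err

    def classify(img):
        f = pretrained_features(img)
        return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0
    return classify

# A handful of labelled examples (invented): bright images are class 1
bright = [[200, 210, 205], [220, 215, 225], [190, 200, 195]]
dark = [[20, 30, 25], [10, 15, 5], [40, 35, 45]]
classify = train_head(bright + dark, [1, 1, 1, 0, 0, 0])
print(classify([230, 225, 235]), classify([15, 25, 20]))  # 1 0
```

The point mirrors Rajan's: six examples are enough here because the heavy lifting (the features) is already done.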
You brought up something in the preliminary rounds about openness in AI, this idea that we can't have a black box where stuff goes into the algorithm and stuff comes out and we're not sure how we got the result. Sounds really important. Is it even plausible? Is it feasible? This is crazy statistics, crazy math. You talked about the business objective that someone's trying to achieve. I go to the data scientist: here's my data, and you're telling me this is the output. Where's the line between the layman, the business person, and the hardcore data science, to bring together the knowledge of here's what's making the algorithm say this? Yeah, there are a lot of names for this, whether it's explainable AI, or interpretable AI, or opening the black box, things like that. The algorithms that you use determine whether or not they're inspectable, and the deeper your neural network gets, the harder it is to inspect, actually, right? So to your point, every time you take an AI and you use it in a different scenario than what it was built for; for example, there was a police precinct in New York that had facial recognition software, and a victim said, oh, the person looked like this actor, looked like, I don't know, Bill Cosby or something like that. And you were never supposed to take an image of an actor and put it in there to find people that look like them. But that's how people were using it. So to Rajan's point, yes, you can transfer learning to other AIs, but it's actually the humans that are using it in ways that are unintended that we have to be more careful about, right? Even if your AI is explainable, if somebody tries to use it in a way that it was never intended to be used, the risk is much higher. Now, maybe I'd add, you know, if you look at Marvis, what we're building for the networking community, a good example is when Marvis tries to, say, estimate your throughput, right?
Your internet throughput; for that we use what we call a decision tree algorithm. And that's a very interpretable algorithm. When we predict low throughput, we know how we got to that answer, right? We know what features got us there. But when we're doing something like anomaly detection, that's a neural network. You know, it's a black box that tells us, yes, there's a problem, there's some anomaly, but it doesn't know what caused the anomaly. So that's a case where we actually use a neural network to find the anomaly, and then we're using something else to find the root cause. So it really depends on the use case, and whether you're going to use an interpretable model or a neural network, which is more of a black-box model that tells you you've got a cat, or you've got a problem somewhere. So Bob, that's really interesting. So can you not unpack a neural network? Is it just the nature of the way that the communication and the data flow and the inferences are made, that you can't go in and unpack it, that you have to have that separate kind of process to get to the root cause? Yeah, you know, it's always hard to say never, but inherently, yes, a neural network is a very complicated set of weights, right? It's usually a supervised training model: we're feeding it a bunch of data and trying to train it to detect a certain feature, a certain output. But that is where they're powerful, right? And that's why they're doing such a good job, because they are mimicking the brain, right? That neural network is a very complex thing. It's kind of like your brain, right? We really don't understand how your brain works right now. When you have a problem, it's really trial and error as we try to figure out what's going on. So I want to stay with you, Bob, for a minute. What about when you change what you're optimizing for? You just said you're optimizing for throughput of the network, you're looking for problems.
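Bob's contrast above is easy to see in code: a decision tree can hand back the exact path of tests that produced its answer, which a deep network cannot. Here is a toy version with invented feature names and thresholds (not Marvis's actual model):

```python
def predict_throughput(sample):
    """Toy interpretable decision tree: returns a prediction *and*
    the path of feature tests that led to it, which is what makes
    the model explainable."""
    path = []
    if sample["signal_dbm"] < -75:
        path.append("signal_dbm < -75 (weak signal)")
        return "low", path
    path.append("signal_dbm >= -75")
    if sample["clients_on_ap"] > 40:
        path.append("clients_on_ap > 40 (congested AP)")
        return "low", path
    path.append("clients_on_ap <= 40")
    return "normal", path

pred, why = predict_throughput({"signal_dbm": -80, "clients_on_ap": 10})
print(pred)               # low
print(" -> ".join(why))   # signal_dbm < -75 (weak signal)
```

The `why` list is the interpretability Bob describes; a neural network's answer to the same question is a vector of weights with no comparable story.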
Now let's just say it's the end of the quarter, so for some other reason you're changing what you're optimizing for. Do you have to write a separate algorithm? Can you have dynamic movement inside that algorithm? How do you approach that problem? Because you're not always optimizing for the same things, depending on the market conditions. Yeah, I mean, I think a good example, again with Marvis, is with what we call reinforcement learning, right? Reinforcement learning is a model we use for things like radio resource management. And there we're really trying to optimize for the user experience, and the model is trying to balance the reward, whether or not we have a good balance between the network and the user, right? That reward can be changed. So that algorithm is basically reinforcement: you can fundamentally change how that algorithm works by changing the reward you give the algorithm. Great. Rajan, back to you. A couple of huge things have come into play in the marketplace; let me get your take. One is open source. You know, what's the impact of open source, generally, on the ability to use AI in more applications? And then two, cloud, and soon to be edge, you know, the next stop. How do you guys incorporate that opportunity? How does it change what you can do? How does it open up the lens of AI? Yeah, I think open source is really important, because one thing that's interesting about AI is that it's a very nascent field. And the more that there's open source, the more that people can build on top of each other's work and utilize what others have done. It's similar to how we've seen open source impact operating systems, the internet, things like that. With cloud, I think one of the big things is that now you have the processing power and the ability to access lots of data to be able to create these networks. And so the capacity for data and the capacity for compute is much higher.
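Bob's point a moment ago, that you change a reinforcement-learning system's behavior by changing its reward rather than rewriting the algorithm, can be shown with a deliberately tiny example. The channel settings and numbers are invented, and a real radio-resource-management policy learns over many states and steps rather than one greedy choice:

```python
def best_action(actions, reward):
    """Pick the action with the highest reward -- a one-step stand-in
    for the policy a reinforcement learner converges to."""
    return max(actions, key=reward)

# Invented channel settings: (throughput in Mbps, interference caused)
actions = {"wide_channel": (300, 0.9), "narrow_channel": (120, 0.2)}

def reward_throughput(a):
    """Reward A: care only about raw throughput."""
    return actions[a][0]

def reward_balanced(a):
    """Reward B: balance throughput against interference caused."""
    throughput, interference = actions[a]
    return throughput * (1 - interference)

print(best_action(actions, reward_throughput))  # wide_channel
print(best_action(actions, reward_balanced))    # narrow_channel
```

Same `best_action`, same actions; swapping the reward function flips the behavior, which is exactly the lever Bob describes.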
Edge is going to be a very important thing, especially going into the next few years. You're seeing more things incorporated on the edge, and one exciting development is around federated learning, where you can train on the edge and then combine some of those aspects into a cloud-side model. And so that, I think, will actually make edge even more powerful. But it's got to be so dynamic, right? Because the fundamental question used to always be whether to move the compute to the data or the data to the compute. Well, now on these edge devices you've got a ton of data, right? Sensor data, all kinds of machine data. You've got potentially nasty, hostile conditions. You're not in a nice, pristine data center where the environmental conditions are controlled, and there are connectivity issues. So when you think about that problem, yet there's still great information there, and you've got latency issues; some of it might have to be processed close to home. How do you incorporate that age-old thing of the speed of light, and still break up the problem to give you a step up? What we see a lot of customers do is a lot of training in the cloud, but then inference on the edge. And so that way they're able to create the model that they want, but then they get fast response time by moving the model to the edge. The other thing is that, like you said, lots of data is coming into the edge. So one way to do it is to efficiently move that to the cloud, but the other way is to filter, and try to figure out what data you want to send to the cloud so that you can create the next data sets. Sharna, back to you. Let's shift gears into ethics, this pesky issue that's not a technological issue at all. But we see it often, especially in tech: just because you can doesn't mean that you should. And this is not a STEM issue, right? There are a lot of different things that happen. So how should people be thinking about ethics? How should they incorporate ethics?
How should they make sure that they've got a standard, kind of overlooking what they're doing and the decisions that are being made? Yeah, one of the more approachable ways that I have found to explain this is with behavioral science methodologies. So, ethics is a massive field of study, and not everyone shares the same ethics. However, you can try to bring it closer to behavior change, because every product that we're building is seeking to change a behavior. We need to ask questions like: what is the gap between the person's intention and the goal we have for them? Would they choose that goal for themselves, or not? If they wouldn't, then you have an ethical problem. And this can be true of the intention-goal gap or the intention-action gap. We saw this when we regulated cigarettes: we can't just make them look cool without telling people what the cigarettes are doing to them, right? So we can apply these same principles moving forward, and they're pretty accessible without having to know that this philosopher and that philosopher and this ethicist said these things. It can be pretty human. The challenge with this is that most people building these algorithms are not trained in this way of thinking. And especially when you're working at a startup, right? You don't have access to massive teams of people to guide you down this journey. So you need to build it in from the beginning, and you need to be able to act based upon principles. And it's going to touch every component. It should touch your data, your algorithm, the people that you're using to build the product. If you only have white men building the product, you have a problem. You need to pull in other people. Otherwise, there are just blind spots that you are not going to think of in order to build that product for a wider audience. It seems like we're on such a razor-sharp edge, right?
Coca-Cola wants you to buy Coca-Cola, and they show ads for Coca-Cola, and they appeal to your, let's all sing together on the hillside and be one, right? But it feels like with AI that now you can cheat, right? Now you can use behavioral biases that are hardwired into my brain as a biological creature against me. So where is the fine line between just trying to get you to buy Coke, which some would argue is probably just as bad as Juul, because you get diabetes and all these other issues? But that's acceptable, and cigarettes are not, and now we're seeing this stuff on Facebook where they're coming right at you. Well, we see taxes on soda now, though, right? So we know that this is bad. And Coke isn't just selling Coke anymore; they're also selling vitamin water. So their play isn't to have a single product that you can purchase, but to have a suite of products: if you want that Coke, you can buy it, but if you want that vitamin water, you can have that too. I think the vitamin water and a smile only comes with the Coke, though. Bob, you wanted to jump in? Yeah, I think we're going to see ethics really break into two different discussions, right? I mean, ethics is already about human behavior, what you're already doing, right? Taking bad behavior, like discriminatory hiring, and training that behavior into an AI is going to be wrong. It's wrong in the human world; it's going to be wrong in the AI world. I think the other component of this ethics discussion is really around privacy of data. It's like that mirror example, right? Who gave that mirror the right to tell me I'm old, and actually do something with that data, right? Is that my data, or is it the mirror's data, that it recognized me and did something with it, right? That's the Facebook example, when I get the email telling me, look at that picture, and someone's tagged me in the pictures; like, where did that come from?
Right, but I'm curious, Bob, to follow up on that: social norms change. We talked about it a little bit before we turned the cameras on, right? It used to be okay to have black people barred from drinking out of a fountain, or made to come in the side door of a restaurant; not that long ago, right, in the '60s. So if someone had built an algorithm then, it would probably have incorporated that social norm. But social norms change. So how should we try to stay ahead of that, or at least go back reflectively after the fact and say, kind of back to the black box, ooh, that's no longer acceptable, we need to tweak this thing? I would have said, in that example, that it was wrong 50 years ago. Yep. It was not okay 50 years ago. It was wrong. But if you'd asked somebody in Alabama, you know, at the University of Alabama math department, who'd been born and bred in that culture, well, they probably would not necessarily have agreed. But generally, though, again, assuming things change, how should we make sure to go back and check that we're not carrying forward things that are no longer the right thing to do? Well, as I said, I think what we know is wrong is going to be wrong in the AI world. I think the more subtle thing is when we start relying on these AIs to make decisions, like, should my car hit the pedestrian or save my life? Those are tough decisions to let a machine make, or your drop-the-bomb decision, right? When we start letting the machines decide, or, is it okay for Marvis to give this VIP preference over other people, right? Those types of decisions are the ethical decisions. You know where they're right and wrong in the human world; I think the same thing will apply in the AI world. I do think we'll start to see more regulation. Just like we see regulation happen in our hiring, that regulation is going to be applied to our AI solutions. We're going to come back to regulation in a minute, but Rajan, I want to follow up with you.
In your earlier session, you made an interesting comment. You said 10% is clearly good and 10% is clearly bad, but there's a soft, squishy middle of 80% that isn't necessarily clearly good or bad. So how should people make judgments in this big gray area in the middle? Yeah, and I think that is the toughest part. And so the approach that we've taken is to set out a set of AI principles. What we did is actually write down seven things that we think AI should do, and four things that we should not do, that we will not do. And we now actually look at everything that we're doing against those AI principles. And so part of that is coming up with that governance process, because ultimately it boils down to doing this over and over, seeing lots of cases, and figuring out what you should do. And so that governance process is something we're doing, but I think it's something that every company is going to need to do. Sharna, I want to come back to you. We're going to shift gears to talk a little bit about law. We've all seen Zuckerberg, unfortunately for him, stuck in these congressional hearings over and over and over again, a little bit of a deer in the headlights. You made an interesting comment on your prior show that it's almost like he's asking for regulation. Like, you know, he stumbled into some really big, hairy, nasty areas that were never necessarily intended when they launched Facebook out of his dorm room many, many moons ago. So what is the role of the law? Because the other thing that we've seen, unfortunately, in a lot of those hearings is that a lot of our elected officials are way, way, way behind. They're still printing their emails, right? So what is the role of the law? How should we think about it? What should we invite from the law to help sort some of this stuff out?
Yeah, I think as an individual, right, I would like for each company not to make up its own set of principles; I would like us to have a shared set of principles that we're all following. The challenge, right, is that between governments, that's impossible. China is never going to come up with the same regulations that we will. They have different privacy standards than we do. But we are seeing it locally: the state of Washington has created a future-of-work task force, and they are coming into the private sector and asking companies like Textio, and like Google and Microsoft, to actually advise them on, what should we be regulating? We don't know, we're not the technologists; but they know how to regulate, and they know how to move policies through the government. What we'll find is that if we don't advise regulators on what should be regulated, they're going to regulate it in some way anyway, just like they regulated the tobacco industry, just like they regulated monopolies. Tech is big enough now, there is enough money in it now, that it will be regulated. And so we need to start advising them on what we should regulate. Because, just like Mark, he said, well, everyone else was doing it, my competitors were doing it; so if you don't want me to do it, make us all stop. I think, can I do a negative bell on that one? Not for you, but for Mark's response; to me, that's crazy. So Bob, old man at the mall, it's actually a little bit more codified now, right? There's GDPR, which came through in May of last year. Now the new one is California's, let me make sure I get it right, the California Consumer Privacy Act, which goes into effect January one. And it's interesting that the hardest part of the implementation of that, and I don't think they've implemented it yet, is the right to be forgotten. Because as we all know, computers are really good at recording information, and in the cloud it's recorded everywhere; it's all still there. So when you get these types of regulations, how does that impact AI?
Because if I've got an algorithm built on a data set, and person number 472 decides they want to be forgotten, how the heck do I deal with that? Well, I mean, with Facebook, I kind of see it as, I suspect Mark knows what's right and wrong. He's just kicking the can down the road, like, oh, you guys, it's your problem, please tell me what to do. I see AI as kind of like any other new technology. It can be abused and used in the wrong ways. I think, legally, we have a constitution that protects our rights, and I think we're going to see the law treat AI just like any other constitutional matter. And people who are building products using AI, just like when you build medical products or other products that can potentially harm people, you're going to have to make sure that your AI product does not harm people, that your AI product does not encode and promote discriminatory results. So I think our constitutional protections are going to apply to AI, just like we've seen with other technologies. And it's going to create jobs because of that, right? Yeah, so it'll be a whole new set of lawyers. A whole new set of lawyers, and testers even. Because otherwise, if an individual company is saying, oh, we tested it, it works, trust us; like, how are you going to get independent third-party verification of that? So we're going to start to see a whole proliferation of that type of field, which never had to exist before. Yeah, one of my favorites is Dr. Rumman Chowdhury from Accenture. If you don't follow her on Twitter, follow her; she's fantastic and a great lady. So I want to stick with you for a minute, Bob, because the next topic is autonomy. Rami, up on the keynote this morning, talked about Mist and really this kind of shifting of the workload of fixing things into an autonomous setup, where the system now is finding problems, diagnosing problems, fixing problems; I think he said even generating return authorizations for broken gear, which is amazing.
But autonomy opens up all kinds of crazy, scary things. Robert Gates, whom we interviewed, said the only guns in the entire US military that are autonomous are the ones on the border of North Korea. Every single other one has to run through a person. So when you think about autonomy, when you can actually grant this AI the autonomy, the agency, to act, what are some of the things to think about, and what are the things to keep it from just doing something bad really, really fast and efficiently? Yeah, I mean, I think this is what we discussed, right? I think for all practical purposes, there is a tipping point. I think eventually we will get to the C-3PO, Terminator day where we actually build something on par with a human. But for all real purposes right now, we're really looking at tools that are going to help businesses, doctors, self-driving cars. And those tools are going to be used by our customers to basically allow them to do more productive things with their time. Whether it's a doctor using a tool with AI to help make better predictions, there's still going to be a human involved. And what Rami talked about this morning, in networking, is really about allowing our IT customers to focus more on their business problems, where they don't have to spend their time finding bad hardware and bad software, and can make better experiences for the people they're actually trying to serve. Right. Sharna, I'm trying to get your take on autonomy, because it's a different level of trust that we're giving to the machine when we actually let it do things based on its own volition. Yeah, there's a lot that goes into this decision of whether or not to allow autonomy. There's an example I read; there's a book that just came out. Oh, what's the title? You Look Like a Thing and I Love You. It was a book named by an AI. If you want to learn a lot about AI and you don't know much about it, get it. It's really funny.
So in there, there is a factory in China where the AI is optimizing the output of cockroaches. Now, they want more cockroaches. Why do they want that? They want to grind them up and put them in a lotion. It's one of their secret ingredients. Now, it depends on what parameters you allow that AI to change, right? If you decide to let the AI flood the container, and then the cockroaches get out through the vents, and then they get to the kitchen to get food, and then they reproduce, the parameters over which you let it be autonomous, that's the challenge. So when we're working with very narrow AI, when you tell the AI you can change these three things and you can't change anything else, then it's a lot easier to make that autonomous decision. And then the last part of it is that you want to know what the result of a negative outcome is, right? What's the result of a positive outcome, and are those results something that we can actually accept? Right, right. Rajan, I'll give you the last word on autonomy, because kind of the next-order step is where the machines actually write their own algorithms, right? They start to write their own code, so they kind of take this next order of thought and agency, if you will. How do you guys think about that? You guys are way out ahead in this space. You have huge data sets. You've got great technology. You've got TensorFlow. When will the machines start writing their own algorithms? Well, actually, it's already starting to happen. For example, we have a product called Google Cloud AutoML, which basically takes in a data set and then finds the best model to match that data set. So things like that are there already, but it's still very nascent. There's a lot more that can happen. And I think ultimately with how it's used, part of it is you have to always look at the downside of automation: what is the downside of a bad decision?
Whether it's the wrong algorithm that you create or a bad decision in that model. And so if the downside is really big, that's where you need to start to apply a human in the loop. So for example, in medicine, AI can do amazing things to detect diseases, but you would want a doctor in the loop to actually diagnose. And so you need to have that in place in many situations to make sure that it's being applied well. But is that just today, or is that tomorrow? Because with exponential growth and as fast as these things are growing, will there be a day where you don't necessarily need the doctor, except maybe to communicate the news? Maybe there are some second-order impacts in terms of how you deal with the family, and kind of pros and cons of treatment options that are more emotional than mechanical. Because it seems like eventually that doctor has a role, but it isn't necessarily in accurately diagnosing a problem. I think for some things, absolutely. Over time, the algorithms will get better and better, and you can rely on them and trust them more and more. But again, I think you have to look at the downside consequence: if there's a bad decision, what happens, and how does that compare to what happens today? So for example, self-driving cars: we will get to the point where cars are driving by themselves. There will be accidents, but the accident rate is going to be much lower than it is with humans today. And so that will get there, but it will take time. And there will be a day when it'll be illegal for you to drive, when it'll be manslaughter, right? I believe absolutely there will be. And I don't think it's that far off, actually. And I'm waiting for the day when I can have my car take me up to Northern California with me sleeping. Hopefully I live that long. That's right, and work while you're sleeping. All right, well, I want to thank everybody a ton for being on this panel.
This has been super fun, and these are really big issues. So I want to give you the final word. We'll just give everyone kind of a final say. And I just want to throw out there Amara's Law. People talk about Moore's Law all the time, but Amara's Law, which Gartner stole and made into the hype cycle, is that we tend to overestimate the impact of technology in the short term, which is why you get the hype cycle, and we tend to underestimate it in the long term. So as you look forward into the future, and we won't put a year number on it, kind of how do you see this rolling out? What are you excited about? What are you scared about? What should we be thinking about? We'll start with you, Bob. Yeah, you know, for me, the day of the Terminator, the C-3PO, I don't know if it's a hundred years or a thousand years, but that day is coming. We will eventually build something that's on par with a human. They mentioned the book, You Look Like a Thing and I Love You. That was written by someone who tried to train an AI to come up with pick-up lines, right? Cheesy pick-up lines. I'm not sure I'm going to trust AI to help me with my pick-up lines yet. You know, "you look like a thing and I love you," I don't know, it may work. It's kind of cute. Yeah, but who would have guessed online dating would be what it is if you had asked 15 years ago? But I think overall, yes, we will see the Terminator, the C-3PO. It's probably not in our lifetime, but it's in the future somewhere. AI is definitely going to be on par with the internet, the cell phone, radio. It's going to be a technology that keeps accelerating. If you look at where technology's been, it's amazing to watch how fast things have changed in our lifetime alone, right? Yeah. And this curve of technology acceleration is amazing. Sharna?
Yeah, I think the thing I'm most excited about for AI right now is the addition of creativity to a lot of our jobs. We build an augmented writing product, and what we do is we look at the words that have been used in the world and their outcomes, and we tell you what words have impacted people in the past. Now, with that information, when you augment humans in that way, they get to be more creative. They get to use language that has never been used before to communicate an idea. You can do this with any field. You can do it with composition of music. If you can have access as an individual to the data of a bunch of cultures, the way that we evolve can change. So I'm most excited about that. I think I'm most concerned currently about the products that we're building that give AI to people who don't understand how to use it or how to make sure they're making an ethical decision. It is extremely easy right now to go on the internet and build a model on a data set, and I'm not a specialist in data, right? So I have no idea if I'm adding bias in or not. And so it's an interesting time, because we're in that middle area. And... it's getting loud. It's getting loud in here. All right, Rajan, we'll just throw it to you before we have to cut out, or we're not going to be able to hear anything in a few minutes. So I actually start every presentation with a picture of the Mosaic browser, because what's interesting is I think that's where AI is today, compared to where the internet was around 1994. We're just starting to see how AI can actually impact the average person. As a result, there's a lot of hype, but what I'm actually finding is that for 70% of the companies I talk to, the first question is, why should I be using this and what benefit does it give me? Really, 70% ask you why? Yeah, and what's interesting about that is that I think people are still trying to figure out, what is this stuff good for?
But to your point about the long run, and that we underestimate the long run, I think that every company out there and every product will be fundamentally transformed by AI over the course of the next decade. And it's actually going to have a bigger impact than the internet itself. And so that's really what we have to look forward to. All right, again, thank you everybody for participating. That was a ton of fun. I hope you had some fun. As I look at the score sheet here, we've got Bob coming in with the bronze at 15 points, Rajan taking the silver at 17, and our gold medal winner, Sharna, at 20 points. Again, thank you. Thank you so much. And I look forward to our next conversation. Me too. All right, Jeff Frick signing out from Caesars. Juniper Nextwork, unpacking AI. Thanks for watching.