Live from New York, it's theCUBE, covering Machine Learning Everywhere: Build Your Ladder to AI, brought to you by IBM. Welcome back to New York City. Along with Dave Vellante, I'm John Walls. We continue our coverage here on theCUBE of Machine Learning Everywhere: Build Your Ladder to AI, with IBM our host here today. Occasionally at these events we put together a panel of esteemed experts with deep perspectives on a particular subject, and today our influencer panel is comprised of three well-known and respected authorities in this space. Glad to have Colin Sumter here with us. He's the man with the mic, by the way, so he's going to talk first. Colin is an IoT architect with CrowdBowl. Thank you for being with us, Colin. Jennifer Shen, those of you who watch theCUBE are very familiar with Jennifer, a longtime CUBER, founder at 8Pass Solutions and on the faculty at NYU and Cal Berkeley. And also with us is Craig Brown, a big data consultant. And this is a home game for all of you guys, right? More or less, here we are in the city. So thanks for being with us. We appreciate the time. First off, let's talk about the title of the event: Build Your Path, or your ladder, rather, excuse me, to AI. What are those steps on that ladder, do you think, Colin? The fundamental steps that you've got to jump on or step on in order to get into that true AI environment. In order to get to that true AI environment, John, it's a matter of mastering or organizing your information well enough to perform analytics. That'll give you two choices: to do either linear regression or supervised classification. And then you actually have enough organized data to talk to your team, and to organize your team around that data, to begin that ladder and successively benefit from your data science program. Who'll take a stab at it? Jen? So it's compute, right?
So you need to have the right processing, or at least the ability to scale out, to be able to process the algorithm fast enough to find value in your data. I think the other thing is, of course, the data source itself. Do you have the right data to answer the questions you want to answer? So I think without those two things, you'll either have a lot of great data that you can't process in time, or you'll have a great process and a great algorithm that has no real information, so your output is useless. So I think those are the fundamental things you really do need to have any sort of AI solution built. I'll take a stab at it from the business side. They have to adopt it first. They have to believe that this is going to benefit them, and that the effort that's necessary in order to build into the various aspects of algorithms and data subjects is there. So I think adopting the concept of machine learning, and the development aspects that it takes to do that, is a key component to building the ladder. So this just isn't toe in the water, right? You've got to dive in the deep end, right? Well, it gets to the culture. Right, I mean, if you look at most organizations, not the big five market cap companies, but most organizations, data is not at their core. Humans are at their core, human expertise, and data is sort of bolted on. But that has to change, or they're going to get disrupted. Data has to be at the core. Maybe the human expertise leverages that data. What are you guys seeing with end customers in terms of their readiness for this transformation? From what I'm seeing, where customers spend their time right now is getting out of the silos. So when you speak culture, that's primarily what the culture is surrounding. They develop applications with functionality as a silo, and then data specific to that functionality is the lens through which they look at data. They have to get out of that mindset and look at the data holistically.
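Colin's two starting points above, linear regression and supervised classification, can be sketched in a few lines. Everything in this example is an assumption for illustration (synthetic data, scikit-learn, the particular model choices), not something from the panel:

```python
# A minimal sketch of the two analytics choices named above, on
# synthetic data (an assumption purely for illustration).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # three "organized" features

# Choice 1: linear regression on a continuous target.
y_reg = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=500)
reg = LinearRegression().fit(X, y_reg)
print("regression R^2:", round(reg.score(X, y_reg), 3))

# Choice 2: supervised classification on a labeled target.
y_cls = (X[:, 0] + X[:, 2] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y_cls, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("classification accuracy:", round(clf.score(X_te, y_te), 3))
```

The point mirrors the panel's: neither model is exotic, but both presuppose data organized well enough (features and a target) to fit anything at all.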
And ultimately, as you hear at these events, it's looking at it as an asset. The data is a shared resource. Right, right. Okay, and again, with the exception of the, whether it's a Google or a Facebook, obviously, or the Ubers, the Airbnbs, et cetera, with the exception of those guys, I mean, most customers aren't there. They're still, the data is in silos. They've got myriad infrastructure. Your thoughts? I'm also seeing sort of a disconnect, right? Between the operationalizing team, the team that runs these codes or has a real business need for it, and then, sometimes you'll see corporations with research teams, and there's sort of a disconnect between what the researchers do and then what these operations or marketing, whatever domain it is, what they're doing in terms of day-to-day operation. So for instance, a researcher will look really deep into these algorithms. They may know a lot about deep learning in theory, right, in the theoretical world. They might publish a paper that's really interesting. But that application part, where it's actually being used every day, there's this difference there, where you really shouldn't have that difference, right? There should be more alignment. I think actually aligning those resources, I think companies are struggling with that. So Colin, we were talking off camera about RPA, robotic process automation. Where's the play for machine intelligence in RPA? Maybe first of all you could explain RPA. So Dave, RPA stands for robotic process automation. That's gonna enable you to grow and scale a digital workforce. Typically it's done on the cloud. So the way RPA and robotic process automation play into machine learning and data science is that it allows you to outsource business processes to compensate for the lack of human expertise that's available in the marketplace, because you need competency to enable the technology to take advantage of these new benefits coming into the market.
And when you start automating some of these processes, you can keep pace with the innovation in the marketplace and allow the human expertise to gradually grow into these new data science technologies. So I was mentioning some of the big guys before. Top five market cap companies: Google, Amazon, Apple, Facebook, Microsoft. All digital. Microsoft you can argue, but still pretty digital, pretty data oriented. My question is about closing that gap. In your view, can companies close that gap? How can they close that gap? Are you guys helping companies close that gap? It's a wide chasm, it seems. Thoughts? My thoughts on closing the chasm are about presenting the technology to the decision makers. What we've learned is that you don't know what you don't know. So it's impossible to find the new technologies if you don't have the vocabulary to even begin a simple search of these new technologies. And to close that gap, it really comes down to awareness: events like theCUBE, webinars, different educational opportunities that are available to line of business owners, directors, VPs of systems and services, to begin that awareness process, finding consultants, and to begin that pipeline enablement, to begin allowing the business to take advantage of and harness data science, machine learning, and what's coming. Yeah, so I think one of the things I've noticed is that there's a lot of information out there, like everyone has a webinar, everyone has the tutorials, but there's a lot of overlap, right? There aren't that many very sophisticated documents that you can find about how to implement it in real-world conditions. A lot of these machine learning tutorials that you find all tend to use the same card data set, which is hilarious because the data set's actually very, very small, right? And I know where it comes from, right? Just from having the expertise. But it's not something I'd ever use in the real world.
I don't find that that's the level of skill you need to be able to do any of these methodologies, but that's what's out there. And so there's a lot of information, but it's kind of at a rudimentary level. It's not really at that sophisticated level where you're gonna learn enough to deploy in real-world conditions. So I think one of the things I'm noticing is, with the technical teams, with these data science teams, machine learning teams, they're kind of using the same methodologies I used maybe 10 years ago, because the management, the people who manage these teams, are not technical enough. They're business people, but they don't understand how to guide them, how to explain, hey, maybe you shouldn't do that with your code because that's actually gonna cause a problem. You should use parallel coding, right? You should make sure everything's running in parallel so it computes faster. But if these younger teams are actually learning for the first time, they make the same mistakes you made 10 years ago. So I think what I'm noticing is that lack of leadership is partly one of the reasons, and also the assumption that a non-technical person can lead a technical team. So it's not just skill set on the worker level, if you will. It's also knowledge base on the decision maker level. That's a bad place to be, right? So how do you get in the door to a business like that? Obviously, and we've talked about this a little bit today, some companies say we're not digital companies, we sell widgets. Well, yeah, but you sell widgets and you need this to sell more widgets. And so how do you get in that door and talk about this problem that Jennifer just cited? You're signing the checks, man. You're going to have to get up to speed on this. Otherwise, you're going to have no checks to sign in three or five years. You're done. So I think that speaks to use cases.
And I think that, and what I'm actually seeing at customers, is that there is a disconnect in understanding between the executive teams and the lower-level technical teams on what the use case actually means to the business, right? Some of the use cases are operational in nature. Some of the use cases are data in nature. There's no real conformity on what the use case means across the organization, and that understanding isn't there. And so the CIOs, the CEOs, the CTOs think that, okay, we're going to achieve a certain level of capability if we do a variety of technological things. And the business is looking to effectively improve, or bring some efficiency to, some business process, because at each level within the organization, the understanding is at the level at which the discussions are being had. And so I'm in these meetings with senior executives, and we have lots of ideas on how we can bring efficiencies and some operational productivity with technology. And then we get in a meeting with the data stewards, and: what are these guys talking about? They don't understand what's going on at the data level and what data we have. And then that's where the data quality challenges come into the conversation. So I think that to close that chasm, we have to figure out who needs to be in the room to effectively help us build the right understanding around the use cases, and then bring the technology to those use cases, and actually see within the organization how we're affecting that. Sort of a change of questioning here. I want you guys to think about how capable we can make machines in the near term. Let's talk next decade, near term. Let's say next decade. How capable can we make machines, and are there limits to what we should do? That's a tough one.
Although you wanna go next decade, we're still faced with some of the challenges today in terms of, again, that adoption, the use case scenarios, and then what my colleagues are saying here about the various data challenges and DevOps and things. So there are a number of things that we have to overcome, but if we can get past those areas in the next decade, I don't think there's gonna be much of a limit, in my opinion, as to what the technology can do and what we can ask the machines to produce for us. And as Colin mentioned with RPA, I think that the capability is there, right? But can we ultimately, as humans, leverage that capability effectively? Yeah, so I get this question a lot. People are really worried about AI and robots taking over and all of that. And I go, well, let's think about an example. We've all been online, probably over the weekend, maybe it's three or four a.m., checking your bank account, and you get an error message that says your password is wrong. And we swear, I mean, I've been there where I'm like, no, no, my password's right. And it keeps saying that the password's wrong. Of course, then I change it, it's still wrong. Then the next day when I log in, I can log in, same password, because they didn't put a great error message there. They just defaulted to wrong password when it's probably a server that's down. So there are these basic sort of processes that we could be improving, which no one's improving. So you think in that example, how many customer service reps are gonna be contacted to try to address that? How many IT teams? So for every one of these bad technologies that are out there, or technologies that aren't being run efficiently or in a way that makes sense, you actually have maybe three people who are gonna be contacted to try to resolve an issue that actually maybe could have been avoided to begin with.
So I feel like it's optimistic to say that robots are gonna take over, because you probably need more people to put band-aids on bad technology and bad engineering, frankly. And I think that's the reality of it, right? Like, we had hoverboards, that'd be great. For a while we thought we did, right? But then we found out, oh, it's not quite hoverboards. I feel like that might be what happens with AI. Like, we might think we have it and then go, oh wait, but it's not really what we thought it was. So there are real limits, certainly in the near to mid to maybe even long term, that are imposed. But you're an optimist. Yeah, well, not so much with the AI, but everything else, sure. AI I'm a little bit like, well, it'd be great, but I'd like basic things to be taken care of every day. So I think that usefulness, right? The usefulness of technology is not something anyone's talking about. You're talking about this advancement, that advancement, things people don't understand, don't even know how to use in their life. Great as an idea. But what about useful things that we could actually use in our real life? So block and tackle first, and then put some reverses in later, if you will, to switch over to football. But we were talking about it earlier, just about basics, right? Fundamentals. Get your fundamentals right, and then you can complement that with supplementary technologies. Craig, Colin? Jen made some really good points, brought up some very good points. And so it has... Craig. Craig, so it's Craig. I'm sorry. Sorry. But 10 years out, Jen and Craig spoke to false positives. And false positives create a lot of inefficiency in businesses. So when you start using machine learning and AI 10 years from now, maybe there are reduced false positives, scored in real time, allowing teams not to have their time consumed, and the business resources consumed, trying to resolve false positives.
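Colin's point about the unpriced operational drag of false positives can be sketched as back-of-the-envelope math. Every number here is an assumption for illustration, not a figure from the panel:

```python
# A rough sketch of pricing false-positive drag. All counts and
# per-event costs below are hypothetical assumptions.
false_positives = 1_200      # alerts wrongly flagged per month
handlers_per_alert = 3       # reps / IT staff touched per alert
minutes_per_handler = 20     # time each person spends on it
loaded_cost_per_hour = 60.0  # fully loaded labor cost, USD

hours = false_positives * handlers_per_alert * minutes_per_handler / 60
monthly_drag = hours * loaded_cost_per_hour
print(f"{hours:.0f} staff-hours, ${monthly_drag:,.0f} per month")
# -> 1200 staff-hours, $72,000 per month
```

Even with modest assumptions, the cost is large enough to show why a business that never records it is undercounting what false positives consume.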
So these false positives have a business value that today some businesses might not be able to record. So in financial services, banks count money not lent. But in everyday business, a lot of businesses aren't counting the monetary consequences of false positives and the drag they have on their operational ability and capacity. I want to ask you guys about disruption. If you look at where the disruptions, the digital disruptions, have taken place: obviously retail, certainly advertising, certainly content businesses. There are some industries that haven't been highly disrupted: financial services, insurance. We were talking earlier about aerospace and defense, rather. Is any business, any industry, safe from digital disruption? There are. Certain industries are just highly regulated: healthcare, financial services, real estate, transactional law. These are extremely regulated businesses that are, I don't want to say susceptible to technology, but they can be disrupted at a basic level, operational efficiency, to make these business processes happen more rapidly, more accurately. So you guys buy that? There's some... Well, so... A little debate going here, if we could. So I work with the government, and the government's trying to change things. I feel like that's kind of a sign, because they tend to be a little bit slower than, say, other private industries, private companies. And they're actually, say, if they have data, they're trying to actually put it into a system, right? Meaning, like, if they have files. I think at some point I got contacted about putting in files that they found, like birth records, right? Marriage records that they found from, like, a hundred-plus years ago, and trying to put those into the system. By the way, I did look into it. There was no way to use AI for that, because there was no standardization across these files.
So they have all these files, like half a million files, but someone's probably gonna have to manually enter that in, right? The reality is, I think because there's a demand for having things be digital, right? We aren't likely to see a decrease in that. We're not gonna have one industry that goes, oh, our files aren't digital, probably because they also want to be digital, right? The companies themselves, the employees themselves, wanna see that change. So I think there's going to be this continuous move toward it, but then there's the question of, are we doing it better? Is it better than, say, having it on paper sometimes? Because sometimes, I don't know, I just feel like it's easier on paper than to have to look through my phone, look through the app. I mean, there are so many apps now, like, I can't find anything. I've got my index cards still, Jennifer. Dave's got his notebook. I'm not sure I want my ledger to be on paper, but anyway. So I think that's gonna be an interesting thing, when people take a step back and go, like, is this really better, right? Is this actually an improvement? Because I don't think all things are better digital. It's a great question. Will the world be a better, more prosperous place? Uncertain. Your thoughts? I think the competition is probably the driver as to who has to do this now and who's not safe. And the organizations that are heavily regulated or compliance driven can actually use that as the reasoning for not jumping into the barrel right now, and letting it happen in other areas first, watching the technology mature, and then... Let's wait. And let's wait, because that's traditionally how... Good strategy, in your opinion? It depends on the entity. But yeah, I think there's nothing wrong with being safe. There's nothing wrong with waiting for a variety of innovations to mature. What level of maturity, I think, is the perspective that's probably another discussion for another day. But I think that it's okay.
I don't think that everyone should jump in. Get some lessons learned, watch how the other guys do it. But I think that safety is in the eyes of the beholder. Some organizations are just competition fierce, and they need a competitive edge, and this is where they get it. When you say safety, do you mean safety in making decisions, or do you mean safety in protecting data? I mean, how are you defining safety? Safety in terms of when they need to launch and look at these newer technologies as a basis for change within the organization. But what about the other side of that point, that there's so much more data about so much more behavior, about so many more attitudes, and so on and so forth? And there are privacy issues and security issues and all that. So I mean, those are real challenges for any company, and becoming exponentially more important as more is at stake. So how do companies address that? That's got to be absolutely part of their equation as they decide what these future deployments are, because they're going to have great vast reams of data, but that's a lot of vulnerability too, isn't it? As vulnerable as it is, from an organizational standpoint, they're accustomed to it; these challenges aren't new, right? I mean, we still see data breaches. They're bigger, but we still occasionally see data breaches in organizations where we don't expect to see them. So I think that from, you know, from that perspective, it's the experiences of the organizations that determine the risks they want to take on, to a certain degree. And then based on those risks, and how they handle adversity within those risks, I think from an experience standpoint, they know ultimately how to handle it and get themselves to a place where they can regroup, figure out what happened, and then fix the issues. And then the others watch while these risk takers take on these types of scenarios.
So I want to underscore this whole disruption thing and ask... I know we don't have much time, we're going a little over, but I want to ask you to pull out your, you know, your Hubble telescopes. Maybe let's take a 20, 30 year view, so we're safe, because we know we're going to be wrong. But on a sort of scale of one to 10 of likelihood, high likelihood being 10, low being one, maybe sort of rapid fire: do you think large retail stores are going to mostly disappear? What do you guys think? Oh, I think the way that they're structured, the way that they interact with their customers, might change, but you're still going to need them, because there are going to be times where you need to buy something. So a six, seven, something like that. Is that kind of the consensus, or do you feel differently, Colin? I feel retail is going to be around, especially fashion, because certain people, and myself included, I need to try my clothes on. So you need a location to go to, a physical location, to actually feel the material, experience the material. So we kind of have a consensus there. It's probably no. How about driving? I was going to say, Amazon, I think, opened the bookstore, just saying. It's kind of funny, because, you know, they opened the bookstore. So, you know, I think what happens is people forget over time. They go, it's a new idea; it's not so much a new idea. I heard a rumor the other day that their next big acquisition was going to be, not Neiman Marcus, what's the other high-end retailer? Nordstrom. Nordstrom, yeah. And my wife said, bad idea, they'll ruin it. Will driving, or owning your own car, become an exception? Driving and owning your own car, 30 years from now, we're talking. 30 years, sure. I think the concept is there. I think that we're looking at that, right? IoT is moving us in that direction. 5G is around the corner. So I think the makings of it are there. So since I can dare to be wrong, yeah. We'll be on 10G by then anyway.
Automobiles really haven't been disrupted, the car industry, but you're forecasting, and I would tend to agree. You guys agree or no? Well, you think culturally, do I want to drive my own car? Yeah, I think people, well, so I think a couple of things. How well engineered is it? Because if it's badly engineered, people are not going to want to use it. For instance, there are people who could take public transportation. It's the same idea, right? Everything is autonomous. You'd have to fall in line, right? There's going to be some system, some order to it. And you might go, oh, I want it to be faster; I don't want to be in line with that autonomous vehicle. I want to get there faster, get there sooner. And there are people who want to have that control over their life, where they're not subject to things like schedules all the time and that sort of constraint. So I think if the engineering is bad, then you're going to have more problems, and people are probably going to go away from wanting to be autonomous. All right, Colin, one for you. Will robots, and maybe 3D printing, for example, RPA, will they reverse the trend toward offshore manufacturing? 30 years from now, yes. I think with robotic process automation, eventually you're going to be at your cubicle or your desk or whatever it is, and you're going to be able to print off the supplies. Do you guys think machines will make better diagnoses than doctors? Ooh. I'll take that. All right. I think yes, to a certain degree, because if you look at the problems with diagnosis right now, they miss it. I don't know how people, even 30 years from now, will be different from that perspective, where machines can look at quite a bit of data about a patient in split seconds and say, hey, the likelihood of this disease recurring is nil to none, because here's what I'm basing it on. I don't think doctors will be able to do that. Now, again, daring to be wrong. Don't tell your own doctor, either. All right. That's true.
I think it happens to, you know, we all know. Yeah, so I think it depends, right? So maybe 80%, some middle percentage, might be the case. I think extreme outliers, right? Maybe not so much, right? So you think about anything that's programmed into an algorithm. Someone probably identified that disease; a human being identified that as a disease, made that connection, and then it gets put into the algorithm. So I think what will happen is that, for the 20% that isn't being done well by machines, you'll have people who are more specialized in being able to identify the outlier cases from, say, the standard ones. You know, normally if you have certain symptoms, you have a cold, right? Those are kind of standard ones. If you have this weird sort of thing where there's a new variable, right? Environmental variables, for instance. You know, your environment can actually lead to you having cancer, right? So, you know, there are other factors, other than just your body and your health, that are gonna actually be important to think about when diagnosing someone. Colin, go ahead. I think machines aren't going to out-decision doctors. I think doctors are gonna work well with machine learning. For instance, there's a published document of Watson doing the research of a team of four in 10 minutes, where it normally takes a month. So those doctors, to bring up Jen and Craig's point, are gonna have more time to focus in on what the actual symptoms are, to resolve the outcome of patient care and patient services in a way that benefits humanity. I just wish, Dave, that you would have picked a shorter horizon than 30 years. I hope. 20, I feel good about our chances of seeing that. 30, I'm just not so sure. I mean, for the two old guys on the panel here. Well, it feels like a good sense: the next 10 years, not so much, but beyond 10 years, a lot's gonna change. Well, thank you all for joining us. Always enjoyed the discussions.
And Craig, Jennifer, and Colin, thanks for being here with us on theCUBE. We appreciate the time. Back with more here from New York after this. You're watching theCUBE.