Welcome, friends, to HPE Ezmeral's Analytics Unleashed. I couldn't be more excited to have you here today. We have a packed and informative agenda. It's going to give you not just a perspective on what HPE Ezmeral is and what it can do for your organization; you should also leave here with some insights and perspectives that will help you on your edge-to-cloud data journey in general. The lineup we have today is awesome. We have industry experts like Kirk Borne, who's going to talk about the shape this space will take; key customers and partners who are using Ezmeral technology as a fundamental part of their stack to solve really big, hairy, complex, real data problems; and the execs who are leading this effort, to help you understand the strategy and roadmap forward as well as give you a sneak peek into the new ISV ecosystem hosted in the Ezmeral marketplace. And finally, we have some live music being played in the form of three different demos. This is going to be a fun time, so do jump in and chat with us at any time, or engage with us on Twitter in real time. So grab some coffee, buckle up, and let's get going.

Getting data right is one of the top priorities for organizations looking to execute digital strategies. So right now we're going to dig into the challenges customers face when trying to deploy enterprise-wide data strategies. And with me to unpack this topic is Kirk Borne, Principal Data Scientist and Executive Advisor at Booz Allen Hamilton. Kirk, great to see you. Thank you, sir, for coming on the program.

Great to be here, Dave.

So hey, enterprise-scale data science and engineering initiatives are non-trivial. What do you see as some of the challenges in scaling data science and data engineering ops?

Well, one of the first challenges is just getting it out of the sandbox, because so many organizations say, let's do cool things with data, but how do you take it out of that sort of play phase into an operational phase?
And so being able to do that is one of the biggest challenges. Then enabling it for many different use cases creates an enormous challenge of its own: do you replicate the technology and the team for each individual use case, or can you unify teams and technologies to satisfy all possible use cases? Those are really big challenges for companies and organizations everywhere to think about.

Well, what about the idea of industrializing those data operations? I mean, what does that mean to you? Is that a security connotation, a compliance one? How do you think about it?

It's actually all of those. Industrializing, to me, means not making it a one-off but making it a reproducible, solid, risk-compliant system that can be reproduced many different times, using the same infrastructure and the same analytic tools and techniques for many different use cases. So you don't have to reinvent the wheel, or reinvent the car so to speak, every time you need a different type of vehicle. Whether you're building a car or a truck or a race car, there are some fundamental principles common to all of them, and that's what industrialization is. It includes security, compliance with regulations, and all of those things, but it also means being able to scale it out to new opportunities beyond the ones you dreamed of when you first invented the thing.

You know, data by its very nature, as you well know, is distributed. But, and you've been at this a while, for years we've been trying to shove everything into a monolithic architecture and harden infrastructure around it, and in many organizations that's become a blocker to actually getting stuff done. So how are you seeing things like the edge emerge? How do you think about the edge? How do you see it evolving, and how do you think customers should be dealing with edge data?

Well, it's really kind of interesting.
I spent many years at NASA working on data systems, and back in those days the idea was that you would put all the data in a big data center, and then individual scientists would retrieve that data and do their analysis on their local computer. You might say that's sort of like edge analytics, so to speak, because they're doing analytics at their home computer, but that's not what edge means. It means actually doing the analytics, the insights discovery, at the point of data collection. And that's really real-time business decision making. You don't bring the data back and then try to figure out some time in the future what to do.

I think an autonomous vehicle is a good example of why you don't want to do that. If you collect data from all the cameras and radars and LiDARs on a self-driving car and move that data back to a data cloud while the car is driving down the street, and let's say a child walks in front of the car: you send all the data back, it computes, does some object recognition and pattern detection, and 10 minutes later it sends a message to the car, hey, you need to put your brakes on. Well, it's a little late at that point. So you need to make those insight discoveries, those pattern discoveries, and hence the proper decisions from the patterns in the data, at the point of data collection. That's data analytics at the edge.

Now, yes, you can bring the data back to a central cloud or a distributed cloud; it almost doesn't even matter. If your data is distributed so that any use case and any data scientist or analytic team in the business can access it, then what you really have is a data mesh or a data fabric that makes it accessible at the point you need it, whether that's at the edge or in some static post-event processing. For example, typical quarterly business reporting takes a long look at your last three months of business.
Well, that's fine in that use case, but you can't do that for a lot of other real-time analytic decision making.

That's interesting. It sounds like you think of the edge not as a place, but as the first opportunity, if you will, to process the data where it needs to be low latency. Is that a good way to think about it?

Yeah, absolutely. It's the low latency that really matters. Sometimes we think we're going to solve that with things like 5G networks; we're going to be able to send data really fast across the wire. But again, that self-driving car is yet another example, because what if, all of a sudden, the network drops out? You still need to make the right decision with the network not even being there.

Yeah, that darn speed-of-light problem. And so you use this term data mesh or data fabric. Double-click on that. What do you mean by that?

Well, for me, it's a unified way of thinking about all your data. When I think of mesh, I think of weaving on a loom. For example, when you're creating a blanket or a cloth, you do all that cross-layering of the different threads. In the same way, different use cases and different applications and different techniques can make use of this one fabric, no matter where it is in the business, whether at the edge or back at the office: one unified fabric with a global namespace, so anyone can access the data they need, uniformly, no matter where they're using it. It's a way of unifying all the data and use cases in a sort of virtual environment where you no longer need to worry about what the actual file name is or what actual server the thing is on. You can just use it for whatever use case you have. And I think it helps enterprises reach a stage which I like to call the self-driving enterprise.

Okay, so it's modeled after the self-driving car.
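Kirk's global-namespace point can be sketched in a few lines of Python. This is a toy illustration, not the Ezmeral API; every name in it is hypothetical. The idea it shows: callers ask for data by one logical path, and the fabric resolves where the bytes actually live, so no server names or protocols leak into the caller's code.

```python
# Toy sketch of a data-fabric "global namespace": callers use one logical
# path and never see which backend (edge node, data center, cloud) holds
# the data. All names are illustrative, not a real product API.

class DataFabric:
    def __init__(self):
        # logical path -> (physical location, payload); a real fabric would
        # use a distributed metadata service, not an in-memory dict
        self._catalog = {}

    def register(self, logical_path, location, payload):
        self._catalog[logical_path] = (location, payload)

    def read(self, logical_path):
        # Resolution is transparent: the caller asks by logical name only.
        _location, payload = self._catalog[logical_path]
        return payload

fabric = DataFabric()
fabric.register("/sensors/plant7/vibration", "edge-gateway-07", [0.2, 0.9, 0.4])
fabric.register("/finance/q3/revenue", "core-dc-1", {"total": 1.2e6})

# Same access pattern whether the data sits at the edge or in a data center:
edge_data = fabric.read("/sensors/plant7/vibration")
core_data = fabric.read("/finance/q3/revenue")
```

The point of the sketch is the interface, not the storage: both reads look identical even though the registered "locations" differ, which is the uniformity Kirk describes.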
So in the self-driving enterprise, the business leaders, and the business itself, need to make decisions, oftentimes in real time. And so you need predictive modeling and cognitive awareness of the context of what's going on; all these different data sources enable you to do those things with data. For example, any kind of decision in a business, any kind of decision in life, I would say, is a prediction. You say to yourself: if I do this, such and such will happen; if I do that, this other thing will happen. So a decision is always based upon a prediction about outcomes, and you want to optimize that outcome. Both predictive and prescriptive analytics need to happen in that same stream of data, not statically afterwards. The self-driving enterprise is enabled by having access to data wherever and whenever you need it, and that's what the data fabric and data mesh provide for you, at least in my opinion.

Well, so carrying that analogy of the self-driving vehicle: you're abstracting that complexity away in this metadata layer that understands whether the data is on-prem or in the public cloud or across clouds or at the edge, and where the best place is to process it. Does it make sense to move it or not? Ideally, I don't have to. Is that how you're thinking about it? Is that why we need this notion of a data fabric?

Right, it really abstracts away all the complexity that the IT aspects of the job would require. Not every person in the business is going to have that familiarity with the servers and the access protocols and all kinds of IT-related things, so you're abstracting that away. That's in some sense what containers do: containers abstract away all the information about servers, connectivity, protocols, and all that kind of thing. You just want to deliver some data to an analytic module that delivers an insight or a prediction.
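Kirk's "a decision is a prediction" framing can be made concrete with a minimal sketch: score each candidate action with a predictive model, then let the prescriptive step pick the action with the best predicted outcome. The function names and the toy model below are purely illustrative assumptions, not anyone's product or method.

```python
# Toy sketch of "a decision is a prediction": predict the outcome of each
# candidate action, then prescribe the action with the best prediction.
# The model and names are hypothetical, for illustration only.

def predict_outcome(action, context):
    # Toy predictive model: expected value of taking `action` in `context`.
    effects = {"discount": 0.8, "hold_price": 0.5, "bundle": 0.7}
    return effects[action] * context["demand"]

def decide(actions, context):
    # Prescriptive step: optimize over the predicted outcomes.
    return max(actions, key=lambda a: predict_outcome(a, context))

context = {"demand": 120.0}
best_action = decide(["discount", "hold_price", "bundle"], context)
```

Here `decide` plays the role of the "analytic module": the caller hands over data and candidate actions and gets a decision back, with no infrastructure detail in the interface.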
I don't need to think about all those other things. That abstraction really makes it empowering for the entire organization.

You actually talk a lot about data democratization and analytics democratization. This really gives power to every person in the organization to do things without becoming an IT expert. So the last question we have time for: it sounds like, Kirk, the next 10 years of data are not going to be like the last 10 years. It'll be quite different.

I think so. First of all, we're going to be focused way more on the why question: why are we doing this stuff? The more data we collect, the more we need to know why we're collecting it. And one of the phrases I've seen a lot in the past year, which I think is going to grow in importance over the next 10 years, is observability. Observability, to me, is not the same as monitoring. Some people say monitoring is what we do, and what I like to say is: yeah, that's what you do, but why you do it is observability. You have to have a strategy. Why am I collecting this data? Why am I collecting it here? Why am I collecting it at this time resolution? Getting focused on those why questions lets you create targeted analytic solutions for all kinds of different business problems. And that really focuses on small data. I think the latest Gartner data and analytics trends report said we can expect a lot more focus on small data in the near future.

Kirk Borne, you're a dot connector. Thanks so much for coming on theCUBE and being a part of the program.

My pleasure.