Live from Orlando, Florida, it's theCUBE. Covering Pentaho World 2017. Brought to you by Hitachi Vantara. Welcome back to sunny Orlando, everybody. This is theCUBE, the leader in live tech coverage. My name is Dave Vellante, and this is Pentaho World. Hashtag PWorld17. Don DeLoach is here. He's the co-chair of the Midwest IoT Council. Thanks so much for coming on theCUBE. Good to be here. Yeah, so you've just written a new book. I've got it right here, hot off the presses, in my hands: The Future of IoT: Leveraging the Shift to a Data-Centric World. Can you see that okay? All right, great, how's that? You got that? Well, congratulations on getting the book done. Thanks. It's like the closest a male can come to having a baby, I guess. But so, it's fantastic. Let's start with the premise of the book. Why'd you write it? Sure. I'll give you the short version, because that in and of itself could go on forever. I'm a data guy by background, and for the last five or six years, I've really been passionate about IoT. And the two converged with a focus on data, but it was kind of ahead of where most people in IoT were, because they were mostly focused on sensor technology and communications, and to a limited extent, the workflow. So I developed this thesis around where I thought the market was going to go, and I would have this conversation over and over and over, but it wasn't really sticking. And so I decided maybe I should write a book to talk about it, and it took me forever to write, because fundamentally, I didn't know what I was doing. Fortunately, I was eventually able to bring on a couple of co-authors, and collectively we got the book written, and we published it in May of this year. And give us the premise. So the central thesis of the book is that the market is going to shift from a focus on IoT-enabled products, like a smart refrigerator or a deep fat fryer or a turbine in a factory or a power plant or whatever.
It's going to shift from IoT-enabled products to the IoT-enabled enterprise. If you look at the Harvard Business Review article that Jim Heppelmann and Michael Porter did in 2014, they talked about the progression from products to smart products to smart connected products to product systems to a system of systems. We've largely been focused on smart connected products, or as I would call them, IoT-enabled products. And most of the technology vendors have focused their efforts on helping the lighting vendor or the refrigerator vendor or whatever IoT-enable their product. But when that moves to mass adoption of IoT, if you're the CIO or the CEO of C-Land or Disney or Walmart or whatever, you're not going to want to be a company that has 100,000 IoT-enabled products. You're going to want to be an IoT-enabled company. And the difference is really all around data primacy and how that data is treated. So right now, most of the data goes from the IoT-enabled product to the product provider, and they tell you what data you can get. But if you look at the progression, it's almost mathematically impossible that that is sustainable. Because organizations are going to want to take, let's just say we're talking about a fast food restaurant, they're going to want to take the data from the deep fat fryer and the data from the refrigerator or the shake machine or the lighting system or whatever, and they're going to want to look at it in the context of the other data. And they're also going to want to combine it with their point of sale or crew scheduling or inventory. And then if they're smart, they'll start to even pull in external data like pedestrian traffic or street traffic or micro-weather or whatever, and they'll create a much richer signature. And then it comes down to governance, where I want to create this enriched data set and then propagate it to the right constituent at the right time in the right way.
So you still give the product provider back the data that they want. And there's nothing that precludes you from doing that. And you give the deep fat fryer provider the data that they want. But you give your regional and corporate offices a different view of the same data. And you give the FDA or your supply chain partner their views; it's still the same atomic data. But what you're doing is separating the creation of the data from the consumption of the data. And that's where you gain maximum leverage. And that's really the thesis of the book. It's data, a great summary by the way. So it's data in context. And the context of the deep fat fryer is going to be different than the workflow within that retail operation. Yeah, that's right. And again, this is where the product providers have initially kind of pushed back, because they feel like they have stickiness and loyalty that's bred out of that link. But first of all, that's going to change. So if you're Walmart or a major concern and you say, I'm going to do a lighting RFP, and there's 10 vendors that say, hey, we want to compete for this, and six of them will allow Walmart to control the data and four say, no, we have to control the data, their list just went to six. They're just not going to put up with that. Yeah, absolutely. That's right. So if the product providers are smart, they're going to get ahead of this and say, look, I get where the market's going. We're going to need to give you control of the data. But I'm going to ask for a contract that says, I'm going to get the data I'm already getting, because I need to get that, and you want me to get that. But number two, I'm going to recognize that Walmart can give me my data back, but enriched and contextualized, so I get better data back. So everybody can win, but it's all about the right architecture. Well, the product guys kind of have the Trojan horse strategy of getting in when nobody was really looking. That's right. And okay, so they've gotten there.
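That propagation pattern, one atomic data set feeding many governed views, can be sketched in a few lines. This is a hypothetical illustration only, not code from the book or any real product; every function, field, and constituent name here is invented for the example.

```python
# Hypothetical sketch of the "first receiver" idea: the enterprise
# ingests atomic IoT data once, enriches it with business context,
# then propagates a DIFFERENT view of the SAME data to each
# constituent. All names and fields below are illustrative.

def enrich(reading, context):
    """Combine an atomic sensor reading with enterprise context."""
    enriched = dict(reading)
    enriched.update(context)  # e.g. point-of-sale, crew scheduling, weather
    return enriched

def make_views(enriched):
    """Propagate role-specific views of the same atomic data."""
    return {
        # The product provider still gets the telemetry it needs.
        "product_provider": {k: enriched[k]
                             for k in ("device_id", "temp_c", "duty_cycle")},
        # Corporate sees the same data in business context.
        "corporate": {k: enriched[k]
                      for k in ("store_id", "temp_c", "sales_per_hour")},
        # A regulator gets only what compliance requires.
        "regulator": {k: enriched[k]
                      for k in ("store_id", "temp_c")},
    }

reading = {"device_id": "fryer-7", "temp_c": 182.5, "duty_cycle": 0.61}
context = {"store_id": "store-114", "sales_per_hour": 240}
views = make_views(enrich(reading, context))
```

The point of the sketch is that creation and consumption are decoupled: the atomic reading is captured once, and governance decides who sees which projection of it.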
Do you envision, Don, a point at which the Walmarts might say, no, that's our data and you don't get it? Or is there going to be a quid pro quo? Not really. And here's why. The argument that the product providers have made all along, almost in a condescending way sometimes, although not intentionally condescending, has been, look, we're selling you this deep fat fryer for your fast food restaurant. You say you want the data, but we have a team of people who are experts in this. Leave that to us. We'll analyze the data and we'll give you back what you need. Now, there's some truth to the fact that they should know their products better than anybody. And if I'm the fast food chain, I want them to get that data so that they can continually analyze and help me do my job better. They just don't have to get that data at my expense. There are ways to cooperatively work this, but again, it comes back to the right architecture. So what we call the first receiver is in essence setting up an abstraction close to the point of ingestion of all this data, upon which it's cleansed, enriched, and then propagated to the right constituent at the right time, in the right way. And by the way, I would add, with the right security considerations and with the right data privacy considerations. Because if you look around the market now, with things like GDPR in Europe and what we've seen in the US, just in the wake of the elections and everything around how data is treated, privacy concerns are going to be huge. So if you don't know how to treat the data in the context of how it needs to be leveraged, you're going to lose that leverage of the data. Well, plus the widget guys are going to say, look, we have to do predictive maintenance on those devices, and you want us to do that. Okay, they say follow the money; let's follow the data. So what does the data flow look like in your mind? You've got these edge devices. Yep. Physical or virtual? It doesn't have to be a physical edge.
Although in a lot of cases there are good reasons why you'd want a physical edge, there's nothing technologically that says you have to have one. Elaborate on that. What do you mean by virtual edge? Sure, so let's say I have a server inside a retail outlet, and it's collecting all of my IoT data, consolidating it, instantiating or persisting it into a data store, and then propagating it to a variety of constituents. That would be creating the first receiver at the physical edge. There's nothing that says that that edge device can't grab that data but then persist it in a distributed Amazon cloud instance or a Rackspace instance or whatever. It doesn't actually need to be persisted physically on the edge, but there's no reason it can't be either. Okay, so I understand that now. So the guys at Wikibon, which is a sort of sister company of theCUBE, have envisioned this three-tier data model where you've got the devices at the edge, where real-time activity is going on, real-time analytics, and then you've got this sort of aggregation point, I guess call it a gateway, and that's, as I say, an aggregation of all these edge devices. And then you've got the cloud, where the heavy modeling is done, and that could be your private cloud or your public cloud. So does that three-tier model make sense to you? Yeah, so what you're describing as the first tier is actually the sensor layer. The gateway layer that you're describing would, in the book, be characterized as the first receiver.
It's basically an edge tier that is augmented to persist and enrich the data and then apply the proper governance to it. But what I would argue is that in reality, I mean, your reference architecture is spot on, but if you actually take it one step further, it's actually an n-tiered architecture, because there's no reason why the data doesn't go from the 10 franchise stores to the regional headquarters to the country headquarters to the corporate headquarters, and every step along the way, including the edge, you're going to see certain types of analytics and computational work done. I'll put a plug in for my friends at Hitachi Lumada on this. There are like 700 horizontal IoT platforms out there. There aren't going to be 700 winners. There are going to be probably eight to 10, and that's only because the different specific verticals will provide for more winners than there would be if it was just one, like a search engine. But the winners are going to have to have an extensible architecture that will ultimately allow enterprises to do the very things I'm talking about doing. And so there are a number out there, but one of the things that Rob Tiffany, who's the CTO of Lumada, and his team have a really good handle on is an architecture that is really plausible for accomplishing this as the market migrates into the future. And that architecture has got to be very flexible, not just elastic; sometimes we use the word plastic, plasticity being the ability to go in any direction. Well, sure, I mean, up to and including the use of digital twins and avatars and the logic that goes along with that, and the ability to spin something up and spin something down gives you that flexibility that you as an enterprise need, and the larger the enterprise, the more important that becomes. How much of the data at that edge, Don, do you think will be persisted? Two-part question: is it all going to be persisted, and isn't that too expensive? Is it necessary to persist all that data?
Well, no. So this is where you'll hear the notion of data exhaust, and what that really means is, let's just say I'm instrumenting every room in this hotel, and each room has six different sensors in it, and I'm taking a reading once a second. The ratio of inconsequential to consequential data is probably going to be over 99 to one. So it doesn't really make sense to persist that data, and it sure as hell doesn't make sense to take that data and push it into a cloud, where I spend more to reduce the value of the payload. That's just dumb. But what will happen is two things. One, I think people will see the value in locally persisting the data that has value, the consequential data, and doing that in a way that's stored at least for some period of time, so that you can run the type of edge analytics that might benefit from having that persisted store. The other thing that I think will happen, and I talk a little bit about this in the book, is this whole notion where, when we get to the volumes of data that we really talk about for where IoT will go by like 2025, it's going to push the physical limitations of how we can accommodate it. So people will begin to use techniques like developing statistical metadata models, a highly accurate metadata representation of the entirety of the dataset, but in probably about 1% of the space, that's queryable and suitable for machine learning. It's going to enable you to do what you just physically couldn't do before. So that's a little bit into the future, but there are people doing some fabulous work on that right now, and it'll creep into the overall lexicon over time. Is that a lightweight digital twin that gives you substantially the same insight? It could augment the digital twin in ways that allow you to stand up digital twins where you might not be able to before.
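The edge-side filtering Don describes, keeping only the consequential fraction of the readings rather than shipping all the exhaust to the cloud, reduces in the simplest case to a band filter. This is a minimal sketch under assumed names and thresholds, not anything from the book; a real deployment would use per-sensor baselines rather than one fixed band.

```python
# Minimal sketch of edge-side filtering: persist only "consequential"
# readings (here, readings outside a normal operating band) instead of
# pushing all raw data upstream. The band limits and the field names
# are assumptions made up for this example.

def is_consequential(reading, lo=18.0, hi=26.0):
    """A reading matters if it leaves the expected operating band."""
    return not (lo <= reading["value"] <= hi)

def filter_for_persistence(readings):
    """Keep only the readings worth storing at the edge."""
    return [r for r in readings if is_consequential(r)]

# Five once-a-second temperature readings from one hotel-room sensor;
# only the anomalous 35.4 reading should survive the filter.
readings = [{"sensor": "room-12/temp", "value": v}
            for v in (21.0, 21.1, 35.4, 20.9, 21.2)]
kept = filter_for_persistence(readings)
```

With the 99-to-one ratio mentioned in the conversation, a filter like this is what makes local persistence, and the edge analytics that rely on it, affordable.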
The example that most people would know about is in the Apache ecosystem, where there are tool sets like SnappyData that are basically doing approximation, but they're doing it via sampling. That is a step in that direction, but what you're looking for is very high value approximation that doesn't lose the outliers. In IoT, one of the things you're normally looking for is where am I going to pick up on anomalous behavior? Well, if I'm using a sample set and I'm only taking 15%, by definition I'm going to lose a lot of that anomalous behavior. So it has to be a holistic representation of the data. But what happens is that data is transformed into statistics that can be queried as if it was the atomic dataset, and what you're getting is a very high value approximation in a fraction of the space and time and resources. Okay, but that's not sampling, you're saying? No, it's statistical metadata. My last company had developed this thing that we called approximate query, and it was based on that exact set of patents around the formation of a statistical metadata model. It just so happens it's absolutely suited for where IoT is going. IoT isn't really there yet. People are still trying to figure out the edge in its most basic forms, but the sheer weight of the data and the progression of the market is going to force people to be innovative in how they look at some of these things. Just like if you look at things like privacy: right now people think in terms of anonymization, and that's basically delinking data contextually, where I'm going to effectively lose the linkages to the context in order to conform with data privacy. But there are techniques, if you look at GDPR, within certain safe harbors that allow you to pseudonymize the data, where you can actually relink it under certain conditions, and there are some smart people out there solving these problems. That's where the market's going to go.
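The distinction Don draws between sampling and statistical metadata can be illustrated loosely: instead of keeping 15% of the rows and hoping the anomalies land in the sample, summarize the entire stream into compact statistics and explicitly retain the outliers. This is a rough analogy sketched for illustration, not the patented approximate-query technique he refers to; the z-score cutoff and structure are assumptions.

```python
# Loose illustration of "statistical metadata" versus sampling:
# summarize ALL values into a small, queryable record, and keep the
# outliers explicitly so anomalous behavior is not lost the way it
# would be with a partial sample. The 3-sigma cutoff is an assumption.

import math

def summarize(values, z_cutoff=3.0):
    """Full-population summary plus retained outliers."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n  # population variance
    std = math.sqrt(var)
    outliers = [v for v in values
                if std > 0 and abs(v - mean) / std > z_cutoff]
    # This record is a tiny stand-in for the atomic dataset.
    return {"count": n, "mean": mean, "std": std,
            "min": min(values), "max": max(values),
            "outliers": outliers}

# One anomaly hidden in a thousand readings: a 15% sample would
# usually miss it, but the summary retains it.
stream = [20.0] * 999 + [95.0]
meta = summarize(stream)
```

The real techniques Don alludes to build far richer, queryable models than this; the sketch only shows why a whole-population summary can preserve anomalies that sampling discards.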
It's just going to get there over time. And what I would also add to this equation is that at the end of the day, right now, the concepts that are in the book about the first receiver and the abstraction of the creation of the data from the consumption of the data, look, it's a pretty basic thing, but it's the type of shift that is going to be required for enterprises to truly leverage the data. As for statistical metadata and pseudonymization, pseudonymization will come before the statistical metadata, but the market forces are going to drive more and more into those areas. You've got to walk before you run. Right now most people still have silos, which is interesting, because when you think about the whole notion of the internet of things, it implies this exploitation of understanding the state of physical assets in a very broad-based environment, and yet the funny thing is most IoT devices are silos that emulate M2M, sort of peer-to-peer networks just using the internet as a communication vehicle. But that'll change. Right, and that's really, again, back to the premise of the book. That's right. We're going from these individual products, where all the data is locked into the product silo, to this digital fabric that has an enterprise context, not a product context.
That's right. And if you go to the tool sets that Pentaho offers, the analytic tool sets, let's just say now that I've got this rich data set, assuming I'm following basic architectural principles so that I can leverage the maximum amount of data, that now gives me the ability to use these types of tool sets to do far better operational analytics, to know what's going on; far better forensic and investigative analytics, to mine through the data and do root cause analysis; far better predictive and prescriptive analytics, to figure out what will go on; and ultimately to feed the machine learning algorithms to get to, in essence, the living organism, the adaptive systems that are continuously changing and adapting to circumstances. And that's kind of the holy grail. You mentioned Hitachi Vantara before. I'm curious what your thoughts are on Hitachi. Two years ago we saw the acquisition and said, okay, now what? And on paper it sounded good, and now it starts to come together. It starts to make more sense. Storage is going to the cloud, HDS is saying all right, we've got this Hitachi relationship. But what do you make of that? How do you assess it, and where do you see it going? Yeah, first of all, I actually think the moves that they've made are good, and I would not say that if I didn't think it; I'd just find a politically correct way not to say it. But I do think it's good. They created the Hitachi Insight Group about a year and a half ago, and now that's been folded into Hitachi Vantara alongside HDS and Pentaho. And I think that it's a fairly logical set of elements coming together. I think they're going down the right path. In full disclosure, I worked for Hitachi Data Systems from '91 till '94, so it's not like I'm a recent employee of theirs. That's 25 years ago.
But my experience with Hitachi corporate and the way they approach things has been unlike a lot of really super large companies, who may be super large but may not be the best engineers or may not always get everything done so well. Hitachi's a really formidable organization, and I think what they're doing with Pentaho and HDS and the Insight Group, and specifically Lumada, is well thought out, and I'm optimistic about where they're going. And by the way, they won't be the only winner in the equation. There are going to be eight or nine different key players, but I would not short them whatsoever. I have high hopes for them. The TAM is enormous. Hitachi normally gets to where it wants to go eventually; it's a very thoughtful company. I've been watching them for 30 years. To a lot of people, the Pentaho and Insight Group plays make a lot of sense. And then HDS, you used to work for HDS, a lot of infrastructure still, a lot of hardware. But, a relationship with Hitachi Limited that is, you know... Well, here's where I think that's important. Where do you see that third piece of the stool? So, this is where there are a few companies that have unique advantages, with Hitachi being one of them. Because if you think about IoT, IoT is the intersection of information technology and operational technology. So it's one thing to say, I know how to build a database or I can build machine learning algorithms or whatever. It's another thing to say, I know how to build trains or CAT scans or smart city lighting systems. And the domain expertise married with the technology delivers a set of capabilities that you can't match without that domain expertise. And I mean, even if you just reduce it down to artificial intelligence and machine learning, you can get an expert ML or AI guy, and they're only as good as the limits of their domain expertise.
So that's why, and again, that's why I go back to, you know, the comparison to search engines, where there's Google and maybe Yahoo. There are probably going to be more platform winners here, because the vertical expertise is going to be very, very important, but there are not going to be 700 of them. And Hitachi has an advantage that they bring to the table, because they have very, very deep roots in energy, in medical equipment, in transportation. All of that will manifest itself in what they're doing in a big way, I think. Okay, but a lot of the things that you described, and help me understand this, are Hitachi Limited. Now, of course, Hitachi Data Systems started as, remember, National Advanced Systems, which was a distribution arm for Hitachi IT products. Good for you. How many people remember that? I'm old. So like I said, I have a 30-year history with this company. Do you foresee that, and by the way, interestingly, HDS was often criticized back when you were working there as, yeah, still just a distribution arm. But in the last decade, HDS has become much more of a contributor to the innovation and the product strategy and so forth. Right. Having said that, it seems to me advantageous if some of those things that you discussed, the trains, the medical equipment, can start flowing back through HDS. I'm not sure if that's explicitly the plan. I didn't necessarily hear that, but it sort of has to, right? Well, you know, I'm not privy to those discussions, so it would be conjecture on my part. Let's opine, but right, doesn't that make sense? It makes perfect sense. Because, I mean, HDS for years was just a storage silo. Right. And then storage became a very uninteresting business, and credit to Hitachi for pivoting. But it seems to me that they could really, and they probably have a plan; I had Brian Householder on earlier, and I wish I had explored this more with him.
But it just seems, the question for them is, okay, how are you going to tap those really diverse businesses? I mean, it's a business like a GE or a Siemens; it's very broad-based. Well, again, conjecture on my part, but one way I would do it would be to start using Lumada in the various operations, the domain-specific operations around Hitachi. Whether they plan to do that or not, I'm not sure. I've heard that they probably will. That's a data play, obviously, right? Well, it's a platform play. Yeah, absolutely. And it's an enabling technology that should augment what's already going on in the various elements of Hitachi. Again, this is conjecture on my part, but you asked, so let's just go with this. We're riffin'. I would say that makes a lot of sense. I'd be surprised if they don't do that. And I think in the process of doing that, you start to cross-pollinate that expertise, which gives you a unique advantage. I mean, it goes back to, if you have unique advantages, you can choose to exploit them or not. Very few companies have the set of unique advantages that somebody like Hitachi has in terms of their engineering and massive reach into so many areas. Hitachi, GE, Siemens, these are companies that have big reach, to the extent that they exploit it or not. One of the things about Hitachi that's different from almost anybody, though, is they have all of this domain expertise, but they've also been in the technology-specific business for a long time, making computers. So they actually already have the internal expertise to cross-pollinate, but whether they do it or not, time will tell. Well, it's interesting to watch the big whales, the horses on the track, if you will. Certainly GE's made a lot of noise, like, okay, we're a software company. And now they're saying, wow, that's not so easy. And again, I'm sanguine about GE. I think eventually they'll get there. And then you see IBM's got their sort of IoT division.
They're bringing in people. Another company with a lot of IT expertise, not a lot of OT expertise. And then you see Hitachi, who's actually got both. Siemens I don't know as well, but presumably they're more OT than IT. And so you would think that if you had to evaluate the companies' positions, Hitachi's in a unique position; they certainly have a lot of software. We'll see if they can leverage that in the data play. Obviously Pentaho is a key piece of that. One would assume, yeah. Final thoughts? Sure. No, I mean, again, I'm very optimistic about their future. I think very highly of the people I know inside who I think are playing a role here. You know, it's not like there aren't people at GE that I think highly of. But listen, San Ramon was something that was spun up recently; Hitachi's been doing this for years and years. So different players have different capabilities, but Hitachi seems to have a holistic set of capabilities that they can bring together. And to date, I've been very impressed with how they've been going about it, especially with the architecture that they're bringing to bear with Lumada. Okay, the book is The Future of IoT: Leveraging the Shift to a Data-Centric World. Don DeLoach, and you had a co-author here as well. I have two co-authors. One is Wael Elrifai from Pentaho, Hitachi Vantara, and the other is Emil Berthelsen, a Gartner analyst who was with Machina Research before Gartner acquired them, and Emil has stayed on. Both of them are great guys, and we wouldn't have this book if it weren't for the three of us together. I never would have pulled it off on my own, so it's a collective work for sure. Don DeLoach, great having you on theCUBE. Thanks very much for coming on. All right, keep it right there, everybody. We'll be back. This is Pentaho World 2017, and this is theCUBE. Right back.