from San Jose. It's theCUBE, presenting Big Data Silicon Valley, brought to you by SiliconANGLE Media and its ecosystem partners. Welcome back to theCUBE. We are live in San Jose at Forager Eatery, a really cool place down the street from the Strata Data Conference. This is our 10th big data event. We call this Big Data SV. We've done five here, five in New York, and this is our day one of coverage. I'm Lisa Martin with my co-host George Gilbert, and we're joined by a CUBE alumnus, Jacque Istok, the head of data at Pivotal. Welcome back to theCUBE, Jacque. Thank you, it's great to be here. So just recently, Pivotal announced the GA of your Kubernetes-based Pivotal Container Service, PKS, following the initial beta that you released last year. Tell us about that. What's the main idea behind PKS? So as we were talking about earlier, we've had this opinionated platform as a service for the last couple of years. It's taken off, but it really requires a very specific methodology for deploying microservices and kind of next-gen applications. And what we've seen with the groundswell behind Kubernetes is a very seamless way where we can run not just our opinionated applications, but any applications, leveraging Kubernetes. In addition, it actually allows us to, again, kind of have an opinionated way to work with stateful data, if you will. And so what you'll see is, two of the main things that we have going on, if you look at both of those products, they're all managed by a thing that we call BOSH. And BOSH allows for not just ease of installation, but also the actual operation of the entire platform. And so what we're seeing is this ability to do day-two operations around not just the apps, not just the platform, but also the data products that run within it. And you'll see later this year, as we continue to evolve, our data products running on top of either the PKS product or the PCF product.
Quick question for you, Jacque, before George jumps in. You talked about some of the technology benefits and the reasoning for that. From a customer perspective, what are some of the key benefits that you've designed this for, or challenges to solve? So I'd say the key benefits, one is convenience and ease of installation and operationalization. Kubernetes seems to have basically become the standard for being able to deploy containers, whether it's on-prem or off-prem. And having an enterprise solution to do that is something that customers are actually really looking toward. In fact, we had sold about a dozen of these products even before it was GA. There's so much excitement around it. But beyond that, I think we've been really focused on this idea of digital transformation. So Pivotal's whole talk track really is changing how companies build software. And I think the introduction of PKS really takes us to the next level, which is that there is no digital transformation without data. And basically Kubernetes and PKS allow us to implement that and perform for our customers. This is really a facilitator of a company's digital transformation journey. Correct, in a very easy and convenient way. And I think, whether it's our generation or what's going on in just technology, everybody is so focused on convenience. Push button, I just want it to work. I don't want to have to dig into the details. So this picks up on a theme we've been sort of pounding on for a couple of years on our side, which was that the infrastructure was too hard to stand up and operate. But now that we're beginning to solve some of those problems, talk about some of the use cases. Let's pick GE, because that's a flagship customer. Start with maybe some of the big outcomes, the big business outcomes they're shooting for, and then how the Pivotal products map into that. Sure, so there's a lot of use cases. Obviously GE is both a large organization as well as an investor in Pivotal.
A lot of different things we could talk about. One that comes to mind out of the gate is, we've got a data suite that we sell in addition to PKS, in addition to PCF. And within that data suite are a couple of products, Greenplum being one of them. Greenplum is this open source, MPP data platform. One of the probably most successful implementations within GE is this ability to actually consolidate a bunch of different ERP data and have people be able to query it. Again, cheaply, easily, effectively. There were a lot of different ways that you could implement a solution like that. I think what's attractive to these guys, specifically around Greenplum, is that it leverages standard ANSI SQL. It scales to petabytes of data. We have this ability to do on-prem and off-prem. I was actually at the Gartner conference earlier this week, and walking around the show, it was actually somewhat eye-opening to me to see that, if you look at just that one product, there isn't really a competitive product being showcased that's open source, multi-cloud, analytical in nature, et cetera. And so, to get back to the GE scenario, what was attractive to them is that everything they're doing on-prem, we can actually move to the cloud. Whether it's Google, Azure, or Amazon, they can literally run the exact same product and the exact same queries. If you extend it beyond that particular use case, there are other use cases that are more real-time. And again, inside of the Data Suite, we've got another product called GemFire, which is an in-memory data grid. It allows for this rapid ingest. So you can kind of think and imagine, whether it's jet engines or whether it's wind turbines, data is constantly being generated, and our ability is to take that data in real time, ingest it, and actually perform analytics on it as it comes in.
So again, kind of a loose example would be: if you know the heat tolerance for a wind turbine is between this temperature and this temperature, do something, set an alarm, shut it down, et cetera. If you can do that in real time, you can actually save literally millions of dollars by not letting that turbine fail. Okay, so it sounds here like the GemFire product and the Greenplum DBMS are very complementary. One is speed and one is sort of throughput. And we've seen, almost like with Hadoop, an overreaction, turning a coherent platform into a bunch of building blocks. Yes. And with Greenplum, you have sort of everything packaged together again. Would it be proper to think of Greenplum as combining the best of the data lake and the data warehouse, where you've got the data scientists and data engineers satisfied with what would have been one product, and the business analysts and, you know, sort of the BI crowd satisfied with the same product, where they would have needed another? So I'd say you're spot on. What is super interesting to me is, one, I've been doing data warehousing now for, I don't know, 20 years. And for the last five, I've kind of felt like the data warehouse, just the term, was equivalent to, like, the mainframe. So I had kind of relegated it and said, I'm not going to use that term anymore. But with the advent of the cloud and with other products that are out there, we're seeing this resurgence where the data warehouse is cool again. And I think part of it is because we had this shift where we had really expensive products doing, you know, the classic EDW. And it was too rigid and it was too expensive. And so Hadoop sort of came on and everyone was like, wow, hey, this is really easy. This is really cheap. You know, we can store whatever we want. We can do any kind of analytics. And as I was saying before, I think the love affair with, you know, piecing all of that together is kind of over.
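The wind-turbine tolerance rule Istok describes above can be sketched in a few lines. This is illustrative only: the thresholds and function names are hypothetical, not GE's actual values, and a real deployment would evaluate readings inside a streaming system such as GemFire rather than a plain Python loop.

```python
# Minimal sketch of the real-time rule described above: act the moment a
# turbine's temperature leaves its tolerance band. Thresholds are assumed.
TEMP_LOW_C = 10.0    # hypothetical lower tolerance
TEMP_HIGH_C = 85.0   # hypothetical upper tolerance

def evaluate_reading(temp_c):
    """Return the action to take for one incoming temperature reading."""
    if temp_c > TEMP_HIGH_C:
        return "shutdown"   # too hot: stop the turbine before it fails
    if temp_c < TEMP_LOW_C:
        return "alarm"      # too cold: alert the operators
    return "ok"             # within tolerance: no action needed

# As readings stream in, each one is evaluated immediately on ingest.
readings = [42.0, 88.5, 7.2]
actions = [evaluate_reading(t) for t in readings]
```

The point of doing this on ingest, rather than in a nightly batch, is exactly the cost argument above: the failing reading triggers the shutdown before the turbine is damaged.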
And I also think, you know, it's funny, it was really hard for organizations to successfully stand up a Hadoop platform. And I think the metric that we hear is that 50% of them fail, right? So part of that, I believe, is because there just aren't enough people to do what needed to be done. And so interestingly enough, because of those failures, because the Hadoop ecosystem didn't quite integrate into the classic enterprise, products like Greenplum are suddenly very popular. I was just looking at our downloads for the open source part of Greenplum, and we're literally, at this juncture, seeing about 1,500 distinct customers leveraging the open source product. So I kind of feel like we're on this upswing of getting everybody to understand that you don't have to go to Hadoop to be able to do structured and unstructured data at scale. You can actually use some of these other products. So sorry, George, quickly. Being in the industry for 20 years, we talk about, you know, culture a lot. And we say, oh, cultural shift. People started embracing Hadoop. Oh, we can dump everything. The data lakes turned into swamps. I'm curious, though, maybe it's a cultural shift, maybe it's a cultural roller coaster of, like, you know, mainframes are cool again. Like you said, give us your perspective on how you've helped companies like GE, as technology waves come, really kind of help design and maybe drive a culture that embraces the velocity of this change. Sure, so one of the things that we do a lot is help our customers better leverage technology and really kind of train them. So we have a couple different aspects to Pivotal. One of them is our Labs aspect. And effectively that is our ability to teach people how to better build applications, how to better do data science, how to better do data engineering. Now, when we come in, we have an opinionated way to do all of those things. And when a customer embraces it, it actually opens up a lot of doors.
So we're somewhat technology agnostic, which speaks to your question, right? So when we come in, we're not trying to push a specific technology, we're actually trying to push a methodology and an end goal or a solution. And, you know, oftentimes, of course, that end goal and solution is best met by our products. But to your point about the roller coaster, it seems as though, as we have evolved, there is a notion that data from an organization will all come together in a common object store. And then the ability to quickly spin up an analytical or a programmatic interface into that data is super important. And that's where we're kind of leaning in. And that's where I think this idea of convenience, being able to push-button instantiate a Greenplum cluster or push-button instantiate a GemFire grid so that you can do analytics or take actions on it, is so important. You said something that sounds really, really important, which was, it sounded like you were alluding to a single source of truth, and then you spin up whatever compute and, you know, bring it to the data. But there's an emerging, still early school of thought, which is that maybe the single source of truth should be sort of a hub around a set of real-time streams. Sure, yeah. How does Pivotal play in that world? So there are a lot of products that can help facilitate that, including our own. I would say that there is a broad ecosystem that kind of says, if I was going to start an organization today, there are a number of vertical products that I would need in order to be successful with data. One of them would be just a standard relational database. And if I pause there for a second, if you look at it, there's definitely a move toward building microservices so that you can glue all the pieces together.
Those microservices actually require smaller, simpler relational-type databases, or perhaps NoSQL-type databases, on the front end. But they become simpler and simpler, which is where I think, if I was Oracle or some of the more stalwarts on the relational side, it's not about how many widgets you can put into the database. It's really about its simplicity and performance. From there, having some kind of message queue or system to take those changes and updates of that data down the line, so that it's not so much IT providing it to an end user, but more self-service: being able to subscribe to the data that I actually care about. And then again, going back to the simplicity, me as an end user being able to take control of my destiny and use whatever product or whatever technology makes the most sense for me. And if I dovetail just a little bit on the side of that, we've focused so much this year on convenience and flexibility that I think it is now at a spot where all of the innovations that we're doing, say, in the AWS Marketplace, for example, on Greenplum, all of those innovations are actually leading us to the same types of innovations in data deployments on top of Kubernetes. Two of them come to mind. I was in front of a group last week and we were presenting some of the things that we've done, and one of them was self-healing of Greenplum. It's often been said that these big analytical solutions are really hard to operate. And through our innovations, if a segment goes down or a host goes down or there are network problems, the system will actually self-heal. And so all of a sudden, the operational needs become quite a bit less. In addition, we've also created this automatic snapshotting capability, which allows, I think in our last benchmark, about a petabyte of data in less than three minutes.
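The self-service pattern Istok describes, where end users subscribe only to the data changes they care about rather than IT pushing everything to everyone, is essentially publish/subscribe. A toy sketch follows; the class and topic names are hypothetical, and a real system would sit on a message queue such as Kafka or RabbitMQ rather than an in-process dictionary.

```python
from collections import defaultdict

class ChangeFeed:
    """Toy pub/sub hub: consumers register for only the topics they want."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        # An end user registers interest in one stream of data changes.
        self._subscribers[topic].append(handler)

    def publish(self, topic, change):
        # Fan each change out only to the handlers subscribed to its topic.
        for handler in self._subscribers[topic]:
            handler(change)

feed = ChangeFeed()
seen = []
feed.subscribe("orders", seen.append)    # I care about order changes
feed.publish("orders", {"id": 1})        # delivered to my handler
feed.publish("inventory", {"sku": "x"})  # ignored: no subscription
```

The design point is the inversion of control: the consumer chooses what to receive, which is what makes the pattern self-service rather than IT-provisioned.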
And so suddenly you've got this operational stalwart, almost a database as a service without really being a service, really just this living, breathing thing. And that kind of dovetails, like I said before, back to where we're trying to make all of our products perform in a way that customers can just use them and not worry about the nuts and bolts. So last question, we've just got about 30 seconds left. We talked about a lot of technologies, but you mentioned methodology. Is that approach from Pivotal, do you think, one of the defining competitive advantages that you deliver to the market? It is 100% one of our defining things. Our methodology is what is enabling our customers to be successful. And it actually allows me to kind of say, we've partnered with PostgresConf, and Greenplum Summit this year is next month in April. And the theme of that is hashtag data tells the story. And so from our standpoint, Greenplum is continuing to take off, GemFire is continuing to take off, Kubernetes is continuing to take off, PCF is continuing to take off, but we believe that digital transformation doesn't happen without data. We think data tells the story. So I'm here to encourage everyone to come to Greenplum Summit. I'm also here to encourage everyone to share their stories with us on Twitter, hashtag data tells the story, so that we can continue to broaden this ecosystem. Hashtag data tells the story. Jacque, thanks so much for carving out some time this week to come back to theCUBE and share what's new and differentiating at Pivotal. Thank you. We want to thank you for watching theCUBE. I'm Lisa Martin with my co-host George Gilbert. We are live at Big Data SV, our 10th Big Data event. Come down here and see us. We're in San Jose at Forager Eatery. We've got a great party tonight. And also tomorrow morning at 8 a.m., we've got a breakfast briefing you won't want to miss. Stick around and we'll be back with our next guest after a short break.