Live from Las Vegas, Nevada, it's theCUBE covering EMC World 2015, brought to you by EMC, Brocade, and VCE. Welcome back everyone. Jeff Frick here from EMC World 2015. You're watching theCUBE, where we go out to the events and extract the signal from the noise. We've been going three days of wall-to-wall coverage here from EMC World 2015. It's a great year at the show. It's where we started theCUBE in 2010, so we always like coming out. We have a ton of great guests, and so we're excited for our next segment. My co-host is Stu Miniman. Thanks, Jeff. And joining us for this segment we've got Mike Cucchi, who's the Senior Director of Outbound Product with Pivotal, part of the whole federation here. Mike, thanks so much for joining us. Thanks for having me. I'm really excited to be here. Alright, so you're on the big data side of Pivotal. We've dug in with the Cloud Foundry pieces there. Talk to us a little bit about your organization and your role there. Yeah, so basically we have a set of tools that we leverage for advanced analytics and big data processing. But more importantly, and maybe uniquely, we really drive those analytics into operationalized applications. And that's where the synergy point with Cloud Foundry comes in, being able to rapidly iterate and do agile development. At Pivotal, we firmly believe it's not just about building applications; it's about building applications that leverage analytics so that they can create unique experiences for customers. So I represent a set of analytical databases, a Hadoop distribution, and then SQL tools to do advanced analytics on Hadoop. Yeah, so Mike, when Pivotal first formed, people kind of scratched their heads and said, we've got all these weird piece parts that came from EMC and VMware. What they had in common is they were developer-centric, and many of them had open source capabilities. How has that come together over the last couple of years? 
So I think it's funny, it's actually a testament to the leadership of the company, because they listened to the customer base from an EMC perspective and a VMware perspective. They started hearing customers driving towards new use cases. And when they looked across their portfolio and their people, they saw a very clear path to connecting sets of technologies that were in each of the organizations. And they are rooted in the same general foundations, like you just mentioned. So we announced in February of this year that we're going to be open sourcing the entire data portfolio. We have a flexible subscription we call the Big Data Suite. The big data space and the traditional analytics space had traditionally been a very proprietary market. And then thanks to Hadoop and the energy and the healthy community in the Apache organization, we've really seen open source establish itself as a sea change and also really as a requirement. So we had already taken that tack with a number of the VMware assets that are in the application stack, and obviously Cloud Foundry is 100% open sourced as well. So we wanted to follow through with that strategy across our data products, and we actually open sourced the first one just two weeks ago. We open sourced GemFire, which is an in-memory database. It powers bleeding-edge stock portfolio transaction and fraud use cases, and some of the largest booking systems in the world. And it's really unheard of to take such amazingly advanced technology and just release it out to open source. So that's called Geode; I hope people dig into that. Okay, if I remember right, was that the fast data piece that was in there? That's correct. So we look at what's been happening with open source, and it's a trend that Wikibon's been tracking for many years. I mean, our company itself is founded on open source and crowdsourcing. 
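The pattern Cucchi is describing here, an in-memory key-value store that pushes change events out to application code for low-latency decisions like fraud checks, can be sketched roughly as follows. Note that GemFire/Geode's real API is Java (regions, cache listeners, and so on); this Python sketch only illustrates the general data-grid idea, and all names in it are invented for illustration.

```python
# Conceptual sketch of an in-memory key-value "region" with change
# listeners, loosely modeled on the data-grid pattern GemFire/Geode
# implements. This is NOT Geode's actual API (which is Java-based);
# every name below is illustrative.

class Region:
    def __init__(self, name):
        self.name = name
        self._data = {}          # all entries live in memory
        self._listeners = []     # callbacks fired on every put

    def add_listener(self, fn):
        self._listeners.append(fn)

    def put(self, key, value):
        old = self._data.get(key)
        self._data[key] = value
        for fn in self._listeners:
            fn(key, old, value)  # push the change event to interested code

    def get(self, key):
        return self._data.get(key)

# Example: flag suspiciously large transactions as they arrive, the
# kind of low-latency fraud check mentioned in the interview.
flagged = []
txns = Region("transactions")
txns.add_listener(lambda k, old, new: flagged.append(k) if new > 10_000 else None)
txns.put("t1", 250)
txns.put("t2", 50_000)
# flagged now holds ["t2"]
```

The point of keeping everything in memory with push-style listeners is that the application reacts to each event as it happens, rather than polling a disk-backed store.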
We really like what the Cloud Foundry Foundation did to help put proper governance in place, learning from some of the problems OpenStack had in maybe being a little bit too fragmented across so many projects. Can you talk a little bit about how Pivotal looks at that? How do you guys make money and differentiate from the pure open source version? So you brought up a couple of great points. The first one is open source is vibrant and it's an amazingly powerful momentum, but it results in very bursty development cycles across many, many projects, right? And so Pivotal's obviously heavily invested in the Hadoop space, and so is everybody at the show, right? So it's better for all of us and our customers if it's stable and predictable and scalable. The same thing happened in the platform-as-a-service space, as you pointed out. There was a ton of innovation, but customers and enterprises were kind of paralyzed, because there's a lot of complexity, and you also have a limited budget to invest in these technologies, so you're making an investment that you want to be able to play forward. So the Cloud Foundry Foundation helped to stabilize that platform-as-a-service ecosystem. It brought in all of these major players that put a stake in the ground and said, we're going to invest in this platform as a standard. And we really saw, along with a number of major partners and even competitors, that we needed the same thing in the data space. So we launched something, we were proud to be a founding member of something called the Open Data Platform, and that's basically driving towards the same type of stability in the Hadoop space that we achieved in the platform-as-a-service space. So you're outbound product, so I assume you're talking to a lot of customers and prospects. 
Talk a little bit about how that world is evolving, how the concept of big data and the early days of Hadoop are moving from what was very early days into POCs, into production, into big production. How is that adoption going? What are you hearing out in the field? What can you share with us? So I think it's been really exciting to see just how rapidly this technology is changing the cost and value that enterprises can get from data and analytics. But I think what we saw early on was that complexity and that bifurcation in the ecosystem resulted in a bit of a slower adoption, and maybe a slower transition into primetime production use cases. Pivotal was unique at that time because we had invested heavily in SQL advanced analytics for doing predictive, machine learning, and even prescriptive analytics. And what we did is we ported that advanced SQL analytics tool onto Hadoop. And so the result there that benefited Pivotal and our customers was they could take their traditional stack and point it directly at Hadoop data, leveraging this technology called HAWQ. And so we witnessed that enterprises weren't hesitant to use Hadoop; they just wanted to leverage their existing skill sets and their technology investments on new data in Hadoop. We realized that we could help accelerate their adoption if we could bring all of that stack into an enterprise environment. So I would argue we're seeing really nice progress there. Last year we released this subscription called the Big Data Suite. And we talk about this journey from storing data to becoming an expert at analyzing data, but then critically building something with the intelligence that you just obtained. And we've seen people mature across this over the last 24 months. What originally was a customer just investing to store and analyze big data very quickly became somebody that was requiring an in-memory database in order to push the analytics up into the application. 
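The HAWQ value proposition described above is that analysts keep using the SQL they already know, just pointed at data sitting in Hadoop. As a rough illustration of the kind of analytic query that carries over unchanged, here is a sketch using Python's built-in sqlite3 as a stand-in for a real HAWQ connection; the table and column names are made up, and real HAWQ speaks a PostgreSQL-flavored dialect over HDFS-resident data rather than an embedded database.

```python
import sqlite3

# Stand-in for a HAWQ connection: the point is that the same SQL an
# analyst already writes against a warehouse also works against data
# in Hadoop. Table and column names below are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clicks (user_id TEXT, page TEXT, ms INTEGER)")
conn.executemany(
    "INSERT INTO clicks VALUES (?, ?, ?)",
    [("u1", "home", 120), ("u1", "cart", 300), ("u2", "home", 90)],
)

# A typical analytic aggregate: per-user event counts and average latency.
rows = conn.execute(
    """
    SELECT user_id, COUNT(*) AS events, AVG(ms) AS avg_ms
    FROM clicks
    GROUP BY user_id
    ORDER BY events DESC
    """
).fetchall()
# rows: [("u1", 2, 210.0), ("u2", 1, 90.0)]
```

The existing-skills argument is exactly this: nothing in the query above is Hadoop-specific, so teams don't retrain to get value out of new data.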
And so the Big Data Suite did really well as a result. And then what we saw this year was that customers wanted to take that stack and migrate it with their infrastructure strategies. So support for virtual infrastructures, support for the Cloud Foundry platform as a service, right? The ability to launch these offerings as services inside of the platform as a service. So we spent a lot of time focusing on the ability to enable kind of agile analytics and agile applications, and seeing the convergence of those two. So Mike, Jeff and I spent a bunch of time today talking about the EMC Federation Business Data Lake. Can you talk about how Pivotal and your products fit into that? Absolutely. So no matter what anybody tells you, building large clusters is not that easy, right? And applying the sets of hardware and software that need to be brought together to extract value from your big data use cases takes time and it takes expertise. And this is one area we think the Federation can really lead the way to accelerate adoption. We've actually been in market with EMC since our inception, with an appliance that enables you to do a turnkey analytical database or a turnkey Hadoop stack with SQL on top of it. And we also kind of pioneered with EMC the idea of this data lake, which supports the use case I just mentioned. It's a mixture of an analytical database, Hadoop, SQL on Hadoop, in-memory technologies, virtualization, and platform as a service. It's kind of your next-generation platform, if you will. And the Federation Business Data Lake gives us a really heavily integrated way to deliver that complex stack in a low-complexity, low-risk, rapid-time-to-value way. So you could think of it as a reference architecture, but there's been a lot of co-development to ensure the integrity and the rapid provisioning of that infrastructure. So getting close on time, working outbound, what's the next hill to climb in the next six months for you? 
I think Pivotal is very unique because we see the merging of these three markets, where a lot of other people are focused on each market independently. So I think what you'll see from us is an acceleration of the ways to leverage our data and analytics and in-memory technologies through our platform as a service. I think also in-memory in general is an area that we've been heavily invested in, and we're moving aggressively forward on it. We just released a brand new version of Pivotal HD, and that includes the full Spark stack. So we're very interested in how in-memory impacts these amazing applications, but maybe more importantly how in-memory and in-memory file systems can merge the world of advanced analytics and the applications that need to feed off of those advanced analytics. So I think our strategy and vision have been validated, and now it's about really accelerating the ability to integrate all of these together for enterprises. Speaking of timing, Spark Summit's coming up, right? In the summer; I know they're getting a lot of activity. It is, and we'll be there. And then you've got the flash, you've got fast, fast, fast, so it's all good. Plus DSSD from EMC. Can you imagine what these massive, high-performance arrays of memory can do for the space we all play in? Yeah, as a matter of fact, Wikibon's Chief Technology Officer, David Floyer, is doing a session right now about rack-scale flash, or what Wikibon calls flash as memory extension. You know, it's real transformative to just take some of those core technologies and transform how you're building things. I mean, we're six years into kind of flash coming back into the enterprise, and we're just starting to hit some of the really cool stuff we're going to do with it. Yeah, I think from a Federation perspective, that's where you're going to see some amazingly impressive technology come out in a relatively short amount of time. Yes, good stuff. Well, Mike Cucchi from Pivotal, thanks for stopping by. 
Thank you guys very much. Absolutely. I'm Jeff Frick here with Stu Miniman. We're at EMC World 2015, day three of wall-to-wall coverage. We'll be back with our next guest after this short break. Thanks for watching.