Welcome to our presentation here at VMworld 2017. I'm John Furrier, co-host of theCUBE with Dave Vellante, who's taking a lunch break. We are at VMworld, on the ground on the show floor, with Google's Vice President of Product Management, Developer Platforms, Sam Ramji. Welcome to theCUBE conversation.

Great, thank you very much, John.

So you had a keynote this morning. You came up on stage with a big announcement. Let's get right to it: the container-as-a-service offering that Pivotal, VMware, and Google announced. It was a joint announcement, but it really came from Pivotal. Clarify what the announcement was.

Sure. What we announced is the result of a bunch of co-engineering we've been doing in the open source with Pivotal around Kubernetes running on BOSH. If you've been paying attention to Cloud Foundry, you know that Cloud Foundry is the runtime layer, and there's something called BOSH sitting underneath it that does the cluster management and cluster operations. Pivotal is bringing that to commercial GA later this year. So what we announced with Pivotal and VMware is that we're going to have constant compatibility between Pivotal's Kubernetes and Google's Kubernetes. Google's Kubernetes service is called Google Container Engine. Pivotal's offering is called Pivotal Container Service. The big deal here is that PKS is going to be the standard way you can get Kubernetes from any of the Dell family of companies, whether that's VMware or EMC. That gives us one consistent target for compatibility, because one of the things I pointed out in the keynote is that inconsistency is the enemy in the data center. That's what makes operations difficult.

And Kubo was announced at Cloud Foundry Summit, Stu Miniman covered it, but that wasn't commercially available. That's the nuance, right?

That's right, and that's still available in the open source.
So what we've committed to is this: every time we update Google Container Engine, Pivotal Container Service is also going to update. So we have constant compatibility, delivered on top of VMware's infrastructure, including NSX for networking. And then the final twist: a big reason people choose Google Cloud is our services. So Bigtable. BigQuery, a dynamically scaling data warehouse that we run an enormous amount of Google workloads on. Spanner, which keeps all of your data consistent globally across Google's planet-scale data centers. And finally, all of our new machine learning and AI investments. Those services will be delivered down to Pivotal Container Service. That's going to be there out of the box at launch, and we'll keep adding to that catalog.

It's just that at Google Next there were a lot of conversations like, oh, Google's catching up to Amazon. Okay, Amazon's done a great job, no doubt about it. We love Amazon. Andy Jassy was here as well. Super capable, very competent. There are a lot of workloads in the VMware community that run on AWS, but it's not the only game in town. Jerry Chen, an investor in Docker and a friend of ours, called this years ago: it's not going to be a one-cloud, winner-take-all game. Clearly. But there are the big three lining up: AWS, Microsoft, Google, and you guys are doing great. So I've got to ask you, what is the biggest misconception people have about Google Cloud out in the market? Because a lot of enterprises are used to running ops, maybe not as much dev as there is ops, and DevOps comes in with cloud native. There's a lot of confusion. What is the thing you'd like to clarify about Google that people may not know?

The single most important thing to clarify about Google Cloud is that our strategy is open hybrid cloud. We think we are in an amazing place to run workloads. We also recognize that compute belongs everywhere.
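To make the constant-compatibility idea concrete: it implies the same Kubernetes objects should apply unchanged to a GKE cluster or a PKS cluster. Here is a minimal sketch in Python of building one such Deployment manifest; the application name and image are hypothetical, not anything from the interview.

```python
import json


def make_deployment(name: str, image: str, replicas: int) -> dict:
    """Build a plain Kubernetes Deployment manifest as a dict.

    Because PKS is kept release-for-release compatible with Google
    Container Engine, the same manifest should apply unchanged to
    either cluster. Names and image here are hypothetical examples.
    """
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            # The selector must match the pod template labels.
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{"name": name, "image": image}],
                },
            },
        },
    }


manifest = make_deployment("web", "gcr.io/example/web:1.0", 3)
print(json.dumps(manifest, indent=2))
```

Submitting it to one cluster or the other would then just be a matter of switching kubectl contexts before running `kubectl apply`, which is the operational payoff of the compatibility guarantee.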
We think the durable state of computing is more of a mosaic than a unidirectional arrow that says everything goes to the cloud. We think you want to run your containers and your VMs in clouds. We think you want to run them in your data centers. We also think you want to move them around. So we've been diehard committed to building out the open source projects and the protocols that let all of that information flow, and to providing services that can get anywhere. So open hybrid cloud is the strategy, and that's what we've committed to with Kubernetes, with TensorFlow, with Apache Beam, with so much of the open source we've contributed to Linux and other projects, and then by maintaining open standards compatibility for our services.

Well, it's great to see you at Google, because I know your history. You're a great open source guy; open source has been a real part of your life, and bringing that to Google is great, so congratulations.

There's a reason for that, though; it's pragmatic, right? This is not a crazy crusade. The value of open source is giving control to the customer. And I think the most ethical way you can build businesses and markets is based on customer choice: giving customers the ability to move where they want, reducing their costs of switching. If they stay with you, then you're really producing a value-added service. I've spent time in the operator's shoes, in the developer's shoes, and in the vendor's shoes. When I've spent time buying and running software on my own, I've always valued and preferred things that would let me move my stuff around. I've preferred open source. So that's really the method to the madness here. It's not about opening everything up insanely and giving everything away. It serves customers better, and in the long run, the better you serve customers, the more likely you are to build a winning business.

We're here on the ground floor at VMworld 2017 in Las Vegas, where behind us is the VM Village.
And obviously Sam was on stage with the big announcement with Pivotal and VMware. This is kind of important now. We had a debate. Usually I'm not the contrarian in the group. I'm usually the guy who's like, yeah, rah-rah, entrepreneurial, optimistic: yeah, we could do that, the future's here, go to the future. But I was kind of skeptical, and I told VMware so; I saw Pat Gelsinger and Michael Dell in the hallways. They thought this was going to be the big announcement, and it was their big announcement. But I was kind of like, guys, it's the long game. These folks in the VMware community are operations guys. They're not going to connect the dots. There was applause, but not the standing ovation Google would have gotten at a Google Next conference, where the geeks would have been going crazy. So that's the question for you: what is the operational dynamic you're seeing in this market that Google's looking at and bringing value to?

The big change in the industry is going from only worrying about increasing application velocity to figuring out how to do that with reliability. There's a whole community of operators that I think many of us have left behind as we've talked about clouds and cloud native. We've done a great job appealing to developers, enabling them to be more productive. But to operators, we've kind of said, well, your mileage may vary, or we don't have time for you, or you have to figure it out yourself. I think the next big phase in the adoption of cloud native technologies is to say, first of all, open hybrid: run your stuff wherever you want. We've had the experience of running cloud, and now we're bringing that knowledge out here. And that's the next piece: how do we offer you the tools and the skills you need as an operator to have that same consistency, those same guarantees you used to have, and move everything forward into the future?
Because if you turn one audience, one community, into the bad people who are holding everything back, that's a losing proposition. You have to give everybody a path to win. Everybody wants to be the good guy. So I think now we need to start paying really close attention to operators and be approachable. I would like to see GCP become the most approachable cloud. We're already well known as the most advanced cloud, but can we be the easiest to adopt as well? I think that's our challenge: the experience.

You've got to have that touch that these enterprise companies historically have had. But it's interesting. The mosaic you mentioned requires some unification, right? You've got to be likable, you've got to be approachable, and that's where you guys are going. I know you're building out for that. But here's the question for you: Google has a lot of experience, and I know from personal knowledge Google's depth of people and talent, but it hasn't always had the cleanest execution out to the market in terms of the front-facing, white-glove service that some of these other companies have delivered, even though you're certainly strong.

I think this is where Diane Greene is driving the transformation. She breathes, eats, sleeps, and dreams enterprise. Being both a board member at Google and the SVP of Google Cloud, she's really bringing the discipline to say, no, white-glove service is mandatory. We have a pretty substantial professional services organization, and we're building out partnerships with Accenture, with PwC, with Deloitte, with everyone, to make sure these things are all serviceable and properly packaged all the way down to the end user. So no doubt there's more room for us to improve; there are miles to go on the journey. But the focus and the drive to deliver on the enterprise requirements are there; Diane never lets us stop thinking about that.

It's like math, right?
The order of operations is super important, and there's a lot going on in the cloud right now that's complex. Ease of use is the number one thing we're hearing, because it's a moving train in general, right? The cloud's growing, and there's a lot of complexity. How do you view that? And here's the question I want to ask you: we know what cloud looks like today. It's Amazon doing great, and now a multi-horse race, if you will. But in 2022, the expectations, and what it looks like then, are going to be completely different if you just follow the trajectory of what's happening. Cleaning up Kubernetes, making it manageable, all the self-updates: that makes a lot of sense, and I think those are the dots no one's connecting here. I get the long game, but what's the customer's view, in your opinion, as someone sitting back on the Google perch looking out over the horizon? In 2022, what's it like for the customer?

That's an outstanding question. I think in 2022, looking back, we'll have absorbed so much of this complexity that we can provide ease of use to every workload and to every segment. Backing into that, ease of use looks different by audience. Think about tooling: ease of use looks different to an electrician versus a carpenter versus a plumber. They're doing different jobs; they need different tools. So I think about those as different audiences and different workloads. If you're trying to migrate virtual machines to a cloud, ease of use means one thing. It includes taking care of the networking layer. How do we make sure our cloud network shows up like an on-premises network, so you don't have to set up some weird VPC configuration? How can it just look like part of your LAN, subject to your same security controls? That's a whole path of engineering for a particular division of the company. For a different division of the company, one focused on databases, ease of use is: wow, I've got this enormous database, and I'm straining at the edges.
How do I move that to the cloud? Well, what kind of database is it? Is it a SQL database or a NoSQL database? Engineering that in, that's the key. The other thing we have to do for ease of use is upskilling. A lot of the things we talked about before come down to the need to drive IT efficiency through automation. But who's going to teach people how to do the automation, especially while they're being held to a very high SLA standard for their own data center, and held to a high standard for the velocity of their move to the cloud? This is where Google has invented a discipline called SRE, or Site Reliability Engineering. It's basically the meta-discipline around what many people call DevOps. We think this is absolutely teachable, it's learnable, and it's becoming a growing community. You can get O'Reilly books on the topic. So I think we have an accountability to the industry to go and teach every operator in every operating group: hey, here's what SRE looks like. Some of your folks might want to do this, because that will give you the lift to make all of these workloads much easier to manage. It's not just about velocity; it's also about reliability.

It's interesting. We've got about a minute left, so I'm going to get your thoughts on this, because you've certainly seen it on the developer side. The stack wars: it used to be, well, my stack versus your stack. But last night I heard in the hallway here, multiple times, a general consensus of two stacks coming together, and not just software stacks, hardware stacks. You're seeing things run together that have never run or been tested together before. So site reliability is a very interesting concept, and developers get pissed off when stacks don't work. There's a super interesting nuance in these new use cases that are emerging, because stuff's happening that's never been done before.
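As an aside on the SRE discipline mentioned above: one of its concrete tools for balancing velocity against reliability is the error budget, the amount of downtime an availability SLO permits. A minimal sketch in Python; the 99.9% target and 30-day period are illustrative assumptions, not figures from the interview.

```python
def error_budget_minutes(slo: float, period_minutes: float = 30 * 24 * 60) -> float:
    """Return the downtime allowed per period by an availability SLO.

    slo: target availability as a fraction, e.g. 0.999 for "three nines".
    The error budget is the time the SLO permits the service to be down;
    SRE teams spend it on releases and experiments, and slow changes
    down when it is exhausted.
    """
    if not 0.0 < slo <= 1.0:
        raise ValueError("slo must be a fraction in (0, 1]")
    return (1.0 - slo) * period_minutes


# A 99.9% SLO over a 30-day month leaves about 43.2 minutes of budget.
print(round(error_budget_minutes(0.999), 1))
```

This is the sense in which velocity and reliability stop being opposites: the budget quantifies exactly how much unreliability the operators have agreed to tolerate in exchange for change.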
Yeah, so this is where the conversations get really interesting, especially as we build out a planetary-scale computer at Google. We're no longer thinking about how a GPU looks as part of your daughterboard. We think about racks of GPUs as part of your data centers, using NVIDIA K80s. What does it mean to have 180 teraflops of tensor processing capability in a Cloud TPU? So getting container-centric is crucial, and making it really easy to attach to all of those devices, by having open-source drivers, making sure they're all Linux-compatible, and making sure developers can get to them, is going to be part of the substrate that lets application developers target those devices. Operators can set a policy that says, yes, I want this to deploy preferentially to environments with a TPU or a GPU, and the whole system can just work and be operable.

Great. Sam, thanks so much for taking the time to stop by. A one-on-one conversation with Sam Ramji of Google Cloud, Vice President of Product Management and Developer Platforms. We'll see you at Google Next. Thanks for spending the time. I'm John Furrier, thanks for watching.

Thank you, John.