Live, from San Francisco, it's theCUBE, covering Google Cloud Next 19. Brought to you by Google Cloud and its ecosystem partners.

Welcome back, this is day three of Google Cloud Next. You're watching theCUBE, the leader in live tech coverage. theCUBE goes out to the events, we extract the signal from the noise. My name is Dave Vellante, I'm here with my co-host Stu Miniman. John Furrier has been here all week, wall to wall coverage, three days. Check out thecube.net for all the videos, siliconangle.com for all the news. Eric Brewer is here, he's the vice president of infrastructure and a Google fellow. Dr. Brewer, thanks for coming on theCUBE.

Happy to be here.

Great to see you. So, tell us the story of infrastructure and its evolution at Google, and then we'll talk about how you're taking what you've learned inside of Google and helping customers apply it.

Yeah, one of the things about Google is it essentially makes no use of virtual machines internally. And that's because Google started in 1998, which is the same year that VMware started; they're the ones who kind of brought the modern virtual machine to bear. And so Google infrastructure tends to be built really on kind of classic UNIX processes and communication. And so scaling that up, you get a system that works a lot with just processes and containers. So when I saw containers come along with Docker, we said, well, that's a good model for us. And we can take what we know internally, which was called Borg, a big scheduler, and we can turn that into Kubernetes and we'll open source it. And suddenly we have kind of a cloud version of Google that works the way we would like it to work. And it's a bit more about the containers and APIs and services rather than kind of the low-level infrastructure.

Am I right to infer from that comment that you essentially had a cleaner sheet of paper when containers started to ascend?
I kind of feel like, and it's not an accident, Google influenced Linux's use of containers, which influenced Docker's use of containers. And we kind of merged the two concepts, and it became a good way to deploy applications that separates the application from the underlying machine. Instead of deploying a machine, an OS, and an application together, we'd actually like to separate those and say we'll manage the OS and machine, and let's just deploy applications independent of machines. Now we can have lots of applications per machine, improve your utilization, improve your productivity. And that's kind of what we were already doing internally, but it was not common in the traditional cloud. But it's actually a more productive way to work.

Yeah, Eric, my background's in infrastructure, and I was actually at the first DockerCon back in 2014, only a few hundred of us, right across the street from where we are here, and I saw the Google presentation. It was like, oh my gosh, I lived through that wave of virtualization. And the nirvana we want is, I want to just be able to build my application and not worry about all of those underlying pieces of infrastructure. We're making progress, but we're not there. How are we doing as an industry as a whole? Where are we, and what's Google looking at with Kubernetes and all these other pieces to improve that? And what do you still see as the room for growth?

Well, it's pretty clear that Kubernetes has won, in the sense that if you're building new applications for the enterprise, that's clearly the way you would build them now. But it doesn't help you move your legacy stuff, and it doesn't per se help you move to the cloud. It may be that you have workloads on-prem that you would like to modernize. They're on VMs or bare metal. They're traditional kind of '80s apps in Java or whatever. And how does Kubernetes affect those? That's actually still a place where I think things are evolving.
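A quick way to see the utilization point Brewer makes here, many applications per machine instead of one app per machine, is a first-fit packing sketch in Python. All requests and capacities below are made-up figures, and this is not Borg's or Kubernetes' actual scheduling algorithm:

```python
# First-fit packing sketch: schedule containers onto machines by CPU request.
# Capacities and requests are illustrative assumptions only.

def schedule(requests, machine_cpu):
    """Place each CPU request on the first machine with room; return machine loads."""
    machines = []
    for r in requests:
        for i, load in enumerate(machines):
            if load + r <= machine_cpu:
                machines[i] += r
                break
        else:
            machines.append(r)  # no machine had room: open a new one
    return machines

requests = [0.5, 0.3, 0.2, 0.7, 0.1, 0.4]   # CPU cores requested per container
packed = schedule(requests, machine_cpu=1.0)
one_per_machine = len(requests)             # the "one app per machine/VM" model

print(f"machines used when packed: {len(packed)} vs {one_per_machine}")
print(f"average utilization when packed: {sum(packed) / len(packed):.0%}")
```

Packing the six containers onto shared machines needs half as many machines as running one app per machine, which is the utilization and productivity gain being described.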
The good news now is it's much easier to mix kind of traditional services and new services using Istio and other things. And we see people containerizing workloads, but actually I would say most people just do the new stuff in Kubernetes and wrap the old stuff to make it look like a service. That gets you pretty far. And then over time you can containerize the workloads that you really care about and want to invest in. And what's new with Anthos is you can make some of those transitions on-prem if you'd like, separate from moving to the cloud. And then you can decide, oh, this workload goes in the cloud; this workload I need to keep on-prem for a while, but I still want to modernize it, with a lot more flexibility.

Can you just parse that a little bit for us? Are you talking about the migration service that's coming out, or is it Anthos itself?

Part of that is the Velostrata work, which can take a VM and convert it to a container. There's a newer version of that which really gives you a manifest, essentially, for the container, so you know what's inside it and you can actually use it in the modern way. That's a migration tool and it's super useful, but I kind of feel like even just being able to run high-quality Kubernetes on-prem is a pretty useful step, because you get the developer velocity, you get high release frequency, you get more decoupling of operations and development. So you get a lot of benefits on-prem, but also when you move to cloud, you can go to GKE and get a great Kubernetes experience whenever you're ready to make that transition.

So it sounds like what you described with Anthos, particularly the on-prem piece, is like an elixir to help people more easily get to a cloud-native environment and then ultimately bridge it to the cloud.
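The "wrap the old stuff to make it look like a service" idea Brewer mentions can be sketched as a plain adapter. Every class and method name below is hypothetical, invented for illustration; this is not an Anthos or Istio API:

```python
# Adapter sketch: expose an unmodernized routine behind the same interface
# that new containerized services use. All names here are illustrative.

class LegacyInventory:
    """Stands in for a legacy app we don't want to rewrite yet."""
    def LOOKUP(self, sku):                 # awkward legacy calling convention
        return {"SKU": sku, "QTY": 7}

class InventoryService:
    """The contract the rest of the modern services expect."""
    def get_quantity(self, sku: str) -> int:
        raise NotImplementedError

class LegacyInventoryAdapter(InventoryService):
    """Wraps the legacy app so callers see an ordinary service."""
    def __init__(self, legacy: LegacyInventory):
        self._legacy = legacy

    def get_quantity(self, sku: str) -> int:
        return self._legacy.LOOKUP(sku)["QTY"]

svc: InventoryService = LegacyInventoryAdapter(LegacyInventory())
print(svc.get_quantity("A-100"))  # callers never touch the legacy interface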
That's right, we're helping people get cloud-native benefits where they are right now, and they on their own time can decide not only when to move a workload but even, frankly, which cloud to move it to. We prefer, obviously, that they move it to Google Cloud, and we'll take our chances, because I think we're particularly good at these cloud-native applications. But it's more important that they are moving to this kind of modern platform. It helps them, and it increases our impact on the industry to have this happen.

Help us understand the nuance there, because there are obvious benefits of being in the public cloud, being able to rent infrastructure, the OPEX versus CAPEX, the managed services, et cetera. But to the extent that you can bring that cloud experience on-premises to your data, that's what many people want, to have that hybrid experience. Other than the obvious benefits that I get from public cloud, what are the other nuances of actually moving into the public cloud, from an experience standpoint and a business-value perspective?

Well, one question is how much rewriting do you have to do? Because it's a big transition to move to cloud, but it's also a big transition to rewrite some of your applications. So in this model, we're actually separating those two steps, and you can do them in either order. You can lift and shift to move to cloud and then modernize. But it's also perfectly fine to say, I'm going to modernize on-prem, do my rewrites in a safe, controlled environment that I understand, that's low risk for me, and then I'm going to move it to the cloud, because now I have something that's really ready for the cloud and has been thought through carefully that way. Having those two options is actually an important change with Anthos.

And we've heard some stats, I think Thomas mentioned them, that 80% of the workloads are still on-prem. We hear that all the time.
And some portion of those workloads are mission-critical workloads with a lot of custom code that people really don't want to freeze. And a lot of times if you're going to migrate, you have to freeze. So my question is, can I bring some of those Anthos and other Google benefits on-prem and not have to freeze the code, not have to rewrite, just kind of permanently leave those there, and then take my other stuff and move it into the cloud? Is that what people are doing?

We're seeing a mix, but I would say the beachhead is having well-managed Kubernetes clusters on-prem that you can use for new development or as a place to do your rewrites or partial rewrites. You can mix VMs and mainframes and Kubernetes. They're all mixable. It's not a big problem, especially with Istio, where it can make them look like they're part of the same service mesh.

Common framework, right.

So I think it's more about having the ability to execute modern development on-prem and feel like you're really able to change those apps the way you want, and on a good timeline.

Okay, so I've heard several times this week that Anthos is a game changer. I mean, that's how Google I think is looking at this. You guys are super excited about it. So one would presume then that 80% on-prem is going to really start to move. What are your thoughts on that?

I think the way to think about it is, all the customers we've talked to actually do want to move their workloads to cloud. That's not really the discussion point anymore. It's more about the reasons they can't, which could be they already have a data center that they've fully paid for, or regulatory issues they have to get resolved, or this workload is too messy, they don't want to touch it at all, the people that wrote it aren't here anymore. There's all kinds of reasons.
So the essence of it is, let's just interact with the customer right now, before they make a decision about their cloud, and help them. And in exchange for that, I believe we have a much better chance to be their future cloud, right? Because we're helping them, but also they're starting to use frameworks that we're really good at, right? If they're betting on Kubernetes and containers, I like our chances for winning their business down the road.

You're earning their trust by providing those capabilities.

That's really the difference. We can interact with those 80% of the workloads right now and make them better.

All right. So, Eric, a term we've heard a bunch this week is, we're listening to customers and we're meeting them where they are. Now, Dave and I are analysts, so we can tell customers they suck at a lot of stuff; you should listen to Google, they're really smart and they know how to do these things right.

I hope so.

So, tell us some of those gaps and the learnings you've had. We understand migrations and modernization are really challenging. What are some of those things that customers can do to accelerate that faster?

There's a couple of them. On the basic issues, I would say, one thing you notice when using GKE is that the OS has been patched for you magically. We had these huge security issues in the past year, and no one on GKE had to do anything. They didn't restart their servers. We didn't have to tell them we need downtime because we have to deal with these massive security attacks. All that was magically handled. Then you say, oh, I want to upgrade Kubernetes. Well, you can do that yourself, but guess what? It's not that easy to do. Kubernetes is a beast, and it's changing quickly every quarter. That's good in terms of velocity and trajectory, and it's the reason that so many people can participate, but at the same time, if you're a group trying to run Kubernetes on-prem, it's not that easy to do.
There's a lot of benefit just in saying, we update clusters all the time, we are experts at this. We will update your clusters, including the OS and the Kubernetes version, and we can give you monitoring data and tell you how your cluster is doing. That's just stuff that honestly is not core to these customers, right? They want to focus on their advertising campaign or their oil and gas workloads. They don't want to focus on cluster management. So that's really the second thing they get out of it.

On that operating model, if I do Anthos in my own data center with the same kind of environment, how do we deal with things like, well, I need to worry about change management and testing out all my other pieces? Does most of that go away?

The general answer to that is you use many clusters. You can have a thousand clusters on-prem if you want, and there's good reason to do that. One reason is, we'll upgrade the clusters individually. So you can say, let's make this cluster a test cluster, and we'll upgrade it first and we'll tell you what broke, if anything. If you give us tests, we can run the tests, and then once we're comfortable that the upgrade is working, we'll roll it out to all your clusters automatically. Same with policy changes. If you want to change your quota management or access control, we can roll out that change in a progressive way, so that we do it first on clusters that are not so critical.

So I've got to ask a question. You're a software guy, and you're approaching this problem from a real software perspective. There's no box, I don't see a box, and there are three examples in the marketplace, Azure Stack, Oracle Cloud at Customer, and Amazon Outposts, where there's a box. No box from Google, pure software. Why no box? Do you need a box? The box guys say, hey, you've got to have a box.

It's more like, I would say, you don't have to have a box.

You don't have to have a box, okay.
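The progressive rollout Brewer describes, upgrade a test cluster first, run the customer's tests, then upgrade the rest of the fleet, can be sketched as follows. Cluster names, version numbers, and the test hook are illustrative assumptions, not the actual Anthos mechanism:

```python
# Progressive-upgrade sketch: canary the first cluster, gate on tests,
# then roll the new version out to the remaining clusters.

def progressive_upgrade(clusters, new_version, run_tests):
    """Upgrade the test (first) cluster; abort and roll back if tests fail."""
    canary, rest = clusters[0], clusters[1:]
    old_version = canary["version"]
    canary["version"] = new_version
    if not run_tests(canary):
        canary["version"] = old_version  # roll the test cluster back
        return False                     # the rest of the fleet is untouched
    for cluster in rest:                 # tests passed: upgrade everyone else
        cluster["version"] = new_version
    return True

clusters = [{"name": f"cluster-{i}", "version": "1.13"} for i in range(4)]
ok = progressive_upgrade(clusters, "1.14", run_tests=lambda c: True)
print(ok, sorted({c["version"] for c in clusters}))
```

The same gate applies to policy changes in the interview: try quota or access-control changes first on non-critical clusters, and only propagate once nothing breaks.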
That's because, again, a lot of these customers are in the data center because they already have the hardware, right? If they were going to buy new hardware, they might as well move to cloud, right? At least for some of the customers. And it turns out we can run on most of their hardware. We're leveraging VMware for that, with the partnership we announced here. So that generally works. But that being said, we also announced partnerships with Cisco, Dell, and HPE, so if you want a box, we'll have offerings that way as well. And there's certainly good reason to do that. You can get updated infrastructure, we'll know it works well, it's been tested. But the bottom line is, we're going to do both models.

Yeah, okay, so I can get a full stack, from hardware through software, through the partnerships.

Right, and it'll always come from partners. We're really working with a partner model for a lot of these things, because we honestly don't have enough people to do all the things we would like to do with these customers.

And how important is it that that on-prem stack is identical, homogeneous, with what's in the public cloud? Is it really? It sounds like your philosophy is that the core software components have to be the same.

Right, at least the core pieces need to be the same. Like Kubernetes, Istio, kind of policy management. If you use open-source things like MySQL or Kafka or Elastic, those ought to operate the same way as well. So that when you're in different environments, you really kind of get the feeling of one environment, one stack. Now, that being said, if you want to use a special feature, like I want to use BigQuery, that's only available on Google Cloud. You can call it, but that stuff won't be portable.
Likewise, if it's something you want to use on Amazon, you can use it, and that part won't be portable, but at least most of your infrastructure will be consistent across the platforms.

How should we think about the future? I mean, without giving away confidential information, obviously you're not going to do that, but just philosophically, where are you going? When you talk to customers, what should their mindset be? How should they be preparing for the future?

Well, I think there are a few bets we're making. We're happy to work on kind of traditional cloud things, with virtual machines and disks and lots of classic stuff. That's still important, it's still needed. But there are a few interesting things we're pushing on pretty hard. One in general is this move to a higher-level stack, around containers and APIs and services, and that's Kubernetes and Istio and that genre. But then the other thing I think is interesting is we're making a pretty fundamental bet on open source, and it's a deeper bet than others are making, with partnerships with open source companies where they're helping us build the managed version of their product. And so I think that's really going to lead to the best experience for each of those packages, because the people that developed the package are working on it, and we will share revenue with them. So Kubernetes is open source, TensorFlow is open source, this is kind of the way we're going to approach this thing, especially for hybrid and multi-cloud, where there really, in my mind, is no other way to do multi-cloud other than open source, because this space is too fast-moving. You're not going to be able to say, oh, here's a standard API for multi-cloud, because whatever API you define is going to be obsolete in a quarter or two.
What we're saying is the standard is not a particular standard per se; it's the collection of open source software that evolves together, and that's how you get consistency across the environments. It's because the code is the same. In fact there is a standard, but we don't even know what it is exactly, right? It's implicit in the code.

Okay, but any competitor can say, we love open source too, we'll embrace open source. What's different about Google's philosophy?

Well, first of all, you can just look at our very high level of contribution back into the open source packages, not just the ones that we're doing. You can see we've contributed things like the Kubernetes trademark, so that means it's actually not a Google thing anymore; it belongs to the Cloud Native Computing Foundation. But also the way we're trying to partner with open source projects is really to give them a path to revenue, right? Give them a long-term future. And the expectation is that makes the products better, and it also means that we're implicitly a preferred partner, because we're the ones helping them.

All right, Eric, one of the things that caught our attention this week is really kind of extending containers with things like Cloud Code and Cloud Run. Can you speak a little bit to that and maybe, you know, directionally where that's going?

Yeah, Cloud Run's one of my favorite releases of this week, and Cloud Code is great also, especially its VS Code integration, which is really nice for developers. But I would say Cloud Run kind of says, we can take any container that has a kind of a stateless thing inside and an HTTP interface, and make it something we can run for you in a very clean way. And what I mean by that is, you pay per call. In particular, we'll listen 24-7 in case a call comes, but if no call comes, we're going to charge you zero, right?
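The pay-per-call, scale-to-zero economics described here can be sketched as follows. The per-call price and per-instance concurrency are made-up figures for illustration, not Google's actual Cloud Run pricing or limits:

```python
# Sketch of request-driven billing and scaling: zero traffic costs nothing,
# and the instance count follows concurrent demand. Figures are assumptions.

PRICE_PER_CALL = 0.0000004   # illustrative per-call price, an assumption
MAX_CONCURRENCY = 80         # requests one instance serves at once, an assumption

def bill(calls: int) -> float:
    """Charge only for calls that actually arrive; idle listening is free."""
    return calls * PRICE_PER_CALL

def instances(concurrent_requests: int) -> int:
    """Scale to zero with no traffic, and up with load (ceiling division)."""
    if concurrent_requests == 0:
        return 0
    return -(-concurrent_requests // MAX_CONCURRENCY)

print(bill(0))            # zero traffic, zero charge
print(instances(80_000))  # a flood of connections scales out to many instances
```

The provider absorbs the cost of listening for traffic; the customer pays only in proportion to calls served, which is the "generalization of functions" point made next.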
So we'll eat the cost of listening for your packets to arrive, but if a packet arrives for you, we will magically make sure you're there in time to execute it. And if you get a ton of connections, we'll scale you up; you could have 1,000 servers running your Cloud Run containers. So what you get is a very easy deployment model that is a generalization, frankly, of functions. You can run a function, but you can also run not only a container with a kind of managed runtime, App Engine style, but also any arbitrary container with your own custom Python and image-processing libraries, whatever you want.

So, Eric, you're our last guest at Google Cloud Next 2019, so thank you. Put a bow on the show this year. Obviously, we've got the bigger, better, shinier Moscone Center, it's awesome. Definitely a bigger crowd, you see the growth here. Tell us what you think. Take us home.

I have to say it's been really gratifying to see the reception that Anthos is getting. I do think it is a big shift for Google and a big shift for the industry, and we actually have people using it. So I kind of feel like we're at the starting line of this change, but it's really resonated well this week, and it's been great to watch the reaction.

Everybody wants their infrastructure to be like Google's, and this is one of the people who made it happen. Eric, thanks very much for coming on theCUBE.

I appreciate it.

All right, keep it right there, everybody. We'll be back to wrap up Google Cloud Next 2019. My name is Dave Vellante, with Stu Miniman and John Furrier. You're watching theCUBE. We'll be right back.