Live from San Francisco, it's theCUBE, covering Google Cloud Next '19, brought to you by Google Cloud and its ecosystem partners. Hello everyone, welcome back to theCUBE's live coverage here in San Francisco, at the Moscone Center for the Google Cloud conference, it's called Google Next 2019. I'm John Furrier with my co-host Stu Miniman, Dave Vellante is also here doing interviews. Our next guest is Pali Bhat, who's the VP of product and design for serverless at Google Cloud. Pali, great to see you. Thanks for coming on. Thank you for having me, nice to be here. You're the VP of product, you get the keys to the kingdom on the roadmap. You're seeing all the announcements, obviously serverless, Cloud Run was announced, Cloud Code was mentioned on stage, that's going to come out tomorrow, so Code, Build, Run, this is DevOps, this is actually happening. Yeah, you know what's super exciting is that we're finally solving this problem for customers and taking a customer-centric view of it. I'll start off with a little bit of the journey we took to get here, right? As we were talking to customers, they kept coming back to three things that they wanted from us. The first thing they wanted was agility. They understand that cloud could give them great cost savings, but they also wanted to be able to move faster and innovate, right? The second bit they wanted was the flexibility to be hybrid and multi-cloud, super important, especially to our largest customers. And then the third piece was they've really struggled with this journey to cloud, and they wanted our partnership to make it a much more seamless and non-disruptive journey. So as we talked to them about these three things, we came back to the drawing board and said, hey, what are the products that we can build to make their journey to being more cloud-native and more agile much more seamless and future-proofed, that much better, right?
So we came back to the drawing board and came up with the three products that you talked about just now. The first was, we looked at developers and their journeys and we said, look, they're building in traditional IDEs like IntelliJ or VS Code, optimized for local development, right? And they're not writing a lick of YAML there, right? For Kubernetes. And we said, okay, how can we take those environments and help those development teams build cloud-native apps really, really easily? So really just turbocharging their cloud-native development. So we built Cloud Code, which extends their local IDEs and lets them deploy to remote clusters, so they get full debugging, full deployment, full builds, it's integrated with Cloud Build, and they get the full Kubernetes development environment right in place, yeah. So Cloud Build was released earlier, you've got enhancements to that. So the hard news here is: enhancements to Cloud Build, Cloud Code newly announced here, Cloud Run announced today. That's right. So this is the new, hard news. That's right. Okay, so bottom line, what does it mean for a developer? So say I'm an enterprise. I'm a CIO, I'm a CISO, I'm going to be putting all my eggs in the cloud basket. I'm still going to run the on-prem. Data is going to be critical to my strategy. Is this early-days setup time, or is it more of the lifecycle of CI/CD pipelining all the way to application deployment? Great question, John. So I think where we are in this journey is that enterprises have started off with the most basic cloud-ready workloads that have been lifted and shifted. We now see the next wave of workloads. This is the 80% of workloads that are still on-premise. We see them start to get cloud-ready and cloud-native.
And the way that enterprises are going to do that is by building on top of the standards we've created, like Kubernetes and Istio and Knative, and with Cloud Code and Build and Run, and of course Anthos that we talked about this morning as well. These are great, fully managed solutions from Google that let you get cloud-native fast. Yeah. All right, Pali, I wonder if you can help us squint through this, because I see a disconnect in the market. So Google showed great leadership in the container space, and of course Kubernetes came out of Google, and when I look at Cloud Run, okay, it's helping to connect that and Knative to Kubernetes and serverless. But when I talk to a lot of the developers in serverless, it's not the infrastructure moving up the stack, it's that they didn't want to even think about it. It's built in the cloud, I focus on the application, I don't even think about that. So I've got this big gap as to, you know, on-premises, forget it. I never want to touch it or think about it. And, you know, there's the term serverless, put it to the side, but if I need to manage this environment, I don't want to think about it, and we know hybrid is a reality. So there's this big disconnect as to what kind of developer are you: are you a DevOps person that came from an infrastructure background, or are you just building apps today? Yeah, we're definitely seeing that from our customers, right? So one thing that we hear all the time is that developers don't just want to not think about infrastructure. They actually want the managed service and the platform they're building on to think about the infrastructure and optimize it for them. So it's not just programmable infrastructure, right? It's Cloud Run programming the infrastructure for you so you don't have to do it.
And I think increasingly you're going to see products like Cloud Run and Anthos and Cloud Code let developers focus just on code, because that's what they want to do, right? I've never seen a developer say, I really want to write a YAML file, or, I want to set up more configuration parameters, right? So I think we're going to get to the place where you have developers being able to focus on code, and all of the rest of this being taken care of by platforms like Code and Run and Anthos. Automation becomes key. I mean, Jennifer Lin's demo I thought was very game-changing, because she made a comment that developers can focus on their code and agility, not access permissions and all the configuration management that goes on under the covers. You guys are going to provide that in an automatic, programmable way, programmable APIs. And she kind of teased out service meshes. So service meshes kind of point to the future, which is that app developers are still going to need to be aware of, or maybe not aware of with Cloud Run, how to manage those services as they stand up and get pulled down dynamically. How do you view that? Because this is becoming complex. Is that going to be automated? Is that where Cloud Run comes in? Can you expand on this whole impact of service meshes? Because that's the next level. That's right, that's right. So if you think about Knative, it's built on Kubernetes and it forms a kind of triad with Istio as well, right? And what a product like Cloud Run does is it lets you not have to think about that. Because at the end of the day, we don't want developers to have to think about Knative. What Cloud Run does is take care of the Knative portability and compatibility for you, and all you do is focus on the code itself, right? So ultimately we want developers to focus on their applications. But I will say this, right? We do care about another important constituent, which is all of those folks who've already got apps built out there. Can those workloads be serverless as well?
And that's part of the problem we're trying to solve with Cloud Run. That's an operational thing. All right, so let's take a step back here. So serverless, obviously the fanfare has been great. We're seeing a lot of traction. People are enamored by it because functions as a service has been very compelling, whether it's retail managing the spike loads, we see some use cases where it's really an amazing thing. Where is it limiting? What is the next level of growth for serverless? Where do you see, you mentioned workloads. And we've seen people deploying functions and being happy with them. Are there limitations with serverless? How does it go to the next level? Can you take a minute to describe the current state of serverless and what's coming around the corner? Great question. The first thing I'll say is that there's a ton of developers who come up to us every day and tell us Cloud Functions is awesome, right? And they really like functions as a service. They like the event-driven approach to it. They like the serviceful approach that serverless provides. Love the programming model. That's great. Now, there's another large contingent of developers who tell us, look, this is super constraining for what I want to do. I don't get to choose the libraries I want. You're forcing me into a particular programming model. Can you give me more flexibility? And what they see every day is the flexibility that containers provide, especially on Kubernetes, right? And what we've tried to do with Cloud Run is bridge those worlds, where you get all of the flexibility that you want, right, that you get with containers, but then combine it with what you really want in the operational model, which is serverless, right? So you pay only for what you use, and of course you get the agility of serverless as well.
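[Editor's note: to make the functions-as-a-service programming model Bhat describes concrete, here is a minimal sketch in the style of a Python HTTP-triggered Cloud Function, where the platform passes a request object to a plain callable and uses its return value as the response. The `name` query parameter and the greeting are purely illustrative, not from the interview.]

```python
# Editor's sketch (not from the interview): the functions-as-a-service
# model. An HTTP-triggered function is just a callable that receives a
# request and returns a response body; provisioning, scaling, and the
# event wiring are all handled by the platform.
def hello_http(request):
    # request.args holds the query parameters (dict-like); "name" is a
    # hypothetical parameter used only for illustration.
    name = request.args.get("name", "World") if request.args else "World"
    return f"Hello, {name}!"
```

The appeal Bhat cites (event-driven, no infrastructure to think about) and the constraint he cites (a fixed programming model, limited library choice) are both visible here: the function is tiny, but its shape is dictated by the platform.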
Now, one thing that we've noticed, and we've heard some great stories about this, is a customer of ours, Veolia, which is one of the early adopters of Cloud Run, and they've been partnering with us. We thank them for it. They are running a complex workload. You talked about retail. Veolia is a large French multinational doing energy, water, and environmental services. These are things that need to be highly reliable, very complex, and these are workloads that have existed for ages, right? And what Veolia is doing is using Cloud Run to run that complex workload, but in a serviceful way, running in a serverless fashion. All right, take a minute to explain: what's a complex workload by your definition, and what is a simple workload? Because, again, we love functions. Stu and I always talk about how great they are, but what's the demarcation line? When does it become complex by your standards, where you guys are addressing it? Describe the characteristics of a complex workload. So the first thing is, does the workload require flexibility, right? Meaning, are there custom workloads, sometimes even legacy C++ or C applications, that they need to pull in as well? Do they need to pull random artifacts from across the enterprise and combine them? And sometimes these are things that were built over 20 years ago, right? But they're really critical, mission-critical pieces of software that need to be able to trigger and run, right? And can we actually take that flexibility, right, but also combine it with a highly reliable environment, right? So for workloads like Veolia's, there is no downtime, right? They need to be up 24/7, 365 days of the year, right? So that flexibility plus that level of reliability is what we look at when we look at complexity. So you're getting into complex systems where you've got some code maybe written on a mainframe, COBOL, C++ you mentioned.
That was my day, I'm kind of dating myself, but that was state of the art back in the '90s. So I'm running an agile shop, maybe standing up cloud-native, but I need to use software and data from a system like that. Is that where the container piece comes in? Is that where Kubernetes really needs to happen? Yeah, they can run it on Kubernetes, but Cloud Run also supports Docker. So let's say you're running it in a Docker container. All you need is a Docker container image, and we can host that workload on Cloud Run. Yeah, Pali, help us understand what's the same and what's different about Google compared to the other serverless offerings out there. The feedback I've heard the last year or two is that the great thing about serverless is it's really easy to get started. I've talked to marketing people with no coding background who can get off and running with it. But for doing complex, mission-critical stuff, we understand there is no magic wand in IT, no silver bullet to make it easy. So what do you see as Google's role in this broader marketplace, and where does open source fit into that too? Yeah, so first I'll start off by saying there's a whole host of functions running on Cloud Functions which are relatively lightweight, simple, targeted, event-driven functions. Those work great. Where we see ourselves really making a difference for our customers is in two ways. The first is to take these more complex workloads that are currently running in a container, whether it's a Docker container or on GKE, for that matter, and bring the agility of serverless to those workloads, so that's the first thing. It's something that we think is very unique, because it combines containers with serverless. The second bit really is the open approach we've taken, built on top of Knative. Knative, as you know, has a number of partners.
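[Editor's note: the "all you need is a container image" point above rests on Cloud Run's simple runtime contract, which is that the container's process serves HTTP on the port named in the `PORT` environment variable. A minimal, stdlib-only Python sketch of a service satisfying that contract follows; the greeting text and function names are illustrative, not from the interview.]

```python
import os
from wsgiref.simple_server import make_server

# Editor's sketch: Cloud Run's contract is essentially "listen for
# HTTP on the port given in the PORT environment variable". Any
# container image whose entrypoint does this can be hosted, which is
# how existing Dockerized workloads come along without a rewrite.
def app(environ, start_response):
    body = b"Hello from Cloud Run"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

def serve():
    # Cloud Run injects PORT at runtime; 8080 is the conventional default.
    port = int(os.environ.get("PORT", "8080"))
    with make_server("", port, app) as httpd:
        httpd.serve_forever()

# A container's entrypoint would call serve(); the handler itself is
# plain WSGI, so the same image runs unchanged on any host.
```

Because nothing here is platform-specific beyond reading `PORT`, the same image can run on a laptop, on GKE, or on Cloud Run, which is the flexibility-plus-serverless combination being described.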
So one of the cool demos that you'll see during Google Cloud Next is a workload being shifted from Cloud Run on GKE to the IBM Cloud, IBM being one of our partners for Knative, without changing a single line of code. And that flexibility is something that I think customers really desire. Talk about the business benefit, talk about the benefits at the business level, at the developer level, at the operations level. Can you hit those three points of serverless, Google serverless, on those three sectors? What are the benefits? Yeah. So we talked about the benefits for developers. For developers, it's simply about agility. Focus on your own code. Don't worry about YAML, don't worry about Knative. You don't have to worry about any of that. We'll take care of it for you. The second benefit that I'll talk about is, again, the benefit for the CIO, which is, hey, we're going to give you the flexibility and the openness so you can have portability of your workloads across whatever environment you want, whether it's on-prem or in a cloud, whether it's Google or another cloud. That's the second benefit. The third bit is all of the operational benefits of serverless. One of the things you'll see us do, and continue to commit to, is we'll bill you to the nearest 100 milliseconds, right? And so you'll continue to get that, with all of the resiliency you expect of Google infrastructure. Security's also pretty much baked in as well? Security is baked in. It's a fully managed offering from Google, and so you get security, compliance, policies, all baked in. Of course, we watched the keynote, and we watched every word from Kurian, giving Diane Greene a little tip of the hat, which was a great signal, a lot of class, great respect for that. But Jennifer Lin said something I want to get your reaction to. She was kind of talking about her thing, doing a great demo, game-changing, and she said, this would allow you to negotiate better contracts.
Okay, that might have been a slip of the tongue. Your reaction to that? That implied to me, I took that and said, whoa, that means leverage shifts to the customer. Your thoughts on that comment, maybe it was a slip of the tongue, but it's saying that I can have options and choice. You know, Jen is spot on. This is what customers want. And at Google, what we're focused on is giving customers what they want. And one of the things that customers are worried about today is lock-in. Especially in the serverless area, because the current offerings are so proprietary, customers are worried, because they want serverless for all the benefits we talked about, but they do want that flexibility. And that's what we'll give them. So on negotiating contracts, say, we know Oracle's very strict on their cloud. This is going to give customers choice when, say, a license renewal comes up? That's what you're getting at here. So Pali, when you talk about choice and flexibility, Kubernetes gives some of that. The concern with serverless is, if I look at Azure, if I look at AWS, if I look at Knative, those three aren't the same. There's a small startup called TriggerMesh that's getting Knative to work with AWS Lambda. But do you see a future? I've talked to the CNCF, I've looked at some of the various pieces, where serverless isn't just something that's baked into one cloud? Yeah. Look, I think we've seen extraordinary momentum around Knative that is very similar to what we saw in the early days of Kubernetes. There's a huge amount of ecosystem interest. And so we'll see continued innovation, where you'll see workload portability come to serverless. And I'm confident in that because of all of the momentum we're seeing around Knative. So we're committed at Google to Knative and its success, and you'll see us continue to innovate. Talk about open source. Obviously open source becomes a very strategic part.
You've mentioned Kubernetes, which you guys have the DNA for, the founding fathers of Kubernetes. Now some of that team went to VMware, some went to Microsoft, some stayed within Google. Containers, certainly we see what you guys have done with Borg and that trajectory. But there's still this fear around open source, and I don't mean that it's going to be inhibitive, but support, making sure SLAs work, latency, microservices are going to be involved. You mentioned Knative. So as open source accelerates the time to value for the code, that also raises the stakes on serviceability and reliability and support. What are your thoughts on that? How do you see the industry supporting that critical piece of the puzzle? Yeah, it could not be more critical for customers to be able to adopt this. Because the number one thing that we need to do for customers is give them a managed offering that lets them not have to worry about security, lets them not have to worry about compliance, lets them not have to worry about policies or identity, et cetera. Bake all of that into the managed service. And then the second operational bit, which is as important, and this goes to what Thomas talked about at the very end of his keynote with his open source announcement, is that we want to make it simple for customers to adopt. It'll be supported by Google and the partner. You'll get unified billing, unified support, and one person to call when you have a problem, right? So, Pali, we're at an interesting point in open source today, and I want to get your opinion as a product person on your relationship with open source, because there's a certain cloud out there that will give you open source as a managed service, but you have some of the companies that are making open source databases changing their licensing policies to try to fight against just being taken over by some of the big players. How does Google react to that?
Yeah, for us, the approach is all about partnership, because we think together we can best serve customers' needs. And so our approach has always been about partnership. So, whether it's Kubernetes or Knative or the larger managed open source offerings that we talked about earlier in the keynote, we want to bring all of these together so we can serve customers. So you're going to see us continue to support the open source ecosystem, because we believe that innovation is absolutely critical to helping our customers really start to innovate and be agile. Final question, I know we're tight on time. I want to get this in because, obviously, a lot of positives have come out of the show. There's been some critical analysis around, we've got to build up salespeople and all the field stuff, which you guys are well aware of. But one of the things that was kind of teased out in the open source announcement was the role of Google having its own ecosystem. Obviously, the CNCF has been a big tailwind for Google. You guys have been a big part of that ecosystem as a commercial cloud provider. And with these kinds of serverless offerings, you're going to have an ecosystem starting to develop, kind of a thousand flowers blooming, pun intended. So how do you see that in your area? Because this is going to be super important: partnering, ecosystem support, which is developer traction, distribution of software, integration opportunities, and monetization all kind of coming together. Your thoughts? Huge, hugely critical for us, and that's something that we've been focused on. We have a rich ecosystem of partners for serverless. We're going to continue to build it out across all of the different pieces you need. One of the things we didn't talk much about was our entire operational stack: monitoring, logging, all of those pieces. And we need to bring all of those together, along with all of our partners. We have a big partnership with the likes of Datadog, right?
And a number of others. So we're going to continue to partner with the entire ecosystem so we can go solve the problems that they have. Are you guys going to show them the white space where they can play? Is that going to be part of the strategy? Yeah, it's going to be across the board. You'll see us continue to support the Knative ecosystem tremendously and lean into that. And we're already excited to see all of the different offerings that exist on Knative. Same thing with Kubernetes. We're going to continue to press hard. On the operational side, we've got an offering called OpenCensus, which has got lots of traction. Again, it's open monitoring of applications. So we're going to continue to do that across the board. Yeah. Pali, great to have you on, Vice President of Product and Design. You've got the keys to the kingdom right here. He's the man who's running the show for serverless, really the key part of how Kubernetes intersects old and new to create the next generation of applications. Thanks for joining us and sharing the insight. I'm John Furrier, Stu Miniman here, live coverage, Google Next. More coverage after this short break.