Welcome back everyone, Google Next here, live theCUBE coverage. I'm John Furrier, host of theCUBE, with Rob Strechay, Savannah Peterson, Rebecca Knight all here. We're getting all the action, day two of three days of wall-to-wall coverage. It's great to see Google Next. We've got a CUBE alumni here, Hen Goldberg, Vice President and General Manager of Kubernetes and serverless at Google. She's working on all the cool stuff that's sitting right above all the CPUs and GPUs. Hen, great to see you. Thanks for coming on theCUBE. Thank you so much for having me. It's always a pleasure. It's been quite a journey. We were talking before we came on camera that Kubernetes is celebrating its 10th anniversary. We were reminiscing, looking at old footage from 2017 CUBE interviews. So much has changed, but there are a lot of similarities between Kubernetes and AI. But first, congratulations. Well, congratulations to all of us for Kubernetes. For all of us, definitely. You know, it's an amazing testimony to the community, because I think it's not easy to sustain a project for so long. And I really remember those early days thinking about, will there be a Kubernetes 2.0? And what will migration look like? And our North Star was really to avoid those kinds of migration projects for our customers and users. So I think the community has done an amazing job. Yeah, a lot of debates, but a lot of solidarity; a lot of great community came together. Now AI's got a similar kind of feel, in the sense that you can see the future, you get it, it's coming together, it's unfolding right in front of us. And you have a role, and you have a team, building out on top of Kubernetes, everything from containers to serverless. Take a minute to explain what your current role is now. How are you fitting into this? I won't say new Google, but let's say Google Cloud. I mean, you've got a public sector CEO, a board of directors. You have all this horsepower now.
You're starting to see the layers of the stack and the workspaces, the user experience. What does your team do? How do you fit into the Google Cloud equation? So my team is responsible for what we call modern runtimes. Meaning, when you want to bring your workloads to the cloud, no matter if it's a traditional workload, an AI workload, any modern workload, we are helping our customers manage and operate that at scale. It probably sounds familiar with Kubernetes. And we have two main offerings that we do it through. One is Cloud Run. Think about it as container as a service, the easiest way to build applications on Google Cloud. And the other one is, you know, when you're looking for something more customizable, more flexible, you want to build your own platform on Kubernetes, GKE has got you covered. And within Google Cloud, that's exactly the role that we are playing. We are helping our customers run their digital transformation, innovate, create new experiences. And now with AI, it's just amazing to see what's possible. Things are coming together. What are the keynote highlights that you got excited about, that your team's working on, where you see opportunities for more innovation? So the first thing that I will highlight is the opportunity with AI to improve our technology teams' velocity. We made one announcement, Gemini Code Assist, a few months back, and I'm sure you're going to talk about it with Gabe. But we also announced Gemini Cloud Assist, which is about how you take Gemini and create Gemini-powered insights and recommendations within the console, and just help me be more productive, be more effective. So that's something that I'm really excited about. I'm asking my team to become more effective and productive and really leverage AI. And we are doing it in the context of testing and coding. So I think that's one area which is super exciting.
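To make the "container as a service" idea concrete, a Cloud Run deployment typically comes down to a single command. The service, image, and project names below are hypothetical placeholders, and the command is only assembled and echoed here rather than executed, as a sketch:

```shell
# Hypothetical names for illustration; substitute your own service,
# Artifact Registry image, and region.
SERVICE="hello-api"
IMAGE="us-docker.pkg.dev/my-project/my-repo/hello:latest"
REGION="us-central1"

# Cloud Run takes a container image and returns a managed, autoscaled,
# HTTPS-fronted service -- no cluster for the user to operate.
CMD="gcloud run deploy ${SERVICE} --image=${IMAGE} --region=${REGION} --allow-unauthenticated"
echo "${CMD}"
```

The contrast with GKE in the interview is exactly this: with Cloud Run the platform owns the nodes, scaling, and TLS; with GKE you trade that simplicity for control over the cluster itself.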
It would seem that it's also with Cloud Run and with Kubernetes and getting up and running. A lot of people don't necessarily want to run their own kit right now. Is that where you're seeing the biggest uptick from a Cloud Run perspective? I think what's happening is that five years ago, many wanted to build their own platform. But the world is changing. There are more things we want to do, there are more investments we want to make, and it's actually harder to hire great talent. And I talk with different customers, I'll give some examples. And those VPs, the CIOs that I talk to, they are saying, I want my team to invest where it matters, where we want to innovate with our unique IP. I'll just give two examples. So we had the founder of LangChain, which is a very successful open source project, Harrison Chase, great guy. They did a benchmarking on what platform would be the best, and they decided, we're going to go with Cloud Run. Why? Because it gives us the velocity, it's the best for us. And we're like, yay. But on the flip side of that, I'll give another example. Today I had a session, my spotlight, where we were talking about the new things we are working on, with Farhan Thawar, he's a VP at Shopify, and we've been working very closely together. About a year ago, I was asking him, hey, what about Cloud Run? And he's like, yeah, you know, we always do our own stuff, we build our own. And I'm like, what about trying it? And he's an amazing leader and his team is amazing. And they tried it out, and now they're like, yeah, we're all in. Really? They went from tire kicking to all in, just from a taste test? It's not about me, right? I think it's about them, where they invest their time. And we see enterprises that just feel that value. So I think that we will see more and more use of managed services. We can see it with Vertex AI as well. And I love it, because we would like to build platforms and build services.
And I love to see what it enables others to do. Why is Cloud Run working so well for folks? What's the reason? Simplicity? Simplicity, oh my God. You see me smile, right? Yeah. You know, every time I think that we made it the simplest ever, the team comes up with another way to make it even simpler. So this week we announced Cloud Run application canvas, where you can use natural language, say what you want, and behind the scenes, powered by Gemini, we're creating the architecture, all the APIs enabled, everything you need. You just click deploy, and voila. Magic. It's not magic, it's technology. But it works and it makes things simple. Well, AI gives a feeling of magic. I mean, that's what ChatGPT did to the whole world; it was just a response to a query, but it was generating a response. That's what people want. They want ease of use in coding. How's that going to impact new application developers? Is it low-code, no-code? Do you see it being much more about creativity? What's that going to do for the developer? Obviously, it's going to create some value, save some time. I have lots of ideas around that topic, but maybe going back to Kubernetes' early days, I think the most important thing that we are seeing people do right now is experimenting. And there are a lot of areas where they are experimenting. They are experimenting with frameworks. They are experimenting with use cases. They are experimenting with which models to use; we see a lot of options, how do I decide? And I think that in the next, I don't know how long it will take, probably not long because everything is moving very fast, we're going to see a lot of learnings and experiments, which will then tell us what that developer experience will be. What will people expect? And from an innovation perspective, I will say that we see a lot of AI startups that are building their own models, and they are solving problems in very creative ways.
In the back of my head, I'm thinking that for this to really change the world, we'll need to make sure that the technology is accessible. And maybe this is the last thing I will tie back to Kubernetes. When we started that journey, one of the things that excited me is that through open source, we made this technology accessible to people. So no matter where in the world I'm from, what kind of education I have, I can try it and scale. We enabled a lot of experiments and learnings; we didn't try to solve all of the problems at once, we went slowly. I think that will be key for such a technology. Yeah, I think one of the things, I was over at KubeCon in Paris, and one of the really interesting things was that, it's called KubeCon CloudNativeCon, but it looked like the CloudNative part was really becoming more front and center, and it was the ecosystem around that. How does that really impact what you're building, and how do your teams work with all of the different ecosystems around Kubernetes in general? I think that before everything happened with AI, our goal was to keep Kubernetes boring. Okay, I think that's one of the critical success factors in getting to the 10-year anniversary. And we've done that by creating those abstractions and really enabling that ecosystem that you're talking about, which I think is crucial for any innovation. What I think is now happening, though, specifically with AI, and I'm sure you also heard it in Paris and we'll hear about it at the next one as well, is that Kubernetes and containers have some key attributes that are a great fit for what's happening. Containers are great for innovation and moving quickly. We have an abstraction over the infrastructure, so it's a good way to manage GPUs and TPUs. And of course I have orchestration, which helps me optimize. But together with that, there are some new things that are needed. The scale is different, okay?
Like, we always take pride in our 15,000-node clusters, but then we have customers, like Character.AI as an example, that need a much bigger cluster. Then what? How do we do that? How do we work with them? So I think this will be interesting. Moving forward, you mentioned accessibility. Obviously you've been dealing with all of this as it develops all the time. You get performance, availability, scalability, maintainability; those are all the usual conversations. The accessibility is good, but it also brings up two other factors that are part of AI. I call it the glue layer, and it's emerging relatively fast: that's security and compliance. So governance has become a big topic, because if you get the governance right from day one, a lot of the data can scale and be fast and go into AI much better, with more safety, more security. So talk about the security and compliance that comes along with accessibility. How do you see that unfolding? Or is it too early for that conversation? I think it's a great point. By the way, it's definitely an area that we invest a lot in at Google, at every layer, okay? So my team, of course, is focusing more on the containers and the runtime from that perspective, and the data team as well. But of course we care about data in transit. So there are a lot of factors there, who has access. So that's one piece. Another piece, especially when we talk about sovereignty, for example, is that a lot of people are worried about what kind of access we will have, as the cloud provider, to things running on our cloud. So that's another angle that we are working on. But we continue with: your data is your data. And container security, I mean, the big conversation that's always been there is supply chain; SBOMs have been a solution. Any update there? I'm just curious, since you brought it up I might as well ask. So we are working on that for sure, and just to call out a couple of things: from a container security perspective, a lot of things are staying the same.
But what is becoming interesting is that with MLOps, the toolchain is changing, okay? So we're working through that for sure. So those are the kinds of things that we are investing in right now. And it seems like, as the ecosystem grows, there are new personas coming in, like the persona of the platform engineer, and we were kind of talking about that earlier. How do you look at addressing that? The skills are changing, and platform engineers sometimes have a much broader view; they're more than just Kubernetes. How do you see that as you build out services like Cloud Run and GKE? So first of all, I think this is a super exciting time for a platform engineering team. Because I think, at the heart of it, if you're a platform engineer, you want to enable innovation, and innovation is happening, okay? We see all of our customers thinking, how can they take AI, how can they empower their developers to build game-changing experiences? So I think that's very exciting for everyone in the field. Our goal, also with GKE by the way, is to make things as simple as possible. So if there is a problem or a challenge that we've already solved for you and we can automate, we'll do it. I'll give you an example. The Ray framework, which is being used by many AI developers for their workloads, is actually pretty complicated to get running on a GKE cluster or on any Kubernetes cluster. Now, for us, it's a checkbox. Okay, we automated that. Why would you invest the time? It's a problem that we can solve for you. You want to enable TPUs? It's a checkbox. There are other problems that we are solving specifically for AI workloads, which I think is again interesting. One of the challenges, especially in inference, is that the container images are much bigger. Sometimes they have the model in them, and if you want to scale out quickly, you need to take the cold start time into consideration. So what do you do if you don't have a solution? You over-provision.
But GPUs are very expensive. You don't want to over-provision. So we are creating a lot of new mechanisms in the platform to do that. For example, we're doing image preloading, and Vertex AI, which is also running on GKE (we like to drink our own champagne), has seen a 29x improvement because of that. So we are making an effort to solve everything we know at scale. And I think if I'm a platform team, my role hasn't changed. My role is to enable tens and hundreds and thousands of engineers to build new AI applications, and I shouldn't forget about all the other workloads that are already running. Okay, so my job, maybe from that perspective, is becoming more complicated. And so, tell me about your team. How big is it? How are you guys organized? Do you organize by technology? Do you organize by groups? Let's see, can you say, or might that be confidential? We don't talk about size; it's a large team. Okay, that's okay. But you've got a lot going on. You've got the containers, a big part of the toolchain. You mentioned MLOps. This is going to be the hottest area on the planet, because you're going to have models coming in. So the way we are usually structured is that every team thinks about what their role is in Google Cloud. And as you said in the beginning, our expertise is in that runtime piece, orchestration, and integration with other systems. So we are not building new MLOps, but we will be enabling it, creating new toolchains as an example. And I believe that we are doing a good job enabling other platform teams for AI workloads, because just last year, we've seen growth of 900% in GPUs and TPUs on GKE. 900% in one year. So I guess we're helping that innovation happen. And your North Star goal, like with Kubernetes, is to make things simpler. What's the overall goal, the vision you're trying to achieve? Is it to have the most cohesive integration layer? The fastest runtime?
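As a hedged sketch of the image-preloading idea described here: one mechanism GKE exposes for this, as I understand it, is a secondary boot disk attached to each node that acts as a container image cache, so scale-out doesn't wait on multi-gigabyte inference image pulls. The project, cluster, pool, and disk-image names below are hypothetical, and the command is only assembled and echoed, not run; verify the current gcloud flags against the GKE docs before relying on them:

```shell
# Hypothetical project and cluster names for illustration.
PROJECT="my-project"
CLUSTER="inference-cluster"

# A disk image prepared ahead of time with the large inference
# container images already baked into it.
DISK_IMAGE="projects/${PROJECT}/global/images/model-image-cache"

# Nodes in this pool attach the disk as a local container image cache,
# cutting cold-start time instead of forcing you to over-provision GPUs.
CMD="gcloud container node-pools create gpu-pool --cluster=${CLUSTER} --secondary-boot-disk=disk-image=${DISK_IMAGE},mode=CONTAINER_IMAGE_CACHE"
echo "${CMD}"
```

The design trade-off matches the interview's point: preloading spends some cheap disk to avoid keeping expensive, idle GPU replicas warm purely as cold-start insurance.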
So in essence, I would love to see customers use managed services as much as possible, because you don't need to reinvent the wheel. And I want to make sure that if you need more complexity, if you need more flexibility, we are not creating barriers for that customer. So really allowing innovation to move on. So we are really investing in what we call interoperability. We started talking about GKE and Cloud Run, but now we're also talking about GKE and Vertex. Maybe I will start my work with Vertex, and maybe there will be one use case where I need inference and I want to be really specific about what kind of utilization I get. You know what, maybe GKE is a great fit. Can I use them together? Can I use my Model Garden with it? Yes, I can. Can I use Colab Enterprise notebooks? Yes, I can. So thinking about that, from my perspective, is how we enable innovation. I mean, distributed computing is the paradigm; making things work together is going to be a big deal. Final question for me, and Rob might have a couple more: what's your business plan for the year? Obviously, we just came back from KubeCon EU in Paris, and we've got North America coming up in the fall. What are your business objectives and goals for the year? Can you share your plans, what you're trying to do, and what you hope to accomplish this year? So the good thing is that our mission, our vision, hasn't changed. Meaning, for the past few years, and you know I can maybe go back to the beginning of Kubernetes, it has been about enabling innovation. And maybe there are different challenges and maybe different workloads, but in essence, it's about making things easy, okay? Comprehensive, okay? You don't want to stitch things together for no reason, and that's something we take a lot of pride in at GCP. Can we integrate the entire stack? I don't want to build point solutions. It has to work together, and it has to be reliable, okay?
We are a cloud provider. You should trust us with the most important workloads. So across that, we continue on that mission. You should expect us to continue to invest in security; it's a high priority for us. The second thing is supporting AI workloads. And the last thing is, how can I scale it within the organization? I would love to see more enterprise customers, more traditional workloads, benefiting from all those new technologies. And the commercial opportunities are great. Even the public sector; we had the new CEO, Karen Dahut, on earlier. I mean, all this AI, I think about the government, Rob. All those procurements, paperwork, inadequate processes. Well, I was going to say, part of what you have to be concerned with is that Google's not just running in Google Cloud now. You have the distributed cloud, and you have sovereign clouds, and you've got all these certifications now with the government, with the agencies. Do you see that as just another set of requirements coming in, and how do you keep that simple as well, I guess? So first of all, we have a special team that is focused on that, and their role is doing exactly that: how can we take all that power, all those amazing things that are coming from GCP, and make them accessible within that set of constraints? Yeah, that makes total sense. My last question would be: when you're on stage with us next year, what do you hope to be able to say that you can't say today? I would love to see more and more customers running AI workloads at that scale. Okay, we see a lot of the startups innovating now, but I expect this will change how we live our lives. So I think that would be amazing for sure. And from a technology perspective, I think it would be interesting to think about what that AI platform will look like.
But you know, the painful truth is we're not great at predicting the future. Well, you've done an amazing job. We've loved you on theCUBE. And again, 10 years ago Kubernetes started, and the CNCF wasn't around then; when they picked up the project a few years later, you were a big part of that success, and the community is glad to have you. We're glad to have you on theCUBE as a CUBE alumni. You're like a contributor. You're like an analyst for us. You're running all the good stuff at Google. I think, yeah, the key part there, from my perspective, was focus, and optimizing for learning, and really working with customers. That was, from my perspective, something actually pretty new that we've done in an open-source project, both creating focus and thinking about it as a product, in a sense, and that's my philosophy for innovation. Well, I think you're on the right track. I've been saying on theCUBE, like a broken record, generative AI is just distributed computing with the notion of a runtime assembling things. And this is that bit around computer science principles, so it's kind of another shot in the arm for computer science and engineering, and with low-code, no-code tools, the barrier to creativity can drop right at the front lines. So you don't need to be a super coder. You just need computer science systems thinking, and you can make these things happen. So I think the revolution's legit. It's very bubbly right now, but it's a good bubble, not a bad bubble. It's fun. It's fun, it's a lot of fun. And thank you so much for your time. Thank you so much for having me. Hen Goldberg here on theCUBE, Vice President and General Manager at Google, covering serverless, containers, and Kubernetes. I'm John Furrier with Rob Strechay, Savannah Peterson, and Rebecca Knight. The Dream Team is here. Team coverage here at Google Next. We'll be right back after this short break.