Welcome back everyone to theCUBE's coverage here on location in Las Vegas for AWS re:Invent 2023. I'm John Furrier, your host. This is theCUBE's 11th re:Invent. We've been watching the transformation, the growth of the cloud, the growth of Amazon. Every year it's almost the same story: what's Amazon going to do? They're under pressure, and every year at re:Invent they set the narrative and have the last word of the year in the industry in terms of conferences, and they did a great job this year. Generative AI was center stage, but the subtext of that is clusters, inference, standing up a lot of compute and infrastructure so application developers can have a feeding frenzy on the foundation models and more in open source. We've got our next guest here who's going to help us break down the Kubernetes side and the container side: Barry Cooks, VP of Kubernetes and Container Registries at AWS. Thanks for coming on theCUBE. Yeah, it's my pleasure. Thanks for having me on. So we love Kubernetes. Everyone knows theCUBE is at KubeCon + CloudNativeCon, which has been kind of the industry trade show for open source developers. That includes companies like AWS and end users like Lyft, Intuit, Airbnb, all donating great software to open source as well as to the industry at large and to developers. So that's become the CloudNative world. But it's an interesting makeup. Amazon re:Invent is your show. And so there's an intersection between the open source growth of AWS's team and the success it's been having. Kubernetes has become the lingua franca. It's become the de facto standard. It's becoming more invisible, boring as they say, which is a good thing. It is. Give us the update on how Kubernetes is playing center stage in generative AI, with all this talk about developers standing up applications, new inference clusters, compute clusters. What's the Kubernetes story here at re:Invent?
Yeah, I mean, I think you're spot on. Gen AI obviously is getting all of the attention these days, and ultimately you really need a lot of cycles, compute, to run it. And we've seen explosive growth of that in Kubernetes, for EKS in particular, the Elastic Kubernetes Service here at AWS. One of the things that's been great about that is to see the level of success we're seeing amongst a number of our key customers. So at the show this year, we've had presentations from Adobe, who has their Firefly Gen AI, a text-to-image product backed with EKS, and we were super happy to work with them on making sure that they could scale up to the level that they needed, get the resources online, and scale back down to avoid excess cost. Same thing with Anthropic: they came out this week and pointed out that using some of our technology, including Karpenter, something that we just donated to the CNCF in the last few weeks, they were able to drop 40% in their cost month over month as soon as they enabled Karpenter on some of those really big Gen AI models. And so we're seeing a lot of success in that area. What was that project you donated to CNCF? Karpenter. So Karpenter is a node auto-scaling project. When we started it, one of our mantras — and you were talking about the CNCF ecosystem and the landscape, how it leads to a lot of coordination and cooperation across all these different boundaries, including hobbyists — we like to say Kubernetes is bigger than just us. Yeah, totally. And so when we started the Karpenter project we chose to do it completely in the open. It has been in an aws/ repo now for a while, and as we've matured it we really feel like it's gotten to the point where it's high leverage, high value to folks, and Anthropic is a great example of someone who's seen that value in the real world.
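The core idea behind a Karpenter-style node autoscaler can be sketched in a few lines: given the resource requests of pending pods, provision the cheapest instance type that fits them, rather than growing a fixed node group. This is a minimal illustration only — the instance names and prices below are made up, and real Karpenter considers many more dimensions (memory, GPUs, zones, spot capacity, consolidation).

```python
# Minimal sketch of the node-provisioning idea behind a Karpenter-style
# autoscaler. Instance types and prices are illustrative, not real AWS data.
INSTANCE_TYPES = [
    {"name": "small",  "cpu": 2,  "price": 0.05},
    {"name": "medium", "cpu": 8,  "price": 0.20},
    {"name": "large",  "cpu": 32, "price": 0.80},
]

def choose_node(pending_pod_cpus):
    """Return the cheapest instance type that can hold all pending pods."""
    needed = sum(pending_pod_cpus)
    candidates = [t for t in INSTANCE_TYPES if t["cpu"] >= needed]
    if not candidates:
        return None  # no single node fits; a real autoscaler would split the set
    return min(candidates, key=lambda t: t["price"])

print(choose_node([1, 2, 3])["name"])  # -> medium (6 vCPUs needed, 8 available)
```

The cost savings Barry describes come from the reverse direction too: when pods finish, nodes that are no longer the cheapest way to run the remaining workload get consolidated away.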
We wanted to donate it to the CNCF, put it under neutral governance, which is what we just announced this week publicly, and continue to see it evolve, continue to see it grow and become this standard for how people can really pack in those workloads on Kubernetes clusters. Talk about the scope, the order of magnitude of compute that's involved. You mentioned Firefly, Adobe; I've seen a lot of image stuff going on. I mean, if you just go back five, six months, some of the capabilities being done in the cloud right now — say, compute for image generation alone — it's pretty spectacular. Inference: we heard at KubeCon Tim Hockin from Google say on stage that inference is the new web app, which was pretty pointed, because inference is going to be the iteration of how people engage with the applications. Okay, so all that's going on. Now, as developers need to stand up more infrastructure, there's the platform engineering role, and now a new role is emerging called the data engineer. And we're hearing that loud and clear at this re:Invent. It's kind of come out like, okay, platform engineering, we know what that is: shift left, DevSecOps, securing the pipeline. But data now has to be, not rethought, but architected and engineered to maximize the value for the generative AI pipelines. So that's data pipeline, that's engineering. Airflow is an app that Airbnb did; they donated that to open source. So a lot of stuff's going on. What's going on with Kubernetes and your role in open source to help get that new infrastructure engineered? What are some of the areas you see that are going to be important for that persona? I almost say middle layer, but infrastructure meets that abstraction with Kubernetes. That's a sweet spot; it's almost like a bullseye area. It is, and as you said at the beginning, Kubernetes is almost becoming boring in the sense that it's matured. It's a nicer way of saying it.
It's gone to a level of maturity where the amount of dependence on Kubernetes is now huge. So people look at this and say, that's the best way for me to get into the cloud. And what we like to talk about is that it is the front door to the cloud now. The Kubernetes API is how people want to get into the cloud and run things. So this gap that you're describing is really between, on one side, these operations teams, these experts at infrastructure managing it and running it and orchestrating things onto it, and on the other, this collection of different skills around data scientists and data engineers. And what sits in between those is typically going to be an IDP, an internal developer platform. That is the abstraction layer between these underlying sets of infrastructure and the components associated with them, and something a little more understandable and grokable by folks who are not actually infrastructure experts. And that's the key layer where we're seeing a real explosion. Adobe actually talked quite a bit about this in their talk this week. Well, I think that's a great call out and I want to just double click on that, because I think that's where I was getting at that action area. What we see happening on theCUBE — and our CUBE research team is doing some work in this area — is the power law of models coming out. So that's more the top end of the stack. And then now you have Kubernetes — okay, it's getting boring for the right reasons. It's a good thing. Linux is boring, but it's everywhere, right? It's not boring technically, but it's nothing new. It's great, it runs stuff. As Kubernetes evolves to maturity, the conversation shifts from "oh, this is how you stand up a cluster, here's how you do stuff" to more use cases. And generative AI is going to accelerate the use case conversation around what needs to be done, or working backwards from that use case, where now the Kubernetes maturity phase is: I've got clusters going up.
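The IDP layer Barry describes essentially translates a small, developer-friendly app spec into the much more verbose Kubernetes object that actually runs on the cluster. A minimal sketch, with a hypothetical spec format of my own invention (not any particular platform's API):

```python
# Hypothetical sketch of what an internal developer platform (IDP) does:
# expand a tiny app spec into a full Kubernetes Deployment manifest, so
# developers never touch the raw infrastructure object.
def render_deployment(app_spec):
    """Expand a minimal app spec dict into a Kubernetes Deployment manifest."""
    name = app_spec["name"]
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": app_spec.get("replicas", 1),
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [
                        {"name": name, "image": app_spec["image"]}
                    ]
                },
            },
        },
    }

# A developer writes three fields; the platform emits the full object.
manifest = render_deployment(
    {"name": "firefly-api", "image": "example/app:1.0", "replicas": 3}
)
```

Real IDPs add policy, secrets, networking, and golden-path defaults on top, but the shape is the same: a narrow interface in, full infrastructure objects out.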
They could be GPU clusters, they could be up on bare metal or in the cloud. So you have now this new thing going on where that next level is enabling the work use cases that are going to be driven by creative people doing low code, no code with Q or whatever. So you now have composability, simplification, productivity drivers — not how to set stuff up and mechanisms. I know Amazon loves mechanisms, but the point is that's the next level where Kubernetes is going, and it's got to be ready. How do you see that? It's a good point. So I think that there are fundamentals to this, and one of our base premises is: focus on fundamentals, focus on security first, focus on operations and operational capability second, and everything else after that. It has to be reliable, it has to scale, and it has to run well. That's been the focus for the last five years of Kubernetes in AWS. EKS turned five this year, so we were off celebrating with cakes yesterday. In that time, there's been this maturing of those operational processes, those scale processes, and this next tier is the simplification. How do you take it and make it more consumable by these sets of people who are coming in the door and saying, I can't even spell Kubernetes, but I need to run these workloads and I need them to scale out, and I have a sense of what I'm trying to do but I don't have a sense of how I can do that on the infrastructure? That's the next place. That's going to be the big push. Yeah, and this workload is going to come fast, so it's going to have to be fast. So the next question is, what's your reaction to the data conversation? Because I know Kubernetes is more of an infrastructure thing in cloud native, but the data piece is coming in quick. Inference is going to be in these clusters. So does that impact much what you're doing — the whole data tsunami, the data engineering? Does that impact your area? It does have some impact.
I mean, I think in general if you look back in time you would say that oftentimes Kubernetes was amazing for stateless. It was really great for stateless, and you didn't see as much of the connection to data. And now as you look at ML training in particular, but also inference, you actually need really tight, high bandwidth connections to data. And so that is a space where we've got a lot of focus now as well. And you see people looking with a strong interest in connectivity. We announced Mountpoint for S3 support for Kubernetes this week. It gives you a file system interface into S3 through EKS. So that gives you a really nice path into big, high bandwidth data applications like some of the ML ones that we're seeing. I can see a lot of those conversations here and at KubeCon. Well, it depends on what level of the stack you're on. If you're more toward the app, you just love it. And if you get a hallucination, we'll deal with that through model integration, maybe some testing. But when you get down into platform engineering, there's almost zero tolerance for hallucinations. Yeah. Because we're talking about infrastructure here. So how is that being thought through? Or what do you think the marketplace is thinking about? What's the psychology of that persona right now, relative to the generative AI aspect? Automate what they know, as more of an automation concept? So I think what's interesting about things like Q — which was one of our launches this week, as you know — is that as you look at this, what you really want is to go from storytelling, which is how we've captured the world's attention with generative AI and large language models, to actually training models to be very specific and very technically focused on correctness. So you go from teaching a model how to tell a story to teaching a model how to debug a system or how to configure a system.
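The Mountpoint for S3 pattern mentioned above is worth making concrete: once the bucket is mounted, data-loading code uses ordinary file I/O instead of an S3 SDK, which is what makes it attractive for ML pipelines. A minimal sketch, assuming a hypothetical mount path that in a real EKS pod would come from the CSI driver's volume mount:

```python
# Sketch of the access pattern a mounted S3 bucket enables: plain file
# reads against a mount path, no S3 client in the application code.
# The mount root used by a caller (e.g. /mnt/training-data) is hypothetical.
import os

def read_training_shard(mount_root, key):
    """Read one object from the mounted bucket via ordinary file I/O."""
    path = os.path.join(mount_root, key)
    with open(path, "rb") as f:
        return f.read()
```

The point of the design is that training and inference code stays storage-agnostic: the same loop that reads local disk reads S3, and the high-bandwidth transfer concerns move down into the mount layer.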
And that's a super interesting switchover that I think is where the meat for the enterprise is going to be, which is: how do I simplify those kinds of workflows to really be game changing for operations teams? Yeah, yeah, yeah. And architecting and optimizing the workloads in the IDE. That came out huge in the keynote. Exactly. Much of the business side was looking for the co-pilot answer. But I think, you know, testing — you mentioned testing, that's a hot area. I just saw Bill Vass — he's going to be coming on today — at the registration check-in. He talked about the role of synthetic data and how hot that is right now. Is that being used at all on the infrastructure side? Is that not an issue for synthetic data? Do you see any of that going on? I haven't seen as much of that yet. I think it's early days still, and there's just a lot of experimentation. This is kind of the fun early days for these things, because it's a little bit of Wild West and a lot of "I'm going to go try this." And you see that level of experimentation. It's one of the things that we really like: the ability to do those experiments rapidly and iterate, and then change, and then come back with a new idea. Barry, I really appreciate your insights on the marketplace. Let's talk about what your role is at Amazon, VP of Kubernetes and Container Registries. Share with the folks some of the things that you guys have done. A lot of people might not know the work you guys have done in open source. A ton of this has happened; we've been reporting on it as well over the years. What are some of the big accomplishments you have? And what's going on now for you? I think there's a few things. So Kubernetes has gotten a little more boring, but it's still not that easy, and we talked a little about some of those changes. One of the things is that so many enterprises are using it that there's a mental shift in the enterprise.
And if you know enterprises, there's this tension that Kubernetes creates with them, because enterprises typically want to upgrade more or less never unless they absolutely have to. And Kubernetes, as an open source project — and a highly innovative one with a lot of people involved — moves very fast. And so these two things don't go together great. So one of the things that we just launched was extended version support for EKS. Our idea here was to allow folks to upgrade a little bit more on a cycle they define and not be dragged forward by the rate of change in Kubernetes. So you pick a version, you can stay on it an additional 12 months. It gives you 26 months of support. We will go take on the burden of CVE find and fix, so you can feel comfortable and safe staying on a version longer, work with your ISV partners to ensure you have the certifications you need, and then roll yourself forward. And we're building tools to help that roll forward as well. So it feels a little in the weeds, but it's been really well received. It's a culture thing too. Enterprises tend to solve complexity with more complexity, and that's not the way Kubernetes is going. It's being more of a hard top to enable the developers there. What about going forward? What's on your agenda now? What's on your roadmap? What are you working on? What are some of your goals? We talked a little about simplification, and I think that's really one of our goals. How can we work with our customers to understand what are the pieces you're putting together, and how can we help offload some of those? What AWS has been really good at in this ecosystem has been to say, what are your problems, and how can we operationalize pieces of that so that you don't have to do that lifting? That's exactly what EKS was when it started.
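The support-window arithmetic Barry quotes works out from the standard EKS window (roughly 14 months per minor version) plus the additional 12 months of extended support. A small sketch, using an illustrative release date rather than any real version's GA date:

```python
# Sketch of the extended-support arithmetic: ~14 months standard support
# plus 12 months extended support = the 26 months quoted above.
# The release date is hypothetical, for illustration only.
from datetime import date

STANDARD_SUPPORT_MONTHS = 14  # approximate standard window per minor version
EXTENDED_SUPPORT_MONTHS = 12  # additional window with extended support

def add_months(d, months):
    """Advance a date by whole months (day clamped to the 1st for simplicity)."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, 1)

release = date(2023, 4, 1)  # hypothetical GA date for some minor version
standard_end = add_months(release, STANDARD_SUPPORT_MONTHS)
extended_end = add_months(release, STANDARD_SUPPORT_MONTHS + EXTENDED_SUPPORT_MONTHS)
print(STANDARD_SUPPORT_MONTHS + EXTENDED_SUPPORT_MONTHS)  # -> 26
```

The enterprise value is in the gap between `standard_end` and `extended_end`: that's the window where AWS carries the CVE find-and-fix burden instead of forcing an upgrade.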
It was like, we're going to go run that control plane for you, and you can focus on the nodes and you can focus on the other pieces. So as we look forward, we continually work with customers to keep growing our assistance, if you will, and make it easier for people to roll out these new workloads. It's interesting, you guys are both an end user and an enterprise contributor. Because that's the power of the CNCF right now: you've got a developer community of just individuals, and you've got companies like Capital One, and you've got companies like AWS contributing. So they're contributors and consumers. But the idea that an Airbnb, or Lyft with Envoy, can just donate all this IP is a good thing, and I think we're seeing more of that now as open source matures, like your projects. Is that a feature or a bug? I mean, I don't think it's a bug. I think this is where open source goes, because as the business models change you're seeing a lot more contributions in bulk, componentized. I absolutely think it's a feature. I think you do see some people who are confused by it still, or trying to wrap their heads around: hey, I'm giving this away — is that really the right thing to do? And the reality is that the strength in something like Kubernetes is its community. It is the fact that you can take an idea and express it into the community and watch it grow even bigger. And our view in, say, the EKS world is really simple: we want to make EKS the best place to run those workloads. And if we can do that with the community, then we're going to win that business, and that's our goal. Barry, I'm old enough to remember, in my house in college we used to pirate software, and that was illegal. It was almost like dealing software. I tell the young kids — what, you bought it? It wasn't free. And so open source has gone through such great iterations.
But the software industry for proprietary software is moving into these hard-top environments where it doesn't matter; it's for performance. You're seeing that with chips and other system component levels. But open source as software is the standard. I mean, I think there's no real role for proprietary software outside of unique environments. Exactly. Yeah, I think what you've seen drive this world is — and it's driven by the consumers of the software themselves — they want more flexibility in open standards, open APIs. It gives customers the view that: I am flexible, I have enough control that I can also modify it to suit my needs. A lot of enterprises, like you mentioned, some like Lyft, jump in and say, I'm going to solve this problem the right way for myself. And so by creating that set of contributors, you really have a much more robust way for everybody to share in the key aspects of that. And then under the covers, there are going to be things that are very AWS specific that we'll continue to own, we'll continue to drive, to give you the best experience on AWS. It's a great world we live in. I mean, I had a quote on my Facebook page a decade ago: beauty's in the eye of the beholder. And now you have personalization with Gen AI at the app layer; people can customize solutions. Big theme here at re:Invent: customization. The idea of general purpose software — maybe not a big market, or maybe some small market — but specialism, specialty models, custom silicon, customization, fine tuning, this is all kind of like the next wave. This is it. Not one model to rule the world, not one architecture. Right. It's more flexibility. It's about flexibility. Flexibility, adaptability. Yeah. Barry, thanks for coming on theCUBE. Really appreciate it. And we'll see you at KubeCon. Great work you guys do. Love the open source mojo going on at AWS.
And it is a standard, and open, and choice will always win in these big inflection points: making things simpler, reducing the steps it takes, making it easier. That's the simple formula for success in these big waves. And we're seeing it play out here on theCUBE. We'll be right back with more coverage. Back to Palo Alto.