Hello and welcome back to SuperCloud 5, special edition. This is the battle for AI cloud supremacy, featured in the SuperCloud special edition. And the topic in this next session is fueling the next generation of modern apps, with CUBE alumni Purnima Padmanabhan, who's the senior vice president and general manager of Modern Apps and Management at VMware. A timely conversation, as the world is looking at this next-level application market fueled by, you know, a new kind of AI infrastructure, a new kind of AI middleware, managed services with multiple foundation models. As developers operationalize, this is the top conversation this week. And as the year ends, especially with all the conversations around what's open, what's closed, what's going to be in the enterprise. Thanks for joining us. Absolutely, great to be here. So the apps that are coming on board are born in the cloud, but also being refactored to have new AI capabilities; that's the hot conversation. A lot of developers are experimenting, and there's a real big push for moving these new kinds of apps into production, okay? Which requires, you know, operational things like standing up Kubernetes clusters to manage all the infrastructure. But at the end of the day, it's putting an application workload into production. This is the top priority, but in a new way. This is what you've been focused on. Take us through VMware's position on how you see customers accelerating this new modern app delivery and production workload. Absolutely. So ultimately, the way we look at it is, in today's world, business agility comes from software agility. And so it is very critical that customers have the ability to accelerate their application delivery to production, just like you said. But that requires a bunch of things.
So what we have done is put together a solution under the Tanzu umbrella that allows customers to accelerate their application delivery by helping them develop, operate, and optimize their applications and deliver them to any cloud, any Kubernetes, anywhere. Now that is the core of everything we do in Tanzu. If you've heard of other things, those are immaterial. This is the core of everything we do. Now, when we start talking... go ahead. No, continue. I was about to get into the AI piece. Now what has happened is, one of the things that customers want is also to incorporate artificial intelligence into their applications, to provide better customer service, better intelligence around data that they may have. And in order to do that, it is critical not only to incorporate it in a fast, efficient way, but also to make sure that it stays safe and within guardrails, and that the right set of data is used. And that is where having an application platform discipline, having that discipline around the golden path to production, can go a long way in accelerating while staying within guardrails. And of course, this is a very current topic this week. The opportunities are out there, you see the shifts, the generational shift too. There's a younger generation, people want to accelerate AI, the whole AGI thing, which I roll my eyes a little bit at, but the truth is the opportunity is there. There are challenges with generative AI, privacy, security on the app side. But to make the apps work, you need to have this new software layer and certainly fast infrastructure. So you start to see the platform engineers and the operations teams, the security teams. So you've got DevSecOps, the platform engineering and operations teams, and developers working together. And we've been covering this for a while, but now with the AI conversation, it even highlights the need for adaptability. Okay, dealing with multiple models, making developers secure in their CI/CD pipelines.
Okay, making sure that there are guardrails. What does that even mean? So this is the focus of the conversation. How do you see this evolving Tanzu's position, okay, to streamline the ops? Here at AWS re:Invent, you've got a hybrid world now. Yeah, you've got public cloud, on-premise. The data can be sitting anywhere, and the data is feeding the AI. This is an opportunity, but also a challenge, for the platform teams and developers and ops teams. What's your vision? So here at VMware, and specifically within Tanzu, we look at it as twofold. The first part is, how can we help customers incorporate AI in a faster way in their applications? And within that, there are a bunch of innovations that we're doing. Actually, it might be worthwhile to just quickly introduce the core portfolio, right? When we say Tanzu is about accelerating application delivery, there are three components to it. One is the core application platform, and it comes in two flavors: the Tanzu Application Service, which is our Cloud Foundry-based platform, and the Tanzu Application Platform, which is our Kubernetes-based platform. And that is the core of what we do. But of course, no application is built on an island, right? It connects to things. It has to be managed from a data perspective. So we surround the application with two other components. One is a set of data services. These data services offer things like database, caching, messaging, and warehouse capabilities so that the application can truly leverage the power of data, right? You brought that up. And especially with AI, it becomes even more important. And the second piece that we surround this application platform with is manageability. You have to consider manageability while you're building applications. How are you going to secure it? How are you going to scale it? How are you going to make sure it's performant? How are you going to make sure it's cost effective?
And that is the Tanzu intelligence portfolio. So the Tanzu platform, surrounded by Tanzu data and intelligence, is the portfolio. Now, when you start thinking about something like AI, how do you accelerate AI? What we have done is we have started curating what we call AI accelerators. These are templatized approaches to building AI into your applications. It consists of leveraging things like the Spring AI project, which gives a brokering method so that you can connect to different models but with a common, consistent API, right? So it really speeds things up. We have also gone and enhanced this around databases. So now our data warehousing solution, the Greenplum solution, for example, has built-in capabilities for model fine-tuning. Our vector database support is there across our caching solution and database solutions. So what it means is, if you want to build an AI/ML-based component in your application, you quickly start with the accelerator, you add on the right data sources, and you're able to then pick the models of choice based on your application need very quickly. And the instrumentation of the pipeline also automatically happens, so that when the AI is out in the world and you want to do data operations, it becomes very easy. So that's the first part: how do we accelerate application delivery with AI now built in. The second one, and you and I have talked about this before, is we are also leveraging the power of AI significantly within the Tanzu portfolio to deliver better experiences. We're very excited about it. And as you know, with the Tanzu Hub and Graph, we now already have this powerful source of data, right? A true understanding of the multi-cloud universe: the applications, the dependencies, the security characteristics, performance characteristics, the delivery characteristics. So by putting a layer of AI, both traditional AI as well as generative AI, on top of it, we can take that mass of data and then allow customers to just ask simple questions in natural language.
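The "brokering method" described here — one common API in front of interchangeable model back ends — is the core idea behind Spring AI. Spring AI itself is a Java project; the sketch below shows the same pattern in Python, with entirely hypothetical class names (none of these are real Spring AI or provider APIs), just to illustrate why swapping models doesn't touch application code.

```python
from abc import ABC, abstractmethod

class ChatModel(ABC):
    """The brokering contract: every model back end exposes the same API."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class HostedChatModel(ChatModel):
    """Stand-in for a hosted provider; a real impl would call the provider SDK."""
    def complete(self, prompt: str) -> str:
        return f"[hosted] {prompt}"

class LocalChatModel(ChatModel):
    """Stand-in for a locally served open model."""
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"

class AppService:
    """Application code depends only on the ChatModel interface,
    so the model can be swapped without changing any call sites."""
    def __init__(self, model: ChatModel):
        self.model = model
    def summarize(self, text: str) -> str:
        return self.model.complete(f"Summarize: {text}")

svc = AppService(HostedChatModel())
a = svc.summarize("quarterly report")
svc.model = LocalChatModel()   # swap the back end; app logic is untouched
b = svc.summarize("quarterly report")
```

The point of the pattern is that model choice becomes configuration, not code: only the constructor argument changes when you move between providers.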
How's my application doing? Where is the problem? Tell me how to fix it. So it's a twofold approach. Talk about — by the way, Explore had some great announcements in Barcelona, and you just recently had some announcements. Do you want to hit that really quick, just to highlight some of the key things that were announced? I want to get into some of the intelligence services pieces. I think that's going to resonate with this next cloud wave, especially the Amazon customers. Give a quick overview of the Explore announcements, in Vegas and recently in Barcelona. Yeah, sure. And especially since we are here at re:Invent, I'll also talk about what we are doing specifically for AWS. Now, at Explore, from a core platform perspective, the heart of our platform is the Spring Framework. So we talked about our 20 years of Spring and 10 years of Spring Boot, and how we are doubling down on the Spring Framework and doing more by integrating it into our platform and allowing for rapid delivery on the Spring Framework: better long-term support, more consulting support, more capabilities around services with Spring. The second piece is on the core platform. As I said, we announced our AI/ML accelerators, and we announced this notion of an application engine, very critical to accelerating application delivery. Imagine, rather than a developer having to give me details about Kubernetes, YAML, and infrastructure environments, all they say is, hey, here's my application code. And by the way, I want HA, I want encryption. And then you're off to the races and the platform takes care of all of that. That is our application engine announcement — a huge announcement. Then we had a slew of announcements on data services. As I mentioned, we have enhanced them to have those core AI/ML capabilities built in: vector database support, model fine-tuning support, and support for AI/ML pipelines.
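The application engine idea described here — a developer states intents like "I want HA, I want encryption" and the platform expands them into infrastructure configuration — can be sketched minimally as follows. This is not the actual Tanzu application engine API; the intent names and the expansion rules are assumptions, purely to show what intent-driven delivery means in practice.

```python
def expand_intents(app_name: str, image: str, intents: set) -> dict:
    """Toy 'application engine': translate developer intents into a
    deployment spec so the developer never writes Kubernetes YAML.
    Intent vocabulary and expansion rules here are hypothetical."""
    spec = {"app": app_name, "image": image, "replicas": 1, "tls": False}
    if "ha" in intents:
        spec["replicas"] = 3      # high availability => run multiple replicas
    if "encryption" in intents:
        spec["tls"] = True        # encryption => enforce TLS in transit
    return spec

# Developer supplies only code location plus intents; platform does the rest.
spec = expand_intents("orders", "registry.example.com/orders:1.2",
                      {"ha", "encryption"})
```

In a real platform the output would be a full set of Kubernetes manifests; the sketch just shows the direction of the translation, from declared intent to concrete config.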
And finally, the intelligence services — and we can double-click on that. Look, intelligence services is something that we have been investing in quite a bit, as you know, for the last few years. And what was extremely exciting is to truly see the power of Gen AI in being able to interface with boring data. Data that is usually, you know, painful to look at — alerts and security issues and vulnerabilities. And I'm very excited about those. Take us through the Amazon Web Services customer today. They're challenged with: okay, I'm at scale, I've got distributed computing architecture, cloud operations, a generative AI stack, modern stacks emerging, data pipelines — I mean, some great stuff, zero ETL from AWS. So the data piece is getting better, right? So you mentioned data, either boring data or other private data; you guys have Private AI. You guys are on to something here, and this is where I see the connection: how do I manage my data? And so you've got the cloud customers who are running on top of AWS today, your customers. They already have stuff in the cloud. How do you guys help, with the intelligence services, bring those workloads that can run across environments? This has become the supercloud concept, but there's also data involved, because if the data is available on-premise, for instance, or at the edge, that could feed the AI at the edge. Now you've got more inference coming. You have a lot more training, a lot more inference. Inference is becoming the killer app. So how do you improve the life of an Amazon Web Services customer today who already has things running in the cloud? So I think I'll start with just what we have done for Amazon Web Services customers. We have a very close partnership with AWS, and specifically we are seeing a lot of modern app development happening on the public clouds, especially AWS. And as you mentioned, just think about the alphabet soup of technologies you mentioned.
Customers are saying, okay, I want to use EKS. I want to use the new data services. I want to use ETL services. I want to use zero-ETL type capabilities and the Spring framework. So they have a long list of capabilities and services that they want to use. And as each team tries to figure it out, it becomes very complex. Ultimately, the path to production becomes very complex. So one of the things that we bring to the table for AWS customers is, first of all, we have added extensive support for core AWS services, including lifecycle management of the core infrastructure, which is EKS, right? How do you manage multiple clusters? How do you manage security? How do you manage backup and restore and policy? That forms the foundation of the platform. Then we connect it to our Tanzu Application Platform, so that as customers build their golden paths to production — a defined set of capabilities for how to build, how to test, how to scale — these can be seamlessly delivered to the AWS platform. So the question is, how do you guys bring these enhancements of Tanzu to the customer? Because app development is the top priority in the conversations. Obviously the speed of the cloud, the speed of performance, is getting better. There's the silicon layer, a lot of goodness with inference and data services coming on. So how do the latest enhancements to VMware Tanzu and Spring, particularly with the integration of AI and ML, contribute to next-gen applications? And what advantages do they offer for development teams? I think, as we know, and even as we know from the recent developments that have been happening, everybody wants to incorporate AI, but people also want to incorporate it safely, with guardrails, with the right set of data sources, with the right set of testing capabilities. So that is one piece.
And so when we think about the offering, what we want the Tanzu platform to be able to do is deliver those capabilities within your application with those guardrails — to the edge, to the data center with Private AI, as well as to the public cloud, right? Across all of these. So from an enhancement perspective, as I mentioned, there are a few things that we have baked into the platform so that it is just the default; you're not looking at AI distinctly, it is just a component of your application. Things like the AI accelerators that we have built into the component capability, building in data services with AI awareness, building in pipeline services with AI awareness. So that is a big part of it. The other part is, when you introduce AI and ML into your applications, you are adding another layer of complexity. So there's managing these applications — and most customers, right, even AWS customers, have a polyglot environment. They have AWS, but they also have their own data centers. So how do you understand your environment, irrespective of whether it is in the data center, at the edge, or in the public cloud, so that you can make the right kind of trade-off decisions? How do you decide which workload, which application, runs where, based on your cost, performance, security, and privacy needs? In order to do that, fundamentally, you need a different type of data. I'm not talking about the customer's consumer data; you also need data about your infrastructure, applications, and dependencies. And that is where we had started talking about Tanzu intelligence, which really brings that state of the multi-cloud universe to your fingertips, continuously updated in a near real-time way. And on top of it, we have been able to put generative AI with our Tanzu Intelligent Assist, which then allows you to start querying the data in a much more comprehensive way and start taking actions. This is what we keep hearing from customers.
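The trade-off decision described here — which workload runs where, based on cost, performance, security, and privacy needs — is at heart a weighted scoring problem. A minimal sketch, with made-up environments, scores, and weights (real platforms would derive these from live inventory and policy data, not hard-coded numbers):

```python
def place_workload(candidates, weights):
    """Pick the environment with the best weighted score.
    Each candidate is (name, attributes), where attributes are normalized
    0..1 scores (higher = better) for each decision criterion."""
    def score(attrs):
        return sum(weights[k] * attrs[k] for k in weights)
    return max(candidates, key=lambda c: score(c[1]))

# Illustrative environments with hypothetical normalized scores.
candidates = [
    ("public-cloud", {"cost": 0.6, "performance": 0.9, "security": 0.7, "privacy": 0.5}),
    ("on-prem",      {"cost": 0.8, "performance": 0.7, "security": 0.9, "privacy": 0.9}),
    ("edge",         {"cost": 0.5, "performance": 0.8, "security": 0.6, "privacy": 0.8}),
]
# A privacy-sensitive AI workload weights privacy and security most heavily.
weights = {"cost": 0.1, "performance": 0.2, "security": 0.3, "privacy": 0.4}
best = place_workload(candidates, weights)  # favors on-prem for this weighting
```

Changing the weights (say, performance-heavy for a latency-critical inference service) flips the answer, which is exactly why the speaker argues the placement data has to be continuously updated rather than decided once.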
They have got so much data, from so many different sources, but it doesn't convert to information. So what the Tanzu intelligence services do is allow you to take your platform engineering data, your infra and application data, and convert it to information. This is where generative AI is great: it takes that data exhaust and turns it into gold. I love the concept of trade-offs you mentioned. Also, a theme here for the battle for supremacy, certainly in the re:Invent conference conversations, is around things like trade-off models. I want the best answer for the cost. What's the price? What about the workloads? What do I need? This is all going to happen dynamically. So I love the intelligence angle there. You mentioned data. One of the themes here this week is this idea of data services, right? Zero ETL, which they talked about last year, came up. You start to see the formation of data services enhancements around ML and AI as platform engineers start getting generative AI working for them — helping in the plumbing, codifying generative AI into the operations. The new Tanzu data services enhancements also added new ML and AI capabilities. Can you elaborate on how these enhancements streamline data management and integration? Because we think that is going to be a huge impact, as data management and governance gets built in from day one for these next-generation workloads across cloud environments. Really, if pipelines are going to be developed in real time and managed intelligently, the data services have to be in lockstep. So you're right, data has become very critical. And if you think about the platform engineering role, it's no longer just about curating pipelines, right? It is curating that entire application platform that allows you to develop, operate, optimize. And data services is a critical function of the platform engineering team going forward. So with these data services, we're doing a few things.
Of course, on the core data engine, we have enhanced Greenplum for faster model fine-tuning. We have enhanced our caching solution, GemFire, with vector database support. We have got more out-of-the-box database support. But what we've also done is added a management layer, a centralized management layer that goes into Tanzu Hub, that allows you to manage these data services anywhere, be it on-prem or in the public cloud, right? Because again, it's all about choices. And this, we expect, gets managed by the platform engineering team. So while the database experts are still there, how does the service get integrated into the pipeline? How does it get incorporated into the application? That is something the platform engineering team has to drive. And the other thing that you mentioned is, in all of these, right from the beginning — from the model selection piece, from the definition of the code piece — you consistently want to measure the implications in terms of cost, performance, and security. And that's where the intelligence piece being integrated is also very critical. This year marks the 20th anniversary of the Spring Framework and the 10th anniversary of Spring Boot. How have these frameworks evolved over the years, and what new capabilities keep them on the modern app trajectory? Obviously we saw at Explore great support and turnout from your community there; the nice momentum continues. What's the current state of the art with the frameworks to support these developers, who are building at more scale, with more data services, where faster and more efficient apps have to be built from the coding up? What are some of the new capabilities with Spring Boot? I mean, Spring has been just amazing. I continue to be amazed by the momentum from the community. It is, as you said, 20 years of the Spring Framework and 10 years of Spring Boot. And it's still going strong. I'll give you another fact.
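The vector database support mentioned here (in GemFire and the database solutions) boils down to nearest-neighbor search over embeddings: store vectors alongside records, then rank by similarity to a query vector. A minimal cosine-similarity sketch — the documents and their three-dimensional "embeddings" below are invented for illustration; real systems use embedding models producing hundreds of dimensions and indexed approximate search:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, store, k=2):
    """Return the k stored (name, embedding) items most similar to the query."""
    return sorted(store, key=lambda item: cosine(query, item[1]), reverse=True)[:k]

# Toy "vector store": document name plus a made-up embedding.
store = [
    ("refund policy",  [0.9, 0.1, 0.0]),
    ("shipping times", [0.1, 0.9, 0.1]),
    ("return address", [0.8, 0.2, 0.1]),
]
hits = top_k([1.0, 0.0, 0.0], store, k=2)  # most similar documents first
```

This retrieval step is what lets an application feed the "right set of data" to a model with guardrails: instead of handing the model everything, you fetch only the stored items semantically closest to the question.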
Spring has grown 50% year over year for the last five years. This growth would be the envy of almost anybody, right? And it continues to be the most important framework for large enterprises as well as small ones to build their enterprise applications, Java-based applications. Now, the Spring community is very active, and we continue to actively nurture the community and the contributions. And I'm very excited about, as I mentioned earlier, the Spring AI project. It allows you to have that common interface. As you know, the model landscape is continuously changing. It's evolving so fast. There are new models coming up on the horizon every day. So you want the ability to change your models as you go, based on the nature of the problem statement in front of you. And so Spring AI gives that common brokering interface, common APIs. And we expect that progression to continue with more work in that AI space. We are investing in the community. We are investing as VMware in that project. And you'll definitely see more from us there. Awesome, Purnima. Great to always have you on. But I've got to ask you about the past week: the recent developments in the AI community have been a real shock, almost the tectonic plates shifting. And the conversations are open choice and adaptability, or closed, one-model-rules-the-world. Open source continues to be successful, as does the democratization of AI, but software life cycles are also changing. Half-life is a term that's been kicked around; generative AI is going to come in and change all that. As an industry expert, take your hat off and be kind of a practitioner in the field. If you're in the field coding and building and managing the life cycle of workloads — life cycle management — with this democratization wave coming with AI, software is like oxygen.
Everyone's going to have free software; then how you deploy it end to end with data will become the critical piece of this. This is going to elevate the engineering practices to be very systems-oriented. It's going to impact public cloud, on-premise, and edge significantly as AI comes in and gets into the plumbing, into the infrastructure. What's your vision? How do you see this evolving? Because everyone's kind of seeing this and trying to figure it out, you know, read the tea leaves. We hear about safety, AGI is going to take over the world, some great memes out there. But what we're talking about here is a major inflection point in how infrastructure and middleware and applications — you know, what we all know as distributed computing — are being remade. New stacks are emerging. New developers are coming in. What's your vision of all this? Look, there is no denying — I agree with you — it's a tectonic change, right? And it is going to impact everything that we do. I do believe the power of generative AI, in addition to being applied to various problem statements, is also going to be applied to the problem statement of coding itself. So as you said, developers should be doing less of any activity related to getting the code to production, and they should be doing more of actually thinking about the business logic, right? You talked about the half-life of software. You know, the difference between traditional companies, even today, and world-class companies is spectacular. Traditional companies release software one to four times a year. World-class companies release 4,000 to 18,000 times a year. And I believe if we get the power of generative AI for things like — not just code generation, which is a little bit further out, but things like code debugging, test generation, and so on — you can significantly accelerate how much software you put out, and how quickly, based on your business needs.
Then if you think about it, if that's what's happening, the infrastructure that supports it, the platform that supports it, the systems and pipelines that support it have to scale equally. Not only do they have to scale in terms of capacity — you know, the GPU capacity that we keep talking about — but they also have to scale in terms of systems, exactly as you said, right? You cannot manually configure things anymore. Everything has to be automated. Everything has to be intent-driven. And that is why we believe there's a huge opportunity for a platform like Tanzu, right? Now I'm switching my hat back on — whether we do it or someone else does it, there's an opportunity to be captured for a platform that truly takes away that complexity and allows this development to happen at a faster rate. It's interesting, you know, you mentioned the GPUs and performance. Right now people are buying GPUs — I've got to get my hands on those GPUs before they run out. It's like a commodity, a hot commodity. But it's really about end to end; it's what's going on around the chips. And one of the conversations here this week is, yeah, faster silicon, great, model choice, big models, sub-specialty models, data specific to a company, edge models. You're starting to see that end-to-end concept. So it's really what's going on around the chips. This means we're going to have an agile, adaptable infrastructure world where you've got the coding going on and the business logic connecting the dots. This is kind of where you guys fit in. You know, vSphere has been there; you've got great infrastructure with vSphere, VMware, well operated. Now the operators, platform engineering, become the czars in the company who wield the infrastructure, to set up an environment so that the infrastructure can be adaptable, tuned. It's like, what do you want with your chips? You want a little wine with your food — kind of like pairing chips and models together.
You know, that's the theme: pairing. That's an interesting one. Red wine goes with the red meat. So, you know, this is what we're seeing with chips and models, right? So that's going to enable applications. That's going to be a developer construct, not so much an infrastructure one. Yeah, and actually it is a platform engineering concept: how does a platform engineer curate and consume the infrastructure resources and expose the right set of capabilities to the developer, so that they are then racing, right, toward business logic? So this glue that brings these pieces together, so that a developer is not really thinking about, okay, which chip do I use — it's going to be very critical, right? And as VMware, as you said, we already have, at least with the Private AI Foundation, the data center-centric capability set. We have the edge capability set, and we are also tightly aligning with our hyperscaler partners like AWS to offer the cloud capability set. And Tanzu is that layer of platform engineering across all three. Purnima, great to have you on. I'm glad to get your take on that. I mean, I love this new world you laid out for modern apps. It's like automating — you want some chips and models, you get them together. It's like automated coffee making, you know; you can have a lot of choice for those workloads. And again, this is where modern apps go. So appreciate you coming on and sharing your vision, and also the update on where Tanzu fits in with the intelligence services and the data services. It's a whole new world, and we're excited to have you on. Thanks for coming on. Absolutely. Thank you, John. Okay, we'll be right back with more SuperCloud 5, the battle for AI supremacy, fueling next-generation modern apps, here with VMware. I'm John Furrier, your host of theCUBE. Stay with us for more SuperCloud 5. We'll be right back.