Welcome back everyone, live coverage here at VMware Explore, theCUBE with you. It's theCUBE's 13th year covering VMware's user conference, formerly VMworld; the past two years it's been Explore. What a journey: multi-cloud, AI all over the place. I'm John Furrier, with Rob Strechay. We're talking about modern applications, and right now AI has just become a gift to the multi-cloud distributed computing architecture that is cloud operations; the applications need to land somewhere. Well, we are with the person that's figuring that out, Purnima Padmanabhan, Senior Vice President and General Manager, Modern Applications and Management at VMware. You've got all the key things for the future: the modern apps, which will be cloud-native, and AI and all the other stuff wrapped around it. That is true. Welcome back. Oh, great to be here, John and Rob. So the number one question is: okay, the architecture is changing, AI is forcing this in real time. Look at all the organic growth in the models, LLMs and foundation models, and now the cloud-native next gen has got a lot of data. So that's security. It's also better user value. Raghu is saying it's about the databases and the workflows built on top of them. The runtime is multi-cloud. I loved that in the keynote, a little nuanced point, but that's a system. Yes. Operating systems have applications on top of them. Yes. What's your vision? This is happening. This is happening now, and it has truly given a boost to the whole application space, right? And when I look at applications and when I look at generative AI, for us the opportunity is two-fold. One is, how do we use generative AI in our own solutions to make it easier for our customers, the platform engineers, the developers, as well as the ops? And that is basically using conversational AI. You heard a whole bunch of announcements: Tanzu Intelligent Assist. So I'm sitting on a pool of data with my management solutions, right?
Logs, metrics, traces, application information. How do I make sense out of all of that? So one is really making it available in conversational terms. So I could go and say: where is my application? What is the problem? How should I fix it? You're asking the assistant. So that's one pool of solutions. The second one, which is the opportunity you talked about, is how do we curate an environment? Raghu already laid out the Private AI Foundation, but then how do we also create an application foundation, an application platform, where out of the box I have accelerators that tell me: hey, you want MLflow, you want Spring, you want the following LangChain, the following LLM models. That's all I pick, and everything gets curated for me. So that is where we are going with the Tanzu Application Platform. Last year we talked a lot about simplifying, a single pane of glass, abstraction, observability, making everything work. Now AI hits the scene, Hugging Face, these new models are out there, and you can almost think about connecting the dots. I'm going to go there, and I want to get your reaction. If AI apps start hitting the workloads, you can essentially automate away faster all the, you mentioned observability, all these things going on under the covers. You can almost have AI for that, a bot for that, or an AI assistant. It's kind of like, are you creative, or are you more of a math person? Left side, right side of your brain. You can have AI that can do these things. And do it all faster. Absolutely, and also make it more accessible. That is the main part. Like, okay, I have to write a query to get something, or I just say, hey, tell me what's wrong with my application, and you write the query and get me what is wrong. And when you have the data together, that's the power that we have. So last year's foundation turned out to be perfect.
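The "you write the query for me" idea can be sketched very roughly: a plain-English question is matched to an intent, which is then answered by scanning collected metrics against thresholds. Everything here — the toy metrics store, the thresholds, the `diagnose` helper — is a hypothetical illustration, not Tanzu Intelligent Assist's actual pipeline or API:

```python
# Toy sketch: a natural-language question is mapped to a metrics scan.
# All names and data are made up for illustration.

METRICS = {
    "checkout-service": {"error_rate": 0.12, "p99_latency_ms": 950},
    "catalog-service": {"error_rate": 0.01, "p99_latency_ms": 120},
}

THRESHOLDS = {"error_rate": 0.05, "p99_latency_ms": 500}

def diagnose(question: str) -> list[str]:
    """Very rough intent matching: 'wrong'/'problem' -> threshold scan."""
    findings = []
    if "wrong" in question.lower() or "problem" in question.lower():
        for app, metrics in METRICS.items():
            for name, value in metrics.items():
                if value > THRESHOLDS[name]:
                    findings.append(
                        f"{app}: {name}={value} exceeds {THRESHOLDS[name]}")
    return findings

print(diagnose("What is wrong with my application?"))
```

The point of the sketch is the shape of the interaction: the user states a problem in English, and the assistant does the query-writing and anomaly-hunting underneath.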
So last year with the Hub, Aria Hub, which we have now relabeled Tanzu Hub, we aggregated all the data. We created a foundation of the multi-cloud universe with a graph database, so that anybody could access it to understand what's happening with their application. Now when generative AI hit, we simply applied that model on top. Yeah, you had the data. We had data, we had curation, we had content, and so when we applied it, now you go see the demo. You just go and ask the Hub: hey, tell me what should I look for? Tell me how should I fix it? Go get me the documentation, right? And better still, once you know what needs to be fixed, go fix it for me. Yeah, that'll be ideal. I can't wait for the theCUBE AI, just feed it the videos. What did you say, just in 2023? Just make me look better, that's all I ask. Just make me look exactly like that, I want that AI. That's the AI I'm looking for. But I think, and again, having been in a lot of the pre-briefs and a lot of the briefings, by the way, VMware has really ambitious goals in modern apps. Yeah. How you get any sleep is beyond me, because there's a lot to do. But there's also been a lot of rebranding over the last couple of years, and I think it's trying to simplify things for customers. How do you see these, because you just mentioned Aria Hub becoming Tanzu Hub, and some of the other Aria products are still Aria, and CloudHealth is- How do you make sense of all of that? Yeah, yeah. You call it the decoder ring. The decoder ring, and let me do a simple ring for you here. So first of all, we want Tanzu to be the broader application acceleration brand, right? Whether it is microservices-based applications landing on private cloud or on public cloud, whether it is AI applications, whether it is chatbots, whatever it is, you come to Tanzu for accelerating your applications.
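The "graph database of the multi-cloud universe" idea — entities and the dependencies between them, queryable from one place — can be sketched with a minimal adjacency-list graph. The entity names and the `dependencies_of` helper below are hypothetical; Tanzu Hub's real model is far richer than this:

```python
# Minimal dependency-graph sketch: app -> services -> clusters -> clouds.
# Illustrative only; not the actual Tanzu Hub schema.

EDGES = {
    "storefront-app": ["checkout-service", "catalog-service"],
    "checkout-service": ["k8s-cluster-east"],
    "catalog-service": ["k8s-cluster-west"],
    "k8s-cluster-east": ["aws"],
    "k8s-cluster-west": ["vmware-cloud"],
}

def dependencies_of(entity: str) -> set[str]:
    """Walk the graph transitively: everything this entity depends on."""
    seen: set[str] = set()
    stack = list(EDGES.get(entity, []))
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(EDGES.get(node, []))
    return seen

print(sorted(dependencies_of("storefront-app")))
```

With the relationships stored once, a question like "where is my application and what does it depend on?" becomes a single traversal — which is why layering a conversational model on top becomes straightforward once the graph exists.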
Now, when you think about the larger application ecosystem, what we started looking at is this: last year we introduced the Hub, the management Hub and management solutions, but often you think of management after the fact. Applications are built, and then I go and do cost, performance, and security on them. So as we started bringing the Aria and Tanzu portfolios together, we said, what happens when I'm developing an app? I need to know the cost of my app. I need to know how it might perform. I need to instrument everything while I'm developing, and we need to give this data to the coders, right? So the shift left. What happens when I'm operating the application, when I'm deploying it to Kubernetes or even deploying it to a VM? I need to know what it would look like. So fundamentally, we said the Hub as a construct is not just an after-the-fact construct; it needs to be pulled and shifted left all the way. So what we said is, we will expand the Tanzu portfolio to say it's truly about being able to develop, operate, and optimize applications through their entire life cycle, all the way from creation to optimization, and we would have the Hub be the anchor that always gives you that state of the union. What is my application? What does it depend on? And how is it performing? Yeah, I mean, that totally makes sense when you put it that way, and by the way, I'll make a clip of this and put it out, because there were a few customers that asked me about it as they're going through and learning it themselves. And one of the things I think is also interesting is that there's a lot of open source; in between where NVIDIA leaves off and where VMware picks up, there seems to be a layer of open source. How are you working in that community? Because there was even, what was it, Istio, the service mesh type stuff.
Yeah, so there's not a lot of buzz around that, which is actually a little bit odd. It was like, first it was no open source, then it was all open source, and now there is open source, but it hasn't really been talked about this week. Help us understand where that fits in. So as far as modern apps are concerned, let me be clear: we embrace open source across the board. If you think about it, the Spring community is a very important community to us. We are continuing to invest heavily in Spring, but for all our commercial products, we also want to make sure they support that open source ecosystem, and Tanzu should be the best place to run Spring apps. Now you're talking about other open source solutions. And this is where the model and principles we established last year with Aria still apply to all of Tanzu now, which is: it's a plug-and-play model. When I go to a modern app development shop, where people are building modern apps, they have something already. They have an open source CI/CD solution, some other open source solutions. So we are taking a very modular approach to say, we work with what you have. With the Tanzu Application Platform, you can plug in what you have and then create a complete solution. So that is what we're doing with open source elsewhere. What are we doing with our own open source? We're heavily investing in Spring. That's a huge community. Then you brought up service mesh, right? That's an important one. And so rather than throwing you piece parts, oh, here's Istio, here's Envoy, here's all of that, a service mesh, you go figure it out, we introduced something called Tanzu Application Engine as part of the Tanzu portfolio. That was announced here. That was announced here, yes, this morning. And the whole point is, I don't want you to have to build all these piece parts yourself. So it depends on open source, but VMware Tanzu Application Engine will give you an application runtime.
And guess what, when you want a runtime, underneath it is the mesh, underneath it is the compute. So we are pulling it all together. And you talked about this AI wave: the amount of application development that's going to happen is going to explode. So imagine a developer walks in and says, look, this is my intent. I want multi-region, I want encryption turned on, I have PII data, and I want these to be Spring apps. By the way, I want ABC models, and I want PQR data sources. Complexity is blowing my mind. But they just say that. Yeah, yeah. And then the application engine will create it for you. Think about what you just said, because I want to double-click on that. What you just said, without Tanzu Application Engine or the Hub and all that coming together, that's hard. You have to stand up all the infrastructure, write all the middleware, and then it changes all the time, it's got pressure on it, it's critical infrastructure. So the question we're seeing, the question people are asking me, is: I want to build the best AI apps. One, what does that mean? So that's the first question. And then, who's the DevOps team, the platform engineering team, that's going to set that up? Because as developers have to deal with data, they're not going to be in the weeds of infrastructure. They're not going to be wondering about the platform. If Rob's the platform engineering team, he's going to be like, hey, I love this stuff, I'm going to create this massive enabling infrastructure. And by the way, with VMware, and I'd love to get your thoughts, you now have horizontal movement across environments: public, on-premises, and edge, and the ultimate edge, the person. Yes, yes, yes. That is a unique thing. Yes, indeed. Okay, so take me through: is it part of your vision that modern apps are going to sit on this kind of horizontally traversable, scalable environment?
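The "developer states intent, the platform expands it" pattern described above can be sketched as a tiny planner that turns a declarative spec into concrete steps. The `plan` function, the intent fields, and the step strings are all hypothetical illustrations, not the Tanzu Application Engine's actual interface:

```python
# Hypothetical sketch of intent-based provisioning: a declarative spec
# is expanded into the steps a platform team would otherwise hand-script.

def plan(intent: dict) -> list[str]:
    """Expand a declared intent into an ordered list of concrete actions."""
    steps = []
    for region in intent.get("regions", []):
        steps.append(f"create cluster in {region}")
        steps.append(f"deploy {intent['runtime']} runtime to {region}")
    if intent.get("encryption"):
        steps.append("enable encryption at rest and in transit")
    if intent.get("pii"):
        steps.append("apply PII data-handling policy")
    return steps

intent = {
    "runtime": "spring",
    "regions": ["us-east", "eu-west"],
    "encryption": True,
    "pii": True,
}
print(plan(intent))
```

The design point is that the developer only writes the `intent` dictionary; the multi-region fan-out, encryption, and policy wiring fall out of the expansion rather than being scripted by hand each time.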
Yeah, modern apps have to sit wherever they need to be used, right? If it is an edge application, it has to sit at the edge. If it is something that has to be served from public cloud in different regions, it has to sit there. So really, the modern app platform has to be agnostic of the infrastructure and support all of that infrastructure. Now, you talked about customers wanting to build AI applications fast. Yeah. But they want to build all applications fast. And it is actually wrong to think that you would have a different DevOps model for AI applications. I'll tell you why. I was building our own applications, asking our development teams to add this conversational AI to the Hub. But it is an integral feature of the Hub. So if I had a completely different development team doing the Hub with a different process, how would I bring all these products together? Fundamentally, just as you never think, oh, I want a database and that'll be a separate team, incorporating chatbots and incorporating AI models has to become a fundamental part of the DevOps process and the platform. The pattern is clear. If you look at AI right now, just zoom out. Look at Amazon. They've got Bedrock, SageMaker, and others. Basically, you're on your own. You've got the Hub, Aria, Tanzu, all together. Now what's that called? The Tanzu Application Platform. Okay, that's the Tanzu Application Platform. Okay, so you have the platform. Same concept: you get curated stuff. Yes. A library, there's Llama, other stuff, other applications, and then open source. Okay, I get that. But what infrastructure does that land on? Because the question is, I'm going to build an app. Where does it land? Does it land wherever I want it to land? Do I have to architect that? The question is, do I have to do all this re-architecting to make it happen? I think this is where we have to define who the "you" is that has to design it.
So there is the developer, and the developer should not think about where it lands or how it lands. They may say where they want it to land. So we are more and more seeing the evolution of a platform engineering team, and I think we've talked about it. The platform engineering team not only curates this golden path to production, as we talked about, curates what models are approved, curates what the guardrails are, but they also define what environments this application will land on. And with what we talked about with the Tanzu Application Engine, as long as it is a supported environment, the application can land on it. And what is supported today? Any Kubernetes, any cloud, including VMware Cloud. Now, of course, if it lands on VMware Cloud, you get the extra secret sauce of all the Private AI Foundation. Yeah. I love this business. I think modern applications are kind of not well understood right now. We're going to help get the word out. That would be awesome. I think it's the future, and some people wonder about chicken and egg, but I think that's not the case here, because you can kind of get it done. Sorry, what do you mean by chicken and egg? Well, do I have to build the platform first, then the apps come second? Can I have apps sit on the platform? I think people might first come to the conclusion that they have to look at it that way. But when you look at how you do it, you can do the apps first. You can do the apps, and then the platform. There are no dependencies. There's no... you meet the customer where they are. If you're starting greenfield and thinking about the platform, great, start with that. If you already have apps, you can bring them onto the platform and get the benefit of all the things that we talked about. Talk about your plans going forward, because this is going to be an integral part of the developer, the DevOps.
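The platform engineering role described above — curating approved models, guardrails, and landing environments — amounts to a policy check that every app spec passes through before it lands. A minimal sketch, with entirely hypothetical approval lists and a made-up `validate` helper (not a real Tanzu API):

```python
# Hypothetical guardrail check a platform engineering team might encode:
# only approved models and landing environments pass.

APPROVED_MODELS = {"llama-2-13b", "mistral-7b"}
APPROVED_ENVIRONMENTS = {"vmware-cloud", "any-kubernetes"}

def validate(app_spec: dict) -> list[str]:
    """Return guardrail violations; an empty list means the spec is approved."""
    violations = []
    if app_spec.get("model") not in APPROVED_MODELS:
        violations.append(f"model {app_spec.get('model')!r} is not approved")
    if app_spec.get("environment") not in APPROVED_ENVIRONMENTS:
        violations.append(
            f"environment {app_spec.get('environment')!r} is not approved")
    return violations

print(validate({"model": "llama-2-13b", "environment": "vmware-cloud"}))
print(validate({"model": "unvetted-model", "environment": "laptop"}))
```

The developer never sees the lists; they just declare a spec, and the golden path either accepts it or reports exactly which guardrail it tripped.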
I agree with you, by the way; we're in 100% alignment on DevOps and platform engineering. We've been pontificating till the cows come home. Yeah, platform engineering. We were at KubeCon in February talking about platform engineering. In fact, some think SuperCloud is VMware, and it's really our initiative, because we think the SuperCloud is about an operating system across multiple environments that runs apps and has all kinds of subsystems and glue, connective tissue, glue layers, whatever they call it. Right. I/O, sound familiar? So you guys are well positioned for that. What's the plan? Get the word out? Is it getting more pilots? What's your business plan to get to the customer? A few things, right? Again, if you think about the Tanzu portfolio, you have the Tanzu Application Platform and the Tanzu Intelligence Services. People don't have to consume it all at the same time; there are different landing points. So for the Tanzu Application Platform, of course, it's getting the word out that this is easy, this makes your life easy. You want to get applications built, you want AI/ML in your applications: use the Tanzu Application Platform and your developers get productive, right? Getting that word out and getting more and more customers engaged. The good thing is we already have some high-scale Tanzu customers, and we just need to expand that reach further. With the Tanzu Intelligence Services, which are essentially cost, performance, and security across clouds, we already have a customer base, but it is helping these customers not just think about problems as isolated things, but rather shift left into the development cycle, shift left into whatever platform they may have. In fact, I'm fine if the intelligence services are attached to Bedrock or something else, right? Similarly, the app platform can have some other subsystem managing and monitoring it. So they're independent, but together they will sing. So you're okay with mixing and matching? Absolutely.
I would be crazy to say to a customer that you have to rip and replace everything. So it's very critical, and I want to make sure this lands: these are modular, so you get the pieces you need and we will help you accelerate. Well, how would you describe Raghu's keynote today? Because obviously he's enthusiastic. You saw him up there, he and Jensen like two little kids who just discovered something. Look at the smiles on them; these guys have been around the block. They've been through the industry together. And it's so respectful. But Raghu is passionate about this. He has been reading Dave Vellante's Breaking Analysis. He loves Rob's research. He watches theCUBE. He's a student of AI and he's geeking out. He's geeking out. How would you summarize his view and what he presented today? Because he laid out a pretty good case that you've got data scientists, but now you have every application in an enterprise available for upgrading, basically. Exactly. How would you translate his keynote? I mean, when Raghu and I have had conversations, and he has had conversations with all of us, one thing is clear: AI has always been there, but generative AI has been path-breaking. It has just lit up people's imagination. Imagine, you know, in two months from when ChatGPT was announced, 100 million users, unprecedented. And that is what he wants to harness. It's legit. It's next level. It's next level. And that's what he wants to harness. And for us, even what we could unleash, people are shocked. Go optimize the cost for this. And by the way, you just ask it in English. That's crazy, right? And so the power to transform healthcare, the power to transform legal, the power to transform customer success: there are some areas that are so ripe for transformation, areas thought of as old and stodgy, right? We can really bring that power here. So Raghu, that is why he's so excited.
It's not just the technology, but the transformational power of that technology that makes him excited. And that is why we spend all this time, in a frenzied effort, to make VMware the best place for people to land in order to do that transformation. You know, first of all, I think that the AI in foundation models is legit. It's just changing the game at many levels. But I feel younger because of it, because it's such a game changer from a computer science standpoint, from an operations standpoint. So I can see how it's a fountain of youth for us old systems thinkers, because we love it. And so I met a startup, young guys, they're under 30. They have built an app, a system that integrates into Slack, where the workers are. And you just issue commands right there in Slack, to the team Slack. "Provision my Kubernetes," and it just goes out and does it. So you're starting to see ease of use coming into applications. So it's not just apps themselves, it's functionality in other apps. It's our space, right? It's functionality that has traditionally been our space. It's personalization. Yes. Personalization is coming to IT. I mean, that's a mind bender right there. Yeah, and I think also, you have the big LLMs and the big players and the big models. And then as you get down, there's actually, and Dave Vellante, John, and I went through this power-law type analysis, a really long tail of these specialized or segmented language models that are going to be used. You see the numbers, right? Yeah, and that seems to be where you guys are going with some of the ones you're going to talk about today and tomorrow. I know you don't want to give away any of the keynote for tomorrow, but it seems like that is an objective. Yeah, I mean, there are probably two different objectives. From an app platform perspective, I embrace all models. Seriously.
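The Slack pattern mentioned above — issuing an infrastructure command in chat and having a bot carry it out — boils down to a message dispatcher. Here is a minimal sketch; the command grammar and the `handle_message` function are invented for illustration (a real bot would call a provisioning API rather than return a string):

```python
# Sketch of the "issue commands in Slack" pattern: a trivial dispatcher
# that maps a chat message to a (stubbed) provisioning action.

def handle_message(text: str) -> str:
    """Route a chat command to an action; unknown commands get a fallback."""
    words = text.lower().split()
    if words[:2] == ["provision", "kubernetes"]:
        name = words[2] if len(words) > 2 else "default"
        return f"provisioning cluster '{name}'..."
    return "sorry, I don't understand that command"

print(handle_message("provision kubernetes staging"))
```

In practice the dispatching is increasingly done by a language model instead of keyword matching, which is exactly the ease-of-use shift being described.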
Because what we want to do is make it easy for people to develop apps and put those models at their fingertips. And you'll see some of those demos there. And then, on making systems management easy, the Slack example you gave: take a look at what we are doing with Tanzu Intelligent Assist. That is next-level game changing. And what I tell customers is, you have to be ready to capitalize on the opportunity. And that's the boring part. We did the boring part last year, the hard part of bringing everything into a graph database and normalizing the data, so that this time we could just turn it on. There's no such thing as luck. Preparation meets opportunity. Yes, yes. You guys have done a lot of work. And by the way, with our transcript example at theCUBE, people who were hoarding data and doing foundational work around platforms were well positioned, because they got the data right for what you were doing. And with Aria, I think you nailed that last year. It's pretty obvious that you had that right. But now AI is a gift. AI is a gift. Generative AI is a gift to you. Yes. And your customers. Yes. It takes away all the mundane stuff. Yeah. And makes it smarter. Exactly. And also, if you think about it, why should somebody have to know that for multi-region I have to call these 25 commands to create all those clusters? Why should somebody have to know that? No, it's going to be- And that is what's- It's going to create a creative class in IT. I mean, as tech lovers, we love it: it's going to create a creative class in technology that we've never seen before. A Cambrian explosion. Because the AI will handle stuff. AI will handle stuff. The manual labor, the boring, grinding toil, whatever you want to call it, the configuring and all the stuff that you have to do. And more importantly, we have had AI for a long time, but now you can speak to it in English. I think that's the power. It scales it.
I was showing a demo of this Tanzu Intelligent Assist to our teams, and they just went crazy. People were just, wow. Systems engineers who have looked at this said, my goodness, I want it tomorrow, right? Well, great to have you on. Good to see you. Thanks for sharing all that data and enthusiasm. We went off on Tanzu a little bit there, getting excited about the technology. Exactly. Congratulations on the Tanzu Application Platform and everything in Tanzu. And congratulations on SpringOne. Give a quick plug for SpringOne, the conference you just had. Give it 30 seconds. Yeah, the SpringOne conference was fantastic. This is the first time we have converged it with VMware Explore: packed audience, standing room only, and the enthusiasm of that developer community is just amazing, and we'll continue to double down there. Thank you so much. Senior Vice President and General Manager of Modern Applications for VMware, a big part of the focus of the next-gen VMware. Of course, we've got all the coverage here on theCUBE, bringing the data to you; soon there will be AI bots. I'm John Furrier, with Rob Strechay. Thanks for watching. Thank you.