Everyone, welcome to theCUBE here in Palo Alto, California for a special presentation on cloud native at scale, enabling super cloud modern applications with Platform9. I'm John Furrier, your host of theCUBE. We've got a great lineup of three interviews we're streaming today. Madhura Maskasky, the co-founder and VP of product of Platform9, is going to go into detail around Arlon, the open source product, and also the value of what this means for infrastructure as code and for cloud native at scale. Bich Le, the chief architect of Platform9 and a CUBE alumni going back to the OpenStack days, is going to go into why Arlon, what this infrastructure-as-code implication means for customers, and the implications in the open source community and where that value is. Really great wide-ranging conversation there. And of course, Bhaskar Gorti, the CEO of Platform9, is going to talk with me about his views on super cloud and why Platform9 has a scalable solution to bring cloud native at scale. So enjoy the program, see you soon.

Hello, and welcome to theCUBE here in Palo Alto, California for a special program on cloud native at scale, enabling the next generation cloud, or super cloud, for modern application cloud native developers. I'm John Furrier, host of theCUBE. Pleasure to have here Madhura Maskasky, co-founder and VP of product at Platform9. Thanks for coming in today for this cloud native at scale conversation.

Thank you for having me.

So, cloud native at scale, something that we're talking about because we're seeing the next level of mainstream success of containers, Kubernetes and cloud native development. Basically DevOps and the CI/CD pipeline are changing the landscape of infrastructure as code. It's accelerating the value proposition, and the super cloud, as we call it, has been getting a lot of traction because this next generation cloud is looking a lot different, but kind of the same as the first generation.
What's your view on super cloud as it fits into cloud native at scale?

Yeah, you know, I think what's interesting, and I think the reason why super cloud is a really good and fitting term for this, and I know my CEO was chatting with you as well and he was mentioning this too, is that there needs to be a different term than just multi cloud or cloud. And the reason is because as cloud native and cloud deployments have scaled, I think we've reached a point now where instead of having the traditional data center style model, where you have a few large deployments of infrastructure and workloads at a few locations, the model has kind of flipped around, right? You have a large number of micro sites. These micro sites could be your public cloud deployments, your private on-prem infrastructure deployments, or your edge environments, right? And every single enterprise, every single industry is moving in that direction. And so you've got to refer to that with terminology that indicates the scale and complexity of it. And so I think super cloud is an appropriate term for that.

So you brought up a couple of things I want to dig into. You mentioned edge nodes. We're seeing not only edge nodes being the next kind of area of innovation, mainly because it's just popping up everywhere, and that's just the beginning. We don't even know what's around the corner. You've got buildings, you've got IoT, OT and IT kind of coming together. But you've also got this idea of regions; global infrastructure is a big part of it. I just saw some news around Cloudflare shutting down a site here. There's policies being made at scale, these new challenges there. Can you share, because you can have edge, so hybrid cloud is a winning formula. Everybody knows that, it's a steady state. But across multiple clouds brings in this new un-engineered area. It hasn't been done yet, spanning clouds. People say they're doing it, but you start to see the toe in the water.
It's happening, it's going to happen. It's only going to get accelerated with the edge and beyond, globally. So I have to ask you, what are the technical challenges in doing this? Because there are some business consequences as well, but there are technical challenges. Can you share your view on what the technical challenges are for the super cloud, or across multiple edges and regions?

Yeah, absolutely. So I think in the context of this term of super cloud, it's sometimes easier to visualize things in terms of two axes, right? On one axis, you can think of the scale in terms of just the pure number of nodes that you have deployed, or the number of clusters in the Kubernetes space. And then on the other axis, you have your distribution factor, right? Which is, do you have these tens of thousands of nodes in one site, or do you have them distributed across tens of thousands of sites with one node at each site? And if you have just one flavor of this, there is enough complexity, but it's potentially manageable. But when you are expanding on both these axes, you really get to a point where that scale needs some well thought out, well structured solutions to address it, right? A combination of homegrown tooling along with your favorite distribution of Kubernetes is not a strategy that can help you in this environment. It may help you when you have one of these, or when your scale is not at that level.

Can you scope the complexity? Because, I mean, I hear a lot of moving parts going on there. The technology is also getting better. We're seeing cloud native become successful. There's a lot to configure, there's a lot to install. Can you scope the scale of the problem? What are the at-scale challenges here?

Yeah, absolutely. And I like to call it the problem that the scale creates. There are various problems, but one way to think about it is the "it works on my cluster" problem, right?
So I come from an engineering background, and there's a famous saying between engineers and QA and the support folks, right? Which is "it works on my laptop": I tested this change, everything was fantastic, it worked flawlessly on my machine; on production, it's not working. Now the exact same problem happens in these distributed environments, but at massive scale, right? Developers test their applications, et cetera, within the sanctity of their sandbox environments. But then you expose that change to the wild world of your production deployment, right? And the production deployment could be at the radio cell tower at the edge location where a cluster is running. Or it could be sending these applications and having them run at my customer's site, where they might not have configured that cluster exactly the same way as I configured it. Or they configured the cluster right, but maybe they didn't apply the security policies, or they didn't deploy the other infrastructure plugins that my app relies on. All of these various factors add their own layer of complexity, and there really isn't a simple way to solve that today. And that is just one example of an issue that happens. I think a whole new ballgame of issues comes up in the context of security, right? Because when you're deploying applications at scale in a distributed manner, you've got to make sure someone's job is on the line to ensure that the right security policies are enforced regardless of that scale factor. So I think that's another example of problems that occur.

Okay, so I have to ask about scale, because there are multiple steps involved. When you see the success of cloud native, you see some experimentation. They set up a cluster, say it's containers and Kubernetes, and then they say, okay, we got this, we configure it, and then they do it again and again. They call it day two. Some people call it day one, day two operations, whatever you call it.
Once you get past the first initial thing, then you've got to scale it. Then you're seeing security breaches, you're seeing configuration errors. This seems to be where the hotspot is, and where companies transition from "I got this" to "oh no, it's harder than I thought" at scale. Can you share your reaction to that and how you see this playing out?

Yeah, so I think it's interesting. There are multiple problems that occur when the two factors of scale, as we talked about, start expanding. One of them is what I like to call the "it works fine on my cluster" problem, which is, back when I was a developer, we used to call it the "it works on my laptop" problem: you have your perfectly written code that is operating just fine on your machine, your sandbox environment, but the moment it runs in production, it comes back with P0s and P1s from support teams, et cetera, and those issues can be really difficult to triage, right? And so in the Kubernetes environment, this problem kind of multiplies. It escalates to a higher degree because you have your sandbox developer environments, they have their clusters, and things work perfectly fine in those clusters because these clusters are typically handcrafted, or a combination of some scripting and hand-crafting. And so as you give that change to then run at your production edge location, like say your radio cell tower site, or you hand it over to a customer to run it on their cluster, they might not have configured that cluster exactly how you did, or they might not have configured some of the infrastructure plugins. And so things don't work, and when things don't work, triaging them becomes really hard. But that's just one example of the problem. Another whole bucket of issues is security, which is, as you have these distributed clusters at scale, you've got to ensure someone's job is on the line to make sure that the security policies are configured properly.

So this is a huge problem. I love that comment.
"That's not happening on my system." It's the classic debugging mentality. But at scale, it's hard to do that, and it's error prone. I can see that being a problem. And you guys have a solution you're launching. Can you share what Arlon is, this new product? What is it all about? Talk about this new introduction.

Yeah, absolutely, I'm very, very excited. You know, it's one of the projects that we've been working on for some time now, because we are very passionate about this problem of solving problems at scale in on-prem, cloud, or edge environments. And what Arlon is, it's an open source project, and it is a Kubernetes-native tool for complete end-to-end management of not just your clusters, but your clusters and all of the infrastructure that goes within and alongside those clusters: security policies, your middleware plugins, and finally, your applications. So what Arlon lets you do, in a nutshell, is handle the configuration and management of all of these components, in a declarative way and at scale.

So what's the elevator pitch, simply put, for what this solves in terms of the chaos you guys are reining in? What's the bumper sticker? What would it do?

There's a perfect analogy that I love to reference in this context, which is: think of an assembly line, you know, in a traditional, let's say, auto manufacturing factory, and the level of efficiency at scale that that assembly line brings, right?
Arlon, and if you look at the logo we've designed, it's this funny little robot, and it's because when we think of Arlon, we think of these enterprise large scale environments, you know, sprawling at scale, creating chaos, because there isn't necessarily a well thought through, well structured solution that's similar to an assembly line, which takes each component, addresses it, processes it in a standardized way, then hands it to the next stage, where again it gets processed in a standardized way. And that's what Arlon really does. That's the elevator pitch. If you have problems of scale in managing your infrastructure, you know, that is distributed, Arlon brings the assembly line level of efficiency and consistency to those problems.

So keeping it smooth, the assembly line, things are flowing, CI/CD, pipelining.

Exactly.

So that's what you're trying to simplify, that ops piece for the developer. I mean, it's not really ops, it's their ops, it's coding.

Yeah, not just the developer, the operations folks as well, right? Because developers, you know, are responsible for one piece of that layer, which is my apps, and then maybe the middleware of applications that they interface with. But then they hand it over to someone else, who's then responsible to ensure that these apps are secured properly, that logs are being collected properly, that monitoring and observability are integrated. And so it solves problems for both of those.

Yeah, it's DevOps. So the dev is the cloud-native developer, and the ops teams have to kind of set policies. Is that where the declarative piece comes in? Is that why that's important?

Absolutely, yeah. And Kubernetes really introduced, or elevated, this declarative management, right?
Because Kubernetes clusters, or your specifications of components that go in Kubernetes, are defined in a declarative way, and Kubernetes always keeps that state consistent with your defined state. But when you go outside of that world of a single cluster, and when you actually talk about defining the clusters, or defining everything that's around them, there really isn't a solution that does that today. And so Arlon addresses that problem at the heart of it, and it does that using existing open-source, well-known solutions.

Madhura, I want to get into the benefits, what's in it for me as the customer or developer, but I want to finish this out real quick and get your thoughts. You mentioned open source. Why open source? What's the current state of the product? You run the product group over there at Platform9. Is it open source, and you guys have a product that's commercial? Can you explain the open-source dynamic, and first of all, why open source? And what is the consumption? I mean, open source is great. People want open source. They can download it, look at the code, but maybe they want to buy the commercial. So I'm assuming you have that thought through. Can you share the open-source and commercial relationship?

Yeah. I think, starting with why open source: one of the things that's absolutely critical to us as a company is that we take mainstream open-source technologies and components and make them available to our customers at scale through either a SaaS model or an on-prem model. And so as a startup, or a company that benefits in a massive way from this open-source economy, it's only right, I think, in my mind, that we do our part of the duty and contribute back to the community that feeds us.
And so we have always held that strongly as one of our principles, and we have created and built independent products, starting all the way with Fission, which was a serverless product that we had built, to various other examples that I could give. But that's one of the main reasons why open source. And also open source because we want the community to really engage with us firsthand on this problem, which is very difficult to achieve if your product is behind a wall.

Behind a black box. Well, that's what the developers want, too. And what we're seeing and reporting with super cloud is that the new model of consumption is: I want to look at the code and see what's in there.

That's right.

And then also, if I want to use it, I'll do it. Great, that's open source, that's the value. But then at the end of the day, if I want to move fast, that's when people buy in. So it's a new kind of freemium, I guess, business model. But that's the benefit of open source. This is why standards in open source are growing so fast. You have that confluence of a way for developers to try before they buy, but also actually kind of date the application, if you will. Adrian Cockcroft uses the dating metaphor: hey, I want to check it out first before I get married. And that's what open source is. This is how people are selling. This is not just open source. This is how companies are selling.

Absolutely, yeah. I think two things. One is just, you know, this cloud-native space is so vast that if you're building a closed-source solution, sometimes there's also a risk that it may not apply to every single enterprise's use cases. And so having it open source gives them an opportunity to extend it, expand it, to adapt it to their use case if they choose to do so, right?
But at the same time, what's also critical to us is that we are able to provide a supported version of it with an SLA that's backed by us, and a SaaS-hosted version of it as well, for those customers who choose to go that route. You know, once they have used the open-source version and loved it, and want to take it to scale and into production, they need a partner to collaborate with who can support them in that production environment.

I have to ask you, now let's get into what's in it for the customer. I'm a customer. Why should I be enthused about Arlon? What's in it for me? You know, because if I'm not enthused about it, I'm not going to be confident, and it's going to be hard for me to get behind this. Can you share your enthusiastic view of, you know, why I should be enthused about Arlon if I'm a customer?

Absolutely. So there are multiple enterprises that we talk to, many of them, you know, our customers, where this is a very typical story that you will hear: we have, you know, a Kubernetes distribution, it could be on-premise, it could be public cloud native Kubernetes, and then we have our CI/CD pipelines that are automating the deployment of applications, et cetera. And then there's this gray zone. And the gray zone is: well, before your CI/CD pipelines can deploy the apps, somebody needs to do all of that groundwork of, you know, defining those clusters and, you know, properly configuring them. And these things start by being done by hand, and then as you scale, what typical enterprises do today is build their homegrown DIY solutions for this. I mean, the number of folks that I've talked to that have built Terraform automation, and then, you know, some of those key developers leave. So it's a typical open-source or typical, you know, DIY challenge. And the reason that they're writing it themselves is not because they want to.
I mean, of course, technology is always interesting to everybody, but it's because they can't find a solution out there that perfectly fits the problem. And so that's the pitch. The folks that we've spoken with have been absolutely excited and have, you know, shared that this is a major challenge we have today, because we have, you know, a few hundred clusters on EKS, on Amazon, and we want to scale them to a few thousand, but we don't think we're ready to do that, and this will give us the ability to do that.

Yeah, I think people are scared, not scared. I won't say scared, that's a bad word. Maybe I should say that they feel nervous because, you know, at scale, small mistakes can become large mistakes. This is something that is concerning to enterprises, and I think this is going to come up at KubeCon this year, where enterprises are going to say, okay, I need to see SLAs. I want to see a track record. I want to see other companies that have used it. How would you answer that question or challenge? You know, hey, I love this, but are there any guarantees? What are the SLAs? I'm an enterprise, I've got tight requirements. I love the open source, try it for free, fast and loose, but I need hardened code.

Yeah, absolutely. So, two parts to that, right? One is, Arlon leverages existing open-source components, products that are extremely popular. Two specifically: one is Arlon uses Argo CD, which is probably one of the highest rated and most used open-source CD tools that's out there, right? It's created by folks that were part of the Intuit team, you know, a really brilliant team, and it's used at scale across enterprises. That's one. Second is, Arlon also makes use of Cluster API, CAPI, which is a Kubernetes sub-project, right, for lifecycle management of clusters.
So there are enough, you know, community users, et cetera, around these two open-source projects that they will find Arlon to be right up their alley, because they're already comfortable and familiar with Argo CD, and Arlon just extends the scope of what Argo CD can do. So that's one. And then the second part, going back to your point about comfort, that's where, you know, Platform9 has a role to play, which is when you are ready to deploy Arlon at scale, because you've been, you know, playing with it in your dev/test environments and you're happy with what you get from it, then Platform9 will stand behind it and provide that SLA.

And what's been the reaction from customers you've talked to, Platform9 customers that are familiar with Argo and then Arlon? What's been some of the feedback?

Yeah, I think the feedback's been fantastic. I mean, I can give examples of customers where, you know, initially, when you're telling them about your entire portfolio of solutions, it might not strike a chord right away. But then we start talking about Arlon, and we talk about the fact that it uses Argo CD, and they start opening up. They say, we have standardized on Argo and we have built these components homegrown, we would be very interested. Can we co-develop? Does it support these use cases? So we've had that kind of validation. We've had validation all the way at the beginning of Arlon, before we even wrote a single line of code, saying this is something we plan on doing, and the customers said, if you had it today, I would have purchased it. So it's been really great validation.

All right, so the next question is, what is the solution for the customer? If I said to you, look, I'm so busy, my team's overworked, I've got a skills gap, I don't need another project. It's so tied up right now, and I'm just chasing my tail. How does Platform9 help me?

Yeah, absolutely.
One of the core tenets of Platform9 has always been that we try to bring that public cloud simplicity by hosting this, and a lot of similar tools, in a SaaS-hosted manner for our customers. So our goal behind doing that is taking away, or trying to take away, all of that complexity from the customer's hands and offloading it to our hands, and giving them that full white-glove treatment, as we call it. And so from a customer's perspective, something like Arlon will integrate with what they have, so they don't have to rip and replace anything. In fact, in future versions it may even discover the clusters you have today and give you an inventory.

So customers have clusters that are growing, that's a sign?

Correct.

Call you guys.

Absolutely. They have massive, large clusters, right? That they want to split into smaller clusters, but they're not comfortable doing that today. Or they've done that already, on say public cloud or otherwise, and now they have management challenges.

So it's basically operationalizing the clusters, whether they want to reset everything and move things around and reconfigure, and/or scale out.

That's right, exactly.

And you provide that layer of policy.

Absolutely, yes.

That's the key value here.

That's right. So policy-based configuration for cluster scale-up?

Profile- and policy-based declarative configuration and lifecycle management for clusters.

If I asked you how this enables super cloud, what would you say to that?

I think this is one of the key ingredients of super cloud, right? If you think about a super cloud environment, there are at least a few key ingredients that come to my mind that are really critical, like life-saving ingredients at that scale. One is having a really good strategy for managing that scale, going back to the assembly line, in a very consistent, predictable way. So Arlon solves that. Then you need to complement that with the right kind of observability and monitoring tools at scale, right?
Because ultimately, issues are going to happen, and you're going to have to figure out how to solve them fast. And Arlon, by the way, also helps in that direction. But you also need observability tools. And then, especially if you're running it on the public cloud, you need some cost management tools. In my mind, those three things are the most necessary ingredients to make super cloud successful. And Arlon fills in one.

Okay, so now the next level is, okay, that makes sense. It's under the covers, so to speak, under the hood. How does that impact the app developers and the cloud native modern application workflows? Because the impact to me seems like the apps are going to be impacted. Are they going to be faster, stronger? I mean, what's the impact, if you do all those things as you mentioned? What's the impact on the apps?

Yeah, the impact is that your apps are more likely to operate in production the way you expect them to, because the right checks and balances have been gone through, and any discrepancies have been identified before your customer runs into them. Because developers run into this challenge today where there's a split responsibility, right? I'm responsible for my code, I'm responsible for some of these other plugins, but I don't own the stack end to end. I have to rely on my ops counterpart to do their part right. And so this really gives them the right tooling for that.

This is actually a great and relevant point. As cloud becomes more scalable, you're starting to see this fragmentation: gone are the days of the full-stack developer, in favor of the more specialized role. But this is a key point I have to ask you, because if this Arlon solution takes place, as you say, and the apps are going to do what they're designed to do, the question is, what does the current pain look like? Are the apps breaking?
What are the signals to the customer that they should be calling you guys up and implementing Arlon, Argo, and all the other goodness to automate? What are some of the signals? Is it downtime? Is it failed apps? Is it latency? What are some of the things that would be indications of things that are effed up a little bit?

Yeah. More frequent downtimes. Downtimes that take longer to triage, so your mean times to resolution, et cetera, are escalating or growing larger. Like, we have customers where they have a number of folks in the field that have to take these apps and run them at customer sites. That's one of our partners, and they're extremely interested in this because of the rate of failures they're encountering in the field when they're running these apps on site, because the field is automating the clusters that are running on those sites using their own scripts. So these are the kinds of challenges. So those are the pain points: if you're looking to reduce your mean time to resolution, if you're looking to reduce the number of failures that occur on your production site, that's one. And second, if you're looking to manage these at-scale environments with a relatively small, focused, nimble ops team, which has an immediate impact on your budget. So those are the signals.

This is the cloud native at scale situation, the innovation going on. Final thought is your reaction to the idea that if the world goes digital, which it is, and the convergence of physical and digital comes together, and cloud continues to do its thing, the company becomes the application. Not where IT used to be, supporting the business, you know, the back office and the media terminals and some PCs and handhelds. Now, if technology is running the business, is the business, the company is the application. So it can't be down. So there's a lot of pressure on CSOs and CIOs now, and boards are saying, how is technology driving the top-line revenue?
That's the number one conversation. Do you see the same thing?

Yeah, it's interesting. I think there are multiple pressures at the CIO level, right? One is that there needs to be that visibility and clarity, and almost a guarantee, that, you know, the technology that's going to drive your top line is going to drive that in a consistent, reliable, predictable manner. And then second, there is the constant pressure to do that while always lowering your cost of doing it, right? Especially when you're talking about, let's say, retailers or those kinds of large scale vendors, they many times make money by lowering the amount that they spend on providing those goods to their end customers. So I think both those factors kind of come into play, and the solution to both of them is usually a very structured strategy around automation.

Final question: what does cloud native at scale look like to you? If all the things happen the way we want them to happen, the magic wand, the magic dust, what does it look like?

What that looks like to me is a CIO sipping coffee at his desk, production is running absolutely smooth, and he's running that with a nimble team of, at the most, a handful of folks that are just looking after things.

So just taking care of things. And the CIO doesn't exist; there's no CISO, they're at the beach.

Madhura, thank you for coming on and sharing cloud native at scale here on theCUBE. Thank you for your time.

Fantastic, thanks for having me.

Okay, I'm John Furrier here for a special program presentation, special programming on cloud native at scale, enabling super cloud modern applications with Platform9. Thanks for watching.

Welcome back, everyone, to the special presentation of cloud native at scale, theCUBE and Platform9's special presentation, going in and digging into the next generation super cloud, infrastructure as code, and the future of application development.
We're here with Bich Le, who's the chief architect and co-founder of Platform9. Bich, great to see you. CUBE alumni, we met at an OpenStack event about eight years ago or even earlier, when OpenStack was going. Great to see you, and congratulations on the success of Platform9.

Thank you very much.

Yeah, you guys have been at this for a while, and this is really the year we're seeing the crossover of Kubernetes because of what happens with containers. Everyone now has realized, and you've seen what Docker's doing with the new Docker, the open source Docker now, the success of containerization. And now the Kubernetes layer that we've been working on for years is bearing fruit. This is huge.

Exactly, yes.

And so as infrastructure as code comes in, we talked to Bhaskar about super cloud, and to Madhura about, you know, the new Arlon you guys just launched. Infrastructure as code is going to another level, and it's always been DevOps, infrastructure as code. That's been the ethos. That's been like, from day one, developers just code. Then you saw the rise of serverless, and you see now multi-cloud around the horizon. Connect the dots for us. What is the state of infrastructure as code today?

So I'm glad you mentioned it. Everybody, or most people, know about infrastructure as code, but with Kubernetes, I think that project has evolved the concept even further, and these days it's infrastructure as configuration, right? Which is an evolution of infrastructure as code. So instead of telling the system, here's how I want my infrastructure, by telling it, do step A, B, C and D, with Kubernetes you can describe your desired state declaratively using things called manifests, or resources, and then the system kind of magically figures it out and tries to converge the state towards the one that you specified. So I think it's an even better version of infrastructure as code.
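The desired-state model described there can be sketched in a few lines: you declare *what* you want, and a reconciler derives the imperative create/update/delete steps for you. This is a toy illustration only, with plain dicts standing in for manifests; real Kubernetes controllers work against API resources, not in-memory data.

```python
# Toy sketch of declarative reconciliation: given a desired state and
# the actual state, compute the imperative actions needed to converge.
# (Illustrative names; not Kubernetes code.)

def reconcile(desired: dict, actual: dict) -> list:
    """Diff desired vs. actual and return (operation, name, spec) actions."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))   # missing object
        elif actual[name] != spec:
            actions.append(("update", name, spec))   # drifted object
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, None))   # unwanted object
    return actions

def apply(actual: dict, actions: list) -> dict:
    """The system, not the operator, carries out the imperative steps."""
    for op, name, spec in actions:
        if op == "delete":
            actual.pop(name)
        else:
            actual[name] = spec
    return actual
```

Running `reconcile` in a loop (a control loop) is what keeps the live state pinned to the declared state even after out-of-band changes, which is the property Bich contrasts with "do step A, B, C and D" scripting.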
And that really means it's developers just accessing resources and declaring: give me some compute, stand me up some, turn the lights on, turn them off, turn them on. That's kind of where we see this going, and I like the configuration piece. Some people say composability. I mean, now with open source so popular, you don't have to write a lot of code; the code's already being developed. And so it's integration, it's configuration. These are areas where we're starting to see computer science principles around automation and machine learning assisting open source, because you've got a lot of code. That's right. You're hearing software supply chain issues. So infrastructure as code has to factor in these new dynamics. Can you share your opinion on these new dynamics? As open source grows, the glue layers, the configurations, the integration, what are the core issues? I think one of the major core issues is that with all that power comes complexity. So despite its expressive power, systems like Kubernetes and declarative APIs let you express a lot of complicated and complex stacks, but you're dealing with hundreds, if not thousands, of these YAML files or resources. And so I think the emergence of systems and layers to help you manage that complexity is becoming a key challenge and opportunity in this space. That's right. I wrote a LinkedIn post today with comments about, hey, enterprise is the new breed. The trend of SaaS companies moving consumer-like thinking into the enterprise has been happening for a long time. But now more than ever, you're seeing the old way, which used to be solve complexity with more complexity and then lock the customer in. Now with open source, it's speed, simplification, and integration. These are the new power dynamics for developers. So as companies are starting to now deploy and look at Kubernetes, what are the things that need to be in place?
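One common way teams tame the "hundreds, if not thousands, of YAML files" problem Dick mentions is to generate similar manifests from one parameterized template instead of hand-maintaining each file. A minimal sketch, assuming hypothetical service names and registry URLs, with Python dicts standing in for the YAML:

```python
# One parameterized template instead of hundreds of hand-edited files.
def render_deployment(name, image, replicas=2):
    """Return a minimal Deployment-shaped manifest as a plain dict."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "template": {"spec": {"containers": [{"name": name, "image": image}]}},
        },
    }

# Hypothetical services; in practice this table might live in one values file.
services = {"web": "registry.example/web:1.4", "api": "registry.example/api:2.0"}
manifests = [render_deployment(n, img) for n, img in services.items()]
print(len(manifests), "manifests generated from one template")
```

Real-world tools in this space (Helm, Kustomize, and the layers Dick alludes to) are far richer, but the core idea is the same: a single source of truth, many rendered resources.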
Because you have some, I won't say technical debt, but maybe some shortcuts, some scripts here and there that make it look like infrastructure as code. People have done some things to simulate or make infrastructure as code happen. But to do it at scale is harder. What's your take on this? What's your view? It's hard because there's a proliferation of methods, tools, technologies. So for example, today it's very common for DevOps and platform engineering teams to have to deploy a large number of Kubernetes clusters, but then also apply the applications and configurations on top of those clusters. And they're using a wide range of tools to do this, right? For example, maybe Ansible or Terraform or Bash scripts to bring up the infrastructure and then the clusters. And then they may use a different set of tools, such as Argo CD or other tools, to apply configurations and applications on top of the clusters. So you have the sprawl of tools. You also have the sprawl of configurations and files, because the more objects you're dealing with, the more resources you have to manage. And there's a risk of what people call drift, where you think you have things under control, but some people from various teams will make changes here and there, and then before the end of the day systems break and you have no way of tracking them. So I think there's a real need to kind of unify, simplify, and try to solve these problems using a smaller, more unified set of tools and methodologies. And that's something that we try to do with this new project, Arlon. Yeah, so we're going to get to Arlon, and I want to get into the why of Arlon. You guys announced that at ArgoCon, which was put on here in Silicon Valley at the community meeting at Intuit; they had their own little day over there at their headquarters. But before we get there, Bhaskar, your CEO, came on and he talked about SuperCloud at our inaugural event. What's your definition of SuperCloud?
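The drift problem Dick describes, where the live system quietly diverges from what the team thinks it declared, is at heart a diff between two states. A toy sketch of the idea (the keys and values are invented; GitOps tools like Argo CD do this comparison against real cluster objects):

```python
def detect_drift(desired, live):
    """Report every field whose live value differs from the declared state."""
    drifted = {}
    for key, want in desired.items():
        have = live.get(key)
        if have != want:
            drifted[key] = {"want": want, "have": have}
    return drifted

desired = {"replicas": 3, "image": "app:1.2", "log-level": "info"}
live    = {"replicas": 3, "image": "app:1.2", "log-level": "debug"}  # someone tweaked it
print(detect_drift(desired, live))
```

With the desired state versioned in Git, this kind of diff is what turns "someone changed something somewhere" into a tracked, reviewable event.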
If you had to kind of explain it to someone at a cocktail party, or to someone technical in the industry, how would you look at the SuperCloud trend that's emerging? It's become a thing. What would be your contribution to that definition or the narrative? Well, it's funny because I actually heard the term for the first time today, speaking to you earlier today. But I think based on what you said, I already get kind of some of the gist and the main concepts. It seems like SuperCloud, the way I interpret that, is clouds and infrastructure, programmable infrastructure, all of those things are becoming commodity in a way, and everyone's got their own flavor. But there's a real opportunity for people to solve real business problems by perhaps trying to abstract away all of those various implementations and then building better abstractions that are perhaps business or application-specific, to help companies and businesses solve real business problems. Yeah, that's a great definition. I remember, not to date myself, but back in the old days, IBM had a proprietary network operating system, and so did DEC for the mini-computers: SNA and DECnet, respectively. But TCP/IP came out of the OSI world, the Open Systems Interconnect. And remember, Ethernet beat Token Ring out. So not to get all nerdy for all the young kids out there, just look up Token Ring; you've probably never heard of it. It was IBM's networking connection at layer two. Is Amazon the Ethernet? So TCP/IP could be the Kubernetes and containers abstraction that made the industry completely change at that point in history. So at every major inflection point where there's been serious industry change and wealth creation and business value, there's been an abstraction somewhere. What's your reaction to that?
I think there's a saying that's been heard many times in this industry, and I forgot who originated it, but the saying goes: there's no problem that can't be solved with another layer of indirection, right? And we've seen this over and over and over again, where Amazon and its peers have inserted this layer that has simplified computing and infrastructure management. And I believe this trend is going to continue, right? The next set of problems are going to be solved with these insertions of additional abstraction layers. I think that's really gonna continue. It's interesting, I just remember another post today on LinkedIn about the silicon wars: AMD stock is down, ARM has been on the rise. We've been reporting for many years now that ARM is gonna be huge, and it has become true. If you look at the success of the infrastructure as a service layer across the clouds, Azure, AWS, Amazon's clearly way ahead of everybody. The stuff that they're doing with the silicon and the physics and the atoms, the performance, you know, this is where the innovation is; they're going so deep and so strong at IaaS. The deeper they go, the more performance they get. So if you're an app developer, wouldn't you want the best performance, and wouldn't you want the best abstraction layer that gives you the most ability to do infrastructure as code or infrastructure as configuration, for provisioning, for managing services? And you're seeing that today with service meshes; a lot of action going on in the service mesh area in this community of KubeCon, which we'll be covering. So that brings up the whole what's next. You guys just announced Arlon at ArgoCon, which came out of Intuit. We've had Marianna Tessel at our super cloud event; she's the CTO, and you know, they're all in on the cloud. So they contributed that project. Where did Arlon come from? What was the origination? What's the purpose? Why Arlon? Why this announcement?
Yeah, so the inception of the project, this was the result of us realizing that problem that we spoke about earlier, which is complexity, right? With all of these clouds, this infrastructure, all the variations around compute, storage, networks, and the proliferation of tools we talked about, the Ansibles and Terraforms and Kubernetes itself. You can think of that as another tool, right? We saw a need to solve that complexity problem, especially for people and users who use Kubernetes at scale. So when you have hundreds of clusters, thousands of applications, thousands of users spread out over many, many locations, there needs to be a system that helps simplify that management, right? So that means fewer tools, more expressive ways of describing the state that you want, and more consistency. And that's why we built Arlon. And we built it recognizing that many of these problems or sub-problems have already been solved. So Arlon doesn't try to reinvent the wheel. It instead rests on the shoulders of several giants, right? So for example, Kubernetes is one building block. GitOps and Argo CD is another one, which provides a very structured way of applying configuration. And then we have projects like Cluster API and Crossplane, which provide APIs for describing infrastructure. So Arlon takes all of those building blocks and builds a thin layer which gives users a very expressive way of defining configuration and desired state. So that's kind of the inception of the project. And what's the benefit of that? What does that give the developer or the user in this case? The developers, the platform engineers, team members, the DevOps engineers, they get ways to provision not just infrastructure and clusters, but also applications and configurations. They get a system for provisioning, configuring, deploying and doing lifecycle management in a much simpler way, okay? Especially, as I said, if you're dealing with a large number of applications.
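The "thin layer over existing building blocks" idea can be pictured as a bundle: a cluster shape plus the add-ons and apps that every instance should carry. The sketch below is purely conceptual; Arlon's actual CRDs and APIs are not shown, and the profile fields, add-on names, and app names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    """A named bundle: cluster shape plus the add-ons and apps it carries."""
    name: str
    cluster_spec: dict
    addons: list = field(default_factory=list)
    apps: list = field(default_factory=list)

def stamp_cluster(profile, cluster_name):
    """'Stamp out' one cluster instance from a profile."""
    return {
        "name": cluster_name,
        "spec": dict(profile.cluster_spec),
        "workloads": list(profile.addons) + list(profile.apps),
    }

edge = Profile(
    name="edge-store",
    cluster_spec={"nodes": 3, "version": "1.25"},
    addons=["monitoring", "logging", "ingress"],
    apps=["pos-app"],
)
clusters = [stamp_cluster(edge, f"store-{i}") for i in range(100)]
print(len(clusters), clusters[0]["workloads"])
```

The point is the leverage: one profile definition, a hundred identically configured clusters, which is the "fewer tools, more consistency" outcome described above.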
So it's like an operating fabric, if you will. Yes. For them. Okay, so let's get into what that means for up above and below this abstraction or thin layer. Below is the infrastructure. We talked a lot about what's going on below that. Above are workloads. At the end of the day, I talk to CXOs and IT folks that are now DevOps engineers. They care about the workloads, and they want the infrastructure as code to work. They don't want to spend their time getting in the weeds, figuring out what happened when someone made a push or something happened. They need observability and they need to know that it's working. That's right. And that my workloads are running effectively. So how do you guys look at the workloads side of it? Because now you have multiple workloads on this fabric. Right. So workloads, so Kubernetes has defined kind of a standard way to describe workloads. And you can, you know, tell Kubernetes, I wanna run this container this particular way. Or you can use other projects that are in the Kubernetes cloud native ecosystem, like Knative, where you can express your application at a higher level, right? But what's also happening is, in addition to the workloads, DevOps and platform engineering teams very often need to deploy the applications with the clusters themselves. Clusters are becoming this commodity. It's becoming this host for the application, and it kind of comes bundled with it in many cases. It's like an appliance, right? So DevOps teams have to provision clusters at a really incredible rate and they need to tear them down. Clusters are becoming more like that. It's becoming like an EC2 instance. Spin up a cluster. We've heard people use words like that. That's right. And before Arlon, you kind of had to do all of that using a different set of tools, as I explained. So with Arlon, you can kind of express everything together. You can say, I want a cluster with a health monitoring stack and a logging stack and this ingress controller.
And I want these applications and these security policies. You can describe all of that using something we call the profile, and then you can stamp out your applications and your clusters and manage them in a very... So it's essentially standardized. It creates a mechanism. It's standardized, declarative kind of configurations, and it's like a playbook. You just deploy it. Now, what's the difference between that and, say, a script? Like, I have scripts, I could just automate scripts. Yes, this is where that declarative API and infrastructure as configuration comes in, right? Because scripts, yes, you can automate scripts, but the order in which they run matters, right? They can break. Things can break in the middle, and sometimes you need to debug them, whereas the declarative way is much more expressive and powerful. You just tell the system what you want, and then the system kind of figures it out; there are these things called controllers which will, in the background, reconcile all the state to converge towards your desired state. It's a much more powerful, expressive and reliable way of getting things done. So infrastructure as configuration is kind of a superset of infrastructure as code, because you need infrastructure as code, but then you can configure the code by just saying do it. You're basically declaring and saying, go do that. That's right. All right, so cloud native at scale. Take me through your vision of what that means. Someone says, hey, what does cloud native at scale mean? What does success look like? How does it roll out in the future? And I use that future, next couple of years. I mean, people are now starting to figure out, okay, it's not as easy as it sounds. Kubernetes has value. We're going to hear this year at KubeCon a lot of this. What does cloud native at scale mean? Yeah, there are different interpretations, but if you ask me, when people think of scale, they think of a large number of deployments, right?
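The practical difference between a script and a reconciling controller is idempotence: a script that dies halfway leaves partial state and is risky to re-run, while a reconciler is safe to run any number of times. A toy demonstration, with invented step names and dicts standing in for real resources:

```python
def run_script(steps, state, fail_at=None):
    """An ordered script: a mid-run failure leaves partial state behind."""
    for i, (key, value) in enumerate(steps):
        if i == fail_at:
            raise RuntimeError(f"step {i} failed; state is now partial")
        state[key] = value

def reconcile(actual, desired):
    """Declarative: diff against desired state; safe to re-run any time."""
    changes = {k: v for k, v in desired.items() if actual.get(k) != v}
    actual.update(changes)
    return changes

desired = {"ingress": "installed", "cert": "issued", "policy": "applied"}
state = {}
try:
    run_script(list(desired.items()), state, fail_at=1)   # breaks in the middle
except RuntimeError:
    pass
# The reconciler doesn't care where the script died; it converges from here.
print(reconcile(state, desired))   # applies only what's still missing
print(reconcile(state, desired))   # second pass: nothing left to do
```

Real Kubernetes controllers run this diff-and-converge loop continuously in the background, which is what makes the declarative model reliable at scale.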
Geographies, supporting thousands or tens of millions of users; there's that aspect to scale. There's also an equally important aspect of scale, which is also something that we try to address with Arlon, and that is just the complexity for the people operating this or configuring this, right? So in order to describe that desired state, and in order to perform things like maybe upgrades or updates on a very large scale, you want the humans behind that to be able to express and direct the system to do that in relatively simple terms, right? And so we want the tools and the abstractions and the mechanisms available to the user to be as powerful but as simple as possible. So I think there's going to be a number, and there have been a number, of CNCF and cloud native projects that are trying to attack that complexity problem as well. And Arlon kind of falls in that category. Okay, so I'll put you on the spot: we've got KubeCon coming up, and obviously we'll be shipping this series out before it. What do you expect to see at KubeCon this year? What's the big story this year? What's the most important thing happening? Is it in the open source community, and also within a lot of the people jockeying for leadership? There's a lot of projects, and still there's some white space in the overall systems map around the different areas, like runtime and observability, all these different areas. Where's the action? Where's the smoke? Where's the fire? Where's the peace? Where's the tension? Yeah, so I think one thing that has been happening over the past couple of KubeCons, and I expect to continue, is that the word on the street is Kubernetes is getting boring, right? Which is good, right? Boring means simple. Well, maybe. Yeah. Invisible, no drama, right? So the rate of change of the Kubernetes features and all that has slowed, in a positive way. But there's still a general sentiment and feeling that there's just too much stuff.
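The "express upgrades in simple terms" point can be made concrete with a batched rollout: the operator states one target version, and the system walks the whole fleet there. This is a toy sketch; the function, batch size, and version strings are invented, and a real upgrade would replace the one-line assignment.

```python
def rollout(clusters, target_version, batch_size=50):
    """Upgrade clusters in batches toward one declared target version."""
    batches = 0
    pending = [c for c in clusters if c["version"] != target_version]
    while pending:
        for c in pending[:batch_size]:
            c["version"] = target_version        # stand-in for a real upgrade
        batches += 1
        pending = [c for c in clusters if c["version"] != target_version]
    return batches

fleet = [{"name": f"c{i}", "version": "1.24"} for i in range(500)]
print(rollout(fleet, "1.25"))                    # 10 batches of 50
```

The human expressed exactly one intent, the target version; the batching, retry-until-converged loop is the machinery's job, which is the human-scale simplicity being argued for here.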
If you look at a stack necessary for hosting applications based on Kubernetes, there are just still too many moving parts, too many components, right? Too much complexity. I keep going back to the complexity problem. So I expect KubeCon and all the vendors and the players and the startups and the people there to continue to focus on that complexity problem and introduce further simplifications to the stack. Yeah. Dick, you've had a storied career: VMware, over a decade with them, 12 or 14 years, something like that, big number. Co-founder here at Platform 9. You guys have been at this game for a while. We talked about OpenStack; that project, we interviewed at one of their events. So OpenStack was the beginning of this new revolution. I remember in the early days it wasn't supposed to be an alternative to Amazon, but it was a way to do more cloud, cloud native. I think we had the Clouderati at that time. We would joke about the dream; it's happening now. Now at Platform 9, you guys have been doing this for a while. What are you most excited about as the chief architect? What did you guys double down on? What did you guys pivot from or to? Did you do any pivots? Did you extend out certain areas? Because you guys are in a good position right now, a lot of DNA in cloud native. What are you most excited about, and what does Platform 9 bring to the table for customers and for people in the industry watching this? Yeah, so I think our mission really hasn't changed over the years, right? It's been always about taking complex open source software, because open source software, it's powerful, it solves new problems every year, and you have new things coming out all the time.
OpenStack was an example, and then Kubernetes took the world by storm. But there's always that complexity of just configuring it, deploying it, running it, operating it, and our mission has always been that we will take all that complexity and just make it easy for users to consume, regardless of the technology, right? So the successor to Kubernetes, I don't have a crystal ball, but you have some indications that people are coming up with new and simpler ways of running applications. There are many projects out there. Who knows what's coming next year or the year after that? But Platform 9 will be there, and we will take the innovations from the community. We will contribute our own innovations and make all of those things very consumable to customers. Simpler, faster, cheaper: always a good business model, technically hard to make happen. Yeah, I think the reining in of the chaos is key. Now we have visibility into the scale. Final question before we depart in this segment. What is that scale? How many clusters do you see that would be a watermark for an at-scale conversation around an enterprise? Is it workloads we're looking at, or clusters? How would you describe that, when people try to squint through and evaluate what's at scale, what's the at-scale kind of threshold? Yeah, and the number of clusters doesn't tell the whole story, because clusters can be small in terms of the number of nodes, or they can be large. But roughly speaking, when we say large scale cluster deployments, we're talking about maybe hundreds to thousands. And final question. What's the role of the hyperscalers? You got AWS continuing to do well, but they got their core IaaS, they got a PaaS. They're not too much putting a SaaS out there. They have some SaaS apps, but mostly it's the ecosystem. They have marketplaces doing over $2 billion of transactions a year, and it's just been kind of sitting there. They're now innovating on it, but that's going to change ecosystems.
What's the role of the cloud players in cloud native at scale? The hyperscalers themselves? Yeah, AWS, Azure, Google. You mean from a business perspective? Yeah, and technical. They have their own interests that they will keep catering to. They will continue to find ways to lock their users into their ecosystem of services and APIs. So I don't think that's going to change, right? Well, they got great performance from a hardware standpoint. That's going to be key, right? Yes, I think the move from x86 being the dominant way and platform to run workloads is changing, and I think the hyperscalers really want to be in the game in terms of the new RISC and ARM ecosystems and platforms. Yeah, and joking aside, Paul Maritz, when he was the CEO of VMware when he took over, once said, and I remember our first year doing theCUBE, the cloud is one big distributed computer. You've got hardware, you've got software and you've got middleware. And he kind of oversimplified it, he was kind of tongue-in-cheek, but really you're talking about large compute and sets of services. That is essentially a distributed computer. Yes, exactly. And we're back in the same game. Dick, thank you for coming on the segment. Appreciate your time. This is cloud native at scale, a special presentation with platform nine, really unpacking SuperCloud, Arlon, open source, and how to run large scale applications on the cloud, cloud native for developers. I'm John Furrier with theCUBE. Thanks for watching, and stay tuned for another great segment coming right up. Hey, welcome back everyone to SuperCloud 22. I'm John Furrier, host of theCUBE. We're here all day talking about the future of cloud, where it's all going, making it super. Multi-cloud is around the corner and public cloud is winning. Got the private cloud on premise and edge. Got a great guest here, Bhaskar Gorty, CEO of platform nine, just on the panel on Kubernetes: an enabler or a blocker. Welcome back, great to have you on. Good to see you again.
So Kubernetes, blocker or enabler, with a question mark, is what I put on that panel, really to discuss the role of Kubernetes. Now, great conversation; operations is impacted. What's interesting about what you guys are doing at platform nine is your role there as CEO and the company's position: kind of like the world spun into the direction of platform nine while you're at the helm. Right, absolutely. In fact, things are moving very well, and the insight came to us to call ourselves the platform company eight years ago, right? So absolutely, whether you are doing it in public clouds or private clouds, the application world is moving very fast in trying to become digital and cloud native. There are many options for you to run the infrastructure. The biggest blocking factor now is having a unified platform, and that's where we really come in. Bhaskar, we were talking before we came on stage here about your background, and we were gonna talk about the glory days in 2000, 2001, when the first ASPs, application service providers, came out, kind of a SaaS vibe, but that was only kind of cloud-like. It wasn't. And web services started then too. So you saw that whole growth. Now fast forward 20 years later, 22 years later, where we are now. When you look back then to here, and all the different cycles. In fact, as we were talking offline, I was in one of those ASPs in the year 2000, where it was a novel concept of saying we are providing a software and a capability as a service, right? You sign up and start using it. I think a lot has changed since then. The tooling, the tools, the technology have really skyrocketed. The app development environment has really taken off exceptionally well. There are many, many choices of infrastructure now, right?
So I think things are in a way the same, but also extremely different. But more importantly, now for any company, regardless of size, to be a digital native, to become a digital company, is extremely mission critical. It's no longer a nice-to-have. Everybody's in their journey somewhere. Everyone is going through digital transformation here, even in a so-called downturn, with a recession upcoming and inflation here. It's interesting. This is the first downturn in the history of the world where the hyperscale clouds have been pumping on all cylinders as an economic input. If you look at the tech trends, GDP's down, but not tech, because the pandemic showed everyone digital transformation is here, and more spend and more growth is coming, even in tech. So this is a unique factor which proves that that digital transformation's happening, and every company will need a super cloud. Everyone, every company, regardless of size, regardless of location, has to modernize their infrastructure. And modernizing infrastructure is not just some new servers and new application tools; it's your approach, how you're serving your customers, how you're bringing agility into your organization. I think that is becoming a necessity for every enterprise to survive. I want to get your thoughts on super cloud, because one of the things Dave Vellante and I wanted to do with super cloud and calling it that was, I personally, and I know Dave as well, he can speak for himself, we didn't like multi-cloud. I mean, not because Amazon said don't call things multi-cloud; it just didn't feel right. I mean, everyone has multiple clouds by default. If you're running productivity software, you have Azure and Office 365, but it wasn't truly distributed, it wasn't truly decentralized, it wasn't truly cloud enabled. It felt like it wasn't ready for market yet. Yet public cloud's booming, and on-premise private cloud and edge is much more dynamic, more real.
Yeah, I think the reason why we think super cloud is a better term than multi-cloud: multi-cloud is just more than one cloud, okay? You have a productivity cloud, you have a Salesforce cloud, everyone has an internal cloud, right? But they're not connected. So you can say, okay, it's more than one cloud, so it's multi-cloud. But super cloud is where you are actually trying to look at this holistically. Whether it is on-prem, whether it is public, whether it's at the edge, at a store, at the branch, you are looking at this as one unit. And that's where we see the term super cloud as more applicable, because what are the qualities that you require if you're in a super cloud, right? You need choice of infrastructure, but at the same time you need a single pane, or a single platform, for you to build your innovations on, regardless of which cloud you're doing it on, right? So I think super cloud is actually a more tightly integrated, orchestrated management philosophy, we think. So let's get into some of the super cloud-type trends that we've been reporting on. Again, the purpose of this event is to, as a pilot, get the conversations flowing with the influencers like yourselves who are running companies and building products, and the builders. Amazon and Azure are doing extremely well. Google's coming up in third. Cloud works: we see the public cloud use cases, the on-premises use cases. Kubernetes has been an interesting phenomenon, because it came from the developer side a little bit, but a lot of ops people love Kubernetes. It's really more of an ops thing. You mentioned OpenStack earlier. Kubernetes kind of came out of that OpenStack era; we needed an orchestration layer. And then containers had a good shot with Docker. They repivoted the company. Now they're all in on open source. So you got containers booming, and Kubernetes as a new layer there. What's your take on that? What does that really mean? Is that a new de facto enabler?
It is here. It's here for sure. Every enterprise is somewhere along in the journey, and most companies, 70-plus percent of them, have one, two, three container-based, Kubernetes-based applications now being rolled out. So it's very much here. It is in production at scale by many customers, and the beauty of it is, yes, it's open source, but the biggest gating factor is the skill set, and that's where we have a phenomenal engineering team, right? So it's one thing to buy a tool. And just to be clear, you're a managed service for Kubernetes. We provide a software platform for cloud acceleration as a service, and it can run anywhere. It can run in public, private. We have customers who do it in truly multi-cloud environments. It runs on the edge. It runs in stores; there are thousands of stores at a retailer, so we provide that. And also, for specific segments where data sovereignty and data residency are key regulatory reasons, we also run on-prem as an air-gapped version. Can you give an example of how you guys are deploying your platform to enable a super cloud experience for your customers? Right, I'll give you two different examples. One is a very large networking company, a public networking company. And they have hundreds of products, hundreds of R&D teams that are building different products. And if you look back a few years, each one was doing it on a different platform, but they really needed to bring in that agility. And they've worked with us now for over three years, where we are their build-test-dev platform that all their products are built on, right? And it has dramatically increased their agility to release new products. Number two, it is actually a lights-out operation. In fact, the customer says we're like the Maytag service person, because we provide it as a service, and it barely takes one or two people to maintain it for them. So it's kind of like an SRE vibe: one person managing a platform for 4,000 engineers building infrastructure.
With their own tools, whatever they want to do. They're using whatever app development tools they use, but they use our platform as a service. And what benefits are they seeing? Are they seeing speed? Speed, definitely. Definitely, they're seeing speed. Uniformity, because now they're able to build consistently. So their customers who are using product A and product B are seeing a similar set of tools being used. So a big problem that's coming out of this super cloud event that we're seeing, and we've heard it all here: ops and security teams, because they're kind of two parts of one team, but ops and security specifically, need to catch up speed-wise. Are you delivering that value to ops and security? Right, so we work with ops and security teams and infrastructure teams, and we layer on top of that. We're like a platform team. If you think about it, depending on where you have data centers, where you have infrastructure, you'll have multiple teams, okay? But you need a unified platform. Who's your buyer? Our buyer is usually the product divisions of companies, or the CTO would be a buyer for us functionally, CIO definitely. So it's somewhere in the DevOps-to-infrastructure space, but the ideal one we are beginning to see now is many large corporations really looking at it as a platform and saying, we have a platform group on which any app can be developed, and it is run on any infrastructure. So the platform engineering team. And you're in between the two sides of that coin: you've got the dev side and then the infrastructure side. Another customer I'll give as an example, which I would say is kind of the edge, is the store. So they have thousands of stores. Retail. Retail, a food retailer, right? They have thousands of stores around the globe, 50,000, 60,000, and they really want to enhance the customer experience that happens when you either order the product, or go into the store and pick up your product, or buy or browse or sit there.
They have applications that were written in the 90s, and then they have very modern AI/ML applications today. They want something where they don't have to send an IT person to install a rack in the store, but they can't move everything to the cloud because the store operations have to be local. The menu changes based on... It's a classic edge. It's classic edge, right? They can't send IT people to go install racks of servers, and they can't send software people to go install the software. And any change you want to make means, you know, truck rolls. So they've been working with us where all they do is ship, depending on the size of the store, one or two or three little servers with instructions. You say little servers, like how big? You know, it's a box. Like a small little box. Yeah, it's a box. And all the person in the store has to do, like what you and I do at home when we get a router, is connect the power, connect the internet and turn the switch on. And from there, we pick it up. We provide the operating system, everything, and then the applications are put on it. And so that dramatically brings up the velocity for them. They manage thousands of them. True plug and play. True plug and play, thousands of stores. They manage it centrally. We do it for them, right? So that's another example where, on the edge, then we have some customers who have both a large private presence and one of the public clouds, okay? But they want to have the same platform layer of orchestration and management that they can use regardless of the location. So you guys got some success. Congratulations. Got some traction there. That's awesome. The question I want to ask you that comes up is, what is truly cloud-native? Because there's lift and shift to the cloud; that's not cloud-native. Then there's cloud-native. Cloud-native seems to be the driver for the super cloud. How do you talk to customers? How do you explain, when someone says, what's cloud-native? What isn't cloud-native?
Right, look, I think, first of all, the best place to look for the definition and the attributes and characteristics of what is truly cloud-native is the CNCF, the Cloud Native Computing Foundation. And I think it's very well documented, very well built out. KubeCon, of course, in Detroit is coming up. So it's already there, right? So we follow that very closely. I think just lifting and shifting your 20-year-old application onto a data center somewhere is not cloud-native. You can't just port to cloud-native. You have to rewrite and redevelop your application and business logic using modern tools, hopefully more open source. And I think that's what cloud-native is. And we are seeing a lot of our customers on that journey. Now, everybody wants to be cloud-native, but it's not that easy, okay? Because, first of all, skill set is very important. Uniformity of tools: there are so many tools, thousands and thousands of tools. You could spend all your time figuring out which tool to use, okay? So I think the complexity is there, but the business benefits of agility and uniformity and customer experience are truly being realized. I'll give you an example. I don't know how cloud-native they are; they're not a customer of ours, but you order pizzas, you do, right? If you just watch the pizza industry, how Domino's actually increased their mind share and wallet share was not because they were making better pizzas or not. I don't know anything about that. But the whole experience of how you order, how you watch what's happening, how it's delivered, they were the pioneer in it. To me, those are the kinds of customer experiences that cloud-native can provide. That agility, and having it flow through to the application, changes what the expectations are for the customer. The customer's expectations change, right? Once you get used to a better customer experience, you will not go back. We've got to wrap it up, but I want to just get your perspective again.
One of the benefits of chatting with you here and having you as part of SuperCloud 22 is you've seen many cycles, you have a lot of insights. I want to ask you, given your career, where you've been and what you've done, and now as CEO of Platform 9, how would you compare what's happening now with other inflection points in the industry? And again, you've been an entrepreneur, you sold your company to Oracle, you've been at the big companies, you've seen the different waves. What's going on right now? Put into context this moment in time around SuperCloud. Sure. I think, as you said, a lot of battles, a lot of scars: being in an ASP, being in a real-time software company, being in large enterprise software houses and their transformation. I've been on the app side, I did infrastructure, right, and then tried to build our own platforms. I've gone through all of this myself, with a lot of lessons learned along the way. I think this is an event which is happening now for companies to go through, to become cloud-native and digitalized. If I were to look back for some parallels to the tsunami that's going on, a couple of parallels come to me. One was forced on us: Y2K. Everybody around the world had to have a plan, a strategy and an execution for Y2K. I would say the next big thing was e-commerce. I think e-commerce has been pervasive, right, across all industries. And disruptive. And disruptive, extremely disruptive. If you did not adapt and accelerate your e-commerce initiative, it was an existential question. I think we are at that pivotal moment now, with companies trying to become digital and cloud-native. That is what I see happening. I think that e-commerce is interesting, and I think, just to riff with you on that, it's disrupting and refactoring the business models.
I think that is something that's coming out of this: it's not just completely changing the game, it's changing how you operate. How you think and how you operate. See, if you think about the early days of e-commerce, just putting up a shopping cart didn't make you an e-commerce company or an e-tailer, right? So I think it's the same thing now: this is a fundamental shift in how you're thinking about your business. How are you going to operate? How are you going to service your customers? I think it requires that; just lift and shift is not going to work. That's great. Thank you for coming on, spending the time to come in and share with our community and being part of SuperCloud 22. We really appreciate it. We're going to keep this open. We're going to keep this conversation going even after the event, to open up and look at the structural changes happening now, and continue to look at it in the open, in the community, and we're going to keep this going for a long, long time as we get answers to the problems that customers are looking for with cloud computing. I'm John Furrier with SuperCloud 22 in theCUBE. Thanks for watching. Thank you. Thank you, John. Welcome back. This is the end of our program, our special presentation with Platform 9 on Cloud Native at Scale, enabling the SuperCloud. We're continuing the theme here. You heard the interviews: SuperCloud and its challenges, new opportunities around the solutions from Platform 9 and others with Arlon. This is really about the edge, situations on the internet and managing the edge: multiple regions, avoiding vendor lock-in. This is what this new SuperCloud is all about: the business consequences we heard, and the wide-ranging conversations around what it means for open source and the complexity problem all being solved. I hope you enjoyed this program.
There are a lot of moving pieces and things to configure with cloud-native installs, all made easier for you here with SuperCloud and, of course, Platform 9 contributing to that. Thank you for watching.