Hello everyone, welcome to theCUBE here in Palo Alto, California for a special program on cloud native at scale, enabling next generation cloud, or super cloud, for modern application cloud native developers. I'm John Furrier, host of theCUBE. Pleasure to have here Madhura Maskasky, co-founder and VP of Product at Platform9. Thanks for coming in today for this cloud native at scale conversation. Thank you for having me. So cloud native at scale, something that we're talking about because we're seeing the next level of mainstream success of containers, Kubernetes, and cloud native development. Basically, DevOps and the CI/CD pipeline are changing the landscape of infrastructure as code. It's accelerating the value proposition, and the super cloud, as we call it, has been getting a lot of traction because this next generation cloud is looking a lot different, but kind of the same as the first generation. What's your view on super cloud as it fits into cloud native at scale? Yeah. You know, I think what's interesting, and I think the reason why super cloud is a really good and really fitting term for this, and I know my CEO was chatting with you as well and he was mentioning this too, but I think there needs to be a different term than just multi cloud or cloud. And the reason is because as cloud native and cloud deployments have scaled, I think we've reached a point now where instead of having the traditional data center style model, where you have a few large concentrations of infrastructure and workloads at a few locations, the model has kind of flipped around, right? Where you have a large number of micro sites. These micro sites could be your public cloud deployments, your private on-prem infrastructure deployments, or your edge environments, right? And every single enterprise, every single industry is moving in that direction. And so you have to refer to that with terminology that indicates the scale and complexity of it. 
And so I think super cloud is an appropriate term for that. So you brought up a couple of things I want to dig into. You mentioned edge nodes. We're seeing not only edge nodes being the next kind of area of innovation, mainly because it's just popping up everywhere, and that's just the beginning. We don't even know what's around the corner. You got buildings, you got IoT, OT and IT kind of coming together. But you also got this idea of regions; global infrastructure is a big part of it. I just saw some news around Cloudflare shutting down a site here. There's policies being made at scale, these new challenges there. Can you share, because you can have edge, so hybrid cloud is a winning formula. Everybody knows that, it's a steady state. But going across multiple clouds brings in this new, un-engineered area. It hasn't been done yet. Spanning clouds. People say they're doing it, but you're only starting to see the toe in the water. It's happening, it's going to happen. It's only going to get accelerated with the edge and beyond, globally. So I have to ask you, what are the technical challenges in doing this? Because there are some business consequences as well, but there are technical challenges. Can you share your view on what the technical challenges are for the super cloud across multiple edges and regions? Yeah, absolutely. So I think in the context of this term of super cloud, it's sometimes easier to visualize things in terms of two axes. On one axis, you can think of the scale in terms of just the pure number of nodes that you have deployed, or the number of clusters in the Kubernetes space. And then on the other axis, you would have your distribution factor, right? Which is, do you have these tens of thousands of nodes in one site, or do you have them distributed across tens of thousands of sites with one node at each site, right? And if you have just one flavor of this, there is enough complexity, but it's potentially manageable. 
But when you are expanding on both these axes, you really get to a point where that scale needs some well thought out, well structured solutions to address it, right? A combination of homegrown tooling along with your favorite distribution of Kubernetes is not a strategy that can help you in this environment. It may help you when you have one of these, or when your scale is not at that level. Can you scope the complexity? Because I mean, I hear a lot of moving parts going on there. The technologies are also getting better. We're seeing cloud native become successful. There's a lot to configure, a lot to install. Can you scope the scale of the problem? Because what about at-scale challenges here? Yeah, absolutely. And I like to call it the problem that the scale creates. There's various problems, but one way to think about it is the "it works on my cluster" problem, right? So I come from an engineering background, and there's a famous saying between engineers, QA, and the support folks, right? Which is "it works on my laptop": I tested this change, everything was fantastic, it worked flawlessly on my machine; in production, it's not working. Now the exact same problem happens in these distributed environments, but at massive scale, right? Which is that developers test their applications, et cetera, within the sanctity of their sandbox environments. But then you expose that change to the wild world of your production deployment, right? And the production deployment could be going to the radio cell tower at the edge location where a cluster is running. Or it could be sending these applications and having them run at my customer's site, where they might not have configured that cluster exactly the same way as I configured it. Or they configured the cluster right, but maybe they didn't apply the security policies, or they didn't apply the other infrastructure plugins that my app relies on. 
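The "works on my cluster" failure mode described here, where a target cluster is missing the policies or plugins an app relies on, boils down to configuration drift. Here is a minimal Python sketch of the idea, under stated assumptions: the reference settings and the `find_drift` helper are made up for illustration and are not any vendor's actual tooling.

```python
# Hypothetical reference configuration that an app was tested against.
REFERENCE_CLUSTER = {
    "plugins": {"cni": "calico", "ingress": "nginx", "cert-manager": "v1.11"},
    "policies": {"pod-security": "restricted", "network-policy": "default-deny"},
}

def find_drift(reference: dict, target: dict) -> list[str]:
    """Describe settings the target cluster is missing or has configured
    differently from the reference cluster the app was tested on."""
    issues = []
    for section, expected in reference.items():
        actual = target.get(section, {})
        for key, value in expected.items():
            if key not in actual:
                issues.append(f"{section}/{key} missing")
            elif actual[key] != value:
                issues.append(f"{section}/{key}: expected {value}, got {actual[key]}")
    return issues

# A customer-site cluster that was hand-configured slightly differently:
customer_cluster = {
    "plugins": {"cni": "calico", "ingress": "nginx"},
    "policies": {"pod-security": "privileged", "network-policy": "default-deny"},
}

print(find_drift(REFERENCE_CLUSTER, customer_cluster))
```

At the scale of thousands of sites, it is exactly this kind of diff, done continuously and declaratively rather than by hand, that the conversation is pointing at.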
All of these various factors add their own layer of complexity, and there really isn't a simple way to solve that today. And that is just one example of an issue that happens. A whole new ballgame of issues comes in the context of security, right? Because when you're deploying applications at scale in a distributed manner, you've got to make sure someone's job is on the line to ensure that the right security policies are enforced regardless of that scale factor. So I think that's another example of problems that occur. Okay, so I have to ask about scale, because there are multiple steps involved. You see the success of cloud native, and then you see some experimentation. They set up a cluster, say it's containers and Kubernetes, and then they say, okay, we got this, we configure it, and then they do it again and again. They call it day two; some people call it day one, day two operations, whatever you call it. Once you get past the first initial thing, then you've got to scale it. Then you're seeing security breaches, you're seeing configuration errors. This seems to be where the hotspot is, where companies transition from "I got this" to "oh no, that's harder than I thought" at scale. Can you share your reaction to that and how you see this playing out? Yeah, so I think it's interesting. There are multiple problems that occur when the two factors of scale, as we talked about, start expanding. One of them is what I like to call the "it was fine on my cluster" problem, which, back when I was a developer, we used to call the "it was fine on my laptop" problem: you have your perfectly written code that is operating just fine on your machine, your sandbox environment, but the moment it runs in production, it comes back with P0s and P1s from support teams, et cetera, and those issues can be really difficult to triage, right? 
And so in the Kubernetes environment, this problem kind of multiplies; it escalates to a higher degree, because you have your sandbox developer environments, they have their clusters, and things work perfectly fine in those clusters, because these clusters are typically hand-crafted, or a combination of some scripting and hand-crafting. And so as you give that change to then run at your production edge location, like say your radio cell tower site, or you hand it over to a customer to run on their cluster, they might not have configured that cluster exactly how you did, or they might not have configured some of the infrastructure plugins, and so things don't work. And when things don't work, triaging them becomes extremely hard. But that's just one of the examples of the problem; another whole bucket of issues is security, which is, as you have these distributed clusters at scale, you've got to ensure someone's job is on the line to make sure that the security policies are configured properly. So this is a huge problem. I love that comment: "that's not happening on my system." It's the classic debugging mentality, but at scale it's hard to do that, and it's error prone. I can see that being a problem. And you guys have a solution you're launching. Can you share what Arlon is, this new product? What is it all about? Let's talk about this new introduction. Yeah, absolutely, I'm very, very excited. It's one of the projects that we've been working on for some time now, because we are very passionate about this problem of solving problems at scale in on-prem, cloud, or edge environments. And what Arlon is: it's an open source project, and it is a Kubernetes-native tool for complete end-to-end management of not just your clusters, but your clusters, all of the infrastructure that goes within and alongside those clusters, security policies, your middleware plugins, and finally, your applications. 
So what Arlon lets you do, in a nutshell, is, in a declarative way, handle the configuration and management of all of these components at scale. So what's the elevator pitch, simply put, for what this solves in terms of the chaos you guys are reining in? What's the bumper sticker? There's a perfect analogy that I love to reference in this context, which is: think of an assembly line in, let's say, a traditional auto manufacturing factory, and the level of efficiency at scale that that assembly line brings. Arlon, and if you look at the logo we've designed, it's this funny little robot, and that's because when we think of Arlon, we think of these enterprise large-scale environments sprawling at scale, creating chaos, because there isn't necessarily a well thought through, well structured solution. Arlon is similar to an assembly line, which takes each component, addresses it, manufactures and processes it in a standardized way, then hands it to the next stage, where again it gets processed in a standardized way. That's what Arlon really does. That's the elevator pitch. If you have problems of scale in managing your distributed infrastructure, Arlon brings the assembly line level of efficiency and consistency to those problems. So keeping it smooth, the assembly line, things are flowing, CI/CD, pipelining. Exactly. So you're trying to simplify that ops piece for the developer. I mean, it's not really ops, it's their ops, it's coding. Yeah, not just the developer, the ops, the operations folks as well, right? Because developers are responsible for one piece of that layer, which is my apps, and then maybe the middleware of applications that they interface with. 
But then they hand it over to someone else who's responsible to ensure that these apps are secured properly, that logs are being collected properly, that monitoring and observability are integrated. And so it solves problems for both of those roles. Yeah, it's DevOps. So the dev is the cloud-native developer, and the ops have to kind of set policies. Is that where the declarative piece comes in? Is that why that's important? Absolutely, yeah. Kubernetes really introduced, or elevated, this declarative management, right? Because Kubernetes clusters, or your specifications of components that go into Kubernetes, are defined in a declarative way, and Kubernetes always keeps that state consistent with your defined state. But when you go outside of that world of a single cluster, when you actually talk about defining the clusters, or defining everything that's around them, there really isn't a solution that does that today. And so Arlon addresses that problem at the heart of it, and it does that using existing, well-known open source solutions. Madhura, I want to get into the benefits, what's in it for me as the customer developer, but I want to finish this out real quick and get your thoughts. You mentioned open source. Why open source? What's the current state of the product? You run the product group over there at Platform9. Is it open source, and you guys have a product that's commercial? Can you explain the open source dynamic? First of all, why open source? What is the consumption? I mean, open source is great. People want open source to download and look at the code, but maybe want to buy the commercial version. So I'm assuming you have that thought through. Can you share the open source and commercial relationship? 
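The declarative model described above, where a controller continually converges the actual state toward a defined desired state, is the core mechanism of both Kubernetes and tools built on it. A minimal sketch of that reconciliation loop in Python, assuming toy dict-shaped state and a made-up `reconcile` helper; real controllers work against API objects, not dicts:

```python
def reconcile(desired: dict, actual: dict) -> dict:
    """One reconciliation pass: compute the actions needed to converge
    the actual state to the desired state, then 'apply' them."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name))   # object declared but absent
        elif actual[name] != spec:
            actions.append(("update", name))   # object present but drifted
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))   # object present but undeclared
    # Applying the actions yields the desired state.
    return {"actions": actions, "state": dict(desired)}

desired = {"cluster-a": {"nodes": 3}, "cluster-b": {"nodes": 5}}
actual = {"cluster-a": {"nodes": 3}, "cluster-c": {"nodes": 1}}
result = reconcile(desired, actual)
print(result["actions"])  # [('create', 'cluster-b'), ('delete', 'cluster-c')]
```

Kubernetes runs this loop inside a single cluster; the gap the conversation identifies is running the same loop one level up, over the clusters themselves and everything around them.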
Yeah, I think starting with why open source: as a company, one of the things that's absolutely critical to us is that we take mainstream open source technologies and components and then make them available to our customers at scale through either a SaaS model or an on-prem model. And so, as a company, or a startup, that benefits in a massive way from this open source economy, it's only right, I think, in my mind, that we do our part and contribute back to the community that feeds us. We have always held that strongly as one of our principles, and we have created and built independent products, starting all the way with Fission, which was a serverless product that we built, to various other examples I could give. That's one of the main reasons why open source. And also open source because we want the community to really engage with us firsthand on this problem, which is very difficult to achieve if your product is behind a wall, behind a black box. Well, that's what the developers want too. I mean, what we're seeing and reporting with super cloud is that the new model of consumption is: I wanna look at the code and see what's in there. That's right. And then also, if I wanna use it, I'll do it. Great, that's open source, that's the value. But then at the end of the day, if I wanna move fast, that's when people buy in. So it's a new kind of freemium, I guess, business model. I guess that's the way it is now, but that's the benefit of open source. This is why standards in open source are growing so fast. You have that confluence of a way for developers to try before they buy, but also actually kind of date the application, if you will. Adrian Cockcroft uses the dating metaphor: hey, I wanna check it out first before I get married. And that's what open source is. This is how people are selling. This is not just open source, this is how companies are selling. Absolutely, yeah, yeah. 
I think, you know, two things. One is just that this cloud native space is so vast that if you're building a closed-source solution, there's a risk that it may not apply to every single enterprise's use cases. And so having it open source gives them an opportunity to extend it, expand it, to make it fit their use case if they choose to do so, right? But at the same time, what's also critical to us is that we are able to provide a supported version of it with an SLA that's backed by us, and a SaaS-hosted version of it as well, for those customers who choose to go that route. You know, once they have used the open source version and loved it, and want to take it to scale and into production, they need a partner to collaborate with who can support them in that production environment. I have to ask you, now let's get into what's in it for the customer. I'm a customer. Why should I be enthused about Arlon? What's in it for me? You know, because if I'm not enthused about it, I'm not going to be confident, and it's going to be hard for me to get behind this. Can you share your enthusiastic view of why I should be enthused about Arlon as a customer? Absolutely. So there are multiple enterprises that we talk to, many of them our customers, where this is a very typical story that you will hear: we have a Kubernetes distribution, it could be on-premise, it could be public cloud native Kubernetes, and then we have our CI/CD pipelines that are automating the deployment of applications, et cetera. And then there's this gray zone. And the gray zone is: well, before your CI/CD pipelines can deploy the apps, somebody needs to do all of that groundwork of defining those clusters and properly configuring them. 
And these things start by being done homegrown, and then as you scale, what enterprises typically do today is build their own DIY solutions for this. I mean, a number of the folks that I've talked to have built Terraform automation, and then some of those key developers leave, so it's a typical DIY challenge. And the reason they're writing it themselves is not because they want to. I mean, of course, technology is always interesting to everybody, but it's because they can't find a solution out there that perfectly fits the problem. So that's the pitch. I think ops folks would be delighted. The folks that we've spoken with have been absolutely excited and have shared that this is a major challenge: we have a few hundred clusters on EKS, on Amazon, and we want to scale them to a few thousand, but we don't think we're ready to do that, and this will give us the ability to do it. Yeah, I think people are scared, not scared, I won't say scared, that's a bad word. Maybe I should say they feel nervous, because at scale, small mistakes can become large mistakes. This is something that is concerning to enterprises, and I think this is gonna come up at KubeCon this year, where enterprises are gonna say, okay, I need to see SLAs. I wanna see a track record. I wanna see other companies that have used it. How would you answer that question or challenge? You know, hey, I love this, but are there any guarantees? What are the SLAs? I'm an enterprise, I've got tight constraints. I love the open source, trying it free, fast and loose, but I need hardened code. Yeah, absolutely, so two parts to that, right? One is that Arlon leverages existing open source components, products that are extremely popular. Two specifically: one is that Arlon uses Argo CD, which is probably one of the highest rated and most used open source CD tools out there, right? 
It was created by folks who were part of the Intuit team, a really brilliant team, and it's used at scale across enterprises. That's one. Second is that Arlon also makes use of Cluster API, CAPI, which is a Kubernetes sub-project for lifecycle management of clusters. So there is enough of a community, users, et cetera, around these two open source projects that people will find Arlon to be right up their alley, because they're already comfortable and familiar with Argo CD, and Arlon just extends the scope of what Argo CD can do. So that's one, and then the second part goes back to your point about comfort, and that's where Platform9 has a role to play, which is: when you are ready to deploy Arlon at scale, because you've been playing with it in your dev and test environments and you're happy with what you get, then Platform9 will stand behind it and provide that SLA. And what's been the reaction from customers you've talked to, Platform9 customers that are familiar with Argo and then Arlon? What's been some of the feedback? Yeah, the feedback's been fantastic. I mean, I can give you examples of customers where, initially, when you're telling them about your entire portfolio of solutions, it might not strike a chord right away, but then we start talking about Arlon, and we talk about the fact that it uses Argo CD, and they start opening up. They say, we have standardized on Argo, and we have built these components homegrown. We would be very interested. Can we co-develop? Does it support these use cases? So we've had that kind of validation. We had validation all the way at the beginning of Arlon, before we even wrote a single line of code, of course, saying this is something we plan on doing, and the customer said, if you had it today, I would have purchased it. So it's been really great validation. All right, so next question is, what is the solution for the customer? 
If I ask you, look, I'm so busy, my team's overworked, I've got a skills gap, I don't need another project, I'm so tied up right now, and I'm just chasing my tail, how does Platform9 help me? Yeah, absolutely. So one of the core tenets of Platform9 has always been that we try to bring that public cloud-like simplicity by hosting this, and a lot of similar tools, in a SaaS-hosted manner for our customers, right? Our goal behind doing that is taking away, or trying to take away, all of that complexity from the customer's hands and offloading it to our hands, right? Giving them that full white glove treatment, as we call it. And so from a customer's perspective, something like Arlon will integrate with what they have, so they don't have to rip and replace anything. In fact, in future versions, it may even discover the clusters that you have today and give you an inventory. And that's cool. So customers have clusters that are growing. That's a sign to call you guys. Correct, absolutely. They have massive, large clusters, right? That they want to split into smaller clusters, but they're not comfortable doing that today. Or they've done that already on, say, public cloud or otherwise, and now they have management challenges. So it's basically operationalizing the clusters, whether they want to kind of reset everything, move things around and reconfigure, or scale out. That's right, exactly. And you provide that layer of policy. Absolutely, yes. That's the key value here. That's right. So policy-based configuration for cluster scale-up. Profile- and policy-based declarative configuration and lifecycle management for clusters. If I ask you how this enables super cloud, what would you say to that? I think this is one of the key ingredients of super cloud. 
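The "profile- and policy-based declarative configuration" mentioned here is, in essence, many clusters referencing one named bundle of add-ons and policies instead of being hand-configured individually. A hedged Python sketch of that idea; the `Profile`, `Cluster`, and `render` names and fields are illustrative assumptions, not Arlon's actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Profile:
    """A named bundle of add-ons and policies shared by many clusters."""
    name: str
    addons: tuple = ()      # e.g. ("monitoring", "ingress")
    policies: tuple = ()    # e.g. ("pod-security/restricted",)

@dataclass
class Cluster:
    name: str
    profile: Profile        # clusters reference a profile, not raw settings

def render(cluster: Cluster) -> dict:
    """Expand a cluster plus its profile reference into the concrete
    desired configuration a reconciler would enforce."""
    return {
        "cluster": cluster.name,
        "addons": list(cluster.profile.addons),
        "policies": list(cluster.profile.policies),
    }

# One profile, applied uniformly across a small fleet of edge clusters:
edge = Profile("edge-site", addons=("monitoring",), policies=("pod-security/restricted",))
fleet = [Cluster(f"cell-tower-{i}", edge) for i in range(3)]
configs = [render(c) for c in fleet]
print(configs[0]["cluster"], configs[0]["policies"])
```

The design point is that fixing the profile fixes every cluster that references it, which is what makes the model tractable at thousands of sites.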
If you think about a super cloud environment, there are at least a few key ingredients that come to my mind that are really critical, like life-saving ingredients at that scale. One is having a really good strategy for managing that scale, going back to the assembly line, in a very consistent, predictable way. That's what Arlon solves. Then you need to complement that with the right kind of observability and monitoring tools at scale, right? Because ultimately, issues are gonna happen and you're gonna have to figure out how to solve them fast. And Arlon, by the way, also helps in that direction, but you also need observability tools. And then, especially if you're running on the public cloud, you need some cost management tools. In my mind, those three things are the most necessary ingredients to make super cloud successful. And Arlon fills in one of those pieces. Okay, so now the next level is, okay, that makes sense. That's under the covers, so to speak, under the hood. Yeah. How does that impact the app developers and the cloud-native, modern application workflows? Because to me, it seems the apps are gonna be impacted. Are they gonna be faster, stronger? I mean, what's the impact, if you do all those things as you mentioned, on the apps? Yeah, the impact is that your apps are more likely to operate in production the way you expect them to, because the right checks and balances have gone through, and any discrepancies have been identified before those apps run, before your customer runs into them. Right? Because developers run into this challenge today where there's a split responsibility, right? I'm responsible for my code, I'm responsible for some of these other plugins, but I don't own the stack end to end. I have to rely on my ops counterparts to do their part, right? And so this really gives them the right tooling for that. This is actually a great and relevant point. 
As cloud becomes more scalable, you're starting to see this fragmentation: gone are the days of the full stack developer, in favor of more specialized roles. But this is a key point I have to ask you about, because if this Arlon solution takes place, as you say, and the apps are gonna do what they're designed to do, the question is, what does the current pain look like? Are the apps breaking? What are the signals to the customer that they should be calling you guys up and implementing Arlon, Argo, and all the other goodness to automate? What are some of the signals? Is it downtime? Is it failed apps? Is it latency? What are some of the things that would be indications that things are effed up a little bit? Yeah, more frequent downtimes, downtimes that take longer to triage, so your mean time to resolution, et cetera, is escalating or growing larger, right? Like, we have customer environments where they have a number of folks in the field that have to take these apps and run them at customer sites. That's one of our partners, and they're extremely interested in this because of the rate of failures they're encountering in the field when they're running these apps on site, because the field is automating the clusters that run on those sites using their own scripts. So these are the kinds of challenges, and those are the pain points: if you're looking to reduce your mean time to resolution, if you're looking to reduce the number of failures that occur on your production site, that's one. And second, if you're looking to manage these at-scale environments with a relatively small, focused, nimble ops team, which has an immediate impact on your budgets. So those are the signals. This is the cloud native at scale situation, the innovation going on. 
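The signals called out above, more frequent downtime and a growing mean time to resolution, are straightforward to track. A minimal sketch, assuming incidents are recorded as hypothetical (start, end) hour pairs and using a made-up `mttr` helper:

```python
def mttr(incidents: list[tuple[float, float]]) -> float:
    """Mean time to resolution in hours across recorded incidents."""
    if not incidents:
        return 0.0
    return sum(end - start for start, end in incidents) / len(incidents)

# Two hypothetical quarters of incident records:
q1 = [(0.0, 1.0), (5.0, 6.5), (9.0, 10.5)]   # resolutions: 1.0, 1.5, 1.5 h
q2 = [(0.0, 3.0), (4.0, 8.0)]                # resolutions: 3.0, 4.0 h
print(mttr(q1), mttr(q2))  # a rising MTTR quarter over quarter is the signal to act
```

The point is less the arithmetic than the practice: teams that track MTTR per site can see the "works on my cluster" problem appear in the numbers before it appears in an outage postmortem.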
Final thought is your reaction to the idea that if the world goes digital, which it is, with the confluence of physical and digital coming together, and cloud continues to do its thing, the company becomes the application. It's no longer IT supporting the business, you know, the back office and the mini terminals and some PCs and handhelds. Now, if technology is running the business, is the business, then the company is the application, so it can't be down. So there's a lot of pressure on CSOs and CIOs now, and boards are saying, how is technology driving the top line revenue? That's the number one conversation. Do you see the same thing? Yeah, it's interesting. I think there are multiple pressures at the CXO and CIO level, right? One is that there needs to be visibility and clarity and almost a guarantee that the technology that's gonna drive your top line is gonna drive it in a consistent, reliable, predictable manner. And then second, there is the constant pressure to do that while always lowering your costs of doing it, right? Especially when you're talking about, let's say, retailers or those kinds of large scale vendors, many times they make money by lowering the amount that they spend on providing those goods to their end customers. So I think both those factors come into play, and the solution to both of them is usually a very structured strategy around automation. Final question: what does cloud native at scale look like to you? If all the things happen the way we want them to happen, the magic wand, the magic dust, what does it look like? What that looks like to me is a CIO sipping coffee at his desk. Production is running absolutely smoothly, and he's running that with a nimble team of, at most, a handful of folks who are just looking after things, while things are just taking care of themselves. And the CISO doesn't have to worry; the CISO's at the beach. Thank you for coming on and sharing cloud native at scale here on theCUBE. 
Thank you for your time. Fantastic, thanks for having me. Okay, I'm John Furrier here for this special program presentation: cloud native at scale, enabling super cloud modern applications with Platform9. Thanks for watching.