Well hello everybody and welcome to another OpenShift Commons briefing. This time it is my great pleasure to welcome back Treva Williams, who's now with Replicated, and Josh DeWin, who's also with Replicated, and we're going to talk about some interesting stuff that's going on over at Replicated that they're doing for apps on OpenShift. It's a long title there: Simplifying Deployment, Management, and Troubleshooting of Third-Party Enterprise Apps on OpenShift with Replicated KOTS. I'm going to let Treva and Josh introduce themselves, take it away, and give the presentation. There's a really neat demo coming too, and then we'll have live Q&A at the end. If you have questions, throw them in the chat wherever you are, Twitch, YouTube, or BlueJeans, and we will relay them back to the speakers and try to get them answered. So without any further ado, Treva, welcome back and take it away. Thank you so much, Diane, for the amazing introduction. I'm so happy to be here, I'm super excited. Time-appropriate greetings, OpenShift community. I am thrilled to be here and, as Diane said, I am presenting about that super long title. So let's go ahead and jump into it. Let's introduce Replicated as a company first. Replicated was founded in 2015 by Marc Campbell, who is our CTO, and Grant Miller, who is our CEO, and what they've done is create a complementary series of both enterprise and open source tools for multi-prem environments, including the ones that we're going to be talking about today. As for the voices behind the screen, there is my colleague Josh DeWin. You can find him on Twitter at georgedewin. He's waving with that beautiful tropical background. 
Josh is a Linux, Docker, and Kubernetes superfan, and he is also a solutions engineer for Replicated. He also sporadically contributes to some open source projects like troubleshoot.sh, nudge nudge wink wink, and some open source frameworks like OpenFaaS, and is also apparently a pretty big fan of coffee and cats. So I mean, you can see how we'd be friends. As for me, I am the community manager slash dev avocado for Replicated. I am also a recent inductee into the Red Hat Accelerators program. And yes, I also still use Packstack; you can pry it from my cold dead fingers. I've been involved in several open source communities for a while: OpenStack, OpenShift, Kubernetes, and a few other projects. And in the before times, I traveled a lot with that little guy in the photo, who's actually sitting on my desk right now, and, provided that KubeCon LA actually happens this year, you can catch us there along with a huge Replicated booth. And if you do manage to make it to KubeCon, drop by the booth, because we've got some really cool swag. All right, now that we're done with all of that, let's talk about Replicated projects, or as I like to call them, the Replicated universe. One of our CTO Marc's favorite things to do is to build out really straightforward open source tooling to solve everyday problems for developers, engineers, and sysadmins: those little annoying things, and sometimes those big annoying things, that take forever to fix. He finds super quick ways to fix them, and then he open sources them. So the first project, and the project that we're talking about today, is KOTS. KOTS is an abbreviation of Kubernetes Off-The-Shelf software, and it's a sophisticated and highly adaptable containerized application delivery and management platform. It also supports and can build air gap and embedded clusters; we'll talk more about what embedded clusters are in just a second. And then up next is Troubleshoot. 
That's one of my personal favorites. Troubleshoot is a two-part admin tool set that validates environments; collects, sanitizes, and analyzes data; and a ton more. I could spend all day just talking about Troubleshoot's preflight piece, and that's only one component; let's not even jump into the other components. It does so much. If you have time, I highly encourage you to jump over to troubleshoot.sh and check out the website. And speaking of websites, Outdated has the most amazing layout ever. What it actually does is analyze outdated cluster images, and it does that by comparing the content SHA of the original image repo to the image that is running in your cluster. Then it'll tell you how outdated that image is, or if it's outdated at all. Super cool. It's a kubectl plugin, takes like 30 seconds to install. You should definitely check that one out. But before you do, go to the website, outdated.sh. It's amazing, and it'll make you want to buy some roller skates. Up next is Unfork. That's another kubectl plugin, and what it does is reconnect forked Helm charts to the original upstream, which is super useful, because it's a nightmare dealing with a Helm chart, or anything you forked from an original repo, and trying to reconnect it. I hate that. Next up is SchemaHero. SchemaHero is a database schema migration tool that converts schema definitions into migration scripts that can be run anywhere. So highly adaptable, highly compatible, a lot of fun. And by the way, SchemaHero is actually in the CNCF sandbox, so if you are involved with the Linux Foundation or CNCF, you may or may not have seen mention of it, and you're certainly going to see a whole lot of mention of it at KubeCon this year, which I certainly hope you are going to be attending. And then last but not least is kURL, which is a play on curl, with a k. I digress. 
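For reference, the "30 seconds to install" claim for Outdated holds if you already use krew, the kubectl plugin manager; a sketch, assuming krew is set up:

```shell
# Install the Outdated kubectl plugin via krew (assumes krew is already installed)
kubectl krew install outdated

# Scan the images running in the current cluster for newer versions
kubectl outdated
```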
And this is where the embedded part of the embedded clusters I was referring to in KOTS comes from. kURL is a tool for building customized Kubernetes installers and distributions. And I'm super excited to announce this: it officially became a CNCF certified Kubernetes installer as of this past Friday, August 27. So round of applause for our engineering team for getting that done. That's amazing. Now, all of these projects that I just listed are open source. They play extremely well with Red Hat and OpenShift, and they're available for starring, forking, contributing, and otherwise general interaction on the ReplicatedHQ GitHub, except for SchemaHero, which has its own GitHub org, schemahero. I would be thrilled if you stopped by and checked them out, because I would love to see feedback and interaction from Red Hat people, because I am still a Red Hat fangirl after all these years. So I want to see Red Hat involved with my new job, which I absolutely love, and with all of these projects that I manage. There are also a number of other open source projects that Replicated has available that I honestly just couldn't fit on this page. I could talk about these all day, but we don't have time. If you're curious about what those are, you can follow Replicated on Twitter at ReplicatedHQ or send me an email; none of this is a secret, I'll tell you what they are. All right, I guess that's enough about that. Let's go ahead and move on, and let's talk KOTS. And we're going to talk kURL and Troubleshoot too. So one of the many features that makes Replicated KOTS such an innovative and useful platform is the vendor and admin tools. The vendor tools enable software vendors to create, monitor, and control application release channels, licensing, and entitlements across multiple environments. That's a mouthful; it will click and make more sense once you see the demo. 
The admin tools, meanwhile, handle installation and configuration of the customer environment to best support your apps, push updates, and also give access to preflight and support tools via troubleshoot.sh, which I was just evangelizing one page ago. When we get into the demo, you'll see how all of these tools interact with one another and how complementary they are. In addition to compatibility with a wide array of Kubernetes distros, including GKE on the Google platform, AKS on the Azure platform, VMware Tanzu, and of course OpenShift, KOTS also enables independent software vendors to deliver apps to un-Kuberneted customers with embedded clusters. And this is where kURL enters the chat. Basically, whether it's an air gap customer that doesn't have access to the public internet or a customer that has public access, you can bundle a custom Kubernetes installer along with your application to set up the customer environment exactly how you need it for your application to run. Send it over to the customer either in a tar bundle or with a kURL URL, with a k. They run that, bada bing bada boom, the environment is set up, the application's installed, that's it. So that's one of many things that makes KOTS and kURL so cool. But I'm kind of going off on a tangent again; let's go ahead and move on. As I mentioned, KOTS is pretty popular with ISVs, independent software vendors. Some of our customers include, but are not limited to by a long shot, SmartBear, HashiCorp, Jomik Software, and Broadcom, and as a matter of fact, 60 of the top Fortune 100 companies use Replicated to manage their applications. So I won't even try to go over all of them here. All right, that is enough of me rambling on; I'm sure you're tired of me stuttering. So I'm going to hand it over to Josh. Take it away, man. Thanks. We recorded the demo up front, so Chris, if you can play it; if it doesn't work, then we're going to fall back to an actual live demo and let Murphy kick in. 
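The kURL flow described above can be sketched like this; the installer URL uses a hypothetical app slug (sentry-enterprise), and the exact bundle and script names come from the vendor portal:

```shell
# Online install: the vendor shares a kURL URL, the customer pipes it to bash
curl -sSL https://kurl.sh/sentry-enterprise | sudo bash

# Air gap install: ship the tar bundle instead, then run the installer offline
tar -xzvf sentry-enterprise.tar.gz
cat install.sh | sudo bash -s airgap
```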
So we'll see how this goes. Thanks, Treva, for the introduction. So let's get started. I'll start sharing my screen here; I'm going to share a couple of applications at the same time, and this is where I would like to get started. What we do with Replicated is enable vendors, ISVs, to deliver their applications in an on-prem or a multi-prem environment, and obviously OpenShift can be one of those on-prem environments, wherever they are running OpenShift. So if you look at Replicated, there are really two sides of the spectrum. One is what vendors like a HashiCorp, a Harness, but also many others like UiPath, have access to, which is what I'm currently sharing here. The other side of the spectrum is what customers of those vendors have access to: how do they experience the installation of the vendor's application, but also how do they experience troubleshooting the application if something goes wrong during that installation? I would like to start with what we call the vendor portal, which is what I'm currently showing here. In this case, I have a single demo application called the OpenShift Sentry application. In the vendor portal, one of the main concepts is what we call channels. Channels can be compared a little bit to your CI/CD pipelines. So in this case, as an example, you have a stable channel, a beta channel, an unstable channel. You can, of course, have any channels you want. A very common example is that each time your application has a feature branch, there will be a corresponding channel inside the vendor portal. And in most cases, those channels are even auto-generated, because everything is typically hooked up with CI/CD solutions. So each time a new git branch is created, the whole CI/CD kicks in and creates the channel in an automated way. It's very common. 
Once the feature branch is merged into the main branch, the main branch goes to the unstable channel. Once some final QA testing has been done, it might go to the beta channel, and once there is full approval, it is going to go to the stable channel. If you take a deeper look at one of these channels, let's say, for example, the stable channel, you see a couple of concepts in there. I'm just going to silence my phone here for a second; there we go. One concept is what we call releases; you can see this one has a latest release in there. Another concept that you have in each channel is what we call customers, and customers correspond one-to-one with licenses. Let's first look at releases; let's open this up here. You see I have a couple of releases in there, and I want to show you what a release looks like. On the left, what you see here are a whole set of YAML files. What you have below the horizontal line are YAML files that are what we call application specific. As an example, I took the Sentry Enterprise application, so all the YAML files that you see below are really specific to the Sentry demo application. The YAML files that you see above the horizontal line are what we call Replicated-specific YAML files; those are what make the application a Replicated application. Just a side note: the application doesn't need to use raw YAML files. There might also be Helm charts in there, or there might even be a combination of multiple Helm charts and raw YAML files all put together that make up the application for the installation process. Now, why are there several YAML files that are Replicated specific? Well, let's highlight a couple of them. To start with, there is what we call a preflight YAML file, and the preflight YAML file defines what you could call the minimum requirements that are needed in order to install the application. 
And I'm going to come back a lot more to this later on, because this is based on a framework called troubleshoot.sh, which is fully open sourced and can even be used outside the scope of Replicated; there are multiple vendors who are using Troubleshoot without relying on the Replicated enterprise version. A good example to start with is an analyzer that checks the cluster node count. If my OpenShift cluster has fewer than three nodes, the preflight check is going to fail. If it has fewer than five, we're going to send a warn message, and if it's five or more, we're going to declare that the cluster has enough nodes. This is a very simple check, which is very common: you have an enterprise application that needs to be installed on an OpenShift or Kubernetes cluster, and in order to install it, you need at least X number of nodes. To validate that before doing the actual install, you just add an analyzer that checks the number of nodes. Similarly, you can do that with memory and CPU, and basically anything that you can run in a pod, you can collect information from and then start running analyzers on. You can see there are many more YAML files in here. We're not only talking about validating the minimum requirements; we're also talking about generating support bundles, or, for example, doing automated backups of the application. So there are quite a lot of different YAML files that make up the application itself. Going back to my channels: each channel potentially has a release assigned to it, but a channel can also have what we call multiple customers. Very typically, if you're delivering enterprise software into, let's say, a financial institution, then you will also have to provide what we call a license to that financial institution. 
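The node-count check described above maps onto a troubleshoot.sh Preflight spec along these lines; the metadata name is illustrative, the thresholds are the ones from the demo:

```yaml
apiVersion: troubleshoot.sh/v1beta2
kind: Preflight
metadata:
  name: sentry-preflight   # hypothetical name
spec:
  analyzers:
    - nodeResources:
        checkName: Cluster node count
        outcomes:
          - fail:
              when: "count() < 3"
              message: This application requires at least 3 nodes.
          - warn:
              when: "count() < 5"
              message: This application recommends at least 5 nodes.
          - pass:
              message: This cluster has enough nodes.
```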
So if I go here to customers, you can see in this demo environment I have a single customer, which is a bank. If I open up the customer, you're going to see several details in there: I have my customer name, and the customer is assigned to a certain channel, so that determines the release that they will be able to install. But the customer can also have what we call an expiration policy, which is very commonly used, for example, when a customer is doing a trial. If it's a paid customer, they might not have an expiration date, but there might also be customers who are doing installations of, say, a community version of the application. The installation is going to show community, and if the customer upgrades to the enterprise version, the vendor switches the license to a paid one. There are, of course, many more things in there, but I want to highlight maybe two. One is air gap. As we're delivering enterprise applications, like, for example, a Terraform Enterprise, into on-prem environments, those on-prem environments sometimes have what we would call the air gap requirement, meaning they're not able to access the public internet or public container registries in order to download all the images; they're fully isolated air gap environments. Sometimes it even goes as far as the only way to bring in the application, or an updated version of the application, being to physically walk in with a disk containing all the tarballs. What we do with Replicated is create all of these what we call air gap bundles, and we do that fully behind the scenes in a fully automated fashion: we download all the images, all the YAML files or the Helm charts, and everything that is needed in order to properly do the installation. For our enterprise customers, we even provide what we would call a download portal. 
I'm just going to generate a password here, because it's password protected. Let's copy this, and then I'll show you what the download portal looks like. This can be shared with a vendor's customer. You can see here I have a demo application that already uses the OpenShift logo. I'm now in the download portal of my customer, called SunBank. My customer can go here and download their license. They can download the KOTS admin console air gap bundle, the kots kubectl plugin to install, the different images to install into the cluster, and also the Sentry application air gap bundle itself. On top of that, what we do with customers is what we call custom fields, or custom entitlements. We have many vendors who might already have an existing licensing system in place, or they might have certain custom requirements on licensing. A very common requirement there is, let's say, number of users. When the application is installed, even in an on-prem environment, the license for that specific customer might be limited to a certain number of users, or it could be a number of nodes, or a number of messages, or a number of API requests. All of these can be custom configured in here, and then all of these custom fields are available during installation of the application, but also at runtime. On the customer environment, the KOTS admin API exposes an endpoint that allows your application to check, for example, whether the license has expired and what the customer type is, but also all the values of the custom fields, so that you can start acting on that. If, say, the number of users goes above the limit, you can automatically start showing a message that your customer must contact the sales department, or you can automatically start upgrading the customer. So that is a little bit of what it looks like from the vendor portal UI perspective. 
The next thing I want to show you is what it looks like from a customer and end-user perspective. Also on the channels, for each of these channels, what we expose are the installation commands to install the application inside the Kubernetes cluster or the OpenShift cluster. If the customer doesn't come with, let's say, an OpenShift cluster installed, if they just come with a virtual machine, an EC2 instance, one thing that we do from the Replicated side is support vendors installing into what we call an embedded cluster. What that will do is basically run this on a VM or bare metal machine, and it's going to spin up, in this case, a kURL cluster; kURL is our installer for upstream Kubernetes. On top of that, it's going to install KOTS, and then it's going to install the application itself. In the case that someone already has a Kubernetes cluster up and running, and that's where OpenShift comes into play, so they already have OpenShift installed, then it basically comes down to installing the application with the following two commands. The first one installs the kots kubectl (or oc) plugin. The next one uses the kots kubectl plugin, and what it will do is install the OpenShift Sentry application. That is also what I've already done up front, because it takes a couple of minutes. I spun up an OpenShift cluster, and then using the oc command, I actually installed my Sentry application. The first thing it does is ask for the namespace; it's going to create the namespace. It asks me to provide an admin password, and then it sets up a port forward for me automatically. I canceled that one, because I'm running this on a Google Cloud VM, and I want to expose that same kotsadm port using port forwarding on all addresses. 
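The two commands for an existing cluster look roughly like this; the app slug sentry-enterprise is hypothetical, and on OpenShift, oc can stand in for kubectl:

```shell
# 1. Install the kots plugin (it becomes `kubectl kots` / `oc kots`)
curl https://kots.io/install | bash

# 2. Install the app: prompts for a namespace and an admin password,
#    then port-forwards the kotsadm admin console
kubectl kots install sentry-enterprise
```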
Once that is established, as an end user I can go into my application, browse to that port, and start logging in with the corresponding password. Let me grab that password again; there we go, logging in. In order to install Sentry as a demo app, the next thing I have to do is upload the license file. If I go back to the vendor portal, because I talked about customers and licenses, each of these customers also has a license file. It's just a YAML file that can be shared with customers and provided to them. Let's save this file. I go back to my customer experience, and then in this case I can drag and drop this YAML file and upload my license. Uploading this license means that kotsadm will understand which application is installed, but it will also know that, oh, for this particular customer, air gap was enabled, so I'm going to present that customer with the option to install in an air gap environment. In that case, we have to provide the location of the registry. It's an existing cluster, so we don't know where the container registry is going to be, and the customer will have to provide what that container registry will be in order to upload all the images. In this case, I'm not going to start uploading tarballs; I'm just going to download Sentry Enterprise from the internet, so it's going to start downloading that and installing my license. The first thing that then pops up is that it asks me to configure my application. Once the license is installed, it allows me to configure my application, which is what you see happening here. What is this all about? Well, if you're developing any application, and it doesn't even matter if you use YAML files or Helm charts, most applications require a certain amount of configuration in order to be installed. A very good example is storage: you write an application, and you might need certain storage. 
As an example, that could be, let's say, Postgres that you might use as a database. If you're a vendor, you could package Postgres as part of your application, and that would be an embedded Postgres. But you might also want to give your customers the option to say, well, maybe you're running this on AWS, and you would allow your customers to use a Postgres RDS instance, and then the customer will have to provide the configuration of Postgres: what's the host, the port, the username, the password, in order to be able to connect with that external database? Very similarly, you can have things like SMTP configurations, or LDAP or Active Directory configurations, or, for example, at the top here, what you see is the admin username and the admin password. So vendors can configure this with, I'm going to switch here to VS Code to show you what it looks like. Here on the left, you see all these manifest files; in this case, they're all grouped together. I previously showed the preflight manifest file, but there is another Replicated-specific manifest file, which is the configuration. You can see this is really a one-to-one mapping with what you saw in the UI a couple of seconds ago. You see we're asking here for the admin username, the password, the number of worker replicas, and the database configuration, whether it's embedded or external. And you also see some template notations in here, for example, like this one: this auto-generates a secret key behind the scenes so that the customer doesn't have to do that. You see other notations in here as well, and this is what we call Go templating, which is available throughout all these YAML files, or throughout your Helm charts, in order to start making your application configurable. In this case, I can accept the defaults, so I can just press continue. Once I press continue, that gives kotsadm basically all the information that is needed in order to, you could say, install the application. 
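The configuration manifest described above follows the kots.io Config kind; a trimmed sketch with an illustrative database group and the auto-generated secret key (all names here are hypothetical):

```yaml
apiVersion: kots.io/v1beta1
kind: Config
metadata:
  name: sentry-config   # hypothetical name
spec:
  groups:
    - name: database
      title: Database
      items:
        - name: postgres_type
          title: Postgres
          type: select_one
          default: embedded_postgres
          items:
            - name: embedded_postgres
              title: Embedded Postgres
            - name: external_postgres
              title: External Postgres
        - name: external_postgres_host
          title: Postgres Host
          type: text
          # only shown when the customer picks the external option
          when: '{{repl ConfigOptionEquals "postgres_type" "external_postgres"}}'
        - name: secret_key
          type: password
          hidden: true
          # Go templating auto-generates this so the customer doesn't have to
          value: '{{repl RandomString 40}}'
```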
So we have the license information, we have the custom entitlements, like, say, number of users, but we also have the custom configuration: which storage options to use, what should be the ingress, what should be the host. All that information is available at this moment. Based on that information, what we can do is validate that the environment meets the minimal requirements, and that corresponds with running all these preflight checks, which is what's currently happening. Obviously, common preflight checks are, let's say, the cluster node count or the number of CPUs, but also, say the customer provided an external Postgres endpoint: does that Postgres meet the minimal requirements? Does it meet the minimum version of Postgres? Can we even connect with it? Does it have the right credentials? This takes a while, so I actually ran this a little bit before already. Once that is finished, the output will look very similar to this. We're checking the cluster node count; in this case, it's giving me a warning. Things like storage: is there a default storage class? All of this is configurable within that single YAML file, which you as an application creator can provide. And all of this is also based on a framework called troubleshoot.sh. You can see it here, right? You can run the preflight; it runs as a kubectl plugin, so that's the difference: you can run it from the command line, or you can run it through the UI. Both options are available. And what I wanted to show, since I talked a little bit about Postgres: there are many analyzers available out of the box, and there is also a Postgres analyzer available that you can use as is. What this one does here, in a preflight check, is use a collector. 
It checks whether it can connect with the Postgres instance itself, and then, using an analyzer, as an example here, it checks that the version of Postgres must be 10.x or later. Based on that, it's going to fail or pass the preflight check in an automated fashion, which is very beneficial because you, as an application developer, might not even be there when your application is being installed. Your customer might be doing that totally offline without you being involved, so this also helps your customer do that installation. By the way, you see some of these checks failing here; I just want to make a side note that these are failing on purpose, just to give you an idea of what it would look like if one of these checks fails. You can actually link them: in each preflight check, you can put a link that points back to your knowledge base and gives more information on how to configure this, what needs to be changed, or what would fail if you overruled it, because customers can overrule. I can just continue here and start deploying the application. So what it's doing now is actually deploying my application: it is deploying all those YAML files, all those Kubernetes resources, onto my cluster. One thing that you see here, from a UI experience, is that we're checking the status; you can see it's still showing unavailable. That also comes from some of those Replicated-specific YAML files. For example, this one just defines my Replicated application, and what you have in here are what we call status informers. Status informers are the Kubernetes resources that we check, using kotsadm, to see if the application is up and running. So far, what I've shown is what we tend to call day one of operations: the initial install, the first install of the application; we're not changing anything. Day two of operations is when things might start going wrong in the application. 
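The status informers mentioned above live in the Application manifest; a minimal sketch, with hypothetical resource names, of the resources kotsadm would watch to decide whether the app is ready:

```yaml
apiVersion: kots.io/v1beta1
kind: Application
metadata:
  name: sentry
spec:
  title: Sentry Enterprise
  # kotsadm polls these resources to report Ready / Unavailable in the UI
  statusInformers:
    - deployment/sentry-web      # hypothetical names
    - deployment/sentry-worker
    - service/sentry
```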
And that is where what we call support bundles, and Troubleshoot, come into play. End users of KOTS, Kubernetes Off-The-Shelf, also have access to Troubleshoot, and they can start analyzing their application: with a single click of a button, I can start analyzing my OpenShift Sentry application. It's very similar to what preflights are doing; preflights rely on the troubleshoot.sh framework, and support bundles rely on troubleshoot.sh too. So in my application, I also have a support bundle resource that defines which information I am collecting: I'm collecting certain information while generating my support bundle, and then on top of that I'm putting analyzers that check the information that has been collected. A very common case: you might have multiple resources being deployed, multiple pods running, each producing log output. Well, you can start collecting all these logs and then define analyzers on top of that. A very common analyzer there, you can see it's already finished here, but I want to first go to Troubleshoot itself, is, for example, an analyzer that allows you to use regular expressions. As an example, what you have here is collecting the log files of all the pods of our application, and then, as an analyzer, I'm defining a regex to see, oh, if there is this kind of a message that the password authentication failed for a user. Me as an application developer, I know, as an outcome, and this is kind of a little bit of inverted logic here, that if that regex is not present, then we know that the database credentials work; if that message is present, then we know there is a problem with the database credentials. You can also run this from the command line with a kubectl plugin; there's a preflight kubectl plugin and a support-bundle kubectl plugin. 
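The logs-plus-regex pattern described above might look like the following support bundle sketch; the label selector and file path are illustrative, and note the inverted logic: when "true" means the regex matched in the logs.

```yaml
apiVersion: troubleshoot.sh/v1beta2
kind: SupportBundle
metadata:
  name: sentry-support-bundle   # hypothetical name
spec:
  collectors:
    - logs:
        name: app-logs
        selector:
          - app=sentry          # hypothetical pod label
  analyzers:
    - textAnalyze:
        checkName: Database authentication
        fileName: app-logs/*.log
        regex: "password authentication failed for user"
        outcomes:
          - fail:
              when: "true"      # the error message is present in the logs
              message: Database credentials appear to be invalid.
          - pass:
              when: "false"     # the message is absent, credentials work
              message: No database authentication failures found.
```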
So we're switching here to the support bundle side of things. You can also run this as part of KOTS, and KOTS then gives you the whole UI experience. If I go back here into KOTS, this is what the UI experience would look like for an end customer who is installing this application: they get this tile output that shows what is going wrong with that particular installation. Currently it's only showing the errors and the warnings; you can also show everything. You can see there's a whole bunch of analyzers running in there. Obviously, people using Troubleshoot have access to the corresponding files. And besides collectors and analyzers, there's also a third component to Troubleshoot, which is what we call redactors. If we go back to Troubleshoot and head to the docs: on one side in Troubleshoot, you can define collectors, and you can see there are a whole bunch of collectors available out of the box; if what you need isn't there, you can just create your own pod and create your own collector of information. Second, within Troubleshoot there are analyzers; similar to collectors, there is a whole set of analyzers available out of the box. But the third component is what we call redactors. Redactors are typically defined by an end customer: they define which information needs to be taken out of the collected data in order to be able to share the Troubleshoot support bundle. So what happened here when I was running this is that, behind the scenes, there's also a redactor report: we've taken out things like usernames, passwords, and IP addresses, and this is just to make it more secure. End users can also define their own redaction, so they can configure which additional redactors need to be part of it. And of course, the last thing is that this can also be run from the command line; this is again based on a kubectl plugin, in this case the support-bundle kubectl plugin. 
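A customer-defined redactor is its own troubleshoot.sh resource; a sketch, with an illustrative regex whose named mask group hides only the password portion of a match, plus a literal value to scrub wherever it appears:

```yaml
apiVersion: troubleshoot.sh/v1beta2
kind: Redactor
metadata:
  name: customer-redactors      # hypothetical name
spec:
  redactors:
    - name: redact-connection-strings
      removals:
        regex:
          # only the part captured by the `mask` group is replaced
          - redactor: '(password=)(?P<mask>[^;\s]+)'
        values:
          - SuperSecretToken    # hypothetical literal string to mask
```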
And if I were to run this from the command line, I would run a single kubectl command to get this up and running. And of course, if it's running in an OpenShift environment, we simply replace kubectl with oc, and that does the corresponding thing. So one last thing I would like to do is give you an example of what happens when I change something in preflights or in support bundles. One thing I'm going to do in my preflights is add one additional check. First of all, I'm going to add a collector in here. What the collector is going to do is check if I can connect to OpenShift Commons. I'm going to make sure that I get the indentation right here. This is collecting the information: it's using an HTTP GET request, and it's going to check if it can reach commons.openshift.org. I wrote this up front so I don't make any typos. Next to that, I'm going to add an analyzer. In this case, I'm going to use a regular expression with the textAnalyze analyzer, and in the output of my collector, I would like to see a status 200. If I get a status 200 back, which is this outcome, then I'm going to show a message: yes, we can reach OpenShift Commons. If I don't get a 200 back, which is this other outcome, then I'm going to show a message that we can't reach OpenShift Commons. It might be that OpenShift Commons is down, or it might be that I do not have outbound connectivity from my cluster. Going back to my support bundle, at the very bottom I'm going to add my analyzer. Just going to check; this one needs to be indented. There we go. And then I'm going to save this one. Now, to make this available to my end customer, I need to send this new version of my preflight YAML file to the vendor portal. So I'm going to use the replicated CLI here.
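The preflight check being added here might look roughly like this in troubleshoot.sh YAML. It's a sketch under the assumption that the http collector writes its result as JSON containing the response status; the resource name, result file name, and messages are illustrative.

```yaml
apiVersion: troubleshoot.sh/v1beta2
kind: Preflight
metadata:
  name: sentry-preflight           # hypothetical name
spec:
  collectors:
    # HTTP GET against OpenShift Commons to test outbound connectivity
    - http:
        collectorName: openshift-commons
        get:
          url: https://commons.openshift.org
  analyzers:
    # Look for a 200 status in the collected response
    - textAnalyze:
        checkName: Can reach OpenShift Commons
        fileName: openshift-commons.json   # assumed result file name
        regex: '"status": 200'
        outcomes:
          - pass:
              when: "true"
              message: Yes, we can reach OpenShift Commons
          - fail:
              when: "false"
              message: Cannot reach OpenShift Commons; the site may be down, or the cluster may lack outbound connectivity
```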
I guess I ran this already. So what I'm doing here with the replicated CLI: I can do a replicated release ls, which shows me all the releases I have available in my vendor portal. You can see I installed this particular one just before we started the webinar. If I want to change this, I can create a new release: replicated release create with --auto, and I want to promote it immediately into the Stable channel, because the Stable channel is also the channel I'm using in my cluster as an end customer. Because I specified --auto, it's going to create a release, put it into the Stable channel, and look into the manifests directory. I just say yes, and this sends everything to the vendor portal. The vendor portal now has a new version available, which is going to be sequence number six. Me as an end customer, how do I get this into my OpenShift environment? Well, there's actually a cron job running here that constantly checks for updates, but I can also trigger this myself. So I'm just going to check for updates against the vendor portal now, and you can see there is a new version available. There's a license change, which I previously did, I think, but there's also an upstream update, and it's this upstream update in particular that I'm interested in. The first thing we do out of the box when there's a new version is check the preflights again. So if I go to my preflights, the one I'm particularly interested in is the one you see at the bottom. This one at the bottom is what I just added: as an application developer, I added a collector that sends an HTTP request to check if I can reach OpenShift Commons.
And then I'm checking the output of that one, which is done here in the textAnalyze analyzer, and it shows me that there must be a status 200 in the output, because I'm seeing this message. And of course I can continue from here, or I can just install the new version of my application, which, to be honest, isn't going to install anything, because I didn't change anything in the actual application. So it's going to go fairly quickly. That's it from my side. So I think the next step is to go over into Q&A. Well, thank you guys, that was great. I've seen Replicated at KubeCons and other events, and I've run by the booth, but I think that was probably the best overview. I've got two streams going at the same time; hang tight here, I'll get my audio figured out. So that was a great demo, thank you very much for that. It really pulled together everything I was wondering about where Replicated was going and how it fits into the Kubernetes ecosystem. That was incredibly useful for me, and I hope for everybody else who watched. And one of the things, Trevor and Josh, that was funny: just as you were demoing, I was typing in, can we talk about the air gap stuff? Because that's one of the more complicated use cases. And you segued right into that as I was typing, so good on you for covering all of that. And one of the things that's so interesting to me, because I know you had a really long-winded title for this, Trevor, is that it really makes sense now, right? Because you're not just deploying; you're managing, you're patching, you're upgrading, you're doing everything for all of these third-party apps. And for any enterprise, from a vendor point of view, you think, oh, you should just use our tools for OpenShift.
We'll solve all of the mysteries of everything. But a real enterprise is running so many third-party apps and deploying on so many different platforms that I can totally see the use for this. But one of the things that I always ask people about, because I come from a bit of an IT audit background, is the compliance and risk point of view, which I didn't really see: how you're tracking some of that, or the reporting on the back end of what you're doing with Replicated. How does this appear? If I want to see what's going on and keep track of all of the releases from an overview point of view, and maybe Josh can talk a little bit to that? Yeah, I'm unmuted. Yeah, I did one update, basically, in kotsadm. But normally what you will see is that each time there is a corresponding change, it shows up in kotsadm and the whole version history. And what I also didn't highlight is that there is the capability to do a diff on all of those releases, so you can actually see what has been changed between them, down to the corresponding YAML. What we also see, in terms of compliance and that kind of stuff, is that in the more advanced use cases, customers want access to the actual raw YAML files, the ones that are going to be installed, and there is of course access to that. We even go a step further: instead of kotsadm installing the app, we can send everything into a Git repository, and then people can use something like Argo CD or Flux to apply the application or install the new version of the application. We see that this is more common with the more advanced Kubernetes users who are experienced in that, who have already standardized on one of those GitOps solutions and want to standardize on it for all of the applications they have in their enterprise environment.
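For the GitOps flow described here, the idea is that the rendered application YAML lands in a Git repository and a tool like Argo CD syncs it into the cluster. A minimal Argo CD Application pointing at such a repository might look like the sketch below; the repository URL, path, and namespaces are all hypothetical.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sentry                     # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    # Hypothetical repo that receives the rendered application YAML
    repoURL: https://github.com/example-org/rendered-manifests.git
    targetRevision: main
    path: sentry                   # hypothetical path within the repo
  destination:
    server: https://kubernetes.default.svc
    namespace: sentry
  syncPolicy:
    automated: {}                  # sync automatically when a new version lands in Git
```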
In terms of audit, what you bring up is also a very good use case, because I think I brought up during the demo the number of users as part of the license. There are common use cases where people deliver third-party apps and say it's licensed based on the number of users, but they never validate the number of users, because they don't really want to restrict their end customers; they just trust their customers. What you can do with Troubleshoot is define a collector that collects the number of users, and then define an analyzer that says, okay, you're in violation of your legal contract because you have 15 users and you're only paying for 10, or just put up a warning, something like that, and then do your whole audit tracking on top of it. Troubleshoot gives a lot of capabilities to run within a customer environment: basically anything that can run in a pod, anything that can run in a container, you can run as part of Troubleshoot, and you can start analyzing on top of that, which makes it great. So, I've also been reading a little bit about this enterpriseready.io site, which sounds like it combines with everything else you're doing. How does that play in? Do you want to take that, or shall I? Yes, I'll take it. I'm trying to find the unmute button. Enterprise Ready, that is actually our podcast website, and that is interviews with thought leaders in the tech community, and it's very, very interesting. Our CEO Grant and our CTO Mark usually handle those interviews, published weekly. Super fascinating; I highly encourage you all to subscribe. But yes, that's what Enterprise Ready is.
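As a sketch of the licensing-audit idea described here: because Troubleshoot can run arbitrary pods, a collector could query the application's own database for a user count, and an analyzer could flag counts above the licensed limit. Everything below is hypothetical — the database host, credentials, table, result-file path, and the crude regex that matches counts above 10.

```yaml
apiVersion: troubleshoot.sh/v1beta2
kind: SupportBundle
metadata:
  name: license-audit              # hypothetical name
spec:
  collectors:
    # Run a one-off pod that asks the app's database how many users exist
    - runPod:
        name: user-count
        namespace: default
        podSpec:
          containers:
            - name: count-users
              image: postgres:14                       # assumes a Postgres-backed app
              command: ["psql"]
              args: ["-h", "app-db", "-U", "app", "-t",
                     "-c", "SELECT count(*) FROM users;"]
  analyzers:
    # Crude check: warn when the reported count is 11 or more
    - textAnalyze:
        checkName: Licensed user count
        fileName: user-count/*.log                     # assumed location of the pod output
        regex: '\b(1[1-9]|[2-9][0-9]|[0-9]{3,})\b'
        outcomes:
          - warn:
              when: "true"
              message: More users than the license allows; please contact your vendor
          - pass:
              when: "false"
              message: User count is within the license
```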
I think the Enterprise Ready website is also a content platform that we've sponsored for a number of years, and it's really a guide for ISVs to gauge their level of readiness to deliver their application to security-conscious, large enterprise customers. It has a great checklist, a kind of feature roadmap if you will, especially if you're a small SaaS vendor trying to figure out how to take that SaaS application on-prem to a large enterprise customer, of the sorts of features those customers are going to demand, right? We talk about things like audit, identity management, single sign-on, and things like that. It's really a great guide. We view it as a resource for entrepreneurs and people who are building software companies, and the centerpiece of that is the podcast that Trevor mentioned, where we've had some really amazing founders: folks like Loris Degioanni from Sysdig, the folks that invented Wireshark, and Tom Preston-Werner, original CEO and co-founder of GitHub. So really just a great set of startup luminaries and smart founders and CEOs that Grant and Mark speak with on the podcast. But the website, again, is really a cool guide for software companies that are trying to make the journey to become enterprise ready, as the name implies. But thank you so much for having us, Diane, this has been great. I think that's the wonderful value proposition of Replicated: you're really facilitating a lot of other people's applications getting into enterprises, being useful to them, and being secure enough that enterprises will take them seriously. And I've seen and listened to some of the podcast episodes in the past as well.
And I think that's really the sweet spot you're in: you're making it possible for all of these third-party applications to get ready for prime time, in some ways by helping incubate them with the Enterprise Ready content and coaching. And then, once they're there, using the Replicated suite really helps enterprises trust these things and deploy them securely in a consistent way. So it's been very interesting and a good eye-opener for me. Like I said at the beginning, I've walked by the Replicated booth at KubeCon, stopped in, and chatted with people, and this has really helped me place where you are in the ecosystem very nicely today. We're really thrilled that you have joined OpenShift Commons, and we look forward to hearing more about each of these individual projects, Trevor. Don't worry, we will find time for everything Troubleshoot. I hadn't seen it before; it is totally cool. And I'm encouraging people to take a look at it as well, because there are lots of tools in here that, rather than reinventing the wheel, people can use in their installation and deployment practices. So I think that's really another useful thing. And I just love how open source you all are, so I really appreciate that. Trevor, tell us a little bit about what's going to happen: the community calendar, where we're going next, where we need to go to find you and get involved. Yes. So, the main page is replicated.com. As I mentioned, we have a lot of open source projects, with the list growing every day; it's about to expand by two or three in the next couple of months. Go over to replicated.com/oss for a list of all of our current projects, and keep an eye on what is coming next. I think you're really going to like the next project that's coming out. The projects we talked about today are under the replicatedhq GitHub organization.
And that's at replicatedhq/kots for KOTS, Kubernetes Off-The-Shelf; replicatedhq/kurl for kURL; and replicatedhq/troubleshoot for Troubleshoot. I'm not reading this out very well, I apologize, but hopefully seeing them written out will help you get there. There's also the community calendar. We're having a community meeting for Troubleshoot this coming Thursday at 2pm Eastern, and we're also having meetings for SchemaHero and, I believe, for kURL. We'll be adding more to that calendar, so that's an easy way to keep track of what's happening with Replicated open source and the Replicated community. I'll be adding more events as they come along, so I certainly hope you'll subscribe, and I hope to see you either online or in person. We really enjoyed being here; thank you so much, Diane, for the invite. I think that's all from me, so join the community. One last thing to add, Diane, is that Replicated is a Red Hat business partner now, and we're in the process of getting the Replicated platform that you just saw certified for OpenShift. As you know, there was that change a while ago to having to ship on the UBI, so we have a little bit of a delay while we get onto the UBI, but we absolutely have every intention of getting the Replicated solution certified for OpenShift, so that both enterprise end users and ISVs can ship with confidence, knowing that it is a runs-on-OpenShift certified platform. That's our goal. That is awesome to hear that that's on your hit list and you're going to get that done. We'll have you back when that happens to celebrate and raise a glass or two. Hopefully we'll see you all at KubeCon in person, I'm really hoping, fingers crossed. And SchemaHero is something I want to hear more about. I'm going to launch a cloud native TV show sometime later in September, mentoring new open source projects that are in the CNCF; it's called Cloud Next.
And I think that might be a good candidate for it, because now, with what will be 50-odd sandbox projects, there's a lot of noise, and it would be wonderful to deep dive into SchemaHero a little bit. So I'll reach out to you about participating in that at some point as well, Trevor. As always, it's been a huge pleasure hearing from all of you. Thank you for giving us this update and these insights into what you're doing. There's lots more to come, obviously. So take care, all. And thanks to Chris Short, our producer, for making this happen, and Amit and the whole team. Josh and Trevor, well done. We look forward to hearing much more from you all. Take care, all. Thank you, everyone. Thank you. Thanks, everybody. Bye.