Great. Thank you. Before we go find out whether open source won or not, let's go back one step and just say thank you, everybody, for joining us. I'm Karsten Wade, and I'm a community architect working on Operate First. Let's go ahead and dive in.

So we all know the story about how open source won, right? Originally you had a mainframe, and you could buy the software and the hardware from one vendor. Only the manuals were there to help you use it, and there was no way to improve this tool. But by opening up computing and the components of the data center, by open sourcing all of these, open source was able to return the value of creating and running software back to everyone. Suddenly everyone can be involved in the open source development model; it's wide open, in the hands of the people. So we can rest on our laurels, right? Or can we?

There's been a shift in the days since open source went from a foothold to really taking over and opening up the data center. Starting with the operating system, it grew outward to provide all the core data center services. But one of the important powers of open source was that theoretically anyone could get the software and make it run. They could operationalize it and do things with what is otherwise a cold lump of code. While many were writing the software, everyone had access to the code to do things with it, to turn their computer, which was otherwise essentially just a warm heater, into something that actually operates and does something. So this leveled the playing field for everyone using these tools, but this leveling meant that we could compete or innovate or get lazy, whatever we wanted to do. We had a fair and equitable chance to make that happen as we wanted. Writing software, especially from scratch, is hard, but learning operations one action or standard operating procedure at a time is far more accessible.

So more and more value was found in the operation of the code and not just the writing of the code. That came along with the dawn of the era of cloud computing, which brought an increasing ability to capture that economic and innovative value, with all that workload data and its ability to improve operations, all inside these proprietary environments. This provided the public cloud operators with a recursive feedback loop that essentially paid them to learn through micro failures. They provide the compute and the infrastructure-as-a-service layer, which also allows them to be continuously learning from the telemetry, learning from what had always been exhaust out the back of an IT shop, because of the scale of logs and data versus the ability of humans to parse that into meaning.

What the open source movement had done was succeed in evolving the way we develop software as a people, but we never solved the problem of operating that software in a truly comparable way, making it possible for all open source to become self-operating, whether operating at small scale or at large scale, whether it's figuring out how to slice and dice new configs from old, all the way to the complete reinvention of operations as an engineering discipline. It does little good for the improvement of open source software when developers cannot get access to and learn from a live production cloud. This learning has been restricted to employees at large organizations that implement these DevOps and SRE practices.
This restriction of knowledge of the what, the how, and the why of operationalizing open source software means the developers of the software being operationalized with proprietary ops do not gain from that knowledge. Again, the open source developers don't gain from the knowledge of that operationalized software; they have no chance to actually watch their code running in a production environment. Open source projects have a natural method for incorporating knowledge back into the code base, a tried and true process, and open source projects are purposely built to allow for a natural progression of curiosity: that needs a fix; I wonder if I can contribute that fix; what should I change; how do I make it work; oh, my change broke; oh wow, others can use my change; how do I release it? All that interest gets turned into a contribution. But that doesn't work on the closed operations side of open source software, which I'll lump together here as software as a service, essentially, and that quickly chokes off the interest toward making a contribution, the interest in making the software better. So software as a service dries up this funnel, because you can get stuck right at the beginning, where you don't have access to the stuff that runs the service.

What Operate First brings in is the power of open source, the ability to turn users into contributors. Now that you can see all the operations, you'll be able to contribute not just to the operations but to the parts that are most important to you as a user or developer. And that opens up that wealth of data to developers for improving our software. This means developers have read-only access to all the data, and by all the data we mean the metrics, the logs, the support issues created, just as you would have access to every line of code in an open source project. Open source communities thrive because of these easy onboarding processes, meaning we can turn at least a small fraction of users into contributors: you can read about a problem or report a problem, and then, if you really want to walk that mile, you can resolve the problem.

So this is what Operate First is. It's a concept to incorporate operational experience into software development by extending development to include operating, testing, and proving that code in a production environment, a live production environment, with ongoing lifecycle testing, not just a pipeline loop at the end. And I'll note here that this is a method that doesn't just mean open source; it's a DevOps methodology, something that people are doing already internally. So it's us taking this operate-first methodology and applying it to the open source ecosystem. And we know that, by the very nature of open source development, if you can provide meaningful data back into development from a live production environment, it's going to have exponential improvements, not just to that code base but beyond. Open source developers who gain meaningful data as they tweak an upstream library they are using, say a 7% improvement in performance, are going to have an easier time convincing upstream to accept their patches. That's just the tip of the potential. So now let me stop here and shift over. Let's talk about community building in the Operate First context.
We want to be inclusive to all personas. To be inclusive in this context means to welcome those who want to get started on a journey in a certain area and to be inclusive to those who have already mastered it: somebody who wants to learn how to run a cloud environment and wants to get their hands dirty, and those who already know how to do all this and want to share their knowledge and train people. So really be inclusive across the complete spectrum, from beginners to professionals and experts in a certain field.

What personas are we targeting? It starts with operations: site reliability engineers and DevOps people who run and maintain all the components. Then we have developers who create workloads and want to run applications in that cloud, but also those developers who develop certain building blocks of the cloud. Being inclusive to those who use the cloud: without any users you will have a very sad deployment, a service that nobody really uses, and only with real usage will you run into those corner cases. Being inclusive to people who do product support, who help users and connect them to developers and operations people to solve problems and create better documentation. Being inclusive to architects, architects in the sense of building a certain use case, who want to run and deploy that use case in the community cloud, open for inspection, open for usage by users, open for inspection by developers, and who work together with the ops people to make the deployment stable and a pleasure to use. And then at some point also being inclusive to all the robots and the small integration tools that help users and ops people alike, and that someday feed into AIOps tools so we will have this self-healing, self-driving cloud environment.

So what does this Operate First community cloud production environment look like? We started out with one cluster at Boston University, on the Mass Open Cloud. That's in a physical data center; it comprises a large bare-metal cluster running in that environment. But we also extended that into a deployment running in Europe at Hetzner, so we already have something that spans multiple geographies, one in the United States and one in EMEA. We also have clusters running in the actual cloud, in AWS, and we're looking forward to extending that into other bare-metal data centers at universities and into other cloud providers such as IBM Cloud.

Now, with all these different data centers and clusters, you want to run workloads there. At the forefront we have Open Data Hub, which is a project for doing cloud-native data science, so you will have things like JupyterHub there, Kubeflow Pipelines, and other data science services. This service is available for everybody in the community to use and inspect. Then there are other workloads such as Thoth, which provides services for AI DevSecOps and build pipelines. We have some projects from the Java community, such as Apicurio and Quarkus, and Pulp, a Python package index. And we have a lot of other smaller deployments, so workloads are running across these multiple data centers.

Obviously you want to manage and automate all this stuff. There's ACM, Advanced Cluster Management, which is being used for deploying and managing clusters. We use GitOps for deploying workloads and cluster configuration; for that we have Argo CD, and then we have Prow, a CI tool from the Kubernetes community.
We have Tekton pipelines for running our build pipelines, and we have the Observatorium components for monitoring, such as Prometheus, and Loki for storing logs. So all these tools are there; they're open for usage and inspection by the community. And we consider everything as a service. So Open Data Hub, the Tekton pipelines, the monitoring stack: they are not only used to run the cloud, they are also exposed as services that everybody in the community can reuse and build on top of. On top of that there are the operators, which you can run in your cluster, and everybody can use the services those operators provide. And we have relaxed requirements for running these operators. In your typical production cloud you might need to convince the people who run the cloud to get a community operator installed, but in the Operate First cloud we really embrace beta and alpha versions of operators, and community operators, for trying things out and seeing how they integrate with all the other components there. So it's really open for a variety of different operators.

And obviously we want to create lots of operational data, because without all that data open for inspection you would always start from scratch. So whatever we create here is preserved for later inspection, for people to build upon the knowledge that we create, to inspect the data that is being created, and then maybe train some AIOps models on top of that data. Since it's really, really hard to get hold of data that is proprietary, we try to be as open as possible and create all the metrics, all the logs, all the incidents, all the tickets, and give them back to the community so that they can build upon them. The same is true for blueprints. Blueprints are basically architectural decisions that we write down and distill into documents so that you can follow that train of thought if you run into a similar problem or if you're facing a similar decision, such as: how do I store my secrets? How do I manage single sign-on? These decision records can be used as a basis if you want to create a similar deployment of the Operate First community cloud. So we invite everybody to partake, join the community, and help build out this variety of tools and best practices.

And now a demonstration of the Operate First community cloud in action. Demo one: onboarding a cluster to the Operate First community cloud. In this demo, we onboard an existing OpenShift cluster to the Operate First community cloud environment. Steps: one, add the cluster to Argo CD; two, enable single sign-on; three, enable monitoring. Let's get started. In this demo, we're going to explore how we manage OpenShift clusters within the Operate First community cloud by showing you the cluster onboarding procedure. We're starting here at the welcome screen of ACM, which stands for Advanced Cluster Management, a Red Hat product based on the Open Cluster Management project. On this screen, you can see the fleet of clusters under Operate First community management. ACM allows you to directly provision OpenShift on various providers, as well as connect existing clusters to the fleet. Adding a cluster to ACM means executing a single command as a cluster admin on the target cluster. In this demonstration, we'll focus on this demo cluster, which is already connected to ACM. As you can see, it's running OpenShift 4.8, it's running on AWS, and it's powered on.
In order to onboard this cluster under our GitOps management, we're going to follow this Jupyter notebook guide. We use Jupyter notebooks since they work nicely as both static documentation and as an interactive guide that can be followed and executed. By the end of this notebook, we'll get a Git commit that we can open as a pull request against our Git repository. As you can see, this particular guide has some prerequisites: to have an OpenShift cluster up and running and to have that cluster connected to ACM. We've already done that prior to this demo. The whole guide is written in a script-like pattern where we can set all the necessary variables in the first cell and then just execute the rest of the notebook. For the sake of this demo, we'll take a closer look at each of the steps, but in real life this is not necessary and we can execute the cells as they are.

In the next step, we're going to fork and clone the apps repo. As you can see, I already have a fork available. Let's clone it and change our working directory. In Operate First, we manage all Kubernetes resources via Argo CD. To connect our Argo CD instance to this new cluster, we use ACM's integration capabilities. ACM can set up the Argo CD cluster connection, create a service account on the target cluster, and set up the proper permissions. We instruct ACM to do so via a shared GitOps-enabled cluster set resource. So all we have to do now is label the managed cluster resource for this demo cluster with the appropriate cluster set label. We will add this resource to the ACM application folder in the apps repository.

The next task on the list is to enable SSO on the demo cluster so users can log into its console using social login. Let's see how the situation looks on the cluster before enabling SSO. I'm going to open the console for this demo cluster and also a console for this other smoke cluster we have, so we can compare them. On the demo cluster, we're only prompted for credentials, which we don't have. On the smoke cluster, on the other hand, we have this Operate First identity provider tile that can be used for direct access. In order to enable SSO on this new cluster, the first thing we need to do is configure our SSO server instance to accept the new cluster as a client. We can do that via a Keycloak client resource. We will use the cluster name as the client ID and a generated UUID as our secret. Next up is to propagate the same credentials to the onboarded cluster. We create a secret resource containing the client secret and encrypt it with SOPS. Then we patch OpenShift's OAuth resource by adding a new identity provider. As you can see, we reference our Keycloak server as the issuer, specify the client ID to match the cluster name, and reference the secret we created above as the client secret. Now we can assume we have SSO in place on the cluster, so we can define a cluster-admin user group granting the cluster-admin role to a few users in case of emergency.

As you may have noticed above, all the resources that should live on the onboarded cluster as part of cluster management live in an overlay in the cluster-scope folder. Since we are effectively creating a new overlay in this folder, we need to define a kustomization file so Kustomize knows which manifests to bundle and how to combine them. Here you can see we are pulling resources from a common overlay, which defines shared resources like user groups. We are also pulling some other resources from a base; there we keep track of all privileged resources for all of the clusters.
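For reference, here is a minimal sketch of what such an overlay's kustomization file might look like; the directory layout and resource paths shown are assumptions for illustration, not the exact contents of the apps repository.

```yaml
# cluster-scope/overlays/<cluster-name>/kustomization.yaml  (hypothetical layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # shared resources such as user groups, pulled from a common overlay
  - ../common
  # privileged, cluster-specific resources tracked in the base
  - ../../base/core/namespaces/opf-alertreceiver
  - ../../base/user.openshift.io/groups/cluster-admins
```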
In that base you can find user namespaces, operator subscriptions, and so on. We will store this kustomization file in the cluster-scope folder as a new overlay. In addition to the basic onboarding, we are going to enable proper monitoring on this new cluster. We will leverage OpenShift user workload monitoring, which is a centralized Prometheus-based stack. Enabling it is as simple as pulling a config map patch into the cluster overlay from the cluster-scope base. To complete the monitoring stack, we deploy a GitHub alert receiver instance, which consumes cluster alerts and turns them into GitHub issues; that makes them publicly accessible. We already have all the manifests in place for that, so in order to add the alert receiver to the new cluster, we need to create a cluster-specific overlay in the alert receiver folder and adjust a few labels and such.

Now we will put things together by defining application resources for Argo CD, so our Argo CD instance knows what manifests should be deployed and where they should be deployed. We embrace the app-of-apps pattern here: we have a centralized application watching for application resources for this cluster in our Git repository. That way we get all the application resources managed via GitOps as well. We define where the application resources are located in Git and that they should be placed on the cluster running the Argo CD instance. Now we create the corresponding folder and populate it with applications. The first application resource we create is an application for the privileged resources; we call it cluster resources. This application is set to consume manifests from the overlay in cluster-scope where we placed our alert config and other resources in the previous steps. Then we create another application, this time for the alert receiver, again sourcing the overlay we created above. A sketch of such an application resource follows below.

Now all that is left is to commit all the created files and create a pull request. Once the PR is created, our CI will pick it up, run some checks on it, and we wait for it to be reviewed. We use Prow for CI and review. We lint all the files via pre-commit and attempt to build all the changed manifests via Kustomize. We also use Prow to streamline the review process, so once CI has passed and the PR has been properly reviewed, Prow will label the PR and merge it automatically. It seems our PR was successfully reviewed now, so let's wait for Prow to label the PR and merge it for us. Since Argo CD is watching for changes on our repositories, it will also notice the new commit and attempt to sync the resources for applications with automated sync policies.

Now let's take a look at our Argo CD instance. Once the PR got merged, we can see our cluster appearing in the list of clusters available to Argo CD. We can also list the applications deployed to this cluster. As expected, we have two applications available here, one for the cluster resources and a second one for the alert receiver. Both are syncing now. We can dive into the individual applications to check on which resources are being applied. Among others, we can find the namespace for our alert receiver here, the secret for the SSO credentials, and the alert configuration. In the alert receiver application we see that the deployment was successful and it is doing well. Let's also check on the cluster console real quick. This is the login screen from before. Let's refresh the page to see if SSO was enabled. As you can see, we have the SSO tile available now. Let's try it out.
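For reference, a minimal sketch of an Argo CD Application resource along the lines of the cluster-resources application described above; the name, repository URL, and overlay path are assumptions for illustration.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-cluster-resources          # hypothetical name
  namespace: argocd                     # namespace where the Argo CD instance runs
spec:
  project: default
  source:
    repoURL: https://github.com/operate-first/apps   # apps repository (URL assumed)
    targetRevision: HEAD
    path: cluster-scope/overlays/demo-cluster        # hypothetical overlay path
  destination:
    name: demo-cluster                  # the newly onboarded cluster as known to Argo CD
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

With an automated sync policy like this, Argo CD notices new commits on the repository and applies them without manual intervention, which is what we see happening after the PR merges.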
We've been successfully logged into the OpenShift console without being prompted for a password. Now we can consider the cluster onboarded and available to users, so they can request their namespaces via GitHub.

Demo 2: onboarding a project to the Operate First community cloud. In this demo we onboard a user project to one of the Operate First clusters. Steps: one, project onboarding; two, deploying the application; three, testing the application, all performed via GitOps.

Hey all, thanks Tom. We have a cluster available here in Operate First for us to deploy our project to. So what we will do now is work as project admins and see how we can deploy our project to a cluster on Operate First, and we will do this the Operate First way. To do this we have a guide, a hitchhiker's guide to Operate First, similar to the one Tom used earlier to onboard the cluster. Essentially what it does is help you create a bunch of manifests that need to be merged into the Operate First apps repo, and once they are merged in that repo, the manifests are applied. Let's start.

Here we have a bunch of variables that we need to set. Let's start with the user variables. For the GitHub username I have set my handle here. For the namespace name, let's set it to demo-jupyterhub. Why JupyterHub? Because JupyterHub is the application that we are going to deploy in this namespace. We give the demo-jupyterhub namespace a small display name so we know what's deployed in it; it's like a small description. Next, for the team name, I like demo-team. For the cluster we can choose the demo EMEA cluster, which was onboarded earlier by Tom. And we will add Humair as a namespace admin because he's going to perform a demo in this namespace.

Once we have set all the variables, we want to fork the Operate First apps repository and clone it. Let's go there and fork it. This is the repo where we have all the manifests that are applied to all our clusters. I've already forked this, so what we want to do is clone it. We don't really need to copy the address here, since the guide has already set up the commands for us; all we need to do is just run them, which is true for the rest of the guide as well. All we need to do is run all the cells, and it should create and push all our manifests for us. Let's see.

Now we're going to create the manifests for the namespace. To do this we will use the Operate First CLI, the opf CLI tool that we have created. It lets you create manifests for the namespace, and it will also create an OpenShift group for us; anyone in this OpenShift group will have namespace admin privileges for the newly created demo-jupyterhub namespace. This group is called demo-team. Let's run it. Next, we want to add this namespace to a specific cluster. Here in the next cell we have specified the target cluster, and earlier in the variables we set up the demo cluster as the target. Now we want to add me and Humair as namespace admins, and this cell will do that. Let's run it. Next we want to finalize everything and make sure all the files that should be created and modified have been created and modified. To do so, you can go to the files here, go to the cluster-scope directory, and see that the specific files were created. But we have already tested this, so let's just add these files to our repository and push them so we can create a pull request.
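As a rough illustration of the kind of manifests the opf CLI generates in this step, here is a sketch; the exact files, labels, and names the tool produces may differ, and the usernames are placeholders.

```yaml
# Namespace for the project (illustrative only)
apiVersion: v1
kind: Namespace
metadata:
  name: demo-jupyterhub
  annotations:
    openshift.io/display-name: "Demo JupyterHub"   # the small display name set in the guide
---
# OpenShift group whose members get admin rights in the namespace
apiVersion: user.openshift.io/v1
kind: Group
metadata:
  name: demo-team
users:
  - anand     # placeholder usernames
  - humair
---
# RoleBinding granting the group the admin role in the namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: demo-team-admins
  namespace: demo-jupyterhub
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: demo-team
```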
That gives us a pull request with the changes that we have made. Now let's see if the changes were pushed. Awesome, our commit is ahead. Let's open a pull request. In this pull request you can see we are going to add a bunch of manifests. Let's add a description for this pull request: this will add me and Humair as project admins for the namespace demo-jupyterhub, and then we specify the cluster, the EMEA demo cluster. Perfect. Once we create the pull request, it needs to be reviewed by one of the Operate First ops members, and all the CI tests need to pass as well. After the tests have passed and it has been reviewed by one of the ops members, it should be merged into the repo. Humair merged it in, thank you Humair. Now that these changes are in the apps repository, Argo CD will detect them and try to apply them to the cluster. Let's go back to the cluster console and see if the changes were applied. Let's refresh. The changes usually take about a minute or two. Awesome, we can see here that the demo-jupyterhub namespace was created. But we don't really have anything deployed in this namespace yet.

Now we want to deploy JupyterHub. How do we do that? In our production environment we have JupyterHub set up, and we use the Open Data Hub operator to deploy it. But since operators are a cluster-scoped resource, we need higher privileges, and as project admins we don't really have those privileges. I mean, here I am a cluster admin, so I could just go and search for the operator and click install, and it would install the operator for me. But we want to do this the Operate First way, the way a project admin on Operate First would do it. So to do this we have created a pull request. What this pull request does is add a subscription for the Open Data Hub operator; this is how we will be installing the operator on this cluster. You can see we have a subscription file here (a sketch of such a subscription appears below), and we need to get this PR reviewed and merged. Let's wait for the CI to pass. I can see Humair has already approved it, awesome. Now the CI has passed, and the PR should be merged in soon. And awesome, now that the PR has been merged in, Argo CD will again detect these changes and try to apply them, the change being the Open Data Hub operator getting deployed on this cluster. Let's check the installed operators. This operator is going to be deployed in the openshift-operators namespace. Let's give it a minute. Oh, we can see a pod being created here, and yes, the Open Data Hub operator has been installed. We should be able to see the pod running for it.

Let's go to the JupyterHub namespace. Now we need to use the installed operator to deploy JupyterHub for us. How do we interact with this operator? We interact with the Open Data Hub operator using KfDef files. Here is an example file. But since we already deploy JupyterHub in our production environment, we already have a manifest available for us, so we're just going to paste it and create it. Normally we would do this in Git, but in the interest of time we are just going to apply this manifest here, and a project admin would have these privileges, so we can just do that. Once we have applied it, we can see the JupyterHub pods being created. Let's see. Oh, they're ready; we should have JupyterHub deployed now. Now let's try and use it. We have a route available. Let's click on it and log in.
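For reference, a minimal sketch of an OLM Subscription for installing the Open Data Hub community operator cluster-wide; the channel and catalog source shown here are assumptions and should be checked against the actual operator catalog.

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: opendatahub-operator
  namespace: openshift-operators        # cluster-wide install target for OLM
spec:
  name: opendatahub-operator            # package name in the operator catalog
  channel: stable                       # assumed channel name
  source: community-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
```

Merging a file like this into the apps repo is what lets Argo CD install the operator, instead of a cluster admin clicking install in the console.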
JupyterHub is an application that we can use to spawn a Jupyter notebook server for us. Let's click on the minimal Python image and start a server. JupyterHub users don't normally have access to the OpenShift console, but we do, so we are going to see what's happening under the hood. We can see a pod being created, and we also have a PVC available here; the PVC, the persistent volume claim, is where all the persistent data will be stored. Let's see. The pod is still creating, and it should be available now. Awesome, we have a JupyterLab environment available. We can create a new notebook and see if it works. Let's try running a simple command: let's print hello world and see if it works. Awesome, so this works. Now let's go back to the OpenShift console. You can see everything's running, and that's about it for this part of the demo. This is how we would deploy an application the Operate First way.

Demo three: performing SRE functions on the Operate First community cloud. In this demo we see how Operate First ops team members perform SRE functions. Steps: one, set up alerting rules; two, break the application; three, check alerts; four, fix the application.

Hello everyone. In this demo we will run through a scenario of how to handle service disruptions at Operate First. We'll be working on the project that Anand has set up for us, the demo-jupyterhub namespace, where he gave me access during his project onboarding demo. Before we introduce a problem with this service, we first need to have our alerting in place. For this particular scenario we'll be alerting on when a user runs out of storage for their notebook. Thanks to Tom, who during cluster onboarding enabled what's called user workload monitoring for us. Traditionally, for alerting we would have to deploy a separate Prometheus instance plus Alertmanager, with alerting and rules files, all to manage and maintain. With user workload monitoring, the weight of maintaining such resources is lifted from the project admin. So now, if I want to alert on something regarding my service, I can just create what's called a PrometheusRule and leverage the Prometheus that is already available to the entire cluster. To do this we will navigate to PrometheusRules: go to the API Explorer, type PrometheusRule into the search, click PrometheusRule instances, and create a PrometheusRule. I already have a rule prepared for this demo, so I'm going to just paste it in here. Essentially, this rule will fire when a PVC in this namespace is over 50% capacity (a sketch of such a rule follows below). Why this alert? Because when a JupyterHub user pod fills up its PVC, the user will be unable to restart their server, so this is definitely a useful alert to have. Let's go ahead and create this rule, and we can see it now listed under PrometheusRules.

Okay, so where do we want to send this alert? Well, when Tom onboarded this cluster, he added an alert receiver that forwards all of our alerts to a repository in our org called alerts. We do this for all our cluster alerts. Having them available in this repository keeps our alerts open to the public and gives further visibility into the state of the cluster and the Operate First cloud as a whole. Awesome, so now our alert will go to this repo. Next we will pretend to be a data scientist and work in our notebook server. I have a notebook server already spun up, following the same steps as Anand in the last demo. Let's open up a terminal. Now I'm going to fill up our storage with some large files by running this command.
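For reference, a minimal sketch of a PrometheusRule along the lines described above; the names, threshold, and runbook URL are placeholders, and the expression uses the standard kubelet volume metrics.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: jupyterhub-pvc-usage            # hypothetical name
  namespace: demo-jupyterhub
spec:
  groups:
    - name: jupyterhub-storage
      rules:
        - alert: JupyterHubPVCFillingUp
          # Fire when a PVC in this namespace is more than 50% full
          expr: |
            kubelet_volume_stats_used_bytes{namespace="demo-jupyterhub"}
              / kubelet_volume_stats_capacity_bytes{namespace="demo-jupyterhub"} > 0.5
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "PVC {{ $labels.persistentvolumeclaim }} is over 50% full"
            runbook_url: https://example.com/runbooks/pvc-full   # placeholder URL
```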
We expect this command to use up more than 50% of our storage and thus trigger the alert. We'll give it a second to execute. We can see that the file has been created. Awesome, so now our storage has this file taking up most of our space. Now if we go to the alerts repo, we can give it a second for the alert to fire. Okay, cool, our alert has shown up and it was sent to this alerts repository.

Now, this is a pretty tame alert, but imagine if it were something more serious, for example an alert about a production JupyterHub instance going down. In such an event we would have to start an incident. So let's go ahead and pretend that this alert is very serious and start an incident to resolve it. We also manage our incidents on GitHub, once again to keep them open to the public. Our incidents go in our SRE repository within the same org. As you can see here, by filtering on the incidents label I can see our past incidents. To create a new incident we go to new issues, and we have a template that we can follow by clicking get started. I already have an example incident form like this filled out, if I switch to the other tab. As you can see, I can link the alert that has been firing as additional information under relevant links. There's other information as well: which cluster is affected, the users that are affected, and a basic description of the incident. I'll go ahead and submit this incident, and then we can update the incident thread with comments on the steps we take and the information we collect. We can also comment on how the incident is resolved, and since this is on GitHub, someone searching for a similar issue can benefit from our gained knowledge.

Alright, so now let's look at the alert itself. The way we have defined our rule, we can see that it will actually mention the user that is responsible for this PVC. In this case that user is me, so this is my GitHub username, and indeed, if I go to the notifications, I can see that I was notified of this alert, but it's not highlighted because I already have the thread open. Cool. We can also see that there is a runbook linked to this alert. If we follow this runbook, we can see that there is a section for insufficient disk space for a notebook pod, and it lists the steps I need to take to resolve the problem; in this case it's just to increase the PVC size. So let's go ahead and do that. I'm going to go back to the OpenShift cluster, go to persistent volume claims, find my PVC, and simply go to the YAML and update how much space this PVC has. In this case I'll raise it from 2 to 6. Awesome. And if I go back to the alerts repo, refresh, click this alert, and give it a second to update, you can see that the alert has been automatically resolved. We can navigate back to the incident that we created, comment that the alert has now been resolved, and also mention that an RCA, a root cause analysis, will follow soon if one is needed. And I can go ahead and close out this incident. And that's it; this is how we would go about resolving an incident in our Operate First cluster. I hope you found that helpful. Thank you.

Alright, everybody, thank you. It's always impressive to me every time I get to watch that process. So let's see, we've only got a few minutes left until the top of the hour.
I don't want to keep anybody over, because, you know, that's how it goes in this clock-driven world. But do we have any questions that came up in chat anywhere, or anything we want to field from all the little faces on my screen? If you are an attendee and you want to put any questions into the chat, feel free; there's also a separate Q&A section for you. Questions? Thank you. I love your virtual background there, myself. Actually, it's a live background, it's animated; AI is capable of doing these things.

We don't have any questions flowing in. I mean, we have some questions prepared, so to say, on the SRE and GitHub topics. So maybe we want to call out the call-to-action boxes on the Operate First website, where we have tasks prepared for all future community members to start tackling. That's one option. Yeah, let me go ahead: I'll ask a question out loud and then we can see who wants to jump in with an answer. The first one is: what other applications are deployed in the Operate First environment, and in particular in this community cloud? Because an Operate First environment is defined as much by the method of connecting the developers and the operations together in an open environment, and this is a particular reference implementation, so to speak. So what other applications do we have deployed in the Operate First community cloud? I mean, can anyone join in? And how would you get started? Who can rattle off those applications off the top of their head?

So the answer is a bit twofold. One is the applications that we deploy, we meaning the people who manage these clusters, so some of the people on this call and a few others. One of those applications, for example, is JupyterHub, and there are a lot of others, a lot of applications that are offered as part of Open Data Hub, and then there are some that a lot of you may have heard of, like Kafka, and we manage these ourselves. But then there's also a second part to the answer, which is that people who get onboarded also deploy their own applications and manage them themselves. Once they have permissions, the way Anand showed, which anybody can do because it's done through GitHub, you can continue to manage your applications via Git using GitOps, or you can also just deploy things live. So those are the user applications, managed by the users themselves.

And I think an important connective piece here is that, in terms of this kind of open sourcing of operations, rather than going backwards and trying to take code that's existed in the past, we're moving forward from here. In a lot of ways, the future of how SRE functions, DevOps, and machine learning all come together is within this AIOps environment. And so among the tools that we have out there are data science tools.
And there are learning pathways being put up, and access is already available for people to come and start to use the data science tools. That includes people who might come from a different background than a regular data science background but who might be doing, say, log analysis and other technology work. When it comes to the other managed services that we offer and provide to users, we can also mention pipelines, Kubeflow, and services like that. Basically our initial support was aimed at the data science folks, but we have other services that some of our users provide, the things that Marcel mentioned during his presentation, like the Quarkus services, Apicurio, or Pulp.

And we got another question: is this service free? Yeah, and that's a really good point, because it's a good segue from what Tom was just saying as well. I mean, this is an open source project, right, and the services are available to users. When we say users, of course that could be anybody, but in particular the core users we're looking for are the people who bring applications, like Tom mentioned: if you've got an open source project and you want to come run it in a live production cloud and have an opportunity to get access to all that data, that exhaust from before, that's how you can do it here. So it's freely available, but the cost is that you've got to come in, bring your code, and do the work with us to get it into the environment. Right now the environment is a Kubernetes-based environment, you saw the pieces of it, and having an operator, or writing an operator to plug in, is a good way to go in the future. There's no one way to do an Operate First environment; you'll see us working with other groups to bring up other types of environments, either in our community cloud or in their community clouds, as we share these blueprints and so forth between systems.

Did you already mention that people can, at least for the Jupyter service, just come in and log in using their GitHub account, and they will be able to spawn their own servers? That also applies to many other services; most of the services are publicly available. And if a service is not publicly available for whatever reason, the only required onboarding is to ask for access. That's the only barrier there is for certain services; the rest is publicly available. Right. And what I mentioned before is that we've got all these services and capabilities, but the user experience, depending on who you are, how you're going to get there, and understanding what to do, is something that we're working on with the website right now, so these pathways are going to continue to evolve. In the meantime there are a lot of videos on our YouTube channel that explain how to use the different tools and how to get access to them as well, so in terms of a DIY learning environment, it's ready to go right now, and we're working on helping you with your learning as well.
So regarding the "is it free to use" question, I would say that yes, it's free to use, but you also bring your usage to the table. By using it you bring some real value to the services that are being offered, because we essentially want to try them out, or provide them to real users, and then create some real usage patterns and some data by using those services. So there are many ways to be part of this community: either you use the data science services or the Kafka services that are available, and by using them you contribute that usage; or you go to the Operate First website, join the SRE pathway, and help maintain and run these services. So it's also free in terms of "I want to be an aspiring SRE person," and, exactly as Humair and Anand just showed, in your browser, in your Git console, in your terminal, you can just spin up your terminals or Jupyter notebooks and really do the SRE work there. Also, yes, it's free as a learning resource.

So if you want to show how to get to the Operate First website and click through to those GitHub issues that are available for starters, for beginners, we could do that. That could be good; I was actually just looking for something we could do to wrap up, since we're getting close to the top of our hour. Does anybody have that handy to bring in? Yeah, so we're operate-first.cloud, and you can find all the things you need on there. We have a community mailing list and web forum, basically for discussions among users and contributors. Let me see, I'll just quickly share the website. Yeah, I think the chat is not recorded, because people will also watch this later, not live, and then at least they see the website and know what to do. Yeah.

So basically these are the two ways that Tom and Anand just showed: if you want to do the operations bits and pieces, you click on the right button; basically you take the right pill and you're off to the SRE races. And if you want to deploy your community service, like we just showed with JupyterHub, you click on the left button as an open source developer, and you'll be guided on how to host and, yeah, supply your project to that community. Okay, and I think that's probably a pretty good spot for us to wrap it up. Do we have any more questions? No, I don't see any. Okay, great.

All right, well, thank you everybody for joining us today. We appreciate your time and effort. Come join us for a journey where we're just beginning to tell the story to everybody out there; any input you have is welcome, as is any interest in bringing your open source projects. Come join us on the mailing list or come find us in the Slack channel, and you can find all of us on social media and in various other places. And I think that's about it. Marcy, do you have anything else we need to finish up with? Otherwise I think we're all done. Great. Yeah, I think we're good. Thank you so much to the whole Red Hat team for your time today, and thank you everyone for joining us. Just as a quick reminder, this recording will be on the Linux Foundation's YouTube page later today. We hope to see you back here for future webinars. Thank you so much again.