Good morning, good afternoon, good evening, wherever you're hailing from. Welcome to a special KubeCon EU office hours. This office hour we are going to be talking about cluster management, or more specifically open cluster management, joined by my usual team of ACM folks and OCM folks. And I'm very happy to have Josh Berkus here, who has organized this. So Josh, please take it away.

Thank you, Chris. So this morning, or afternoon, or evening, depending where you're located, we have three of the team from the open cluster management project. We've got another Josh here, just to add to your confusion, we've got Scott, and we have Michael. So, do you want to introduce yourselves? I'll start with Josh.

Absolutely. So my name is Joshua Packer. I'm one of the architects who work on Advanced Cluster Management for Kubernetes. The pillars, or areas of expertise, I guess you would say, or at least a focus for myself, are in the application space, so that's GitOps, Helm, and object storage as delivery mechanisms, as well as cluster lifecycle: rolling out your cloud-provided OpenShift clusters and so on, as well as importing those clusters, so expanding your fleet.

And Scott?

Hey, yeah, I'm a product manager. So I take all the goodness that Michael and Josh and the team are building, look at the opportunities in upstream, and try to be strategic about, you know, this project and that project, and how we build that into a seamless delivery with a Red Hat brand on it.
So I'm a product manager based in Austin, Texas, and I'm really excited about Josh's sweater, and I want to hear more about that during the call today, because that is dope.

And Michael?

Hi everyone, I'm Michael Elder. I also help lead and drive what we do around Advanced Cluster Management, and open cluster management in the community as well. I'm just excited to come and talk about anything from making it easier to manage a fleet, making it easier to deliver configuration and policy across the fleet, making it easier to deliver and manage applications across the fleet, and making it easier to manage and understand the health of the clusters that are running across an entire fleet. So, excited to go through those use cases and provide an update, or answer questions as needed.

Many times I hear Michael say "easy" and "make life easy." You know, I think that's our goal always, right, with software: we're trying to make humans' lives easier by making the lives of computers harder. It is kind of a boring goal, but if our job is to take that problem space and just make it easy, you know, that's what we wake up and get paid to do: take the multi-cluster management challenge, make it easy, and allow you to get on with the bigger and better things in your life. So I just like the way you said that, Michael; it resonates with me.

So, this is an office hours, and for people who are new to this, let me explain how it works. The folks here are here to answer questions. We have a few things to talk about that we will go through here, but you are encouraged to interrupt at any time by asking questions in chat, and we will pick up your questions and ask them to the panel, so you can hear the answers out loud.
You can also, for this special KubeCon edition, ask questions in the KubeCon Red Hat channel on CNCF Slack, because that is the chat attached to the conference, and I will see the questions there and pick them up for the session. So with that, I have a few questions, but maybe y'all wanted to actually get started by talking a little bit about what cluster management is today, and about where OCM is today.

Sure, I'm happy to jump into that. In fact, let me share a picture, just so we all have something to look at. We talked about this a little bit as part of OpenShift Commons as well, so you may have already seen this picture, but it at least gives us an idea of what open cluster management is really about. This is a project that we are focused on growing as an upstream community project. We've got a couple of other vendors beyond Red Hat that are participating in it now, and really the focus is: as we have seen the growth of Kubernetes as a powerful and de facto standard for developing workloads on cloud, there is this sort of net-new generation of challenge, which is, okay, now
I've got clusters. Kubernetes is a great way to normalize how I think about the environment, and OpenShift provides me a great distribution of Kubernetes to use in a supported way, but how do I start to blend and understand multiple clusters that might be running on multiple clouds? And so open cluster management, as the name implies, is really about making it easy to drive that type of provisioning behavior. We can provision an OpenShift cluster across different clouds. We can also import clusters that have been provisioned separately or managed elsewhere, clusters that are actually imported from something like a managed EKS or managed AKS environment. We bring together a governance and compliance framework that is native to, or invented within, open cluster management, but we also reach out and integrate with compliance frameworks like Open Policy Agent: making it easy to distribute the Gatekeeper admission controller, making it easy to distribute and enforce Rego policies across a fleet. There are also other partner integrations, around Falco for example. So we're not just trying to invent another way of doing policy, but really making it simple, again, that word, to understand how to drive that across the fleet. One of the other aspects here is providing a common API for an inventory of clusters. Going back in the technology history, we had adopted Cluster Registry at some point in the past, and it provided some value, but we found we had additional use cases and growth, and ultimately we built a managed-cluster inventory API that lets us think about the inventory.
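For reference, an entry in that managed-cluster inventory is an ordinary custom resource on the hub. A minimal sketch, with a hypothetical cluster name and labels (exact fields can vary by release):

```yaml
# A cluster as it appears in the hub's inventory.
# The name and labels here are hypothetical examples; labels are
# also what placement rules later select on.
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: prod-tokyo          # hypothetical cluster name
  labels:
    cloud: AWS
    region: ap-northeast-1
spec:
  hubAcceptsClient: true    # hub accepts this cluster into the fleet
```

You can then list the fleet with ordinary kubectl commands against the hub, e.g. `kubectl get managedclusters`.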
Let's just think about role-based access control, right: a team or a user, what clusters can they see or interact with? And then let's just think about an API for placement rules. So PlacementRule is a Kubernetes kind; everything here is Kubernetes-native API, CRDs. And so we can think about how we place workload, or place policy, against a fleet, and that set of APIs can be leveraged by any operator that wants to become multi-cluster. In fact, we use that when we integrate Thanos, when we integrate Argo, and even OPA: we use those core APIs that are in open cluster management to say, I need a particular kind of operator deployed within a cluster, I need a particular configuration to be enforced for that operator, and I want to know whether I'm in compliance or not. There's a simple agent that runs as pods, and it's extensible. And then, you know, we bring together multiple parts of a broader set of projects, trying to create an integrated solution that understands fleet management. So I'll pause here and see if that helps answer some questions about what open cluster management is really all about.

Chris, are any questions coming in? I don't have the view of questions; Josh, maybe you want to highlight those.

Yeah, I actually have one question, but I didn't quite understand it, so I'm waiting; I've asked them to clarify.

I can ask a question. What has it been like going from working on Advanced Cluster Management at IBM to coming over to Red Hat and then open-sourcing this product? Like, how has that experience gone for y'all? Eye-opening?

I'm going to jump on that, because, you know, from a product perspective you're always thinking about the strategy in the market: how do you win, how do you bring in partners, what are the right relationships? But at the same time, our organization, through and through, has had this transparent and open culture within it.
And so, whether we were at IBM or here at Red Hat, it's always felt like, yeah, jump in this boat, you know, grab a paddle, let's all move in the same direction. I think one of the interesting takeaways, Chris, is that when you look at upstream, there are some games, right? There's politics: do you put emphasis in this project or that project, this one's gaining steam and this one's not? I don't understand that lexicon very well; that's just not the history that I've been in. So I think that's my learning curve: figuring out how Red Hat really embraces and supports communities, and moves in a direction to really encourage them to flourish. Things about CNCF, things about the Linux Foundation, other foundations that support the general cause of open source and community. So I think that's been the biggest learning experience for me. It doesn't feel like a harsh pivot, because even though we're coming from IBM, like I said, our work has always been very open, very progressive in terms of bringing on new features and function and talent. So I think we've always kind of had that at the heart of our organization.

Josh and Michael?

I'm going to add to that, from the development perspective, that it's also been a lot of work, and a little eye-opening, but overall, I guess, as a transition it's definitely been for the better, and it's helped improve our code in general and the way we work. Doing it in the open has been liberating, to be honest, and it's quite nice. But it also makes you think a little bit harder before you hit that push button, you know, making sure that I crossed all my t's and dotted my i's before it's out there for the big, great world to see. And so it wasn't just like we flipped the switch and we were done; it's been a transition, but the team's adapted really
well to it, and it really is a bit of a liberating feeling to do it completely in the open now, versus behind closed doors, which is maybe what I'll call what it was before.

So we actually have a few questions from the stream, one of which came from Slack: somebody picked up on you talking about the third-party components, the integration with Falco and that sort of thing. One of the things that one does with OCM is manage upgrades for things in the clusters. So can it also help you manage upgrades for those third-party components?

Sure. And I think I was able to pull up the Slack, just to see the question as written there as well. Maybe let's think about how we configure a third-party extension on a cluster. Typically, we're going to do that by deploying an operator, so setting up the OLM subscription, which will drive the configuration of that operator on the cluster. Or we might develop an application in open cluster management and deliver that application, which includes things like Helm charts. So when we think about upgrading it, what we're really now thinking about is: when do I push out the next level of configuration for that operator's subscription, or when do I push out the next version of that Helm chart?
So open cluster management will help you drive that next version of config, right, the next version of the Helm chart, or the next version of the OLM subscription, and then it's going to rely on either the operator or the Helm chart to properly handle its own upgrade behavior. So open cluster management doesn't have innate awareness of the inner workings of everything under the sun, but it is able to drive a desired state, a declarative state of configuration, to clusters that are under management, and then to validate whether those clusters are in a thumbs-up or thumbs-down state: right, was the operator configuration pushed correctly, does it report any status violations, does it have any compliance violations with a policy, those types of aspects. And I think, if it's helpful, I can pull up an example of what that looks like as well.

Sure. And what I want to add to that is just that, you know, it may not understand all of the different pieces, but it knows about them all, and you can keep track of those types of pieces as well.

Okay, so here, let's pick on this operator. This is an example operator that actually allows us to configure the update service, or the upgrade service, for OpenShift itself. And I'm on an early dev build, so you're going to see a couple of bugs pop up here and there, because this is pre-release code. Everything I'm showing you is in the publicly released versions of the project; you can go and run this stuff now, I just have a very early build right now. In any case, this particular configuration has a statement around the Operator Lifecycle Manager, or OLM, subscription to the operator, and it's expressing a desired channel, and it can also express a desired version.
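As a concrete sketch of what such a configuration looks like, here is an OLM Subscription that expresses a channel and lets upgrades flow automatically. The operator name, catalog source, and version are hypothetical placeholders:

```yaml
# OLM Subscription delivered by the hub to a managed cluster.
# OCM pushes this desired state; OLM on the cluster handles the
# actual install/upgrade mechanics.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator             # hypothetical operator
  namespace: openshift-operators
spec:
  name: example-operator
  channel: stable                    # desired channel
  source: example-catalog            # hypothetical catalog source
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic     # pick up new versions as the channel publishes them
  # startingCSV: example-operator.v1.2.3   # optionally pin a starting version
```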
So in this case, unless my eyes are missing it, there's not a specific version highlighted. If I go back, here is one that does have a specific version highlighted; so this one, I think, is like 7.5... 7.4.6. So it's got a starting CSV, and then a particular channel, and in this case it's set up to allow automatic upgrades, so as the operator channel provides a new version, it'll automatically pick up that next version.

By the way, this is a bit of a sidetrack, but since you've got the UI up right now, which says Advanced Cluster Management, I wanted you to just quickly go over the relationship between open cluster management, Advanced Cluster Management, and OpenShift, so that people understand the different names.

Absolutely, and thanks for calling it out. So open cluster management is the upstream community project; it's where we develop the technology that we then deliver in a supported product, which is where the Advanced Cluster Management name comes from. So open-cluster-management.io is the site, you can see it up here. If you want to connect with us on GitHub, it's the open-cluster-management organization, and within there the community repo, the enhancements repo, or the API repos are good starting places to see what's going on within this organization. Now, because of the way that we transitioned over, and we were in the process and flow of open-sourcing the technology, all the parts of what we think of as the product are open, with the exception of one component, which is RedisGraph; that has to do with licensing concerns and our ability to open that part up. But with that one exception, everything else is open source. There are other parts of the organization on GitHub that deal with things like our build process, supporting features for our mechanics of release and delivery, but they are not actually part of what a user needs
in order to run it. So everything there is open, and you can get started; each of the repos tries to provide specific information about how to build and how to leverage it. So, for example, the registration operator: its README is going to tell you how to stand it up and how to run it. A lot of these parts are also available on OperatorHub. So if I wanted to take any cluster, Kubernetes, OKD, OpenShift, and make it a hub, I can go and deploy the cluster manager operator for that particular cluster, and then I can import clusters by going to the cluster I want to import and deploying a klusterlet with some configuration. And at the end of the day, what that gives me is that now, on my hub, I can view that cluster under management; I can drill in and see some basic details about it: how many nodes, whether it's got compliance policies or violations, what applications are running on it. And this works as well... I think I have one of my other hubs up here; let's see. This one, actually, let me drill in here just to show you one that has more clusters attached to it. And so here I've got clusters that are running on Amazon and Google; these could be running on Azure; this could be managed EKS, managed AKS, etc. So all of those are also available.

Did you mention OKD, Michael? I know you've been playing around in that space. And that's actually a question that we've had from a couple of people in chat, which is: when are they going to be able to install this on OKD?
So you can, actually. One of the things that, as a community, we are focused on growing and improving right now is just making things more consumable. So, for instance, if you wanted to provision an OKD cluster, you can do that today; there's just not a lot of doc that explains exactly what you would modify. In this case, there's an API object called a ClusterImageSet, and you can take an OKD release image and drop it into the cluster image set, and then when you actually create a ClusterDeployment object, which is the API kind that triggers the provisioning of, in this case, an OKD cluster, it'll pick up that release image for OKD, and it'll actually provision OKD as your cluster on your infrastructure of choice, whether that's Amazon, Google, Azure, vSphere, bare metal, etc. So from that perspective, it's possible today. Like other areas, right, it's a young project, and we're looking for community members to get involved. We're looking for help in understanding things like that, where the community wants to do something that just hasn't been our primary focus on a day-to-day basis, so help there is always appreciated; in pointing out, hey, you guys aren't easy enough to consume here, or easy enough to understand in this particular area, so just providing feedback; and also in contributing: here's improved documentation, here's improved examples that can be contributed to the community. Those are things we very much welcome, and we want to have more involvement from the community.

Okay, and on that note, there is actually a bi-weekly community call, so I'll post that link in the thread that you've got in the KubeCon chat. It's also in the Zoom chat, and we'll get it to the Twitch as well. But that board is actually the proposed agenda.
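Circling back to the OKD provisioning flow described a moment ago, a rough sketch of the two resources involved. The image tag, names, and domain are hypothetical, and a real ClusterDeployment also references install-config, pull-secret, and credentials Secrets that are omitted here:

```yaml
# Register an OKD release image with the hub (Hive API).
apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
  name: okd-example
spec:
  releaseImage: quay.io/openshift/okd:example-tag   # hypothetical OKD release tag
---
# Provisioning is triggered by a ClusterDeployment that points at the image set.
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: my-okd-cluster        # hypothetical cluster name
  namespace: my-okd-cluster
spec:
  clusterName: my-okd-cluster
  baseDomain: example.com     # hypothetical base domain
  platform:
    aws:
      region: us-east-1       # infrastructure of choice
  provisioning:
    imageSetRef:
      name: okd-example       # picks up the OKD release image above
```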
That's how we manage our conversations, which are public and open. All of the prior sessions are available on YouTube as well, so you can go back and watch the history of those sessions if you want to get into more detail on particular topics.

Okay. Where is the best place for people to ask questions and give feedback if they can't make that meeting because of time zones?

Right, we maintain a Slack channel on the Kubernetes Slack team, which is #open-cluster-mgmt, and let me see if I can grab a link to it here, and I'll drop it in the chat.

There. Thank you.

And then there's also our community channel for open cluster management; I dropped it into the other Slack, so you can connect with us there. That is the easiest place, I think; as a culture, our team is very Slack-heavy, less so maybe email and other mediums. And then if you want to actually submit a concrete proposal, to either add to or grow or change something, we have the enhancements repo, which has a place where you can create a pull request and actually describe your use case or scenario, and then come to the community meeting and present that, and we can talk through it as well. And I'll pull this up again just to put it on the screen, to show what an environment looks like that has a broader array of clusters under management. So in this case you can see we've got Azure, AWS, IBM Cloud, and some of these, the one that you can see is ARO, that's the Azure Red Hat OpenShift.
So that is a managed OpenShift that was provisioned and imported. There is Amazon EKS, provisioned and imported; GKE, provisioned and imported; the IBM Red Hat OpenShift Kubernetes Service, or ROKS; and then Red Hat OpenShift on AWS. So, lots of fun acronyms, but if it runs OpenShift, or if it runs a managed Kube, then you can interact with it through an open cluster management hub.

Are there any sharp edges, per se, around running it on a vanilla or other Kubernetes distro?

So, I think, like any good community project that's in a growth phase, there are always sharp edges, right? They call it the bleeding edge for a reason. I think probably the biggest sharp edge right now is that, in order to deploy the complete system, you're still kind of picking up the individual operators and parts and deploying those parts. That's something that, as a product, we focus on: trying to pull together all the things and make it easy. As a community, though, we still have some questions in our mind, like: do you really have a single person who wants to do all of cluster provisioning and management and all of application delivery? Or do you really have a user who is more platform-operator-centric, and a second user who is more application-delivery-centric? Right, I don't know a single person who would leverage all the parts of open cluster management, but as an organization, what we're trying to do is present a solution that an organization as a whole can consume, so they have a consistent way, across where all of these roles intersect, to think about lifecycle management of the fleet.

Cool. You see that question? So from the channel, somebody's asking about multiple clusters on different architectures. That is, you have, you know, one x86 cluster and another Arm cluster, and you have different workloads, but you might want a unified security policy. Maybe, Scott, you want to take that one?
Very open to that, in terms of open cluster management being a central management hub to see the world and manage it. I don't think we've done any testing on Arm yet. We are aggressively moving in the direction of Power and Z, so the multi-arch support for Power and Z is being baked in as we speak, and then we also want to be able to run the hub on Power and Z: so, you know, importing the fleet, making sure we can manage that. And I'd love the idea of using Arm. I've actually seen our team doing some scale testing with k3s, to shout out to another project with a lot of movement behind it. So, you know, all of that stuff kind of falls into this umbrella of CNCF: the pattern that we are looking for is conformance to an API spec and agreement to kind of a CNCF structure in terms of the way that they're building out their code, and we feel like we can probably play pretty well within that space. That's a really broad definition of what you can support, whether that's, like you said, Arm, Josh, or whether that's, you know, a large set of customers that are invested in Power and Z. So there's a lot of wiggle room in that space. When it comes to the product side, we get a bit more specific, but on the open cluster management side, why not? I mean, I want us to be touching all that stuff; I want us to be in every type of arch we can be in.

Yeah, I was thinking Arm mostly because I have stuff running on Kubernetes on Arm.

Likewise. Yeah, I can see it, it's on the shelf behind you.

Right, yeah, yeah, that's what that wire is. So actually, I used to, in fact, have an Arm v8 server here, loaned to me from the Arm spec coalition, that we used to build a Fedora Atomic for Arm.

Nice, that's what you do on Fridays, right?

Yeah. It was nice to get it out of my office, though, because it was noisy.

Yeah, it's definitely not designed to go into a home office. Yeah. And it was funny:
they loaned me the server, but of course, the Arm spec coalition, one thing they don't have is storage, so it didn't have any storage built in, and I had to do a lot of messing around in order to attach storage to it.

Interesting. Fun weekend projects, right? Right.

Okay, so I actually want to ask about a couple of details. This is just sort of random, but I realize I don't actually know these things: klusterlet and cluster manager. What do these two components do? How do they relate to each other? Maybe, Josh, do you want to take us through that answer?

Sure, absolutely. So klusterlet is the general term we use for the binary, or the images, that we run on our managed clusters, so those would be the ones that you deploy, like Michael showed, or you import, like Michael showed. So this is sort of the brains, the agent that does all the work: it stands up, does the initial handshakes, makes sure we're secure, exchanges our certificates, etc., gets approval to join the hub, and then, once it does, brings in what we call our klusterlet add-ons. And so add-ons are things like the application subscription that we talked about, which is in the community today, as well as the policy side, so GRC and compliance has a number of these additional add-ons, each of those being containers that live side by side with the klusterlet and give us those additional capabilities. So there's an IAM policy klusterlet add-on, there's a compliance policy klusterlet add-on.
So the klusterlet is, for all intents and purposes, the bag of automation, the binary, that runs on the managed cluster and allows us to control and interact with it. And then you have the cluster manager side, which we often refer to as the hub side, where the UIs are that you'll find when you start with open cluster management, and where you build out all of your initial control-plane pieces. And so that side is what sits there listening for the klusterlets to phone home, validates them, and then approves their sort of onboarding into control from the cluster manager. And then, once they've been approved and brought under control, the cluster manager looks at, okay, these are the add-ons that are supposed to go to this specific managed cluster, and allows those details to go down, and the agent on that managed cluster expands with those plugins, and, boom, you're under management. And, you know, one of the things we're looking for is to expand those klusterlet add-ons as well. So eventually we don't want it to just be ACM pieces that ACM brought, but also plugins from third parties, which can be enabled or disabled, which a user can choose when they want to use them. And we've got the policy controller displayed right here.

So, Josh, the add-on framework is extensible.
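The per-cluster enable/disable behavior just described is expressed, in the supported product, as a KlusterletAddonConfig resource on the hub. A sketch, with a hypothetical cluster name (the exact set of add-on fields varies by release):

```yaml
# Per-cluster add-on toggles; lives on the hub in the
# managed cluster's namespace.
apiVersion: agent.open-cluster-management.io/v1
kind: KlusterletAddonConfig
metadata:
  name: prod-tokyo           # hypothetical managed-cluster name
  namespace: prod-tokyo
spec:
  applicationManager:
    enabled: true            # application subscription add-on
  policyController:
    enabled: true            # GRC/compliance policies
  certPolicyController:
    enabled: true
  iamPolicyController:
    enabled: false           # add-ons can be switched off per cluster
```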
I think Michael touched on that just briefly, but yeah, that was what I was getting at. Talk me through, like, how would someone go start to contribute an add-on in that space?

Right. So we have a number of these, and they're all in the public domain, and so you could, literally, as simply as fork one of those and begin to modify it. So, you know, we have them in the application domain, so you can fork that and begin working there. We have them in the compliance and security space: you see the cert policy controller, the IAM policy controller, the policy controller, all listed here, and so you can fork those and start to modify and expand them. Or, you know, you can completely start from scratch: we have the layouts available, and so you could start to build one. So literally, you provide a configuration on, we'll say, the north side of the hub, and once that's provisioned, the klusterlet, being on the managed cluster, will see it and pull those in. And so, as you see with the true and false, you can turn them on and off as needed for the endpoints as well. So we've got a bunch of different sorts of spaces where you can fork a project and start to work on one, or, if you have something completely new, you can use one of these as a base and start that, or reach out in the community, and we'd be happy to work with you as well.

I have some follow-up questions, but the audience questions are more important, so let's take the audience questions. One of them is about using OCM in a disconnected environment; that is, specifically, where you're managing clusters, all of which are in isolated data centers.

Yeah, that's a really common use case, Josh, whether they're purely disconnected or, as we'll see more often, a hybrid. You know, they have an on-prem hub, or they have a reason why they need their storage locally: data residency, geopolitical reasons, latency reasons. But yet they want to start to
take advantage of the cloud, and they want to start to take advantage of some cost savings, and some benefits of, you know, positioning workload closer to a region where it needs to run. In the edge space, you kind of see the reverse of that in some ways: they want to take advantage of ROSA as a central management point, where they want to run their hub in some particular locale or data center, but then they're going to have a bunch of edge devices and widgets running, you know, more in the on-prem space, disconnected, on a boat, or a kiosk, or something. So you get a mix of both directions of where they want the cloud to be, and how they want to leverage the cloud. It's really not up to us to dictate; we want to be able to play in any of those hybrid scenarios, and we fully support the disconnected on-prem case. We have, you know, a lot of interest in the community around making sure that is a stable, supported, functional path, and that will always continue to be a lot of our bread and butter. A lot of our background comes from that private cloud space, so we understand the constraints of air gap, those constraints of, you know, data residency, and minimizing the exposure and the risk around that.

It's kind of refreshing, actually. Okay, so another sort of different question. I'm going to paraphrase Neels a little bit, but: so, you know, say I've got Kubernetes deployed, and I'm now starting to run some stuff in production, etc. What are some things that, in your opinion, would trigger my really wanting to look at deploying OCM? Right, so: cluster size, having a certain number of separate clusters, having, you know, certain business requirements. From your experience working with users, what are the points at which you see users saying, hey, I need OCM?

So I'll jump in on this one.
I think one of the first use cases that we come across is that we find a team providing access to Kubernetes, or OpenShift, as part of a bigger organization, and they're looking for a way to simplify how a consumer, so that might be a developer, a tester or QE engineer, or someone responsible for running a production application, gets access to a cluster. And so, within the provisioning behavior, we can make it easy to drive a self-service workflow, where a given user, who's maybe not a Kubernetes expert, not an OpenShift expert, can click through a form-based wizard, and now they've got a running cluster that is provisioned and available for their use. So if you've got a situation where you're managing access to clusters, that's definitely a strong trigger for where we see open cluster management adding value.

Then, if I've got one cluster that needs a certain set of policies enforced, because I'm following a certain set of technical security standards, and those security standards say I need these types of technical controls or configurations on my cluster, the policy framework that's in open cluster management lets you enforce or audit those types of technical controls. So even if you only have one cluster running in a public cloud, or one cluster running in vSphere, and your security team says, look, every two weeks I want you to send me a doc that says you validated all of these technical controls, you can do that by hand, or you can use the framework that I showed earlier with policies, which lets you define those as YAML resources that you control in Git, and then apply those resources through a GitOps flow. And now your dashboard is continuously validating those rules, so you can just send them the link to that dashboard and say: when you want to see the latest, go look at the dashboard, right?
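A policy of the kind just described is itself a YAML resource that can live in Git. A minimal sketch: this example audits (remediationAction: inform) that a hypothetical namespace carries a hypothetical compliance label; a PlacementRule and PlacementBinding (not shown) decide which clusters it applies to:

```yaml
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: require-compliance-label     # hypothetical policy name
  namespace: policies
spec:
  remediationAction: inform          # audit only; "enforce" would remediate
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: require-compliance-label
        spec:
          remediationAction: inform
          severity: medium
          object-templates:
            - complianceType: musthave   # the object below must exist as specified
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  name: payments              # hypothetical namespace
                  labels:
                    compliance: audited       # hypothetical required label
```

Compliance status for every bound cluster then rolls up to the hub dashboard mentioned above.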
So if you've got a compliance scenario, that's also a trigger. And then, if you're developing an application that has components that run on different clusters, or that itself needs to be more globally available, the fact that we can distribute the app and manage its placement dynamically, and deal with that problem, is also a trigger. If I introduce a new cluster and I'm starting to treat clusters as more disposable, provisioning them as needed, putting applications on them, maybe tearing them down more dynamically (this would be an example where I'm using stateless applications), that's also kind of a trigger. So: teams that want to provide access to clusters and make that process easier; teams that need to conform to a certain set of security constraints or audit controls; and teams that want to deliver applications across the fleet. Within the first category, there's also the notion of people who are responsible for cluster health. The fact that we can syndicate all of the health information from all clusters to the hub and render that in a dashboard (in fact, maybe I'll pull it up and show it while I'm chatting) is also a pretty powerful capability that simplifies life for that platform operator who's responsible for that type of use case.

Yeah, actually, I want to follow up with a question around my own personal use cases, because one of the things I'm very interested in is application placement. Can I use OCM to do something like say, hey, I have this database application, and I only want it placed on clusters that have a certain class of storage?

There are placement rules for a lot of that. The way that you would do it today: there's not a condition that knows about certain classes of storage specifically, but it's very label-centric. So for instance, I could have a label. Here's a cluster that's provisioned in Tokyo on AWS. I could add a label here that might say storageclass=gp2, which happens to be the storage class on that cluster, and then certainly, when I have my application, the placement rule can say I need that particular condition to be met. I'm trying to remember if I've got my apps running on this cluster. Let's see here... nope. So let me go find another one; I don't know if I have an app cluster up right now in my dev environments. But for that placement rule definition, I'll pull up an example in GitHub that you can take a look at and even follow along with at home. If I look here, this is a definition of an app, and one of the objects here is a placement rule. The placement rule has a set of match labels, so certainly I could have a placement rule that said storageclass: gp2, and when I apply this resource to a hub that's managing clusters, it will dynamically select any cluster that has that storageclass=gp2 label. Or I could use a match expression and say: any cluster that has storage class gp2 or io1, as an example. The sky's the limit there. This is Josh's neck of the woods, when he shows off his GitOps capabilities and maneuvers applications across the fleet, blue/green scenarios, all the enterprise controls for how you deliver to prod versus how you deliver to dev, all within that matching-label space that Michael is showing right now. That was your cue, Josh, take it away.
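The label-based selection Michael walks through could look roughly like this PlacementRule, assuming the `apps.open-cluster-management.io/v1` API; the label key `storageclass` and the resource names are illustrative stand-ins for whatever labels are on your clusters:

```yaml
# Sketch of a label-driven placement rule (illustrative names).
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: db-needs-fast-storage
  namespace: my-app
spec:
  clusterSelector:
    matchExpressions:
      # Select any managed cluster labeled with either storage class
      - key: storageclass
        operator: In
        values:
          - gp2
          - io1
```

An application subscription then points at this rule via a `placementRef`, and the hub continuously re-evaluates the match, so labeling or relabeling a cluster is enough to move the app.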
I was going to say... well, yeah, GitOps was going to be one of my questions, because obviously we've shown off a bunch of the GUI stuff, and I know there are admins who like graphical interfaces, but I'm not a graphical-interface person; I'm a "check something into a Git repo" person.

Yeah, no, absolutely. And as Scott said, with labels the sky's the limit, which is pretty much the cool piece of it: you can slice and dice things up in an infinite number of ways, which means you can make it meet your needs regardless of what they are. Michael demonstrated very quickly how you can do it for storage; we definitely do it for pillars like development, QE for test, and production. There are also automatic labels generated on the systems as you import them, so you get things like region for AWS and Azure, etc., so that you can choose where you want the application, and all of this is in the OCM placement rules. You can also start to play with something we have called clusterReplicas on a placement rule: I can have a label that matches two clusters but set clusterReplicas to one, and the placement will keep the app running in just a single place, if you have storage pieces you need to move or a need for a single instance. We were also discussing using this in a bursting scenario, where you have a placement rule in which you actually define the two cluster names you want, one on premises and one in the cloud, and you set clusterReplicas to one, so it's going to use the first one in the list, which would be on-prem. And then I see: oh, I'm coming up on Black Friday.
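The bursting setup Josh sketches here might be expressed roughly like this, again assuming the `apps.open-cluster-management.io/v1` PlacementRule API; the cluster names are purely illustrative:

```yaml
# Sketch of a burst-capable placement rule (illustrative names).
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: burst-to-cloud
  namespace: my-app
spec:
  # Two candidate clusters, listed in priority order
  clusters:
    - name: on-prem-cluster
    - name: cloud-cluster
  # One placement replica: the first listed cluster gets the app.
  # Flipping this to 2 (by hand, or via automation on a timer)
  # bursts the application out to the second cluster as well.
  clusterReplicas: 1
```

Changing a single field in Git is then the whole "burst" action, with the pre- and post-hooks described below handling anything external like the load balancer.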
I know I'm going to have a lot of people hitting my application, so I make a modification the night before, or I set up some automation on a timer, that flips the replica count from one to two, and the system will automatically burst the application out to that space, out into the cloud solution, so you have it running both internally and externally. And then we get into things like Ansible hooks, where you can control (I'm not going to go too deep down the rabbit hole) external pieces like a load balancer, so that the inbound traffic for your application knows it's not just pointing to the local replica ports, but now also needs to point to this cloud provider. And you can have that change happen before or after; there are all kinds of different ways to slice it.

Yeah, I'm actually very interested in this, but I'm not going to get too far into it, because, you know, my thing is stateful services on Kubernetes. And obviously with stateful services you have a lot of potentially complicated requirements, like: for a particular stateful service, you want at any given time one writer origin running on one of your clusters, you want all clusters in a certain class to each have a mirror of that, and if the writer origin has to move, you want to take the mirror down on that cluster. So I won't go into it here, but we could take it into the Slack channel.

Absolutely. We are doing a bunch of work with two different storage teams to do just that type of work with stateful applications in peering clusters, or peering groups, using ACM as a catalyst (and OCM will have the capability as well, since everything is developed upstream first) to span those peer groups across clusters. So we definitely should take that conversation to the Slack channel, so others will be privy to it as well.

Yeah. We got another question from chat; you want to read that out, Chris? Yeah: how easy or hard is it to use OCM/ACM to migrate live workloads to a new cluster or region without major disruption? It's a good question. I can start, Michael, if you want.

All right, so I'm going to key myself back to the Ansible integrations I mentioned for the external pieces. It's always possible to be running a load balancer (and there are operator controls for load balancers), so all traffic comes into one cluster and then can be expanded out to spray between the two. But to get back to how simple it can be: I have cluster A, which has a label that says "application one." I add that label to my second cluster, the one I want the application to go to. The placement rule has a watch going on all of the managed clusters in OCM, and it's going to say, OK, the app needs to go to this new position, so it writes out what we call a new decision. The application model, the open-source subscription, reads that new decision and says, OK, I need to go to this new cluster now. The one thing it checks before it goes to that new cluster is: is there any pre-hook work I need to do? That can be something like creating a ticket or modifying a load balancer, any kind of automated setup; pretty much anything available in Ansible Galaxy can be executed there, and all of this can be done with the OCM piece. So it runs those pre steps, which might include updating the load balancer, although if you want zero outage, the load-balancer change will likely be in the post hook. So the first thing the placement does is maybe open a ticket, or send a message to the Slack channel, saying, hey, we're bursting over to cluster number two, and this is the list of clusters we're using. The subscription will then apply the application there, and once the application is up and running and all the status is coming back as a go, it runs the post hook, which updates the load balancer to say: this is the new additional route that I can use. Traffic going into that external load balancer then gets sprayed to both clusters. And after some time, if you find your traffic is dropping, you can go the opposite way as well: you remove the label, which reverses all of those pieces, so the app goes away, the load balancer gets updated, and you're back to that single cluster from before the burst. And I'm pretty sure we have that in a Twitch video, from AnsibleFest last year; I think that was October, if I remember correctly.

There's a cool project in that space, just to dovetail on the storage part. There's an upstream project called Scribe, which we're very keen on and looking at as a way to do that replication of persistent data from cluster A to cluster B. So in that migration scenario, or in that bursting scenario, Josh described ways you can handle off-cluster things like the Ansible hooks that are talking to F5 and ServiceNow and so on. But also look at the storage part of it: it's a very common concern how you're going to replicate the storage of a cloud-native-architected app that still uses CockroachDB, or something on the back end that needs to be copied over. Submariner is another tool in that space that we've actually already started to include as an operator. Looking at those tools, and at that part of the problem space, always gives us a new opportunity to ask: how can we make that job easier? How can we make the life of the central ops team smoother, with less friction, as they approach these environments?

Awesome. So there's a question in chat here: "Right now we deploy OpenShift GitOps (Argo CD) on a cluster and manage application placement across dev, QA, and prod (that kind of three-stage environment) from that one install. It sounds like advanced application lifecycle management can complement Argo CD, and is not a replacement for a GitOps flow. What is your opinion?" Josh, if you want to walk through what we're working on in that space, I think that'd be great.

All right, yeah, absolutely. We look at them as complementary, not competitive; they work together. We didn't touch on it, but in the add-ons (I don't know if it was visible when Scott had it up, so maybe I'll try a screen share here; we'll see how that goes) we actually have integration today for working with Argo. So I'm going to click over. We see this visual web terminal, but this is all visible from the CLI today. We talked about the klusterlet add-on configs that come with each of our clusters, the ones we either deploy or import, and if you look at which add-ons we have, we have the Argo support today. If you flip the bit from false to true on an import or deployment, then that imported cluster becomes available to any Argo that's running on the same cluster as the hub. So that's sort of the step-one integration.

Coming up (we're just in the process; I think in about two weeks in the upstream we'll be committing it) we're adding the capability to do the same kind of import, but not just to one Argo: to multiple Argos, if you have multiple Argos running in different namespaces for different development teams, etc. What this does is populate the Argo cluster list that you would use as remote targets, so anything you provision and anything you import in OCM now becomes a target that you can leverage from Argo as well.

And then (this will be coming in time to the upstream, but is available maybe in the downstream first; I'm going to touch on it anyway because it's pretty cool stuff) on the ACM side we're also bringing the Argo integration as close as we can to our subscriptions, because both technologies do a very similar set of jobs, although in different spaces, with different reasons to use them. My point being, we're working toward embracing both: our goal is not to do one or the other, but to make them coexist and work as closely together as we can. So I'll just point out (and I can share the links to this as well) that this is actually a GitOps scenario where I have created an initial subscription called "infrastructure build-out," which is an OCM resource. That subscription points back to a Git repo called "fleet management," and in that repo I have a bunch of other subscriptions, as well as Argo Applications and Argo ApplicationSets. So that single starting point gives rise to a configuration of all of these applications, as well as these OCM policy pieces, such as installing (in this case it's OpenShift GitOps, but it could be the Argo operator) on a remote cluster. We've got the compliance stuff we talked about; we also have some security pieces we were talking about, such as making sure that etcd encryption is enabled. All of that from just that single ACM subscription. And, again pointing out how similar a lot of these pieces are, it could be triggered from a single Argo application as well. My point, though, is that we're building these to interoperate and to coexist in a sort of seamless way, with a similar visual experience. So regardless of whether I'm looking at an ACM-described app with a topology (topologies and whatnot we're bringing to the open source as well, and you can see topology views in the upstream code even today), we have the same views for Argo. You get that same look and feel; this one purposely has an error on it, just so we can show that you can go and look, and it gives you the same kind of readout OCM does as to where the problem is with the application. So again, very powerful, and, I guess, interconnected is the best way to say it. We're working not one against the other, but on how to bring them together, so that if you're using one or the other you're not locked in; you can move back and forth between the two.

That's awesome, very, very good. So a follow-up to that: a tooling cluster with Argo, an ACM hub, and OSSM with multi-cluster support, basically one CLI to rule them all, would make sense here, right? Yeah. Yep. OK, love it. Let's see some upstream activity then, right? I would love to see some enhancements come in to really provide that picture. Absolutely.
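The "make OCM-managed clusters available to Argo CD as targets" integration Josh describes has since taken shape upstream as a GitOpsCluster resource. A rough sketch, assuming the `apps.open-cluster-management.io/v1beta1` API; the namespaces and placement name are illustrative, and the exact API may differ from what was shown in the session:

```yaml
# Sketch: register clusters selected by an OCM placement
# into an Argo CD instance (illustrative names).
apiVersion: apps.open-cluster-management.io/v1beta1
kind: GitOpsCluster
metadata:
  name: argo-ocm-importer
  namespace: openshift-gitops
spec:
  # The Argo CD instance that should see the clusters as deploy targets
  argoServer:
    cluster: local-cluster
    argoNamespace: openshift-gitops
  # Which managed clusters to register, chosen by an OCM placement
  placementRef:
    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    name: all-openshift-clusters
```

With something like this in place, every cluster the placement selects shows up in Argo CD's cluster list automatically, which is the "anything you provision or import in OCM becomes an Argo target" behavior described above.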
Yeah, that would be amazing. So we've got about four minutes left. Anything you want to talk about, future plans or anything you haven't mentioned yet that you want people to know? Now's the time to get it out, because we're about to cut over to the OKD office hour here at the top of the hour.

I'll go first, but I know Josh and Michael have a lot to say too. I think one of the areas we're really keen on is getting better, stronger, faster, smaller, in a lot of ways: a smaller footprint out on the edge, providing scale capabilities for hundreds of thousands of things in your fleet that you need to keep an eye on. So those are the kinds of questions on my mind: what makes the most sense from a bundling or packaging standpoint? Josh and Michael demonstrated the value of the add-on framework, and yeah, you could add on 30 things, but maybe you don't need that in a very lightweight, high-scale type of environment. So what are the minimum pieces you need?
You probably need policy. You probably need some level of metrics, and even within those metrics you only need the most critical things: I don't need to be inundated with events that are in a happy state, just show me the critical stuff across the entire West Coast, for example. So in my mind it's: how do we make that easy job even easier by eliminating a lot of noise, when you get to the large-scale types of environments?

Awesome. Josh, Michael, where are your thoughts at?

So I think there are a lot of areas where we can still continue to make things easier. Josh talked about the fact that having a global load balancer in front of a fleet of applications is a very common thing, and there's work upstream around multi-cluster service import and export, which we can actually support through our Submariner integration in Open Cluster Management. I think there's work around tying into the application parts that are not strictly containerized yet; that's part of the reason we brought in Ansible, as a way to bridge into maybe traditional workloads still running in virtual machines, or parts of the IT process that are not code. Tickets still make up a huge aspect of delivering and tracking changes, so we can actually use that Ansible hook to drive the creation and management of some of those ticketing flows, so that where the process still needs that documentation, it's there, but the way the changes are made is still heavily automated. And then I think there's more work to do around visualization: we can provide dashboards in Grafana, and there are ways to get access to metrics about events and to surface alerts, but I think there's more we can do to make the events coming out of the fleet more consumable at the hub, making it easy to prioritize what users are doing.

So definitely, there are lots of different areas where there's still lots of room to grow and innovate. At this point it's still a relatively young community, so people who are excited about this topic and want to get involved and engaged can have a big impact on how this grows over time. Nice.

Awesome. So we're at time. Josh, you've got like 50 seconds if you want it. I was just going to echo what Michael was saying: it's about growing the community and the interactions, and trying to evangelize and bring Open Cluster Management to as many additional projects (and interact with as many additional projects) as we can, to grow the space.

Awesome. Well, folks, they're looking for your help out there, so go see what you can do to help them out. Until next time, y'all; I'll be in touch with Scott in particular about getting y'all back on track with OpenShift.tv episodes here. Thank you, Chris, for all that you do. Thanks for having us on today. Yes, thank you, Chris. Thanks for joining us. Have a good one. Thank you, everyone out there. Coming up next is the OKD office hour, and we will catch you there in like a minute. So take it easy out there. Bye, everybody.