Okay, hello everyone. I found it interesting that after the MicroShift talk there's an IBM guy showing the large-enterprise view, so that's kind of an interesting segue. This talk is about how we at IBM came up with development patterns to address large enterprises' requirements for workload isolation using OpenShift's Operator Lifecycle Manager, OLM. I hope you all know what OLM, the Operator Lifecycle Manager, is; if not, there are plenty of Red Hat folks here who can explain it. In short, it's a specific way, part of the Operator Framework, to deploy and manage operators.

A few words about me: I'm an IBMer, and that's basically all you need to know. I come from IBM in Poland, not Holland, Poland. I'm a developer advocate, and I'm responsible for IBM Cloud Pak architecture. You'll hear about Cloud Paks throughout this talk; Karina mentioned Cloud Paks earlier, she worked with them.

So let's dive into the actual matter. I wanted to start by introducing the enterprise customer use cases for Cloud Paks, but first let's understand what Cloud Paks are. Cloud Paks are IBM products running on Kubernetes, specifically on OpenShift. There are seven of them, and they are specifically developed and curated to be first-class citizens on the OpenShift platform. Each one groups together a collection of capabilities for some logical domain, whether you're interested in data and analytics, machine learning, securing your infrastructure, automating business processes, or integrating different environments. That's what IBM Cloud Paks are for. What they have in common is that they run on OpenShift Container Platform, they are 100% containerized, and they make heavy use of the Operator Lifecycle Manager. IBM invested heavily in making sure that our applications are deployed and managed by operators.
We use OLM heavily, and we are delivering over 200 operators, 250 or so last time I counted, and that number is growing. We also came up with a common architecture and patterns for how operators should be developed and used, not only so they can be deployed by customers, but also to address interesting tenancy and workload isolation scenarios that our customers are after.

What are those scenarios? In earlier talks today you saw the numbers: people are running really large clusters. It's not a five-node cluster; it's 96 nodes, I think I saw, and our customers run even larger ones, with thousands of nodes. And believe it or not, they are not running only IBM software there. Even though we would like them to, they are actually running much more, and they are partitioning their clusters into isolated areas managed by individual lines of business. Cluster admins want to make sure that clusters remain stable, secure, reliable, and operational while individual lines of business are given the horsepower of their own set of namespaces. The emerging pattern among customers running these huge clusters is that cluster admins simply say: hey, line of business A, you've been provided this set of namespaces, go and deploy your application there, but ensure that your application only lives within those namespaces. It cannot reach anything outside of them and cannot interfere with workloads happening in other namespaces. And whenever you want to upgrade an application in a single-tenant namespace:
It's only your business, and you must not influence anything else. Moreover, the IBM Cloud Paks deployed in those individual namespaces ship on independent schedules; they are not all rolled out to customers at the same time, and customers also upgrade individual applications on their own schedules, which we don't own. Because of the complexity of deploying this kind of setup, adoption of IBM Cloud Paks slowed down a bit, so we came up with architectural patterns for how we as IBM use operators. Now, there is ongoing work to rearchitect OLM into OLM v1, which is a great thing, and we are interested to see how it goes. However, we need to address customer use cases now, not tomorrow. So we came up with a few patterns that we believe make a lot of sense in general, and that's what I'm showing here.

First of all, given that our applications, the Cloud Paks, must run within their namespaces and must not interfere with any other namespaces, they must not have cluster roles. I wrote "remove or reduce" because some of the applications require access to nodes, which is a cluster permission, but in general we scrutinize every cluster permission bit by bit, verb by verb, resource by resource, and we have to justify every single cluster role that is left. Not to mention that wildcard permissions are a no-go, full stop.

Then, whenever there are services which are common between multiple instances of applications, we call them singletons, and we isolate and segregate them: when a tenant deploys an application, the tenant asks the cluster administrator to do some prep work first, just like cluster admins already do the storage setup.
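To make that cluster-permission scrutiny concrete, here is a minimal sketch of the kind of review rule we mean: a namespaced Role with every verb and resource enumerated, plus a small lint that flags the things the review would reject. The role name, namespace, and rule set are illustrative, not actual Cloud Pak RBAC.

```python
# Sketch: a namespaced Role with explicitly enumerated verbs and resources,
# plus a lint that rejects cluster scope and the wildcard permissions that
# are a no-go. All names here are illustrative placeholders.

ROLE = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",  # namespaced Role, NOT a ClusterRole
    "metadata": {"name": "my-cloudpak-role", "namespace": "tenant-a"},
    "rules": [
        {
            "apiGroups": [""],
            "resources": ["configmaps", "secrets", "services"],
            "verbs": ["get", "list", "watch", "create", "update", "delete"],
        },
        {
            "apiGroups": ["apps"],
            "resources": ["deployments", "statefulsets"],
            "verbs": ["get", "list", "watch", "create", "update", "patch"],
        },
    ],
}

def lint_role(role: dict) -> list[str]:
    """Return review findings; an empty list means the role passes."""
    findings = []
    if role["kind"] != "Role":
        findings.append("cluster-scoped permissions must be justified one by one")
    for rule in role["rules"]:
        for field in ("apiGroups", "resources", "verbs"):
            if "*" in rule.get(field, []):
                findings.append(f"wildcard in {field} is a no-go")
    return findings

print(lint_role(ROLE))  # prints [] - no cluster scope, no wildcards
```

In practice this kind of check runs as part of the operator build pipeline rather than at runtime, but the review criteria are the same: no wildcards, and every remaining verb has to earn its place.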
We say the same for singleton services like cert-manager: OpenShift's cert-manager will take care of that, and likewise for other shared services. All of the singleton services are extracted and managed by cluster admins, not by tenant admins.

We also address a scenario where the namespaces for operators and operands are segregated. We see customers implementing firewall rules between namespaces and the API server, so that only a selected set of namespaces is allowed to talk to the API server at all. Operators can interact with the Kubernetes APIs, but operands cannot. Operands are the actual workloads, the ones doing the actual business: the database, the machine learning system. Those are operands, and they have operators. The operands must not talk to Kubernetes; they are absolutely prevented from doing so.

We are also providing an open-source component, conceptually very similar to what OLM v1 will bring, an operator which manages permissions across multiple namespaces. Cloud Paks are not deployed in a single namespace but in a group of namespaces, so our topology sometimes spans multiple namespaces and we need the permissions to be projected into each individual namespace. We have an operator which does exactly that.

Last but not least, something which is very frequently forgotten: operators are shipped and delivered together with custom resource definitions, CRDs. A CRD is the schema of the custom resource that the operator acts upon; we saw a few examples of CRDs earlier, I think I saw one for... what was that? I forgot. Anyway, CRDs are cluster-scoped: you install an operator into one namespace, but the custom resource definitions it carries are defined for the whole cluster.
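Coming back to that multi-namespace permission projection for a moment: conceptually, the operator takes one role and the group of namespaces a Cloud Pak spans, and stamps out a RoleBinding per namespace. This is only an illustrative sketch of the idea, not the actual component; the function and all names in it are made up.

```python
# Sketch: project the same namespaced permissions into every namespace a
# Cloud Pak topology spans, by generating one RoleBinding per namespace.
# Function name, role name, and namespaces are illustrative placeholders.

def project_role_binding(service_account: str, sa_namespace: str,
                         role_name: str, namespaces: list[str]) -> list[dict]:
    """Grant one operator service account the same (namespaced) role in
    every target namespace of the group. Illustrative sketch only."""
    bindings = []
    for ns in namespaces:
        bindings.append({
            "apiVersion": "rbac.authorization.k8s.io/v1",
            "kind": "RoleBinding",
            "metadata": {"name": f"{role_name}-binding", "namespace": ns},
            "roleRef": {
                "apiGroup": "rbac.authorization.k8s.io",
                "kind": "Role",  # still a namespaced Role, no cluster scope
                "name": role_name,
            },
            "subjects": [{
                "kind": "ServiceAccount",
                "name": service_account,
                "namespace": sa_namespace,
            }],
        })
    return bindings

bindings = project_role_binding("cloudpak-operator", "tenant-a-operators",
                                "my-cloudpak-role",
                                ["tenant-a-operators", "tenant-a-workloads"])
print([b["metadata"]["namespace"] for b in bindings])
# prints ['tenant-a-operators', 'tenant-a-workloads']
```

The point of doing this in an operator, rather than by hand, is that when the group of namespaces changes, the bindings are reconciled automatically and nothing ever needs a ClusterRoleBinding.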
A CRD is therefore seen across all of the namespaces, and an interesting thing happens if you install two different versions of an operator into two different namespaces: if those operator versions define different custom resource definition schemas, interesting conflicts arise. To address this, again now, not tomorrow, without any vCluster technology and without any multi-tenant control planes, we at IBM implement a very strict regime for how custom resource definitions are defined. They must be backwards and forwards compatible, which means we allow customers to run different environments, different sets of namespaces, on the same cluster, one of them being a dev instance and another a QA instance, at different versions, v1 and v2, of our Cloud Pak capabilities, and they don't conflict with each other even though the custom resource definition is a global thing.

This has an interesting side effect: we make heavy use of the Kubernetes x-kubernetes-preserve-unknown-fields marker in the CRD's OpenAPI schema, which simply means that Kubernetes will not prune fields that are not explicitly described in the schema. Effectively we have somewhat schemaless CRDs, which is kind of heartbreaking, because CRDs were introduced to Kubernetes precisely to enforce a schema for custom resources. But given that custom resource definitions are not tenant-level but cluster-scoped, that's the whole reason; it is a bit of a pain, but with very well-defined software development practices you can ensure that your operators operate fine even with schemaless CRDs.

Now, that's what we have today. Tomorrow, in the future, we want to explore other options where clusters are really, truly multi-tenant. We are starting by making sure that operator catalogs, the catalog sources, are scoped to namespaces.
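As an illustrative sketch of that idea (the catalog name and index image are made up, not an actual Cloud Pak catalog), a tenant-scoped catalog source differs from the usual shared one mainly in which namespace it lives in, so each tenant resolves subscriptions against its own copy of the catalog:

```python
# Sketch: one CatalogSource per tenant namespace instead of a single shared
# one in openshift-marketplace. Catalog name and image are placeholders.

def tenant_catalog_source(tenant_ns: str, index_image: str) -> dict:
    """Build a CatalogSource that lives in the tenant's own namespace, so a
    catalog update rolls out to this tenant only, not to every tenant."""
    return {
        "apiVersion": "operators.coreos.com/v1alpha1",
        "kind": "CatalogSource",
        "metadata": {
            "name": "cloudpak-catalog",
            "namespace": tenant_ns,  # NOT openshift-marketplace
        },
        "spec": {
            "sourceType": "grpc",
            "image": index_image,  # placeholder catalog index image
            "displayName": "Tenant-scoped Cloud Pak catalog",
        },
    }

# Each tenant gets its own copy; updating one does not touch the others.
catalogs = {ns: tenant_catalog_source(ns, "example.registry/catalog:v1.2")
            for ns in ["tenant-a", "tenant-b"]}
print(sorted(c["metadata"]["namespace"] for c in catalogs.values()))
# prints ['tenant-a', 'tenant-b']
```

Bumping the image tag in tenant-a's copy then upgrades only tenant-a's operators, which is exactly the one-tenant-at-a-time rollout described next.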
Those tenant-scoped catalog sources live not in the openshift-marketplace namespace but in the individual tenant namespace. So whenever a customer updates the definition of a catalog source, the update is not pushed towards all of the tenants but only to one tenant at a time, and we avoid manual plan approvals and complex setups like that.

Of course, we are aware of and collaborating on the OLM v1 effort, which is a different architecture for the Operator Lifecycle Manager, and specifically we are super interested in, and will collaborate on, the migration path from the existing OLM v0 to v1. So customers, but also vendors, perhaps some of you are already implementing operators, can leverage that work too: your technology, your operators, can be migrated from the existing OLM to OLM v1. That's the essential piece.

At the same time, there is an evolution of Kubernetes itself, and of OpenShift as well, towards multi-tenant control planes. Perhaps you saw the introduction of kcp at an earlier OpenShift Commons gathering, which is also an interesting promise for how certain problems of the multi-tenant world, or multi-tenant clusters, can be resolved, and there are other technologies pretty actively trying to solve the same problem.

But it's a journey. It's something which we as IBM have realized: customers initially ran very small clusters fine-tuned for specific applications; now they run large, multi-tenant ones, and the technology needs to catch up. Kubernetes needs to catch up, OLM needs to catch up, and we are addressing it: we are and will be documenting the development practices for the field, for the general population, while at the same time actively exploring the upstream activities.

And I think that's it. It showed a different perspective:
It's not MicroShift, but a huge-enterprise view on this ecosystem. I'm happy to take any questions from you.