Hello, everybody, and welcome to another OpenShift Commons briefing. I'm Diane Mueller, and I'm really thrilled to have with me today Tal Liron from the Telco Solutions and Enablement group at Red Hat, part of the Office of the CTO, coming to talk to us about Kubernetes Operators for Telco Workloads, which basically covers every square of the buzzword bingo card from every conversation I've had for the past week with the telco teams and the operators team, and everybody does love a bit of Kubernetes. So I'm really thrilled. Tal has thought a lot about the pattern of Kubernetes operators and wrote an incredible internal white paper, which he's turned into an external one that we'll give you a link to at the end of this. I think he has really expressed the role of operators and explained what's great about using this pattern. So I'm going to be quiet and let Tal walk through his thinking around this topic. You can ask questions in the chat, and we'll leave some time at the end. Wherever you are, whether you're watching on YouTube, BlueJeans, or Twitch, you can ask questions and we'll relay them to Tal at the end. If we don't get to all of your questions, we will post them and try to answer them in a blog after this. So with that intro, Tal, thank you very much for taking the time today. Tell us about yourself and take it away.

Well, thank you very much for the kind introduction. I think you covered everything. I'm an engineer in the Telco Solutions and Enablement team at Red Hat, part of ecosystem engineering. I've been working with telco workloads on Kubernetes for a very long time, and operators is always the topic that comes up. So, correct, there are a lot of buzzwords here, but hopefully after this presentation you'll realize that there's some substance behind them.

I will dive right in because I have quite a lot to cover. This presentation is divided into four parts. The first part will be a bit of history and context to get people on board and understand what we're even talking about here: why is this important, why are telcos using Kubernetes operators, and why are they even interested in that? Then we're going to move to talking about what operators really are. Then we're going to go back to telco and look at actual use cases where Kubernetes operators can really help. And finally, I'm going to discuss some practical considerations. Much of this presentation could be interesting for people who are not working in telco and are just generally interested in what operators can do, so we'll move in and out.

So here we go. Part one: how did we get here? Why are we talking about telco workloads and Kubernetes operators? I'll give you some prehistory in this slide. The issue is network equipment, and from the very start, telcos have been decoupling things. So this is a history of decoupling, to an extent. You had big networks run by a big telecommunications company; think about a transatlantic telegraph in the 19th century. You have all kinds of equipment along the way, but it's always been decoupled. That is, we had network equipment providers that made and sold the equipment and also serviced it, and then you had the operator. And here, "operator" in the telecommunications sense introduces a little bit of confusion, but in this case, when I say operator, I do not mean a Kubernetes operator.
So the network operator, the telecommunications company, would be working with these other companies that would actually provide the hardware. And it was from the start also a very multi-vendor situation, where you might have equipment coming from different providers, different vendors, and they would need to work together, right? You have telegraphs on both sides of the Atlantic. They might be bought by two different companies, but they have to work with each other. So standards, at all of these steps in the history, are still with us. And we're still using a lot of equipment that is basically a specialized computer. Those are becoming interesting too, because when you look inside those specialized computers, sometimes they can look a lot like clouds. But I'm jumping ahead here.

This is part of a general evolution from analog to digital. But keep in mind that we still have a lot of analog, right? Think of something as simple as an analog-to-digital converter for voice, or antennas with radio waves for cellular networks. We're still dealing with a lot of analog stuff, and computers are still doing analog work here. But we can definitely see a trend of evolution to make more and more of our information digital.

The second big phase, and again, we're still part of this history, is virtualization. A colleague recently corrected me here. If we're engineers, we sometimes think of this word in terms of things like virtual machines, or virtualizing network interfaces like SR-IOV. So we have virtualization technologies built into our computers these days, and into operating systems that we definitely use. But virtualization here is meant in a different way. The idea is that virtual is as opposed to physical or actual: we're decoupling the hardware from the software. So suddenly we can talk about our equipment being virtual, meaning that it is no longer a physical piece of equipment but actually a piece of software that can run on a piece of hardware. And this decoupling basically introduced new kinds of players to the game. Where before, in the prehistory slide, we talked about network equipment providers, now we can talk about hardware vendors versus software vendors. And we're also changing our terminology a bit: we can still talk about equipment, but now we're talking about network functions, and I'm going to dive into that a bit more in a bit. So we can see here that we really have two different kinds of network functions: the physical ones and the virtual network functions, VNFs. And again, I'll emphasize, virtual here does not necessarily mean it's a virtual machine. It could be, but anything that's decoupled from hardware would be a virtual network function, even if it's running in a container, because that's the idea of virtual here. The important parts of this story are that we started to evolve toward off-the-shelf products, off-the-shelf hardware and off-the-shelf software. And this evolution is kind of hard, because again, some of this equipment can be very specialized and require very specialized hardware. But in some cases, especially in software, that's easier to move around. So the third phase is the one that we're really involved in right now. I'm calling it cloudification.
Here, we're complicating the picture even more, because in terms of software we're talking about cloud-native network functions, that is, network functions designed to run on clouds, on cloud platforms. But then we're also introducing the platform vendors who actually create these cloud platforms. So more and more players, more and more decoupling. And again, there are some trends in the evolution here. We're still using a lot of virtual machines for sure, but the trend is also to move to containers in many cases, and possibly the most interesting trend for us in this presentation is the move in management and orchestration from something like OpenStack to something like Kubernetes. And here it's important: the move from OpenStack to Kubernetes is not just a move from virtual machines to containers. That's actually the least interesting part of it. The really interesting part is that we are no longer providing infrastructure, but rather orchestrating our workloads. That's what Kubernetes really brings to the table. I'm oversimplifying this quite a lot; there's a lot going on, but my point here is just to give you context. And I'm mentioning that we're also using public clouds in some cases. That might sound strange to you: why would a telecommunication network function run on a public cloud? Well, so much of it ends up looking a lot like enterprise software, or even IT software. Think of a cellular network where you're connecting your cell phone with a SIM; somewhere there is a record of that SIM and which account it belongs to. There's a database where all of this is stored, with security keys, et cetera. That's kind of a normal IT workload that could potentially run on truly generic hardware and truly generic clouds, and they could even be public clouds. So that's the overall context of where we are. And it was important for me to present all this because all of these steps, these three steps, are still with us. That's part of the story of where Kubernetes operators become interesting and useful.

OK, now let's switch gears and discuss something more abstract: what are these operators that we're talking about? There's something called the operator pattern, which has been with us for a quarter of a century now. I'll open the link here. This is the oldest version that I at least found on the internet, but I'm sure there are older ones: from 1997, the operator design pattern for parallel computation. My point here is that it's really used in a computational sense. And we're not talking about functions; even when we talked about network functions before, we're talking about something that does continuous work. So don't think of it in terms of arithmetic, and also don't think of it in terms of a computer program calling a function. A function is triggered by one thing and then it ends; the function is called, and eventually it returns. Operators and network functions are not like that: here we're talking about things that do continuous work. The network function is constantly functioning, and the operator is also continuously doing its work. Our context here is really when we apply this to a declarative, intent-oriented orchestration environment like Kubernetes, where the operator has a special meaning. It consumes intent as its input and emits intent as its output. If that's not very clear, we'll talk about it a bit more on the next slide. But again, I'm emphasizing the fact that this is continuous.
The operator is continuously monitoring that input as the input changes, and continuously managing the output to make sure that it matches our intent. We can see here that we can really divide operators into two kinds. There would be pure operators that have no side effects: all they do is consume intent and emit intent. But a lot of times we're interested in impure operators that do additional work in addition to emitting intent.

Let's take a look at some applications of the pure operator pattern in Kubernetes, and hopefully that will clarify what we mean here. There are many examples of this, but this is probably the most accessible to everybody: think of creating a deployment in Kubernetes. The deployment descriptor will be managed by the deployment controller, which in this case is an operator. So the deployment controller will take that deployment and create a replica set. The intent consumed by this operator would be the deployment, and the intent emitted and managed by the operator would be the replica set. And this is a pure operator, right? If we delete the deployment, the replica set will be deleted as well; they're tied together. The second operator in the chain is also a pure operator, and that's for the replica set. Now we have the replica set controller, and the replica set controller takes that replica set descriptor and emits pods, zero or more of them, with their descriptors. And again, it's a pure operator. That's all it does; it doesn't have any side effects. That's its entire job. Finally, we reach the pod controller, and in this case we are not applying the operator pattern, or at least not in a strict sense, because we are actually terminating the operational graph and moving to the real world. Here we're only interested in the side effects: the actual containers that are going to do the work described by the pods. And the pod controller works with the container runtime to connect those to each other. So we have a graph, right? A chain of two operators, terminated at one end. One way to look at this graph is that we move from the left side, the more abstract, to the right side, the more concrete. I encourage everybody to open their imagination a little bit regarding the operator pattern, because it's not just abstraction to concreteness. You can think of it as a kind of translation as well. We're translating intent between different kinds of domains, and this again becomes very important in telco workloads, where we are constantly doing the work of translation between domains and standards, et cetera. So it's not just about getting more concrete. It's about a different kind of intent, almost, or even translating between paradigms, really.

Okay, this next slide. I'm not gonna do the sad trombone sound, but I hope you hear it in your head. I wish I didn't have to include this slide, but unfortunately the official Kubernetes terminology is very confusing, if we look at the actual links that I provided here. When we talk about controllers in Kubernetes, these are the built-in controllers, and there's really a discussion about the controller pattern, and this is all okay. What's not so okay is the discussion of the operator pattern, which is simply wrong. For 25 years we've been using the operator pattern terminology, but Kubernetes has decided to use that in a different way. So in the Kubernetes terminology, when we talk about controllers, we're talking about the built-in operators.
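To make the pure operator idea from the deployment example a little more concrete, here is a minimal sketch of such a reconcile loop using the official Python Kubernetes client. It is illustrative only, not from the talk: it consumes a hypothetical WebApp custom resource (the example.com group and all field names are assumptions) and emits a deployment as its output intent, doing nothing else.

```python
# Minimal "pure operator" sketch: consume intent (a hypothetical WebApp custom
# resource) and emit intent (a Deployment). No status updates, re-queueing or
# error handling; this is only an illustration of the pattern.
from kubernetes import client, config, watch
from kubernetes.client.rest import ApiException

config.load_kube_config()  # or config.load_incluster_config() inside a cluster
custom = client.CustomObjectsApi()
apps = client.AppsV1Api()

def emit_deployment(webapp):
    """Translate the consumed intent (WebApp spec) into emitted intent (Deployment)."""
    name = webapp["metadata"]["name"]
    namespace = webapp["metadata"]["namespace"]
    spec = webapp.get("spec", {})
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name=name, namespace=namespace),
        spec=client.V1DeploymentSpec(
            replicas=spec.get("replicas", 1),
            selector=client.V1LabelSelector(match_labels={"app": name}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": name}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(name=name, image=spec.get("image", "nginx")),
                ]),
            ),
        ),
    )
    try:
        apps.create_namespaced_deployment(namespace, deployment)
    except ApiException as e:
        if e.status == 409:  # already exists: converge toward the declared intent
            apps.patch_namespaced_deployment(name, namespace, deployment)
        else:
            raise

# The operator is continuous: it keeps watching the input intent and reconciling.
for event in watch.Watch().stream(
        custom.list_cluster_custom_object, "example.com", "v1", "webapps"):
    if event["type"] in ("ADDED", "MODIFIED"):
        emit_deployment(event["object"])
```

A real controller would also handle deletions, retries, and status reporting, but the shape is the same: watch the input intent, emit and keep managing the output intent.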
So everything you saw in the previous slide, all of these are controllers, even though they implement the operator pattern. But when we say operator, according to the official terminology, we're talking about a custom controller. Any custom controller that you create and add to Kubernetes would be called an operator, even if it doesn't implement the operator pattern. So it's confusing, and we're just stuck with phrases like "a Kubernetes controller that implements the operator pattern" or "a Kubernetes operator that does not implement the operator pattern," unfortunately. But that terminological soup is just there; it doesn't change what we're actually interested in here, and that's the important thought. We care about Kubernetes operators even if they don't apply the operator pattern. Whether they're pure operators or impure operators could be important to the architecture we're creating, but in the end they're useful even if they're not applying the operator pattern. So if you want to think of it as a custom controller, that's one way to think about it. And the question of whether you need the operator pattern has to do with your entire architecture and your entire strategy. Are you trying to create a kind of Unix-philosophy tool that could be reused? For example, if you go back here to the replica set controller: replica sets are usable even without deployment controllers, right? They can be used directly, and they could be connected to other operators that would sit before them and generate intent. So yeah, don't worry too much about having to apply the operator pattern everywhere in order to make good use of Kubernetes operators. But it's important to know what you're doing, which I hope this presentation is clarifying. Another thing you should not do is reinvent wheels. There's a whole bunch of operators out there already, of diverse quality and diverse abilities, but look at them first. Of course, something off the shelf might not do exactly what you need, and then you need to create your own. So sometimes you do need to retrofit a certain wheel, maybe. But almost all of these are open source, and that's possible to do if you need to do it.

Okay, we'll shift gears back to talking about telco and look at how everything we discussed until now could be used for telco workloads. The first use case is stateful components, and the vast majority of operators out there in the wild are really dealing with state. There's a good reason for this. Lifecycle management is built into Kubernetes in a very specific way, right? You're familiar with it: you create a resource, for example a pod, and it will create the containers around it. If you delete the pod, those will be deleted. If you update the pod, maybe some change to the containers will happen too. So there is lifecycle management in Kubernetes, but it's very, very specific. Unfortunately, many stateful components don't work that way. If you have, say, a database cluster with 10 instances running, they might work together in ways where you do not want to simply delete one, right? Or add one; maybe it needs to be synchronized when it is added in. There are all these issues with state that are more complicated than just adding and deleting. And that's where your operator comes in, an impure operator, to handle all that specialized work that you would need to do.
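As a rough sketch of that stateful case, here is what an impure operator's reconcile pass might look like: instead of letting Kubernetes add or remove database pods freely, it changes membership one step at a time and leaves room for the specialized steps in between. The DbCluster resource and the add/remove helpers are hypothetical placeholders, not a real operator.

```python
# Sketch of an impure operator for a hypothetical DbCluster custom resource.
# Membership changes happen one at a time, with room for synchronization and
# draining in between, rather than relying on Kubernetes' generic lifecycle.
import time
from kubernetes import client, config

config.load_incluster_config()
custom = client.CustomObjectsApi()

def current_members(name, namespace):
    # A real operator would inspect the pods or stateful set backing the cluster.
    obj = custom.get_namespaced_custom_object(
        "example.com", "v1", namespace, "dbclusters", name)
    return obj.get("status", {}).get("members", [])

def add_member(name, namespace):
    # Hypothetical: create the new pod and storage, wait for it to start,
    # then trigger data synchronization before it serves traffic.
    ...

def remove_member(name, namespace, member):
    # Hypothetical: drain connections, wait for replication to catch up,
    # update connection URLs, and only then delete the pod.
    ...

def reconcile(cluster):
    name = cluster["metadata"]["name"]
    namespace = cluster["metadata"]["namespace"]
    desired = cluster["spec"].get("replicas", 3)
    members = current_members(name, namespace)
    # Converge gradually: one membership change per pass, never a bulk delete.
    if len(members) < desired:
        add_member(name, namespace)
    elif len(members) > desired:
        remove_member(name, namespace, members[-1])

while True:  # the operator is continuous, not a one-shot installer
    for cluster in custom.list_cluster_custom_object(
            "example.com", "v1", "dbclusters").get("items", []):
        reconcile(cluster)
    time.sleep(30)
```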
And again, remember the operator is continuous. So it would be constantly monitoring, say, your database cluster, making sure that if a node falls off we can create another node and maybe add it in the correct way, et cetera. There is a lot of state in network functions too. You might think that networks are very stateless things, right? A component along the way, like a router or a gateway: where's the state there? Well, there's configuration state, but beyond that there is state that has to do with the actual network. There are a lot of issues with maintaining connections and sessions and load balancing them. There are aspects that cross components: you might have multiple network functions that are actually part of the same overall session. So again, same thing as with databases. You can't just add a node or delete a node whenever it works for you. You have to ensure that you can scale out or scale in without packet loss and while maintaining these sessions. I think that's pretty obvious. So stateful components are a great reason, or a great use case, for creating operators.

Configuration management is another very big one. Network functions, I think more than any kind of workload you might be familiar with, involve very heavy configuration. So much so that a whole set of standards and stacks of software have been created just to configure network devices and network functions. And they're complex enough that you can really imagine a complete workflow of configuration. Think of configuring a router. First of all, you need to find out, well, maybe how many interfaces the router has. So you need to query the router, and depending on whether it's this kind of router or that kind of router, you might have a different branching workflow here and do other kinds of work. So there's a lot of work in configuration, and again, this is a great use for an operator. The operator can encode a lot of that configuration, and because it's running on the cluster, it can do that locally. For a long time in configuration management for networks, we had some sort of orchestrator that sits maybe far away from the site and manages the configuration from afar. But by moving it locally, to an operator on Kubernetes, we are delegating some of that work. I'm gonna talk about that a bit more in other slides, but it makes sense to keep the code close to the data in this case. And here I'm linking to an operator that I've been working on that does exactly that. It lets you write these workflows using Python and have them run in your cluster. There's a small sketch below of what such a workflow might look like.

Third use: disaggregation and modularity. This might seem obvious, but again, it has a very specific bent, I think, in telco. We can think of breaking up that big orchestrator that sits far away into a bunch of operators that work locally at the sites. And when I talk about sites, I'm talking about hundreds of thousands of sites in some cases. Think about a network operator, and here I'm using the term operator not as a Kubernetes operator but as a telecommunications company, that covers a continent. Think how many cellular antennas are out there, how many big clusters in cities. We're talking about many, many, many sites. So the scalability challenge is almost mind-boggling.
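Here is that small sketch of a configuration-style workflow running as an operator. It assumes a hypothetical RouterConfig custom resource and uses the ncclient NETCONF library to query the device and branch on what it finds; the resource fields and payloads are made up for illustration, and a real operator would watch continuously rather than loop once.

```python
# Sketch of a configuration-workflow operator: read intent from a hypothetical
# RouterConfig custom resource, query the device over NETCONF, branch on what
# we find, and push the intended configuration. Field names are illustrative.
from kubernetes import client, config
from ncclient import manager  # NETCONF client assumed available in the operator image

config.load_incluster_config()
custom = client.CustomObjectsApi()

def configure_router(router_cfg):
    spec = router_cfg["spec"]
    with manager.connect(host=spec["address"], port=830,
                         username=spec["username"], password=spec["password"],
                         hostkey_verify=False) as m:
        # Step 1 of the workflow: discover what we are actually talking to.
        running = m.get_config(source="running").data_xml
        # Step 2: branch; different device models need different payloads.
        if "interfaces" in running:
            payload = spec["interfaceConfig"]   # hypothetical XML config snippet
        else:
            payload = spec["fallbackConfig"]
        # Step 3: push the intended configuration to the device.
        m.edit_config(target="running", config=payload)

for rc in custom.list_cluster_custom_object(
        "example.com", "v1", "routerconfigs").get("items", []):
    configure_router(rc)
```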
So here again, breaking up that big orchestrator into Kubernetes operators that can work locally at the sites means, first of all, that they'll operate faster because they're close to the things that they're operating. And they're able to make autonomous decisions. Here there's an entry point for using machine learning and artificial intelligence to allow them to make these decisions better. So even if they lose connection to the central big orchestrator, they can still operate autonomously. That's a huge advantage for reliability too. And a huge advantage, I think, for design: you can create an operator that does one thing well, rather than work on a very big orchestrator, which could be an enormous project, and you can separate it off. So it fits again with our history of decoupling. Let's have more decoupling. We can then think of these operators as part of a larger orchestration narrative, so they fit in with other things. Again, they accept intent and they can emit intent, and you can think of them as part of a big graph that really describes your entire orchestration strategy.

The fourth use sounds like the opposite of the previous use. So yes, you can disaggregate, but you can also use operators to actually integrate. And we see a lot of operators like this, actually: operators that are installers. Think of creating a custom resource that describes your whole product. We see that a lot with databases, for example. You'd like to install, say, a MariaDB cluster. Well, the MariaDB operator will install all of that for you. Installer is not the best word for those, because we're also talking about day-two changes. That is, after it's installed, you might be updating it, and you'll want to see those changes happen as well, at least if it's a good operator. There's some history to this that predates Kubernetes and really predates cloudification. The terminology here, I'll just go over it briefly: for a long time, we've been talking about specialized virtual network function managers versus generic virtual network function managers. The idea being that an equipment provider can provide the equipment, and also the virtual equipment, and also provide a management suite designed for that equipment. So if we want to manage the equipment, we would work with that management suite. But a lot of us have been hoping that there would be generic versions of that, so we wouldn't have to reinvent the wheel every time we're using different kinds of equipment, each with its own specialized manager. That kind of generic manager would be able to be configured or changed or programmed in some way to handle other kinds of CNFs. That's again something that I'm working on specifically. There's a project called Turandot, which is a Kubernetes operator that uses TOSCA to configure what kind of network functions, or general workloads, you're working with. So yeah, there are two trends here, right? There's one trend to disaggregate and modularize and use the Unix philosophy. But there's another trend where we can encapsulate all the work inside an operator. And the question is, should we use one approach or the other? And I say both. I think both of them make sense, and it depends on what you're looking at. The rule of thumb that I always apply is: keep the code close to the data. So if that means that you want to integrate it into an operator, then okay, you're integrating. And if that means the opposite, then you're disaggregating and using the operator to modularize.
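To illustrate the installer-style operators mentioned a moment ago, here is a rough sketch in which one hypothetical TelcoApp custom resource describes a whole product and the operator turns it into its constituent resources, a deployment and a service. Because the same apply logic runs on every change to the TelcoApp, day-two updates are handled the same way as the initial install. All names and fields are assumptions for illustration.

```python
# Sketch of an "installer" operator: one custom resource describes the product,
# the operator creates or patches its child resources on every change.
from kubernetes import client, config, watch
from kubernetes.client.rest import ApiException

config.load_kube_config()
apps, core = client.AppsV1Api(), client.CoreV1Api()
custom = client.CustomObjectsApi()

def children(app):
    name, spec = app["metadata"]["name"], app.get("spec", {})
    labels = {"app": name}
    deployment = {
        "apiVersion": "apps/v1", "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {"replicas": spec.get("replicas", 1),
                 "selector": {"matchLabels": labels},
                 "template": {"metadata": {"labels": labels},
                              "spec": {"containers": [
                                  {"name": name, "image": spec["image"]}]}}}}
    service = {
        "apiVersion": "v1", "kind": "Service",
        "metadata": {"name": name},
        "spec": {"selector": labels, "ports": [{"port": spec.get("port", 80)}]}}
    return deployment, service

def apply(app):
    namespace = app["metadata"]["namespace"]
    deployment, service = children(app)
    for create, patch, manifest in (
            (apps.create_namespaced_deployment, apps.patch_namespaced_deployment, deployment),
            (core.create_namespaced_service, core.patch_namespaced_service, service)):
        try:
            create(namespace, manifest)
        except ApiException as e:
            if e.status != 409:
                raise
            patch(manifest["metadata"]["name"], namespace, manifest)  # day-two update

for event in watch.Watch().stream(
        custom.list_cluster_custom_object, "example.com", "v1", "telcoapps"):
    if event["type"] in ("ADDED", "MODIFIED"):
        apply(event["object"])
```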
And this is the final use case, and then I'll jump into the next topic: why not do all our work in Kubernetes? If you remember, the three phases of history that I started with are still with us today. We still have physical network functions, and they're not running on any cloud platform; they're running on some sort of specialized hardware. Why not manage them from Kubernetes too? I've come to call these representational operators, because they're operators that do work on representations that exist in Kubernetes rather than on the resources themselves; the network functions are not running in the cluster. Now, why would you do that? Well, you're basically putting all your work in one place. If you're already committed to using Kubernetes for orchestrating your telco workloads, why not use Kubernetes to also orchestrate your PNFs? And it makes sense because the technologies end up being very similar too. If you think of configuration using, say, the NETCONF protocol: the physical network functions are using it, but the cloud-native network functions might be using it as well. And there are examples in the industry of doing exactly that: a Kubernetes operator, or you can call it a controller if it's not applying the operator pattern, that manages a box that sits outside of the Kubernetes cluster. We can update the custom resource for it, et cetera, and it would seem to work as if it's within the cluster. So we stay within the same paradigm.

Okay, I'll shift gears again now and talk about how we do all this, what we should be worried about, and what the pain points are. This comes from doing a lot of work with operators, reporting from the trenches on some of the problems you might encounter. One problem is the custom resource definition implementation in Kubernetes. Sad trombone again here, because custom resources are namespaced in Kubernetes, but custom resource definitions are not; they are defined cluster-wide. It might sound like that's not a problem, you'll just install them in the cluster, but in a lot of management scenarios not everybody has those privileges. The question then becomes: is the operator part of your workload or part of the platform itself? And there's no easy way around this. I hope that Kubernetes ends up fixing this. It would require a major change to how custom resource definitions are designed, but I think it would go a very long way toward making operators more portable, so that you could include them as part of workloads and not think that you have to install them separately. Another problem with custom resources that I'm mentioning here is that there's a size limit of about one megabyte, and depending on what you're doing you might hit that limit. The thing is, don't worry: there are actually very good alternatives to using custom resources. Custom resources are a nice tool that comes with Kubernetes, but you don't have to use them if they don't work well for you. I'm putting out one alternative here: you can use config maps. They're also general-purpose resources in Kubernetes and you can use them any way you want. Why not use them to store your custom resources? Use them as descriptors, and you can add annotations and your own validation and schema, et cetera. You can build that yourself on config maps.
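One way to sketch that config-map alternative: store the descriptor as JSON inside a ConfigMap, mark it with an annotation so the operator knows it is intent meant for it, and let the operator do its own validation, since there is no CRD schema backing it. The annotation key and the schema check below are made up for illustration.

```python
# Using ConfigMaps as descriptors instead of custom resources: the operator
# picks out config maps carrying a marker annotation and validates the
# embedded JSON descriptor itself.
import json
from kubernetes import client, config

config.load_incluster_config()
core = client.CoreV1Api()

INTENT_ANNOTATION = "operators.example.com/descriptor"  # hypothetical marker

def load_descriptors(namespace):
    for cm in core.list_namespaced_config_map(namespace).items:
        annotations = cm.metadata.annotations or {}
        if annotations.get(INTENT_ANNOTATION) != "true":
            continue  # not intent for us, just an ordinary config map
        descriptor = json.loads(cm.data["descriptor"])
        # Our own validation, since there is no CRD schema backing this.
        if "replicas" not in descriptor or descriptor["replicas"] < 1:
            print(f"ignoring invalid descriptor in {cm.metadata.name}")
            continue
        yield cm.metadata.name, descriptor

for name, desc in load_descriptors("telco-workloads"):
    print(name, desc)  # a real operator would reconcile toward this intent
```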
Another approach is to use annotations, and here we can talk about fine-grained operators, that is, operators that don't work on complete intent but on aspects of intent. That's still an application of the operator pattern, and a great example of this is Multus. Multus looks for annotations on existing resources, and those become part of the intent. They're not part of a custom resource; they're actually additions to existing resources. That's a great strategy too. You don't have to start with CRDs. So again, I'm going back to that page, the operator pattern page from the official Kubernetes documentation, and yeah, it talks a lot about custom resources, but no, you don't have to use custom resources if they're hard to use or if they don't do what you need them to do. Another alternative is that you can actually store your intent elsewhere, and sometimes it's already stored elsewhere. If you have some sort of orchestration system that you're working with, or a database, you could use the custom resource or config map or an annotation to just point to that. So work with IDs instead of actually storing everything there, and that of course solves the problem of the one-megabyte limit on each of these. If you store it in your own database, do what you need. And GitOps, right? Why not store it in a Git repository too? That's another approach you can take. Your custom resources will be the live pointers to intent stored in a Git repository, and your operator, remember it's continuous, would have to monitor and look at changes to Git. You can do that through triggers for commits and react accordingly.

Another practical consideration relates a little bit to the previous one. Should the operator be installed in the namespace in which it's going to work, or should it be general and work with all namespaces? For different kinds of operators it could make sense to have one versus the other, and many operators actually support both configurations. So if you want, you can install it within a specific namespace or separately. Again, this goes back to the question of whether I'm packaging a network function with its dependencies, its operators, or whether these dependencies are something that the platform has to provide for the workload to run. This is a question that needs to be solved as part of the overall deployment strategy, but technically it's rather simple. It's usually not hard to create an operator that can support both of these. The Kubernetes API server always requires a namespace anyway when it works with namespaced resources, so my advice is: why not do both if you can? Make it a flag when the operator runs to say whether it's running in namespaced or cluster-wide mode. The difference, of course, is that if it needs to work in cluster-wide mode, you do need cluster-wide permissions to install it. And if it uses a custom resource definition, it does require those cluster-wide privileges.

Another practical consideration is garbage collection. Kubernetes has garbage collection. That's nice. It's very useful for the operator pattern, because the idea is that you can make the emitted intent owned by the consumed intent. So if you delete the input, the output will be deleted too. And that works: if you delete a deployment in Kubernetes, the replica set will be deleted; when the replica set gets deleted, the pods get deleted; and eventually the containers are destroyed. So this kind of built-in garbage collection could be great.
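A minimal sketch of relying on that built-in garbage collection: the emitted intent (a ConfigMap here, for brevity) carries an ownerReference pointing at the consumed intent (the same hypothetical WebApp custom resource as before), so deleting the WebApp lets Kubernetes clean up the child even if the operator isn't running.

```python
# Emitted intent owned by consumed intent: Kubernetes garbage-collects the
# child when the owner (the WebApp custom resource) is deleted.
from kubernetes import client, config

config.load_incluster_config()
core = client.CoreV1Api()

def emit_child(webapp):
    owner = client.V1OwnerReference(
        api_version="example.com/v1",
        kind="WebApp",
        name=webapp["metadata"]["name"],
        uid=webapp["metadata"]["uid"],
        controller=True,
        block_owner_deletion=True,
    )
    child = client.V1ConfigMap(
        metadata=client.V1ObjectMeta(
            name=webapp["metadata"]["name"] + "-settings",
            owner_references=[owner],
        ),
        data={"greeting": "hello"},
    )
    core.create_namespaced_config_map(webapp["metadata"]["namespace"], child)
```

A stateful operator might deliberately skip owner references like these and instead do its own cleanup, for the reasons discussed next.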
It could really add robustness. Even if your operator crashes, Kubernetes can make sure to delete the output intent. It's not good for stateful components, though, for all the reasons we stated before, where there's specialized lifecycle management. We don't just want, say, a database node to be deleted; maybe we need to update a connection URL, or make sure that the data in the node has been synchronized before we delete it. So again, an operator can come in here and implement its own garbage collection. Just something to be aware of: decide for yourself whether you want to use the built-in garbage collection or not.

And finally, the big practical consideration really is developing these in the first place. It's not trivial. Kubernetes is a very asynchronous environment. Kubernetes is written in Go, but I do want to emphasize that you do not have to write your operator in Go too, even though it's the native language of Kubernetes. Whatever you do, it's not going to be trivial. Jumping into developing operators requires a learning curve and a commitment to working within this complex ecosystem. I listed here a whole bunch of links that people can take a look at to either get inspiration or get a head start. Some of these can be used for prototyping an operator before you actually develop it fully. There's no easy way out of this. I'm hoping that making operators will become easier and easier. Ansible operators, for example, are very easy: if you know how to write an Ansible playbook, you can package it inside an operator. That's a great way to start. But of course, if you have to work with network devices and things that don't have Ansible support, then you have to do it yourself anyway. And if you have to encode the lifecycle management logic too, well, there you go. So it's a good idea to look first at what's available off the shelf that could help you before you jump into developing a full-blown operator on your own.

And with that, I'm glad that I managed to fit it all in, because I did want to leave room for questions. I will note here, as Diane said, that this presentation is based on a much longer document that I've created in the context of the CNCF working group. We have a working group there where we are trying to develop best practices for developing cloud-native network functions. From the discussions we had there, I created this very long document, and this presentation is a boiled-down version of it. It is a public document, so anybody can comment on it; just remember when you comment that it is public and can be seen by anybody. And with that, I am done with my main presentation and happy to take questions, comments, and discussions.

Well, as always, Tal, it is a tour de force on the topic, and it stretches the definitions that we're used to and may be comfortable with around operators. Really, thank you for this. I'll post the full document link into the chat in a minute. But I really appreciate the distinction between pure and impure operators at the very beginning of this. I think before I read your paper and listened to this, I wasn't really thinking of them in those terms, or about the prehistory of the operator pattern that we take into Kubernetes and co-opt for our own purposes, and how important naming things is and what the confusion can be.
So I think what you've done today has really helped me, and hopefully everybody else who's listening, to start thinking about this in another way, and maybe a little more clearly, especially in terms of the telco stuff. There's a couple of questions coming in from YouTube in the chat, if you see them, from Rico Suave. I think that is a pseudonym, but maybe not. Should telco workloads be deployed in their own specific cluster or in a generic cluster, as long as the CPUs and RAM are available?

Yeah, sure. That's a very good question, and I raced through it; as I said, I really oversimplified the history. Generic: we would love to have a write-once-run-everywhere universe where you can write the software and not think about the hardware on which it runs. We're constantly moving more and more towards that, but it's not always true, and it really depends on the workload too. We have workloads that are maybe closer to the physical world, right? Things that require specialized networking equipment; think of something like an antenna. That's not generic. We have specialized network cards. We have workloads optimized even for one CPU architecture versus another; think about x86 versus ARM, for example. So it's not true that you could just deploy something to a cloud and expect it to magically work, in many cases. It's not a simple problem to solve, and part of it is effort being baked into existing Kubernetes solutions. Red Hat's OpenShift does a lot of work in terms of managing its own infrastructure, so you can have different kinds of nodes. You can have a multi-architecture cluster, for example. And then your workloads would be able to be annotated to say: I need an ARM environment, and I need this amount of memory, and I need access to this kind of accelerator. So the dependencies are sometimes rather specific. But still, the advantage of using cloud technologies is not just that we can write once and run everywhere. Even with very specific hardware, using Kubernetes allows us to orchestrate at scale. That's the big sea change that I think Kubernetes brings to the table. We have orchestrators everywhere right now. It's different from OpenStack, in which you're just providing virtual machines, but then something else would need to install the software on the virtual machines. Kubernetes actually manages our software. So yeah.

I really like the phrase you used about Kubernetes being, I think it was, an extensible orchestrator. That I think was a key thing there. He has a follow-up question about what makes a cluster telco grade. He's heard about the CPU pinning feature, et cetera, but what is it that makes it specialized?

So telco grade is a term used to talk about software and hardware that would match the requirements. Now, the requirements are not just "hey, we want it to run fast." There are regulations and standards. There are timings that have to work in order for the whole chain to work. It's not the only industry with standards, of course, right? Lots of industries have reliability standards and regulations, from medical to financial, plus security standards, et cetera. So I would say telco grade is just another variation of those kinds of standards. In the end, it's not too special in itself. If you have certain timings that you need to work within, well, other industries do as well, and it's not a problem that's unique to telco.
And CPU pinning is definitely not unique to telco. So the question isn't so much about telco grade, about certifying a platform and saying this Kubernetes implementation on this hardware is telco grade, but rather asking what workloads you are going to run on it and whether they have what they need to run well. And the good thing about that story is that it becomes not a telco story but a general cloud story. If we solve it for one industry, it will be solved for others as well. That's an advantage that telecommunication companies understand too, in moving to off-the-shelf stuff. Being a snowflake is not a good position to be in. Being in a multi-vendor cloud situation gives you much more power to choose different vendors, to switch between them, to negotiate, and to use industry knowledge that has been developing for years and apply it to telecommunications.

I think it's also interesting. Like I said at the start of the conversation, this past couple of weeks it seems that Kubernetes operators and telco have been in every conversation that I've had with people inside of Red Hat and outside of Red Hat. Telco also has a history of being a sort of hotbed of use cases that then get implemented outside, in the rest of the world. We've been talking about edge computing, and there's another initiative inside of Red Hat around AIOps in telco; América Móvil did a great talk on it, and there's work on something called the Enterprise Neurosystem, using AIOps to make those autonomous decisions on the edge, about operations and about operating at telco scale. But those are the kinds of ideas (and I say "ideas" because I'm from New England, so I can't say it the other way) that help all of the other industries and all of the other spaces. I think recently we've just been seeing a lot of the things that telcos have had to do, especially around edge computing, keeping the code close to the data, and making those decisions autonomously and locally. And this is really, I think, the crux of it: leveraging Kubernetes operators in telco is really just another set of ideas and patterns that we're gonna see applied in a lot of places and moving through lots of other industries. You don't personally think of telcos as bleeding-edge thinkers, but recently a lot of the work that we're doing here at Red Hat has been taking telco use cases and applying them elsewhere. So I'm thinking of your Office of the CTO work and that telco solutions work: we're gonna keep watching what you guys do and bringing you back to get some of these big thoughts and big-thinking ideas and see how we can apply them in other places.

Yeah, without a doubt. I'll add to that that telco use cases, telco-grade use cases, do push the envelope. We can definitely say that in terms of just networking, and by networking here I mean low-level networking support, TCP/IP for example. You can obviously imagine that telco workloads, CNFs, would require much more sophistication than, say, a database in an enterprise application. So I think approaching the telco use cases and trying to solve them benefits everybody in the cloud space. One of the complaints about Kubernetes is that it's pretty weak with networking. It doesn't have a lot of opinion about networking.
You know, you bring your own networking to Kubernetes. You have some sort of SDN, a software-defined networking solution, that you plug in, but Kubernetes itself doesn't care too much about that. Well, that's nice until you really need to do a lot of low-level networking work, and then you're asking, well, can Kubernetes help me with this? And if it can't, then you have to develop all these systems yourselves. But across the board, I think the telco industry is pushing Kubernetes to be better with networking, and we see it happening. We have, for example, SCTP support just recently added into Kubernetes, and we're seeing the Network Plumbing Working Group introducing Multus for additional interfaces. So yeah, telco is not behind the times in every way; in some ways it's really pushing the envelope forward.

And I think some of the conversations I've been having with Paul Lancaster, who's sitting out there listening in somewhere, I'm sure, have been around the certified container CNFs and the big push there. A lot of the vendors that have CNFs are working with Red Hat to get them certified to work on Kubernetes and with OpenShift and doing all that stuff. Ah, Paul is right there. Yes, Paul, if you wanna chime in; Paul's been preaching to the choir.

I mean, the only thing I would add, Diane, is that a lot of the things that Tal points out, the work that we're doing in the CTO's office, is actually helping our ecosystem. What we're finding is that as the ISVs, or the business units that create software within the network equipment providers, targeting their end customers, the telecommunication service providers, migrate their applications, they actually take advantage of a lot of the work that we've done from an operator perspective. So there are operators that we actually ship with OpenShift to be able to take advantage of things like CPU pinning or SR-IOV. And so what we find is that not only the work that we're doing upstream, in OPNFV, or LFN now, and the work that we're doing in CNCF, but it's actually making it into the ecosystem of ISVs that customers are deploying at scale in the service providers. So that's really what I would add.

The other thing, and we can go a little bit longer here, the live stream may end, but you can hang out in the BlueJeans if you like and keep talking about this. One of the things that you brought up was about making it easier to build operators. And Wednesdays, which is today, is our day to talk about operators. So we've had the Java Operator SDK group on last week, and the week before, people talking about the Operator Framework and all that. How do you see, I mean, you didn't mention the Operator Framework stuff in the list on your slide. Have you worked with that crew, and are they embedded into your conversations around Kubernetes operators for telco?

Of course they are, right? Especially for targeting OpenShift, because OperatorHub and the Operator Lifecycle Manager are so embedded in OpenShift that it makes it a very natural place to start. And Red Hat, of course, as a big proponent of operators, can give you a big head start. But in the spaces that we think about, I work a lot with standards bodies like ETSI, and O-RAN, and OASIS, and at the end of the day, Kubernetes is an upstream project itself too.
So within that space, I think there are a lot of different kinds of opinions and needs. I can't expect, for example, every company to have Go developers, right? Or to be able to invest in the Go programming language. Telecommunication companies are kind of interesting: some of them do a lot of development in-house, and some of them specifically outsource development elsewhere. There's a variety, and because of that variety, I think it's important to also provide a variety of solutions and approaches to doing this. So as I said, you could use Ansible operators. That's a great start if you're already invested in Ansible and designing playbooks. If you're a big Python house, there are solutions for that too. The main thing I want to emphasize is that the API server in Kubernetes is language agnostic. And that's a huge advantage in terms of plugging into it. It also works well with the microservices approach to development, and Agile, where you have teams maybe working on a very specific unit, a functional unit, a network function, or an operator. And they might be using different technologies from other teams, for various reasons. Maybe one is written in C++ and working directly with drivers; another is written in Ruby because it's talking to some middleware database system. But all of these can work together within the ecosystem of network functions and Kubernetes operators. The kind of work that I do involves a lot of evangelism and promotion, and I emphasize the truth, which is that there's a diversity of ways to approach this.

No, and I think that's great. I think that's one of the things about having the Office of the CTO: having people be in that arena and being given the time and space to do the big thinking and the promotion of these new ideas, and to help us bubble them up and get them out there into the universe. And Tal, that is one of the things I love about your talks: they make us think. We all have opinions and we're all opinionated, but I think sometimes having that overview of the landscape helps. You did a previous talk introducing TOSCA, and an AMA on that, and I'll throw the link in for that too. These are the kinds of talks that I think start discussions and conversations and hopefully drive some standardization and some really useful innovation into the space. So I am grateful to you for taking the time to, first of all, write that wonderful paper, which I will send out over the internet along with a post, and for the YouTube video that will be up on our YouTube channel and on OpenShift Commons. We are definitely gonna have you back, and every time it's surprising to me, so I'm looking forward to whatever it is you're working on next, and I hope you'll come back. Thank you very much for today.

Thank you very much. I have a few things in the oven.

As always, yes, we'll definitely give you the podium and do this again. So everyone who's out there listening on the internet, we'll put all the links up and the slides shortly. And I think the raw video will be up on YouTube almost immediately, because we're live streaming to YouTube. So Rico Suave, thank you for your questions, and everybody else, Paul, thanks for being here. We'll have you back again soon. Take care, everyone. Thank you, everyone, bye.