Hey, welcome everybody. This is our talk on how we build managed services at Red Hat, at least our flavour of it. We'll kick off by telling you a little bit about ourselves and why we have some experience that lets us talk about this subject today. I've worked on a bunch of different services, managed ones more recently. My name is David French, and I have five years of experience building managed services at Red Hat. My journey is similar to David's. I originally worked on the mobile SaaS platform, Red Hat Mobile Application Platform, then moved to Red Hat Managed Integration, and over the last two years I've been working on what most people know as Managed Kafka; OpenShift Streams for Apache Kafka is the marketing name for it. I was a principal software engineer, the team lead on the previous managed services I worked on, and the architect of Managed Application Services. Most recently, about a month ago, I moved into people management, where I now manage the control plane team.
Sorry, I didn't properly introduce myself alongside David; we've worked together for years. Okay, so let's set things up. What is a managed service, first of all? It's software as a service: a managed offering, SaaS, where ultimately some service is managed on the customer's behalf, usually behind an API. Our customers interact with that API, and they expect a certain level of service. Customers expect a service level agreement, an uptime they can track, because they're paying for this service. That's what we're talking about today. In terms of the level of technical knowledge required for this topic: no expert knowledge is needed, though if you do have experience with any of these areas it will help, and we'll touch on a couple of Kubernetes terms. And if you have run anything in a production environment, where users have hit issues against a service running in production, you should hopefully recognize some of these problems. Hopefully that sets the context; let's get on to the content, because we're going to cover a lot of areas. David, do you want to say anything about that? Yeah, thank you. So here's what we're talking about today. As you said about the level of knowledge: we wanted to pitch this talk so we could give it to a very broad audience that maybe doesn't have much experience with some of the concepts. What that means is we're going to talk at a very high level about all of these different areas, from architecture to security to billing to development best practices to observability. And even if you don't have experience building managed services, if you have experience building software, you might say, hey, hold on a second, these things apply to me.
And you're not wrong. There's a lot of overlap between what it takes to build a managed service and what it takes to build software in general, and we will touch on each of these things. We only have 50 minutes for this talk, so it won't be an exhaustive, in-depth treatment of every subject. The intention is that this gives you a jumping-off point: if there's a topic here that interests you, you can deep dive into it yourself. There have been other talks during this conference that deep dive into specific areas; today we have to stay at a pretty high level for pretty much all of the time we have. How will we know if we've done a good job? That's a good question. Hopefully everyone will leave here with a good sense of the amount of work that goes into creating a managed service. But we don't want you to feel overwhelmed; we want you to feel like it's achievable, and we want you to understand that working with others is important for achieving it. Every organization, every group, every team is going to be different, so you need to find your own implementation. This is how we do it in Managed Application Services at Red Hat; some of these things will work for you, some won't, so find what works. So, the first area: architecture. Just before we get into architecture, one of the core aspects of this talk, and something we'll touch on throughout it, is having a customer-centric mindset. That matters in software generally, and especially for managed services. Put yourself in the shoes of the customer. Think how they are thinking: how would what you're doing today impact them? That feeds into architecture. One of the first things we need to explore as part of architecture for a managed service is tenancy.
Then ultimately tenant isolation and the noisy-neighbour problem, and then things like how we capture the architectural decisions we make along the way. So let's dig in; on to the next slide: tenancy. This may differ from what you're familiar with if you build software that customers deploy themselves. First, what is a tenant? A tenant is a customer. And when we talk about tenancy, we're talking about the tenancy model for your managed service. Single tenant is where a customer has an instance of your software that is isolated in a dedicated environment. Let's take our case as an example: a service instance for us is a Kafka cluster. In a single-tenant environment, that Kafka cluster would be deployed to an OpenShift cluster where no other customer's Kafka cluster is placed. Still managed by Red Hat, right, but dedicated. If we break down the fundamental building blocks we mean when we talk about isolation and tenancy, we're talking about resources: compute, meaning CPU and memory; network; and storage, including databases. Tenancy is ultimately about whether those underlying resources are shared between customer instances or not, and there are pros and cons to all of the models. In a single-tenant environment, a single Kafka cluster for one customer is deployed to a single OpenShift (Kubernetes) cluster, and a second customer gets their own dedicated environment. What's the opposite of that? A multi-tenant environment, where resources are shared. For example, I'm not sure how familiar you are with Kafka, but within Kafka you have topics, where you produce and consume messages.
In a multi-tenant environment, what that would look like is a single Kafka cluster where each customer gets one or more topics in that one Kafka instance, against which they produce and consume messages. Now, what does that mean? You need to deal with isolation: you need to make sure that one customer cannot access messages on another customer's topic. These are the things you need to think about when you're thinking about architecture and tenancy. There are pros and cons, and which model you choose depends completely on your use case. For single tenant, the pro is really that it's easier: you don't have to deal with isolation between tenants. But it's going to cost you more. If you have an OpenShift cluster, you're going to have master nodes and infra nodes that are dedicated, and you're going to have that overhead for every single service instance you deploy on behalf of a customer, so it ends up costing more than a multi-tenant setup. Then there's the mixed model, where some of the resources may or may not be shared; it covers a broad range. This is the model we use within OpenShift Streams today. We have a fleet of several OpenShift clusters that we call our data plane, and we deploy customer Kafka instances together on each of those clusters. What that means is that a customer has their own Kafka instance that they don't share with another customer, but it is deployed to the same OpenShift cluster, meaning underlying resources are shared between those Kafka instances. So there's some isolation, and costs are shared. Next slide, please. Given that, it makes sense to talk about how we deal with isolation, specifically in our scenario.
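To make the multi-tenant isolation problem concrete, here is a minimal sketch of prefix-based topic isolation. This is an illustration only, not the actual OpenShift Streams implementation; the `Tenant` class and `can_access` function are hypothetical names for the kind of authorization check a multi-tenant broker must perform on every produce and consume request.

```python
# Sketch: each tenant may only touch topics under its own prefix.
# Hypothetical names for illustration, not the real implementation.
from dataclasses import dataclass


@dataclass(frozen=True)
class Tenant:
    tenant_id: str

    @property
    def topic_prefix(self) -> str:
        # Topics are namespaced per tenant, e.g. "tenant-acme.orders".
        return f"tenant-{self.tenant_id}."


def can_access(tenant: Tenant, topic: str) -> bool:
    """Authorize produce/consume only for topics under the tenant's prefix."""
    return topic.startswith(tenant.topic_prefix)


acme = Tenant("acme")
assert can_access(acme, "tenant-acme.orders")
assert not can_access(acme, "tenant-globex.orders")  # another customer's topic
```

In a real Kafka deployment the equivalent enforcement would be done with broker-side ACLs rather than application code, but the shape of the check is the same.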
Again, coming back to those fundamental resources, compute, network and storage: to handle isolation we use some Kubernetes and OpenShift concepts. We dedicate worker nodes on an OpenShift cluster to specific Kafka instances, which means the compute, the CPU and memory, is not shared between different customers' Kafka instances. What about the network? That's a good question. The network is shared, and there's a scaling story there that I won't get into right now. But one thing we don't share is Ingress: we have separate Ingress controllers for all of the traffic to the deployed Kafka instances, separate from the standard cluster Ingress you'd use to reach the console on OpenShift. The network itself is still shared between all of the Kafka instances on an OpenShift cluster, and we handle that with quotas: there is a maximum data transfer rate per Kafka instance, so we know, for a given number of Kafka instances, how many Ingress controllers we need, based on load testing and scale testing we've done. And the last one is separate storage: each Kafka instance has its own storage volumes. Okay. Now, imagine a world where you're not building one managed service; maybe you're building two, three, four, a whole suite of managed services. When you think about architecture, how do you reduce duplication across all of these services, so that you don't have all of these different teams in a large enterprise solving the same problem? That's a waste of effort, is it not? What are ways you can solve this in a company? For our APIs, the API contract definition is an OpenAPI spec, which is the source of truth, and from it we automatically generate SDKs.
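As a rough illustration of what SDK generation buys you, here is a sketch of the kind of typed client a generator might emit from an OpenAPI spec. The class name, endpoint path, and fields are invented for illustration; this is not the real OpenShift Streams SDK.

```python
# Sketch of a client "generated" from an OpenAPI spec: every consumer
# reuses this one typed client instead of hand-writing HTTP calls per team.
import json
from dataclasses import dataclass
from urllib import request


@dataclass
class KafkaInstance:
    id: str
    name: str
    status: str


class StreamsClient:
    def __init__(self, base_url: str, token: str):
        self.base_url = base_url.rstrip("/")
        self.token = token

    @staticmethod
    def parse_instance(data: dict) -> KafkaInstance:
        # Generated models shape and validate the response in one place.
        return KafkaInstance(data["id"], data["name"], data["status"])

    def get_instance(self, instance_id: str) -> KafkaInstance:
        req = request.Request(
            f"{self.base_url}/api/kafkas/{instance_id}",
            headers={"Authorization": f"Bearer {self.token}"},
        )
        with request.urlopen(req) as resp:
            return self.parse_instance(json.load(resp))
```

When the spec changes, regenerating this client propagates the change to every consumer, which is exactly the duplication-reduction argument made next.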
How does that reduce duplication? All of the API consumers can use the SDK rather than implementing their own logic for calling each endpoint. It also means that as soon as we change the OpenAPI spec and bump its version, a new SDK is auto-generated and released that those clients can pick up when they need it. Another way to reduce duplication is common functionality shared between different services. Think about things like authorization — there was a great talk on that just an hour or two ago — which is an example of where you can have a shared service handle common pieces of functionality, such as authorization, for more than one managed service. What about quota and billing? How do you know what the customer signed up for in your managed service, and how many service instances they are still allowed to create? When you think about that: should it be a library, or should it be a service? There's a very easy rule of thumb, though it shouldn't be the only thing you take into account.
The easiest way to think about it is whether it's stateful or stateless. In the quota scenario — whether you're allowed to create another service instance — that's stateful, and different managed services may need to request that information, so a shared service makes more sense than a library. For other common functionality, a library might make more sense. Lastly in architecture, and this does not just apply to managed services — if you take only one thing away from this talk, let it be this — architectural decision records, ADRs. All of us as engineers in this industry make architectural decisions daily, weekly, monthly, whatever it may be. That's great, and maybe we document them and maybe we don't. But the thing is, in a year's time, you — or another engineer or team in your company — might want to find out not just what architectural decisions you made, but why. Why did you make that decision? Why did you discount the alternatives at the time? I guarantee that if I came to anyone a year after they made a decision, they couldn't tell me better than I could. That's the purpose of ADRs: to capture the why, and to keep a historical decision log of all of the architectural decisions for your service. It will benefit you and everybody else in your company. Okay, so we've defined an initial architecture; everyone can relate to that. We're not done, mind you: we'll keep coming back to architecture as we go through development cycles. So, now that we know what our initial architecture looks like, let's see what development best practices look like. Can you change configuration without having to release your service again? Is it possible for your service to scale with little to no manual intervention, or does someone have to get paged to manually re-tune or scale
something yourself? You want to avoid that if at all possible: push this work back into the development cycle and build for fault tolerance and scale from day one. So, how do we build in fault tolerance from day one? A few technical examples. The first is simply multiple replicas: scale your pods out by increasing the number of replicas; your app needs to be stateless to be able to do that. Building on that, use multiple AZs, multiple availability zones, in the underlying platform. Nodes carry a failure-domain (availability zone) label, and by setting scheduling rules against that label, your pods only get scheduled so that replicas are spread across nodes in different availability zones; if one AZ has a failure, you have replicas elsewhere to keep things going. Another example of how to build fault tolerance is using a PodDisruptionBudget. This helps with things like Kubernetes node upgrades and node draining: you can specify, for example, the minimum number of available pods for a particular application. So if Kubernetes is about to drain a node, and that node holds the last running instance of a given pod, and you have this set, it's not going to kill that pod; it's going to wait until at least one instance of that pod is running somewhere else before proceeding. That's a flavour of what you can do, and that story of fault tolerance is part of how you preserve uptime during underlying cluster upgrades. Because, as we talked about at the beginning, in an SLA the customer has an expectation of you, a service level agreement — we'll talk about SLOs and so on in a bit — but you need to be able to upgrade your application, and to upgrade the cluster, while keeping the customer's service available. A good indicator that you might need to rethink things is if you need to schedule upgrades and tell the customer when the upgrades are
happening. You should be thinking: okay, why do I need to tell them? Because there's going to be some downtime. Okay, can we eliminate that downtime? It's a bit of a mindset shift. I should mention there's nothing new in what I'm saying here; it all maps back to ideas like the twelve-factor app. I have links in some of these slides that many of you may want to check afterwards; these are very common principles. Earlier in the process, how do you plan for scale? You know you'll want to scale at some point; should you prepare for that up front? Yes, that's pretty much the answer: you want to try to build for scale. You want to eliminate toil, to use that SRE term, wherever possible: manual intervention, the idea of waiting for somebody to come along and scale things by hand. For storage, you can specify initial capacity, and there are ways to put limits on volumes; sometimes it makes sense for customers to push those limits, and that's a very common conversation. For compute, set resource requests and limits; I don't have an example here, but that's a very common thing to specify, with the usual caveats. A good example of this happening in production for us is scaling node pools and adding extra nodes. This draws on OpenShift Streams: at the scale we run, we need to have some slack capacity to allow for new customers bringing up Kafka clusters, but sometimes a lot of people come along and create clusters at once, so we have automation in place to handle that. One quick point on the alternative: if you always keep the nodes in place for peak demand, you are paying money to your cloud provider for whatever those nodes are, so it's ultimately more cost-effective to be
able to scale the number of nodes and bring them up and down based on demand. The other consideration is your degree of control: depending on your platform, that can be difficult to control directly, so do some load testing on your service to get an idea of where the limits are. In the case of Kafka, there are quotas you can put in place, and the answer will differ depending on your tenancy model. So those were some development best practices. The next important area is observability. I don't think I'll do it justice here, but I'll cover a few important things that bring this area into the development process. A few questions to consider. Do you know if your service is doing what it should do? Can you look at your service right now in production and say for certain that it is doing what you expect it to do, what users expect it to do? Do you have those signals in place? Can you look at data and say users are not experiencing issues? Sometimes customers will raise issues; sometimes they won't, because it's not a huge issue to them. In those grey-area scenarios, how do you pick problems up, how do you detect them? And when there is an outage, how quickly can you find the cause of the issue? Observability is what lets you answer those questions. Some observability fundamentals: it's all about knowing the internal state of your service from the outside, for example through the metrics it exposes. The stack we use in our services today is the common Kubernetes one, Prometheus metrics and the tooling around it, which gives us a lot of very valuable data. On top of the raw metrics, we try to define certain indicators: signals that correlate with what the user actually experiences. In the case of Kafka, is the user able
to produce and consume messages? Can we get signals for that? Those are the kinds of questions to think about. Building on that, you define objectives around those indicators. And here we're going to embrace something: we're going to accept that failure is going to happen at some point, be open about that, and say we're okay with it within some bounds. If you can detect that you're trending towards breaching those bounds, you can act before customers notice. This is a mature practice — it takes a long time to get to that point — but this sets you on the path. That brings us to SLIs and SLOs. Some of you may not have heard of these, and the easiest way to think about them is this: an SLI, a service level indicator, for your API's availability would be the response codes of requests — whether a request returns a 200 and is successful, or a 500 and is not. That's your indicator, and it's just a metric. The objective, the SLO, is what you're aiming for on behalf of the customer: for example, you want 99% of requests to be successful, aka 200s. That's probably the easiest way to think about what an indicator and an objective are. What about error budgets? Good question; we'll get to that in just a second. Of course, we could just insist that we don't create bugs and all our code is perfect... yeah, exactly. So, error budgets: when you have those objectives, you can use them to base policy on across the various teams — not just the engineering team, but the support team too. When we burn through the error budget for a particular period, that's the signal to engineering: stop what you're doing, and prioritize work that fixes reliability. Or, I mean, we could just not introduce bugs — let's guess how that goes. Okay. So we have our observability in place; next, let's look at continuous
deployment. How long does it take for a change to go from merged to being available in an environment? You want that to be as automated as possible; if it relies on manual steps and hope, that is not a great thing, and if your development process has that gap, you should fix it. Does your pre-production environment look like production? In other words, does your local setup look like staging, and does your pre-production look like production, as much as is feasible? Obviously you're not going to bring up a full production-scale fleet for development. What should people do for development, then? It's a good question, and it's a hard problem specific to OpenShift Streams: in our scenario we stand up an OpenShift Dedicated cluster where people deploy the whole stack to a single cluster. It's an acknowledged pain point, one of the awkward parts, simply a limitation of what fits on your machine. Depending on your individual component, I will absolutely run it on my local laptop, and for most things I will point at the shared environments for the services we integrate with, things like SSO. But what we also want is the ability to move fast safely — that's important, to be safe — and to allow promotion between environments. The visual on the slide shows the flow through the environments up to stage and beyond: a developer comes along and creates a change as a pull request, someone approves it so it gets merged, and ideally integration tests and so on run continuously as the change is promoted upwards. A couple of other things: you're going to create support issues, and you're going to lose customers, if you get this wrong. So whatever you do, don't "move fast and break things"; don't follow that saying, just don't do it. This is a service that people pay for. There's a contract, the service level agreement, that tells the customer what to expect from the service, and if you do something
that breaks that contract, it comes back on you. That's something we talked about earlier: we have the stated objective of 99%, and if we miss it, we pay the customer back some amount of money. That's the contract our customers sign with us. Just to clarify the SLA a little: the customer signs it, and it is different from the service definition; if you fail to meet the SLA, the customer is refunded according to it. And we're not going to move fast and break things in our APIs either. We use semantic versioning, and the major, minor, and patch numbers mean very specific things; changing each of those numbers communicates something very specific to consumers. As much as possible with a service, changes should be backwards compatible. If you change something incompatibly, you version the API so you don't break existing consumers, keeping the old version available while clients move to the new one. On to the next area: security. Alright, thanks for that story, David. It's not easy to build a managed service, because you are responsible for the security of your customer's service, which you are running on their behalf. There are two general aspects when we talk about security — again, at a high level; you could do a whole talk on security alone, but we're touching on it briefly. Broadly, the two aspects of security for a managed service, or for any software, are compliance, and then vulnerabilities, which can be in your dependencies, in your service's surface area, or in your code itself. Let's talk compliance. What do we mean? I think everybody's familiar with the word, and there are two different types of compliance.
You will have some compliance that applies to managed services in general, and then compliance that allows you to sell within a specific industry or a specific region. Things like FedRAMP are the type of compliance that opens up, for example, US governmental agencies, which can only purchase your managed service once you meet it. So what happens if you don't have the compliance? You just can't sell to those customers. Then there's standards compliance, such as ISO/IEC 27017, which lays out standards for cloud service providers, CSPs. That doesn't necessarily open up new groups of customers, but some customers will look for that compliance before they will even consider purchasing your managed service. For industry and geographical compliance, think about things like HIPAA; I think most people from the States are familiar with it. It's not cloud-service or managed-service specific, but it is a type of compliance you will need to meet if you want to sell your software within the healthcare industry. The baseline there is that the data centre needs to be HIPAA compliant, then the infrastructure provider on top of it, and then your service on top of that, before you can talk about being HIPAA compliant. So that one is tricky. 100%. And this is one slide summarizing things that are each large on their own: if you were to look at what's required for FedRAMP alone, it's a lot. The last one, in terms of regional compliance, is GDPR, which I think most people are familiar with these days around data regulation: if you want to make your service available within the EU, you need to be GDPR compliant. I believe there's also something in Australia where certain data cannot leave Australia at all. With this kind of compliance, I can only imagine how hard some of the processes are; you need to do the work and look at the laws. Do you have to keep a detailed description of what you're doing for that?
It sounds like you're auditing something, yes. So then, vulnerabilities, which I think are what most people are familiar with. As I said, there are two different aspects to this. There's the attack surface of your APIs in general — because when people hear about CVEs, those are generally found against popular libraries and the like — but what can you do specifically for your own APIs? You can do what's called threat modelling. Within Red Hat we have a full product security team, and what we've done for the OpenShift Streams service is work with them: we defined our architecture, they reviewed it, we identified what threats are possible against our APIs, and then we applied resolutions depending on the risk of each potential threat. "So do you mean that for each architectural decision record, someone from product security is actually looking at it too?" That's a great question. What we did initially was review the full architecture once it was defined. We went over it with them, and for each new architectural decision we ask the question: does this impact the architecture, and do we need to go back and revisit the initial threat model? "So incremental changes are okay?" Yes. Good question, though. So then, CVE resolution. Everybody in the room is probably familiar with the term, and maybe a few of you will flinch when I say the words Log4j, or Log4Shell. That's what we mean by CVEs, and that was probably one of the most serious ones in recent memory. For those not familiar: there was a vulnerability in the Log4j Java logging library that allowed remote code execution on the server your application was running on, via that dependency. Pretty serious, right? It was fixed in the library, there was a new release, and you had to update.
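The core check behind dependency CVE scanning can be sketched in a few lines: is the installed version of a library inside a known-vulnerable range? The version numbers below are illustrative (Log4Shell affected Log4j 2.x releases up to 2.14.1 and was first patched in 2.15.0); the simplified tuple comparison ignores pre-release tags like `-beta9`, which real scanners handle properly.

```python
# Sketch of the comparison a dependency scanner performs for each CVE.
def parse(version: str) -> tuple[int, ...]:
    # "2.14.1" -> (2, 14, 1); pre-release suffixes are ignored here.
    return tuple(int(p) for p in version.split(".") if p.isdigit())


def is_vulnerable(installed: str, first_bad: str, first_fixed: str) -> bool:
    """True if installed falls in [first_bad, first_fixed)."""
    return parse(first_bad) <= parse(installed) < parse(first_fixed)


assert is_vulnerable("2.14.1", "2.0", "2.15.0")      # affected by Log4Shell
assert not is_vulnerable("2.15.0", "2.0", "2.15.0")  # first patched release
```

Tools like the ones mentioned next automate exactly this: mapping your dependency tree against a database of such ranges and telling you the version to bump to.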
That's ultimately what CVE resolution is for your dependencies, the libraries you include as part of your application. And that's why you need to care: if you don't have anything in place to resolve CVEs in your dependencies, you're potentially leaving your whole managed service vulnerable in different ways. There are tools that will do this for you, depending on what you're using; there's Snyk, among others, which will inform you when any of your libraries are affected by CVEs and tell you what version you need to bump to. Whole businesses are built around this problem and how to solve it. Okay, watching the time: the last few topics are pretty short, so I'll go through billing quickly, and David will close out after that. Billing, from an engineering perspective — which is ours — raises the question: why do we care about billing? Well, there are two types of billing here. A customer can pay up front for a subscription, or pay after the fact based on consumption; your electric bill works like the latter. For up-front billing, engineering doesn't need to worry too much. But let's talk about consumption-based pricing, because that's what matters for engineering: it is tightly coupled with observability. You need to answer the question: how do you know how much your customer has used? Some of the dimensions we charge on are data transfer, storage, and cluster hours, that is, how long the instance has been up. And your usage data needs to be accurate: you can't just say "that's probably close enough". It must be accurate, otherwise the customer won't be happy, and that's where billing is tightly coupled with observability, which is how we determine usage for a customer's Kafka cluster in our scenario, where we do charge on data transfer, storage, and cluster hours. Next: support and escalation. So, we've now got our service in production; imagine all the issues that we have.
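The consumption-based model above reduces to a simple computation once usage is metered accurately: sum usage times rate over every billed dimension. The rates and metric names below are invented for illustration; the talk only says OpenShift Streams bills on data transfer, storage and cluster hours, not what the actual rates are.

```python
# Sketch of turning metered usage into a consumption-based charge.
# Rates are hypothetical; accuracy of the usage dict is the hard part,
# which is why billing is tightly coupled with observability.
RATES = {
    "cluster_hours": 1.50,     # per hour the instance exists
    "storage_gb_hours": 0.10,  # per GB-hour retained
    "transfer_gb": 0.05,       # per GB transferred in + out
}


def bill(usage: dict[str, float]) -> float:
    """Sum usage * rate over every metered dimension, rounded to cents."""
    return round(sum(RATES[dim] * amount for dim, amount in usage.items()), 2)


assert bill({"cluster_hours": 24, "storage_gb_hours": 0, "transfer_gb": 10}) == 36.50
```

The design point is that every number fed into `usage` must come from the same trusted metrics pipeline you use for observability, or customers will dispute the invoice.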
So if something goes wrong for a user, who finds out first? That's an interesting question, and the correct answer is: you find out first, through your observability, before the customer does. But that's not quite what I'm talking about here. Does everyone in your escalation path understand their responsibilities? We have SREs as a first line of support, but who do they contact next? Do you understand the layers of the service that you've built? Our service sits on a platform, and the platform team has its own escalation process, so it gets more complex when an incident has to hand over from one escalation path to another. Do you know what the escalation path is? Does everyone on your team know the role they play in it? Learn, through support and through incidents, the roles of the teams in the escalation. And whenever there is an incident or an issue of some sort, there should be a retrospective: what was the root cause? How do you prevent it recurring? What actions come out of it, and how do you prioritize them? If you don't have a real root-cause analysis, including the engineering work required to prioritize the fixes, you'll keep fighting the same issues. Fire drills: they're great for finding the various gaps in this escalation path. By now you hopefully have these things in your mind, which is good. What we run is something called Kombat sessions, originally named for Kafka, but we run them for other services as well. What does a session look like? You set up a scenario with your service where something has gone wrong. The various people on the team play support, docs, SRE, and engineering, and they don't know in advance what has happened.
They have someone acting as a customer, saying, hey, I'm seeing this particular error in my client, and then the team works through it. The most difficult thing as an engineer in these sessions is actually staying in character. As an engineer you have to let the issue go through the support person, and let the support person escalate to engineering, because in a real incident they won't be sitting right beside you. Those fire drills are excellent training. We're nearly out of time, unfortunately, so for the last topic I'll only give a very quick overview, and I'm sure the engineers in the room won't mind, but it is important: agility. The one thing I will say is that agility, the ability to switch priorities quickly rather than locking plans into two-, three-, or four-week sprints, is about customer retention. Customers have requests, there are bugs you need to fix, and being agile in the way you plan and prioritize your work will help you retain your customers. Now, you might say: this isn't about managed services, this is just about building software, this doesn't pertain to managed services at all. And that's really the point. This talk is the tip of the iceberg for all the topics we've touched on, and there's a lot of overlap in everything we've covered between managed services and just developing software. What we've talked about is our flavor of building managed services, specifically OpenShift Streams and the other managed services at Red Hat. So take what's useful from today, and whether you're an engineer, a product manager, whatever it may be, you all need to be in that customer mindset from day zero. Thanks very much, gentlemen. Do we have time for questions? Any questions? One, two; this gentleman had his hand up just beforehand. Please walk up to the mic and queue in order. How long do you leave the old API up after deprecating it?
Yeah, that's a great question. Our current deprecation policy says six months is how long we will leave the old API up. I'll be honest with you, we haven't been the best at that, and we have skirted around it a little bit; that was maybe okay back when we had only internal customers and the ability to talk to them directly, but don't do what we've done. Have a deprecation policy and stick to it, because that allows all your engineers to understand and reinforce what the commitment is meant to be. So you need a deprecation policy precisely because of that scenario? Exactly, yeah. Don't do what we did. What's your unique selling point here for managed services? Because there are a lot of vendors offering managed services, things like RDS, and some of them are really great examples of managed services. Yeah, perfect. So just to make sure I have the question right: what's the differentiator for OpenShift Streams? It's a really good question. We're still very much in the early stages of building managed services at Red Hat, and we have a lot of experience with building managed services here, but there are other market leaders. We mentioned Confluent; that's a big market leader in the managed Kafka space. There's also Amazon MSK and other providers. The intention behind Red Hat's offering is to build a suite of services that customers can use together. We're not just offering them managed infrastructure with OpenShift and so on; we're offering them the ability to move data with Kafka.
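The six-month deprecation window discussed a moment ago can be made mechanical rather than a matter of memory, which is one way to avoid "skirting around" your own policy. A small sketch, assuming a hypothetical table of deprecation announcement dates:

```python
# Sketch of enforcing a published API deprecation window.
# The version table and dates are hypothetical.

from datetime import date, timedelta

DEPRECATION_WINDOW = timedelta(days=182)  # roughly six months

# version -> date deprecation was announced (None = still current)
DEPRECATED_ON = {
    "v1": date(2022, 1, 10),
    "v2": None,
}

def may_remove(version: str, today: date) -> bool:
    """True once a deprecated version has been announced for long enough
    that removing it honours the published policy."""
    announced = DEPRECATED_ON.get(version)
    if announced is None:
        return False  # current (or unknown) versions are never removable
    return today - announced >= DEPRECATION_WINDOW

print(may_remove("v1", date(2022, 8, 1)))  # window has elapsed
print(may_remove("v1", date(2022, 3, 1)))  # too early to remove
```

A check like this could gate the release pipeline, so an old API version can only be dropped once the clock has genuinely run out, reinforcing the policy for every engineer automatically.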
We're offering them Service Registry for schemas, and connectors to be able to hook a Kafka cluster up to external services, and there are other services coming as well. So really, the differentiator with Red Hat is building a suite of services so that customers don't just consume infrastructure, but can get an application to production quicker. Good question. Just one more, please. Okay, I'll hand back to you. Well, we appreciate your time. Thanks very much. If you have any other questions, please don't be afraid to talk to us afterwards. Thank you.