Let's get into the big topic of open source, something that we actually have in mind. This is so awesome. We are an open culture that is actually able to fix the processes that a developer, or, let's say, the Kubernetes ecosystem really brings. Welcome to this week's Ask an OpenShift Admin office hours live stream. As always, I am Andrew Sullivan, the co-host of the stream, and as always, joined by Johnny. Johnny, happy Wednesday. Happy Wednesday, my friend. How are you? You know, still vertical, so I can't complain too much. It's another great Red Hat day, and that always makes a difference. That's right. How's the weather out there? Like the weather here in San Antonio, it's a little chilly right now, but it's gonna be like 82 today, nice and sunny. It's great. It's that time of year. Okay. So last week it was so funny, because Tuesday through Friday was just all gray all the time. And for me, I'm one of those people who wakes up best with the blinds open, right? Just let the light do its thing, and I wake up and I don't feel groggy or anything like that. So all week last week I was struggling to get up at my normal time, which is usually like 7, 7:15 in the morning. And then Saturday morning it's bright and sunny out, and of course, first thing in the morning I'm awake. And then Sunday, of course, we had the time change. So yeah, it's been fun. Ruined everything. Yeah. So glad that you're having good weather, though. Oh yeah. So hello, Khalid. Thank you for joining us again. Nice to see you this week. And hello to everyone else in our audience. So we are excited to be here. I'm happy to welcome one of my peers on the technical marketing team, and that is Ortwin Schneider. So today's topic is Red Hat OpenShift Service Mesh.
This is something that we've had on our calendar for a long time, and Ortwin happens to be one of the top subject matter experts inside of Red Hat. So very happy to have you join us, Ortwin. Sorry, I just realized that I had to close that as you were taking a drink of coffee. Thank you for having me. So yeah, this is one I'm looking forward to, because if I'm being honest, I really don't know much about service mesh. I can tell you kind of what the purpose is and kind of the components, but that's about it. So this will be an interesting one. Yeah. I mean, I have to honestly admit that I'm not sure if I'm right here. So this is Ask an OpenShift Admin, and to be honest, I'm more a developer; at least my background is more development. So I thought I could ask you some silly questions about service mesh, at least at the infra level. No, I'm just kidding. We'll learn together. What's the worst that happens? This is going to be amazing. I hope for interesting, you know, discussions. And I mean, I think service mesh is for both, right? There are aspects for developers, and there are a lot of platform aspects. But just to mention, I'm also not that deep on the infra side; I'm more from the developer perspective. And I'm really interested in the audience today. What is the experience really like? How are they using it, and how is the separation of how they use the service mesh? What are the ops people doing, what are the dev people doing, where's the handoff with things like this? Because there's a lot of discussion and confusion everywhere around it, right? So, yeah. Yeah. And it's interesting that you bring that up, so I'll use that as a segue into our top-of-mind topics. And I'm going to surprise both Johnny and Stephanie behind the scenes by adding one at the very last second. Actually, Johnny already kind of knows about this one.
So let me share my screen here. I want to share this guy. And the first thing that I'm going to bring up here is actually a blog post; Johnny and I discussed inviting the author, Christina, onto Ask an OpenShift Admin. And I think this is a really interesting topic, and very much in line with what Ortwin just brought up. Right? Which is, you know, hey, we're not creating, deploying, managing... we're not doing this job as an OpenShift administrator for ourselves. We're doing it for the applications. We're doing it for the development teams. And I thought this blog post is really interesting because it very much takes the same approach that we try to take here, which is: hey, reach out and talk to those folks, find out what they're doing, so that you can work together and figure out the best solutions for all the things that are going on. So I would definitely encourage you to check out the post and read through it. Again, we will reach out and see if we can get them to join, and we can have a conversation about it and all of that. Let me post the link here. Yes. I want to watch on Twitch now. I want to watch myself. That's not strange at all. Yeah. She did a really good job of breaking it down, right? And really simplifying, not really a complex process, but just complex conversations with customers. Yeah. And I think she did an awesome job on that. Yeah. And it's something, you know, even I sometimes forget who the customer is when you're an administrator, which is those app teams, those developer teams. And, you know, last week we were talking about BGP, right? And, you know, hey, go talk to your network admins, go buy them coffee, go bring them donuts or bagels or something, and make friends. The same thing with storage teams, and so on and so forth.
You know, if you're not the virtualization admin, the same thing with the virtualization team, on and on and on. So yeah, it's important to remember that and, of course, to build bridges, so to speak. Yeah, for sure. It's all about the "so what" to everybody, right? Because everybody matters at this point. Yep. Exactly. Okay. So after my little surprise there, let's talk about a couple of other things. First, we were a little bit mum about it because we kind of had to be, but OpenShift 4.10 is GA. It is fully released. I think the bits hit the mirror maybe 20 minutes after we went off stream last week. So we knew it was in process, but that process takes something like 12 hours, so we didn't know when it would actually go out, and I didn't want to spoil it or otherwise cause frustration if something happened at the last minute that caused it to get delayed. So surprise, OpenShift 4.10 is GA. And if you would like to upgrade today, you can switch to the fast channel and do that: OpenShift 4.9 to 4.10 upgrades are available. Remember, the fast channel is fully supported, so if you encounter any issues, you can pick up the phone and call us and we'll help. But stable will take a little bit of time. We usually talk about this with every new release of OpenShift: it takes somewhere between 45 and 60 days for a new release to reach the stable upgrade channel. But remember, that doesn't mean new deployments of OpenShift 4.10, or upgrades in the fast channel, aren't supported; they're both still fully supported and generally available, and so on and so forth. From our hope nine: apart from being able to enforce security within the cluster, it enables tracing, which helps focus on the right components. Yep. Yeah. Our hope nine, I know this is a topic that you've been looking forward to in particular for a long time.
So I know you're excited to have Ortwin on and be able to ask those questions and talk about it. "Switch back to stable 4.10." Yes, that is a good option and a good reminder. So as our hope nine pointed out there in his last comment, you can switch to the fast channel to do your 4.9 to 4.10 update and then switch back to the 4.10 stable channel, right? And that's a fully supported process as well. And then you'll pick up only the stable releases for 4.10 after that, depending on how long it takes for 4.10 to become stable. The precise process they use to decide when to go from fast to stable, or even candidate to stable, is unknown to me. And every time I ask engineering, they kind of give me this weird look, like: why do you want to know that? You don't need to know that. Just trust us when we do it. What I know is that effectively it is based on a number of things. One, of course, is how many times people have gone through the upgrade process. Right? So some of it is based simply on people going through the updates and all of that, and seeing if there are any problems: how many tickets were opened, how many BZs have been found, what the severity of those BZs is, and so on and so forth. And it isn't solely dependent on that, because of course, in the background, the CI systems and the engineering folks are constantly doing upgrades as well. Literally, CI is burning through different deployments and upgrade scenarios every moment of every day. So it's really a confidence thing: when they're confident it is going to work with minimal issues for the widest set of folks is when they make that decision. Let's see. What's next? Oh, good. Another CVE. So last week we talked about a CVE, and now which one is escaping me, and I don't have that note pulled up. I did check it before we started the stream today, and I did not see any updates since last week.
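The fast-then-stable dance described above can be sketched in a few commands. This is a hedged sketch, not an official runbook: it assumes a connected cluster, cluster-admin rights, and the usual `<channel>-<version>` channel naming.

```shell
# Sketch: update via the fast channel, then settle back onto stable.
oc adm upgrade channel fast-4.10     # opt into the fast channel
oc adm upgrade --to-latest=true      # kick off the 4.9 -> 4.10 update
oc get clusterversion -w             # watch until the update completes
oc adm upgrade channel stable-4.10   # switch back; later updates come from stable
```

As discussed on stream, switching back to stable after the update is fully supported; you simply won't be offered another update until one lands in the stable channel.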
But there was a CVE that was recently announced, or released rather, and it is CVE-2022-25636. I'll go ahead and paste this link over into Twitch. So this seems to me to be a fairly severe one, right? It is a 4.7 on the impact score here. One thing to note here, and Johnny, please correct me if you have anything to add: as we see down here, RHEL 8 is affected. Right? And remember, CoreOS is based on RHEL 8, so CoreOS and OpenShift will also be affected by this. So please do keep an eye on the update channels for the z-streams of whatever y-release you happen to be using. But you'll notice that the main mitigation here is to turn off user namespaces. And there is a very handy note here that says, well, if you're using containers, you can't turn off user namespaces. Well, of course, OpenShift uses containers, so the mitigation here doesn't apply. I don't have any further details or information on that at the moment; I'll continue to keep an eye on it. If you have any questions, feel free to reach out in the usual ways: Twitter, Reddit, Kubernetes Slack. There's an OpenShift users channel in the Kubernetes Slack, so you can always reach out and ask for clarification or ask questions. And of course email, Andrew.Sullivan. But that's all I know for now. Johnny, do you have anything? No, that covers it. Okay. And, I'm sorry, I was going to say: in the link, or in the chat, I put the CVE for last week, the Dirty Pipe one, and there is a fix for RHEL 8. It came out, it was released March 10th. Sorry about that. No, no worries. Thank you for posting that, because I did not see it. So usually what that means is, let me think back to the last time this happened: it takes somewhere between one and four weeks for a RHEL patch to make it into CoreOS and into OpenShift, depending on all of the things that are going on, and so on and so forth.
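For a standalone RHEL 8 host (not an OpenShift node), the user-namespace mitigation mentioned above boils down to a sysctl. This is a hedged sketch; the drop-in file name is just a convention, and, as the discussion notes, it is not an option where containers are running.

```shell
# Check whether unprivileged user namespaces are currently allowed.
sysctl user.max_user_namespaces        # 0 means they are already disabled

# Disable them persistently. Do NOT do this on an OpenShift/RHCOS node --
# the container runtime needs user namespaces, as noted on stream.
echo 'user.max_user_namespaces=0' | sudo tee /etc/sysctl.d/99-disable-userns.conf
sudo sysctl -p /etc/sysctl.d/99-disable-userns.conf
```

On cluster nodes, the practical path is the one described above: watch the z-stream release notes for the patched kernel instead.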
So again, keep an eye on the z-streams that are released, and keep an eye on the release notes for each one of those to see which ones it is patching and affecting. Next up, a quick one. Sandboxed containers, which, remember, we talked about in episode 50 with Adel Zaalouk: sandboxed containers are now generally available. Yay. So if you happen to be using, or have a use case for, sandboxed containers, remember that is a fully isolated container, right? It basically uses KVM, virtualization, to create a fully sandboxed, fully isolated container instance, with a separate kernel and everything inside of there. So maybe important for some high-security-type scenarios, or various other things where you're running untrusted code inside of there. So yeah, be sure to check that out if it's interesting to you. And I've got a link to episode 50 here; I'll just copy that in. Oops, wrong window. So I'll post the YouTube link there for episode 50 if you want to go back and review sandboxed containers. Let's see. Two quick ones left. Johnny, Podman 4. Yep. So Podman 4, we talked about it last week when Podman 4 was released and there's a bunch of support. But I was going back through the blog and I saw that Podman 4 is not going to be released on Fedora 35. It introduces a breaking change, and Fedora has a long-standing policy that if there's a breaking change, they won't allow it to be released. So if you're on Podman 3 on Fedora 35 and you want to upgrade to Podman 4, you cannot; it won't be available. And then there's another release today, 4.0.2. If you're a Mac user and you're using Podman, it now allows for volume access: you can run podman run with --volume and mount your Mac volume into your container. Very cool. Yeah, I have to admit that I use Podman, but I use it in VMs and stuff like that, to run things like a web server.
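Here is roughly what the new Mac volume support looks like in practice. A hedged sketch: it assumes Podman 4.0.2 or later on macOS, the `-v` flag on `podman machine init` being available in that build, and `$HOME/site` is just an example path.

```shell
# Recreate the podman machine with the Mac home directory shared into the VM,
# then bind-mount a Mac folder into a container (volume mounts on macOS
# arrived with Podman 4.0.2).
podman machine rm --force
podman machine init --now -v "$HOME:$HOME"
podman run --rm -p 8080:80 \
  -v "$HOME/site:/usr/share/nginx/html:ro" nginx
```

Before 4.0.2 this bind mount would fail on macOS because the VM had no view of the host filesystem; that is the gap the release closed.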
I don't use it often on my laptop, and definitely not on my Mac laptop. Yeah, I hardly ever use it on a Mac. I mean, I'll do it just to kind of tinker with it, but generally I'm the same way as you: I'll run it on my Linux machine and go from there. All right. So last but not least, I'm going to switch the window that I'm sharing here over to my terminal. This one came from somebody internally who asked: hey, what are all of the openshift-* namespaces? What does each one of those do? And there is no documentation, or nothing that I'm aware of in the docs, that says, hey, openshift-apiserver is this, or openshift-storage is this, and so on and so forth. So there is a command, and am I sharing this? Yes, I'm sharing this. So if we share that terminal window, there we go. There's a command that you can use in order to see basically all of those things. If I do an oc get clusteroperator, or oc get co, I have this whole list of cluster operators running in my cluster, and you can see I'm running 4.10.3 here, which is the GA release. And for each one of these, I can basically identify which namespaces are assigned to it, or used by it, and some other information, just by doing a describe on the cluster operator. So if I do oc get... let's pick one at random, we'll go with machine-api, and I want describe, not get, and it helps if I tell it the resource type. So down here at the bottom we have a related objects section, and you see here I've got a namespace, and machine sets, roles, cluster roles. This one only has one namespace, and it happens to be openshift-machine-api. So that's the first hint: if I'm looking at oc get project, I've got this huge list of system-level namespaces, and that's the first indication that, hey, openshift-machine-api is owned by, well, the machine API operator. So what if I want to find out more information about that?
So there's a command, oc adm release info, and it gives me a whole bunch of information about the particular release that I'm using, and you can see in this big list here all of my cluster operators, all of the images that are associated with the release. That alone is somewhat useful, but what's really useful is if we use the --commit-urls option, if I spell it correctly. And here we now get the GitHub repo link for each one of these operators. So for the machine API operator, for example, we can see I can find it at github.com/openshift/machine-api-operator, and I can go there and find out a whole bunch of information about that operator and what it manages. And usually the devs are really good about including things like how to troubleshoot or how to get more information. If you feel like reading Go code, you can certainly dig in and see what's going on inside of there. So I thought that was an interesting one. I think we talked about this way back in the early days of the stream, I want to say somewhere before episode 10. It's been a while since I looked at this command; the last time I did, there were maybe 30 of these, and now there's, well, a lot. So an interesting bit of information, an interesting way to really dig deep into particular operators or particular functionality. Yeah, I didn't even know about that. That's an awesome command. Yeah, I don't remember how I found that out. It was either from engineering or from support; it might have been somebody like Eric Rich on the support side who told me that one. Yeah, that's pretty awesome. That's really cool, really cool, and again, learned something; I think I should watch this show more often. We always welcome new audience, always. Alright, I'll stop sharing now. That's all the top-of-mind topics we had for today. So yeah, let's talk about service mesh, and I don't want to steal your thunder or anything like that.
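The terminal walk-through above, mapping openshift-* namespaces back to the operators that own them, condenses to a few commands. A sketch against a connected cluster; the machine-api operator is just the example used on stream.

```shell
# List every cluster operator (oc get co is the short form).
oc get clusteroperators

# 'Related Objects' in the describe output lists the namespaces, roles,
# and resources an operator owns -- e.g. openshift-machine-api.
oc describe clusteroperator machine-api

# All images that make up the running release...
oc adm release info

# ...and the GitHub repository behind each operator image.
oc adm release info --commit-urls
```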
I think you had some slides prepared, Ortwin, and I have to say I love your title slide and the prettiness of it, so I won't spoil anything by asking you any pre-questions. I'll let you tell us about service mesh. Yeah, thanks Andrew. So let me just share my screen. I think I have way too many slides, so this is just kind of things we could talk about and discuss, but let's see how far we get; and also, really, if there are questions out there or things to discuss, feel free to ask. So let me share my screen, and give me a ping if something's wrong. Good to go? Okay, great. So today, as mentioned: OpenShift Service Mesh, and service meshes in general. This is a very extensive topic; some people say service meshes are as complex as the business itself. That's why there is some, I would say, restraint out there in the community, and the adoption of service meshes really isn't that high compared to other technologies. And this is for a reason. What I would like to do today is talk about service meshes at an introductory level, to give especially administrators an awareness of what it is, and why in the world would I need it at all? Is there a specific kind of workload characteristic where it makes sense, where it is appropriate, and where not? Who is using service meshes at all? If I decide to introduce a service mesh, who are the personas using it? I have developers, I have different types of platform responsibilities, and so on. And then, of course, we all kind of know that it is a platform service, so there is an additional service on top of the actual platform, and so there is overhead. What is that overhead? This is also a concern from many customers. So basically, what I wanted to do today is a brief 360-degree overview of service mesh: covering what it is in general, what a service mesh is, why do I need it, what are the key features and capabilities, and very typical usage scenarios of a service mesh. Then have a look at the specifics: how can you get it, do you need to buy it, how to install it, and things like this. Then consider a little bit the different types of applications: where is it appropriate or not. As mentioned, the personas and users, and overhead a little bit. I've also prepared a demo of mesh federation; this is a new feature in our latest release of Service Mesh 2.1, where we introduced the mesh federation features. So there is a demo out there we can have a look at, and I haven't installed it; my plan would be to deploy it together with you and see if my scripts and everything work out. And then also FAQ and roadmap items. I think it's way too much stuff, but this is at least my preparation and plan for today. Okay, yeah, this is perfect, because I can spell "service mesh" and I know that there's Istio involved in there somewhere, and I think Jaeger, and we've now reached the extent of my knowledge. So I'll let you move on ahead. And for our audience, for anybody who's watching us: please help us prioritize. If there are things that are interesting to you, that you would like to see or know, please feel free to ask us in chat, on whatever platform you're on. It doesn't matter if it's YouTube or Twitch or anything like that; all the chat gets aggregated and rebroadcast to all of them, so don't hesitate to post those questions. Yep. And Ortwin, I'm sure you're going to get to it, but Khalid did have a question about when he should or shouldn't use a service mesh, so kind of keep that in mind as you're talking through some of these things; maybe as you're going through your list you can kind of associate that. Yeah, we can definitely address it in certain aspects. And, as mentioned, the whole topic is very, let's say, complex and extensive, so if in the audience there are specific
interests in certain topics, like distributed tracing and so on, that could be something where we could create kind of a follow-up session, because there is a lot in every aspect; we could really go deep into some things. So this is really a generic overview, to get the awareness and context and just the understanding of what it is in general, but there is much, much more than what I can cover today, right? So, yeah, just to let you know. Then let us begin with the very basic question, if you really don't know what it is: what in the world is a service mesh? If I had to answer it in one sentence, I would say it is a programmable network. It's kind of an overlay on top of what's in OpenShift, the SDN. There are different implementations out there in the market, you know, there is Istio, there is Linkerd, and so on, but basically they all work the same way. You have a bunch of user-space proxies sitting, most of the time, as a sidecar container next to your actual services. So you have a bunch of proxies, and additionally you have a control plane: some management processes and components with which you actually manage your proxies, and which provide certain APIs for the proxies. Normally the proxies will access the management components to get some information, or to send some metrics to the control plane, but you also have several APIs to be able to configure the proxies out there. So if you reduce it to this, that is what a service mesh is: user-space proxies plus a control plane. And what the proxies do is intercept the calls; every bit of traffic going to a certain service, they intercept, and they do something with it. The feature set depends on the proxy you use, so depending on the service mesh implementation, you could use different types of proxies. Normally we are talking about TCP, layer-7-aware proxies, for inbound and outbound, so they act as proxies and reverse proxies. The focus is really on the inter-service communication, and this is, I would say, what differentiates it from API management or API gateway solutions: those are more focused on ingress traffic to APIs, where a service mesh focuses on inter-service communication in a, let's say, trusted environment. So basically, this is what it is. The control plane has certain capabilities, like different types of service discovery, TLS certificate issuing, and a lot more, like metrics aggregation and so on, but this is basically what it is. So this is just from their page here, the architecture. You see there's the data plane: all the proxies we have in our environment form the data plane, and this is where the actual traffic goes through and where we apply our configuration, some routing stuff, load balancing, whatever the proxy is capable of doing. So the communication goes from proxy to service, from proxy to proxy, and so on, and we can do a lot of nice, fancy stuff with it. But if you think about it: we now have, say, 1,000 services in our environment, and if we add a sidecar for every service, it sounds like a lot of overhead. So when you think about proxy requirements, of course the proxies should be very fast, because you're also adding latency: you add an additional two hops to every call, on the client side and the service side. The proxies definitely consume CPU and memory, and you also need to maintain and configure all these proxies out there. So these are the things to consider, and this is, I would say, especially if you are considering a service mesh, why a lot of customers are afraid of adopting it. Yeah, and it's important to think about, right? Because it can't get that information, you know, magically. We're not doing, like, an IP capture, right, a network
capture, you know, for every one of these things across the cluster; rather, the traffic gets proxied, and the proxy does the analysis and reports that information back to the control plane, and that's how we get visibility into what's going on with the application and the things that it's doing. So yes, there is overhead, and it comes down to balancing that value versus cost. Exactly. So the question is: if you decide, okay, I'm installing a service mesh, then all of a sudden you have to manage hundreds or thousands of proxies. Is it worth doing? And I would say there are different aspects to the answer that it's worth doing. The first thing to consider is the operational cost of deploying, and this is really reduced, for example, by using Kubernetes, by using OpenShift Service Mesh, in the way you manage the proxies and deploy and bring them, let's say, to life. So that is one thing to consider. But the actually more important thing is that you can add additional logic to your existing services, to your existing ecosystem, capabilities which are vital, especially for a bigger microservice architecture, and you can do that without changing the ecosystem in any way. This is especially important: you can add all of these capabilities without changing the service itself. This is what makes it worth it, most of the time at least. You have to consider a lot of other things, but basically, I would say these are the benefits of a service mesh. A quick question from Walid: can we use service mesh for cluster-level ingress, or is it only for pod-to-pod communications? Yeah, in general, like you see here, you have the Envoy proxy; our OpenShift Service Mesh is based on Istio, and Istio uses Envoy proxies at this layer. They are used as sidecars for the internal mesh communication, and for external communication you also have standalone Envoy proxies at the edge, for ingress and egress. There is also an integration with OpenShift: for example, if you configure an ingress gateway, OpenShift Service Mesh would configure an OpenShift route for you. So there are ingress and egress gateways to let traffic into the service mesh and back out of the service mesh. Just to be clear, and a clarifying question from Khalid: so effectively, you can either continue to use the traditional OpenShift ingress controller, based on HAProxy, or you can use the Envoy-based ingress proxy for cluster ingress? For cluster ingress, you would use the combination of OCP routes with the Envoy Istio ingress. Okay, so it does replace HAProxy? It is a combination: HAProxy is the implementation for the OCP routes, right, and the route will then forward traffic to the Envoy proxy, to the Istio ingress Envoy proxy. Got it, okay. So it does use both; it's just that a client coming in from outside the cluster will hit HAProxy first, and HAProxy will send it to Envoy, and it goes from there. Now, is that for all services, or just for those services that are within the mesh? No, this is only for the services you want to expose, that need to be accessible from outside the mesh. Okay, so if I have an application, just call it app A, that's a standard OpenShift app, an oc new-app that I just stood up outside of the service mesh, that would still use HAProxy ingress coming in, with the normal OCP routes and stuff like that, without using the service mesh at all. But if I had app B that was in the service mesh, it would go through HAProxy and then route down to the Envoy proxy into the mesh? Yeah, normally you would have, like, a UI service kind of thing for your application, and this one is exposed and accessible. So you would configure a gateway component, and this would translate into an ingress gateway configuration for the Envoy, and it would also create an OCP route to access this UI service from outside the mesh. So from a customer or client perspective, you would hit this URL, you would go through the ingress gateway, and then be routed into the mesh to the different services. So the only thing exposed would be this UI service, and the necessary paths you need from the client side; all other traffic would stay inside the mesh. Okay, gotcha. Yep, that makes sense. And audience, please continue to ask questions if there's any confusion. Yeah, we can see that also later when we come to the demo part, right? We can see that there is access from outside, and the communication inside, and things like that. So that's basically the thing: it's worth doing if you have the right type of applications, the right requirements; it depends on what you want to do. This is why we should look at the key capabilities of a service mesh. So what does a service mesh typically offer? More or less every implementation out there in the market has some key capabilities around traffic management. This is how you route your traffic from outside the mesh to the different services, as well as inside the mesh. It also includes things like implementing resiliency and reliability: for example, configuring retries, configuring things like a circuit breaker to prevent cascading failures, things like this, and also load balancing to the target services with different strategies. You can have different release and deployment strategies, for example if you want to do A/B testing, canary deployments, things like this. So there is a way to really granularly define all of this traffic flow inside the mesh, which is pretty nice and cool. The other thing is observability, and this is also, I would say, very interesting for application developers, because if you really have an application
which consists of multiple services and like this service A called BCD and so on so you have a lot of service calls and your request path is quite long through the service mesh so you need to understand kind of where what are the paths your request go what services are getting accessed and also if there are any issues if there are latencies you need to identify these issues and we all know microservices are distributed it's a distributed architecture and to troubleshoot it and yeah help in such situations is quite hard right so observability metrics tracing is very very important also to understand what is going on there so even if you don't need any other features like you don't want to route anything you want to kind of apply any security stuff just the purpose of observability and also for example with which is our visualization dashboard it's worth to have it to really see what's happening in your application right so this is one aspect the other thing policy enforcement so they're different type types of policies so we have authentication authorization so there is a customer source called authorization policy for example and they can specify for example the who's allowed to access the service from this specific IP from this service identity and so on so you can really granular define all the authorization policies you need for all the services inside the mesh and the mesh will enforce these policies and will distribute among the consumers and so on so and the nice thing is as mentioned all the things you can do here so all the features also applying security which is nice I mean there is an integrated certificate authority coming with STO so and by default the traffic inside the mesh is always using MTLS when it's possible so this is also one way to or you could use a service mesh to kind of facilitate things like zero trust networking with authorization policies with Kubernetes network policies and also with the identity and security features that the service 
mesh offers. So it's a combination of all three things, and with this combination you could really do things like zero trust. And I want to point out that there is some overlap with some other OpenShift features, right? Things like network policies, and, now it's escaping me, what's the thing, Johnny, where we can turn on SDN encryption between nodes in the cluster? IPsec? Yeah, IPsec with OVN, right. So one, I think that service mesh gives you a lot greater granularity and visibility into what's happening there, as Ortwin just pointed out: each one of the services has its own TLS certificate, and you can very granularly control those things. And also, I don't want to understate how valuable the metrics around the various services can be, and I say that particularly if, you know, you're working with an application using, what is it, the strangler pattern, right? With a monolithic application that used to run on a server that had 800 cores and 97 terabytes of RAM, and now we're splitting it into containers and deploying this thing across Kubernetes, it went from everything being local communication on a single host to being distributed across maybe dozens of hosts, in particular as it scales. So sometimes application components suddenly have latency where there wasn't latency before, and they have to take that into account, and things like that observability can really make a difference for finding those gremlins, if you will. Yeah, 100% agree. So we have a couple of questions coming in. One of them is essentially, can you have multi-cluster applications? So an app A runs in clusters one and two; does service mesh support multiple clusters and an application running in multiple clusters? And the other one is from Pete: does the service mesh support OpenShift Virtualization? Yeah, so the first question is around multi-cluster. I have some slides on this for later on, but the short answer is yes, we have kind of
two options we support. Upstream Istio, for example, has a bunch of, let's say, multi-cluster deployment options and topologies you could use to install. We have more of a security-first approach, and currently we have kind of two things. We have service mesh federation, as mentioned; this is the new feature with the latest version. So you have distinct service meshes in one or multiple clusters, and then you can export and import services from one cluster to another. This is an explicit configuration you have to do, and the good thing is that the control plane, so the Istio control plane, doesn't need to access the other cluster's Kubernetes API server. The whole communication and everything goes through ingress and egress gateways that you have on both clusters, and this is something you could use, for example, for some failover scenarios, or for mirroring traffic in a staging environment. So for example, you have a production cluster, and now you're developing a new version of a service and you want to test it with production data, right? You could federate the meshes, export the new service from your stage environment and import it into the production one, and then you could mirror the actual production data to this new service in the stage cluster, and you can test with production data whether your service performs the right way, works the right way, and things like this. So federation is one thing, and the other thing is you can also have a multi-cluster installation, but with one control plane in one cluster. What we don't have is a single control plane outside, for multiple clusters and multiple service meshes. So to, yeah, address this thing, I mean, it's on our agenda; it's a thing for the second half of this year or the beginning of next year. There are some efforts going on in the context of ACM, Advanced Cluster Management, to have a kind of central way to manage multiple service meshes in different
clusters. So this is a roadmap item. And I think that was kind of the second question, right? I can touch on the second question from Peter in just a moment. So a viewer touched on something here that might be interesting in the context of what you just said, which is Submariner, which is deployed and managed by ACM. So you have two or more clusters that are managed by ACM in a cluster pool, and then use Submariner to create essentially a federation at the network level. So do service mesh and Submariner work together? Like, say there are three clusters: would it be three distinct service meshes with the clusters connected through Submariner, or would federation at the service mesh level be the better answer? Hopefully that made sense. I'm going to go with no, because, I mean, the actual implementation with Submariner or something, I'm not sure how it's going to be done, right? This is really something ongoing right now. There are plans to integrate it, but I have no idea at what technology and what network level the integration will be with ACM, so I can't answer this question right now; I jumped ahead. Currently, the way would really be to use federation if you have a multi-cluster scenario. This is what I would recommend right now. And I see Gaurav asking; I think it would be helpful if you have a demo that we could take a look at and see in action kind of what the capabilities are. And to jump back and answer your question, Peter Laderbach: yes, I believe as of OpenShift Virtualization 4.9, if not for sure 4.10, virtual machines can connect to the service mesh. So if you have virtual-machine-based application components that are communicating with container-based application components running in the OpenShift cluster with service mesh, it all works and it's all happy, right? So on the demo thing, the question for me would be, should I directly start with the demo, or should we proceed? What is your opinion, especially the
audience? So we've got 10 minutes or so until the top of the hour, so I'll let you be the judge. I think we definitely want to spend a few minutes looking at the demo, kind of digging into it and all of that. Khalid is saying get to the demo; I think he's thinking the same thing. So maybe we can, for a strange turn of phrase, kill two birds with one stone, right, and discuss as we demo. Okay. So I'm kind of afraid that you say we have 10 minutes. No, no, we have more than 10 minutes; I just try to be cognizant of not going way far over. You know, I have prepared like 90 slides and we're on slide 7, and that was not my plan. This is what I wanted to say. Yeah, well then let me do it this way: the demo, I have to see it in action, so let us try to get to the federation thing, what we just briefly discussed here, right? Okay, yeah, welcome to the live stream, where nothing ever goes according to the plan that you had and time is relative. So you see a lot of other things here: why do I need a service mesh, differences to upstream; I've prepared much more, but we would need like a three-hour session for all that. So let us jump to service mesh across clusters. I'll just show you, in one or two minutes, some slides briefly explaining the concept, how the federation thing works. As mentioned, with upstream Istio we have really multiple deployment options and so on: multi-network, single network, multi-primary and so on. But normally there's no real multi-tenancy out of the box there, at least I'm not sure with 1.13 of Istio, I think there's still nothing out of the box, but you need to have access to the other cluster's Kubernetes API. So our approach, as mentioned: we have service mesh federation, we have distinct service meshes in one cluster or in different clusters, and you can enable sharing, load sharing and load balancing, for specific failure scenarios for example, and then also a multi-cluster
service mesh where you have one control plane; currently this is the way we have it right now. And we have a multi-tenant approach by default in OpenShift, so you don't even need escalated privileges to install a service mesh. You need a cluster admin to install the operator, and the operator needs the privileges to do the work on behalf of the user, but the user doesn't need any escalated privileges, which is great in terms of security. So now, coming to the federation topic: if you want to federate service meshes, you can do that with 2.1 plus right now, and the way it works is you configure distinct service meshes, as mentioned, in one or in different clusters, and the communication is then through the ingress and additional egress gateways. Then, for exposing services to the other cluster, we have introduced additional custom resources: you would specify an ExportedServiceSet and also an ImportedServiceSet, and then you can use the services from the other cluster. So these are the additional things we've done in terms of federation: there is a ServiceMeshPeer configuration, and there are exported and imported service sets; this is what federation is made up of. So, different ways to use this stuff. Let us jump to the demo. Let me check here; by the way, if you have an OpenShift cluster and you want to try exactly this setup, you could go to, let me see, the repositories. Here is a GitHub repository which is called mesh federation; you could just clone this one and then you land here. This is what I have here. We have the setup: what we want to do is deploy two service meshes, two different control planes. Just briefly, let's have a look at what's in there. We have a production service mesh and we have a stage service mesh, and the first thing is we have a control plane. So this is the custom resource, the tenant of a service mesh, and you
specify: you want to have Kiali, Prometheus, what type of policy, telemetry. This is a quite extensive custom resource; there are some complaints around it because it can be very complex. From an admin perspective, this is the way you would upgrade your service mesh, add additional features, things like this. So this is kind of the central piece of your configuration, and what you see is we have an additional egress and an additional ingress gateway for the other federated service mesh; there needs to be a discovery port and so on. By the way, there's also an article I've written on this federation thing; you could probably share this link as well, and you could just follow the description there and it should work. This is basically what we do right now: a control plane. The next thing we need is a member roll, which defines, for this control plane namespace, what namespaces, what projects, we want to include in our mesh; we have one bookinfo production namespace. Then we have a peering configuration to the stage mesh where we configure things like the address (in this case it's in the same cluster, so I will deploy both meshes in the same cluster) and all the workload components. So this is the configuration: you need to configure the trust domain and so on, and for some of the steps here you need the root certificate of the other service mesh to validate the certificates. And then we actually deploy, in the production mesh, this bookinfo, which is the standard, let's say, demo application for the Istio service mesh; we deploy the complete bookinfo in this production mesh. Then in stage we do pretty much the same thing: we deploy a control plane, we configure the peering configuration and the member roll, and here we only deploy the new version of one of these bookinfo services. This is our new one, and this is what we want to mirror the traffic from production to, right? So this is the setup. Let's get started: we just open a terminal. Just for the record, all of this is fascinating to me
because there are a bunch of different ways you can do this, right? I was learning the other day about Debezium and Kafka, and effectively using Debezium to selectively extract records from a database to replicate into Kafka, to then have the non-production side of the application begin doing things. So it's fascinating to me to see all of the different ways that our app and development teams can accomplish this. I do want to jump back; I know we skipped over it, it was one of your topics that you wanted to cover, and that was resources, or resource requirements, for service mesh. Do you have, like, a 30-second "deploying x number of instances of the proxies will consume y amount of resources" or anything like that? Just a second, what am I doing... forget about this one, don't look at this docker command. So Podman is currently not running, sorry. So yeah, the thing is, with sed, the stream editor CLI, it's not working the way it works on Linux, you know, so this is why I run the OpenShift Origin CLI container and deploy from there, because I'm running on a Mac. And yeah, Andrew, just a second, let me just start the deploy process and then we can directly jump to the slides. Okay, and it's okay, if you want, we can answer that question afterwards and include it in the blog post; I figure that might be the easiest way to answer several of these, by linking into it. So we'll take the slides, we'll publish those to Speaker Deck, I think it's Speaker Deck that we use now, and we'll link that in the blog post, and we'll answer all the questions and stuff that folks are asking. So I have created a deploy script here, so all the components should be deployed out of the box, and... okay, I'm not logged in from here, just a second. And I'm not sure if you're intending to share the CLI or the terminal that you're using, or if you want to keep your browser... Ah, you don't see anything? Oh jeez. I ask because sometimes I'm sharing a browser while doing something on the side.
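On the resource-requirements question from a moment ago: one common way to bound what each injected proxy consumes is per-workload pod annotations, rather than a single global answer. This is a minimal sketch, assuming the stock `sidecar.istio.io` resource annotations; the workload name and the sizing values are illustrative, not from the demo:

```yaml
# Hypothetical pod template snippet: opt the workload into injection and
# cap the Envoy sidecar's CPU/memory instead of taking mesh-wide defaults.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: productpage                # illustrative workload name
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
        sidecar.istio.io/proxyCPU: "100m"         # CPU request for the proxy
        sidecar.istio.io/proxyCPULimit: "500m"    # CPU limit
        sidecar.istio.io/proxyMemory: "128Mi"     # memory request
        sidecar.istio.io/proxyMemoryLimit: "256Mi"
```

Reasonable starting values depend on the request rate through the proxy, which is why the benchmark numbers discussed later in the stream are quoted per thousand requests per second.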
I'm sorry, I thought I had shared my complete screen. Okay, just a second, and... wow, this is unbelievable. Entire screens, and we haven't seen anything of my very cool custom resources so far, right? So here we are. This is what I mentioned: here is, for example, the service mesh control plane configuration. This is the git repo, if you download it: we have the control plane configuration, we have the member roll, we have the peering config for the other mesh, we have the certificate part of it, and then this is the Istio VirtualService mirror configuration to mirror the traffic to the stage environment. Here is the ImportedServiceSet, where we specify the services we want to import from the other mesh. And this is the other mesh, the stage one, which is pretty much the same thing, but here we have an ExportedServiceSet custom resource that says, okay, we want to export this service so it's available in the production environment, and we only have this one deployment here, of the details service. So basically, this is what it is. Now, just a second, I'm not sure what's going on; I'm in the container deploying this stuff, so let me just check the URL of my cluster here. Okay, what is so slow... that should be this one. Okay, so as mentioned, I'm in this, I mean, it's a docker container with the CLI, and I've mounted this git repository as a volume in there. So I'm just executing the deploy script from here now, and you see, okay, it's starting. So I'm creating the namespaces, the projects, right here. This is, by the way, the deploy script: we basically create the projects, we install the control planes (this takes some time until everything is up), we create the workload projects, then also the peering and member roll configuration, we deploy the workloads, yeah, get the certificates and all that stuff, and at the end we should have everything up and running: two meshes with a mirroring configuration and with federation
configured in one cluster. So this is what we do. We wait a little bit, and in the meantime let me go back to the slides. So what is the overhead, was the question. There is some data out there, you see it here, and there's also the link to Istio where you can find this. We also do some benchmarking, but you also find this for the different versions of Istio, which also applies to our OpenShift Service Mesh, because that is the foundation: we use Istio plus additional things for observability, like Kiali and Jaeger and Prometheus, Grafana, things like this. But you see what it adds: this is the load test scenario with a thousand services and 2,000 sidecars running, and the Envoy proxy consumes about half a virtual CPU and 50 megabytes of memory per thousand requests per second going through the proxy, Istiod, the control plane, uses one virtual CPU in this scenario and one and a half gigabytes of memory, and it adds about 3 milliseconds to the 90th percentile latency. And there's some additional stuff. Things to consider for the control plane are really the rate of deployment changes and the rate of config changes, which have to be propagated to all the proxies out there, and the number of proxies out in the field. One very important thing in reality: by default, if you create an Istio configuration like a VirtualService, a DestinationRule, things like this, it translates to an Envoy proxy configuration, and by default this will be available in all proxies out there. But the reality is your services don't need to talk with every potentially available service in the mesh, right? If you have thousands of services, this will really increase the memory and everything. So one of the things always to consider is where to expose the configuration: is it really necessary to have the specific routing configuration on every proxy in your mesh, or where is it necessary? And you can limit things like this with the Sidecar custom
resource of Istio, for example. So this is very important in a production environment. It's like firewall rules: you don't want firewall rules that open everything to everyone; rather, you want them to be selective. And also, to be clear, that previous slide was referring to the Istio control plane, not the OpenShift control plane? Exactly, yes. Then there are things to consider for the data plane, and there are also some considerations regarding latency, depending on the number of client connections and the specific, let's say, telemetry you want to access. In general you can say you add something like three to seven milliseconds to each request in terms of latency, so if you have really very latency-sensitive applications, yeah, you have to consider whether service mesh is the right thing; it isn't always, because there is a kind of overhead. For normal, let's say, microservice applications, it's not that important in terms of latency; the resource consumption is something you will have to fine-tune for your specific setup and the requirements you have. By the way, the feature which is most often used, or the reason why customers use service mesh, is most of the time not the traffic routing things; it's most of the time the security thing, because you can lock down or implement mTLS across the complete service mesh, with thousands of services, with one very small custom resource, for example, and you can do really nice things to offload security-related work from developers and their services with very easy steps. So this is one of the top reasons why it is used. And to be clear, you wouldn't want to turn the service mesh on and off, if you will: "hey, we think we're experiencing increased latency with this microservice, let me turn on the service mesh, let me connect it to the service mesh so I can collect a bunch of data and then disconnect it." I don't know how either simple or complex, non-trivial,
integrating with a service mesh is; that may be completely infeasible. Yeah, normally you have a service mesh and there is no way to just turn it off; you can turn some features off, probably, but you don't have a feature switch to turn the complete service mesh on and off or something. From the chat: great for HTTP traffic, not good if you get DDoSed, though. So yeah, as far as I can see here, our deployment is done, all the things are created and the service mesh is complete. We could now query, for example, to see, okay, is there a connection between the meshes? We can just get the ServiceMeshPeer custom resource here and see what the status is, right? This is something we definitely could do. Okay, so let me open it here. You see, this is the status section of this peering configuration for the federation, and you see there is one for the remote, for the incoming and outgoing connections, and what's important is that both should be true. And you see, okay, both are true, so we have a connection. This is from the stage mesh, from our staging environment's point of view; we could check the same thing from the production environment, but let us just check whether our services are imported and exported. So we can also check the imported service set; from the stage mesh you see, okay, there is an exported name, the name under which the stage service is now exported, and this is the local service name, the way I would address this imported service in my production mesh, and this is what we use, for example, for the mirroring configuration. You see it is not a cluster-local service URI; it is stage-mesh-imports in this case. There are also different ways you could import services: you can import them as local, so there wouldn't be a difference. For example, if you have a details service running locally in your case and you import the remote one with an alias of local, the actual endpoints
would be added to your service, so when you address this local service, you would address your local service as well as load-balance to the remote host transparently, because the endpoints are just aggregated at that level. Yeah, I like how it masks all of that complexity behind what is effectively a DNS entry for the service; that makes it dramatically easier for me, a lowly admin, to understand. Yeah. So you see we have the prod mesh, we have the stage mesh, these are the things we have deployed. Let me switch to the admin view here. Okay, so here is our production mesh environment; you see also here we can launch Kiali or Jaeger directly. Kiali is our visualization of what we have here, so let us log in... wrong, somehow. So now we have Kiali. This is kind of a visualization, and I can select namespaces in here, so prod-mesh, prod-bookinfo. What I can do is, okay, let's see the applications we have here, display idle nodes; currently there is no traffic coming in, right, no access, so all the things are idle. You see we have the details service, we have a product page, we have a ratings service and a reviews service in different versions, so we could do canary deployments and route traffic there; workloads, services. Now let us just create some traffic, and to do so, just a second, we need the URL of the Istio ingress route. So let us just get the bookinfo URL, and then we just curl here in a loop; every second we access the product page. Now we should see in the graph in Kiali that some traffic is coming in; this is currently updating every 15 seconds. We can also display request rates, response times, throughput, request duration. Let us add some things: what's important is that we want to see the cluster boxes here and also the namespace, and you see that there is now some traffic coming in. Also display the security. My browser is a bit slow. What you see: we have a namespace section, mesh, which is our control plane
namespace. Here is our ingress gateway; this is where our curl requests, our actual traffic, come in. We have a gateway configuration here, the thing we mentioned earlier: there is a gateway configuration that allows traffic to this product page and to nothing else, so we can just access the product page. And then the product page, there is one workload, one version; this one will call the details service and it will call the reviews service, and here we have a configuration with different versions of this reviews service, and we split the traffic, like one third going to version one, version two, version three, and versions two and three also access the ratings service; version one doesn't. And what we have configured here is a mirroring configuration for this details service. You see this icon, which means there is a virtual service, so let's go right to this actual service and have a look into this virtual service. We should have a look here in the prod mesh: these are the Istio configurations we currently have in place, and if the browser or something is working, we should see some of the virtual services... hello, hello... Okay, if you're anything like me, you've got like 800 tabs open at any point in time. So it's not really working. So I do like that previous graphic, because I think it highlights a couple of important things. One, for me it reinforces that we don't deploy service mesh or use service mesh in a vacuum. It's not that the administrator decided to deploy service mesh so that they could get visibility into what's going on, or that the application team simply went in and created a mesh so that they can do it. It's definitely a joint thing; the two teams have to work together to understand the requirements, to understand the impact to the cluster as a whole, both to resources consumed, as we pointed out before, but also to the application itself. Hey, as you're attempting to show here, I'm going to be replicating data from one to the other.
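The routing being shown on screen can be sketched with two Istio resources. This is a sketch under assumptions: the hostnames, subset names, and the `stage-mesh-imports.local` import domain follow the demo's naming conventions but are not copied from it, and the DestinationRule that defines the subsets is omitted:

```yaml
# Split reviews traffic roughly one third per version (subsets v1/v2/v3
# would be defined in a matching DestinationRule, not shown here).
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
  namespace: prod-bookinfo
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination: {host: reviews, subset: v1}
      weight: 34
    - destination: {host: reviews, subset: v2}
      weight: 33
    - destination: {host: reviews, subset: v3}
      weight: 33
---
# Serve details locally, but mirror every request to the service imported
# from the federated stage mesh; mirrored responses are discarded.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: details-mirror
  namespace: prod-bookinfo
spec:
  hosts:
  - details
  http:
  - route:
    - destination: {host: details}
      weight: 100
    mirror:
      host: details.stage-bookinfo.svc.stage-mesh-imports.local
    mirrorPercentage:
      value: 100.0
```

Because mirrored traffic is fire-and-forget, the stage service sees real production requests without affecting production response times.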
That's going to potentially increase the workload more than you might expect by default, that type of thing. So it's one of those things: I had some awareness, some knowledge, that service mesh is one of those features that spans both sides of the street, if you will, or sides of the aisle to use a US politics reference, and this very much reinforces that. So, Christian on YouTube asked: can you auto-inject sidecars in DeploymentConfigs? The samples in the docs use Deployments instead of DeploymentConfigs. Peter, I haven't forgotten about your question, I see it too. Yeah, you can of course inject into DeploymentConfigs as well. The difference to upstream Istio is that there is no namespace-based injection; you cannot label a namespace like in upstream Istio. You have to explicitly add the sidecar injection annotation to the Deployment or DeploymentConfig to inject the sidecar, and the namespace must be part of the service mesh, right? So it should work that way. So, I'm a little bit confused, I'm not sure what is not working here, because this is just... It's part of live streaming: something always has to go wrong. Yes, Johnny's the king of that. I am, yeah. I've learned, like, you've got to sacrifice a number of goats before you go and do live demos; I failed to do that my last couple of times, so things have not gone well. So, Ortwin, I'm sorry, there was another question earlier about API gateway integration. I know that we can use 3scale with this, but are some of the upstream API gateways, like Kong or whatever the other ones are, supported with the Red Hat... or maybe supported in the sense of... The only thing really officially supported currently is the 3scale integration. There is WebAssembly; that's the way you can extend the service mesh, the features of service mesh, with, yeah, WASM or WebAssembly plugins. There is one for 3scale currently, but there's nothing for Kong, at least nothing officially supported that I'm aware of. Okay, yeah, and
as I was saying, I was like, okay, supported in the sense that it'll just work; we don't necessarily support Kong and their implementation, but we would support that integration. Yeah, right, that's it. So Peter asked an interesting question a few minutes ago that I thought we might want to spend a minute on, which is, you know, are there patterns or anti-patterns that typically apply or are conducive to using a service mesh? Peter's specific question was: are there times where you think you need a service mesh, but really you shouldn't be using a service mesh? Oh, I mean, this question is something you could probably spend a whole day discussing. There are a lot of things to consider, right? One of the things, and this is also important, and I think this is one aspect of this question, is what type of application you have, right? What types of applications are appropriate, so to say, for a service mesh? The first case would be, for example, a microservice architecture. We're talking about a service mesh, so obviously we need some kind of services, whatever they are. With microservices we have a bunch of, let's say, services communicating over a network in a synchronous fashion. This is perfect; this is what it's made for. So for a, let's say, typical microservices architecture, especially if it's pretty huge and you have a lot of services and a lot of inter-service communication, this is really where it makes a lot of sense to have a service mesh. The other case: what about monolithic applications? Say we only have, like, two or three; very typical if you think about Java, you have an application-server type of application. I mean, monolithic doesn't mean it's not modular or something; it's just from the aspect that you have different development teams contributing to the same code base and you have more or less one deployment unit, right? So does it make sense if you have two or three
monolithic applications, some databases, things like this, some caching or a message queue? Yes, it could, but to be honest, service mesh doesn't add that much there, right? So in this case, I would think, or you should think, carefully about using a service mesh: what features do you want to have or want to apply? Security, in terms of implementing it with a specific library or something, might make more sense in this case, for example. So the reality is, with monolithic applications, service mesh doesn't add that much. If you have a lot of inter-communication between monolithic applications, okay, then it might, but yeah, think about it. What about serverless applications, that type of thing? Yeah, I would say in this case it definitely makes sense. Why? Because serverless applications, and I'm talking now about OpenShift Serverless, which is based on Knative, actually also use containers like everything else, basically just a different packaging format we have for serverless, and you can scale to zero, things like this. There is also an official integration with Serverless, but the essential thing here is that serverless applications are actually microservices, even smaller ones; most of the time they serve a specific purpose, right? So definitely, with serverless it makes sense. Then I have some customers also asking, what about jobs, cron jobs, batch types of things, right, processing? Yes, I would say it depends on the actual implementation of the job. What is this job doing? Is there some communication involved where you need observability? Is there something where you need to secure the communication flow? Because, you know, very often it's things like: you start a job, the job grabs something from a job queue, so there's an implementation in Kafka or whatever, or a database, where we grab the actual jobs, what a job should do, and then it executes the job, things like this. So it really depends on what a job is doing, and the problem with jobs, as always: we have the
sidecar container, the Envoy thing, and so the job won't finish if we don't shut down this Istio proxy, right? There are some workarounds I know of, but there's not really an official way, so to say, that we support Jobs and CronJobs. So it's possible, and it depends on the workload, but it's not really the best type of workload for a mesh, I would say. Right, and... go ahead. Sorry, I wanted to interrupt: Anon just highlighted the same thing that you were just saying, around, you know, the job status won't complete if the proxy sidecar is there. So there are a number of questions that have come in, and for our audience, please go ahead and submit any questions that you happen to have. I do want to be a little bit cognizant of time, because I think I have a meeting coming up here shortly, so I will get through as many of the questions as we can; again, any that we can't answer here on the stream we'll follow up on in the blog post, where we get all of those answered, including the slides that Ortwin has and all that other stuff. Ortwin, you're very popular, because we've had a couple of folks asking: can we have more than one of these streams, can we have a mini-series type of thing? So, you know, Stephanie, yes, I do care about your lunch hour; Stephanie is chatting me on the back end here. I do care about your lunch hour. I am obviously a very healthy individual, so... oh man. Just a couple of other questions that we have here, and please, everybody, keep me honest. Will service mesh trust self-signed certificates applied to OpenShift routes? So if you're using the default cert, like, you're coming in on the route, it's going through HAProxy over to Envoy; will it accept that self-signed cert? Yes, it does. It depends on the route config: if you terminate at the route level, or if you configure the route as passthrough, then you would need to have, at the Istio ingress gateway, the certificates for the
Sure. And then, regarding that — just one thing to mention, because I'm also working on this topic. We know that especially all these day-two operations things in the context of service mesh are pretty complex. There are a lot of different ways you could do things, right, because there are so many options and so many different types of requirements. And we also know that the documentation, or the best practices on how to do things, especially from an operational side — let's say there is a lack of documentation there. So currently we're working on a service mesh day-two cookbook, where we try to cover some of these things, just for admins, for operators. If you're planning, for example, on implementing a service mesh: what are the things to consider if you want to head to production, things like this. Just to make you aware there is something coming.

Or you guys should do a service mesh coloring book — or just turn it on its head and do a comic book, have its own little superhero thing. That would be kind of cool. Go ahead, I've already given you permission to do it; I pre-released this idea.

So: can the OpenShift Service Mesh be extended outside of OpenShift, for example to include a virtual machine hosted in RHV or vSphere, or even a physical server? Yeah, this is, as far as I know, a roadmap item. If you ask, can you use OpenShift Service Mesh outside of OpenShift, the answer is no — it is not supported outside. The service mesh itself is based on OpenShift, so there is that dependency. But including, like what we had before, external bare-metal machines or virtual machines which are outside the mesh, outside OpenShift — this is something which is a roadmap item. And I don't know — Andrew, you said — I'm not really aware if this is already GA. I don't think so, but my current understanding is that this is a roadmap item for the release after next, so for 2.3, which would be somewhere in, I think, June.
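Going back to the earlier question about self-signed certificates on routes: with passthrough termination, HAProxy forwards the TLS session untouched to the Istio ingress gateway, so the gateway — not the router — must present the certificate the client will trust, including a self-signed one. A hedged sketch (host and route name are placeholders):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: bookinfo                      # hypothetical name
  namespace: istio-system
spec:
  host: bookinfo.apps.example.com     # hypothetical host
  to:
    kind: Service
    name: istio-ingressgateway
  port:
    targetPort: https
  tls:
    termination: passthrough          # TLS terminated at the ingress gateway,
                                      # not at the OpenShift router
    insecureEdgeTerminationPolicy: Redirect
```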
So, middle of the year — so yeah, it would be: yes, roadmap item, this is coming; not yet, but eventually.

So there's another question from — and I'm so sorry if I mess your name up — and it was: how do you access an external service using TLS origination from an egress gateway? So, like, once you set up an egress gateway in your mesh, how do you access that TLS endpoint? I mean, the way you would configure it is: for external services, normally you create a service entry, right, to make the service available in the Istio service registry. Then you configure a virtual service to redirect traffic to the egress gateway, and there you would configure, in the destination rule, the TLS config for the outgoing service, right? So this is also documented on the Istio side — I'm not sure about our side.

And then, I think, the last question I see here — and apologies for any that we missed; again, we always go back and look at all the chat and everything to pick them up — and apologies for butchering your name, Elchin: can we add a delay to the sidecar shutdown? I'm not sure. I think you can, with an annotation for the Envoy proxy, but I would have to look it up exactly. I think it is possible with an annotation — not a hundred percent sure. Okay — I'm never really a hundred percent sure on anything, so I'm a hundred percent sure about that. I had a conversation the other day and it was: always never use "always" or "never". Yeah.

So, all right — thank you very much, Ortwin. Really appreciate you joining today. This one has been a long time in the making, and it has certainly not disappointed. From my perspective, I've learned a ton here; I expected to learn a lot, and I learned even more than I expected to. So thank you so much. And yeah, thank you for having me here — I mean, always happy to share something. And just to consider: that was just seven of the 90 slides I've prepared, so there is probably some content for an additional session somewhere. Just a little bit. I know who we can reach out to now.
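The egress flow Ortwin describes can be condensed into a sketch like the one below. This is an assumption-laden fragment, not a complete setup — the full pattern in the Istio documentation also needs a Gateway and a VirtualService to actually route traffic through the egress gateway, and the external host here is a placeholder:

```yaml
# Register the external service in the Istio service registry.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-api
spec:
  hosts:
  - api.example.com                   # hypothetical external service
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS
---
# Have the egress gateway originate TLS toward the external service.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: originate-tls-for-external-api
spec:
  host: api.example.com
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: SIMPLE                  # gateway performs TLS origination
```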
Whenever Johnny and I are struggling to come up with ideas or topics, right? Okay — so thank you so much for joining us today, Ortwin. Again, really, really appreciate it. I know our audience has appreciated it as well; there have been some folks who have been waiting for this one, too. Yeah, thank you, and thank the audience as well for all the questions and the engagement — it's really great.

Yep. So for our audience, again, thank you for joining us. If you have any other questions that we didn't get to, if it happened to be something we missed — or for anybody who's not watching us live, you're watching a recording — please don't hesitate to reach out. You can contact me via email; I'm also on social media, so Twitter at practicalAndrew, just like you see on the screen there, on Reddit — all of the platforms, all of the places. You can find me, and Johnny as well; I will happily throw him under the bus for being a public contact. I always have to look at the screen to remember your Twitter username for some reason — you've been here for like six months and I still have to look at it every single time, because when I type it into Twitter I just do "at J" and it's you. Or, most importantly, it's Jonny at redhat.com — and that's no H: J-O-N-N-Y. Yep. And I'm pretty active on the Reddit channel — the subreddit — so if you have questions out there, just go ahead and post them and I'll try to reply.

So thank you, everybody. Please join us next week, where we'll be joined by the MicroShift team, and then we're looking at some other topics a little bit further out — things like OpenShift Virtualization, maybe bringing back some VMware folks, so on and so forth. If you have any thoughts, ideas, suggestions, etc. — well, I can't talk all of a sudden — please don't hesitate to send those our way as well. So with that, I'll leave the last word with you, Johnny.

Ortwin, thank you so much. Andrew, this is a great show — and Johnny died on us. Oh no, I'm lost too. You bailed out; we lost you for like 10 seconds there, so I'm sure whatever you were saying was awesome. All right. But yeah, if there's something that you want to see, then let us know and we'll try to get a demo for you — just let us know what you want to see, especially when it comes to the service mesh stuff, because there's so much, and you can see we barely even scratched the surface. So, all right — well, thank you so much, everyone. Have a great week, and stay safe. Bye-bye.
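As a follow-up to Elchin's question about delaying sidecar shutdown: the annotation Ortwin was likely thinking of is `proxy.istio.io/config`, which overrides proxy settings per pod, including how long Envoy drains before exiting. This is a hedged sketch — the deployment name and image are hypothetical, and the exact field should be verified against your Istio/OSSM version:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                        # hypothetical name
spec:
  selector:
    matchLabels: {app: my-app}
  template:
    metadata:
      labels: {app: my-app}
      annotations:
        proxy.istio.io/config: |
          # keep the sidecar draining for 30s during shutdown
          terminationDrainDuration: 30s
    spec:
      containers:
      - name: app
        image: registry.example.com/my-app:latest   # hypothetical image
```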