My name is Kevin Connor. I'm an engineering manager with Red Hat, responsible for the Istio team on the middleware engineering side. I've been with Red Hat for a very long time; I came across with the JBoss acquisition, worked on transactions and so on, and eventually ended up with Istio.

One of the trends we're seeing these days is a transition from deploying big monolithic applications over to microservices environments. We're starting to break apart applications, and there are various reasons we do this. It may be that we want to isolate one part of an application from another, so if there's a problem with one part it doesn't impact the others. It may be that we want to scale different parts of the application in different ways. Or we could have different development teams and we want to give them different life cycles for developing, testing, and deploying their applications into production.

None of this, however, is new. We've been talking about distributed computing for a very long time, and all the issues we've already identified with distributed computing are equally valid for microservices, especially when you consider all the issues with networking. Distributed architectures tend to get very messy. We've got nine services there, and already you can see the potential for a lot of intercommunication between the services. When you scale that up to 1,000 services or 10,000 services, everything just gets lost in the mesh... sorry, mess. So that's where the tooling comes into play, and that's where things like service mesh come into play.

There are a number of fallacies that we already know from distributed computing which are equally applicable to microservices. First, the reliance on a reliable network. Obviously networks are not reliable: they drop out, routers go down, cables go down, we lose packets. We need to be able to deal with that, not only in distributed computing but in microservices as well. There's latency involved in communication; it's not free. It takes time to send packets via routers, or whatever it is, to the other service. Bandwidth is limited: there's only so much information you can send between services, and you need to bear that in mind so you don't swamp the service you're invoking. The network isn't secure: if you have access to the network, either because you've got access to one of the machines or to the hardware, you can intercept the traffic, you can modify it, you can prevent packets from getting to their destination. Topology changes: IP is designed to adapt when the network changes, so if a router goes down your packets get rerouted along other routes, and you can end up with more latency in your invocations as well. Multiple administrators: no one administrator looks after the whole network in your organization; there's a whole group of them, with different responsibilities for different areas of it. There are also costs to invoking services, so you need to be careful what you're doing there. And you have multiple different types of services, languages, platforms, et cetera, to consider when you're working in a distributed computing environment.
So how do we deal with this complexity? With distributed computing, or with microservices, what are the things we need to do?

Service discovery: we need to remove any dependency on the location of a service we're invoking. If whoever controls another service decides to move it to another IP address, you want your service to still be able to invoke it; you don't want to hard-code the IP address into your service. Retries: if there are failures communicating with services, you want to be able to retry, because you have to tolerate failure on the network. Similarly with timeouts: if you make an invocation to a service and it takes too long, you want to time out and take another course of action to compensate. Circuit breaking: if you have a service which is prone to failure, and you recognize that it's failing a lot, you want the circuit breaker to fire, to open. At that point you no longer send traffic to that service; you take some other compensating action and deal with it in a different way. Rate limiting is similar: you don't want to swamp the service you're invoking, so you control how much traffic goes to it. Load balancing: you want to distribute requests across multiple instances of a service so you spread the load, scale it up, and increase the throughput you're getting. Bulkheading: if you have requests that are blocking, you want that taken out and handled separately from your main application, not within it.

There's a whole load of these: edge routing, DMZ routing, per-request routing based on headers or identity or some other attribute of the request that's going through; A/B testing; traffic shaping; dark launches, where you put a new version into production but don't want everyone to access it, only certain people; traffic shadowing, taking live traffic from a service and duplicating it to another service you're testing, without impacting the live service; fault injection, for testing your application so you can see how it tolerates failures in the network or in the services themselves; zone-aware load balancing; health checks; stats; telemetry; logging; distributed tracing; and of course security, which is the big one you should be concerned about.

So what do distributed systems tend to need? If you're starting from a distributed computing perspective, you have your application and it pulls in lots of third-party libraries to fulfill some of those requirements. You'll have, for example, something to manage the configuration of your service, something to manage discovery, the routing behavior, circuit breaking, tracing, whatever it happens to be. Those all get pulled into your application and compiled into it. It doesn't matter whether that's Java, C++, or whatever your language is: you need some way of dealing with failures, and it tends to be through third-party libraries. Netflix OSS has a lot of these, which are popular on the Java side; if you're using other languages, there are alternatives as well.
There is, however, a problem with this, which is that maintenance really is a nightmare, because all these capabilities you're building into your application are now tightly coupled with it. If you have applications written in different languages, or using different frameworks, you start to see incompatibility issues between them, especially when you're looking at thousands or tens of thousands of services. If you have an existing application, it's very likely these frameworks will force you to redesign or refactor it in some way so you can embed the framework and take advantage of its features. And when it comes to upgrades, you now have the challenge of upgrading across all the services in your service mesh — sorry, distributed application — at the same time. That means if a library introduces a change, perhaps because of a CVE, and it changes the behavior of the application in any way, you need to make sure the other applications you're talking to can tolerate it and have the same fix in their libraries as well.

So there should be a better way, and of course there is a better way, and that's why we're here today: we're here to talk about Istio. Just before I go on, can I ask how many people actually have knowledge of service mesh in any way? Okay, and of those, how many know Istio? And how many have used Istio? Okay, I'd hoped there would be more than that — I think I counted five hands. All right, that's a surprise; I thought it was going to be better.

So you may know some of this, especially if you're connected with Red Hat in any way, but the obvious starting place is that you can do distributed services with Kubernetes. Kubernetes provides a fair bit of the features we talked about before: deployment, resiliency, elasticity for invocations, configuration management, resource management, et cetera. So that's one option for doing distributed services and fulfilling some of the infrastructure requirements. OpenShift is another, which is an extension of Kubernetes and provides additional features — so again, we can zoom in: logs, monitoring, release management, load balancing, et cetera. And of course the one we've come to talk about today is the Istio service mesh, which in our implementation sits on top of OpenShift, and in the upstream community sits on top of Kubernetes. It provides load balancing, fault tolerance, traceability, observability, service security, chaos engineering (injecting faults), traffic shaping, et cetera, along with a number of other facilities.

Okay, so that's what we're here to talk about. The way Istio works is that it takes advantage of a proxy, and that proxy is Envoy — or actually an extension of Envoy. We add a number of additional capabilities into the Istio proxy which are not present in Envoy, to handle some of the features we want from Istio. Envoy itself can handle layer 3, layer 4, and layer 7 traffic; at layer 7 we've got HTTP, HTTP/2, and gRPC. There are other layer-7 protocols coming as well — Kafka is one that's underway at the moment, being developed in the Envoy community.
And it will handle service discovery, load balancing — basically most of the things we've already discussed and said were desirable for a distributed application. It's written in C++, so it's fast, it has a small footprint, and it can be configured dynamically, so there's no need to restart the proxy if your mesh configuration changes. You just send the updates to the Envoy proxy, it reconfigures itself, and off you go. There's actually work underway at the moment between ourselves on the Red Hat side and Google to implement something called incremental xDS, which is a capability for Envoy to take just incremental changes from the control plane, so you no longer have to send all the information. So this is getting leaner and faster, with a much smaller footprint, with all the changes we're doing going into the proxy itself.

The proxy is deployed into your environment as part of your pod; it's deployed as a sidecar. The unit of deployment within Kubernetes, and therefore OpenShift as well, is the pod, which consists of any number of containers that all share the same life cycle. If you scale up the number of pods, every new pod gets a new copy of all the containers in it; if you tear down pods, all their containers disappear too. They're created at the same time and they disappear at the same time. They also share the same network, so to all intents and purposes the containers appear to be running on the same box: they have access to the same IP interfaces, they can send traffic, and — more importantly in the Istio case — they can intercept traffic. Everything coming into the pod goes through the proxy, then to the application; everything going out comes from the application via the proxy to the outside world. So the proxy sidecar can intercept the traffic and handle all the infrastructure capabilities we're talking about with Istio.

So this is really what Istio gives us. Resiliency, for handling service failures: we can recover from them and handle them transparently. Observability: we can look at metrics, use Kiali (which we'll show later) to see a visualization of the service mesh, and look at Jaeger to get tracing information and understand how particular requests pass through your system. Traffic control: you can redirect traffic from one instance of a service to another depending on certain criteria — it could be something specific to a request, it could be general, it could be percentage-based; we'll show some of that later. Security: you can secure the network layer so the traffic is encrypted, which means if anybody does intercept it they can't change it and they can't read it; and for communication between services, you can verify both ends, so you know who you're talking to as well. Policy enforcement: that could be role-based access control as simple as saying only service A is allowed to invoke service B, no other service can, or it could be something much more complicated — maybe you've got API management in there as well. And chaos testing is a means to inject failures into the data path of your application so you can test whether your application is resilient. Those are the main features Istio gives you.
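Before moving on, here's what the sidecar model just described typically looks like in practice — a minimal sketch, assuming upstream Istio's automatic injection via a namespace label; the per-pod annotation is an opt-in mechanism used by some distributions. The `bookinfo`/`productpage` names and the image are illustrative, not the speaker's actual manifests.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: bookinfo            # hypothetical namespace for illustration
  labels:
    # Upstream Istio: ask the injection webhook to add the istio-proxy
    # sidecar to every pod created in this namespace.
    istio-injection: enabled
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: productpage
  namespace: bookinfo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: productpage
  template:
    metadata:
      labels:
        app: productpage
      annotations:
        # Per-pod opt-in used by some distributions; redundant when the
        # namespace label above is present.
        sidecar.istio.io/inject: "true"
    spec:
      containers:
      - name: productpage
        image: example/productpage:v1   # placeholder image
        ports:
        - containerPort: 9080
```

Once injected, the pod runs the application container plus the proxy container, and iptables rules redirect all inbound and outbound pod traffic through the proxy — which is what lets Istio do everything that follows without touching the application.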
This is a kind of high-level overview of Istio itself. It's split into two different areas. We've got the data plane: communication from service A comes down to its proxy there, goes across to the other proxy, up to B, then B invokes service C. So that's your data plane — all the invocations your application is making, service to service. We've also got the control plane at the bottom, and on here we've got Pilot, the telemetry and policy components (Mixer), and Citadel. Those are the main components. There are some other components as well, but for day-to-day use these are the main ones.

Citadel is responsible for giving each of your applications — or, more specifically, each of the service accounts running your applications — a unique identity. It creates a key for that identity, and that identity is then used for mTLS, so you can verify the identity at both ends of the communication. Pilot is responsible for tracking changes in the service mesh configuration. It talks to a component called Galley, which handles all the Kubernetes stuff. When Pilot is told about configuration updates, it pushes the changes out to the Istio proxies, which ensures the proxies are kept consistent and have the same view of the service mesh. It doesn't matter whether you're in one namespace or another; Pilot sends the information to them all. And the telemetry and policy component is there to handle metrics and policy enforcement. Part of that is out of the data path, part of it is in the data path; it depends what you do.

This is the ecosystem as it currently stands for the Istio service mesh. You've got Istio in the middle there, and we take advantage of other projects as well: Jaeger to do the tracing, Kiali to do the visualization of the service mesh and understand how your services are communicating, and Prometheus and Grafana to deal with metrics and alerts from your services.

Okay, so that was a high-level, quick overview of what Istio is. I'm going to go through some of the other capabilities and do some demos and the like, but before I do that, does anybody have any questions on any of that?

"Sorry, on the slide before — what happens if a pod is in another namespace? There are three pods, but it assumes they're all in the same namespace." So it doesn't assume they're in the same namespace. What it assumes is that you have a flat network across the namespaces: if the namespaces are linked, they can communicate with each other. The reason that's important is that when you invoke a service, the Istio proxy actually talks to the pods themselves — it goes to the pod IP addresses, it doesn't use the virtualized IP address. So it needs network visibility of what would otherwise be a private IP address in that namespace. At the moment, with the tech preview we have, or upstream, that linking isn't done for you. The intention for GA is that we will have a service which makes sure those namespaces are linked together — it will automatically join the namespaces, creating a flat network for those particular namespaces. And the approach we're taking for GA is soft multi-tenancy rather than hard multi-tenancy.
So we will potentially have multiple control planes deployed into an instance of OpenShift or Kubernetes, and each of those will control a separate subset of namespaces. They'll have visibility of those namespaces but not the others, and the same goes for the other control planes. There will be replicated copies of Jaeger, Kiali, Elasticsearch, et cetera within each of the control planes as well. Any other questions on any of that?

"Do you need to run Istio as a cluster admin?" At the moment yes, but very soon no. One of the guys on my team has been working really hard to get rid of that. One of the challenges we have at the moment is configuring iptables, and there was an effort done upstream in the community to replace the init container we currently use alongside the sidecar with a CNI plug-in. That gets rid of one of the big headaches: you no longer need to manipulate iptables within the application namespaces; it's done outside of that. There are other things as well. At the moment on OpenShift we require privileged access to do the iptables manipulation and a couple of other things, and we require the anyuid SCC as well. Those are also being taken care of one by one. So when we get to GA — well, actually, probably this sprint or the next sprint rather than GA — you will just be able to deploy an application as a normal user and it will just work. So we're close; we're not quite there, but it's getting close. Any other questions on that?

All right, let's get on with the fun then. Okay, let's start with observability. So, if everything is working nicely... okay, that is Bookinfo, and we've got Istio there as well. So what I'm going to do, first of all, is take a look at monitoring — we're going to look at Grafana and Prometheus. I'll get the application up, we'll put some requests through it, and then we can take a look and see just what's going on on the Grafana side: you can see Grafana in action, you can see the metrics being captured. So I'm just going to put some load on Bookinfo first and then open Grafana. Okay, that's better. If you go to the workload dashboard, you can choose Bookinfo, choose product page, which is the first service we'll see, and I'll change this to refresh every five seconds, and then we can start to see traffic coming in, metrics coming into the system. There are quite a few metrics you already have within Grafana itself, but obviously you can create your own based on the information being captured, and not only that, you can set alerts based on the metrics as well. If there's a certain threshold you want to be notified about — say the failure rate suddenly spikes on one of your services and you want to know about it — you can set an alert for something like that. So that's Grafana; that's the metrics side of observability.

Now, I'll just leave that running in the background. The next thing we can use to observe what's going on with the service mesh is Kiali, which gives you a graphical visualization of the services and their interactions. So I'll bring Kiali up, okay. We can look at the graph, choose the Bookinfo namespace, and set the edge labels to show requests per second. Okay, so you can see the traffic that's going on there. Let me see if I can zoom in on some of these.
So the green lines are the ones which are active, and the numbers on them are the requests per second at that particular time. You can see there's quite a lot of work going on within the service mesh at the moment, within Bookinfo. If I go back and kill that load generator, and change this to refresh every five seconds as well, you'll start to see those drop off; it'll eventually quiesce because obviously there's no load going into it. Excuse me. So this gives you quite a good visual representation of what's going on within your service mesh. It gives you a much better understanding — in this instance there are only, what, one, two, three, four, five, six services running, but it could be 1,000 services, it could be 10,000. So this gives you a much easier way of seeing where the traffic is, which parts of your service mesh are under load, and which services are communicating with which other services.

Okay, so while that quiesces, the next thing we'll look at is tracing, and for this we use Jaeger. When a request comes into the system, we give it a transaction ID if it doesn't already have one, and from that point onwards the transaction ID is propagated through each of the service invocations. So we can link the invocation in one service with whatever it does in the other services as well, and you can get a complete graph of how that particular request has gone through your system. So if we take a look at Jaeger — okay, choose the ingress gateway, and we can look at the spans. Let's see — I tell you what, let's do this one, because this one took quite a bit of time. You get a graphical representation of the call graph. So here, this is the ingress gateway: when the request comes into the system, its first stop is the ingress gateway — that's its entry into the service mesh, and from that point on everything else is within the service mesh. We've got a gateway set up there which invokes product page, and then product page itself calls details — so this is the client side in product page, and this is the server side in details. And then again we've got a client side in product page calling reviews, and this is the server side of reviews. Okay, so you can see how the calls are going through the system and how long each takes. If there's one in particular you want to look at, you can drill into it; the metadata captured by the proxy and sent to Jaeger is there, so you can take a look and see if there's something peculiar. If there are particular issues you've identified — something taking too long — you can use that to try to replicate it and track it through other logs and the like. So you have that information on your requests as well. That's Jaeger and how it's used.

Okay, so that's the observability part: Grafana, Kiali, and Jaeger, and they all target different areas.
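Incidentally, the ingress gateway that appeared in the trace is itself configured declaratively. A minimal sketch, along the lines of the standard Bookinfo sample (the names and port are the sample's, used here for illustration): a Gateway binds a listener on the shared ingress gateway pods, and a VirtualService attaches routing for it.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway   # bind to the mesh's ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productpage
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway        # applies to traffic entering via the gateway above
  http:
  - match:
    - uri:
        exact: /productpage
    route:
    - destination:
        host: productpage
        port:
          number: 9080      # Bookinfo's product page port
```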
The next thing I want to show you is traffic control, and one of the examples you can do is a canary deployment. If you want to deploy a new version of a service into production, but you don't want normal traffic to hit it, you can set up a rule so that only specific traffic reaches it. In this example, if you happen to be somebody who works in the Boston office, you get to see the new version — you get version three; everyone else gets version two. So most of your usual traffic hits v2, and only specific people get to hit v3. We can do something like that very easily, and bear in mind that when I make these changes I'm not touching the running services — they're all still running; these are all declarative changes. So what we're going to do here — the bit that's important is that part here. The first part says: if a request comes in and it has the end-user header set to "kevin" — no, you don't need to guess why I chose that name — then it goes to v3, and anybody else goes to v2.

Okay, so — one thing I haven't shown you so far... oh, that's not the one I want. Okay, if we look at Bookinfo, you can see on the right-hand side that at the moment, with no rules in place, we're cycling through three different back ends for reviews: version one is the one with no stars, version two is the one with black stars, and if we get red stars, that's version three. So if we create those rules with Istio, we're now stuck on version two, because we said all traffic apart from Kevin should go to version two. If I log in as Kevin, I get red, which is version three. If I sign out, we're back to black; if I sign in as, say, Bob, we're still on black. It's only when I sign in as Kevin that we get shipped to v3. So again, that's declarative: there are no changes to the application to make that happen; it's all handled by the proxy, at the infrastructure level. So that's a canary — you can do that with a canary.

Weighted routing: you can do the same kind of thing if you just want to test a new version of an application — whether it's performance or behavior you want to test, whatever it happens to be — by specifying that a certain percentage of traffic goes to it. So we'll do that with Istio here. Again it's just a matter of specifying the rules: we've got two destinations there, v1 and v2, and they have weightings — v1 is 95 percent, v2 is 5 percent. So let's create that, and I'll put some load on the system so we can see something going through. And if we go back to Kiali — okay, let me change this so it shows percentages — you can see v1 is receiving roughly 95 percent. It's not exact; it's a close approximation. And v2 is receiving 6 percent of the traffic. Then it changes to 94 — is it getting even closer? But it fluctuates; it's not guaranteed to be always 95 percent on the dot, or 5 percent. So that allows you to send traffic to different versions of a service based on a particular weighting, so you can see how v2 performs versus v1 — it doesn't matter whether it's a functional change or a performance change or whatever it is you want to test; that gives you the ability to do it. Okay, so let's pull those down. That's weighted routing with Istio.
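For reference, here's a minimal sketch of what those two routing rules typically look like in Istio's v1alpha3 API, modeled on the Bookinfo reviews service. The subset names and the "kevin" header value mirror the demo; treat the exact resources as illustrative rather than the speaker's actual files.

```yaml
# Subsets map Kubernetes pod labels to routable versions.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
---
# Canary by header: the "kevin" end-user goes to v3, everyone else to v2.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: kevin
    route:
    - destination:
        host: reviews
        subset: v3
  - route:                   # default route for everyone else
    - destination:
        host: reviews
        subset: v2
---
# Weighted alternative (this would replace the VirtualService above):
# roughly 95% of traffic to v1, 5% to v2.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 95
    - destination:
        host: reviews
        subset: v2
      weight: 5
```

Note that match blocks are evaluated in order, which is why the header rule comes first and the catch-all route last.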
Dark launches with Istio: this is when you want to put a new version of an application into production and test it with production traffic, but you don't actually want to release it into production yet. You want it to get traffic from the original production system and handle it as if it were a request coming in from your clients, but you don't respond from it. What happens here is that Envoy will duplicate the traffic: service A's Envoy duplicates the traffic going from A to B v1 and also sends it to B v2, but it just ignores any responses it gets. It sends and forgets — that's what it does. So again the configuration is pretty simple: you have your normal destination there, which is v1 of reviews, and all you're saying is mirror the traffic, in this case to v3.

So if we were to put that into... well, actually, I'll tell you what: let me deploy it without the mirroring first and show you what that looks like, and then we'll go to Kiali again. Okay, we should be seeing some load there... come on, Kiali... there we go. So now we are seeing traffic coming in. At the moment you can see we've got a green path coming in here, going to v1 and straight on to ratings; there's no traffic at all to v3. Okay. So if we now add the mirroring — I'll just keep that load going in the background — then hopefully this will start to become clear. What will happen is that the proxy for v1 will start sending traffic to v3 as well. Now, you won't see it as a client invocation, but you will start to see v3 creating traffic as a result. There we go: we've got v3 now creating traffic. This line here is still gray — it's not showing up as part of the transaction — but we now have other traffic being created here, which is the mirrored traffic and everything v3 is doing on its behalf.

Oh, okay. I've never actually seen that happen before, and I only updated Kiali to the latest version the other day as well. All right — the demo gods are not shining on me. Let me see what happens if we start again... oh no, my demos have gone... okay, there we go, we're back. I'm sorry, I don't know what that was; I'll check the logs later and see if there's anything to feed back to the Kiali team. Okay, so that's mirroring; that shows you how to do that.
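A sketch of the mirroring rule just described, again on the v1alpha3 API with the demo's Bookinfo names: live traffic is routed to reviews v1, and a fire-and-forget copy of each request goes to v3.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1      # the live version; its responses go back to callers
    mirror:
      host: reviews
      subset: v3        # shadow copy; responses from v3 are discarded
```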
Okay — security, TLS. At the moment none of the traffic there is being encrypted. So if we go back to here: we have product page, and we're currently accessing it through the ingress gateway. The request comes in through HAProxy, which is the ingress into OpenShift; that invokes the ingress gateway, which is the ingress into the service mesh itself. From that point on, if you enable mTLS, all that traffic will be encrypted. But with OpenShift we can actually bypass the ingress gateway and go straight to the service. That's not something we suggest you do as part of the service mesh, but I'll show it for one particular reason: when I enable mTLS, you'll see that any requests coming in through that route stop working, because you're not a client running within the context of the service mesh, so you have no identity as far as Istio is concerned, and certainly no key for proving your identity with mTLS. So we've got the app running at the moment, and this here is bypassing the ingress gateway: it's going into HAProxy at the front end of OpenShift, bypassing the gateway, and going straight to the Kubernetes service that fronts the workload in the mesh.

So you're coming in as an external client to the service mesh, and if you don't have mTLS, anybody can do that — anybody can access your service. But if we turn on mTLS — so we set it up here, some rules saying that everything in the bookinfo namespace has mutual TLS set up, and the default policy is mTLS — then anything within the service mesh will still be able to talk to your service. So this route still works, because it goes in through the ingress gateway, which is the correct ingress for the service mesh. And this one stops working, because now the far end of the connection is saying: I need mTLS; you need to prove to me who you are. I don't know who you are, because you're not starting a TLS connection with me and you're not giving me the right credentials, so it drops the connection. Okay, so that's what mTLS does: it not only encrypts the traffic on the network, it also enforces identity, and enforces that requests are coming in from somebody within the service mesh. And you can take advantage of RBAC as well, so you can specifically say that only certain identities within the service mesh can access your service. If you've got A, B, and C within the service mesh, you can say C can only be invoked from B — you shouldn't allow A to get there. That means if service A becomes compromised, it doesn't have access to the rest of your applications. Okay, I'll just bring that down in the background, because that was actually the last of my demos — that was as far as I got.

So, fault tolerance aspects. Circuit breakers — we talked about circuit breakers before. The purpose of those is to handle services that are repeatedly failing: you don't want to keep sending requests to those services because you suspect there's a problem with them. So you would say, for example: if I get 20 errors within a certain period, that means I no longer trust this service, and I want the circuit breaker to trigger. At that point the Envoy proxy on the client side will simply not send requests to the server; it will just return an error, and your application can handle that and recover as it sees fit. So in this instance, if service C were to become heavily loaded, you could trigger the circuit breakers for B-to-C and A-to-C as well, so they would no longer attempt to invoke that service. Timeouts and retries, again, are something you can specify declaratively: you can specify timeouts on your invocations, and a number of retries before giving up and returning an error to the client, and Envoy handles that on your behalf — it's not something your application needs to know about. Rate limiting is similar: maximum number of connections, maximum concurrent requests going to a service — you can specify that and choke the number of requests going into a service so you don't overload it. And chaos engineering, injecting faults: you can inject, say, a 10-second delay into operations. In this example here, we say that 10 percent of the requests going from B to C should have a 10-second delay added.
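The rules behind all of these are small declarative resources too. Below is a minimal sketch of each, assuming the Istio 1.1-era APIs and hypothetical Bookinfo-style names; the thresholds (20 errors, 10s timeout, 3 retries, 10% / 10s delay) just echo the numbers used in the talk.

```yaml
# 1. Namespace-wide mutual TLS: require mTLS on the server side, and tell
#    clients in the mesh to originate Istio mTLS when calling these services.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: bookinfo
spec:
  peers:
  - mtls: {}
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
  namespace: bookinfo
spec:
  host: "*.bookinfo.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
---
# 2. Circuit breaking: eject an instance after 20 consecutive errors, and
#    cap connections/pending requests so the service can't be swamped.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ratings-circuit-breaker
spec:
  host: ratings
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 10
    outlierDetection:
      consecutiveErrors: 20
      interval: 10s
      baseEjectionTime: 30s
---
# 3. Timeout, retries, and fault injection handled by the client-side proxy.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percent: 10        # inject into 10% of requests...
        fixedDelay: 10s    # ...a fixed 10-second delay
      # abort:             # or fail fast with an HTTP status instead:
      #   percent: 10
      #   httpStatus: 400
    route:
    - destination:
        host: ratings
    timeout: 10s           # give up on the call after 10 seconds overall
    retries:
      attempts: 3          # retry up to 3 times before surfacing an error
      perTryTimeout: 2s
```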
That tests how service B tolerates long operations, long requests: does it handle a request that takes 10 seconds, or does it fall over? You find out now, rather than in production when it happens for real. Similarly, you can do HTTP status code responses as well: rather than everything being rosy and getting your 200 back with your response, you throw a 400 or something else and see how your application handles it. And again this is declarative: you choose where to do it, you choose the percentage of requests to apply it to, and it gives you an opportunity to force your application to fail and see how it tolerates failures.

Okay, so those are the areas I was going to cover. There is a lot more — we didn't do policy or anything like that, because that's usually much more involved. We integrate with 3scale, which is an API management system; as of this sprint we've got that integration in there, and you've got the usual RBAC stuff in there as well. So I haven't covered any of that, but I'll just take a second to tell you about OpenShift Service Mesh. That's our distribution of Istio: we take the upstream Istio and we create CentOS versions and a RHEL version of it, and it's intended to go into OpenShift. We're about to GA; with all the delays from things like OpenShift and the upstream Istio, we'll be GA roughly at the end of April, and that includes Jaeger, Kiali, Prometheus, and Grafana as well. Maistra is the upstream name — everything is open source; that's where we have it. We integrate with the application runtimes, so whether that's Spring Boot or Vert.x or Node.js or — I was trying to remember the new name — Thorntail, thank you very much — we integrate with those as well, and they've got demos showing those languages being used with Istio. And we integrate with 3scale for API management as well, so you can include integration with 3scale through Mixer: you can specify policies in the 3scale system and have them enforced in Istio as well.

The other thing is — I don't know if anybody's looking for a job, but we have two recs open this quarter and I've got another rec next quarter, and I'm trying to build up the team as quickly as possible. So seriously, if anybody is interested in working on this, let me know and we'll get you to apply.

Okay, resources. These are good books: Christian Posta and Burr Sutter have done these, and Clement Escoffier has done the one on reactive microservices with Java. The one on the right, Istio in Action, is currently in the early access program — you can get hold of it as Christian writes it; he's a fair way through, but he's still got a way to go, and it's definitely worth pursuing. Christian no longer works for Red Hat, unfortunately — he recently took on another opportunity — but he's been an absolutely fabulous help to my team, very experienced with Istio; he knows a lot of stuff and he's definitely somebody to pay attention to. There are some labs we have as well that you can run through: learn.openshift.com/servicemesh, or you can go to bit.ly/istio-tutorial and run through those labs. There are PDFs of these slides available, which I think should be on the conference website or something.
I saw some people taking photographs, but you should be able to get it from the slides as well. And that's it — questions? I think we've got five minutes left for questions.

Okay, so the question was about the layer-7 protocols being added to Envoy proxy, since I mentioned Kafka specifically earlier on. There are more that have been suggested, but as with anything open source, it's a community project, so somebody needs to step up and do the work. The Kafka work is already underway — that's been going for a while — and it's something we're also interested in on our side. We already have people working in Envoy to do OpenSSL support for Envoy as well as incremental xDS, and then they can move over and look at doing Kafka and things like that as well. So Kafka is definitely underway; it's a work in progress — there's a PR already for it, but it's tagged as WIP, so it's not something they would merge yet, but it's in development and quite far along the path. I'm not aware of any others, but it's open source, so if there's anything you need, create an issue in GitHub for Envoy proxy and somebody will look at it. Okay, any other questions? Yes.

So the question was: what should we expect in the next month, and where can we see a roadmap? My question back to you is: are you talking about community, or are you talking about product — which roadmap are you thinking about? Okay. On the OpenShift side, we are GA'ing as a product towards the end of April. OpenShift 4 has slipped — it's now, I think, mid-April — and we are, I think, a week or a week and a half behind them, which gives us time to do last-minute testing to make sure we're ready before we release. As far as community goes, that's going to be based on Istio 1.1, which is still being worked on in the community. Our tech preview images are already using the release-1.1 branch as of about a week ago — I think it was last Friday, just over a week back, that we took the snapshot to productize. Those tech preview images will be released this Wednesday, and that includes the 3scale stuff, so you can play around with that now, either RHEL-based or CentOS-based — we do community ones and we do product ones through the OpenShift registry. Release 1.1 for upstream Istio is, in theory, going to be, I think, February 21st, but there are still quite a few P0s open that need to be dealt with first, so there's a possibility that either they all get fixed in the next week and everything's fantastic and on track, or it slips again. But it's not too far away from it now. Our GA will be based on Istio 1.1, and we're currently looking at towards the end of April. Okay, any other questions? I don't know why your hand went up slowly there...

So, Knative is based on top of Istio: they take our stuff and add the serverless pieces on top of that. Yes — they take the images that we create for Istio and base theirs on them; they just add in the serverless stuff on top. So we are underpinning Knative. Okay, any... yep — so, do you still need, say, a WAF? A web application firewall — so, yes: I wouldn't say don't do it, because I think more security is good security.
Security is one of those things where you get different vulnerabilities in different areas, and it's better to be protecting in depth than relying on one particular thing. There are certain things that Istio does really well for security — mutual TLS, that kind of stuff, and the role-based access control, so you can restrict who can invoke what — but it's not protecting the applications themselves. So anything which can reduce the impact of vulnerabilities in your application, you should still do. Yes, definitely.

I think that's us done, so thank you very much for your time.