Hey, welcome everybody to another OpenShift Commons briefing. Today we're once again live streaming on Twitch and through BlueJeans, so bear with us and ask questions in the chat if you hit any technical issues. We're thrilled to have with us an OpenShift Commons member, Kinvolk, and Alban Crequy, who has been with us before, to talk a little bit about a new project he's been working on that's making life better for all of us Kubernauts. And I love the title: "Unleash Your Clusters' eBPF Superpowers with kubectl gadget." I'm going to let him explain what they've been doing over at Kinvolk, and at the end of this we'll do some live Q&A and have a bit of a conversation about what's going on at Kinvolk these days, because there's lots of news over there. So Alban, take it away. Tell us what superpowers you're enabling for us today.

Thank you. Yes, I will talk about eBPF superpowers with kubectl gadget, or oc gadget, also known as Inspektor Gadget. First, my name is Alban. I'm a co-founder of Kinvolk and director of Kinvolk Labs. That's a new thing we announced recently at Kinvolk: we have long done consulting and open source work around Linux and Kubernetes, and we recently announced that we have a dedicated team for consulting around Kubernetes. Inspektor Gadget is one of the projects we work on there, and it's what I will present now.

First, I will describe the problem statement a bit: why do we work on Inspektor Gadget? Debugging a distributed application is hard. When you have an application running on Kubernetes and something is not right, it can be difficult to debug. At the same time, we now have a lot of eBPF tracing tools on Linux, but being available on Linux doesn't necessarily mean they are easy to use on Kubernetes. The goal of Inspektor Gadget is to close that gap: to make it easy to use eBPF tracing tools to debug your applications on Kubernetes.
So Inspektor Gadget is not just one tool, but a collection of gadgets for developers of Kubernetes applications. We now have a channel on the Kubernetes Slack for Inspektor Gadget, and it's an open source project available on GitHub. So I will talk a lot about Kubernetes and eBPF here. As I mentioned, there are many different eBPF tracing tools that you can use on the Linux command line. For example, there is bpftrace; BCC has a lot of tools, mostly for tracing but also for networking, and it's a good resource to learn new things about eBPF. I will talk about traceloop as well, because that one was designed for Inspektor Gadget, among others. To use these in Kubernetes, there are tools like kubectl-trace, which takes bpftrace and runs it at the Kubernetes level, and Inspektor Gadget, which takes different BPF tools and makes them available at the Kubernetes level, each with a specific use case that I will describe.

As I said, it has a lot of gadgets, and each gadget has its own use case. traceloop is for inspecting what went wrong when your pod crashed, by looking at the last system calls it made. There is a network policy advisor that shows what kind of network traffic your pods generate and helps you write network policies in a more automated way than just writing the YAML by yourself. Capabilities is another gadget that shows which capabilities are exercised by your pods, so you can write pod security policies more easily than by guessing what is happening. And then there are other gadgets to inspect what your applications are doing: what files are opened, with opensnoop; what programs are executed, with execsnoop; or what sockets are bound on which TCP ports, with bindsnoop; and so on. In my talk, I will have a few demos of these different gadgets.
If you want to try kubectl gadget, Inspektor Gadget, yourself, it should be easy to install on your laptop: you download the kubectl-gadget binary and then deploy the gadget DaemonSet on your cluster. You can do that with the oc gadget deploy command, because Inspektor Gadget is actually a kubectl (or oc) plugin, and then apply it with this command. If you want to reproduce the demo, I will use the OpenShift playground provided by Katacoda at this address, and you can fetch the slides and read the step notes at this link.

So let's start with the first demo. I will install it live here. Let me leave the slides and go to the Katacoda terminal. Here I have a terminal, and I will pull up my notes once again. The first thing I will do is fetch kubectl-gadget. I use a development version that is not released yet, because I added some last-minute fixes to make it work on OpenShift. I prepare the installation and make it available as a kubectl plugin, or oc plugin. To check that it works, I can type kubectl gadget version or oc gadget version and see that it works as a plugin for oc; you can see I'm using a development version here. Next, I will deploy it in this Kubernetes cluster, here on OpenShift. I use this command, which deploys the DaemonSet along with a couple of RBAC roles to let it run with the privileges it needs. Now, if I look at the pods, I should see them being created, and after a while I should be able to confirm by looking at the logs that it's running. So here I run the logs command, I see the version, and it seems to start correctly. Now, if I run oc gadget, I see the list of gadgets that are available in Inspektor Gadget, and that's it for the installation. I will go back to my slides. Okay, so I will present the first gadget, the network policy advisor.
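The installation flow just demonstrated can be sketched roughly as follows. This is an illustrative reconstruction, not the exact commands from the demo: the release URL, asset name, and namespace are assumptions, so check the project's README and releases page for the current instructions.

```
# Fetch the kubectl-gadget binary and put it on the PATH
# (URL and asset name are illustrative)
curl -Lo kubectl-gadget \
  https://github.com/kinvolk/inspektor-gadget/releases/latest/download/kubectl-gadget-linux-amd64
chmod +x kubectl-gadget && sudo mv kubectl-gadget /usr/local/bin/

kubectl gadget version                      # verify it is picked up as a plugin
kubectl gadget deploy | kubectl apply -f -  # deploy the DaemonSet and RBAC roles
kubectl get pods -n gadget                  # wait until the gadget pods are Running
```

With oc, the same plugin binary can be invoked as `oc gadget …`.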
The use case is this: a developer joins a project that already exists, with plenty of microservices and pods and so on, but no network policies have been written. It's difficult for that developer to start working on network policies when they don't know the architecture of the project. That's what I call security as an afterthought: security was not designed in from the beginning, and only after the project exists does someone think, oh, what about writing network policies? I will take this project as an example. It's a demo microservices project, and it has a lot of components that I don't know much about, so it's difficult for me to write network policies when I don't know which component talks to which, and so on.

Let me go demo that. I'm going back to the terminal. The first thing I will do is fetch the YAML definition from GitHub so I can have a look at it. It contains a lot of services and a lot of pods, and you can see it doesn't have any network policies. Then I will start the demo. Let me prepare it: I will start the Inspektor Gadget network policy monitor to watch all the connections while I deploy this application, so it should catch which pod is talking to which pod and so on. Here, the network policy advisor monitors everything that happens in the namespace "demo" and records every TCP connection into this file. So far, nothing has happened, because there is nothing in the namespace "demo" yet, so I will start by creating it. Once the namespace exists, I deploy all my services, and then I will check what is available there. This takes a bit of time, so I will go back to my slides while it is deploying.

Meanwhile, while this application is deploying, I can explain eBPF in a nutshell, how it works. I will not go into deep detail, but give an overview of what eBPF is.
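The monitor-then-deploy workflow just described looks roughly like this on the command line. The subcommand names, flags, and file names here are approximations of what the demo used, not a guaranteed current CLI; consult the gadget's documentation.

```
# Terminal 1: record every new TCP connection in the "demo" namespace
kubectl gadget network-policy monitor --namespaces demo --output ./networktrace.log

# Terminal 2: create the namespace and deploy the microservices demo
kubectl create namespace demo
kubectl apply -n demo -f kubernetes-manifests.yaml

# Later: turn the recorded traffic into suggested NetworkPolicy YAML
kubectl gadget network-policy report --input ./networktrace.log > network-policy.yaml
```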
eBPF is a kind of mini virtual machine in your Linux kernel, and the workflow is this: you write your BPF program in C, and then you compile this C program with Clang/LLVM into BPF bytecode. Once you have this bytecode, you can load it into the kernel with a specially designed system call, bpf(). The Linux kernel's first task is to check that this BPF program doesn't do anything wrong: it verifies that the program will not damage or crash the kernel, that there are no unbounded loops, that it will not make random memory accesses, and so on. If the verifier decides it's a well-behaved BPF program, the program is allowed to run, and it is executed on specific triggers. It could be a network trigger, running every time a packet arrives on a network interface, or it could be attached to system calls, so that every time a system call is executed, the BPF program runs. The program can then send messages to applications in user space via BPF maps. With this mechanism, we have a way that is safe for the Linux kernel to execute arbitrary BPF programs in the kernel and inspect what's happening there. Inspektor Gadget uses that a lot, and in the next slide I will show how.

Inspektor Gadget is a command line tool that you run on your laptop, and it communicates with the Kubernetes cluster only via the cluster's API server. It doesn't access your nodes directly, and it doesn't open any extra ports or anything like that. It's just a plugin for kubectl, and it uses first-class Kubernetes objects like pods and DaemonSets to deploy the gadget pod on all nodes. This gadget pod then executes the BPF tooling, whether that's traceloop, a BCC tool, or the network policy advisor. That program installs the BPF program in the kernel, as I showed on the previous slide, and then gathers events there that are reported back to the user.
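To give a concrete flavor of the write/compile/load/trigger workflow described above, here is a bpftrace one-liner (bpftrace compiles its script to BPF bytecode and loads it through the bpf() syscall for you). It attaches to the sys_enter tracepoint, the same trigger traceloop uses later in this talk, and counts system calls per command name. It needs root and a kernel with BPF tracing support, so it is shown here only as an illustration.

```
# Count syscalls per process name; Ctrl-C prints the BPF map contents
sudo bpftrace -e 'tracepoint:raw_syscalls:sys_enter { @[comm] = count(); }'
```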
I will go back to my demo and hope that it's running now. Perfect, it's running. So let's see: it recorded all the network events into this network trace log file, and I can now look at what it contains. It's actually one line per new TCP connection, with some metadata attached showing which pod is talking to which other pod. And based on that, there's a command to make sense of it all. It takes as input the log file that I created before and produces a YAML file containing network policies. It should contain useful things; let me make it a bit bigger. For example, here I have a network policy that applies to the shipping service pod, because it has a pod selector on the shipping service, and it has both ingress and egress network policy rules. And here I see it's allowed to have ingress traffic coming from the checkout service to the shipping service. There are a lot of other network policies here, all based on the real traffic that happened in the cluster. Of course, that's not something the developer should apply blindly and say, okay, now I have my network policies. But I find it a lot faster to take this as a starting point and copy what makes sense than to type a YAML network policy from scratch. Okay, that's the end of this demo about network policies.

The next gadget I will present is called traceloop. If I go into technical detail, what traceloop does is trace system calls, a bit like strace, but per cgroup, using BPF and overwritable ring buffers. That's a bit of a mouthful, so let me explain the use case. As a developer, I really like to use strace to debug my applications, because I can see what system calls they make, but strace can be difficult to use on Kubernetes.
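For reference, a generated policy like the one described for the shipping service might look roughly like this. The labels and port number here are illustrative of the microservices demo application, not the advisor's exact output:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: shippingservice-network
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: shippingservice
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: checkoutservice
    ports:
    - protocol: TCP
      port: 50051
```

The advisor produces one such policy per observed pod, scoped to the traffic it actually saw, which is exactly why it should be reviewed rather than applied blindly.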
First, it is a bit slow, so it's not possible to run strace for all the processes in all the pods on my Kubernetes cluster in production; that would just not work. And then there's my main use case: sometimes a pod crashes, and once it has crashed I want to debug it, but it's too late to attach strace, because the process is no longer there. I want a tool that is useful for pods that crash, sometimes unreproducibly, just a single crash, where I can't go back in time and attach strace. The idea of traceloop is that of a flight recorder: it always records all the system calls in memory, in a ring buffer of limited size. Then, if something crashes, the last few system calls are still in the ring buffer, and I can inspect them to see what went wrong. It uses one ring buffer per pod or container, so if a pod crashes, I can inspect the ring buffer for that one.

If I compare strace and traceloop, they are a bit different. strace uses the ptrace mechanism to get the trace; traceloop uses BPF on tracepoints. The granularity is different as well: strace traces one or several processes, while traceloop attaches to a cgroup or container. strace is slow because of the way it works; traceloop is fast, but on the other hand it can lose events. First, because the ring buffer is of limited size in memory, so when it's full, the oldest events are overwritten; but also, because of the way it works, it's possible to lose some events occasionally. strace, on the other hand, is reliable and never loses anything. But even if traceloop is not as reliable, it's still very useful for debugging applications in practice.

The way traceloop works, it uses a BPF program attached to a tracepoint called sys_enter, and that tracepoint fires every time a system call is executed on your system.
The first thing the BPF program does is find out which container it's running for. It can look at the cgroup, or use other means, to figure out: okay, this is this pod, or that other pod, and then it redirects the execution flow to another BPF program that logs the event into a per-pod ring buffer. Those ring buffers are never read; they are just buffers in memory, unless the user specifically asks, with the Inspektor Gadget command line tool, to dump the contents of a ring buffer into the terminal. This is why it's faster than strace: nobody looks at those ring buffers while everything is fine. Only when there is a problem is the ring buffer copied into user space for inspection, so there is none of the constant context switching that strace imposes.

Now it's time for a demo of traceloop. Okay, so first I will have a look at the pods I have on my cluster. I don't have anything in the default namespace, but I have a lot of pods elsewhere. I will have a look at what I can do with oc gadget traceloop; there are a few subcommands. The first one is list, to see what is available. As I mentioned before, I don't have any pod running in the default namespace, so I will look at other namespaces, and I can see I have other things running here. I will take the kube-system namespace as an example, and I see I have a few traces available. I will look at one of them: the show command, given the trace I picked, lets me see the last system calls made by the pod, even though it's currently running. I see the system calls were mostly polling, nothing very interesting.

For the second demo, let me clear the screen. I will start a new pod, this one, which performs a multiplication, tries to save the result to a file, and then prints that file. But if you look closely, my shell script is not really good, so I might not get the correct result. It takes a bit of time to launch the pod.
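The traceloop subcommands used in this demo, as a rough sketch (subcommand and flag spellings are approximations; the trace ID placeholder is hypothetical):

```
# List the per-pod traces the gadget currently keeps, across namespaces
kubectl gadget traceloop list

# Dump the ring buffer of one trace; this also works for traces of
# pods that have already crashed or been deleted
kubectl gadget traceloop show <trace-id>
```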
And then, because my shell scripting skills are not so good, I don't actually get the result of the multiplication. And even after I delete the pod, I can wonder: is the result of my multiplication lost forever, or can I recover it in some way? With Inspektor Gadget and the traceloop gadget, I am able to recover it. Here I see there is one trace that terminated a few seconds ago, and I should be able to do the thing, full screen again. Here I can recover the trace by picking the trace ID, and see the last few system calls made by the bc program and the script. Reading that, I can see that the bc program read the multiplication and then wrote the output, so I can debug what was happening, even though the pod was already deleted at that point. traceloop keeps the final ring buffer of a pod that crashed in memory for a few hours, which gives me a bit of time to look at and inspect things. Thanks. That's the end of this demo about traceloop. I will go back to my slides now.

I have more gadgets to demo that fill specific use cases: opensnoop, execsnoop, bindsnoop, profile, and tcptracer. Let me go to the next one. Today my internet connection is very slow.

Don't worry too much about the internet connection. It's lovely to see all these gadgets, and I'm curious, when we get to the Q&A, what other people are looking for in additional gadgets. bindsnoop is relatively new, I think.

Yes, some of them are relatively new; the profile gadget was added a few weeks ago, I think. Thanks. So I will start first with this execsnoop gadget. What it does: I specify on the command line which pods I want to monitor, using a namespace, a Kubernetes label, the pod name, and so on. Then, for every new process executed in the pods that match the criteria, I get a new line describing it.
In the other terminal, I will use the opensnoop gadget to get information about each new file that is opened. Don't worry too much about the error here; it will work. I'll move things around, but... it will work even though there is a warning about a kernel feature that is not available in this kernel. Now I will start the pod. Here I use this kind of anti-pattern, curl piped into bash, for the purpose of the demo: I execute a script without really knowing beforehand what it does. This is where Inspektor Gadget is useful, letting me inspect which programs are executed and which files are opened by a running pod whose contents I don't really control. It takes a bit of time because it needs to download the container image in this Katacoda environment. At the top, I see all the files that are being opened, and at the bottom I see the commands: for every new command, like sed, mkdir, curl and so on, I get a new line, while at the top I see the files being opened. All of that was done by attaching BPF programs and getting the results through Inspektor Gadget. Here I specified a single selector to choose which pod I want to inspect, but it's possible to match several pods at the same time, several pods with the same label, and so on. Let me find the full screen button again; I'm not used to this web console. If I remove this selector, for example, it will match all the pods in the namespace.

The next demo is about bindsnoop, and I will show that it can indeed match several pods at the same time. First, I start bindsnoop, selecting all the pods that run in this namespace, because I want to see what sockets are being bound and on which pods. In fact, there is no pod currently running yet, so in the bottom terminal I will start by creating this new namespace and then launching this nginx deployment.
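The two-terminal setup just described, sketched approximately (the label selector `run=mypod` is a hypothetical example; flag names may differ between versions):

```
# Terminal 1: one line per new process executed in matching pods
kubectl gadget execsnoop --selector run=mypod

# Terminal 2: one line per file opened by the same pods
kubectl gadget opensnoop --selector run=mypod
```

Dropping the `--selector` flag widens the match to all pods, which is what the bindsnoop demo below relies on.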
Here I set three replicas, so at some point I should see three containers here, and when they start, I should be able to see which options and which TCP port they use. I find this useful when I deploy a new container, nginx in this case: I don't know this image very well, and it's not always easy to know whether it will listen on port 80 or 8080 or some other port. When I write my Kubernetes services manually in YAML, I need to know which port it's listening on to be able to write them. And here it happened: I see all the bind system calls with their options. Here I see it's on port 8081 with the socket reuse option. That's about it for bindsnoop.

The next demo is about capabilities. Let me clean that up. Here I will start by creating a new shell; I use a busybox image to get a shell in my terminal. And I will execute some commands that require certain privileges: for example, creating a new network interface, chroot, ping, or listening on a privileged port. Some of these work and some don't. Just to show you: ping worked, and this one worked as well, but nc cannot listen on a privileged port. Here is an example where nc doesn't work, and it's difficult to know why: I just get "Operation not permitted," but I don't know which capability is missing from my pod to make it work. Or suppose I have to deploy a container image that performs operations I don't really know about, and I want to know which capabilities it exercises. I can use the capabilities gadget for that. For example, I will watch all the pods in this namespace and see if they do something here. If I repeat the nc command, I see it requires the CAP_NET_BIND_SERVICE capability. If I use ping, it needs CAP_NET_RAW. And if I try to create a new network interface, I see that CAP_NET_ADMIN was the capability the container attempted to exercise.
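The capabilities demo can be sketched like this. The gadget invocation and the exact commands run inside the pod are reconstructions (the flag names and test commands are assumptions), but the capability each operation needs is standard Linux behavior:

```
# Watch which capabilities pods in the namespace try to exercise
kubectl gadget capabilities --namespace demo

# Meanwhile, inside the busybox pod:
#   nc -l -p 80                    -> needs CAP_NET_BIND_SERVICE
#   ping -c1 127.0.0.1             -> needs CAP_NET_RAW
#   ip link add dummy0 type dummy  -> needs CAP_NET_ADMIN
```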
The goal is to be able to write pod security policies in a more informed way than just allowing every privilege: I can specifically add the capabilities that I really need, rather than allowing everything. That was it for the capabilities gadget.

The next gadget I will demo is the CPU profiler. To demo it, I first start the profiler with this profile gadget, matching everything that happens in this namespace. I specify this option because I want to see the kernel stacks of everything that happens in these pods. This gadget doesn't display anything until I stop it. Here, I think I am in the pod, right? Yes. If I run this command, it doesn't print anything, but it should take a lot of CPU. Look at that: I have this cat process that takes some CPU, but if I want to see what it is actually doing in the kernel, I use the CPU profiler. I stop it, and I get statistics about the most frequently sampled kernel stacks while the CPU was busy in this pod. The most frequent is at the bottom: I see the user space process is cat, and here is the kernel stack that took most of the time. There is the read system call, which goes to vfs_read in the kernel, and then to a kernel function called urandom_read, and so on. Using that, I can pick out the pods that are slow and figure out why they are slow, what they are really doing. There is this option, -K, for getting kernel stacks, but there is also an option, -U, to get stacks from user space; that doesn't always work, it's still a bit in development. By the way, all these tools come from BCC, so this is not something that was reinvented in Inspektor Gadget. Inspektor Gadget mostly takes existing tools, in this case from BCC, and adapts them so they can be used in Kubernetes.
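A sketch of the profiling session just shown. The load-generating command is inferred from the urandom_read stack mentioned above; the gadget's flags and the pod name are approximations:

```
# Sample kernel stacks (-K) for pods in the namespace until interrupted
kubectl gadget profile --namespace demo -K

# In the pod under test, generate CPU load in the kernel read path:
#   kubectl exec -it mypod -- sh -c 'cat /dev/urandom > /dev/null'

# Stopping the gadget prints the sampled stacks, most frequent last,
# e.g. sys_read -> vfs_read -> urandom_read for the cat process
```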
This CPU profiler was a tool I used recently with a customer to find out why something was slow in a Kubernetes cluster. If you're interested, you can read about it in a recent Kinvolk blog post. Okay, and the last gadget I will demo today is called tcptracer. First, I start the tcptracer gadget, which traces what's happening in this namespace, and then I execute a few things here. It catches all the TCP connect, TCP accept, and TCP close events, all the events for TCP connections. If I create an incoming connection, I will see it here as well, and I can run things several times. There are some bugs here, this is printing two lines instead of one, but you get the idea: I can see what's happening in the different pods. Just for fun, let me look at a different namespace. I'll hit the full screen button again. If I'm lucky, I will see some new TCP connections here, or maybe not. But you can inspect what your different pods are doing with incoming and outgoing TCP connections that way.

So I will go back to my slides and explain how things work. In a lot of these demos, it was mostly tools from BCC that were adapted for Inspektor Gadget to run on Kubernetes. What do we actually need to adapt an eBPF tracing tool to Kubernetes? What I want is to trace pods: users don't usually care about process IDs. It's more useful to select what to trace with Kubernetes labels or Kubernetes namespaces than with the PID of the process you want to trace; when you have a lot of machines, PIDs are not practical. And I want a kubectl-like user experience: don't ask developers to SSH into worker nodes to debug, let them do it from the comfort of the kubectl command line.

Another component used in Inspektor Gadget is called the Gadget Tracer Manager, and to explain it I will show this command.
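The tcptracer demo above amounts to something like this (flag spelling is an approximation):

```
# One line per TCP connect / accept / close event, annotated with
# the pod and namespace the connection belongs to
kubectl gadget tcptracer --namespace demo
```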
With the execsnoop gadget, or the other gadgets, I can select the pods I want by label, by namespace, by pod name, by node, or, if a pod has several containers inside, by container name. I can combine one or several of these criteria to select the pods I want. That makes things a bit complicated, because the BPF program in the kernel doesn't know anything about Kubernetes labels or Kubernetes namespaces, so we need to match these filtering criteria against what's happening inside the BPF program. Another difficulty is that pods can come and go, and tracers can come and go as well: a pod can crash and the replication controller will start a new one, so during the execution of a gadget, pods come and go. Also, pods don't always have predictable names; in this example there are generated suffixes. And sometimes one pod can be traced by several gadgets at the same time, depending on the filters.

The solution is this Gadget Tracer Manager, a daemon running in a DaemonSet on all nodes. It implements a very simple gRPC API through which the daemon can be informed of new tracers and new containers; there are roughly four methods in this gRPC API. On one side, it is informed of new containers using OCI hooks: there is an OCI prestart hook, so every time a new container is created, the hook issues a gRPC call to the tracer manager to inform it. On the other side, every time I start a new gadget with kubectl gadget or oc gadget, the CLI uses the Kubernetes API to execute a wrapper script on the node, and that wrapper script calls the gRPC method to register the tracer. In this way, the tracer manager knows about all the tracers and what they want to do, and about all the containers, which labels they have, and which Kubernetes namespaces they are running in. With that information, it updates the BPF maps: there is one BPF map per tracer, and each map contains the list of containers that it should trace.
So when containers come and go, these maps are updated. And these BPF maps are consulted by the BPF program: in this case, the execsnoop gadget runs a BPF program with a kprobe on a system call, and in the BPF code it checks the BPF map. It looks up whether the current cgroup or current process is in the configuration that was set up by the Gadget Tracer Manager, and if it should not be traced, it just returns without tracing anything. That's how pod selection works. This way, the BPF program doesn't need to do any string comparisons with Kubernetes labels and so on, which would be difficult to do in BPF; it just has to look up in a hash map whether the container should be traced, and that's fairly quick. If you want, I can show on the command line how this Gadget Tracer Manager works, if we have time for it right now.

We've got a little bit of time. I'm not seeing a whole lot of questions in the Q&A, so just keep going; so far you've hit everything that I was going to ask, so go for another one.

Okay, so I can show it then. Let me go back to the terminal. What I will do is get my gadget pod; this command just gets the name of the pod here, and then I will get a shell in that pod. Let's see. So here I am inside the gadget pod of Inspektor Gadget, and here I see the entrypoint and some scripts, and there is this gadget tracer manager command. It's actually a command line interface; the only thing it does is call the gRPC API from the command line.
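Poking at the Gadget Tracer Manager from inside a gadget pod looks roughly like this. The namespace, binary name, flags, and socket path are approximations of what the demo shows, not a documented interface:

```
# Get a shell inside one of the gadget DaemonSet pods
GADGET_POD=$(kubectl get pods -n gadget -o name | head -1)
kubectl exec -it -n gadget "${GADGET_POD#pod/}" -- sh

# Inside the pod: the same binary that serves gRPC can also query it
gadgettracermanager -dump            # list known containers and tracers
ls /run/gadgettracermanager.socket   # the local Unix socket serving gRPC
```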
I can show you that it is running here; it was launched by the pod with the serve option, so it serves the gRPC interface. With the dump option, it dumps the list of containers: here I don't have any tracer running, but I see a list of containers with some information. It also fills BPF maps that I would be able to see in this directory, but since I don't have any tracer running, I don't see them; that's where they would appear. And the gRPC interface is not exposed externally; it just has a Unix socket, listed here at /run/gadgettracermanager.socket. It's a Unix socket, like Docker and others use, which is not exposed to the internet. Okay, that was just a quick look at this component.

If you want to contribute to Inspektor Gadget, you're welcome: it's an open source project, it has a GitHub page with issues, and I try to use the "good first issue" label for issues that are easy, or where I can provide guidance about how to get started; that goes for both Inspektor Gadget and the traceloop project. I try to make it work on many Kubernetes distributions, and I'd say it's at an early stage in that regard: there is no release that works on OpenShift yet, but there are pull requests with the demo code I showed here, and I hope to have a release which works nicely on OpenShift, like I showed today.

That would be great. Have you been in contact with any of the OpenShift development team yet? I have not, not yet. We will get you connected; I think that would be cool. That would be cool, and I would like to see this working. It's interesting to me because Kubernetes is a wonderful, high-level, abstracted thing, but to really debug it and work with it at the granular level that you're showing, and to make that easy to do, is pretty awesome. So I'm curious: all of the gadgets that you've shown us today, for the most part, were they contributed as open
source contributions, or are they things that have come from Kinvolk engineering teams? What sort of community do you have around Inspektor Gadget at the moment?

I can go back to the list of gadgets and where they come from. Most of them come from the BCC project: opensnoop, execsnoop, bindsnoop, all the ones at the bottom, and capabilities as well; they come from BCC. Of the two at the top, traceloop, as you have seen, is in a GitHub repository in the Kinvolk organization, and the network policy advisor relies on BPF code that was initially written for Weave Scope. I can show you on GitHub: in the BCC repository there are actually a lot of tools, there is a list here, and it's really useful for learning about BPF. It's a long list, and I just picked a few of them, like execsnoop and opensnoop, to use in Inspektor Gadget, but there are probably others that you like that could be adapted to Kubernetes in Inspektor Gadget as well.

Yeah, so is there anywhere a wish list of which ones you want people to work on first? How are you prioritizing which ones you're adding to Inspektor Gadget? I guess that's teasing out the question for people who want to help, or want to add one to your list of things to do.

I don't really know; I would say it depends on what you want to do with it. The most recent one that was added is profile. I picked that one because it's the one I used in a customer situation to find out why something was slow on their Kubernetes cluster. But if there is a specific problem you need solved, say your Kubernetes cluster is slow with NFS, maybe a different one could be picked up.

That would be cool. So if you're listening to this afterwards and you see something on this list that you think we all ought to be working on, reach out to Alban and do that. The other thing I did notice: you are going to give a tutorial, I believe, at KubeCon, the virtual one; it's listed on the schedule. Is that still a go for you
guys? Your tutorial on using BPF in cloud native environments, is that still a go in August?

Yes, I still plan to do it. I need to check; I'm not speaking alone, I'm speaking with Lorenzo from Sysdig, who will talk about kubectl-trace as well, which is a similar project doing BPF things on Kubernetes. I still plan to do it, and I need to check with him, given this change of planning that sometimes disrupts public plans, but I hope so.

Can you still hear me? I can hear you. Hi everybody, Chris Short here. We lost Diane; to be honest with you, she might have just dropped, we just lost her video and everything. She just DMed me, and she's back; she just lost power and is jumping back in. Oh wow. I'm the one with the thunderstorms here today.

Hey, I'm back. Sorry, I had a bit of a power outage there for half a second. That's what's lovely about live streaming things like this. The other thing: I want to get you back on again sometime really soon. Is she gone? Okay, so Alban, I'm going to assume that... oh no, and she's back. Diane, you might want to just wrap it up; we lost her again. I'm trying; the network is just doing this wonky thing. I want to have Alban back on with some of the other team sometime soon to talk about Flatcar, actually Flatcar in the context of OKD, which is in its beta release and going GA, and, as you probably have noted, is running on Fedora CoreOS. I'm very curious to see what we can do with Flatcar and OKD, so stay tuned for me picking his brain and his team's brain about that in the not-too-distant future. So Alban, thank you for putting up with my lovely internet access today. I do have fiber optic; I don't know why this is going up and down. But anyway, thanks for joining us today. If you're listening, I will put the slides for this, the video of his demos, and all the links on our YouTube channel at RH OpenShift, as well as in a blog post on OpenShift.com, so don't scramble and try to write notes; I'll make Alban give me his slides and links to all the
resources. So thanks again for joining us. Thanks, Alban, for taking the time to do this. I'm looking forward to the tutorial at KubeCon, and to seeing how KubeCon goes virtual, and hopefully they don't have fiber optic that's as wonky as mine today. So thanks again, take care. Thank you. Alright, thanks to everybody who's been watching, great to see you.