Well, hello everybody and welcome to another Tech and Talk. This is the second in the series: talks with people who have interesting ideas and new innovations coming out in the cloud native, Kubernetes, containerized space that we are living and breathing in these days. And a good friend of mine, Kris Nova, at GopherCon last week, open sourced a very interesting project that she's going to talk to us about. It's called Kubicorn, some sort of variation on the unicorn theme. And it's supposed to solve some really interesting Kubernetes infrastructure management issues. And since I wasn't at GopherCon, I was really excited to get her to come on and give us a talk about it. The format for this is we'll let Kris do the deep dive, the talk, the demo, all the cool stuff. We'll see all the demos do their wonderful things. And then we'll do Q&A afterwards live. So if you have questions, you can ask them in the chat and queue them up. But without any further ado, I'm going to let Kris introduce herself and do the deep dive and we will get going here. So thanks, Kris. Awesome. Well, good to see you again, Diane, and everyone else joining. So Kubicorn, or kubicorn as I like to call it, is this new tool that I've been wanting to write for quite some time, probably six or seven months. And I finally got some downtime and enough free Saturdays and was able to put this together. I think the high-level elevator pitch of what it does is that it's aimed at solving the infrastructure problem, the way I see it, in Kubernetes. And I know not everybody sees it the same way I do, but this tool makes me really happy and addresses some of the things that I noticed I was struggling with, that were kind of frustrating me. So hopefully we can talk a little bit more and go into detail on what it is, what it solves, and why I made the technical decisions I made.
And I think I want to spend just a couple of moments before we really jump in. This is in a very similar space as the kops project that I'm a maintainer of. And I wanted to be very clear that I absolutely love and respect kops, and kops will always have a special place in my heart. This is not intended to replace it, but rather to experiment, in the name of research and development, with some patterns that I'm working on and some patterns that I've been wanting to see in the code base. Hopefully those can get backported into kops and we can come together in the future. I think it's also important to note that a lot of what this tool does is a direct reflection of a book that I'm writing right now for O'Reilly on cloud native infrastructure, and some of the patterns that we talk about in the book are directly implemented here in this tool. So is this ready for production? No. Will it be ready for production? I don't know, maybe one day, but it's mostly here in this space to demonstrate how to solve infrastructure according to these patterns, in a way that I wanted to prescribe it to myself and to others in the community. So let's look at the GitHub repo. Can you guys see this okay? Zoom in one. Yep, looks perfect. Okay, so the first thing you'll notice is the lack of the pkg directory over here on the left. And the reason for that is because I wanted to keep this tool very flat. Every one of the Go packages here on the left has one or two nested packages, if that at all, and I really don't ever intend for them to grow any deeper. And then if you look, there's one main.go file. The whole reason for this setup is so that this program is go-gettable, so you can just type go get github.com/kris-nova/kubicorn, and it'll download the tool, compile it using whatever version of Go you're running, and spit out a binary for you. The message of this tool is: I want it to be simple.
I really want a user to be able to walk up to this code base, quickly understand what's going on, quickly be able to make meaningful contributions to the project, and see how this stuff works. One of the things I talked about at GopherCon was this beautiful mapper that kops has, where we take this concept of an API and then we eventually map that to some set of resources. And here in kubicorn, you can see in the cloud directory we have these very simple maps for each one of the cloud providers that we're working on. And we'll go into more detail later as we actually look at the implementation. But the thing I wanted to say was, how is kubicorn different? It uses kubeadm to bootstrap our clusters. And if you go and you actually run kubeadm, I think it says somewhere in here: kubeadm is beta, do not use it for production clusters. And this is the core of kubicorn. So bear that in mind if you do decide to start poking around with this tool today. But I really think kubeadm is ready. I mean, every time I've used it, it's worked well for me, and it took an hour or two of research and development to figure out the nuances of getting my cluster up and running just right. But once it's there, I mean, it's rock solid. So I thought it was a good idea. And granted, I am a bit crazy to go ahead and build out an infrastructure deployment tool on top of kubeadm, so that I can start using kubeadm more and start exercising it as a tool. I mean, the only way that we're ever really gonna get to a point where we're trusting it and people are confident using it is if people start using it. So this is my way of convincing people it's okay to take that first step and to start running clusters with it, regardless of whether they're quote unquote ready for production anyway. And I mean, this is Kubernetes, we're always ready for production, right? So if we go and we actually look at the code, I'm gonna move over to my IDE here.
So I hope you guys can see this, because we're actually gonna look at some Go code today, which is gonna be super exciting. This is the repo that we were just looking at on GitHub. And we use the Cobra command library (thank you, Steve, it's a fabulous library, I love it) to mock out all of our commands. And if we look at each one of those, we have adopt, which is this idea, not coded yet, that you will be able to walk up to a Kubernetes cluster running in a cloud, audit the cloud interactively through your command line, and effectively take ownership of that cluster and begin managing it with kubicorn. This is one of the patterns that I really wanna push for: this sense of being able to audit infrastructure and reason about it, and come up with a concept of it that is represented in the form of the API that we will look at in just a second. Apply, which is a very simple idea but extremely complex under the covers, says: tell me an intended state, I'll go audit the infrastructure and detect its actual state, and then I will reconcile those two states together, thus making some change in infrastructure land. So probably the easiest way to think of apply is: I have nothing running, I wanna have a cluster that looks like this, I run apply. Apply audits the infrastructure and says, oh hey, nothing's running, I better create all the things, and then it'll actually go through the modeled-out resources and apply them. And really you can think of each one of these resources getting applied as an HTTP request to some cloud's API. So if we're running in Azure, it would just be an HTTP request; if we're running in Amazon, it would be the same thing. Then there's this concept of create, and this is what I feel is one of the more powerful things that really makes kubicorn unique: we have these things called profiles, defined here in the profiles directory. And if you write Go, your jaw is already kind of dropping because you see what this is.
This is a struct literal that defines a Kubernetes cluster in Go, which is freaking gorgeous, and create will take one of these struct literals that we just looked at and drop it off in a state store. So right now we have _state on your local file system, which it will create, as the only type of state store we currently support, because again, this thing is like three weeks old. But ultimately we wanna be able to have state stores in GitHub, we wanna be able to use state stores like Azure Blob Storage or Amazon S3, where we can just store a simple YAML file. And if you actually go and look at an example of one of these cluster YAMLs, it's a YAML representation of that struct literal we just looked at, but the good thing here is it actually vendors in Kubernetes API machinery. So if you've interacted with a Kubernetes object before, it's the same thing, but it represents infrastructure instead of some application layer item. So create will actually create one of these things according to these profiles. If you go and look at the README in profiles, you'll notice that it says, can I add one to the repo? And I'm like, yeah, check it out. I want this directory of profiles to grow and to have all of these different crazy ways of experimenting with this tool, and probably breaking the tool, and experimenting with Kubernetes and pushing the boundaries in all these different ways. So in a perfect world, people would be pull requesting these profiles with a little bit of documentation on their specific one and why it's important and why it matters. Hold on, I just lost my sound here. Can somebody confirm that I'm still broadcasting? Yep, I can still hear you, you're still broadcasting. Okay, great, thank you. So yeah, hopefully this directory will grow. I will continue to add profiles and I encourage others to experiment and try to add their own, and we will go from there. So that's create. If we look at delete, it is the opposite of apply.
It says, let me go and audit the infrastructure, and then if anything exists, I know I need to send another one of those HTTP requests and remove it. Env is not a command, it's just shared environment code. Getconfig is a way of taking an existing cluster and its configuration, which stores information about, well, let's look at one of these. It'll actually have SSH information in here. So we'll have your public key (this is okay to look at, it's called public for a reason, no secrets here) and a path to where it was on my file system, as well as its fingerprint and the user to bootstrap the cluster. And it'll actually SSH into the cluster, look for the kubeconfig, pull it down to your local file system, and write it out. So that as a user, you can just say, oh, I wanna start interacting with this cluster, I know the name of it, go get my config. Image.go, this is another one of the things that I think really makes kubicorn unique. It does not work today, but ultimately the plan is to be able to bundle up one of these YAML files on the left with all of the YAML representations of Kubernetes-level objects, and possibly etcd backups, still figuring out what that's gonna look like and whether we wanna borrow existing tooling somewhere. But the ultimate goal is to be able to take a snapshot of your infrastructure and your Kubernetes application layer, bundle all of that up into some sort of compressed file (for today, I'm rationalizing that as probably just a tarball) and ship that around.
The reason that I wanna be able to do that is so that I can say, hey, here's my snapshot of my cluster, and it's just a bunch of text, and I can put that in a GitHub repository, I can send it over in Slack, I can stick it in an email. And if somebody wants to run my same cluster, they could just come up here at the top and say, oh, I work for Microsoft, I wanna run this in Azure, and you rename that to Azure and then you can run it. It's a really great way to share infrastructure and share application layers. Already I've had people hitting me up about how they wanna be able to use this for testing and development and staging environments: we have Kubernetes in production, but rebuilding that infrastructure actually takes quite a bit of time and is quite challenging. So this is designed to offer some operational empathy and say, we got your back, bro, we know you wanna run Kubernetes in a different cloud, we're gonna make that easy for you. Again, I'm a little bit crazy, but my whole philosophy here is I wanna experiment, and I wanna bring people together, I don't wanna pull people apart anymore. So I think by offering this framework that says, hey, come work with us and bring your little component that plugs into the bigger machine here, we can start working together and running in different clouds. I think it's really powerful, and that's kind of what I believe in at the end of the day. Technology aside, I just wanna help bring people together and make the world a better place. Going back to our commands, we have this one called list that was implemented at GopherCon. Many thanks to all of the wonderful engineers who sat with me and worked with me that day. There were about six of them and I could not have asked for a better turnout. And all this does is it says, give me a state store and I will list all of your clusters.
So you can see what we actually have running and what we actually have a concept of. And then my personal favorite command in the entire tooling is: if you just run kubicorn without any arguments, you get this fabulous ASCII unicorn displayed in your terminal, as you can see here. It actually has the version number inside the unicorn, it points to the AUTHORS file, and then here on the left we have the actual git SHA of the most recent commit. So we're not versioning or doing releases right now, because this thing, well, it's not broken, it does work. It's just that I'm not ready to release it until we get some more stuff built into it. So as we're developing, bear this in mind if you plan on contributing to the project: I will probably always be asking for this value if you hit a bug or if you see something weird or unexpected. And then here on the left we see all of the commands we just went through and talked about. So the other thing that I think is really important to bring up here is this bootstrap directory. And again, this is operational empathy. You'll see these shell scripts. And as a sysadmin, I mean, let's be honest here, I love bash and I hate bash at the same time, but it works really well. And as an operator you can look at it and read it and know exactly what's going on. So if we were to actually look at this, it's a shell script. This is how we bootstrap Kubernetes. And it's what, 47 lines long, and probably could be shorter if I didn't have this huge comment at the top that yells at people and tells them to never ever put templating into this, but just write it in bash. But this is how we bootstrap Kubernetes with kubeadm. And if we actually go and look at this: we add the kubeadm repository to our apt sources. We do an update. We install a handful of tools that are needed. We start Docker and we enable the Docker service.
I calculate the public IP address of the machine from the EC2 metadata tool. And I calculate the private IP address of the machine using good old ifconfig. I do a reset so that I know my kubeadm run is always idempotent. And I do init. And this is beautiful. If you've ever tried to bootstrap Kubernetes from the ground up before, or read Kelsey Hightower's Kubernetes the Hard Way, this one command does all that for you. And that just makes me grin ear to ear every time I see it. I showed this to Kelsey at GopherCon and he shared this one-liner here that's great. It's adding the Calico tool to our cluster that helps us with networking. And I think out of all of them, he said this is the one that just kind of worked out of the box and did everything we wanted it to. So round of applause for our friends over at Tigera. Then this final step here is actually optional, but basically I just move the kubeconfig to the same directory, so that I always know where to look for it in the getconfig command we looked at earlier. And it bootstraps Kubernetes. Here, the node script is even shorter. We take a token and a master IP address that are passed in from kubicorn at runtime, and we say join, and we give it the token and point it at the master, and nodes come up, and we get a kubelet running, and poof, a Kubernetes cluster. So without further ado, who wants to create a cluster? Because I know I certainly do. So the first thing we want to do is we want to say create. And actually, just for idempotency, actually no, I don't want to do that. We'll just say create. We'll say kubicorn create. We want to give it a name, so we'll call it tech and talk. And we want to give it a profile. Profiles are these struct literals over here that just define things for you and give you a starting point. Like, hey, it works, you can start there.
And then if you want to start making changes, and maybe you want a different user, or you want the Kubernetes API running on a different port, or you want to use a different CIDR block, you totally can come in here and start tweaking. But we'll just start with a basic one for today. So we'll do profile, we'll run this in AWS, why not? So it comes back and says, congratulations, we have made this YAML file for you, and you can edit the file and then run kubicorn apply with the name of the cluster. And I just got like a ringing in my ear, is everything okay? Everything's fine. Okay, so we can do kubicorn, actually, let's just look at this, why not? So _state, then the name of the cluster, and then our cluster.yaml, and it's just, again, the YAML representation of that struct we just looked at. You'll notice that we named things in a clever way so that we can look them up at runtime later. And you'll notice that we're pretty explicit with all of the different values we can define here, everything from a name to the size of the instance to even which CIDR block each of these instances runs in. Going all the way to the bottom, we can actually see we define SSH information. Again, if you don't explicitly tell it something, it'll make assumptions that I kind of feel are realistic. So it's looking for ~/.ssh/id_rsa.pub, but again, if I wanted to use a different SSH key I could just come in and change this now. We generated this super secret token at runtime. So when I first released the program we had this hard coded, and now it's being generated with a random hex string. And that's it, that's all we have. So we can get out of there and we can apply this and all we... All right, so we had this technical break there, but we're gonna restart now and I'm gonna hand it back to Kris and let her take it away again. Okay, great. So yeah, we can now go into Emacs. Actually, I had deleted it, so let me recreate it.
So we can now go into Emacs and actually look and see what we just created, which was the name of the cluster here and then cluster.yaml. And here you can see we have all of our key value pairs defined. We have our SSH information, we have the size of the instances we're running, and we create all of the network information here as well. And this is kind of interesting, because each of the masters and the nodes are defined in their own individual network configuration. So as a user you can actually go in and define some pretty powerful network configurations, tweak them to your liking, and trust that your nodes and your masters are gonna be running independently of each other, but can still route between each other. So now that that's created, and we've made our changes, or maybe we haven't, we can actually do an apply. Actually, let me quickly demonstrate that if a user has no kubeconfig on their system, they can do a kubicorn apply with the name of their cluster, and I will turn the logs up for folks at home. And we can actually go through and see what we're creating here. So the logger I wrote myself, because I'm a bit crazy and I wanted pretty colors, but the colors actually represent things. Every time you see the cyan color, you as a user know that something happened: some action was taken, the system changed in some way. If you've ever heard Charity Majors talk about her tooling at Honeycomb, it's a very important concept. So we separate those out from just regular debug information that is only telling a software engineer where we are as the code executes. So we can actually go and look at these cyan colored log entries, see them take place, and watch the cluster come to life. So the first thing we do is create this key pair, which is just the representation of the SSH key.
After that's created, we create our network, a VPC, we create an internet gateway so that we can map everything out to the public internet, and we start going through this model that is indexed together. And if we look at the model, it's just a hash map, and it's integer indexed, and it's just a list of resources. And after we define one of these, we increase our integer, and this sort of path allows us to make variables as we create these. So we can define, say, a VPC index that is equal to wherever the index is right now, and we can use that later in our code. But it's important to note that this data structure, an integer-indexed hash map (or think of it like a list if you will), is what represents our cluster and the resources we will create. So I almost wish that this thing had failed, so that I could demonstrate this really valuable concept of kubicorn being atomic. Because it didn't fail, I can just sort of explain what would happen. Let's pretend that we got to creating the security group, and for some reason something in AWS was misconfigured, maybe we hit a limit, something happened, and this security group was not able to be created. It would actually unwind itself, go through this hash map backwards, and undo all of the actions that it had taken earlier. So kubicorn does not give you a guarantee that your cluster will come up, but it does give you a guarantee that says: as a user, and someone crafting these complex infrastructure maps, I know that my infrastructure creation is atomic. Either you're gonna create all of it, or you're gonna create none of it. So it's actually kind of fun to watch when you're developing, which is when I usually do something wrong and one of these API requests fails, to just watch it undo itself. Which is great, because after it undoes itself I can rerun the command over again, and it makes development much quicker and much easier for me.
So we go through and it just creates these resources and maps them together, and eventually it'll get to a point where we need to look up an IP address of the master. So that's what this sort of loop does here: it'll actually hang and wait for the Kubernetes API to come up. And after that comes up, we can find the address of the master instance, plug that into the launch configuration of our nodes, and actually create our cluster that way. So here you see it hanging again, and after it finally comes up, it writes our kubeconfig, and you see it wrote here: /Users/kris/.kube/config. And now I can get my nodes, and I have one master up and running. I can even take this command and SSH directly into one of these instances. Notice we're running on Ubuntu here, and this is good old regular Ubuntu, there's nothing fancy about this AMI. If you go and you look in the bootstrap script here, actually I'm trying to think where I define it. Here in the launch configuration, which we defined here in the profile, you can actually go and look up the AMI, and it's just Ubuntu 16.04. So I'm encouraging people to use different operating systems. I want people to use different operating systems. I want there to be a profile for CoreOS. I want there to be a profile for Ubuntu. I want there to be Joe's and Sally's Ubuntu profiles, because these are sort of what we're representing our cluster with. I mean, in a perfect world we could probably even host these profiles in a git repo somewhere and just have all of these wonderful examples of running Kubernetes in different ways, which is what I think we want. At least that's what I want. So here we're on our instance, and we can actually see kubeadm was able to bootstrap Kubernetes for us, out of the box. We just created this, and Kubernetes is up and running. We can go to /var/lib, and we can go into this cloud directory and then into the instance scripts.
And we can actually see these are the bootstrap scripts we looked at directly in our repository. And you can see the token we defined in our profile is here. We're running on port 443. The apt-get stuff we talked about earlier is all defined here. We start up Docker, we enable Docker, and we're done. We can actually go to good old /var/log and cat out the output here. And these are kubeadm logs: we run our pre-flight checks, we reset our kubeadm installation, we generate our TLS certificates for the Kubernetes API, and we ultimately bootstrap the Kubernetes control plane and write the kubeconfig out. And: your Kubernetes master has initialized successfully. And that's what we want. So with that being said, we can now reverse walk through the list with a delete command. We'll give it the name of our cluster again, just tech and talk, and we will turn the logs up for folks at home, and we can now watch it go through and iterate through the list in reverse and delete all of these resources we just created. The one thing I wanted to point out, now that the demo is complete and we're actually watching it delete, is this reconciler interface. If I could just get a few minutes and go through that, I think it's important for folks at home to understand the simplicity of it while we wait for the delete to go through. Delete is not quite as asynchronous as the init, because you have to delete things in order, because of dependencies. So this will actually take on the order of 60 to 80 seconds to complete. But we'll come back and check on it later.
So if you go into the GitHub repository here and go into the cloud directory, I actually value and am really, really proud of this interface, so much so that I went through and wrote up this documentation. But I would like to just go through it and point out how this pattern works, how it's designed to work well with a user, and how it could potentially be rendered into a pod and used as the underlying library of an operator for infrastructure, which is really exciting. So the first thing we have is Init, and what Init does is all of the housekeeping things that need to happen in order for us to start communicating with the cloud. So like, let's auth with the SDK, let's create a simple hello world transaction between the program and the cloud API, let's set some defaults in memory and do some other things, and basically get the reconciler ready to go. We have this method called GetActual, which will return the actual state of your infrastructure in reference to a single Kubernetes cluster. This is really powerful, because look at what it's returning. It's returning the cluster API. It's not returning cloud resources. So GetActual will actually go and audit your infrastructure and return this super valuable cluster representation that is cloud agnostic. So this is where we're actually taking resources in the cloud, mapping them to the API, and returning what's really there in real life. GetExpected says: you gave me a profile, I'm gonna now marshal that into an API, and that's what I expect. So when you init the reconciler here, you could actually go through and define whatever expected API you wanted. You could calculate it using another program and change the code using Go. You could do it through YAML in the state store and edit it in your favorite text editor of choice, Emacs. Or you could get it from any number of other places if you were writing one of these new operator pods.
Then we have Reconcile, and this takes what's actually there and what you want to be there, and reconciles the two. Kubicorn has a guarantee that this return value here is always going to match what GetExpected returns here. And if it doesn't, it will unwind itself, treat that as any other failure, and delete the cluster before it goes back and makes any changes in the cloud. So the last one here is Destroy, which is effectively the opposite of Reconcile: just go through this backwards and delete everything. So I really hope that people see this and nod their head and go, yes, this makes sense to me, it's very simple, Kris, you've been talking about it way too much, I kind of got it within the first two or three seconds of looking at it. Which is what we want, honestly, because I want it to be easy for people to implement their cloud. I want people to start sharing their cloud infrastructure. I want somebody to be able to come in and say, I want to run this in the cloud of my choice, and I want to be able to change the implementation a little bit. The whole point of this project is that everything is behind an interface with strict contracts and guarantees. It's a framework, it's not a tool. So if you actually go and look, I'm in my free time coding up this DigitalOcean pull request, and I think it's like five or six files, actually it's seven files, and I bet one of them is a huge README. It's not a lot of code, and it's kind of exciting to be able to think, wow, in a weekend I can code a cloud implementation for Kubernetes, and I can go into the bootstrap directory and start changing shell scripts and tinkering with my cloud. And all of the boilerplate and all of the noise of interacting with that and making it into a runnable program is kind of taken care of at this point.
So again, I really hope people start looking at kubicorn as a framework and experimenting with it, and understanding that this is infrastructure at the software level that's in place to empower users to run in different clouds, to bring people together, and to give you a starting point for solving and dealing with this infrastructure layer that is so critical to working with Kubernetes. Wow, all I'm gonna say is wow. I wish I'd been at GopherCon to see the looks on the faces of people when you demoed this live there as well. And thank you for dealing with our little technical glitch there on the sound, I owe you. My dog says the same. I don't even really know where to start. There are so many really cool features in this, from the unicorn to the color coding in the log files, and the simplicity of the whole thing is just really amazing. I think one of the best things is the delete of a Kubernetes cluster, the cleanup that you do, and the realization of how important that is. But the thing that caught my attention at the very beginning, and I come from an IT audit background many years ago, was the ability to run and get the state store and get all the information about a cluster, and then of course reapplying it. But as you can tell, I'm a little excited about this. And so is my dog, and I think that he's got a lot to say. Hey, Monty, somebody's moving stuff upstairs. Well, yes, we might edit that out too, but maybe not. So the question I had at the beginning was a little bit about, you mentioned kops, and kops is a project underneath Kubernetes already. This is outside of the Kubernetes repo. Where do you see this going? Is this something that the CNCF might take on at some point and adopt? Or is this really just about experimenting and pushing kubeadm beyond, well, you keep saying it's not ready for production yet, but then I'm like, okay, wait a minute. So where do you see this going next? I think.
Oh, I'm getting it. Let me turn my head down. I think for me, the fundamental thing this is solving is I can kind of take ownership of it and do things my way. And for me, what's important is empowering people and developing unopinionated frameworks that people can interface with and plug into. So I would like for people to use it. I'm not working on this thing full-time; this is not my day job. This is just Chris Nova on the side. But if it does gain momentum, I would love to see it turn into a widely used and adopted project. I think it would be great if we could mainstream it, if that was necessary, and get it into Kubernetes core through the incubation process. But I am going to apply the same level of interrogation to my code that I would apply to the politics behind this project, which is: what are we really gaining, and what is so special about getting something into github.com slash kubernetes? Is that really gonna offer anything more than just that it's gonna be owned and operated by the CNCF? Which I'm totally cool with. It would be an honor. I think that would be great. We'll cross that bridge when we come to it. For today, I want people using kubeadm. I think it's ready. I don't care what the README says. And I think the only way it's ever gonna get better is by doing something like this, by having this infrastructure in place that will actually get people up and running with kubeadm. And I think once it's in this widely adopted space of things, where you and I can go and download the program, run a cluster, and interact with it and the bash script that bootstraps the cluster, and actually see and feel and play with it, I think it's really gonna help harden the project and ultimately create demand for it, right? Like, if people are asking for features and I'm pointing them over to the kubeadm folks in SIG Cluster Lifecycle, like, I'm really sorry for doing that, guys, but I love this idea.
I want people using it. I want it to get better. So maybe I'm doing a little bit of that as well. I think always poking a stick in the fire is a good thing. And on all of the open source projects, whether they're under the Kube repo or on the side, Kubernetes has got a pretty good structure for incubating and doing that. But the key, I think, to anybody's success, especially to get kubeadm up there and out there and adopted in production, is getting that feedback. And I think what you're doing is awesome, but it's also exposing people to new patterns and simplifying the approach that we have to creating these clusters and managing them. So kudos to you for getting it done and spending all of your Saturdays doing this, including creating the great unicorn graphic there. We do love that. And the other thing that I was gonna ask you, and we've taken up most of your morning already now, this is one great project. Next week, we have Liz Rice from Aqua Security, who is gonna be talking about kube-bench, another project that she's been working on on the side around benchmarking for Kubernetes, which is equally awesome, I'm sure, and I'll learn all about it next week. But I'm also interested in this space. Other people that you think we should hear from? And before you go, I'm gonna remind you to put up your last slide again so people know where that repo is. But before you leave me this morning, who else do you think we should be talking to that is doing new and interesting things out there? I would say there's a really great person that I've had the honor of meeting twice now, once at GopherCon and once up in Seattle. And every time I meet with her, I learn something, and I get inspired a bit to go and work and do these open source things on the weekend. So I would say, if you could track down Tiffany Jernigan, actually, from Amazon, she would be a great person to talk to. Yeah, very good things.
I think I've seen one talk by her as well previously, so that's actually a really good suggestion. Awesome. Chris, as always, it's wonderful to see you and to hear from you. Thanks for your patience with our process today. I owe you a microphone, and we will see you again. We will be together at KubeCon for sure in December, and hopefully between now and then on other events. But KubeCon's coming up, and we're hosting another OpenShift Commons gathering in Austin on December 5th. And Chris has been invited to be on the upstream panel, in which we'll be talking about issues like bringing up new projects in Kubernetes and others and how we all collaborate and connect together. So I wish you great success with this. I know there are people at Red Hat, Michael, Austin Bloss and others, who are very excited about this project, so I'm sure you'll see a few of our Red Hatters making pull requests and having conversations on Slack. I think you also said that you had created a Slack channel or an IRC channel for Kubicorn, or however you want to say it. Is it Slack or IRC that you created it on? It's Slack. It's the Gophers Slack. There's a channel in there called kubicorn. And I think there's 10 of us, which, I was so proud. I was like, wow, there's 10 people who are interested in this project. I'm so excited. So we all hang out in there, and I'm pretty much always available. If I'm not, I'll usually just send a message that says, hey, I'm walking the dog, I'll be back in 20 minutes or whatever. All right, so we'd love to see people there. Well, perfect. Thank you very much for doing this. You know, we had many more than 10 on the call earlier, until we had our technical glitch there, so I'm sure there's a lot of interest in this, and I look forward to having you back on again on whatever the next project is that you're working on, and to learning more about what you're doing over at Microsoft sometime in the future as well. All right.
Take care. Awesome. Thank you.