Well, hello, everybody. My name is Langdon White, and welcome to KBE Insider. We should have significantly more attendees today because Slack is down, so we may as well consider it a day off. Between GitHub and Slack, if either one is down, we really have a hard time doing much. But welcome to KBE Insider, where we interview people who are doing the work in Kubernetes, to give our audience a sense of where we might be going soon, where we're going eventually, and where they want to go, so that you have better insight into where the project is headed and can make better decisions about how you want to invest in your Kubernetes infrastructure. Today we welcome Peter Hunt from CRI-O. But before we get into that, I want to mention a couple of things. First, it is Tuesday, T-W-O-S-D-A-Y, because it's 2/22/22 or 22/2/22, depending on what part of the world you live in, which is kind of awesome. I've seen a number of people who also turn 22 today, which I think is pretty awesome as well. And the other thing is that the call for proposals — basically, submit an abstract so that you can give a talk — for DevConf.US is now open. I will throw the link in the chat. If you are new to presenting, this conference is specifically geared toward people who are presenting for the first time, so we guarantee a soft crowd, but it is all open source and all very techy. So check it out. And is that going to be in person this year, Langdon? It is currently planned to be in person, so everybody, let's make that true by trying to attend. And that's in Boston, for our audience. Oh, yeah, sorry. There's only one place in the world, which is Boston. The reason I bring it up is that I'm one of the founders of the conference. I think this will be its fifth year, and it's on campus at Boston University, which makes it more fun as well. 
So we strongly encourage you to submit a talk and come if you can. It's August 18 through 20 this year, I think. So without further ado, why don't we get a little bit more into the show? Like I said, we'd like to welcome Peter Hunt from CRI-O. If you wanted to introduce yourself — because my long-running joke is that it's impossible to remember the team names or titles at Red Hat for any period longer than about three months. Yeah. Hey, thanks. Thanks for having me on. My name is Peter Hunt. I'm a senior software engineer at Red Hat, currently working on the OpenShift Node team, primarily focusing on CRI-O, and sometimes the kubelet, and sometimes runc and other container-related technologies. Cool. And how long have you been working on the project? So I was moved to CRI-O full time basically three years ago. There was a period of time where I was kind of in between Podman and CRI-O, but I've been working on these things for like three and a half years now. Gotcha. And for the many people who aren't involved in this space, what exactly is CRI-O and what's it for? Because as a consumer of containerization stuff, I don't run into it very much. No, it's not one of the buzzwords that comes up very much, and that's partially intentional on our part. The idea of CRI-O is to be the implementation of the Kubernetes Container Runtime Interface — the CRI — on top of OCI (Open Container Initiative) container runtimes. That's where the name comes from: CRI-O. The background is that in the before times, many years ago, when Kubernetes was first written, Docker support was written directly into Kubernetes, interfaced through the dockershim, which is now being removed in 1.24. But the developers of the kubelet wanted a way to abstract out how the kubelet asks some entity to create the containers, pull the images, and manage the pods. 
And so they created the CRI, which is a protobuf API to talk to some server and do those things for the kubelet — to delegate out that job. So CRI-O is an implementation of the CRI, and we try to make it as secure, performant, and boring as possible. We think that running containers in production is something you shouldn't really have to think about. You should just install it, and then it should disappear into the background while you do more important work. But one thing I want to make clear — and this was actually a mistaken assumption I had a number of years ago, maybe because of the name — is that it's not a reference implementation. It's meant to be a production-use implementation. Right. Yeah, and there are basically two main implementations of the CRI now, CRI-O and containerd. They're both tested in upstream Kube and they're both used in production, so from the SIG Node perspective, they're treated generally equally. Just for our audience, do you want to let people know what SIG Node is responsible for? Sure, yes. SIG Node — special interest group Node — is largely responsible for the kubelet. Once the API server asks the kubelet to do something — okay, schedule this pod — the kubelet goes and delegates that work to the CRI implementation, does all the volume plugin work, and so on. So the work that's done after everything's been orchestrated, once it hits the node itself, that's SIG Node's purview. Gotcha. You know, you kind of opened a can of worms there, which we know is a hot topic in the Kubernetes community — and actually the containerization community in general. Before we get into the details of it — what I'm referencing, of course, is the dockershim — what I'm curious about is, could you explain what the dockershim does exactly? Sure. 
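For readers who want to see the CRI from the kubelet's side: with CRI-O running on a node, the same gRPC API the kubelet speaks can be poked at by hand with crictl, the CRI debugging CLI. This is a sketch that assumes crictl is installed and that CRI-O's socket is at its default path:

```shell
# Point crictl at CRI-O's CRI socket (default path; adjust if configured differently)
export CONTAINER_RUNTIME_ENDPOINT=unix:///var/run/crio/crio.sock

crictl version   # RuntimeService: ask the runtime to identify itself
crictl pods      # list the pod sandboxes the kubelet has asked for
crictl ps -a     # list containers, running or exited
crictl images    # ImageService: list pulled images
```

These subcommands map more or less one-to-one onto CRI RPCs, which is what makes crictl handy for inspecting a node that has no Docker CLI installed.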
So basically, the dockershim is a shim between the kubelet requesting — hey, can you pull this image, and then can you create this container, and can you create this pod — and the Docker API. It's the way that the kubelet speaks Docker, and the way that Kubernetes used to run, because everyone used to speak Docker; the Docker API was the way that containers were created. And part of the reason for the dockershim is that Docker itself was designed to be a complete system and not a component, so it required some hacky code to get it to behave like a component. Yeah, that's a great point. Right. I mean, I think "shim" is one of those English words that not a lot of people are familiar with, but it means exactly what it means when we talk about it in tech speak, right? A shim is something you shove in there because you need it to make something fit. And the other shim I find kind of scary is on the bootloader with Linux, for example — there's also a shim there. But okay, so the plan is to remove the dockershim so that we can avoid this kind of indirection, this potentially hacky code, and then replace the container runtime with, say, CRI-O, for example. Yeah. So, for many years, the kubelet has been able to use the CRI to create containers instead of the dockershim. The idea of removing the dockershim is to say, definitively, the CRI is the way to have your containers be created. So you have to use one of the CRI implementations going forward. Right. Okay. And what will that mean for a user — a consumer of Kubernetes? 
So if you're building your own home cluster, or you're running your stuff in production, you'll have to change some flags in the kubelet to basically say, hey, talk to the CRI instead of the dockershim, and you'll have to run CRI-O on that node. That should be most of the work. Obviously, CRI-O doesn't speak the Docker API, and CRI-O is made for the needs of the kubelet, so there are some things that CRI-O decidedly doesn't do that Docker did, like builds. But for the most part, if you're just running your Kubernetes node and you drop a CRI-O process on there and tell the kubelet to talk to it, it should just work. Right. Okay. So in an existing installation, you might have to think about it a bit, but in a new installation, it would just happen. Right. Exactly. Okay. So, since you're at Red Hat — what is OpenShift using currently? OpenShift is using CRI-O. CRI-O has been an option in OpenShift since OpenShift 3.11, and for the entire 4.x series, which has been out for around three years as well, it has been CRI-O. There's no other container runtime interface implementation that's been supported on OpenShift. So OpenShift users are going to be effectively unaffected by the dockershim removal? Yeah. Exactly. We've been using it for a long time, and we had a hand in tweaking the CRI, building CRI-O, and making it as production-ready as it is today. Okay. So say I actually have a cluster that's currently running on the dockershim and I'm planning to upgrade to 1.24. Are there particular reasons why I would choose CRI-O versus containerd? So we think that the container engine — the high-level container runtime, as it's sometimes called — if it's used in production, should be hyper-focused on that production use case. 
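As a sketch of the flag change mentioned above — these flag names are from the Kubernetes 1.24 era, and many installers or kubelet config files set them for you, so treat this as illustrative rather than definitive:

```shell
# Point the kubelet at CRI-O's CRI socket instead of the dockershim
kubelet \
  --container-runtime=remote \
  --container-runtime-endpoint=unix:///var/run/crio/crio.sock
```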
So CRI-O is able to make some design decisions because it knows it's only tailored to the needs of the kubelet. It has some optimizations because it knows what the kubelet does. It reduces the capabilities of the containers, because we want to run containers secure by default. And it reduces the set of things CRI-O is capable of doing — as mentioned before, no builds, and it also can't push images. The kubelet doesn't need it to do these things, so it doesn't even think about them. And that reduces CRI-O's code surface, which theoretically reduces security risk as well. So it basically tries to be simple, performant, secure, and boring, and that's usually the pitch we give. There are a couple of concrete cases where we know exactly how the kubelet is going to request that a container be created, and if something goes wrong along that process, we know exactly where it happened. As a result, we can optimize for that case and try to make the containers come up as quickly as possible. So that's one slightly more concrete example of where we can optimize. And containerd is more generic? Yeah, containerd — they call themselves the generic container runtime. In addition to tailoring themselves to the needs of the kubelet, they also have to support Docker, and they support a handful of other clients. So the code base is larger and their capabilities are more robust, for sure. But we don't think that if you're running containers in production with Kubernetes, you need that robustness. Gotcha. I was going to back up a little bit, because one of the things we like to ask people who come on the show is: what got you into open source in the first place? So I have been using Linux since I was a freshman in college. 
I got convinced by a friend that it would be easier to do my coursework on Linux, because all of it was Unix-based and I was using Windows at the time. I switched over and never looked back — using Ubuntu at the time, much to the chagrin of some of my interviewers. So I spent college running Linux, but I never was an open source developer. But once I took an operating systems course and liked it, I figured out that there was a Red Hat office close enough to where I wanted to live, and it just felt like a perfect fit. I applied and got a job as soon as I could. I was hired to work on Podman initially — that's what I did my internship on — and that worked out well. They seemed to like me enough, so they pulled me back and moved me over to work on CRI-O, and that's where I've been ever since. Oh, now we have to ask: which Red Hat office was that? That's in Westford. Oh, okay. I got you. So formerly the Boston office — mind you, Westford is 40, 45 miles, I think, from actual Boston, where I live. But now there is the downtown Boston office, so there is an actual Boston office as far as I'm concerned. Okay, so that got you into open source because you started using Linux, and as a contributor, you got pulled in because you started to get interested in operating systems. Did you have what they refer to as that open source itch — I can't think of the exact phrasing — that moment where you're like, this thing's really bothering me, so I'm going to go fix it? There definitely were moments. I wasn't great about contributing things upstream, but it was very special to be able to dig into, say, the display manager and be like, I want it to look like this, or I want it to do this. 
I definitely bricked a number of my early installations trying to customize them too much. Yeah, I know that feeling. I had a deep, deep fear when I was installing Slackware on a computer in my dorm room that it was going to literally light the monitor on fire. It used to happen fairly often, but things have gotten quite a bit better. It was easier when I started. Yeah, definitely — put Ubuntu on there and, you know, it was much simpler. Cool. So what is your role on the CRI-O team, or on the Node — what do we call them, the Kubernetes working group? SIG. Yeah, that's it: SIG. So I mostly maintain CRI-O; say 80 to 90% of my time I spend thinking about the CRI-O code base. I serve as one of the primary triagers of CRI-O issues and pushers of CRI-O features. There are plenty of features that SIG Node ends up working on that both of the CRI implementations have to implement, so I'll often find myself working on those. Then I spend some of my time in SIG Node, thinking about and moving forward the Kubernetes ecosystem and features in the kubelet. But yeah, that's where most of my time goes. And then I spend a little bit of my time maintaining conmon, which is a little agent that each container needs — it runs each container and pays attention to its logs and its life cycle. So I also maintain that. Is that like the pause container? Is it running in the pod? No, it sits between CRI-O and the container. It's the entity that actually spawns the runc process, and once it does that, it attaches to the standard out of that process and forwards the logs to disk so that the kubelet can read them. It also watches for SIGCHLD, so when the process dies, it can write its exit code to a file that CRI-O then reads. So CRI-O can actually shut down. 
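A toy sketch of the conmon duties just described — run the workload, forward its output to a log on disk, and persist its exit code to a file that a (possibly restarted) manager can read later. This is an analogy, not CRI-O's actual code, and the file paths are made up for illustration:

```shell
# Toy "conmon": supervise one process, keeping its logs and exit code on disk.
run_supervised() {
  logfile=$1; exitfile=$2; shift 2
  "$@" >"$logfile" 2>&1   # forward the child's stdout/stderr to the log file
  echo $? >"$exitfile"    # persist the exit code where a manager can find it
}

run_supervised /tmp/ctr.log /tmp/ctr.exit sh -c 'echo hello from the pod; exit 3'
cat /tmp/ctr.exit   # → 3
```

Because the outcome lives in files rather than only in the supervising process's memory, whoever reads it can come and go independently of the workload — which is the property the next part of the conversation relies on.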
And because none of the containers are direct children of CRI-O, none of the containers die when CRI-O shuts down; its configuration can change, it can come back up, learn that all of these containers are still up, and basically re-remember all of the state of the node. Yeah, that's good. You know, I have a very similar experience of that with libvirtd — same idea: you can launch all the things, and then for whatever reason, if you have to shut libvirtd down, you can do that and then bring it back. The reason I mention it is that I think developers in general probably have more experience with being that close to their virtual machines, whereas with containers, I think it's less common. But that's why I'd say it's comparable. Sorry, Josh, did you have something you wanted to ask? Oh, well, you were mentioning working on some features. You've been talking about how the job of CRI-O is to be boring, but I seem to remember from KubeCon that you are actually working on adding some new things to CRI-O — things that might not even have been available via Docker. Well, yeah. We tout the boringness of it, but at the end of the day, people want to get work done, and often this work ends up being interesting and novel. So there are situations in which we do add features to CRI-O. Some of the things we're working on — yeah, you mentioned the KubeCon talk. There are some situations where there are resources on nodes that need to be provisioned for containers that the kubelet isn't aware of. It's kind of a hard thing to get fields into the pod API, because a ton of entities think about the pod API. 
So, as kind of a ground-level thing, to add support for things like specifying the size of /dev/shm for the pod — which is how pods can share memory — or some experimental resources on the node like Intel RDT or block I/O configuration, we've added support for specifying an annotation, which is something that Kubernetes supports very easily: just arbitrary string key-values. So a user can specify a special string in their pod, and if the admin has turned on a switch, and that user is able to make a pod with that certain annotation, then they get access to some of these special features. Another thing we're thinking about more generally: Kubernetes has worked a certain way for a long time, where the kubelet basically does this list-watch loop. Before, with Docker, it basically was just, hey, what are your containers? What are your containers? What are your containers? — on a loop, to actually get the state of the node, because it was kind of a hacky shim, as we were talking about. So now we're imagining what the future could look like and thinking about whether we can wire up events in the kubelet, so that it can listen to events and reduce the runtime CPU usage. And then, following a couple of KEPs upstream, we're working on reworking the way that stats are gathered. I'm sure this will come as a shock, but the way that stats are gathered in Kubernetes is also kind of hacky. It used to be that cAdvisor would monitor the behavior of the Docker containers by inotify-watching the cgroup hierarchy. Basically, whenever a new cgroup was created, it would go, ooh, a new cgroup — and then look at the hierarchy and say, okay, here's what the memory of that process is. 
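Circling back to the annotation mechanism Peter described a moment ago, a hedged sketch of what that looks like from the user side. The key shown is CRI-O's documented ShmSize annotation, but whether it takes effect depends on the node admin having allowed it in CRI-O's configuration; the image name is a placeholder:

```shell
# Illustrative pod requesting a larger /dev/shm via a CRI-O annotation.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: big-shm
  annotations:
    io.kubernetes.cri-o.ShmSize: "1Gi"
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder
EOF
```

From Kubernetes' point of view this is just an arbitrary key-value on the pod; only the runtime on the node gives it meaning, which is what makes annotations a low-friction way to trial node-level features.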
And it kind of kept working that way even as new CRI implementations were created, which has caused a lot of code bloat in cAdvisor. It's actually made cAdvisor kind of hard to maintain, because it knows a ton about all of these different CRI implementations. So we're working on moving the stats gathering from cAdvisor to the CRI implementation, which has a couple of little difficulties about it because of the way it has always worked — it's hard to change things in such a large and slow-moving project. Gotcha. That makes total sense. So, expanding on that: if you were in charge of the world, what would you like to see next? Infinite time, infinite money — what do you think is the right direction for Kubernetes, or the CRI, or anything in the middle? So — and most of my worldview is focused on SIG Node — I would love a world where the kubelet was the process responsible for pretty much what it's doing now: it listens to the API server and then calls down to the CRI implementation. And I would love for the CRI implementation to actually be a very thin proxy — one that basically runs all of these per-pod processes that are actually responsible for delegating the work needed for that pod. And when a container dies, it would emit an event up through the proxy and back to the kubelet. 
So basically, the CRI piece would be much thinner and much smaller, so that there wouldn't be all this listing, there wouldn't be this process just waiting for stuff to do; rather, all of the little processes would be responsible for their little piece of work per pod. That would allow Kubernetes to scale up and scale down much better. Kubernetes has a lot of trouble with the single node, or the smaller box — it's really meant for, okay, I've got a huge cloud server with a bunch of VMs, and let's run these things in the VMs with this elastic amount of resources available. So, thinking about what it looks like for a tiny little server to run these containers, and what the memory and CPU constraints would look like — it would be great if it was more flexible. It's kind of wasteful now, being frank. But obviously, it's worked this way for a long time and it works well, so it's hard to change. I would like to live in that world eventually. Right, because you would have to change both sides of the problem, and obviously changing the Kubernetes side of it is quite difficult at this point, I imagine. Right. So my hope — I mean, we haven't yet put up our proposal, but some of the folks on my team at Red Hat are currently imagining what this proposal would look like. Once we submit that — the proposal for the event-driven approach — and it goes through a couple of releases, that's the first step. And then, once we have events, we can start thinking about: okay, what does CRI-O actually need to do all the time? What is this daemon that we're running? Do we need it? How can we pare it down? 
Another thing we're working on that I forgot to mention is that we're rewriting conmon. We mentioned conmon earlier, the container agent — we're actually rewriting it in Rust, the new hot language that everyone loves talking about. And we're updating it to a per-pod model rather than a per-container model. Right now, for every new container that's created, a new conmon runs; for every container exec that runs, a conmon runs. And all of this is more processes than are actually needed. So hopefully in 1.24 we'll have a preview of this conmon 3.0, which is a per-pod conmon that CRI-O can talk to through an API. That will also take us another step forward toward this world where that process can actually be responsible for calling all of the runc create, runc exec, runc kill operations. And then eventually we can start paring down the things that CRI-O needs to do and have it be as focused as possible. And somehow I sense a lot of debugging in the future — particularly for things like exec. Yeah, definitely. It's kind of funny you mention it, because it's as if Kubernetes needs more complexity, as if it needs more processes in the picture. But there are trade-offs everywhere, and we will have to make it easy to debug conmon 3. And sorry, maybe I misunderstood — but in the proposal with conmon, wouldn't the number of things happening actually be reduced, or do you think they would go up? If you're doing it by pod versus by container? There will be fewer processes, but conmon will be more complex. Oh, gosh — doing more things. And there's currently crictl, which is a tool that can speak the CRI client language. 
So it can ask: hey, CRI-O, what are the containers that are running? But that doesn't exist for this new agent that we're making. So we will have to think about how best to see into the world of conmon as the scope of conmon increases. Because I imagine a world in which CRI-O basically says: okay, I need a pod with this specification. And then conmon does the namespace pinning; it acts as PID 1 for the pod, if the pod has a private pod-level PID namespace — which is the reason for the pause container, which you mentioned earlier, so we'd not need the pause container in most situations. And then it sets up all of the containers, does all of the execs, and can stop the containers when it's asked to. So basically, all of the functions that CRI-O is kind of doing — moving them down to the pod level. One day. Right, gotcha. Sorry — did you have something else to ask? Well, I am interested in one thing, which is that if conmon is being rewritten in Rust, was there some sort of internal debate about that? Because obviously you're rewriting conmon in Rust and at the same time having it take on more work that's currently being done by the kubelet. So, yeah. Basically, the options were: extend the behavior of the current conmon, which is written in C, and has been put together over a long enough time — and the main developers have cycled in and out enough — that I'm now one of the main contributors of it. And it's hard to work with. 
So it was either that or a rewrite — and rewriting in Go has a lot of limitations with respect to memory usage: the Go runtime is very heavy, and the garbage collection is expensive, especially if we're trying to reduce latent CPU usage. So Rust lived in this sweet spot where it could be performant in the way that C is, but is also a more high-level language that's easier to work with — it can do threading very simply, and has all of these niceties that we're used to now. So I guess I didn't realize it was not in Go — that makes more sense. I mean, if you're going from C... Yeah. And it has been very useful being in C, because it's very low memory, even as a per-container process — it's like two megs per container. If there are a couple of containers per pod, that's pretty low overhead overall. But yeah. So a fun secret is that we have conmon in C — yeah. Right, right. So that makes a lot more sense. At least for me, if it was in Go and going to Rust, that's a much harder decision, given that, as you were kind of saying, most of the Kubernetes world is in Go. But if you're coming from C, you have a lot more options there. So we did actually have a question from the audience I was just going to bring up, which was: is there a plan for the two versions of conmon to coexist as the new one comes out? Or are you going to try to do a forced move? How do you think that will happen? Right. So right now, we have the ability to toggle between runtime types, because we have support for Kata, which is a VM-based runtime type, so you have to work with it in a very particular way. 
And then there's the runtime type OCI, which is the conmon-based one. So my plan for this would be to introduce a third runtime type, which has yet to be named. Long story short, they would be coexisting for a while as we work out any issues with the new conmon. And CRI-O supports the three Kubernetes releases that are currently supported, so we would be supporting a conmon-based one for a while anyway. We'll safely move through until it's up to snuff and capable of handling production workloads. Well, we just lost your sound, Langdon. Sorry — is that better? If I don't hit the mute button, it also works better. Sorry, I coughed earlier and didn't want y'all to have to hear that. I just said: that's Kata as in Kata Containers, right? Correct. Yes. Kata Containers — the concept of running a tiny little VM for a pod, instead of a set of containers. Right. Yeah, I just wanted to make sure we were talking about the same thing. I guess the question would be: how would it affect more exotic things, like KubeVirt, for example? So — KubeVirt, from my understanding, and I haven't really worked a ton with KubeVirt, but I've supported a bit of it — basically they create a privileged container process that then talks to libvirt directly, and that creates a VM. So the VM doesn't really live in the container chroot, in my mind — or maybe it does; I'm not super familiar with exactly how. No, it doesn't — it kind of emulates a pod rather than being a pod. Right. 
So theoretically — because a container is just a process on the host that's in a chroot environment — not very much changes. What really changes is the way that we look into those processes. Conmon is the thing that's listening to the standard out and writing it to disk, and monitoring the exit code. None of that changes if the configuration gets a little bit more exotic. For something like Kata, where the pod is actually a VM and the containers are running inside of the VM, that's not really considered within scope of this change, because the Kata community has a very particular way they interface with CRI-O, so we're not messing with that. This is for runc- or crun-type containers — processes on the host that are run inside of a chroot environment. Those will be within scope of the change. Gotcha. You know, you've been talking a lot about change, and one of the things that's been coming up for me a lot as a faculty member now is: if somebody wanted to get involved with the project — either CRI-O or conmon or whatever — I would imagine there's a bunch of code that's small in the sense that you can understand a small amount and make a fix. Are you tracking anything like easy bugs or easy fixes, where I could point people to go and try to get involved in the community? Yeah, that's a great question. So in CRI-O we have a couple of tags, like good first issue and help wanted. When triaging, we try to apply those to issues that seem generally pretty small in scope — something that wouldn't be too complicated, but that we ourselves don't really have time for. 
And generally, we're always happy to help people find work they're interested in. We largely live in the Kubernetes Slack, in #crio, so if people want to learn how to contribute more, definitely reach out to any of the CRI-O developers and we'd be happy to point them in the right direction. We're always looking for new contributors. And where does the conmon effort live? Is that also running in the CRI-O community? Right now, yeah, it's the CRI-O community; the Podman community will soon join the effort, they've been focusing on a new major release, but they're going to join as well. Right now it's just a couple of developers on the CRI-O team, and it's still pretty early stage. We're pretty close to an MVP, but there are still a number of steps, and a lot of us keep getting distracted. You can talk about it in the CRI-O Slack, and the repository is github.com/containers/conmon-rs. But it's at such an early stage that we haven't really started working concretely on the integration into CRI-O. Yeah, I was just thinking, because a lot of young developers who are just leaving college are learning Rust, so they're going to be interested in that. If somebody is a Rust developer and wants to get involved with conmon, where would they go? Yeah, and speaking of the future a little bit, I do think that's the future of some of this cloud-native, Kubernetes world: rewrites in Rust. I mean, Go is a great language. It has a very low barrier to entry, and it's very clear what code is doing, even very complex code.
But I think that for processes that live in the world CRI-O lives in, the container manager, when you start really tangibly thinking about the processes of a container, something a little more low-level, a little more conservative about memory, will kind of be the future. We've been joking about wanting to rewrite CRI-O itself in Rust for a long time, and we're kind of reconsidering that. But as the future goes forward, I can see things like that happening more and more. Well, the reason I ask, another experience that kind of reinforces your opinion in my mind, is that at the university we've been looking at needing a low-level language to teach low-level classes, like an operating systems class. This was specifically in the data science space, but whatever. We wanted to get away maybe from C and C++, and it looks like Rust might be the best choice for that, because it's low enough level. And I also think Rust starting to land in the Linux kernel, maybe not quite mainline but certainly nearby, kind of reinforces that opinion. But I think one of the really nice things about modern development, the last five or ten years, with the popularity of containerization and all that jazz, is that you can choose the right language for the right job significantly more easily than you ever used to be able to. So just because you have one thing written in Go, another thing written in Rust, and another thing written in C, yes, you need to think about making that decision, but if there's a good language reason, it's very doable now. Definitely. Nice. Did you have something else there, Josh? No, no, just yeah.
Well, Langdon, do you mind if I change topics completely here? Oh yeah, go ahead. Are you still doing competitive dancing, Peter? Well, no, the last couple of years haven't been the most conducive to competitive partner dancing. I last competitively danced when I was in college; I was a competitive ballroom dancer. I've kind of fallen off of it, but I am interested, as it becomes safe, in poking back into that world, at least casually. I don't think I'll remain a competitive dancer, but yeah. Classic ballroom dance, like waltz and that sort of thing? Yep, so the waltz, tango, foxtrot, and then also the Latin dances, like cha-cha, rumba, samba, and jive. So have you considered a TikTok channel? Because that would be an appropriate landing point for significant amounts of dance. I think I've forgotten a lot of the stuff, so I would have to get good again, or at least better, before considering avenues of advertising my skill. Right. So we did actually just have a question in the chat: are you seeing other parts of Kubernetes that are typically in C, or whatever, moving toward Rust? Or is conmon kind of a one-off? I don't actually know of many other projects written in C. Isn't Envoy written in C++? I'm not very tuned into that community. I want to say Envoy is either C or C++; I can't remember which. Well, I know the new Linkerd is Rust, yeah? Oh, that's possible. I don't know. But I think a lot of the conversation will end up being what to move from Go to Rust, because Go is clearly the dominant language in this ecosystem.
And I mean, for instance, thinking about the prospect of rewriting the kubelet is horrible and makes me scared, because it's hundreds of thousands to millions of lines of code, incredibly complex, and not always super well unit tested. So for a lot of the pieces in the higher-level Kubernetes ecosystem, I'm not sure it makes a bunch of sense. Obviously it's very modular, so if some really motivated developer wanted to go rewrite etcd in Rust, they could, and just plug it in and it would work. But I think the biggest considerations are anything underneath the kubelet. Like, there's the project crun, written by one of our colleagues at Red Hat, Giuseppe Scrivano. It's runc rewritten in C, which makes sense because it's nice and low-level, you need access to a lot of the kernel APIs, and there actually are pieces of runc written in C to do some of the things that Go can't. So for positions like that, I can imagine a world in which someone... I mean, there actually is a Rust runtime whose name I'm forgetting now. Youki or something? Yeah, youki. But I think it's the container-manager level, so CRI-O or conmon, or maybe runc, that are the clearest targets for that kind of rewrite. Gotcha. Yeah, I think that's interesting, because language choice can make a big difference to the situation or whatever. But Rust, and this is only my opinion, is only just getting to the point where it's something I would trust at that kind of level.
You know, it's great and all, and people can really like it and all that jazz, but if I'm going to run something in production... I don't know. I was a C programmer for a long time. And just because a language is old doesn't mean it's trustworthy. Well, that too, right. I would make that point about C++, but yeah. And I think the barrier to entry of Rust, at least speaking for myself: it changes the way I think about programming models. We have some of that model within C++, but it's not mandated the way it is in Rust. So I found it hard to switch over from Go to Rust, and I can definitely see that being a general barrier to entry for any project thinking about moving over to Rust. Well, one of the things you always have to look at is the programmers who have a track record versus the programmers who are new, right? And I'm seeing a lot more Rust adoption among new programmers. Because if you look at the Go documentation, it starts out with "Go is just like C, except for these things," which is great if you're like me and you've already programmed in C. But if you're getting started, or you're crossing over from Java, that's not helpful at all. Right. And in the world of code, we always have to think about the fact that there are always more new programmers than there are old programmers. Yeah, you'd like to say that to anybody who's still raking in the bucks for COBOL, right? So, there was a question that... I'm sorry, go ahead. Oh, I was going to say, somebody asked if there was advice on learning Rust. At least for me, when I started to take a look at it, I didn't get very far, because work kind of got in the way.
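The "mandated in Rust, optional in C++" point is essentially the borrow checker: shared references and mutation are not allowed to overlap, and the compiler rejects code that breaks the rule rather than leaving it as a convention. A tiny sketch (all names invented for illustration):

```rust
fn main() {
    let mut counts = vec![1, 2, 3];

    // Any number of shared (read-only) borrows may coexist...
    let first = &counts[0];
    let last = &counts[counts.len() - 1];
    println!("first={first}, last={last}"); // prints "first=1, last=3"

    // ...but mutation requires exclusive access, so the compiler checks
    // that the shared borrows above have ended before this line. Moving
    // the push between the borrows and the println would be a compile
    // error. In C++ the same discipline is only a convention.
    counts.push(4);
    println!("len={}", counts.len()); // prints "len=4"
}
```

Coming from Go, where any goroutine can hold a reference to anything, having the compiler enforce this is exactly the model shift Peter describes.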
But the Rust website is pretty good at some of the documentation and tutorial stuff. I was wondering if either of you have anything you've heard or seen or experienced that seemed like a good way to learn Rust. I learned in a very... I don't know, books don't typically work for me. I try to read and I just kind of get hazy and have trouble paying attention, so I don't have any suggestions like that. What I do is look at code written in Rust by people I trust. One of my colleagues, Sascha Grunert, who also works on CRI-O with me, has done a ton of writing in Rust. So I basically look at his code, see how it looks, then write some code, see how many compile errors I have, and Google all the compile errors until I fix them. Eventually, iteratively over time, I get more familiar with it. But I don't think that's actually the most efficient way to learn a language; I think that's just the way that works for my brain. Yeah, it's funny. I'm actually very good at reading, I really like reading, but not for something like learning a programming language. I usually find the best way for me is to try to go build something, with Stack Overflow and Google open over here and my text editor open over here, and I just keep banging at it until it works. But I think there are better ways to do these things; I'm just not 100% sure what they are. Like I said, I like the Rust website, it's rust-lang.org, and I like some of the content there; I thought it was pretty good. And isn't Firefox now completely Rust? Or is it just a Rust engine?
But I would trust that code as an exemplar, I would think. Definitely, yeah, I agree. I don't know if all of it is Rust, but I know a lot of it is; I mean, that was a lot of Mozilla's motivation for creating the language. Right. I was just looking; I feel like Sascha is going to be on the show in the future, right? Yeah, he is, actually. Next month, I believe. I was just looking because I was like, do I have the right Sascha? I'm just trying to remember whether it's next month or the following month. Let's see. So next month... sorry, it's February now. No, Sascha is April. So Sascha is going to be on in April, talking about SBOMs and release artifact signing for Kubernetes. Gotcha. All right, so we are almost out of time, but I do, on behalf of Twitter, have to ask about the floral pants, just on principle. Yeah, what about them? Oh, like, do you own a lot of pairs of floral pants? Where do you come by them? Are they easy to find? Do you recommend them for others? Absolutely, I 100% recommend them. I typically dress largely masculine, and I find a lot of masculine clothes don't have the flair I'm looking for, so I often shop in the women's section of stores for fun pants. I've got some; I don't know if I can get my leg up here, but... okay, here we go. Whole aesthetic thing, maybe. Yeah, right. So yes, I definitely think more people should be wearing fun pants. I kind of have a thing against jeans, and the nice thing about fun pants is that they're also comfortable, which is a big thing for me as well. So: definitely more flowers in more places is my review. See, I would struggle with that, because you mentioned they tend to be women's pants. Jeans already don't have enough pockets for me.
I can't imagine losing even more pockets to the corruption that is the lack of pockets in women's pants. Yeah. Now, I actually do have pairs of floral pants; my summer-weight pajama pants for working at home are largely floral prints. But those came out of a catalog; men's casual pants have exploded as an industry over the last two years, for some reason. Yeah, but I feel like they're still niche enough that they're more expensive than what I'm looking for. These weren't, but you know who wears really flashy pants? People in the culinary industry, back of house. Oh, that's true. Chef pants are a thing, and those are expensive, but they're also indestructible. Scrubs are going that direction too; you're seeing a lot of scrubs now in fancy patterns. It's funny, because my father was an ER doc, and I remember him basically wearing scrubs most of the time growing up, and they were never anything besides green, like ever. We bought my dad... my father's an obstetrician, and so we bought him a set that had little tiny prints of chlamydia on them. Nice, nice. On that note: thanks so much for coming, Peter. We really appreciate it. We love talking about the low-level things that are going on in Kubernetes, and we like to hear from the engineers who are doing the work about where they see it going. That's really interesting about the Rust choice, and interesting how you're trying to get to more of an event-driven infrastructure, even at the CRI-O level. But thanks to the audience; we really appreciate you sticking around for us. And please definitely join us; we already have guests lined up for the next three or four shows, so keep an eye out. Last Tuesday of the month.
Yeah, next month is going to be a couple of people talking about, as I was mentioning, more exotic runtimes like KubeVirt. So a couple of people talking about virtual machines in a container infrastructure: somebody from the KubeVirt project, as well as somebody who has worked on virtualization in the Linux kernel for a long time. Right, yeah, that should be a fun show. But just remember, it's the last Tuesday of every month, 10 a.m. Eastern. What is that in UTC... 2 or 3 p.m. And yeah, we'll see you next time. Okay. Thanks. Bye-bye.