Well, hi everyone. Thank you for coming to yet another episode of Kube by Example Insider, where we try to interview people who are actually doing the work in the Kubernetes space. My big argument for the show is that, unlike many proprietary products, if you want to know what's going on in open source, asking product management is often flawed; as we used to say, you don't always get the pizza that you requested. The challenge with open source is that people can, you know, randomly contribute on nights and weekends to get the features they want into the system. So if we talk to the people who are doing the work, then we can ask, hey, what are you seeing as the things that are coming up soon? Today we're going to focus on the OCI, and we'll talk more about what that is in a few minutes. But before that, I'd like to introduce our guest, Phil Estes. And we may have another surprise guest in a bit. But Phil, do you want to introduce yourself? Oh, actually, before we do that, let me introduce my co-host, Savita. Sorry, I forgot to introduce you. She'll introduce herself and then hand it to Phil.

Thank you, Langdon. Hi, everyone. My name is Savita, and I've been a writer for almost the past two or three years now. I've been contributing to the Kubernetes community for the past five or six years. I started with documentation, and then I did a bit of release work; I was on the release team for five releases. Then I found my home with security, so that's where I'm contributing right now, and if you want to find me, you can find me in those channels. I'm also a maintainer for the Konveyor project, which is a CNCF Sandbox project that focuses on application modernization. I think that's enough about me. Now I'm going to pass it on to our guest, Phil.

Thanks. Yeah. Hi, everyone. Phil Estes. I'm a principal engineer at AWS.
But more importantly for the show, I've been a long-time contributor to the container runtime part of the stack. A decade ago, for anyone that was at KubeCon, there were a lot of ten-year references: ten years of Kubernetes, Solomon Hykes talking about building Docker ten years ago. So yeah, a decade ago I got very involved in the Docker runtime project. At the time I was at IBM, collaborated a ton with Red Hat folks in the super early days of Docker, and then helped launch the OCI and all the work that I think we'll probably talk about here. In that same timeframe, we launched containerd as a sort of vendor-neutral, just-core runtime that was donated to the CNCF seven years ago now, actually seven years ago this month, and I've been a maintainer on that project for the last seven years and watched it grow. That's sort of my entry point into Kubernetes, because containerd became a popular choice for the runtime underneath Kubernetes. I always feel weird when people invite me to Kubernetes things, because I'm not an actual contributor or member of the org, but I'm adjacent to Kubernetes and work a lot with the Kubernetes team. So yeah, that's me.

That's cool. I'll say it was completely planned that it was seven years ago this month; that's why we invited you on the show. Let me ask you first, before we really get into the OCI, one of the things we like to ask our guests is what got you into open source in the first place? How did you get pulled into that space?

Yeah, that's a great question. I'll try to make it brief given my age. I grew up early in my IBM career: graduated college, joined IBM, worked on proprietary operating system software way down in the bowels of IBM, doing nothing with customers, just coding. And I was curious about a lot of things.
And this new operating system, Linux, started showing up, and the internet was kind of new, so finding information on things was not as easy. I was taking a master's-level course and needed a compiler, and it was super painful to think about buying a compiler just to do the work for this course. Long story short, I was like, this Linux thing is super easy: download it, install it, you've got GCC for free, and I can do all the homework. So that kind of got me hooked in general on the concepts around open source software. But it was a very long time before I actually contributed. I ended up working for a good while in IBM's Linux Technology Center, which had a ton of collaboration with Red Hat long before the purchase. So I was sort of in the realm of open source, using open source, but it really wasn't till 2013, when I switched roles in IBM and joined the cloud unit that had been looking at PaaSes and all this exciting new world of cloud technology. That happened to be the year that Docker became extremely popular, and IBM said, we need some people to just go figure out this community, go get involved, see what this whole thing's about. And that's when I really dove in: I created a GitHub account and figured out how to create a PR and fix a bug. And I just loved it. It radically changed the trajectory of my career at that point. I became a maintainer within a year plus and got very involved in contribution and in trying to help figure out, well, we all know how communities are: a slight bit of chaos mixed with some bit of organization. But I loved it. I loved the environment. So that's really when I got hooked.
It was funny when you were saying that about the early days of the internet. I still tell students sometimes, back in the day, we used to guess websites, because things like Yahoo didn't even exist, so the only way you could find a website was by guessing the name of it. It's been a big change over the years. That's really cool. I also remember getting Slackware on floppy disks and installing it in the late 90s, a stack of 16 as I recall, to get the operating system installed. And you had to make sure you set your monitor settings correctly; otherwise, you could literally set it on fire. Fun times. So that's cool. It's interesting, because I think a lot of the people we normally interview on the show got into open source mostly because they were trying to scratch their own itch. It sounds more like you were almost going and trying to build a product, or there was a need in this space, rather than necessarily something personal that brought you there. Does that seem fair?

Yeah, I think that's fair. Like I said, I was in the realm, and so people knew: oh, Phil knows Linux, he's been on the edges of that community. But yeah, it was essentially a part of my job; I was asked to go investigate this new technology. And the cool thing is, I had always worked at the operating system level, so coming to Docker and realizing that everything they were doing was really built on Linux kernel capabilities, namespaces and cgroups and all, was a good fit. I assume at some point in your career you've had a job where you've kind of learned to turn the crank.
And that's the kind of role I'd been in just before that. So, like I said, it really did change the trajectory of my career, because out of that came a ton of learning and digging in at a deeper level than I had before. And then from there, figuring out, hey, it'd be fun to start sharing this. So I started speaking at conferences, and that launched a whole aspect of my career that had never existed. But yeah, it all came out of almost like a job assignment: go figure this out.

Right, right. So, changing gears a little bit. I mentioned the OCI at the beginning, the Open Container Initiative. I think it would be really awesome if you could explain what the OCI is and how it relates to containerd or Docker or Podman, because I think it's something that's quite confusing for many people.

Yeah, absolutely. It's one of those things with a pretty interesting origin story. Again, to make a KubeCon reference, there was a keynote, I think on Friday, where Bob Wise, who I think was at Samsung's cloud at the time but joined AWS later, gave some hints at that story. The essential parts of the story are that, super interestingly, I happened to get involved in a project that immediately ended up in lots of controversy. It wasn't just working on Docker, the runtime; all of a sudden CoreOS came out with Rocket and said, hey, we think Docker's going in kind of the wrong direction, there should be a spec, there should be a definition of what a container is, and Docker shouldn't be in charge of defining how containers work.
I think Red Hat was going through a transition with re-architecting OpenShift around containers, using the Docker engine initially, but then clashing around low-level debates: should systemd be a primary part of how containers work? And Docker saying, absolutely not, we think they're just application containers. So out of all this swirling controversy, large entities like Google and Microsoft and IBM, and including Red Hat, were pretty worried about what a rift in this very early technology would bring to the industry. There were essentially some meetings, and Bob, in his keynote, talked about a specific meeting with Dan Kohn involved, that launched both, I think in the same summer of 2015, within a few months of each other: the CNCF was launched to shepherd the Kubernetes project, and the OCI was formed to say, hey, CoreOS, Docker, Red Hat, whoever else has ideas, let's get together under a consortium, similar to the CNCF, run by the Linux Foundation, and hash out specifications for what a container is and what a container image is. So the OCI had a very symbiotic relationship with the CNCF: the CNCF was Kubernetes, the code; the OCI was, here are the standards, so that if you want your own container runtime, or you don't want to use Docker, at least conform to these specs so we can all interoperate. That's really how the OCI was launched.

Right. I actually remember that, because it was two really big splashes, right? The CNCF and then the OCI. And I think it's important to note that the OCI isn't really software in and of itself, right?
It's more like an RFC or something like that, and I think that's where it gets a little bit confusing, plus the fact that there actually was, wasn't there, an OCID? So how does that relate to containerd?

Yeah, good question. The OCI is imperfect in that it's not just a pure standards body. At the same moment of its founding, there was also the decision for Docker to give away the core, what had been called libcontainer, a subdirectory within the Docker project that really did the work of starting a container process on Linux. They donated that to the OCI to be the first reference implementation of the runtime spec. So runc was that new software project that would be developed alongside the spec, so that there'd be a reference you could use if you were going to build a higher-layer container runtime, whether it's Podman, or Docker continuing along their path, or CRI-O that came out, or containerd. And runc has been a broadly used piece of software. That became clear when we had a huge CVE in January of this year; it's just amazing how runc is probably on every Linux server in the world, and when it needs to be patched, it's a big deal. So runc is that initial implementation. containerd, Docker, CRI-O, Podman, they all conform to the OCI spec mostly by using runc. And there are other implementations of the spec: Red Hat created crun, which is written in C and operates just like runc but has some other capabilities that were important to Red Hat. But runc is sort of a de facto standard that goes alongside the spec.
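To make the runtime spec concrete: what runc (or crun) actually consumes is a `config.json` describing the process to run and the root filesystem. Below is a minimal sketch in Go of a few of those fields; the struct definitions are hand-rolled for illustration and are not the official runtime-spec types, which have many more fields (mounts, namespaces, capabilities, and so on).

```go
package main

import (
	"encoding/json"
	"fmt"
)

// A hand-rolled sketch of a few fields from the OCI runtime spec's
// config.json, the file a runtime like runc reads to know what
// process to start and where the root filesystem lives.
type Spec struct {
	OCIVersion string  `json:"ociVersion"`
	Process    Process `json:"process"`
	Root       Root    `json:"root"`
}

type Process struct {
	Args []string `json:"args"` // command to run inside the container
	Cwd  string   `json:"cwd"`  // working directory inside the container
}

type Root struct {
	Path     string `json:"path"` // path to the unpacked rootfs
	Readonly bool   `json:"readonly"`
}

// minimalSpec builds the smallest config we can illustrate:
// run `sh` at / on a read-only rootfs directory.
func minimalSpec() Spec {
	return Spec{
		OCIVersion: "1.0.2",
		Process:    Process{Args: []string{"sh"}, Cwd: "/"},
		Root:       Root{Path: "rootfs", Readonly: true},
	}
}

func main() {
	out, _ := json.MarshalIndent(minimalSpec(), "", "  ")
	fmt.Println(string(out))
}
```

The point of the spec is exactly this shape: any runtime that understands this JSON can start the container, which is why Podman, CRI-O, containerd, and Docker can all share runc underneath.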
And so all the tools, when you build a container, when you run it in OpenShift or Amazon's cloud or Google's, the fact that the OCI specs exist means we all either depend on the same binary implementation or have a conformant implementation. Then you expand that out to registries, whether Quay, or a commercial registry, or one of the big cloud providers' registries: we've all agreed on how we transfer container images and what they look like. The specs have given us that level playing field. Whether you use containerd or CRI-O or Podman or Docker, the common world that we wanted to create essentially exists, because we can all interoperate.

Yeah, I think the place where I was most impressed by that proliferation of the standard was actually Flatpak. Flatpaks are containers, right? But they're wildly different from what you normally think of as containers. And yet, to the best of my knowledge, Flatpak conforms to the OCI as well. I think it's really impressive for the standard to be usable in two wildly different scenarios; even though both are mostly applications, a GUI application and a command-line application are quite different in a lot of ways. So, let's see, what was I going to talk about next? At present, are you primarily working on containerd, or are you mostly working in the standards space, or both?

Yeah, both, although I would say I'm more technically involved in containerd, the project and its lifecycle of releases. One of the cool things joining AWS: one of my interests was really building, not just myself working upstream, but AWS had built a container runtime team that was learning to contribute upstream and take leadership roles, become maintainers.
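The "agreed on how we transfer container images" part rests on one small idea: every blob (a layer, a config, a manifest) is addressed by the digest of its bytes. A minimal sketch of that content addressing, using the standard library only:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// digest computes the OCI-style content address for a blob: the
// sha256 of the exact bytes, prefixed with the algorithm name.
// Every registry and runtime that implements the specs derives the
// same address from the same bytes, which is what lets them hand
// blobs to each other and verify nothing changed in transit.
func digest(blob []byte) string {
	return fmt.Sprintf("sha256:%x", sha256.Sum256(blob))
}

func main() {
	layer := []byte("pretend these bytes are a compressed layer tarball")
	fmt.Println(digest(layer))
}
```

Because the address is derived from the content, two registries that have never spoken to each other will still agree on the name of the same layer, and a runtime can detect a corrupted or tampered download simply by re-hashing.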
So a lot of the last three years has not just been me working in the project, but mentoring and helping this AWS team learn how to be more involved in open source, learn how to contribute, because containerd is used broadly in AWS services that offer container compute primitives, like our Kubernetes offering or Fargate. So yeah, I'd say my day job is very focused on that. I'm also a member of the OCI technical oversight board, and I've been on that for a number of years. But that's really an oversight role; the maintainers of each spec are doing the day-to-day work of new spec releases, changes, and resolving requests for new ideas to be added to the spec. So I'm still involved in the OCI, but there are others who carry on a lot of the day-to-day work.

Yeah, that brings me to a question I've had in my mind. When you started talking, Phil, you mentioned the community, right? And we have a lot of stakeholders who are interested in contributing, who want to have a say in the OCI spec; you could even call it a too-many-cooks kind of situation. So how do you deal with the enhancements that come through? What is the process? I know the heart of open source is transparency, and it's a part of the Linux Foundation, but can you shed a little more light on it, so people know how to get involved in case they're interested? It's probably documented somewhere, but it would be amazing to hear from one of the maintainers, who is on the technical oversight committee, if I got that right.

Yeah, no, that's a great question. I will say that the OCI has struggled with that. It's not the same as an open source project where people just come in, do a quick PR, and sort of get established as a contributor.
Specifications have been painful work for decades, because there are just lots of opinions, and by nature you want specs to move slowly. You end up with so many projects that conform to a spec that if you add a new feature in a revision and say it's required, you may have forced hundreds of projects to figure out how to add that feature to their software just to say they're still conformant. I'd say the first two years of the OCI were fairly straightforward, because Docker was kind of a de facto standard; they had already defined what an image looks like, and it was mostly small debates about, you know, what should we name this property? Docker named it this; do we still want to name it that? So getting to that first 1.0 OCI spec was fairly straightforward. Since then, the OCI has had to learn a lot about what it means to invite people to bring new ideas, and yet help them see how we can do that in a way that's not painful for adopters of the spec. We just released the OCI 1.1 image spec and distribution spec in the last month or so. We thought we were going to release that last January. So, as you can see, that was painful for people who were like, I'm ready for the 1.1 spec, I think this is what we want, we need it, and then it took another year. And most of that was allowing people who had strong opinions not to hold it up, but to make sure we fully understood why they were resisting a certain piece of language, something being a MUST or a SHOULD, for those that know spec language. Those seem like super small things, but you know that once you release and people start adopting it, it's really hard to change.
So yeah, spec work is a little more grind-it-out, a little more pain involved; you can't just cut a release every three months and say, hey, okay, the floodgates are now open for new features. It's a lot of trying to be super intentional with people and helping them understand. And we've had people come: in the early days, Sun and Solaris were still around, so containers aren't just about Linux. The Windows people were there, Solaris people were there, and now FreeBSD people have come and gotten their pieces added. So I think the OCI is appropriately open to new ideas and to making sure we cover all the use cases that people care about. But it can be tricky. One final example before I stop rambling about specs. We've had containers for about 10 years at least, popularly; the pieces of that have been around much longer. And a lot of folks have come and said, you know, the way we store images, where you've got these layers and they're all tarballs and they're compressed and you unpack them and layer them, that was a great initial idea, but here's a bunch of ways that it's slow, or lacks performance characteristics we could have if we redid it. But think about all the registries in the world, all the containers that have been built. You can't just say, well, tomorrow we're all going to do OCI 2.0 and images look totally different and they're better, so switch today. If we ever get there, there'll be a huge period of overlap, with people trying to transition to a new world of whatever OCI 2.0 looks like.
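The "unpack them and layer them" model is worth a quick sketch. Each layer is a diff applied on top of the previous ones, and a later layer can override or remove paths from an earlier one. Real layers are compressed tarballs and deletions are special whiteout files (`.wh.` entries); here, as a simplification, a layer is just a map of path to contents, with an empty string standing in for a deletion.

```go
package main

import "fmt"

// applyLayers sketches how a layered image becomes a root
// filesystem: layers are applied in order, with later layers
// overriding (or, via simplified whiteouts, deleting) paths
// from earlier ones.
func applyLayers(layers []map[string]string) map[string]string {
	rootfs := map[string]string{}
	for _, layer := range layers {
		for path, contents := range layer {
			if contents == "" {
				delete(rootfs, path) // stand-in for a whiteout file
			} else {
				rootfs[path] = contents
			}
		}
	}
	return rootfs
}

func main() {
	base := map[string]string{"/bin/sh": "shell", "/etc/motd": "hello"}
	app := map[string]string{"/app/server": "binary", "/etc/motd": ""}
	fmt.Println(applyLayers([]map[string]string{base, app}))
}
```

The performance complaints Phil mentions follow directly from this model: to read one file you may still have to download and decompress entire layer tarballs, which is why proposals for lazy-loading or differently indexed formats keep coming up.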
So that's maybe a good example of specs: there's this huge tension between conformance and new features, new ideas.

And if you're looking for a new, fun April 1st prank, you can always say, hey, 2.0 is coming. You can just unleash the chaos.

Yeah, you just have to rebuild everything, it'll be totally fine.

You can still have fun with specs, though; I threw the TCP/IP over avian carriers RFC in the chat. Something you said made me think of something else, actually. Is there a movement or discussion of BSD jails, for example, conforming to OCI, like some of the other Unix-like platforms and their interpretations of containers?

Yeah, there has been talk of that. I don't keep up as much with the runtime spec because, again, it's probably the slowest moving at this point; it's very stable, it's very well understood. It's mostly kernel features that appear in the Linux kernel that would be valuable, say, from a security context, and the runtime spec will add a way, in your container's config, to turn on that kernel feature. So most of the runtime spec work is the slow movement of new capabilities up through the stack. But yeah, the lightweight hypervisor folks have been around the OCI for a while. We just had some meeting time in Paris last week for folks to join and talk about OCI work, and this whole idea of using sandboxes as a more generic term: a sandbox could be a group of Linux containers, like a pod in Kubernetes, but it could be a VM. So people are thinking, I don't just want a spec that tells me how containers start; I want a spec that encapsulates the start of a sandbox, because that may involve network and storage, and starting a VM, and knowing all the configuration about memory and a kernel to boot that VM. Confidential containers is one of those use cases.
So yeah, the OCI is having to understand all these worlds where people are using container concepts, but maybe in ways that the original docker run container doesn't encapsulate fully. I think we will see the OCI grow, whether it's additional specs, like, here's the sandbox spec, and it knows how to run container specs. Those kinds of ideas the OCI will have to adopt; it will have to understand those use cases and figure out what they mean for the specs.

Interesting. Yeah, spec work is, I think, one of those other things. It's not politicking, necessarily, in the bad sense of the term, as much as there's a lot of negotiation and discussion about each of those pieces, because, as you said, it's really long-running and has a pretty strong impact on a lot more people than just the people making the spec. So, talking a little bit more about containerd: what is containerd doing exactly? What is its job?

Yeah. Like I said, initially it came out of Docker and some of those early contentious debates. Kubernetes needed a container runtime, and it had been using Docker for years. Docker at the moment was adding things like Swarm, which was sort of a slap in the face of Kubernetes: hey, we have our own orchestrator built into the binary that you're using to run containers under Kubernetes. And it wasn't just that. Docker was moving very fast, Kubernetes was moving very fast, and Docker was breaking Kubernetes releases and vice versa. So containerd was meant to be the stable core. It didn't have all the features of Docker; it just had the features that the kubelet needed to drive, like pull an image, start a container, start a pod. And these, well, we haven't even mentioned the term CRI.
The audience should definitely be aware of that. The Container Runtime Interface was a new API created for the kubelet to call into a container runtime. It disconnected what had been a very tightly coupled system of kubelet and Docker engine through a piece of code called the dockershim, which was finally deprecated just a few, well, I guess maybe years ago now.

I was actually going to mention that. I can't remember if it was this show or the other show I did where we actually did an episode all about the removal. And it wasn't actually that big a deal.

Yeah, right. It kind of ended a whole era and made sure that the CRI is now the only real way to connect Kubernetes to a container runtime. So containerd implemented that interface from Kubernetes and had a simple set of services: I know how to pull an image, I know how to start a container, I know how to get a container's metrics, all the things about managing a containerized process. For most of its life, containerd has simply been, hopefully, a very stable API, a stable runtime that anyone can use. Kubernetes obviously uses it through the CRI. Docker basically abstracted some of its core inner workings to use containerd instead of its own code, and that work is even continuing today; Docker, in essence, is getting smaller and using more of containerd over the years. So yeah, like I said, we've existed for over seven years now, and I'd say we're not the most exciting project, because our core mission is to be stable, secure, and reliable as a runtime, and the needs of a runtime have been fairly stable over that time period. Our basic ongoing work is that the CRI isn't static; it adds new APIs now and then, and there have been a couple just in the last Kubernetes release. So we add features that keep containerd viable as a CRI implementation.
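The shape of the CRI can be sketched as an interface plus any implementation that satisfies it. The real CRI is a gRPC API with separate RuntimeService and ImageService definitions and many more methods; the method names, signatures, and the in-memory fake below are simplified for illustration only. The point is the architecture: the kubelet programs against the interface, so containerd or CRI-O can sit underneath it interchangeably.

```go
package main

import "fmt"

// Runtime is a much-simplified sketch of the calls the kubelet
// drives through the CRI for every pod.
type Runtime interface {
	PullImage(ref string) error
	RunPodSandbox(name string) (podID string, err error)
	CreateContainer(podID, image string) (containerID string, err error)
	StartContainer(containerID string) error
}

// fakeRuntime is an in-memory stand-in so the sketch is runnable.
type fakeRuntime struct {
	images  map[string]bool
	running map[string]bool
	nextID  int
}

func newFakeRuntime() *fakeRuntime {
	return &fakeRuntime{images: map[string]bool{}, running: map[string]bool{}}
}

func (r *fakeRuntime) PullImage(ref string) error {
	r.images[ref] = true
	return nil
}

func (r *fakeRuntime) RunPodSandbox(name string) (string, error) {
	r.nextID++
	return fmt.Sprintf("pod-%d", r.nextID), nil
}

func (r *fakeRuntime) CreateContainer(podID, image string) (string, error) {
	if !r.images[image] {
		return "", fmt.Errorf("image %q not pulled", image)
	}
	r.nextID++
	return fmt.Sprintf("ctr-%d", r.nextID), nil
}

func (r *fakeRuntime) StartContainer(id string) error {
	r.running[id] = true
	return nil
}

func main() {
	// The same sequence the kubelet drives for every pod:
	// pull, sandbox, create, start.
	var rt Runtime = newFakeRuntime()
	rt.PullImage("registry.example/app:v1")
	pod, _ := rt.RunPodSandbox("web")
	ctr, _ := rt.CreateContainer(pod, "registry.example/app:v1")
	rt.StartContainer(ctr)
	fmt.Println("started", ctr, "in", pod)
}
```

Replacing the dockershim meant swapping which concrete type sits behind that interface, which is why, as mentioned above, the removal "wasn't actually that big a deal" for most users.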
And then secondly, any code base goes through evolutions. We've been refining it, and we're just now coming up on our 2.0 release, which lets us finish the deprecation of a lot of things that were done in the early days where we realized, oh, that's not the best way to do it, or here's a better way. So 2.0 has been our focus over the last year, and we're now in the release candidate cycle for it. That's mostly what containerd is up to. The cool thing is we have a huge, broad community. In the early days, there were some concerns that, oh, containerd is just another Docker project. But today we have maintainers from every major cloud provider, we've got independents; I think 12 to 15 maintainers, and only two or three of them work at the same company. So it's healthy in the sense that I'm happy to see our governance working, and we've got broad interest in the project from a lot of different companies. I think we're in a really good spot.

Yeah, no, that makes sense. Sorry, I didn't mean to interrupt.

That's amazing and wonderful. I wanted to ask one other question. You have been in this space for a really long time. If someone wants to get started with containers, they want to learn and eventually contribute to the space, where can they get started, what can they do, and how can they give back to the open source community or get involved? Being part of the open source community and knowing how to work in public is one of the important things at every step; even at companies, in careers, people are asking: do you know how to collaborate out in public?

Yeah. So containers is kind of a broad space, so there are a lot of potential places, and I do get asked this a good bit, even by people at Amazon: hey, where do I start?
Where do I even figure out what to get involved in? I usually tell people to start by finding out where a project hangs out. That sounds kind of weird, but is there a Slack channel?

I say the same thing.

Yeah, is there a Slack channel? Because I fear that a lot of people just go to GitHub, and you're staring at that list of issues or pull requests, and you don't know anybody. I just feel that's a really tough way to try to get involved; it's sort of transactional, like, I'm creating an issue, now I'm creating a PR, and I've never interacted with anyone. So I really try to encourage people: if that project has a community call, join it. Just listen in. You may not have anything to share, but you'll start to understand: oh, she's always talking about this aspect of the project, and that's what I'm interested in, I should go connect with her; or, he seems to be the key voice of the project, I should figure out how to ask him what they're looking for people to work on. That's what I did a decade ago in the Docker project. I learned who the maintainers were and what their unique areas were: oh, that seems to be a build person, and if I want to learn how the build process works, I should talk to them. So there's that aspect: learning about the community you're interested in and figuring out who's who. The second thing is to find out if they have a process for getting involved. Do they mark good first issues or beginner issues? We do that with our container runtime team here at AWS. We try to curate: we actually have an open source block once a week, and it's, just join this call, and here's a shared doc with ten issues we saw that might be easy to start with.
And you mentioned you started with documentation a long time ago; that's a great spot. Anyone who's gone to an open source project knows the docs tend to be painful: oh wow, they're still recommending that to start building the project you install Go version 1.x, and that's five years old, so I bet they don't really mean that. Things like that, just making it easier for developers to get started. As a new contributor, you have the best view of, wow, your docs really didn't help me in this spot; I felt like you assumed I knew something that I didn't. You're a great person to notice that, because people who are already involved have all these assumptions in their heads from having done it so long. So yeah, those are a couple of the main ways I try to help people get involved. Some projects are better than others, and in containerd we've tried to get better at marking issues with experience levels. But as we all know, those things take time, and you need to encourage your maintainer and reviewer community to help new contributors in that way.

Yeah, it's interesting. For a long time now, actually, whenever I've had teams reporting to me, anybody who's new to the team gets to update all the onboarding docs; that's part of the onboarding process. It kind of forces you to really read it carefully, as well as contribute back all the changes. So I think it's a really good model to encourage. I think it works really well. And the thing is, it's not overwhelming, to be honest.
So if someone doesn't yet know how to create a PR, and they're going to work in public, there's this fear. There might not be a thousand eyes on you, but when you start doing it, it feels like everyone's going to look at my PR, it's out in the open, what are they going to think? So starting with something small, like onboarding docs or documentation, is the easiest way in. You get to know the project, you get to know the process, and there is that success: oh, I have successfully done something, and it landed. Now you have the motivation you need, and you can just grow from there. That's what happened to me, and I think so many people might relate to that; they started somewhere small, like documentation. Yeah, I have to remind people, especially younger engineers who join Amazon and want to work on containers, that I also had that first-PR moment. I was sweating bullets. This was just ten years ago, and I'd been a software developer for twenty years at that point. Somehow it being public, knowing all the maintainers were staring at it, I was refreshing GitHub, hoping I hadn't done something really stupid. Because I had seen maintainers who were more direct, who would say, oh, this is awful, you should have done it this way, which isn't great, but some people are like that. I try to be a little gentler. So I was waiting for that "you did this all wrong" moment myself, and it was exhilarating to get that LGTM and see it merged. Wow, they actually thought it was good. But yeah, everyone has to get through that initial hurdle of it being public and everyone looking at it. Yeah. Yeah, it can be kind of terrifying.
And although I am impressed that your first one was accepted, because I think that's also something to keep in mind: your first one may not be accepted. Yeah. But let me change topics a little, because we're getting close to the end of the time. One of the things I thought was really cool at KubeCon EU last week was a keynote interviewing a lot of the TOC about what they think is going to be new, or still around, in ten years. So I'll put the question to you: what do you think is going to be unchanged, and what's going to be different, in ten years? Yeah, that's a great question. You alluded to it earlier, and I think the fact is that even other aspects of the industry (I'm not sure of the best way to term it), things that aren't traditionally run with a container runtime, have adopted the OCI format for distributing software. Homebrew, I think, is using OCI images now. There's some GitHub feature that people noticed was transferring things using the registry protocol and the OCI image format. So obviously there's something in that protocol and format that is valuable enough that other people are saying, hey, why would I create my own format for moving bits of software around? This really works, this fits my model. So given that, I feel like the core of the OCI model, and the core of those initial container concepts of sharing software around, probably has some good staying power. Now, there have been other technologies that have come and gone, and who knows, life can change quickly. But I feel like we've come upon something that just turns out to be really valuable.
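To illustrate why the format travels so well beyond containers, here is a hedged sketch, in plain Python with no registry involved, of roughly what an OCI image manifest looks like: it's just JSON with well-known media types pointing at digest-addressed blobs, which is why projects like Homebrew can reuse the same shape for non-container artifacts. The blob contents below are made up for the example; the media type strings are the ones the OCI image spec defines.

```python
import hashlib
import json

def digest(data: bytes) -> str:
    """OCI content addresses are 'sha256:<hex>' over the raw bytes."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

# Pretend config and layer blobs; in a real image these would be a JSON
# runtime config and a compressed filesystem tarball.
config_bytes = json.dumps({"architecture": "amd64", "os": "linux"}).encode()
layer_bytes = b"pretend this is a gzipped tar layer"

# The manifest just references blobs by digest and size, typed by mediaType.
manifest = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "config": {
        "mediaType": "application/vnd.oci.image.config.v1+json",
        "digest": digest(config_bytes),
        "size": len(config_bytes),
    },
    "layers": [
        {
            "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
            "digest": digest(layer_bytes),
            "size": len(layer_bytes),
        }
    ],
}

print(json.dumps(manifest, indent=2))
```

Because nothing here is container-specific except the media type strings, other ecosystems can swap in their own `mediaType` values and reuse the same registries and pull protocol, which is roughly the "OCI artifacts" idea Phil is describing.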
And I think that can stick around for a while. What I think may change vastly: we're already seeing inklings of it with Wasm, and obviously everyone's adding AI to everything. How that will change the idea of how we run things, you know, containers are one very specific concept for running a process isolated in a certain way, but Wasm is already bringing new ideas to that. And the cool thing is, I don't know if you saw the Wasm keynote, but it was using containerd while plugging in their own shim interface. So it's not using runc, the program I talked about that knows how to start a container; it uses a Wasm executor that knows how to start a Wasm process. And so I think this idea of pluggability is what we're going to see explode. People who care about the confidential, trusted execution environments some of the chip makers are coming up with are going to find ways in too: it's going to look like a container process, but when it actually runs, it's going to be in some super-controlled memory area of the CPU, maybe bounded by a hypervisor. So I think we'll see tons of explosion in that space, which may take us in new directions for how containers look or how we package them. I'm not always great with the crystal ball, but those are the areas where I see people moving in new directions that may radically change how we think about running at least certain types of applications. Yeah, I was talking with some people at KubeCon about running containers in Wasm.
And in particular, I don't know if you're familiar with Silverblue, but there's a new flavor of Silverblue called Bluefin, targeted at developers, which is actually distributed as a container that you then install on your laptop. What's interesting there is, what if you could run Bluefin in the browser via Wasm? For my students, for example, that would be really interesting, because then I could have a class where I give them a uniform, Linux-based development environment that still runs in their browser. So there's going to be some interesting experimentation there. I would agree with your crystal ball; there's a lot going on there and it's really quite interesting. The other thing I wanted to ask, because of the way we usually do this show, is what do you see in the next six months, related to the projects you're involved with or the next release, that you're most excited about? Yeah. So, I said this earlier, containerd is maybe not the most exciting project, but obviously for us and for the people involved, this 2.0 milestone is huge. One of the most important things we've put on ourselves in the last few years is this: there's a core to containerd that clearly has been very valuable, but we want to make sure everything else is very pluggable. So, the way you can plug in other snapshotters for other file system types, or, like the Wasm folks did, plug in a different kind of shim to drive a different kind of workload.
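The shim pluggability Phil describes is configured, in rough terms, like the sketch below. The `wasm` handler name and the wasmtime `runtime_type` follow the runwasi project's published examples, so treat the exact keys and values as illustrative rather than authoritative, and verify them against your containerd and runwasi versions.

```toml
# Sketch of a containerd CRI config registering a Wasm shim alongside runc.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasm]
  runtime_type = "io.containerd.wasmtime.v1"
```

By convention, containerd resolves a runtime type like `io.containerd.wasmtime.v1` to a shim binary named along the lines of `containerd-shim-wasmtime-v1` on its path, so teaching the same daemon a new workload type comes down to dropping in a different shim binary and pointing a runtime handler at it.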
So 2.0 continues to move us toward that world where we're trying to be extremely pluggable, and containerd can be shaped by other use cases that maybe we haven't even predicted yet. And that's, again, not exciting from a "hey, here's a new AI feature" standpoint. It's more exciting from an infrastructure standpoint: containerd can hopefully be useful to a lot of different groups without them having to come to us and say, hey, containerd doesn't work for me, I need this huge change. Instead, it's, well, here's a plug point, here's a way you can add your capabilities. So yeah, that's exciting for us and for the project in general. Getting to this 2.0 milestone is a big part of the maturity of the project. When you release it, you should consider making part of the press release a demo of plugging an LLM into one of those plug points. That'll sell the release for a lot of people. Questionable value at best, I'm sure, but that would be the idea. Savita, did you have any closing questions or thoughts you wanted to add? No, I just wanted to say that it was so captivating to talk about the container stuff. I learned a lot. I almost forgot about AI until Phil brought it up. It's the top question on everyone's mind: wherever you go, especially at KubeCon, with any product that you use or build, people ask, how can I do AI with it? I totally forgot about that today. It was such an interesting conversation to have; I learned a lot personally. I'm even looking at Bluefin now. I've only seen the logo, and it's a dinosaur, and I really liked it, so I'm like, okay, we'll check it out. It's been a learning episode for myself at least. So thank you for that. Awesome. Yeah.
And speaking of milestones, apparently our 10,000th PR was created in the last hour, by one of our reviewers. It's not a PR we'll be merging, but it's hilarious that we've made it to 10,000. Yeah, I just posted it to the chat. That is pretty awesome. I will say, one thing I thought might be interesting for you: I've had more students talking to me about getting involved in systems in the past two to three months than I ever have before. And I wonder how much of it is related to a defensiveness, in the sense that there have been a lot of issues with hiring and jobs and layoffs. So I wonder if you're going to see an influx of contributors to things like containerd or the Linux kernel, the much lower-level infrastructure, because it is so unusual for people to work on. Just something to keep an eye out for, because I think it's potentially interesting. Yeah, it's interesting you say that, because, and maybe this is just my own lack of faith that people still care about the lower layers of the stack when there's so much interesting stuff happening at the higher layers, but when we get new contributors and people show up and want to work on containerd, it's like, awesome, we'd love to have you. And when we went to KubeCon and had our maintainer track session, the room was packed and they had to turn people away at the door. And it's like, wow, seven years in, people still want to hear the latest on containerd and figure out how to participate. And so that's awesome. Yeah, so it's really cool. Well, thank you so much for the time. We really appreciate you coming on the show. And Savita points out there is a contributing guide for containerd, which I will post in the chat. But I think that's our show, and thank you so much for coming. It was lovely chatting with you. I learned a lot, and thank you, Langdon.
It was one of the cool shows, an episode close to my heart. And since I'm on the show as well, I really thoroughly enjoyed learning about things today. Thank you. Yeah, awesome. Thanks so much for having me. It was a great conversation. All right. See you. All right. Bye.