Hello and welcome to LGTM on Cloud Native TV. Hello and welcome. My name is David Mackay; you may know me better as Rocko. Today is LGTM and we are taking a look at contributing to containerd with the wonderful Phil Estes. Hey Phil, how are you? Great. Good to see you again. How are you? Very well, thank you. Thank you to everyone tuning in. Please feel free to use the chat to ask us questions along the way, and also remember this is a CNCF event and as such is subject to the code of conduct. Please be respectful to myself, to Phil, and to everyone in the chat. Thank you. All right, Phil, can you do us a favor? Just give us a quick introduction about yourself, and then we'll talk about the plan for today. Sure. I guess for today's topic, maybe the most interesting thing is just how I've been active in the container runtime community for six years or so. That started with getting involved in Docker in the early days of the Docker open-source project, working some on the OCI and runc, and then focusing a lot on containerd the last few years. My employer is AWS. I was at IBM for a very long time, but now work at AWS, where we use containerd across quite a few of the services, Fargate and EKS being two of the most notable. So yeah, I've been involved in this part of the community for a long time, and I'm excited to share with folks how containerd is put together as a project and how you get involved. So yeah, thanks for having me. Yeah, we're really excited for today's episode and walking through containerd and how to contribute. That is the purpose of LGTM: to give a home to anyone watching who's thinking, "I'd love to contribute, but I don't know how to get started." That's why we're here. And for containerd, there is no better person than you, Phil. So we're really looking forward to sitting down and getting a nice overview of the different components, the code, the tests, the builds, all of those things.
We're going to pluck them out of your head and share them with our viewers today. Yeah, great. Excellent. So for people that maybe aren't entirely comfortable or confident describing what containerd is, can you give us the high-level overview of what the project is, and maybe upstream adoption, what it's used for, et cetera? Yeah, so like I said, containerd has... I said a couple of years, but I guess officially it's been around for longer; it's been at least five years since the code base started. But I think kind of an important data point there is that containerd had a shift in its life in late 2016 and early 2017, from just being sort of a process supervisor that was used by Docker, by the Docker engine, to manage the lifecycle of runc. And again, that's the Open Container Initiative low-level runtime that executes containers on Linux. So it started its early days as quite a small project that just did this intermediary role between Docker, the broader kind of runtime engine, and runc actually working with the operating system to create your containerized process. But then late in 2016, we announced, along with Docker and other people in the community, that containerd would grow into a more complete container runtime. For example, it would have registry interactions; it would have snapshotters for how your images are stored on the local file system using different copy-on-write file system providers. And so containerd, in that late 2016, early 2017 era, became more than just this process supervisor and really became a container runtime, one focused on just those core capabilities, to fit the niche of those who didn't need the full Docker ecosystem of plugins and volume management and networking. And so containerd initially fit into the announcement of the Kubernetes CRI, the container runtime interface.
And so initially containerd had a CRI plugin that made it kind of the perfect match for being the container runtime for your Kubernetes cluster. And so that same year, 2017, containerd was donated to the CNCF. Many new contributors showed up, and many more cloud providers and other downstream users adopted it or started that process of shifting, for example, their managed Kubernetes services from using Docker to containerd. And of course that took some time, to migrate and for containerd to mature, but effectively you'll find containerd used pretty heavily in cloud managed services and in functions-as-a-service offerings. I know IBM uses it in some of their functions-as-a-service offerings. Alex Ellis has faasd, which embeds containerd. Darren Shepherd and the Rancher team created K3s, which embeds containerd. And so there's really a broad set of consumers of containerd as this core container runtime. In some of those use cases I just mentioned, one of the benefits is it's embeddable. It has a nice Go API that can be used from other Go programs, either as a client or even embedding the entire server like K3s does. And so yeah, we've seen tons of growth in that last three-to-four-year period, both in usage, but also in contributors and people involved in the project. So it's been great. It's been a healthy project. We graduated in the CNCF a couple of years ago. And so: stability, value to the ecosystem, good governance, and contribution from a lot of different parties. Yeah, it's been a great project. Awesome. Yeah, I'm always surprised when people say they're not using containerd, and then when you really look at their stack of what they're using, you realize just how far and wide containerd is. Actually, we're almost all using containerd now if we're running containers with Kubernetes or any of these other serverless tools.
So yeah, I think the success of the project is apparent just by the adoption of the tooling across the board. That's awesome. Yeah, yeah, absolutely. All right, well, today you're gonna give us a quick code walkthrough tour. We're gonna take a look at the different components, I'm gonna jump in with loads of questions as we go, and then we will take a look at the development experience as well. So I will bring your screen up here, and if you wanna just take it away, I'll do my best to ask questions. Yeah, great. So I'm starting in maybe kind of a unique spot. We have a sub-repo called project, because as containerd grew into sub-repos, we needed a common place to put the contributors' guide and governance, the official maintainers list, and security details about how to report security issues. So that's all here in containerd/project. Again, there's not a ton to look at here, but I also thought this would be a good place to jump in and talk about the two kinds of projects you'll find in the containerd list of repositories. Most of the projects within containerd are what we call core projects. The same list of maintainers all have write authority to all those sub-repos, and many of them are vendored into the containerd project. So when you build containerd, there's a certain release or hash of that sub-project that's included in the vendoring. Those are what we call core projects; the same governance applies. But we added, maybe it's been a couple of years now, this idea of non-core sub-projects. These may have additional maintainers. It may be an area that's aligned with containerd but not really core to the project itself. Anyway, there are definitions here, and if you read the whole governance, you'll see the slight variations.
We'll look at some of those, but as you're looking through the repos, I think it's good to understand the difference between what we call the core containerd project and these non-core sub-projects. In fact, just here near the top of our list of repos: stargz-snapshotter, nerdctl, ttrpc-rust. These three projects are non-core sub-projects of containerd. They've been brought in because they're interesting to the project as a whole. And stargz-snapshotter, for example, can be built into containerd for a lazy-pull container image implementation. Maybe some people have heard of nerdctl. This is an interesting project created by one of our maintainers that gives you a more Docker-compatible CLI. So if you don't like the limitations of ctr, which is the containerd client, you can try nerdctl, which includes all kinds of interesting things: it sets up rootless container support for you, it builds in the stargz-snapshotter support, it builds in BuildKit support so you can have Docker build capabilities. So again, these are interesting things that are related to the project but aren't necessarily core. Many of the others... so again, containerd is our main repo that we'll look at most of the time, I think, as we're talking. But of course, there are many pieces that exist around that: console support; cgroups, which is actually used by projects outside of containerd, like our cgroups implementation; we've got our website; ttrpc, which is the lightweight gRPC library, and when we look at the architecture, that's how the shim actually talks to the containerd daemon itself, managing that runc process that I talked about. And again, there are many others here. We've built some release tooling. We have some other non-core capabilities, like the implementation of image encryption. So again, that's kind of what you'll see here. There are, I guess, 24 total repositories.
Again, probably 17 or 18 of those are core to the project, five to ten of them are non-core, and there are also tooling projects and the website. So there are lots of different ways for people to get involved. Like, containerd/containerd is maybe a lot to land in first; you can maybe pick one of those fringe projects and dip your toe in earlier. Yeah, absolutely. nerdctl is interesting because it has a bunch of contributors who maybe aren't all that interested in developing a container runtime, but they find it easier to jump in with, hey, I could implement docker inspect or docker ps. And so nerdctl has quickly grown into a very active project where a lot of different contributors are implementing other pieces of the standard Docker client syntax for containerd. I thought it might be good... that may be too small; let me make that a bit bigger, if that helps. Yeah, so a quick look at a rough architecture diagram may help us as we look at the containerd core main repo. As I talked about, there are all these consumers at the top, whether it's a cloud or a specific tool or capability, and they're probably calling into containerd via various methods. The client options are really to use the Go API; again, you can use standard Go package documentation for containerd/containerd and see all the Go APIs. If you're coming from a Kubernetes context, your kubelet will be calling containerd via the gRPC CRI API, and then plugins within containerd, like the CRI plugin, will be calling the Go API to drive containerd to do the things the kubelet is asking it to do, like start this pod or pull this image. And then we export Prometheus metrics as well out of the engine. At that next level, the core is really what you would assume is the implementation of the container runtime. We've broken it up into various gRPC services for images and namespaces and snapshots and tasks.
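To make that Go API surface concrete, here is a minimal, hedged sketch of driving containerd from a Go program, based on containerd's public client documentation. The socket path, namespace name, container ID, and image reference are all assumptions for illustration, and actually executing this requires a running containerd daemon and root privileges:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to a running containerd daemon over its gRPC Unix socket
	// (default path assumed here; adjust for your setup).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// containerd is multi-tenant: every API call is scoped to a namespace.
	ctx := namespaces.WithNamespace(context.Background(), "example")

	// Pull an image through the registry layer and unpack it into a snapshot.
	image, err := client.Pull(ctx, "docker.io/library/alpine:latest",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create a container backed by a new snapshot, with a generated OCI
	// spec derived from the image config plus our own process args.
	container, err := client.NewContainer(ctx, "demo",
		containerd.WithNewSnapshot("demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image),
			oci.WithProcessArgs("echo", "hello")),
	)
	if err != nil {
		log.Fatal(err)
	}
	// Clean up the container and its snapshot when done.
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)
}
```

This is the same client surface the CRI plugin uses internally, and the with-option style (`WithPullUnpack`, `WithNewSpec`, and so on) is the pattern Phil describes for customizing pulls, snapshots, and the OCI spec.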
And then, of course, each of those services has some metadata associated with it. So BoltDB is used as the metadata store to hold the images and their references and the content and the actual containers themselves. And then at the back end, you have all the various snapshotter implementations — btrfs, devmapper, overlay — and again, that's pluggable. You can even have external snapshotters like the stargz-snapshotter project. And how all this actually talks to runtimes is via the shim client, which is talking to the other side of that shim, which is running runc and actually your containerized process behind that. But again, that's pluggable as well. You can write your own shim for something other than runc. And so there are shims for Firecracker, for lightweight virtualization; Kata Containers, again, lightweight virtualization; Microsoft has runhcs, which runs the Windows containers; Google has gVisor — I can't remember if they actually have a shim or if they just have a runc replacement. But again, this is a pluggable back-end side of containerd. The sky's the limit: if you can implement the shim API, you can drive whatever kind of process isolation you want behind that, simply by having your own shim. So if I haven't overwhelmed people, that's kind of the high-level view. As we look into the actual repo, the layout of packages and directories in containerd will map to this architecture. So if we look at the main containerd repo — again, lots and lots of content here; there will be a quiz — I don't want to bore people by talking about every possible thing. So I think the best way to start thinking about it is that there's a set of files and first-level directories that represent that Go API. And so you can see down here container.go and container checkpoint and options and diff and events and image, image store. These are all... if we go look at the... actually, we should just open that.
It may help people see that more clearly. If I open up the Go doc, we'll see a lot of these same things. We have the containerd client — again, all the ways you can use the client to talk to a running containerd daemon, and various options for that. There are obviously packages for each of the services, so the gRPC services, images, all the options for when you're starting a container. So again, if you're using the Go API, you're going to say "with image" and reference an image, and then there are options on how you want that pulled, various snapshot options. If you want to write your own OCI runc spec with your own options in there, then you can actually pass the spec. So again, the Go API is fairly rich. This is actually how the CRI implementation in containerd uses containerd: it uses the Go API to drive containerd, like creating a new container, creating a new task. So the files in this root directory are mostly the implementation of the Go API. And then one level down, there's a lot of the metadata and implementation: references and the namespaces and metadata service and labels and leases. So again, a lot of the implementation of those services we saw is within those directories. If you came to containerd and said, actually, I'd like to enrich the gRPC API in some way — again, this is defined in protobufs, and so we have these text files here with some documentation. If you wanted to change how the services are actually implemented, you would start by changing an API definition here, and then in our Makefile there are targets to actually rebuild the protobufs. And then of course you have to wire that up to those directories where the actual implementations of those services are in containerd. Some of the snapshotters are built in, and some of them are external to the project.
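The protobuf-first flow just described — edit a definition under the api/ tree, regenerate the stubs from the Makefile, then wire the change into the service implementations — looks roughly like this sketch. It is loosely modeled on containerd's simple version service; treat the exact package path and field layout as illustrative rather than a copy of the real file:

```proto
// Illustrative sketch only -- the real definitions live under
// api/services/... in the containerd repo.
syntax = "proto3";

package containerd.services.version.v1;

import "google/protobuf/empty.proto";

service Version {
  // After editing a definition like this, regenerate the Go stubs
  // via the Makefile's protobuf target, then update the matching
  // service implementation directory to wire up the change.
  rpc Version(google.protobuf.Empty) returns (VersionResponse);
}

message VersionResponse {
  string version = 1;
  string revision = 2;
}
```

The point is the division of labor: the .proto files define the gRPC contract, the generated code is rebuilt by a make target, and the hand-written service packages provide the behavior.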
So again, if you're looking for the overlay snapshotter, it's built in, but some of the others are actual repos within the broader containerd org. There are a lot of nice helpers, so anything to do with OCI: obviously, when you start a container in containerd, you'll get a default spec with whatever... if you're using ctr or nerdctl, you specify a volume mount, and obviously at some point that will end up generating an entry in the OCI spec. All of that's implemented here in the oci subdirectory for interacting with runc. What else is interesting? Under the package directory... so one of the interesting things is that this year we changed CRI from being a totally separate sub-repo within the containerd org, and we migrated and merged it into the containerd code base itself. We were doing a lot of this iterative vendoring: you fix something in CRI, then you fix what it's using in containerd, and then you have to re-vendor CRI back into containerd to make a build and then release it. And so we hope this helps people develop and use the CRI — if you're a developer, it enables quicker iteration on changes to the CRI. And so you can see the cri subdirectory here. This is most of the implementation of that CRI API from the kubelet. And again, if we look in server, here's a container create. So if you're using containerd as your kubelet's runtime, the CRI call to create a container will come through here. And if we look at this, it's actually using containerd's API to do that container create. And so that's the linkage between CRI and containerd, being used via the Go API. So that's a fairly high-level overview of the layout of the code. I'm trying to think if there's anything else worth digging into. But again, it's a big project; there is quite a bit of code here. But most people find that you're not making a change that crosses this entire repo. I personally am not necessarily an expert on our snapshotters.
There are other people who are. And so if you looked in the snapshotter projects and directories, you'd find very few changes from me. I've been focused more on other parts of the engine. So that's totally fine as well: contributors can have a focus area, an area where they feel more comfortable, and we have plenty of contributors that cover the code base. Nice. I think I'll try and summarize that in ten seconds as best I can, even though there was a whole lot of information there. If you're coming to the project and you want to make a change to the API, the first place to start would either be the protobuf files, which have the descriptions, or those Go files in the top-level directory, which map to the API. And then you've got a nice clean directory structure with sub-directories for all the different components that those APIs have to interact with. Did I get anything wrong? Yeah, no, that's great. All right. Awesome. Thank you very much for that. Shall we... do you have a clone locally, so we can go through the development experience, the build process, and take a look at how we can get this running? Yeah. Yeah. So just like any other project, your starting point is to clone the containerd repo and get it set up in some local environment. I guess it's probably good to mention we have a BUILDING.md entry here, just talking about setting up the dev environment. Actually, today, other than installing Go and potentially installing the btrfs headers and library for your Linux distro, there are really very few prereqs that would be very difficult. In fact, this whole section on installing the protobuf compiler — if you're never going to change the API, you actually don't even need the protobuf compiler installed.
So yeah, the other part of it is runc. If you don't have runc installed on your system — which, again, is almost hard to do today, because most distros will install some container runtime components that will pull in a reasonable version of runc — but were that not the case, you would want to clone the runc repository and run some fairly straightforward commands to install runc. And a little shout-out: containerd used to care a lot about which version of runc you installed. You could look at our vendored go.mod file, find the right release tag, and build that. But runc is currently voting on the v1.0.0 release, so runc is finally going to be 1.0 final, and any reasonable 1.0 install of runc should work fine with containerd. There's less of this interrelationship between versions of runc and versions of containerd that you have to worry about anymore. But again, if you're interested in the exact version we build, we actually have created a new file — which I've just blanked on where we have that — but what we try to do is separate out vendoring from which version we build for CI, because those things don't absolutely have to be linked anymore. But if you do look at our go.mod — and again, we use go.mod vendoring; we finally went through the pain of switching to go.mod and getting all our vendoring over — we do have a little bit of a complex replace rule set up here. So if you're going to vendor containerd, you need to also do these same replacements. And there are some tricks here with an empty .mod, which you can go read the PR about; people much more skilled in the art of go.mod set that up. But again, runc here, you can see we're using v1.0.0-rc95. So that was a little bit of a roundabout description of one of the prereqs. Once you have runc and the btrfs headers and some reasonable version of Go — I have 1.16.5 here.
And then the only other comment I was going to make is that, believe it or not, containerd builds on macOS — like, natively, not in a container. And there are people working on — there are a couple of open issues and I think even a PR about using some macOS kernel capabilities to actually run containers. So you can't run containers on Mac, but you can build the project, and we even run CI: on every PR, CI makes sure the build isn't broken. And I even want to say we run the unit tests there; I'm not sure if I'm right about that, we can go look. So yeah, mostly you're going to want to be on Linux, but hey, if you want to build on Mac, you can do that. I had no idea that was possible. That actually makes things a lot easier for developers that are working on a Mac, so thanks. Yeah, and again, Akihiro, who wrote nerdctl — many people know him; he did a lot of the work on rootless containers along with folks from CRI-O and Red Hat — he has a new project, I want to say it's called Lima. He's aiming, with nerdctl, containerd, and Lima, to get to the point of a Docker Desktop-like experience on Mac. It starts a small Linux VM, so you can actually use nerdctl as the client on Mac, driving the embedded Linux VM, similar to how Docker Desktop works. So if you're interested, that's an interesting new project to play with as well. But back to Linux: we're here at our command line. I've checked out the project, I have runc, I have all the necessary prerequisites. Probably the most interesting easy thing to do as far as building is make binaries. That's going to build ctr, which is, again, the simple client — which, if you read through our README, we say is unsupported. We mean that in the sense that ctr is not part of that API contract that we offer in containerd; it's simply a sort of nice admin-type tool. nerdctl is much more feature-rich now, but again, ctr is there. containerd is the daemon itself.
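The build loop described here can be sketched as a few shell commands — a hedged outline based on the BUILDING.md flow as discussed, not a verbatim copy of it. The apt package name for the btrfs headers is a Debian/Ubuntu assumption, and the integration target needs root, runc, and network access:

```shell
# Prereqs (Debian/Ubuntu assumed): a Go toolchain plus the btrfs
# headers; the protobuf compiler is only needed if you change the API.
sudo apt-get install -y libbtrfs-dev

git clone https://github.com/containerd/containerd.git
cd containerd

# Build ctr, containerd, the stress tool, and the shims into ./bin
make binaries

# Unit tests are quick; integration starts its own containerd on a
# private socket, so it needs root, runc, and registry access.
make test
sudo make integration
```

Running `make binaries` first is a good smoke test that your toolchain is set up correctly before you touch any code.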
We build a stress tool — that's just an interesting use case for trying to test containerd 24/7, just running containers, tasks, image pulls. And one of the maintainers has a live system constantly running that on every commit. And then there are three shims. I won't spend a ton of time here, but the shim API has matured over the four or five years that the project's been around, and we're now on the v2 version of the shim API, with two modern shims for it: runc v1 and runc v2. Some of these enhancements to the shim API came about because of users like Kata Containers and others who needed a richer API for that management of containerized processes. Obviously you can think of lightweight virtualization as needing more management metadata about that VM — things that aren't necessarily part of, say, the runc spec. And so again, the shim API has matured to support those use cases. And so we currently build the old legacy shim and then the two modern shims. Again, if you were going to play with Firecracker or Kata, you could go download those projects, they will build their shims, you could install them, and then this containerd that we just built could use those other shims that are built out of tree in those projects. So yeah, building does not take long; as you saw, it's just a minute or two. Pretty fast. Yeah, now the fun part is, say I want to play around with this version. If you have installed Docker, like I have on this VM — Docker also uses containerd; I think most people know that, but I didn't say that in the opening. In the old packaging, Docker actually delivered containerd. Now most of the Linux distributions have their own containerd package, and Docker has its own package and simply depends on the containerd service running, most likely through systemd, on your machine.
So because I have Docker running, and it's already using the installed containerd, I usually play tricks in my environment to either shut down Docker, replace containerd, or point to a containerd in /usr/local. There are also things you can do like starting containerd listening on a different Unix socket. And so then I could basically run this containerd even while Docker is depending on my "system containerd," if you want to call it that. And then when I test it or run ctr commands, I can simply point at that Unix socket. So actually, I wanted to point to this one. What I was talking about is that when I run containerd, I can set the address for the gRPC server; that defaults to a containerd.sock in this root-owned directory. So this is my system-level containerd running and listening on this socket. So if I want to run the one I just built, I can pass an address and run containerd on some other socket. So if we now go look down in /run/containerd, there should be... what did I do? containerd-private.sock. Yeah, so, more fun: it's still reading my containerd config. Yeah, it's still reading this config, so I would also need to create another config, obviously, just like I did with a different socket, and I would need to set the gRPC address to be different from the default. So again, one way, if I'm happy just running the test... I think I'm actually stuck in a gRPC timeout trying to contend with that other socket, because it ignored the address on my command, so we can fix that. You were far too nice there, because you didn't do a dash-nine; I would have just shot it in the head.
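The second-daemon trick being described — a separate config with its own socket, root, and state — can be sketched as a config fragment. Every path below is made up for illustration; `containerd config default` will print the full set of fields for your build:

```toml
# /tmp/containerd-dev/config.toml -- hypothetical paths throughout.
# Keep root and state separate from the system containerd as well,
# not just the socket, so the two daemons don't share metadata.
root = "/tmp/containerd-dev/root"
state = "/tmp/containerd-dev/state"

[grpc]
  address = "/tmp/containerd-dev/containerd.sock"
```

Then, sketching the invocation: start the freshly built daemon with `sudo ./bin/containerd --config /tmp/containerd-dev/config.toml`, and point the client at it with `ctr --address /tmp/containerd-dev/containerd.sock version`, leaving the system containerd (and Docker) untouched.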
Yeah, so the nice thing is the test suite does this for you: the integration part of the test suite will start its own containerd on its own socket with its own config. And so the nice thing is, if I run make test, I'm not going to have a problem with this weird interaction with the system-level containerd. And again, this will probably take a good while; we can leave it running for a minute if there are other questions or things we want to poke into. So again, that's your basic dev environment: make binaries, make test, make integration. I'll give you a chance to tell me what you'd like to poke at next, but it may also be worth noting that our CI is set up to use GitHub Actions. And so for every... actually, that's painful to look at, the YAML. We should just look at a PR; we'll look at one of my PRs. And what's painful about looking at YAML? So this kind of gives you a feel for what's going to happen — and again, maybe I'm jumping ahead; if we're going to do an issue and a pull request, we'll get back to this — but as I mentioned, we're cross-building to make sure the build is working across a number of architectures. My colleague Sam Karp just added FreeBSD support recently, and that's still maturing; it has a runc-like replacement called runj, which is its own separate project. So again, we're linting across our various OSes, we're cross-building across various OS and CPU architecture pairs, we're then building all the binaries, and then running integration — and I was right, we do run the unit tests on Mac — Linux integration with a matrix of those different shims, and also running against crun, which is a replacement for runc, written in C, that Red Hat created. So again, this is kind of what happens when you create a PR: all these steps are going to happen in GitHub Actions to validate that your change isn't breaking a set
of architectures, a set of operating systems, and then running the tests. Okay, so let me clarify a few things there. So the feedback loop for a new contributor coming to the project: they clone the project like they do with any other Git repo. That's right. A pretty solid way to start, I guess, would just be running make binaries, which will ensure that you have the correct toolchain and everything you need to actually build the project. So if you just get that out of the way, I guess it's going to save them a lot of time. And if you do run into errors, the chances are you need the btrfs headers, potentially the protobuf compiler, and I think that was it, actually. So maybe those two. Yeah, yeah, just the Go toolchain and those two things. Yeah, it's become such an easy question these days when you ask someone how to build a Go project. If you go back just 18 months, it was like, well, which vendoring tool or which dependency tool are they using? But go mod really has kind of taken over, and it's great to see that a lot of these bigger projects are adopting it as well. So just running a go build should, for anyone coming to this who's got at least Go 1.13, I think it is, just pull everything down. Yeah. Is there a minimum Go dependency for containerd, or is it not too firm? Yeah, I think we've said 1.13. I don't know if our development main branch... so we just released 1.5 recently. I guess why I'm hesitating is I'm not sure if there's anything in the current development branch that's using... so for example, Go has the errors package, and we started using errors.Is, I think, and I don't remember which Go release that came in, but it might be higher than 1.13. But that's a great first PR for someone: if you want to, update our README if we're wrong about our minimum version. Nice. I also forgot to mention — you pointed out there was a BUILDING.md file in the root of
the repository, so people should definitely run through that as well. And yep, there were two other make targets that you mentioned: there was make test, which already ran and looked pretty quick — is that just running the unit tests? — and I think you said there was a make integration as well. Yeah, so make integration is what's going to actually start a containerd instance. So that's not going to work on the Mac, right? That's right, yeah. So this needs runc to be installed, because obviously it's going to be starting and stopping containers. It needs root, because it's going to start the containerd daemon. And you're going to need an internet connection — and hope the container registries we use during integration are not having downtime or an outage — because it'll be pulling images, you know, some basic images to run all the tests. So we have a bit of flakiness sometimes in CI, because we're doing a lot of registry interactions. Yeah. But yeah, I guess if you're making your first contribution to the project, you'll know which parts of the system you're trying to change; the unit tests may be enough to get that PR up, and if you're doing anything that modifies, say, container creation, the integration tests are probably quite a good thing to run as well. Yeah, absolutely. Yep. Okay, so maybe we could take a look at the pull request format. I don't know if you have a trivial issue you'd like us to work on, or you want to just run through a pull request — it's up to you — but if we could just take a look at the template and talk about some of the conventions that containerd uses there, too. Yeah, so let's look at issues for a second. I don't think we have a template for PRs, but we definitely formalized our issue template over the last couple of years. We have enabled GitHub Discussions in the last year, and so this is kind of a nice way to keep people from opening an
issue that's just like a general question and we also added a link to try and get people to join cnc of slack and point out that the container d and container d dev channels exist there for people's questions a little more interactive way to to talk to community members um the only distinction there in the channel names um container d we see is like anybody end users you're playing around you're trying it out container d dash dev we um we uh we assume that someone who's interested in maybe contributing or has a question about how it's built or you know it's trying to extend it in some way or use it in their project uh again you know we find if people uh mix that up from time to time but that's kind of the the split of the channels we uh also uh formalized our security policy so I'd shown some of that and so this is kind of nice because we can link directly to that um and so what's left is again just straightforward templates for I found the bug which again looks like a lot of other templates out there what would you do what results what did you expect and then we ask you know for some output of version you know show us if it's relevant your run c version your cri configuration what kernel you're on um and then we we've we've tried to toss in some helpful um you know wait you know if container d's hung can you provide us a stack trace uh by you know follow these commands um so yeah fairly straightforward uh people that follow this uh get a lot more help because if they don't do this um you know our first response usually is can you provide you know version details etc so as usual as with most most projects we love people who kind of follow the the format and give us as much detail as possible the other um template is just you know I want container to do container d to do this new thing um and so it gives you a chance to describe that and provide context what you're trying to accomplish um so I think we can see go ahead yeah I was just curious you know if I'm coming 
to the project and I've got a great idea for a new feature, is opening an issue to start a discussion the best way? Is having a pull request with a proof of concept the best way? Is there an RFC process? I guess for smaller changes that's not important, but maybe for larger ones it is. Yeah, so we have not ever felt the need for a full, formal proposal process for new ideas or new features. Obviously it can be really helpful, if it's not a minor thing, to just join one of those channels, containerd-dev obviously if we're talking about new feature implementation, and just pose it there, because you may find out that the maintainers have already thought about it, or there's already a POC, or there's something they didn't find as they were looking. But opening an issue is definitely a reasonable alternative or next step. For example, one of our reviewers just added this a few days ago: should we add health checks like the Docker engine has? How would we do that? What are the pros and cons? That's something he had already chatted about with the maintainers: do you all think this is something worth considering? And we're like, yeah, open it. The template provides a feature label, and then we can potentially add more labels, like windows or other interesting labels, on the issues. Do you have any labels that you're keen for people to look out for, for new or simpler issues? Yeah, so we've done maybe a poorer job than we'd like of marking things with, like, experience beginner, expert, intermediate, and help wanted. These were labels we created to try, especially in the early days, to help people understand where they could fit in. So we've just started to try and have some more community calls, a live Zoom call, and hopefully we can get some folks interested in kind of helping us triage issues. Like every open source project, it can be difficult to get enough time to do this kind of grooming of the issue list. It just sounds like, if someone new is coming in and they can navigate the issues and find something that's relatively simple for them to pick up, the community calls might be a good way to start a discussion, or the containerd-dev channel on the CNCF Slack: just say "hey, I want to contribute, I don't know how to start", and hopefully someone there will help you. Yeah, yeah, absolutely. Okay, so what about the pull request process itself? When I open and create one, is there anything I need to know there that's different? Are there any slash commands? Do I have to assign an issue to anyone? Or do I just open it and hope for the best? Yeah, so my workflow is that I tend not to really use this "new pull request" button. I mean, obviously you can use this and select a branch and prepare changes, but in my workflow, when I have something I want to do, I'm just going to check out a new branch and fix something. I'm going to go edit that file. Maybe it's... that's interesting, except 1.14. Yeah, that is interesting: 1.13 is okay, 1.15 is okay, but no, you cannot use 1.14 for what we're doing. I think that was when there was that issue with signaling, signals interrupting writes and reads, which potentially needed to handle the interrupt. Anyway, let's say we found out that we needed 1.15, and now I guess we can leave it. So I've made this change, and I'm going to commit it. Is there any format for the commit message? Is it, say, Conventional Commits? So, we do have project checks that run early in those GitHub Actions. The format we want is the standard validation that a lot of other projects use; Docker uses this, runc uses it. And so it expects a sort
of title, I guess we'll call it, up to 75 characters, then a blank line, then some kind of description: "we now need 1.15.x". And then you need a Signed-off-by, and what we would like you to do is use your real name. Some people put their GitHub ID here, but for DCO compliance we'd like people to use their real name and then, obviously, their email. So this is the format that we expect, and we'll fail CI if you don't follow it: if there's no Signed-off-by line, or if this is all on one huge line, it'll error out in CI. And so now I can push this. I tend to just push to my fork of containerd, and then if I go to containerd, GitHub has this nice feature where it's like, oh, you just pushed something, do you want to open a pull request? Yes, I do. And the nice thing is, you'll see that it's using that title line of my commit as the title of the PR, and everything else is put inside the first comment. And again, there's the diff of my change, and obviously I can create it. I guess I'm not going to, because I have no idea if that's really true. But what's going to happen is that a few things happen automatically. First, sadly, because of crypto mining, if you've never contributed to containerd or any of these repos, it will not run CI until one of us with commit access to the repo clicks a button that authorizes CI to run. The other thing is that, because the CRI project is now integrated into containerd, we have two other things outside of GitHub Actions happening on every PR. One is that we're actually running the end-to-end tests for containerd with Kubernetes. We're also running arm64: because GitHub Actions doesn't have integrated support for arm64 yet, OpenLab runs the arm64 builds, tests, and integration.
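The commit-message convention just described (a short title of up to 75 characters, a blank line, a description, and a Signed-off-by trailer with a real name and email) can be sketched as a small shell check. To be clear, this is not containerd's actual CI validation script, and the sample message is invented for illustration; it only mirrors the rules mentioned in the conversation.

```shell
#!/bin/sh
# A sketch of the three commit-message rules discussed above,
# run against a made-up sample message. Each passing check
# prints a confirmation line.
msg='Require Go 1.15.x at minimum

We now need 1.15.x because of the signal-handling issue in 1.14.

Signed-off-by: Jane Developer <jane@example.com>'

title=$(printf '%s\n' "$msg" | sed -n 1p)   # first line = title
line2=$(printf '%s\n' "$msg" | sed -n 2p)   # must be blank

# Title length limit (75 characters, per the discussion)
[ "${#title}" -le 75 ] && echo "title length ok"

# Blank line separating title from description
[ -z "$line2" ] && echo "blank line after title ok"

# Signed-off-by trailer with a name and an email address
printf '%s\n' "$msg" | grep -Eq '^Signed-off-by: .+ <.+@.+>$' \
    && echo "sign-off present ok"
```

Running it prints one confirmation per rule; drop the Signed-off-by line or stretch the title past 75 characters and the corresponding echo is skipped, which is roughly what the real CI check would flag.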
Sadly, that didn't work here; OpenLab was having an issue when I opened this PR, so it didn't run. But those things are going to happen, and again, if you're a new contributor, it's going to force one of the containerd members to authorize the end-to-end tests to run. And there are a few slash commands that the robot will comment and show you, if you don't know what you're doing, or if a test fails and you want to re-execute it. Unlike Kubernetes, which has a lot of interesting integration, there aren't a ton of slash commands or robots operating here, other than this one that runs the end-to-end tests, and we don't have auto-merge or any of those other things. So at that point, most of what you're going to want to care about is that you haven't broken CI, and if you don't understand why something failed, just comment and ask: "hey, this doesn't look related to my PR, can someone help me figure out why this didn't work?" And then, yeah, you're looking for two different maintainers or reviewers to LGTM your PR, and then it will be merged. We may add labels, like "hey, this is a really important bug fix, it should be cherry-picked back to an existing release", and so maintainers will add those labels and may ask you, and even give you the commands, like "can you please cherry-pick this commit against the 1.4 or 1.5 release branch?" Thanks. Awesome, so yeah, I think that's pretty much it. Yeah, that's perfect. Awesome, thank you very much. All right, let's get back over here. So that is our whirlwind guide to contributing to containerd. I hope you all got a lot of useful information there. I think the really important bits to take home are these: if you do want to contribute to containerd, be involved in the CNCF Slack, in the containerd and containerd-dev channels; I'm sure there'll be lots of interesting and helpful people there willing to
help you out. The issue queue is there: get involved, open issues wherever possible. If it's anything bigger, maybe open a discussion first and try to get some people to discuss the idea, to make sure it makes sense for the project before taking it forward from there. As far as building and testing goes, there are Makefile targets for everything, so it should hopefully be nice and simple. And that's it: have fun contributing to containerd. Phil, thank you so much for joining us today and walking us through that; there was a lot of knowledge there to be shared, and I hope everyone finds the issues as exciting. Cool, yeah, thanks for having me. Awesome, thank you very much. All right, what time is it? In one hour, Pop will be here with Spotlight, and they will be doing a Sigstore root key ceremony, so come and check that one out. Phil, thank you again; I will speak to you soon. Have a great day. Thanks. Bye. Bye, everybody.
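To recap the episode's git workflow in runnable form, here is a hedged sketch that uses a throwaway local repository in place of a real containerd fork, so it runs anywhere. The branch names, file contents, and commit message are invented for illustration; with a real checkout you would also run the repository's Makefile targets (make, make test, and, with root, make integration) before pushing, as discussed earlier in the show.

```shell
#!/bin/sh
# Sketch of the workflow from the episode, against a throwaway repo.
set -e
work=$(mktemp -d)
cd "$work"
git init -q demo
cd demo
git config user.name "Jane Developer"     # DCO wants a real name
git config user.email "jane@example.com"
echo "module demo" > go.mod
git add go.mod
git commit -q -m "Initial commit"

# Check out a topic branch for the fix, rather than using the web UI
git checkout -q -b bump-go-version
printf 'module demo\n\ngo 1.15\n' > go.mod
git add go.mod
# -s appends the Signed-off-by trailer that the CI check requires
git commit -q -s -m "Require Go 1.15.x at minimum

1.13 and 1.15 are fine, but 1.14 has the signal-handling issue."
fix_sha=$(git rev-parse HEAD)

# A maintainer may later ask you to cherry-pick the fix onto a
# release branch; -x records the original commit SHA in the message
git checkout -q -b release-1.5 HEAD~1
git cherry-pick -x "$fix_sha" > /dev/null
git log -1 --format=%B
```

The final log output shows the title line (which GitHub would reuse as the PR title), the Signed-off-by trailer, and the "(cherry picked from commit ...)" note added by -x.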