So, with this talk, we hope you can get a good understanding of how the contribution workflow works on the Diego team, as well as on other Cloud Foundry projects, and how the engineering team engages with the community to deal with issues and pull requests and that kind of thing. We also hope you get encouraged to get involved and start contributing to Diego or any other Cloud Foundry project, if you aren't doing that already. We'll do that by giving you a general overview of how the process works, with some background. We'll show a walkthrough of how a simple contribution makes its way from an issue to a pull request and finally gets merged into Diego. And lastly, we'll give you some quick tips about how to go about sending a contribution, getting it accepted, and preventing it from getting rejected, basically.

So, some background. A pull request, if you're not familiar, is the term on GitHub for a request to merge changes into a project. It's a great way to collaborate because it's basically a conversation starter: you can comment, and multiple people can review the same change set together.

Diego is first and foremost a team within the bigger Cloud Foundry project. It's the team that maintains the Diego project, which is the container scheduler and orchestrator within Cloud Foundry. The full description of Diego is outside the scope of this talk. The team is composed of four and a half pairs, as we like to say, meaning nine engineers plus one product manager. Those engineers come from a variety of companies. Like I said, I'm from Pivotal; Jen here is from HPE; we have a few engineers from IBM on the team; and we've had folks from SAP, GE, and other companies on the team before as well. So it's quite diverse in that way. Those developers work full-time on the project and are responsible not only for pushing the project forward and developing its features, but also for engaging with the community and dealing with issues and PRs. The team rotates fairly frequently. Like I said, I was on the Diego team and I'm no longer on it; within Cloud Foundry we rotate every few months, sometimes a little longer than that.

So, to get involved and get in contact with the team, there are a few things you can do. You can open a GitHub issue if you have a bug report and know the steps to reproduce it, or if you have a suggestion that's a little more consolidated and you want to propose it to a specific project within Diego or Cloud Foundry. Or, if you just want to have a conversation, or you have a question about how to get started, you can join us on our Slack channel at cloudfoundry.slack.com. Each day we have a designated interrupt pair. What that means is that in the morning we assign one of the pairs to be responsible for watching Slack. At the top of our Slack channel, in the topic line, there will be a couple of names listed; those are the people responsible for answering the community on Slack that day. So you can come by and ask any question you might have. And lastly, if you have a longer conversation that needs more eyes, maybe not with only one team but a larger Cloud Foundry discussion, you can use the cf-dev mailing list to get that conversation started. So let's get to the walkthrough.
So Jen will play a developer who's not on the Diego core team, so she doesn't have commit rights. She noticed an opportunity to improve one of the code bases, and she wants to contribute that change. And I will play a developer on the core team who will assist her with that contribution, and I'll be responsible for community for the day. The same way we have an interrupt pair, we also assign one pair to deal with community. What that means is we'll look at issues, pull requests, and interactions from the community at the beginning of the day, to be responsive to the community and make sure those contributions are making their way through efficiently.

So, like Luan said, I'm an outside contributor. I'm poking around Diego a little bit, trying to understand the code a little better. In this case I'm taking a look at the executor code base. We have lots of separate code bases inside Diego, and we'll talk about that in a little bit. But I'm poking around, taking a look, trying to get a little involved, and I notice an opportunity for a bit of a refactor. Before I dive in, actually start working on it, and submit the pull request, I want to first verify with the team that they agree this is an appropriate refactor. This can save you a lot of time, because if you run all the tests, do all the work to make the PR, submit it, and then it turns out the Diego team doesn't agree with the direction, or it conflicts with other work in progress, that's a lot of wasted effort on your part, and it's a lot of frustration for everyone involved. So in this case, I take a look, I notice an opportunity for improvement, I create an issue, and I say: hey, I noticed this; is it okay if I go ahead and fix it?

All right, so Jen sends the issue. And I, as the pair responsible for community interaction that day, will notice that a story gets created. We have a thing called Gitbot that watches our Git repos and creates stories in our Pivotal Tracker, so we have more visibility into the state of things and what's going on there. So a story gets created in the community backlog, and then we'll look at it and respond as quickly as possible. In this case, I'm just telling Jen that I agree with the change and giving her the thumbs up to go ahead and do it.

So, just a quick note about how our repos are structured in Diego. We have one repo called diego-release. This is the parent repository of all of our other microservices: it serves as our BOSH release, and it's also a GOPATH. Each of our other sub-components is a submodule inside the diego-release repo, so all of our Go code is generally in one of these submodules. That means most of the time you're not going to be submitting pull requests to diego-release directly; you're going to be submitting a pull request to one of these submodules. And then later, once the pull request gets accepted, someone from the Diego team will make a commit to diego-release to bump the submodule, meaning update the SHA of that submodule to match what you just submitted. So in general, you won't need to make PRs to diego-release unless you're doing something related to the BOSH release itself. If you're changing the way the manifests are generated or the way the jobs are deployed, that's stuff in diego-release; but if you're changing Go code, that's not going to be in diego-release itself.
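For reference, a submodule bump on the core team's side would look roughly like this; the submodule path and commit message here are illustrative, not the team's exact convention:

    # Inside a checkout of diego-release, point the submodule
    # at the commit that was just merged
    cd src/github.com/cloudfoundry-incubator/executor   # illustrative path
    git fetch origin
    git checkout <merged-commit-sha>

    # Back at the top of diego-release, record the new submodule SHA
    cd -
    git add src/github.com/cloudfoundry-incubator/executor
    git commit -m "Bump executor submodule"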
So, we're going to be playing some videos as we go. You don't have to jot down the details of exactly what's being run here: all the instructions are in this document right here, the contributing document, where we list the exact steps you need to set up your environment and things like that.

So what we're going to do now is set up my environment. I got the okay from Luan to go ahead and make this fix, but I have a brand new MacBook; I've never worked on Diego before. I need to actually go set up BOSH Lite and configure my environment so I can work on Diego.

The very first thing we do is set up BOSH Lite. Some of you might be familiar with BOSH; it's the orchestration tool we use in Cloud Foundry. BOSH lets us deploy to multiple different IaaSes, so you can deploy to AWS and Google Cloud and all of those. You can also deploy to a local VM called BOSH Lite, and that's what we do for all of our testing. So we have a real, live Cloud Foundry and a real Diego; it's just running in individual containers in a VM on your local machine. We use this for running our basic acceptance tests and things like that. So here, we're just cloning the bosh-lite repository and running vagrant up to get our local BOSH Lite up and running.

The next thing we do is pull down cf-release, because diego-release depends on Cloud Foundry. cf-release is a repo that's pretty similar to diego-release in that it's made up of lots of submodules; it actually has even more submodules than we do, so there's a lot to bring down. When you bring it down, there's a script in there called scripts/update. Running it basically does a git pull and initializes and updates all the submodules. Once you actually have the CF code, we do a bosh create release and a bosh upload release to get that code all set up on the BOSH Lite box itself. You can see here we've sped the process up a bit; it isn't actually this fast, as you'll find out when you try it yourself. But it should be pretty simple to do; it just takes a little while. I'm going to skip the rest of it for now.

The next step is to actually get diego-release. Like I said, Diego depends on CF being there, but we obviously need the diego-release code as well. So now we pull down diego-release and do the same thing: clone it, pull down all the submodules, and do a bosh create release and a bosh upload release. Again, all of these instructions are laid out in the contributing documentation.
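For reference, the environment setup just narrated boils down to something like the following; the repo URLs are from memory, and the contributing document has the canonical steps:

    # BOSH Lite: a local Vagrant VM to deploy CF and Diego into
    git clone https://github.com/cloudfoundry/bosh-lite.git
    cd bosh-lite && vagrant up

    # cf-release: clone, then initialize and update all submodules
    git clone https://github.com/cloudfoundry/cf-release.git
    cd cf-release && ./scripts/update
    bosh create release && bosh upload release

    # diego-release: same pattern
    git clone https://github.com/cloudfoundry/diego-release.git
    cd diego-release && ./scripts/update
    bosh create release && bosh upload release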
The next step is to actually deploy the releases we just uploaded to BOSH Lite. What we've done so far is produce some tarballs that contain the source code for cf-release and for diego-release, but nothing has actually been deployed to our BOSH Lite yet; they're just tarballs sitting there, waiting for us to use them. So now we're going to generate the manifest, the deployment manifest for cf-release. That's the thing that describes how this Cloud Foundry is supposed to be configured: what properties it has, what VM sizes it needs, and so on. The cf-release repo and the diego-release repo both have scripts you can use to generate a BOSH manifest that's specific to BOSH Lite, so you don't need to tweak anything yourself; it's already configured to be sized correctly and have the right properties for BOSH Lite. So we run the script, generate the BOSH Lite dev manifest, do a bosh deploy, and we get CF running on our BOSH Lite.

And then we do the same thing with Diego. Diego actually requires not only cf-release and diego-release; it also requires either a Garden Linux release or Garden-runC (those are the two options we have right now for the actual container runtime), plus a rootfs release and either an etcd release or a SQL release. There are a bunch of different components that all come together when you deploy Diego, and in one of the previous steps we also had to download those tarballs and put them onto our BOSH Lite as well.

Okay, so our BOSH Lite is all up and running. We didn't do it in this case, but we could verify that the BOSH Lite is working correctly by doing a simple cf push, making sure it's actually using Diego, and doing some basic verification.

We also have a big suite of unit tests. So what we're doing right now, before I even start coding, is making sure my environment is clean and everything is working right: I'm running our unit tests. Our unit tests run directly on your host machine, your development machine; in this case it's a Mac, so they're running directly on my Mac. The unit tests are pretty well isolated and they run pretty quickly.

We also have another test suite called Inigo. Inigo runs more integration-style tests: they test that when multiple of our microservices are deployed and running together, they can talk to each other. It's not a full BOSH deployment, so it's not really a real Diego or a real CF; it's just testing that if we have this component running and that component running, they should still be able to talk to each other, even when not deployed on BOSH, and we do some basic verification there. We run these in a container on Concourse. Some of you may be familiar with Concourse as a CI (continuous integration) tool; we use Concourse here just as a convenient way to run a containerized job locally. We can run it either locally or on our actual team CI pipeline: if I don't want to wait for it to run on my machine, in our case we can run it on our AWS-deployed Concourse. But as an outside contributor, you're most likely going to have a local Concourse that you run this against. The steps for setting up Concourse are, again, in that documentation with everything else. It's also a Vagrant box, so you just bring down Concourse Lite, run vagrant up, and then you have a locally running Concourse and you can run these tests.

Now, on to our third suite of tests that run in your local environment: the acceptance tests, also called CATS. These are what we need BOSH Lite for. They actually verify that CF is working correctly from an end-user perspective: these tests use the CF CLI directly and make sure the basic scenarios all work end to end. This is why we needed to deploy BOSH Lite in the first place. And again, we want to make sure everything passes these acceptance tests locally before I actually start doing development, because if I start developing and then things are broken, I won't know whether I messed it up or it was messed up to begin with. So we just want to make sure I have a good, clean environment, especially the first time I start doing development on Diego.
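Roughly, the three suites run like this; command and task names are a sketch, and the contributing document has the exact invocations:

    # 1. Unit tests: run directly on the host with ginkgo
    #    (recursively, in parallel, from the component's directory)
    ginkgo -r -p

    # 2. Inigo integration tests: run containerized through a local
    #    Concourse Lite (itself a `vagrant up` away), e.g. via fly
    fly -t lite execute -c inigo-task.yml   # target/task names illustrative

    # 3. CATS acceptance tests: exercise the CF deployed on BOSH Lite
    #    end to end through the CF CLI, driven by a config file
    CONFIG=$PWD/integration_config.json ginkgo -r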
So now my local environment seems fine: all the tests pass, everything seems good. So I'm ready to fork the repository. In GitHub, some of you might be familiar with this already, when you fork something, you're making a copy of it under your own username. So this is going to create a copy of the executor, which is one of our submodules, and put it under the jenspinney user. It's going to be just like the repo that exists under Cloud Foundry, but under my name.

Now I check that out and do a git pull from my version of the repo. I'm going to start off by writing a test, and I'm going to watch it fail, to make sure I'm actually writing a test that tests what I want to implement, and not just accidentally writing a trivially passing test after the fact. Then I'm going to actually code up the fix. Once I'm satisfied that my code makes the test I wrote pass, I'm going to run all those tests again: the three test suites I just showed you, the unit tests, Inigo, and the acceptance tests. I'll run all of those and make sure everything is green. Once that's all done, I'm ready to open the actual PR.

So when I push to a branch on my forked repo, jenspinney/executor, and I go to GitHub, I'll see a little thing pop up that says: do you want to do a compare and pull request? I click on that, and then there's an opportunity for me to write a bit of a description. Here I'm going to reference the issue that I made at the beginning. So I'll just say: as I said in this issue, I wanted to do this little refactor. That makes it so everything is linked together, and if it's not Luan the next day, if it's some other person from the Diego team, they can follow the trail of work back to the issue and see the discussion we had. It just keeps everything well linked.

So there are some things to keep in mind when you're sending a pull request, specifically about testing: we want to make sure we're covering edge cases and error cases. When you're writing your test cases, add your happy path, but also think about what could go wrong. If that's not present, we'll likely send the PR back: hey, can you please add a little more test coverage around these scenarios? So just be proactive about that; it helps the team a lot in making sure we're getting a good contribution. And also, make sure you run all the tests before you submit the request. Jen said that already, and we'll probably say it again before the end of the talk. It's very important that the tests pass, because if the tests are red when we get the request, we'll likely not merge it.
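The git mechanics of the flow Jen just walked through look something like this; the branch name is illustrative:

    # After forking the executor repo on GitHub:
    git clone https://github.com/jenspinney/executor.git
    cd executor
    git checkout -b reorder-logger-args   # illustrative branch name

    # Write a failing test first, then implement the fix, then rerun
    ginkgo -r

    # Push the branch to the fork; GitHub then offers
    # "Compare & pull request" for opening the PR
    git push -u origin reorder-logger-args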
With that said, I'm again the community pair, and I notice that Jen has sent her pull request like she promised. The first thing we generally do, through the GitHub UI itself, is look at the file changes, just to get a quick glance at what was changed and whether there's anything obvious we can recommend fixing. In this case, the request Jen sent had one style preference of the Diego team's that she didn't follow. The particular example here is that the order of the arguments isn't what we generally do, because we generally pass a logger as the first argument, just to have a standard across all the methods we call. So I send that feedback to her; instead of just fixing it myself, I send the feedback so that next time she sends a contribution, she'll know, and won't make the same mistake again. And then, after I give some feedback on the lines, I send one message saying: hey, I wrote some feedback on your change; it looks good otherwise, and it's good to merge if you could just fix that and update the PR, and then we can merge it in.

So I see his comment; it's an easy enough fix to do. I just reorder the arguments in that function. After I do that, I'm going to do a git rebase. I'm doing this so I can squash my two commits, because I don't want to submit this as two separate commits, but also because I want to see if other changes have gone in since I started. I want to rebase against origin/master, against the Cloud Foundry version of the executor, because while we had this back and forth, a couple of days might have gone by and maybe someone else committed. So I want to make sure I'm rebased on top of the latest version of origin/master so that my PR can go in cleanly. Then, once that's all rebased and I have a single commit, I do a force push to my branch, and this automatically updates the PR. After the force push, I go back to GitHub, update the PR, and say: hey, I updated it with your feedback; can you take a look again?

And then finally, when I get to the updated pull request, I look at it again, just to make sure the changes make sense. If they do, we pull the code down locally so we can run the tests. If there are any last-minute things we notice, like a typo, for example, in this case I mentioned there was a small typo, so I just fixed it, we'll do small fixes like that at the last minute. But we run the tests, make sure it's all good, and then we merge it. When we push the merge, GitHub automatically updates the pull request, so Jen gets a notification. We also update our Tracker story to signify that this is all merged and good to go, so that our product manager knows to take a look at it and accept the story. And at that point, the contribution is merged and all done.
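The squash-and-update step described above looks roughly like this; it assumes a remote named upstream pointing at the original Cloud Foundry repo:

    # Commit the review fix, then squash it into the original commit
    git commit -am "Reorder arguments per review"
    git fetch upstream
    git rebase -i upstream/master   # mark the second commit as "squash"

    # Force-push to the fork's branch; the open PR updates automatically
    git push --force origin reorder-logger-args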
So we just want to cover some common rejection reasons; or not really rejection, but reasons we might come back to you and say, we're not ready to merge this yet. The first is inadequate test coverage. Remember, we want happy-path tests but also edge cases, and depending on the scope of the work, that might mean unit tests, an Inigo test, or an acceptance test. If it's a big enough feature that you might need one of those, we'll probably be working with you through it, and we'll point it out to you. But in general, you want to be thinking: am I testing every edge case? Is my code fully tested?

Another reason we might come back to you, or just flat out say we're not taking the pull request, is if it conflicts with work that's already in progress. This happens more when you don't talk to us before you do it, when you just submit the PR. So if it's conflicting with our roadmap, meaning someone else is already working on something similar, or it's just something we don't think is the right direction for how we envision the code going, we may not take it. So, again, a reminder to talk to us first.

Another reason we could come back and say we're not ready to merge is if the code is not rebased against master, meaning it won't merge cleanly. It's a little unfortunate: sometimes pull requests take a couple of days to go through because there's a bit of back and forth, and in that time other people can come in and make commits in the middle. So you have to go back, rebase, and make sure your branch merges in cleanly.

Another reason is untidy commits. If there's been a lot of back and forth and a lot of iterations, you might have commits stacking up. In general, for most cases, you probably want just a single commit, so squash your commits so it looks clean in the git history once we actually merge it. The other issue would be not running the tests, so you have some failing tests, we catch that and come back to you, and it lengthens the whole process.

So lastly, if you want to get involved but you don't have a specific feature you're dying to do, and you haven't found a bug or anything like that, you're welcome to come to our Slack channel and say: hey, my name is so-and-so, I'm interested in getting my feet wet with Diego, but I don't know where to start; do you have some bug I could work on? Is there something I could do? And we'll work with you; we'll try to find something. Our Tracker is public, so you can look at what we're working on right now, but I wouldn't just pull something off of Tracker and assume you can work on it. For the most part, we assume the stuff in Tracker is stuff that we're going to get to. If you see something in Tracker you want to work on, you can message us and say, I really want to do this, and then we can talk about that. But in general, we assume the core committers are going to be pulling things off the Tracker backlog. So the main point is: talk to us if you don't know what you want to work on, and even if you do know, we recommend talking to us first, so we can have a conversation before you invest a bunch of work and energy.

So, with all that, are there any questions?

Sorry, the question was about licensing, about contribution licenses. That's a great question. I don't know if we have automated software or anything that goes through and checks for that. To send a contribution, you have to sign a CLA. That's actually an automated process, but it's for the individuals; it doesn't necessarily solve the problem of someone sending a contribution that's copied from somewhere else. We have an automated process for signing CLAs, and that's basically the extent of what we've automated. Yeah, so it's tricky, you know, it's tricky.
But yeah, there is a point we didn't mention here, which is that if you try to submit a PR and you haven't signed the CLA, there's an automated bot that will come back and say, you know, you need to go sign the CLA. But it doesn't really solve the problem of people going in and stealing code from places they shouldn't be stealing it from. Yeah, people don't think about it. I think it's kind of a problem for any open source project: you're trusting the people bringing the code in to be doing it in an appropriate way. Any more questions?

That's a good question, and you should stay for her talk after this, because the Diego team actually spans three time zones now. So we're generally available from, what, nine on the US East Coast until six on the West Coast, or thereabouts. So there's a good twelve hours there, but yes, there's some availability gap. Any more questions? Cool, thank you everyone.