Hello, can you hear me? Can everybody hear me? Yes, hi. OK, welcome to Runtime. I want to say real quick what this talk is and what this talk isn't. What this talk isn't is a talk about the architecture of the CF Runtime. If you came here for that, that's not what this is. I'm sorry. What this talk is is a talk about the process of the CF Runtime team and how you can contribute and how you can get the most out of being part of a really excellent open source project that has a really excellent team behind it. So let's get started with a little bit of an overview. We're going to talk about our development process on the team, how we collaborate with other teams, our continuous integration and deployment, how we interact with the community at large, how you can contribute back, and hopefully at the end we'll have a few minutes to open this up for some questions. So without any further ado, we're going to talk about our process. Some of this is a day in the life of a Runtime team member and some of this is overall Pivotal process and why we think it's a good thing and why we're talking about it. So we'll start with stand-up and pairing. Every morning we do a stand-up. The stand-up is our morning meeting. It's where we talk about what happened. What did we do yesterday? What did the organization do yesterday? We have two stand-ups. Big stand-up is organization-wide. Everybody at Pivotal comes, either in person or digitally, as it were. And we ask, do people need help? Do people have interesting things to share with the rest of the organization that might not be shared otherwise? And finally, we talk about events. Little stand-up is a very similar format. We say, what did we do yesterday? Is there anything that people need help with? Are there any interesting things to share? And finally, we set up pairing. Pairing is our process. Every developer has a partner for a day, sometimes longer; it kind of depends on the team. And the reason we do this is because we've found that the standard idea, where I write some code and I want this code to be included in the project, so I give it to Dan, and Dan looks over it and says yes or no, is inefficient. And there's a better way to do this. So we have this idea of a pair. A pair is a cohesive unit of engineering work, or the ability to do engineering work. And the idea is that two minds working together are actually greater than two minds working apart. And we always get two eyes on every line of code that's written. And if we don't, there are some other things that we can do to make sure that the code is worthwhile. So we will submit pull requests if we are soloing on code. But ultimately, the idea is that pairing creates this concept of two eyes on the code, and it makes sure that if whoever's writing some code misses something, the other person can catch it. There are lots of little safety nets in the pairing process that help make it a more efficient way of writing code. We think this is a good thing. IPM. IPM, our iteration planning meeting, is part of our agile process. Agile is just a word that doesn't really mean anything. We like to think of ourselves as being able to iterate very quickly on our code. So we break code down into chunks that we call stories. We put them in a tool called Pivotal Tracker. If you are contributing to Cloud Foundry, you should be aware of Pivotal Tracker. It's this really nifty tool for organizing an agile code base and an agile project. But the idea is that we come up with all the things that we need to do for a project.
And we can look at them and see them. And our product manager, Yui, can organize them into a timeline that makes sense from a product standpoint. And every Monday morning, we go into a room and we talk about, what are the next two weeks' worth of work that we have to do? How hard is it? What are some edge cases that we might not have thought of initially? And we say, all right, we think that these things need to happen in this order. A great example is from one of those Monday mornings, when we had a new track of work called context paths. This was a request from a foundation member that we have a new way of routing to different apps. And I won't get into detail on that. But the idea is that we suddenly had a bunch of stories, and we didn't know how much work they were going to take. And the idea is that before somebody actually starts doing work on them, we have to have a conversation. Every story is a conversation about the work that's going to be done. So we have this conversation. We come to a conclusion that adding this functionality to the router is going to be harder. And we mark that. And I say harder. What does that mean? What is harder? It's going to be harder than something that's easier. What does that mean? All right. So the idea is that we're not actually coming up with quantifiable ways of measuring hard. We're coming up with an idea that we can say, we understand that this is going to be a difficult thing to do. And part of that understanding is the idea that you can't quantify difficulty. You can't say, this is 10 units of hard, and that thing is two units of hard, and so this other thing is much harder. You have to say, this is relatively more difficult. And we understand this because we had a meeting and we had a conversation about that unit of work. This conversation is really important. The next thing that we do, and this is part of our overall process, is we test drive everything. And one of the things that really helped me understand test driving was the idea that your tests aren't a way of verifying that your code works, but are rather a way of specifying how you plan on writing the code. And that's one of the reasons why, in Rubyland, tests are called specs. It's a specification. It's you saying, this is how I want my code to eventually work. And you're not necessarily coming up with implementation-specific details about the code when you write a test. Instead, what you're saying is, this is how I expect the code to behave, what functionality I expect it to result in. And I think that's a really good way of talking about test driving, because the idea is that we're allowing our tests to drive how we end up developing. And this is something that Onsi talked about this morning in his really excellent talk about Diego. He touched upon this feedback loop, the idea where you write a test and that informs how you're going to write some code. You write some code and that informs more information about what you really need to test. There are lots of places where this comes up. So we came across this really interesting edge case in this routing situation, where we came up with a new way of storing routes in the go router. And we had a test for the new way. And then we kind of discovered that we hadn't covered an edge case: if you put the route in without the leading slash, the go router shouldn't accept it. But we didn't have a test for that. And we saw that the go router was actually accepting things that it shouldn't be accepting. So we had to backfill a test for that.
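To make the "tests as specification" idea concrete, here is a minimal RSpec-style sketch of the kind of edge-case spec that gets backfilled. The real go router is written in Go, and the names here (RouteRegistry, register, lookup) are hypothetical stand-ins; the point is that the spec states how we expect the component to behave, not how it is implemented.

```ruby
require "rspec"

# Hypothetical stand-in for a route registry; the real component lives in the
# go router (in Go), but the shape of the backfilled edge-case spec is the same.
class RouteRegistry
  def initialize
    @routes = {}
  end

  def register(path, endpoint)
    # The specification says paths without a leading slash are rejected.
    raise ArgumentError, "route must start with '/'" unless path.start_with?("/")
    @routes[path] = endpoint
  end

  def lookup(path)
    @routes[path]
  end
end

RSpec.describe RouteRegistry do
  it "rejects routes that are missing the leading slash" do
    expect { subject.register("apps/my-app", "10.0.0.1:61001") }
      .to raise_error(ArgumentError)
  end

  it "returns the registered endpoint for a valid route" do
    subject.register("/apps/my-app", "10.0.0.1:61001")
    expect(subject.lookup("/apps/my-app")).to eq("10.0.0.1:61001")
  end
end
```

Written first, a spec like this fails until the registry actually enforces the leading slash, which is exactly the feedback loop Onsi described.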
And it's one of these ways that as you write code, as you discover more about the code you're writing, it informs how you should be testing it. And as you write tests, as you think more about what is the actual functionality that you should be testing, it informs the code that you're writing. The feedback loop feeds back into product and engineering. As we finish stories, it informs the product team, or our product manager Yui, about what the next stories really are. And as she writes stories, that informs us about how we need to be doing them. So that feedback loop is really important. The last two things are DevOps and retro. DevOps is the idea that we are our own operators. If you saw Andrew Clay Shafer's talk two talks ago in this room, you have this understanding that we're not picking up CF release and throwing it over a wall, as he put it, to operations. We're deploying it. We deploy it many times a day. Every time a commit gets pushed to CF release, it gets deployed. And we manage that. And we manage deployments of CF release all across the organization. Other people manage CF release all across the organization, and we help them, too. The idea is that not only are we writing the code for CF release, but we are managing deployments of it and having a really strong, deep understanding of how CF release, the code that we're writing, works in a deployed production environment. That's super important to our team, because we get direct, immediate, actionable feedback about the code we're writing by virtue of being the people who are deploying it. Finally, retro is something we do at the end of the week, and we talk about the week. And some of it's technical. We talk about things that went wrong technically. And some of it's emotional. Things that we thought, oh, we had this interaction, or something upset us about something that happened in the workplace. But it's a way of reflecting on the work and the workplace in that week, and talking with your coworkers about how we can not only iterate on our code, but also iterate on the process by which we develop Cloud Foundry. And that's super important, because not only is our code fluid, and not only do we want to iterate quickly and develop Cloud Foundry quickly, we want to make sure that our process always supports that goal. And that means we have to iterate on our process, too. So Dan is going to come up. Dan Levine, he is a contributor from IBM. And he's going to talk a little bit about our development process from the point of view of somebody outside the Pivotal organization. So thanks, Zach. So when Zach was talking about this stand-up where the entire group gets together, IBM and a lot of other companies who really aren't following this process kind of take a look at it, and we're like, OK, how can we do that? So I live here in the South Bay. I'd like to not commute two and a half to three hours every day to go up to work. That's kind of annoying. And we are an open source community, and so we leverage a lot of tools that the open source community feels comfortable with. So, stand-up and pairing. Every day for our team, we jump on a Google Hangout. It's just there. I get context on what happened yesterday. I can bring up my own issues. I can help other people resolve their issues. So that's really important to keep that flow and have everyone communicating. Besides that, when we actually do our TDD and we do our DevOps and we actually develop, we use another tool called Screenhero.
It allows us to actually sit down with our pair and go through the same user interface that you would have if you were actually working side by side with your pair. It works pretty well. The only thing that's a little bit of a bummer is that when you're not physically there with your pair, you can't have the kind of communication that you might have with another pair. Working with your own pair is great. It's fine. You can hear them. When you're working with someone else that isn't your pair, then it kind of breaks down a little. It's a little bit harder. Your pair is then relaying information back to you, and hopefully nothing gets lost in that translation. And so then the last thing when working remotely is our IPM and retro. Now personally, I like to go up at the beginning and end of the week, because that's when those two meetings are. It's a lot nicer to be in person, especially for retro. And you get beer and snacks, so there's a little bit of incentive to go up there. But being remote, it's not too bad. And so with that, we use kind of the same techniques when collaborating with other teams. Google Hangouts are a big part of discussions. Video: it's way easier to see someone's face and communicate with them. And the other tool that we have for non-face-to-face communication is Slack, which is just an internal messaging tool. But something that's really important about the runtime team is what we call an interrupt pair. So every morning, two people, one pair, is randomly selected as the interrupt pair. So anyone else inside CF that has any issues comes and literally interrupts the interrupt pair. And these people are responsible for going to help them, taking a look at their issues, sometimes debugging it, sitting down with them, and kind of understanding: if there really is a problem, do we need to add a story? Do we need to get this work done? Or are they just not configuring something right? So they really take care of those kinds of issues. Another thing about Slack: it's pretty great for simple yes or no questions. Hey, could you take a look at this for me, what do you think, that kind of situation. You'll get simple questions. Someone's like, hey, we can't seem to get our app staged, and it seems like the container is being deleted every time it fails. Is that the intended behavior? And we're like, yeah, that's the behavior we would expect. So then maybe they say, well, how could we go and debug this issue? So obviously, the logs, blah, blah, blah. But sometimes it requires more than that, and that's when the interrupt pair is really handy. You could do something insane, like fork the buildpack, put a sleep in there, go into your container, and start inspecting the files, which is just one example that had to be done. Another form of communication that we have is a pretty sweet bat phone. Now, I can't give you the number to the bat phone. Only a few people can have that number. Yui, who's basically the mayor of Gotham, has the number. So she can reach us whenever she's remote. And that's just another tool that's pretty easy for communication, where you don't want to go through a giant text list of, here are all my issues. Pick up the phone and chat. And then lastly, how do other teams actually contribute code? Well, most teams, obviously, they just directly commit to Cloud Foundry. They're there. Most of them are in San Francisco. It's pretty easy. Once in a great while, there are actually PRs made. It could be someone from the team who is soloing for the day.
Like Zach said, we want two eyes on everything. So someone who solos for the day will make a pull request and ask a pair the next day, hey, could you review my code? Then they and the pair can review the code that they submitted, something like that. And all this code actually goes through our continuous integration pipeline. And our CI is really important to us. We do TDD for everything, so if we're not running the tests, what's the point? So our CI is always running, all the time. All of your changes are going to start off by going to a pre-staging environment. We named all of ours after sauces, so we have Tabasco as our pre-staging environment. And it's a pretty lightweight deployment of CF. It's good to see that, hey, since our last code changes, we can roll everything and upgrade everything in a clean fashion. There are minimal tests that run against it. We run our tests against it and say, OK, this seems to be pretty good. And once it gets through that phase, it goes into staging. And staging is where things are much more interesting. Staging is a semi-small-scale production environment, and we have everything working in there. We have services hooked into our staging environment. We have Diego running on our staging environment. And it's also the place where most of the PMs go for acceptance. So it's the place where, hey, look, we pushed the code, it's working there, everything says it's green. And then the PM is like, OK, well, let me go take a look at it. Let me play around with it on the staging environment. And it also serves as a great place where we're all coming together, and then we can step back and say, did we break anyone else? So maybe we pushed some code that accidentally broke services. They'll come over, bug us, and be like, hey, you see those red things on your board that say services? Well, we care about those. Could you please fix the code that you've pushed accidentally? And then besides that, we have smoke and stress testing. So we do smoke tests to make sure that our environment is always up and running. When we do these staging rolls, we have little apps running in there saying, hey, we're rolling everything, but we can still talk to you. CF is still running. Fine, we're happy with that. Occasionally, we do a bit of stress testing against our deployments. It's not all that common. But when we want to check for backwards regressions, we'll do some stress testing, maybe potentially spin up little environments that are isolated to test a specific feature. But for the most part, we really don't do all that much stress testing. Another thing that's important is our environment differences. As the runtime team, we are kind of the meld of everything. We manage the giant CF release repo that everyone's contributing to. And so we literally deploy on OpenStack, vSphere, and AWS, and we manage all of these different deployments. And they're pretty useful to us. There are small things that people do all the time, thinking they aren't going to have any consequences, and these environments are where we catch those problems. For example, there was a Ruby gem that we're using called Fog to manage the back end of connecting to AWS or to OpenStack. And someone said, these symbols look a little funky. Can we just change them to strings? Sure, it's Ruby. You can do that. Totally broke OpenStack. Whatever the gem was doing underneath, for some reason, strings and symbols were not handled the same way. And so our OpenStack environment exploded. And that was good.
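To show why that seemingly harmless cleanup blew up, here is a tiny Ruby illustration. It is not the actual Fog code, just the general symbol-versus-string-key behavior that bit us.

```ruby
# Not the actual Fog code -- just an illustration of why swapping symbol keys
# for string keys in a Ruby hash is not a no-op.
connection_options = {
  :provider           => "OpenStack",
  :openstack_username => "demo",
}

# A gem that looks its options up by symbol key...
def provider_for(options)
  options[:provider]
end

puts provider_for(connection_options).inspect   # => "OpenStack"

# ...silently gets nil once the keys are "cleaned up" into strings,
# because :provider and "provider" are different hash keys.
stringified = {
  "provider"           => "OpenStack",
  "openstack_username" => "demo",
}

puts provider_for(stringified).inspect          # => nil
```

Nothing raises and nothing warns; the failure only shows up when a deployment in CI tries to use a nil provider, which is exactly the kind of thing the multi-IaaS environments exist to catch.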
This is why we have trust in the CI. So one of our environments went red. And when you hear, oh, Cloud Foundry, it's safe to fail, it's because we have these giant CI environments that make us feel comfortable that we're safe to fail. Red doesn't mean, oh, something is wrong. Red is a cue to us to be like, hey, let's go investigate why this went red or why something didn't pass. So red isn't a problem. It's just a great way to be sure that you have pushed all your code and everything is working in there. And when everything is green, then you're golden. And with that, I'm gonna throw it back over to Zach to finish up on contribution. Hello, can you hear me? Yes, good. Hi. So that's our process. That's what we do internally. This is how we work as pivots on the runtime team at 875 Howard Street in San Francisco. I see lots of familiar faces, but I see lots of unfamiliar faces. And first, I wanna say the community is great. It's awesome that we get requests from the community. We get code, we get issues, we get conversations that inform new features. All of this is awesome, especially code, because it's work that we don't have to do. So yeah, pull requests, come on, keep them coming. But how can we help you understand our process in order to help your code and your issues get solved quicker and get pulled in quicker? So that's what we're gonna talk about today. So, a little bit of information about how we work through issues and information that comes from the community. We triage stuff in Tracker. So again, all of this stuff is completely open. You can take a look at our Tracker project and see all of the stories. You can see all of the features, all of the bugs, all of the chores that we have in there. You can't edit them, but you can see them. And we really want you to. We want you to take a look and understand Tracker and try to get a better understanding of how we iterate on Cloud Foundry. For the most part, issues come in, bugs come in, conversations and pull requests come in, and Yui, our product manager, takes a look at them. She's the first line of defense: is this reasonable? Does this belong to runtime? Sometimes we get a bug on, you know, vcap-dev, that's the runtime mailing list, that really belongs on the BOSH mailing list. That's our first line of defense. Yui's gonna say, is this reasonable for runtime to be looking at? She puts it in the Tracker backlog. We have a little section for the community. The community pair, which is kind of like the interrupt pair, only they get interrupted by the community, will take a look at that stuff. The sources of these things, like I said, they're gonna be pull requests on GitHub, issues on GitHub, email sent to the mailing list. And we also get foundation requests that sort of go straight to Yui, and I don't really understand it; there's like a kind of a secret chip implanted in the product managers, and the foundation has a little remote control, and then she puts a story in the Tracker. Some feedback we got when we were beta testing this talk was it needed to be funnier. So could you all laugh a little harder? Great, okay, okay. So, thanks, Jim. Yui puts foundation requests in the Tracker and we take care of them. How can you contribute issues and get the most out of asking us these questions? Send an email to the mailing list, put something on GitHub issues, but what does it really take for us to help you? The best thing that you can give us is your manifest and your logs related to the issue.
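One practical note before you mail a manifest anywhere (Zach comes back to this caveat below): scrub the secrets out of it first. Here is a minimal Ruby sketch of one way to do that, where the filename and the list of secret-looking key names are just assumptions about what a typical manifest contains.

```ruby
require "yaml"

# Key names that usually hold credentials -- an assumption; adjust for your manifest.
SECRET_KEY_PATTERN = /password|secret|private_key|token|credentials/i

def scrub(node)
  case node
  when Hash
    node.each_with_object({}) do |(key, value), out|
      out[key] = key.to_s =~ SECRET_KEY_PATTERN ? "<redacted>" : scrub(value)
    end
  when Array
    node.map { |item| scrub(item) }
  else
    node
  end
end

# "cf-manifest.yml" is a placeholder path; point it at your deployment manifest.
manifest = YAML.load_file("cf-manifest.yml")
puts YAML.dump(scrub(manifest))
```

Send the scrubbed output along with the relevant logs instead of the raw file.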
Your manifest and your logs are super important, because we get stuff all the time and we're so desperate to help. We truly are, and I don't mean this in a sarcastic way. We genuinely want every issue that comes to us to be solved, and we want people to come away from their interaction with us saying, man, that was so cool, the runtime team fixed my problem. They're the best. We want that to happen, right? How can we do that? Well, we're gonna ask you for your manifest and we're gonna look through your manifest. We're gonna say, okay, what might be missing here? Were there changes in the last major version of CF release? That's a big one: there's a new thing added to the manifest, a new property, or a property changed and somebody didn't pick up on it. Their new manifest didn't get that change, and now Loggregator crashes every time you try to deploy. That's huge. Give us your manifest, please. Same thing with logs. Your app is crashing, but it's not crashing in a way that makes any sense. It's probably on the DEA, and you say that to us. You say, it's on the DEA. Well, we're gonna say, can we have your DEA logs? So send us that stuff. And a last little bit of information. Be careful, please, when you send us your manifest, because it's got private keys in it and it's got your passwords in it, and if you mail vcap-dev a manifest with your private key, it's not just us who gets that. It's every other single person in the world who knows how to go to code.google.com or whatever the Google Groups mailing list is, right? And it's moved now, but that's not important. So don't mail us your private keys. That's a thing you shouldn't do. But in the meantime, yeah, send us your manifest, send us your logs. Give us as much information as you can. Sometimes there's this balance between sifting through a lot of information to try to find a problem and feeling like you don't have enough. And we'd rather you err on the side of more information, because that just helps us solve your problem quicker. Moving on: code. How can you get your PR merged? Well, we talked about test driving. This is really important. And we talked about trust in our CI. Well, we have this trust because every single feature we write comes with a really comprehensive suite of tests that we've written before we wrote the feature. This means that we can really rely on our continuous integration to tell us very accurately whether or not our code is working. And that means that if we get code that doesn't have comprehensive test coverage, that's a problem, because it means that we can't really pull that in and still feel that the CI is as reliable as the standard that we hold it to. So I skipped a bullet point. I apologize about that. But, is it tested? This is super important. Please write tests. And if you feel like your tests don't make a lot of sense, if you're not sure how to test something, ask us. Boy, do we love talking about test driving at Pivotal. Man, if you say test driving next to somebody from Pivotal, you're in that conversation for at least an hour. It's great, right? We love testing. So please ask us, because we can't wait to help you. And the more people who understand test driving, the more people who are able to really confidently contribute to CF, and only good things can come from that. Did you run acceptance tests? We don't expect everybody to write an acceptance test. That's an end-to-end integration test. That's pushing an app and expecting that app to work given the feature you wrote.
We do not expect you to be writing these regularly. But did you run them? Did you make sure that you didn't break something that was already working? That's a big one. So please run the test suites before you contribute a pull request, because sometimes it's just a little tiny thing, and it's so much easier on you; we don't want to waste your time. We don't want to send you back to fix something that you could have just caught. And it's not like we're upset about, oh, you could have just caught it. That's not what I mean. I mean, your time is just as valuable as anybody else's, so we don't want to waste that. We want to know why you made the change. That's important. And the other thing that's important is, is it generally applicable? Does this apply to every possible consumer of Cloud Foundry? Or did you write a solution that's very specific to your installation? If you did, can you generalize it? Can you take a solution that applies to your specific multi-tenant deployment of Cloud Foundry on OpenStack and apply that to people on vSphere and AWS, and in the future, CenturyLink Cloud? Can you do that? That's really important, because we're a foundation and we are platform agnostic. We can't pull in OpenStack-specific contributions. But we can pull in generic contributions. And we want to, especially if you wrote it. This is code we didn't have to write. The last thing is just thinking about, what does this break? Start a conversation with your pull request. Pull requests are great when they're a conversation. What does it break? What might it break? And have that conversation. And sometimes the conversation is, it won't break anything, so we'll pull it in. Sometimes the conversation is, this might break stuff, let's go back and fix some problems. That's really important. And like I said, every story in our backlog is a conversation that we promise each other that we're going to have. And as a community contributor, we'd like to make that same promise to you. So I want to show you the anatomy of a good-looking pull request. This was from Mike Youngstrom. Raise your hand. Yeah. So this came in. This is fixing an issue that Mike also filed. And it's got a really great comprehensive test suite. There's tons of other code in here, by the way. But it's got this nice comprehensive test suite. It's got a great little PR description about what this pull request actually does. It gives us some hints about where to look for possible problem areas. This makes our job and our life so much easier. And I've got a little link up here that I'm going to click on. Oh, that didn't work. How does this, can I? Well, we'll skip it for now. But the idea is that that link was going to be the full issue. The issue, I'll describe it, had a long discussion about the problem that needed to be solved. And it went back and forth. Yui was involved from the product standpoint. We had engineers involved from an engineering standpoint. The idea is that every issue, every Tracker story, every pull request is a conversation that we have about how to make Cloud Foundry better. That's super important. So without further ado, do people have questions? You. It depends. So we have this picture of our board. This board shows four pairs. One of them is fake because they're stuffed animals. But this is the board. So the idea is that at any given time, we might have anywhere between two and four pairs. But right now, I think we're at three pairs? We're odd. We're at three and a half, actually.
So being odd can be a problem, because it means somebody's always soloing. But that's just something we deal with as people rotate on and off the team. And then we do have the morale-boosting pair in case somebody needs some stuffed animal love. Right there. Yeah, so IBM kind of faced this problem. We had people working on the runtime team from London, the East Coast, and here in San Francisco at one point. So there's one person in London. He's going to solo for part of the day. Eventually the people on the East Coast wake up, and he would pair with one of them for half a day, the second half of his day. Once he left, the two people on the East Coast would then pair up and continue on for the next half of the day. And then right at lunchtime, if they stayed together, their day is over. But in the morning, they could split off from pairing with each other and pair with someone from San Francisco for half the day. Just keep it rotating, keep seeing new faces. And then once they were done at lunchtime, they dropped off, and the two pairs that were pairing here in San Francisco then combined and became one pair again. So we obviously do our best to try and accommodate so that everyone is always pairing. But there are definitely cases where someone is going to have to solo and submit pull requests for a review. Again, does anybody else? Yeah, I'll get back to you. I'm sorry, I just wanted to give somebody else a chance. So, performance and quality indicators: a lot of that comes from our continuous integration environment, where we have stress tests. We have a bunch of graphs that are just constantly coming from people hitting apps deployed against our staging environment. And those graphs are showing us go router round trip time, for instance. That's one of the things that we're measuring. How long does the request spend in the go router? How long does it spend in the DEA? How long does it spend coming back? And what's the overhead just due to networking, where it's not really in anything? Does that make sense? Oh, yes, I see. Yeah, we do. So like I said, we give "points" to a story, which is how much work a story is going to be. And I put scare quotes in there because it's not a quantifiable thing. But the idea is that at the end of the week, we can see, oh, you got through 17 points. And again, that doesn't measure something specific. But if the next week we see you got through five points, then that's showing us, assuming we didn't change how we were estimating how hard a story is, oh, suddenly something slowed down. Or we didn't spend enough time working on real features; we were too busy fixing bugs. So those are not so much indicators of our performance as individuals, but as a team: how much forward progress on Cloud Foundry were we actually able to make? And a little bit of a note here: we only point stories that we think are features. Bugs and chores, cleaning up the code base, don't get pointed. Which means that if you have a week where you were only fixing bugs, you're going to see that number of points at the end of the week drop. And that's good feedback to say, wow, we just spent all this time fixing bugs. Do we have a problem where we're introducing a lot of bugs? It's a really great indicator of problems in your process if you see points drop precipitously. And likewise, if you see points skyrocket, that can also be an indicator that you might be overestimating things. Does that make sense?
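As a rough illustration of what that end-of-week number is, here is a tiny Ruby sketch with made-up stories and invented point values; the convention it encodes is the one just described, where only features carry points and bugs and chores stay unpointed.

```ruby
# Hypothetical week of accepted stories pulled from Tracker.
accepted_stories = [
  { type: :feature, points: 3, title: "Context path routing" },
  { type: :feature, points: 2, title: "New manifest property" },
  { type: :bug,     points: 0, title: "DEA crash on staging failure" },
  { type: :chore,   points: 0, title: "Clean up CI scripts" },
]

# Velocity only counts pointed features, by convention.
velocity = accepted_stories
  .select { |story| story[:type] == :feature }
  .map    { |story| story[:points] }
  .inject(0, :+)

puts "Velocity this week: #{velocity} points"
# A bug-heavy week drives this number down, which is the signal to ask
# whether the team is introducing too many bugs.
```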
In the back. So it kind of depends. Everybody is allowed to write a story. And like I said, a story is really just a promise to have a conversation. Anybody can start a conversation. So we write features as developers. Most features come from product. That's just because product is picking a direction, and we as developers are part of that. But that's product. Lots of bugs come from developers. Chores are another type of story. Those come often from developers, but also from product. If, you know, Yui says, we want to create this new environment, that might be a chore. It might be a feature if we think that that's really going to move our progress on Cloud Foundry forward. But it's a collaboration. Like I said, anybody can start a conversation. Right. So there are a couple of ways to move forward in a situation like that. Something goes red. Take a look. All right, what was the commit? The commit was just a few lines. All right, well, let's go in there and let's fix it. Easy enough to do. Or it could be, hey, there is this really large commit. We're not really sure what broke it. Someone made a pull request that is 10 commits long. Let's just revert the commit and get the pipeline green again, and then throw the ball back in their court and be like, hey, we tried it out. There are some issues here. Here are all the logs that we grabbed. We try to be helpful for their debugging as well. That really doesn't happen too often, though. And then you sometimes get the really odd commits, where we were rolling the UAAs in one of these new bumps and a migration didn't complete in time. So at that point, your whole system's just kind of borked, and it really requires someone going in there, fixing the code up, and then putting a commit on top of that to make sure it doesn't happen again. So there's lots of stuff that can happen where we have a huge pile of commits and the build goes red and we're trying to fix them. One of the dangers of that is that when our build is red, we don't want anybody else to commit to Cloud Foundry. So this means that services, Loggregator, all these different teams that are really important have commits piling up as they do work, because we're not telling them to just go home. And now, OK, so now they've got these commits piled up. They've got 10 commits that they're ready to go. And our build's been red for a few days because we had some wacky problem with flakiness. And this is an actual thing that happened, where it was four days of a red build because we just had so much flakiness in our testing, and every other team had 20 commits ready to go. We actually ended up doing a staggered commit into CF release, where we kept the build red and we said, OK, services, commit one at a time. OK, LAMB, do your commits. OK, Diego, do your commits. And we watched as each of those teams made it through their commits, made it through every step of the build into staging, and we were green. Then we said, OK, now we're back to normal again. But it's a dangerous thing, where if you don't decide, roll back now, get it green ASAP, you're going to be dealing with not just your backed-up work but other teams' backed-up work. How do we handle them? It's part of the story. The router should be able to handle this much; that's a constraint in the story. And that's part of that conversation we have. And this came up recently, where we were doing performance testing against the router. It was a chore in this case. But it was mostly just saying, well, actually, how much does the router handle?
And then we discovered that a new commit to the router made it so that the router couldn't route as quickly as we wanted it to. And so we had a new feature, which was, OK, implement this new data structure for storing routes that makes the router much faster. But it's really just part of that conversation we have. We want the router to be able to route a request from outside to an app. That's what a story might be. And as part of the conversation we have, one of the bullet points in the story is, the router must be able to handle such-and-such requests in a second. And that might just be a constraint. And as part of the acceptance for that story, product might actually test that, use Apache Bench, and see how much the router can actually handle. Thanks all for coming. If you have any more questions, come on up.