My mic's a little loud. Hello, welcome everyone. My name's Sean Dague. I'm in the IBM Linux Technology Center. I'm a core contributor on Nova, the QA team, Tempest, DevStack, and some of the upgrade testing in Grenade. I review a lot of patches. So today I figured I would give an introduction to what it really takes to get from having an idea about something you want in OpenStack all the way through to a commit.

The origin of this talk was the fact that I find I end up explaining this a lot to first-time developers trying to get stuff into OpenStack. There's this weird maze of things they don't realize when they first try to propose a commit: why did that get rejected for this reason, that reason, whatever. So I'll try to lay it out as much as possible and walk people through it, so that hopefully your first or some of your early development contributions can be successful, and you can understand when things are bad and when things are good as part of the process.

That QR code has a link to these slides in HTML; they're up on my personal website. There are a lot of links embedded in here. I'll stick this out on Twitter afterwards so that people can catch it later and dive into the links.

I realize we don't have just a U.S. audience, so about this whole "soup to nuts" thing — I use a lot of American idioms, and I apologize to people who aren't native English speakers. It just means all the way from the beginning to all the way to the end.

So where do you start in this process? You start with having an idea: I've got this great idea for a feature I wanna land in OpenStack, or a behavior that I think really should change. That's great. The first thing you wanna do when you have a new idea is actually go and talk to other people about it. Socialize it, either on our mailing list, openstack-dev, or in the #openstack-dev IRC channel, because as it turns out, OpenStack is a rapidly evolving project, and a lot of people have a lot of the same great ideas all at the same time. So it would be good to figure out if someone else is already working on what you think should go in next, and if possible, go and work with them on it. From a review perspective, the last thing I wanna see is three completely different, incompatible approaches to the same thing we wanna add, because that just makes everything terrible to figure out. When we're reviewing code, we typically don't wanna play politics in the middle of that. It's like: if all you guys want this thing in, go back, work together, come up with some common thing, and propose that. That's the right thing to do, and it's much better to join an existing effort than to cut off on your own.

The next thing you really need to do — it's great to have an idea, it's great to bring it to the mailing list, it's great to talk about it — but it is much, much better if there is code. English only works so well for specifying what this feature is, what this terminology is, whatever. OpenStack has a language, it's called Python, and it's really good at describing solutions to problems, because you can execute them. So start with a prototype. It doesn't have to be complete, it doesn't have to be everything, but it's something people can now address as a real thing they can get their hands into, and see not just what you're talking about but your approach to it. Things are just clearer in code.
Also, we get a lot of people who show up and suggest these grandiose ideas. If you've been to a design summit a few times, you will notice that some topic sessions have shown up at every design summit and never seem to actually get traction. So there's a cautiousness about people with just ideas who aren't willing to do the work. If you show up with some code, that is huge. That shows you're serious, you're actually gonna do some work. That's great, we love that, and we want it.

So you write up the code, and then you get it ready to contribute. We're gonna go into this in detail — most of this talk is about explaining what this diagram really means — but this is basically the contribution process to OpenStack. I'm using Nova as a specific example, as just one project, but this is true for anything that's core, anything that's incubated, and anything that's in StackForge; they all work the same way. You start with an upstream master that's somewhere on GitHub, you clone it down, you make your fix on a branch in git, you make sure you run all the local unit tests, you commit the thing, and then you push it up for review. We have a specific extension to git called git-review that makes it much, much simpler to interact with these Gerrit review servers, which I'll show you some details of in a moment. Otherwise you get some really gnarly URLs that you have to hard-code all over the place.

So you need to know git; we're not going into that, we'll leave it as an exercise for the reader. But once you get to this point: we don't use git push, we don't use merge requests. We use this Gerrit review system, and you have to install this git-review tool and run it, and it submits upstream. To prep your environment for that, once you've cloned some arbitrary upstream OpenStack repository, you just run this little command, sudo pip install git-review, and it installs git-review; then you have this git review command that you can run. In every project, we've already got config files which say: when you review this project on this branch, you push to this location, so you don't have to do all that nasty stuff yourself.

The first time you do this, something will probably go wrong — many things will go wrong — but this one goes wrong for a lot of people. If you've made your patch and you just run git review to push it up there, you will get an error which looks very much like this: we can't look at your change because you have not signed the Contributor License Agreement. We actually reject inbound changes from people who aren't on the CLA, with a URL to go deal with it in Gerrit. The Contributor License Agreement is something OpenStack has as protection: for any code you bring to the project, you have signed an agreement that legally says you're allowed to bring it here, and your employer says it's okay for you to bring it here. So we don't end up in a situation where there's code in OpenStack which was illicitly brought from some other third-party environment and is not appropriate for the project. Once you've hit that, you just have to go and sign the agreement; there's a process for that, and there's a bunch of explanation on the wiki about it.
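In command form, that whole flow — clone, branch, test, commit, review — looks roughly like this. A minimal sketch: the branch name is made up, and I'm assuming a project that provides a run_tests.sh like Nova does:

    # clone an upstream project and install the review tool
    git clone git://github.com/openstack/nova.git
    cd nova
    sudo pip install git-review

    # do the work on a dedicated topic branch
    git checkout -b my-bug-fix
    # ... hack, hack, hack ...

    # run the local unit tests, commit, and push up for review
    ./run_tests.sh
    git commit -a
    git review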
Now we're into actual reviews. Is that as washed out head-on as it is from the side? Okay, great. It'll be a little harder to read some of the smaller text.

So, what's important about these blocks here? Gerrit is the code review system at review.openstack.org. When you push up a git commit, it will show the commit message in here, and you can reference things like blueprints or bugs that it implements. And what's important, as you'll see here, is that there's a whole voting process. We'll start with humans, and then we'll move to non-human voting.

Every change that goes up gets a formal review by humans. Anyone on the internet, anyone in this room, can go start reading OpenStack code and can start adding reviews to it. Just by logging in to Gerrit with an OpenID, you can start reviewing code, helping us make things better, and adding your comments. Everyone gets a plus one or minus one vote. These are fundamentally advisory, but they are very useful: as a core contributor with a lot of code I should be reviewing, one of the first things I look for is whether someone else has plus-oned or minus-oned this code. If someone minus-oned it, that goes to the bottom of the queue; I might get to it eventually. If someone plus-oned it — okay, somebody thinks this is good enough to at least go in and dive a little deeper. That's a huge indicator, and that's where I'm gonna spend my time.

Core contributors within each of the projects — and it's a separate bit for every project — get a plus two and a minus two vote. The formal policy for all projects in OpenStack is that two plus twos are required for any code to go further, to actually be approved. There's a separate approve bit that gets set, but usually the second person to plus-two the code does that. If you get a minus one vote, you can upload a new copy of your code, a new iteration, and all the minus one, plus one, and plus two votes reset, as if it's a new patch. There's also the minus two vote, which we call the ban hammer. A core contributor throws this down if they feel the approach is fundamentally flawed or dangerous for whatever reason, and it sticks: it will block this code from ever going in until that person goes and pulls their hold on it. It is used very rarely, because minus one advisories are pretty useful.

The comments can be general or line-specific — there are links in the slides showing instances of both. We can annotate per line of code, and typically a detailed review will actually go through and say: okay, minus one, and you did the wrong thing here, this thing should be better, why did you add this new variable? People will give very detailed feedback.

A minus one on a patch is a normal first reaction. People often freak out: they submit their first bit of code, they get a minus one like three hours later, and they're like, oh my God, what did I do wrong? Nothing — this is exactly normal. The reason we do code reviews is to make the code better on every iteration, so minus one is normal. We actually ran some statistics, and basically every commit that lands in git takes on average three iterations before it lands. And that's across the entire spectrum, right? When people are new to this, it's probably a couple more. I've been on the cells and bare metal series; I was one of the key reviewers helping the bare metal stuff into the pipeline.
And I mean, that was six months they were working that patch series, and something like 30 iterations before we got it to where it needed to be. And that's fine, right? The point is: what lands in OpenStack eventually has to be ready for the tree. It has to be code that the core contributors believe in, that doesn't modify things in ways we don't feel we can support in the future. In the process of contributing, it becomes owned by whatever the core community team is, and they have to feel like they can support it going forward.

If you've gotten a minus one, make sure to iterate on your code quickly, because otherwise it will be ignored. Like I said, I've got limited time, I'm trying to review as much code as possible, and those things fall to the bottom of my list. There's a link in the slides to the queue of code I'm supposed to review on a regular basis, and you will understand why I look for easy things to ignore, because there's a lot on it. Responsiveness is super critical. You really want to iterate quickly when someone provides you feedback, not sit on it and figure out if someone else is gonna accept it later, because they won't. And responsiveness is highly appreciated, right? As a reviewer, I comment on something, and I'll probably be in the code review system for the next hour or two working on stuff. If I see comments coming back immediately, I'm actually in a mode to handle that. That's huge; I remember that, I remember who you were. It means when I see code from you come through again, I'm going to preferentially go look at it, because I know that if I provide you feedback, you're gonna respond quickly, and it's actually a really good use of my time. So responsiveness is awesome; be as responsive as possible.

Don't get argumentative, which is sort of the inverse of that. In almost all cases, you cannot argue your way out of a negative review. That's a new idea for some people. Realize that the feedback was there for a reason, and be nice about it; we all try to be nice.

When you're updating patches, it's important to use git commit --amend. We put this idempotent Change-Id string in all the git commit messages, so when you upload version two or version three or whatever of your code, it will properly be identified in Gerrit as the same change and listed as an update. And all the history of the past code reviews will be in there, which is really useful for understanding how the thing evolved over time.
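In command form, that update cycle is roughly this — a sketch, reusing the made-up branch name from before; the key part is the --amend, which keeps the Change-Id that git-review injected into the commit message, so Gerrit treats the push as a new patch set of the same change rather than a brand-new change:

    # go back to the topic branch and address the review comments
    git checkout my-bug-fix
    # ... fix what the reviewers asked for ...

    # amend the existing commit rather than stacking a new one,
    # keeping the Change-Id line in the commit message intact
    git commit -a --amend

    # push again; Gerrit records this as patch set 2 of the change
    git review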
[Audience question.] Yes — it doesn't, right? But responsiveness is the important thing. If it's a day, that's good; if it's a week, that's bad, right? And the reality is, on a core team like Nova's, we've got people in a lot of scattered time zones. So it may not be me that picks something up, but other people will notice that a patch went in, that somebody said no, you gotta go fix this thing, and that you did — in what was a natural cycle for you, within 12 hours or whatever. And then it gets noticed, right? In reality, we spend a lot of time looking at this code and these code reviews, and part of being a new contributor to OpenStack is building a kind of personal reputation. Once you've seen a lot of code, you remember: oh, Boris always throws out some awesome stuff. He's this database guy we've got on the project. And it's like, great — whenever Boris throws out something, I go try to look at it immediately, because he's always hyper-responsive to what's going on, and I wanna help him through the process. Because that's the thing: realistically, as core contributors, we are sort of shepherding things. That's part of our goal. It's keeping stuff out that really can't go in, but when people are showing up and being very willing to work on things, it's also: okay, how do we make this work within OpenStack in a way that doesn't break anyone else and doesn't break any of our core tenets of scale-out and shared-nothing? We do want more contributors, and the growth of contributors you've seen release to release is a reflection of that general attitude.

Minus twos are very much reserved for code that can't go in for various reasons. It might go in a direction the project just doesn't wanna go; it might do something that's just not possible or would be really bad; it might break backwards compatibility. Or it shows up at the wrong time — we have freeze periods, and once we pass freeze... like on Nova this past cycle, we passed freeze and people kept pushing features, and I was just like: nope, minus two, minus two, minus two. All that stuff's held; we'll reopen in Havana, but don't show up with features after freeze, we're done. So realize what's part of the natural cycle. If you get a minus two, make sure you figure it out with the person — get enough feedback to understand why it can't go in. Was it a timing reason? Was it a structural reason? Was it something that's just not appropriate for the project at all? Because that needs real, serious reconsideration of the approach.

Tips for being successful: start small. If you are a new contributor, coming here with some super important feature that your particular company or area is interested in, don't make your first introduction to the community a 4,000-line change thrown over the wall. It's just not a good idea. And a lot of people don't realize this, right? It's like: here we go, here's my giant new vendor driver, and no one's ever seen me before, and I haven't followed the style guidelines — stop. Do some small things first. Go fix a couple of bugs first. Get familiar with the process. Get familiar with the people involved. It's just a lot easier.

Smaller is always better than big. Changes should be the minimum size they can possibly be; they're just easier to review. When I start my day, I get a cup of coffee, I sit down at my laptop, I bring up the code review system, and I start looking. The first thing I do is go look for 12-line changes where I can just say: yep, that was obviously a bug, and that's a plus two, right? I try to bang out as many of those as quickly as possible. Then I go look at stuff that other people have plus-twoed, because I'm on East Coast time, and there's basically nothing in the continuous integration system when I wake up. So I've got the entire merge CI system more or less to myself. I start with everything that somebody else has plus-twoed on projects that I can, and figure out what's mergeable, because if I merge it early, before the Pacific Coast guys wake up, we get all the code in, they don't have to worry about it, and we have more run time for testing later in the day. So things that make it super easy for someone to say "put that in" are great.

Use good commit messages. It seems like a minimal thing, a thing people don't tend to want to worry about sometimes, but really good commit messages are huge, because when I look at a piece of code, I need to understand why. What is the rationale for this change? If it's just a link to a bug somewhere else that I then gotta go read, and then figure out — wait, was this really the fix for that bug, what was the issue? — all right, maybe I'll deal with that one later. But if the commit message has a very distinct one-liner about what this is, and then a paragraph or two explaining exactly what's going on and why this is the approach to fix it, then I can look at that, look at the review and the code, and just be like: oh yeah, clear, go. And whenever possible, if there are relevant bugs or blueprints, reference them.
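The shape I'm describing looks something like this — a made-up example, with an invented summary, bug number, and blueprint name:

    Fix race between volume detach and instance delete

    When an instance is deleted while a volume detach is still in
    flight, both code paths update the block device mapping and can
    leave it in an inconsistent state. Serialize the two operations
    on the instance lock so the detach completes before the delete
    proceeds.

    Fixes bug 1234567
    Implements blueprint example-volume-cleanup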
We can also do patch series within Gerrit, if you're familiar with that in git. If you have a whole bunch of changes to make, don't make them as one big block; make them as a dependent patch series — basically a whole bunch of commits on a git branch. You run git review and it pushes them all up in sequence, all tied to each other, and they'll do all the right things. And again, it's easier to review.

So people review code; machines review code too. When you push, we immediately run a whole bunch of tests via Jenkins. Jenkins is very chatty on all the reviews, and if you've ever wondered who this Jenkins guy is, it's our continuous integration system that runs all our tests. The moment there's code out there, it will go and run a series of tests. When Jenkins minus-ones things, it will provide a report. It won't go into a detailed list of what was in your code, but it will report: I ran the following, we had test failures, and here are links to all the logs of the results. Those are all hot links; you can go into them and figure out why they failed. Make sure you do that. I have seen many times when people just don't understand — we have this ability to rerun tests, and they just keep rerunning tests that failed. And it's like: no, your tests failed for a reason; you have to go read the logs and figure out why.

We have a whole bunch of different checks. We have a style checker: we enforce style in the code — that's the PEP 8 Python style — and we also have additional style rules that we enforce programmatically. It's easier than reviewing for them by hand: any time we've said "this is the agreed style," we try to enforce it in a machine so people don't have to spend brainpower on it. We build docs. We run unit tests on two different versions of Python. We set up a full node environment, run DevStack on it, and run Tempest — we'll dive into the details. And starting very soon, we're gonna be doing upgrade testing from Grizzly to master on every commit, to make sure the proposed change didn't break an offline upgrade of all the services, so that we ensure compatibility of upgrade to the next release.
The style checker: everything in the style checker is in this git repo. It's based on flake8, and it actually finds some interesting Python issues — like reuse of variables in funny ways, or code that is going to break when it runs — as well as the PEP 8 style rules. We run this early, before everything else, so we don't bother running all the rest of the tests if the change doesn't pass the style guidelines, because it just can't go in otherwise. This seems like a minor thing, it seems like nitpicking, but when you have 550 active developers over a six-month cycle and half a million lines of code, consistency is the only road to sanity, so we have to enforce it.

We run unit tests; every project defines for itself what its unit tests are. In the Nova instance, there are over 5,000 unit tests in tree. They get run on Python 2.6 and 2.7 on Ubuntu, and we're working to get them running on 2.6 on RHEL 6. Believe it or not, there are actually enough differences between 2.6 and 2.7 that we run them both, because there will be failures in one and not the other. These unit tests are pretty synthetic: when they do database stuff, it's typically against an in-memory database, although we do a little bit of upgrade testing on real databases in here when it comes to our migrations.

On every proposed commit — you push a piece of code, even a one-line change — we go and spin up three virtual machines. On each of those, we run DevStack, which is an installation tool for developers to get an easy version of OpenStack running right off the master of git, and we bring up a one-node OpenStack within that guest with a specific configuration. We do this three different ways, to get a matrix of different configurations, and then we run a battery of 700 integration tests that ensures that when you call through Nova to Cinder, the right things all happen. Over the course of running Tempest, which is that test suite, we spin up around 75 virtual machines and do various horrible things to them. We do the same with volumes, and with images in Glance, and some with the network — Quantum is a little less tested in here. We also run against the Nova and Glance command lines and do some basic operations from the command-line approach. And pretty soon — we have this mostly working; it'll be gating within two weeks, probably — we're also gonna make sure you can upgrade from Grizzly to whatever your code version is, compatibly. And that gets looked at on every proposed code commit, not on the merge; it's on every propose. So the numbers sometimes get staggering: how many OpenStack instances we create a day is in the thousands, just as part of a normal run cycle.
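You can get pretty close to this whole gate run on your own machine before you push. As a sketch — assuming Nova's run_tests.sh entry point and DevStack's default /opt/stack destination; check each tree's README for the exact invocations:

    # style checker plus the 5,000-odd unit tests (about six minutes)
    cd nova
    ./run_tests.sh

    # bring up a one-node OpenStack right off git master
    git clone git://github.com/openstack-dev/devstack.git
    cd devstack
    ./stack.sh

    # DevStack leaves behind a Tempest tree already configured for
    # this machine; the integration suite takes about 45 minutes
    cd /opt/stack/tempest
    ./run_tests.sh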
So hopefully this graph looks a little clearer now — what all these extra blocks were about, the review process we've got, and how Jenkins automates it. And once you get through all of this: success! You have successfully turned your idea into an OpenStack commit. You've landed it. Hopefully it didn't take you an inordinate amount of time, but it's a victory, and you get to do a victory lap on it. There are a couple more useful links in my slides — which, again, I'll stick out on Twitter in about 10 minutes once all things are said and done here — talking about what the Gerrit workflow is, what it means to be a core developer and part of the whole plus-two voting model, and we've got some decent how-to-contribute pages on the wiki as well. You can follow me on Twitter too; I'm pretty easy to find.

So with that, we've got about 10 minutes left in the session, and I want to end a little early for questions, because it's the end of the day and people are probably as fried as I am right now. So we'll throw it out there if anyone wants to ask a question. Yes?

Yeah, so in Gerrit itself there's actually a way to add reviewers; you can add them by name or email address. So if there are specific reviewers that you think should be checking out your code, put them in there. The reality is, it works in some people's cases and doesn't in others, because — just in my case — my review queue is so long, and I get so much email from the system, that I'm mostly going through a normal filter mechanism. The other thing is that most teams have a weekly meeting on IRC, and if you've got some change that you really want to get some eyes on and people aren't looking at it, that's usually a pretty good time to raise it. There's usually a section at the end that's left for: what are reviews that people are looking for, that haven't seen attention so far? I know we do that some in the Nova and the QA meetings; I assume some of the other projects do the same thing. But again, getting active on IRC in general, on the dev channel, asking questions there and asking if someone would take a look at this, is probably a good thing to do. Don't be hyper-insistent about it, though. I'll pick on some of my friends on the Red Dwarf project: they were trying to get some stuff into DevStack, and literally every six hours it was "look at this review, look at this review" — look, stop, I'm going to eventually get to it, but you're not making your case right now. It'll take a little time.

One of the key trade-offs we try to manage within the core projects is making sure we have a core review team that's big enough that the review backlog doesn't get too crazy — that we have enough people to keep the review backlog moving. And honestly, one of my strategies for processing the queue — besides the "let me move as much code through as possible" one — is that, for exactly the folks whose stuff no one has looked at yet, I visually scan my review queue for anything that Jenkins has plus-oned, that's like a day old, that no one else has reviewed. Because that's not cool, right? We need to be more responsive than that and try to provide feedback, and I know a lot of other reviewers try the same sort of tactics. But stuff gets missed, right?

I can probably actually show you my review queue — well, maybe I can — it's kind of stupid big. Let me find out where my cursor is... there it is. Yeah, non-mirrored screens. What? Oh, it only took the — really? That can't be right; there's no way that fits on one screen. Oh, there's a next button, nevermind. Oh, it's because I'm not signed in, that's why — in the preferences on my user account I have like a hundred items per page, and typically it's a couple of pages of that. So it's just like: okay, I'm gonna go after what I can go after. And the rest of it — if anyone's here trying to push a review this week, you won't get reviewed; we're all here, and we're all a little too fried to look at code right now.

Other questions? Yes. Sure. Yep. Yep, that's a good piece of feedback. There's a question over here.
Okay — is there something specific we're looking for? Not really. The only thing is: if you write in your commit message "bug" and then a number, the system automatically turns that into a hyperlink back to Launchpad for that bug ID. And if you do that, there's actually another interface into all of this called reviewday, which hangs off the main OpenStack project, and it does another thing: it priority-weights the reviews based on the severity of the bugs they're fixing. So if you're fixing important bugs, your stuff goes all the way to the top. So if you're actually fixing a bug, linking the bug will generate a lot more visibility for it. Other than that, the process is all just the same. And if you're working on a blueprint — we use this process by which any substantial new feature really needs to have a blueprint in Launchpad, so that we can track it towards release, and we use it for the release notes and features and everything — in the same way, you put "blueprint" and then the name of the blueprint, it will automatically link it, and it will actually bump your stuff up in the review cycle as well.

Yes? Yep — so, yeah, absolutely. We recommend to everyone that when you're working on a feature or bug, you create a dedicated branch for that specific thing. You'll make your commit, so your branch will be one commit off of master, right, and then you push it. One of the things this git-review script does is generate a Change-Id field and inject it into the commit message. That goes out to Gerrit, and Gerrit will track the change based on that ID in the future. So when someone gives you a minus one and says go fix this and this and that and the other, all you have to do is go into that branch, fix whatever it was, and do a git commit --amend. Instead of generating another commit on top, that modifies the previous one. Then you just run git review again, it updates, and that shows up as patch two on the same change. You can do this as many times as you want. If you're doing patch series, they'll all end up with IDs, dependent on each other, and all the fun rebase -i stuff does everything you'd expect it to do. You've just got to make sure you keep that injected ID the same. People get really mad if you submit duplicate stuff for the same thing and didn't keep that ID, because they see it over here and then you lose the history of review. And that's actually kind of important, because people have left a lot of comments: no, do this; no, do that.

Also, realize that the core teams on these projects are not always of one mind. So sometimes it's really useful when someone says "no, no, minus one, I don't like that," and you're like, "but, but — Russell said to do exactly that in this previous part of the review," and they're like... um, okay, right, that's fine. And it's helpful for us, too, to realize when we're being inconsistent and to try to be nicer to people about that. Honestly, the right thing to do, if we're doing it right, is: oh, you pushed revision three, and there were three other core reviewers that have reviewed this in the past — go look at what it was before and how they commented on it, and make sure the author is following what they asked.
Because we do try to work collaboratively like that. It's like: well, this was the recommendation — okay, did they follow the recommendation? If they did, I'm cool with it; I don't see any new issues. And the person who originally starts reviewing a patch may not be the person who eventually gets it all the way to the end; we definitely hand off work like that. It's just part of the nature of it.

There was a question here. That's true, yeah — so the docs have a couple of checks that are specific to them. They're sort of their unit tests, per se: the docs have to build, because they're all XML or SGML based, so they've got to actually compile out and be publishable to the site. So the checks are just a little different, but it's conceptually the same model. If you're working in the openstack-manuals tree, or the api-site tree, or any of those other documentation projects, it's conceptually the same approach — obviously we don't run the code tests, but otherwise it's the same.

Any other questions? One in the back. So right now, all of the core projects are Python. All of them. Well, I'll put in an exception: some of the tools, which aren't core projects but are part of our testing infrastructure, also have shell code in them. If you were to start a new core project that you wanted to incubate through OpenStack, I don't know that it's a hard and fast rule that it's Python, but this is where the community has emerged, so it might be a hard sell. And if you're adding features to existing stuff, use whatever that project is doing; otherwise it's just not going to fly.

There's another question here. Yep — so, I think official policy is two stable releases back. We have stable branches: if you check out the stable/grizzly branch of Nova, there's actually a separate stable maintenance team that has plus two on that, handling the stable branch. And I think official policy is two back, so stable/grizzly and stable/folsom are currently being maintained; stable/essex is sort of not — it means that if people really want to still do some stuff there, they might, but typically by three releases back, the stable maintenance team just doesn't care. Anyway — yeah, so this right here is a stable/grizzly change; that's in my review tool as well. The reality is that the volume in stable is much less, by nature, like it should be, right? In stable, we only bring back critical bug fixes, security issues, and whatnot.

Anything else? There was one there, and we'll take one here, and then we'll call it beer time. Yeah, so, all the unit tests — there is typically either literally a run_tests.sh, or a note in the README on how to run the unit tests directly, locally. So that will give you all the unit tests; in Nova, you'd run 5,000 tests, and they take about six minutes on a laptop. The Tempest-on-DevStack testing that we do in the gate, you can do locally as well. If you check out DevStack, you can bring it up, and DevStack actually lets you specify the git URLs that you want to pull each of the projects from, so instead of pulling from the default upstream master, you could actually point it at your own git repo — wherever it is, on the network or local — or make changes directly in the DevStack tree.
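For example, in your DevStack localrc you can do something like this — a sketch; the variable names follow DevStack's convention of the time, and the repo URL and branch are made up:

    # localrc: have DevStack pull Nova from my own repo and branch
    # instead of the default upstream master
    NOVA_REPO=git://example.org/me/nova.git
    NOVA_BRANCH=my-bug-fix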
Then run the Tempest tests. At the end of a DevStack run, we automatically configure Tempest for you to work on your machine — it's in a tree, just openstack/tempest — and you can run the Tempest tests from there; they'll take about 45 minutes. And when you run the unit tests, it'll run the style checker up front — or maybe it's at the end — but the style checker is baked into the unit test runs. So run the unit tests, then run Tempest, and that should basically give you what happens in the gate. There are a couple of edge conditions, around the way the docs get generated and everything else, that you might hit once you get upstream, but in reality, if you run all the rest of that stuff, everyone will love you — because honestly, half the changes that get pushed the first time clearly didn't run the unit tests, right?

And there we are, we're cut off. It's beer time. If you want to talk to me, I'm hanging out up here for a little bit after this.