So we'll give you just a quick overview of AthLinks, who we are, what we do. As was briefly mentioned, we do race registration, timing, and scoring. Spartan is one of our larger customers, so if you've run one of their races and registered with them, you've used our platform and our hardware, the orange tag that they hand out. The check-in process, through what we call Launch, has improved over the last year and a half or two years. They were the first beta users of that product, and they've given us a lot of valuable feedback, because they check in anywhere from 6,000 to 15,000 people for one event. They have to move a lot of people. Before, they would have 16 lines, and it would back up, and it would take you 20 minutes or longer to get through. I think it's down to under five minutes now on average. So we provide a lot of services for race day, and we have tons of products, and that's where I think a lot of this will help. Here you can see their bib. This is where you can find event results: you can collect all of your results into one place so you can get some kind of view of how well you're doing and how you're trending. And this is one of our timing boxes. We have 300 million race results, so even if we're not the ones timing or scoring the race, if you've ever done triathlons, bike events, or running events, there's a good chance you're in AthLinks. What we're going to talk about today: our evolution of use of GitLab, GitLab CI, developer engagement, and then more of the project side, which we've adopted more recently, and why we chose GitLab for our product agile tools, doing boards, epics, and issues in GitLab. And then any questions. So, our evolution of use: we've been on GitLab since 2015. We started with the Community Edition.
We really did this as a way to have a single source for all of our code. We were split across everywhere: people's laptops, GitHub, you name it. We chose GitLab, self-hosted, for that single location. In 2018 we started looking at moving to the Enterprise Edition to get us more of those agile tools and get out of what we were doing for issue tracking. Along with that, we got LDAP Group Sync, which has been really useful for us. It allows us to put any developer, support team member, anybody within our organization in our LDAP and automatically get them access to GitLab. They can start contributing merge requests, they can put in issues, and they can do that right away; we don't have to go create multiple accounts for them across many tools. We got the ability to do merge request approvals, so our QA team, or team leads for certain projects that we want to have a little bit tighter control on, can gate things with approvals. And then this is when we started doing our GitLab CI proof of concept, moving from Jenkins. We quickly discovered that we really wanted the full suite of tools for the agile, product side, so we went to the Ultimate edition of GitLab. We restructured the way our repos were laid out: we put everything under one main AthLinks group. We did that to let us add anybody with one LDAP group, and to allow environment variables at a group level so we didn't have to set them per repo. It just made sense for us, and as we'll show you later, we do it for our issue tracking as well. This is when we got our non-engineering people involved: our product team really using GitLab, our support team, legal, you name it. Across AthLinks, they're now using GitLab to contribute to our issues, give feedback, and have that full visibility. And this is when we really started the migration to GitLab CI from Jenkins, which is what we'll talk about. So, we use a wide variety of technologies in our environments; we have pretty much everything.
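As an illustration of what that group-level setup buys (the variable and script names here are hypothetical, not the actual AthLinks config): a CI/CD variable defined once on the top-level group is inherited by every project underneath it, so any repo's job can use it with no per-repo setup.

```yaml
# Job in any repo under the top-level group.
# DEPLOY_TOKEN is defined once as a group-level CI/CD variable
# (group Settings > CI/CD > Variables), so this job picks it up
# with no per-repo configuration.
deploy:
  stage: deploy
  script:
    - ./deploy.sh --token "$DEPLOY_TOKEN"
```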
We were in Jenkins, and we've moved most of the stuff out. We have a variety of languages being used, pretty much every operating system, and both mobile platforms, Android and iOS. So we're all over the place, and we use GitLab CI for all of it. We had 300 jobs in our Jenkins pipelines before; now we have fewer than 40 active ones. Those are just the ones where it doesn't really make sense to invest the time to move them right now, but we'll likely get everything over. One example we had fairly recently: a job we moved out of Jenkins was 322 lines across two pipelines, and it took about 17 minutes to run every time. In GitLab, we have it in 103 lines in a single pipeline, running in about nine minutes. That's about a 50% savings across the board, both in code and in time spent running those jobs, so that was a huge improvement. We're doing autoscaling in AWS for our GitLab runners. This allows us to keep one running all the time, so developers aren't waiting if nothing's going on, and if we get an influx of jobs all at once, more capacity just gets provisioned in AWS and we keep our build times down. Our average cost for running all of our builds is between about $5 and $15 a month. I looked at May 2018 to May 2019, and we spent about $100 to build everything that we've built, and we're doing many builds every day. So it's quite efficient for us. Next, developer engagement. This was a big thing we wanted as the DevOps team: from my point of view, we really wanted to get developers more engaged in the CI process, owning that pipeline, taking some of that burden off the DevOps shoulders and letting them have the full pipeline. So that's what we did. They own the code, as Anani can attest. They've started building out our GitLab CI files, which was made very easy by using YAML. GitLab CI is purely YAML, super easy for any of the developers to come in and pick up, versus having to learn Groovy for Jenkins.
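To give a feel for why the YAML was easy for developers to pick up, here's a minimal sketch of a `.gitlab-ci.yml` (the image and npm scripts are illustrative, not the actual pipeline that replaced those 322 Jenkins lines):

```yaml
stages:
  - build
  - test

build:
  stage: build
  image: node:12        # any Docker image can serve as the job environment
  script:
    - npm ci
    - npm run build

test:
  stage: test
  image: node:12
  script:
    - npm test
```

Each job runs in its own container, and the stage ordering replaces what would otherwise be hand-written pipeline orchestration code in a Jenkinsfile.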
As you saw before, it reduces the number of lines needed to get things done. We're running pretty much everything, or most things, in Docker, and GitLab CI works very well with Docker, so that was a really easy thing for us to adopt, and we built many custom things on top of it. We implemented the approval process: again, for some of those repos where we want a little bit tighter control, we can put in a gate for our QA team. Automated testing and deployment: this came up in the first talk. We're actually doing k6 and Cypress testing, all in the pipeline, and it all just works without any special configuration in GitLab CI. We're also doing SAST and DAST security testing in the pipelines for a lot of projects, and that's really just throwing a template into the GitLab CI file and it just works. That was a big help for us in getting some security checking. Future plans: the direct integration for Kubernetes. We do Kubernetes deployments through GitLab CI now, but we're doing that as a kind of custom-written thing where we're actually executing kubectl commands against our cluster to do deployments. We want the direct integration, which will get us the review apps that were discussed before. We really want that, and having it in Kubernetes is going to make it super easy; we just haven't prioritized getting it done. And then I'll turn it over to Christopher. So, these are not my slides. Shana originally intended to come and speak, but she's had other priorities, so I'm here to fill in. I've been with the company for four years, and we've gone through a ton of agile planning tools. First, we started off with Jira. I only had maybe two months of experience with Jira, but it was a bit cumbersome, and our scrum master really just wanted to move to Rally anyway. So we moved to Rally.
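The two pieces mentioned above, the SAST template and the hand-rolled kubectl deploy, could look roughly like this in a `.gitlab-ci.yml` (the manifest path and deploy image are assumptions for illustration; the `include: template:` mechanism is GitLab's documented way to pull in its managed scanning jobs):

```yaml
# Security scanning: including GitLab's managed template is all it takes;
# the DAST template works the same way.
include:
  - template: Security/SAST.gitlab-ci.yml

# Custom Kubernetes deploy of the kind described above, running kubectl
# against the cluster directly (image and manifest path are illustrative).
deploy:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl apply -f k8s/deployment.yaml
```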
And then we moved to VersionOne, because we felt like we wanted to track more information, and it had more features. But there was just really too much there for our developers, or support, or anyone else to get involved, and we really needed them involved. The things we wanted to provide with our roadmap were transparency, and giving a voice to other parts of the business, so that sales can tell us what features they want, and support can tell us, oh hey, this is impacting us the most, we've answered this ticket 100 times in the last week, please fix this. Just to give transparency and voice into what engineering is working on, so that the other departments can have input into what's going on. And the usability of GitLab for doing that was really, really great. GitLab is not necessarily all that opinionated about how you do issue tracking. That lets us do a mix: Kanban at some department levels, while the engineering team does two-week sprints, so that we have a much more trackable deliverable, whereas the DevOps team works off Kanban. But we keep all our issues in one place, so everything can be tracked. The power of how they do that is through labels and filtering, so you can get a board that shows you exactly the view you want to see, and we have examples of that afterwards if you have questions. Another thing that helped a ton was issue templates. Issue templates provide some structure for the other departments to know what they need to fill out. A common thing that they would often miss, and now they don't anymore, is: where are they seeing this problem? Is it only happening on Windows? Is it only happening on Macs? Do they think it's affecting everyone? This kind of information helps. And then there are prompts for: do you have a customer log? Do you have any information on how to actually reproduce this? And on and on. They don't have to fill out everything; it's just a Markdown file.
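A bug-report template along those lines is just a Markdown file committed to the repo under `.gitlab/issue_templates/`; a sketch might look like this (the file name and exact fields are illustrative, mirroring the prompts described above):

```markdown
<!-- .gitlab/issue_templates/Bug.md -->
## Where are you seeing this?
- OS / browser (Windows only? Mac only? Everyone?):

## Steps to reproduce
1.

## Supporting info
- Customer log attached? (yes/no)
- Screenshots:
```

Anyone opening a new issue can pick the template from a dropdown and gets the headings pre-filled in the description.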
But at least it starts them off thinking about how to properly fill it out, so that engineering will get to it quicker and we don't have to have so much back and forth before the problem is actually solved. And since all of our issues, all of our code, and CI are all done inside of GitLab, it provides a single view from "there's a problem" to "it's been merged and fixed." Very often, if support made an issue, they can tag it with a label that says the source is support. That way they can look at all the issues that they think are problems, and they can track how much work is being done to help them this sprint, and what they can expect to be done this sprint. So it's very valuable for them to watch. And I would like to encourage them more to dig into the issues, because an issue will have a merge request attached to it. Once we get to review apps and otherwise, they would also be able to click through from the issue to the merge request, and then to wherever it's hosted temporarily for review, so that they can test it and give feedback even before we merge it into master, if they want to. The one thing, I think, that comes with the flexibility GitLab provides is that metrics are a bit harder to do, just because: what dashboard would you provide for Kanban? How would you want to view your sprint burndown, and a couple of other things? So we've written a couple of custom tools that just hit the GitLab API and give us the information and graphs that we want, and we have a couple of those at the end as well. We also use their epics, which are different from issues. The epics are mostly to help the business use a Kanban style and approve things. Inside a business-level epic will be: how much money do you think this is going to make us? How long does engineering, as a wild-ass guess, think this is going to take? And then it allows us to link all of the milestones or smaller epics to it.
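As a sketch of the kind of custom metrics tool described (not the actual AthLinks script): the GitLab issues API returns a `closed_at` timestamp for each issue, so a sprint burndown reduces to counting, per day, how many of the milestone's issues were still open. The function below assumes the issues have already been fetched, e.g. from `GET /projects/:id/issues?milestone=...`; the function name and the example data are illustrative.

```python
from datetime import date, timedelta

def burndown(issues, start, end):
    """Remaining-open issue count for each day of a sprint window.

    `issues` is a list of dicts shaped like the GitLab issues API
    response; only the "closed_at" field (ISO 8601 string or None)
    is used here.
    """
    points = []
    day = start
    while day <= end:
        remaining = 0
        for issue in issues:
            closed = issue.get("closed_at")
            # Still open if never closed, or closed after this day.
            if closed is None or date.fromisoformat(closed[:10]) > day:
                remaining += 1
        points.append((day, remaining))
        day += timedelta(days=1)
    return points

# Example with two issues, one closed mid-sprint:
issues = [{"closed_at": "2019-05-02T10:00:00Z"}, {"closed_at": None}]
print([n for _, n in burndown(issues, date(2019, 5, 1), date(2019, 5, 3))])
# [2, 1, 1]
```

The resulting day/count pairs can be fed straight into whatever graphing library you like; the same pattern works for cycle-time and other Kanban metrics by reading different issue fields.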
The engineering one is really going to have a shorter summary of what we just told the business about why we want this. What purpose or problem is this going to solve? Here's a rough idea of how we're going to solve it. And then we can get UX or DevOps involved when they have concerns about how much this might cost on an AWS bill or otherwise. Then we just keep splitting it out until we have a deliverable milestone, and then something even smaller. This is very, very similar to what they were just talking about, where this might be MVP and MVP 2, and all of these are: what's the smallest thing I can deliver that provides value? So that we can fit as many of those as we think we can into a sprint, and by the end of it, we have something we can deliver. Speaking to the views: boards are kind of like Trello. You can create as many columns as you want, and they're just labels, so if an issue has that label on it, it's going to sit in that column, and it can sit in more than one column. And I think it was just in this last release that they added scoped labels (that's the word I want), so you can make a group of labels where only one can be applied at a time. That way an issue couldn't be Ready to Pull and In Progress at the same time if you happened to put both labels on at once. And so you can see we have a lot of labels, and they're more specific to who's going to be working on the issue at the time. Since UX likes to work with a Kanban-style flow, they'll have statuses like UX Needed, so they know they need to take a look at this, prioritize it, and discuss when it's needed by. And they'll have UX Doing, and then UX Done, just so they can keep track of all the columns they need to have done. And then it gives visibility to anyone else who cares about that issue, that it's ready now for engineering to look at and develop.
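Scoped labels use a double-colon naming convention: labels that share the text before the `::` form a mutually exclusive set, so applying one automatically removes the others. The label names below are illustrative, not the exact ones on the board:

```
workflow::ready to pull
workflow::in progress
workflow::done
```

Applying `workflow::in progress` to an issue that already carries `workflow::ready to pull` replaces it, which is exactly the "can't be in two states at once" guarantee described above.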
And so we're just now getting marketing involved in this, so that we get better about communicating all the new features we've deployed this month or otherwise, so that timers, race directors, and athletes will actually know about the work we're doing, because we haven't been the best about that in the recent past. By getting them involved much earlier, through labels and otherwise, we can track the work we need their help on, they can get involved earlier, and we can reduce the cycle between us finishing an issue and them actually communicating it out, because that was often a couple of weeks to a month, and just because of a communication problem. Yes? What is "status: SWAG me"? That's a label we don't use much anymore. That was for when product would like a tech lead or an engineering manager to give a wild guess at how many sprints an epic would take: I want this done, given what's defined here; give me a rough estimate, because if it's going to take three months, I don't want this feature, or we need to discuss a possible miscommunication around what's stated there and how hard you think something is. It was just a request to get a wild guess on an issue. We're right into Q&A, but to bring that up: this is one of the sample boards, and this is a sprint board. This first column is basically open issues that don't have any of the labels that would match a column here. And then this is everything that we've brought into a sprint, because most of these are scoped to a milestone, and that's what we use as a sprint boundary. Blocked is something I'll look at team-wide, just so I can see if there's anything I need to help unblock, and for that, I could literally make a board with just that one column, because that's the only one I want to see at the time. So these are really customizable.
Another thing we got into, to try to track sprint metrics better, was adding a Planned label, so that I knew an issue entered the milestone with the intention, from the beginning, of being done. If it's a high-priority defect, we slap an Unplanned label on it when we put it into the sprint, so we can know when stuff gets done. And then there's also a Sprint Stretch label: if you finish everything, you can keep going and do more work. I don't want to discourage bringing in more work.