Thanks for attending my talk. I'm going to be talking about using GitLab to power front-end technical interviews.

A little bit about me before we get started. My name is Clement Ho. I'm a front-end engineering manager at GitLab, where I oversee the Monitor Health team, and I live in Austin, Texas, so that's two Texans for the lightning talks. My Twitter handle and GitLab handle is clemmakesapps, so if you have any questions about the talk or about GitLab, I'm always happy to answer them there.

I first want to get a gauge of the audience here. Can you raise your hand if you've ever participated in the hiring process at your company or a previous company? Okay, pretty much everybody. That's great. Raise your hand if you've ever evaluated candidates using a technical interview. Okay, almost everybody. Cool.

So imagine this: imagine you had a hiring target of doubling your team size, and all your interviews are remote. Welcome to GitLab. This is our company hiring chart over the past year, and this hiring target is challenging. Here are the three main problems. First, we didn't have enough interviewers for the pipeline of candidates coming through. Second, our process was inconsistent and biased. Third, it was difficult to measure whether we were raising the bar, and by raising the bar, I mean making sure each candidate that joins the team makes the team better.

We ended up using GitLab to solve these problems. That shouldn't be a surprise at this point, and I'm here to convince you that you should try it out, too.

So let's take a step back. What were we doing before? We used to have multiple techniques. Each interviewer might do a little bit of this and that, and some might stick with just one technique, so I'm going to go through each of them. One technique we used in the past was verbal technical questions: we would ask the candidate, hey, can you describe some technical concept, and have a dialogue there.
Sometimes we'd also do data structure questions as live coding exercises, where we would ask a candidate, hey, can you write a linked list, or can you write a sorting algorithm? And that was how we measured the candidate. Sometimes we would give them a prompt and say, hey, here's the description, please build a UI and screen share with me, and we'd evaluate them that way. And once in a while, when these things weren't sufficient, we would also send candidates a supplemental take-home project to see how they would perform.

So what are the advantages of these different techniques? With verbal technical questions, you're able to understand how they communicate, and communication is important. If they can communicate technical concepts, that's usually a good sign. With data structures, you get a consistent measurement and evaluation: I can talk to another manager or another interviewer and communicate, hey, this person wasn't able to do a linked list, they got stuck here, they didn't understand the runtime efficiency there. So it's pretty consistent. With a prompt, you're able to more accurately measure a candidate's ability to build something, and you're able to see the progression. And with a take-home project, you're able to build an environment that mimics working remotely more accurately than the other techniques.

Unfortunately, these techniques also have a lot of drawbacks. Verbal technical questions do not always equate to being able to code. I've interviewed candidates that could talk the talk but couldn't really write the code, and that's not a great situation for an engineer joining GitLab. Data structures may also not be a strong indicator of ability. Engineers from more traditional backgrounds, or who just graduated college, can shine really well here.
But someone more senior, who may be able to do a lot of great things but isn't as brushed up on data structures, may not shine as well with this technique. A prompt can be difficult to measure and compare: there are so many ways to do things, in front end specifically but in engineering in general. And the really hard drawback of the take-home project was that it actually disadvantaged people from underrepresented groups. This is backed up by research. Imagine a scenario where you're a single parent with kids: you may not have the opportunity to take a couple of dedicated hours after work to really focus on a take-home project, while someone from a more privileged background might be able to dedicate that time and produce something better. With diversity and inclusion being such big values at GitLab, this was just not a great technique for evaluating candidates.

So what did we change? How did we use GitLab? The first thing we did was standardize. We standardized on candidates reviewing and fixing a merge request on a test project. We created a test project for candidates to review and fix up, so we could see their thought process and their coding through that. We implemented an open source project called Project Seeder that goes through the process of seeding the project for each candidate: it exports the template, imports it, adds the user with an expiration date, and then triggers the MR pipeline for the candidate to review the merge request. All of this is powered by a GitLab bot user, so individual interviewers don't have to manage their own API keys, and it's all streamlined.

Once we set up the project, we send an email similar to this. This is just a template that we send out to candidates, with the merge request link and the email redacted, to give you a general idea of what it looks like. After we send the email, the candidate can click on the link.
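The seeding steps described above map naturally onto a handful of GitLab REST API calls. Here's a minimal sketch in Python of what each step could look like. The endpoint paths follow GitLab's public REST API, but the helper names, project IDs, and access level are illustrative assumptions, not Project Seeder's actual code; the sketch just builds the request descriptions rather than sending them.

```python
# Illustrative sketch of the four seeding steps, expressed as pure
# functions that build (method, url, body) request descriptions.
# Endpoint paths follow GitLab's REST API; everything else is assumed.

GITLAB = "https://gitlab.com/api/v4"

def export_template_request(template_project_id):
    # Step 1: trigger an export of the template project.
    return ("POST", f"{GITLAB}/projects/{template_project_id}/export", {})

def import_project_request(candidate_name):
    # Step 2: import the exported archive as a fresh per-candidate project.
    return ("POST", f"{GITLAB}/projects/import",
            {"path": f"interview-{candidate_name}"})

def add_member_request(project_id, user_id, expires_at):
    # Step 3: share the project with the candidate, with an expiration
    # date so access is revoked automatically after the interview.
    return ("POST", f"{GITLAB}/projects/{project_id}/members",
            {"user_id": user_id,
             "access_level": 30,  # Developer
             "expires_at": expires_at})

def trigger_pipeline_request(project_id, ref="master"):
    # Step 4: kick off the pipeline so the MR shows a failing status
    # for the candidate to investigate.
    return ("POST", f"{GITLAB}/projects/{project_id}/pipeline",
            {"ref": ref})
```

In practice each request would be authenticated with the bot user's token (for example a `PRIVATE-TOKEN` header), which is why individual interviewers never need API keys of their own.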
This is a screenshot of the trivial example. The merge request page should be pretty familiar, but this is what candidates are faced with to review and fix up as part of the technical interview.

We also standardized on a grading rubric to make our evaluation more consistent, because we don't want to be in a situation where unconscious bias, or our preconceived notions, favor one candidate over another. We also grade candidates on multiple categories so we can evaluate how they perform holistically. With the previous techniques, it was very easy to drill down into maybe one technology, like whether they knew how to use JavaScript, but with a whole project and a merge request to review, we're able to evaluate them across more dimensions.

We also built a dashboard on Periscope to get a feedback loop on how we do our interviews and see how we can improve. We're definitely a lot better than we were before, but there's still a lot of room for improvement. The dashboard shows how we've progressed and helps us understand where we're going.

As an example: before we implemented this new interview process, there were some things we wanted to test. Vue.js is a big part of the frontend at GitLab, and we wondered whether we should require candidates to have Vue experience. Only about 30% of the front-end population has Vue.js experience. Through this process, we were able to determine that candidates with Vue experience don't necessarily shine more than candidates with no Vue experience, and we were only able to learn that because we have reporting.

So I want to take you through a brief demo of what I've talked about and how it looks. Here I've created a GitLab Commit example group, and it's open, so you can check it out now or later.
I forked Project Seeder and set up some API keys there, so we don't have to deal with that. We have a template here, a trivial example, and then I have a subgroup with all the interview projects: imaginary candidates that we're seeding the interview for.

Let me show you the template. It's pretty simple; this is just GitLab Pages. It has a merge request, the interview test MR. In here, we're updating the website to say, hello, GitLab Commit SF. The pipeline is failing, and the candidate would have to fix that up.

With the Project Seeder app, we actually take variables from CI and configure the application that way. This is a really cool way to utilize GitLab CI; maybe you haven't thought of using it this way, or maybe you use it for other cool things, but this is how we do it. Here I've already prefilled it: I'm creating a new project, example two, and I'm adding the bot user that I created and the expiration date. I can just run this pipeline and it'll seed the project. The commit message is just the last commit on master, so it's not super relevant. We run the setup pipeline, and it goes through the process of exporting the template, importing it as a new project, and sharing it with that user, just as you would for an actual candidate.

For the sake of time, I've actually already created one. So let's go over here to example one. It's already created the project and the merge request, and there are changes here you can easily look at. An actual candidate would probably look at the CI and see: oh, there's a failing test, let's see what that's about. Oh, it looks like it's checking for Hello World. Since we changed the message earlier, we can just change this, get the test passing, and pass the interview. A trivial example, but hopefully that shows you the flow.

Cool. So why is this model better?
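To make the "configured through CI variables" idea concrete, here is a hypothetical sketch of what such a seeder pipeline could look like in `.gitlab-ci.yml`. The variable names (`CANDIDATE_PROJECT_NAME`, `BOT_USER_ID`, `ACCESS_EXPIRES_AT`) and the `seed.js` entry point are illustrative assumptions, not the real Project Seeder configuration; the point is that the interviewer fills these in on the "Run pipeline" page and the job reads them as environment variables.

```yaml
# Hypothetical seeder pipeline driven by CI variables (names assumed).
setup:
  image: node:12
  script:
    # The seeder reads its configuration straight from the pipeline's
    # CI variables, filled in when the interviewer runs the pipeline.
    - npm ci
    - node seed.js --project "$CANDIDATE_PROJECT_NAME" --user-id "$BOT_USER_ID" --expires "$ACCESS_EXPIRES_AT"
  when: manual
```

Marking the job `when: manual` means nothing is seeded until an interviewer deliberately runs the pipeline with the candidate's details.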
This model is better because we create realistic scenarios and a consistent way of measuring, so we get better candidates overall: for candidates that pass this technical interview, we're confident they're going to be successful at GitLab.

On realistic scenarios: we make sure our interview project matches our product architecture. GitLab is Ruby on Rails with Vue.js, so we use a similar stack in our test project, giving candidates a good perspective of what it's like to work at GitLab. We're also able to evaluate candidates holistically across multiple areas. Previously, we weren't able to test whether a candidate knows how to use Git, or whether they understand testing or pipelines; with this model, using GitLab for interviews, we can check for all of that. We're also able to understand how candidates problem-solve, because we allow candidates to use the internet while they're working through the merge request. For example, at GitLab we use Haml for some of our HTML templating, and not everyone knows what that is, so giving the interviewer the opportunity to see how a candidate looks for answers on the internet is also very valuable.

So you should consider using GitLab in your technical interviews, because you can implicitly evaluate candidates across multiple categories, and you can measure performance and make sense of the data. If you're already using GitLab for your tooling, you're just exposing candidates to what it's like to work at your company, so it's a more accurate representation. And you can make sure you're measuring testing proficiency, so you know candidates understand how that works before they join your company.

We have some resources I wanted to share. Project Seeder is open source under the MIT license; you're welcome to check it out, fork it, or contribute to it.
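As an aside for readers who haven't seen Haml: it's an indentation-based templating language that compiles to HTML, which is exactly the kind of thing a candidate might reasonably look up mid-interview. A trivial comparison:

```haml
-# Haml source ("-#" is a Haml comment); compiles to
-# <h1 class='greeting'>Hello, GitLab</h1>
%h1.greeting Hello, GitLab
```

A candidate who has never used Haml can pick up this much from a quick search, and watching them do so tells the interviewer a lot about how they work through unfamiliar tools.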
Although we don't publish our evaluation criteria or our rubric, the dashboard is publicly available, so feel free to check that out. It's not perfect, but we're continuing to evolve and improve it. If you have more questions, feel free to reach out to me on Twitter or find me during the break; I'm happy to answer any questions or dive in deeper. Thank you very much.