August 2017 support functional update. We're going to talk about what's been on our brain, what's going on, and what you can help with.

We want to start with highlights. Summer is winding down and the blonde bleach highlights are fading from our hair, but we have great things to talk about here. Alex has joined in APAC. Daven has joined in the Americas. We have another offer out for an Americas Support Engineer, which is really, really awesome. We have a new ticket priority system that is helping us solve tickets better and faster and making sure we have fewer breaches. Our pending backlog has been resolved, and the services team now has GitLab.com production console access, which means we're going to solve those problems even faster. So a lot of momentum and velocity here. Very excited about that.

We're going to talk about some things. Jose is no longer on the support team. We have an update here on breaching tickets; if you remember, last functional update we dove super deep into that, and we're going to do the same thing today and get a little bit of an update. Services support: we're looking for a lead for this role. I'm not going anywhere, but we want to start shaping out services support, which is GitHost and GitLab.com. If you know somebody, we'll talk more about that later. And I want to do a 30-day look back.

I'm going to ask this question, so at any point during this presentation, if you're distracted and not listening to the words coming out of my mouth, going into your ears and into your brain, I want you to think about this question, not lunch, not dinner, but this question: which accounts create the most tickets? We use three different account sizes: less than $1,200 ARR, between $1,200 and $12,000, and over $12,000 ARR. We're going to look at a 30-day period. I have answers, you have questions. Let's see what they look like.

I've been sharing this graph for a long, long time. This is our support ticket backlog. If you notice, there's a huge drop there. That drop was me diving in and figuring out that we could close all of the pending tickets. We'd had a couple of instances where we closed chunks of them, and then we made a big move to completely wipe them out. If you're an astute viewer of the Functional Group Updates, you'll remember this page used to have an asterisk with a caveat that said the graph was skewed by the pending backlog. That is no longer the case; this is our accurate backlog. There are just under 600 tickets that support has to worry about. About 200 of those are on hold, which means that at some point we were trying to coordinate with some other department or team, and most of those are very old. So the next piece is figuring out what's on hold, whether we can close it, et cetera, which will give us more clarity here. Very excited about that.

And if you notice, the majority of this graph is relatively flat. I've been explaining to the team that we're holding the line, and I want you to take a second, because that phrase is very powerful. That's what it feels like right now, and the graph echoes it almost perfectly. We are not advancing, we are not falling behind; we are just holding the line. That emotion, that feeling, is exactly what it feels like in support right now. What we're trying to do is get more people in, because holding the line is a function of our team size.
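To make "holding the line" concrete, here is a minimal sketch of how a backlog graph like this is derived: the backlog is just a running total of tickets created minus tickets solved, so a flat line means inflow and outflow are cancelling out. The weekly numbers below are invented for illustration; only the roughly-600-ticket starting point comes from this update.

```python
# Backlog over time = running total of (created - solved) per week.
# "Holding the line" is what it looks like when the net is near zero.

weekly = [
    # (week, created, solved) -- hypothetical values
    (34, 120, 118),
    (35, 115, 117),
    (36, 140, 126),
    (37, 110, 128),
]

backlog = 600  # just under 600 open tickets at the start

for week, created, solved in weekly:
    net = created - solved
    backlog += net
    if abs(net) <= 5:
        status = "holding the line"
    elif net > 0:
        status = "falling behind"
    else:
        status = "gaining ground"
    print(f"week {week}: net {net:+d}, backlog {backlog} ({status})")
```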
I've shared this chart many, many times. If you look at the last 12 weeks or so, you'll see a couple of times where we dip into that backlog and start taking tickets out, and then in the last three weeks or so the backlog has built up a little. That's what I'm talking about: if you go back to, say, week 14 or 15, when we had a somewhat larger team, we were able to pull 25 to 30 tickets out of that backlog and make meaningful strides. So that's something we're aware of. We are actively and avidly hiring; I have about 15 resumes that I'm going through today alone. Shout out to the recruiting team for helping get those onto my desk. We're trying to get ahead of it, but you can see everything lines up and makes sense.

I want to take a minute to talk about breaching tickets. This was a huge, huge thing we focused on last time, and I want to do the same here. There's a push from the global nature of GitLab to go to a 24-5 schedule; that means the clock is ticking 24 hours a day, five days a week. That makes sense, and I think we can do it. The question is: what does it look like, what can we do now, and what is it going to take to get as close to perfection as possible? So last week I put our SLA clocks on a 20-hours-a-day, five-days-a-week schedule, and this week is our first time ever running 24-5 on the SLA clocks. I'll dive deeper into that and explain more as we go through so you'll be able to see it.

On this graph you'll notice the long-running blue line, which is our Premium SLA, and a green line that just formed, the EE Starter SLA. That's new; I've just started breaking it out. The last time I did this report we had two SLAs: Premium, which was known Premium customers, and Base, which was everyone who was not Premium. Obviously that's not enough fidelity to help us understand which products we're struggling to support and where the problems are. So this graph focuses on EE Starter and Premium.

Let's take a look. Week 31 was our pivot point. That's when we instituted two things. First, the new queuing policy, which I talked about in the last functional group update, so check that out to see those changes. Second, we made sure our wall time was accurate to 12 hours a day, five days a week, which is what we were advertising on the website. Before that, during all of the earlier period, we were using 20 hours on the clock, five days a week, which wasn't what we were advertising, so we said we need to make our numbers 100% accurate to what we advertise. Over the weeks after that we had a marked improvement; the new system was working. When we were counting the time we were advertising, we were hitting just under 90%, which is phenomenal. Very, very excited about that.

Then we have a dip. What do I think that dip was? One, I think we were actually fatigued on week 35. Week 36 was the first time I put 20-5 back in rotation, so we saw what that looked like: better than before, but still a dip from 12-5, which makes sense. And this week, as of yesterday when I pulled these graphs, we dipped just a tiny bit below.
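To make the clock changes concrete, here is a rough sketch of how much time the 12-5, 20-5, and 24-5 schedules count against an SLA. This is a toy model that assumes the covered window starts at midnight; the real Zendesk business-hours schedule anchors to specific working hours, so treat the `sla_minutes` helper and its numbers as illustrative only.

```python
from datetime import datetime, timedelta

def sla_minutes(start: datetime, end: datetime,
                hours_per_day: int, weekdays: int = 5) -> int:
    """Minutes on the SLA clock between start and end, counting only the
    first `hours_per_day` hours of each weekday (Mon=0 .. Fri=4).
    A toy model of the wall-time clock, not the real schedule engine."""
    minutes = 0
    t = start
    while t < end:
        if t.weekday() < weekdays and t.hour < hours_per_day:
            minutes += 1
        t += timedelta(minutes=1)
    return minutes

created = datetime(2017, 8, 28, 6, 0)   # Monday 06:00
replied = datetime(2017, 8, 29, 10, 0)  # Tuesday 10:00

for hours in (12, 20, 24):
    counted = sla_minutes(created, replied, hours) / 60
    print(f"{hours}-5 clock: {counted:.1f} hours counted against the SLA")
```

The same reply consumes far more of a four-hour SLA under a 24-5 clock than under a 12-5 clock, which is why doubling the coverage pulled the achievement percentage down.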
But if we compare to, say, week 34, where we had almost 90% SLA achievement, we're now at 66% with literally double the time on the board. So we doubled our coverage and dropped about 20 points. That's something we obviously want to change and make better, but it's going to elude us until the team size grows. And if you're wondering, this other graph has all of the new SLAs. It's very noisy, but I wanted to show it in case you were wondering what the other lines are doing. Base is still everything that isn't captured by EE Starter, GitHost, Premium, and Trials. The marching orders I've given the support team are to focus on Premium first, then EE Starter, and then we have a smaller team working GitHost, Trials, and Base. So that's the order we should see them in.

This next graph is really important because I also want to emphasize that there is no running, no hiding, no getting away from breaches. What I mean is that when tickets do breach, it takes us time to respond to them, and this graph shows the median time a breached ticket stayed in breach. We'd been trending down, which means we were doing well. The spike on week 36 means there were tickets that stayed breached long enough to pull up our median, and obviously we've gone way back down because we've addressed them and we're back into our normal flow. Effectively, any time there's a hiccup, this is the graph where we're going to see it and know about it. The data is visible, so there's no way to push anything under the rug. That's how we can see breaches from both angles, if you will.

This graph is also very important. It's specific to Premium, and you'll notice there are two lines: first reply time and next reply time. What's the difference? A ticket comes in; the time between the ticket coming in and our response is the first reply time. Anything after that, where the customer comes back to us and we reply again, is next reply time. It's important to measure both. You'll notice that on our upward climb the two were aligned and very close. Any time a gap forms between them, it implies we're favoring one kind of reply over the other, for instance taking new tickets instead of grabbing a ticket that has been running longer. Obviously we want these to be as close to each other as possible, and any gap shows where we need to be paying attention and focusing our energy. So that's something I'll have in my toolkit. The gap that formed is closing a little, which is good to see between first and next reply time.
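Here is a small sketch of how those two reply-time metrics can be computed, assuming each ticket is a list of timestamped events tagged by author. The event structure and the `reply_times` helper are hypothetical, not the actual Zendesk export format.

```python
from datetime import datetime
from statistics import median

def reply_times(events):
    """Yield (kind, hours) pairs: the first agent reply after the opening
    customer message is the "first" reply; every later customer-to-agent
    wait is a "next" reply."""
    waiting_since, first = None, True
    for who, ts in events:
        if who == "customer" and waiting_since is None:
            waiting_since = ts
        elif who == "agent" and waiting_since is not None:
            hours = (ts - waiting_since).total_seconds() / 3600
            yield ("first" if first else "next", hours)
            waiting_since, first = None, False

t = datetime  # shorthand
ticket = [
    ("customer", t(2017, 8, 28, 9, 0)),
    ("agent",    t(2017, 8, 28, 11, 0)),  # first reply: 2h
    ("customer", t(2017, 8, 28, 13, 0)),
    ("agent",    t(2017, 8, 28, 18, 0)),  # next reply: 5h
]

buckets = {"first": [], "next": []}
for kind, hours in reply_times(ticket):
    buckets[kind].append(hours)

for kind, vals in buckets.items():
    if vals:
        print(f"median {kind} reply time: {median(vals):.1f}h")
```

Run over every Premium ticket, the two medians give exactly the two lines on the graph, and a growing gap between them is how the imbalance shows up.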
So now I want to talk a bit about services support. Services support encompasses two software-as-a-service offerings: GitLab.com and GitHost.io. And GitHost is absolutely one of the biggest struggles on the services side. From the past two weeks, I've pulled three of the most pressing issues we've had. Upgrades seem to be working inconsistently, which takes time to debug and figure out. Backups are working, but not 100%, which is taking time to figure out. And we found a nasty bug where backups were getting reaped when we didn't even have a backup in place, which has thankfully been fixed, but that was scary. There are a lot of things that scare me about GitHost, and it's something we need to come up with a plan for and address.

That said, we're looking to hire a services support lead focused on both of those services and owning that area; my focus will then transition to EE support on-premise. There's a job description for the services side that will be fleshed out a little more with details about what the lead portion will encompass, but take a look at it. If you know anybody who has done SaaS support or led SaaS support teams, they'd probably be a great fit, so that's something we should look into.

Okay, OKRs. We'll take a quick second to talk about those. Logging console access on .com: this is something we previously had to ask for, so tracking was easy because we could see when we asked. Now that we have direct access, we need to be hyper-vigilant about logging it ourselves. And this one is really important: if you're on the product side, pay attention, because if there's anything we can do to improve the product, this is where you'll see it. There will be some edge cases where we can't improve the product and still need the console, but beyond those there may be product improvements that would help our customers and our team. So keep that in mind, Product. Reducing time to solve: that's going to be a huge struggle now that we have 24 hours on the clock, but we're trying it out. We want to see what this looks like; this is the future, and we're trying to prepare for it and build it out. Getting breaches to zero is priority number one there. And we're using Service Desk: we have it set up and in place, things are coming in, and it looks to be working properly, so we're going to update the handbook, deprecate Zendesk, and make sure the Service Desk workflow works properly. We'll be working with the security team (Brian) on that. That would be awesome.

So now, if you remember, I asked a question: which group creates the most tickets over the last 30 days? Think about it, get one in your head, because I'm about to show you, to the tune of data, data, data. Here we have a table with our cohorts. Let me break it down really quickly. For each cohort, how many unique organizations did we talk to in the past 30 days? At less than $1,200 ARR, there were a lot of them, and they created a lot of tickets. In the middle tier, there were fewer of them, and they created fewer tickets, though still a lot. In the largest tier, there were the fewest organizations, but averaged per org, they made the most tickets. This is driven by large customer onboarding. Even excluding the one case where a single account, a large customer being onboarded, generated 21 tickets, the large tier was still in the lead. So our large customers are creating a lot of tickets.
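As a sketch of how that table comes together, here is the cohort bucketing and the tickets-per-org arithmetic. The organizations and ticket counts below are invented; only the $1,200 and $12,000 ARR boundaries and the 21-ticket outlier come from this update.

```python
from collections import defaultdict

def cohort(arr):
    """Bucket an organization by annual recurring revenue."""
    if arr is None:
        return "unknown"
    if arr < 1_200:
        return "<$1.2k ARR"
    if arr <= 12_000:
        return "$1.2k-$12k ARR"
    return ">$12k ARR"

# One (organization, ARR) pair per ticket -- hypothetical sample.
tickets = (
    [("acme", 50_000)] * 21 +   # one large account onboarding: 21 tickets
    [("globex", 30_000)] * 2 +
    [("initech", 5_000)] * 3 +
    [("hobbyist", 600)] * 1 +
    [(None, None)] * 4          # GitHost / CE / gitlab.com / truly unknown
)

by_cohort = defaultdict(lambda: {"orgs": set(), "tickets": 0})
for org, arr in tickets:
    bucket = by_cohort[cohort(arr)]
    bucket["orgs"].add(org)
    bucket["tickets"] += 1

for name, b in sorted(by_cohort.items()):
    if name == "unknown":
        print(f"{name}: {b['tickets']} tickets (no org to average over)")
    else:
        per_org = b["tickets"] / len(b["orgs"])
        print(f"{name}: {b['tickets']} tickets / {len(b['orgs'])} orgs "
              f"= {per_org:.1f} per org")
```

Even with made-up numbers, the shape matches the table: the small-ARR cohort has the most organizations, while the large-ARR cohort produces the most tickets per organization.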
The unknown section there, 544 tickets, is some amalgamation of GitHost users, CE users, GitLab.com users, and then whatever's left over is truly unknown, meaning we don't know their organization; those should be people on trials and the like. We can dive in and get that data cleaned up, but for the sake of this discussion, for the accounts we do know, those are the answers. I don't know if anything there surprises you, but the thing that will help support the most is improving how we get new customers, especially large customers, implemented and their teams up and running. That's going to save a ton of load, because remember, most of these large customers are also Premium, so they have a four-hour SLA, they're constantly going to the top of our queue, and they're constantly getting a lot of attention. Like I said, in one instance there were 21 tickets in the last 30 days, which led by a huge margin; the average across the rest was somewhere around two.

So let's go to needs. We need West Coast coverage. We've started looking at when and where tickets are breaching, and we need West Coast Americas coverage. We also want to build out the EMEA team in preparation for getting more EMEA clients. So we want at least two in EMEA, and I'm looking for one on the West Coast. Very excited about that. And if you know somebody who has run a SaaS support team and is looking for new challenges, we have plenty of challenges, a great team, and a ton of support across GitLab, so they can run with things here. If you have anybody in mind, put them in touch; I would love to talk with them.

I want to take a second to give some shout outs: Ari Hunt, John, Stan, James, across departments, just helping support out in every way, shape, and form. Keep that up, and thank you to all the teams helping us.

So I'm going to go to questions. I'll stop sharing my screen, take a look, and we'll see what questions we have. Okay. Elsha asked: is Premium emergency support 24-7? Yes. If somebody on Premium has an emergency, they can page us at any time and they'll get a response within 30 minutes. So Premium emergency, yes. Mike says product doesn't have any visibility. Mike, if you click on the issue on that slide, you'll see every time support has used the console. You can take a look at what the tickets were, and we've included a summary as well that says, for example, we needed to debug this CI thing or that thing. That will help get you some clarity. Gregory Stark says the answer is "we don't know." The answer is actually that for the accounts where we know the org and the org size, we have that data. Anything else has no org and no ARR attached, which means that, in theory, most of our tickets come from people who haven't paid us, free users of one kind or another. That's something we're trying to dive deeper into.

Awesome. I think I've addressed all the questions from the chat, but I'll take a minute if there are any others; happy to address them. All right. Everybody, enjoy your Thursday. It's Thursday here in the Americas; I'm pretty sure it's a different day somewhere else in the world, so enjoy whatever day it is wherever you are. And let's have a great Thursday.