All right, welcome to our third tutorial session for the Q2 Hackathon. I'm happy to welcome Joshua Lambert, another team member from the product management team. So, Josh, I'll turn things over to you and let you introduce yourself and kick things off.

Thanks Ray, and thanks everyone for participating in our Hackathon. Really appreciate it; it's a great event. I am Joshua Lambert, a product manager here as Ray mentioned, and I'm currently working on our scaling team. Essentially, as GitLab grows and as the PM team itself grows, and we're actually looking to double the team this year, it makes sense to have a more cross-functional person focused on helping to improve the efficiency and effectiveness of our PM team. So I'm trying to do things like help automate tasks that were previously done manually and get better data feeds coming into the product teams, so we have better sensing mechanisms for what people are using, what problems they're having, and things like that. That's what I'm focused on right now. However, this is a recent change for me: up until a couple of weeks ago, I was working on our Distribution, Package, and Monitor teams, and so for the Hackathon it might make sense to focus there, on one of those stages, as opposed to my current role, which is a little more on the internal, ops-y side of things. This is our team page, if folks haven't seen it; you can get a sense of everyone in the company and what they're up to. I'll share some links as we go along so folks can get a sense of our pages, how they're structured, and how to find resources, since all of this is public information in GitLab, and of course that's how you can help contribute regardless of whether it's currently a Hackathon or not. So, cool, that's me. And the Monitor stage here would probably be a good one to go through and give a little tutorial on.
If folks aren't aware, we have all of our stages, groups, and categories laid out in our product handbook. I'll paste that link here as well, but if you just search for "GitLab product categories" you'll come right to this page from Google. This is where we list all of our stages and groups, and this graphic is a nice way to visualize the stages and think about how they fit together. We'll be talking about the Monitor stage, which is on the tail end of the DevOps cycle. You can think of it this way: you plan your sprint or your next milestone, you build it, you test it, package it up, deploy it out to staging or wherever you're deploying to, eventually it goes to production, you configure it, and then of course you monitor it to make sure it's running as you'd expect. So that's where we sit. The goal of this stage for GitLab is to increase the awareness and improve the user experience of these monitoring and operational tools in GitLab, and to sprinkle in analytics where they make the most sense. For example, you could take some of your errors from Sentry and imagine being able to see how many errors were caused by a particular merge request, either on the error tracking page or directly on the merge request itself. Those are great ways to provide that information contextually, wherever it makes the most sense, no matter what context you're looking at in GitLab, whether it's the planning phases, the merge request and review phases, or elsewhere. So that's where Monitor fits: you take your operational learnings and funnel them back into the planning phases, and hopefully improve things in your next iteration.
So that's what we're looking at here on the Monitor team. You can scroll down, get a look at all of these categories and how they fit together, and we'll jump down to the Monitor stage. You can see we've broken it up across two groups: our APM group, which is focused on metrics and tracing, and our Debugging and Health group, which is focused on error tracking, cluster monitoring, synthetic monitoring, incident management, and the status page. You can also get a sense of what we think the current maturity of each is from these tags. "Viable" means you can certainly go ahead and use it; "minimal" means we have our first iteration done, but there's some work to go before it has a more usable surface area, for example. We have definitions of exactly what those mean on our product maturity page if you want to learn more. But let's take a quick look at the product vision for Monitor. This is under direction/monitor; I'll share that link here as well. All of our stages have one, by the way: for each stage name here under "direction" you can go find them, and get a sense of where we're at and where we're going with our vision. Incident management is one of the features we're working on this year; we've actually just shipped the first step towards it, incident response, in 11.11, with the ability to create issues from the alerts you're getting from Prometheus. You can also get more detail on each of the categories we have here, along with documentation where we have some initial support out, as well as a vision page that goes into more detail on that particular category: why we're doing it, and how we plan to take it to the next maturity level, whether that's minimal to viable, viable to complete, and so on.
So that's a bit on the Monitor direction page. What we can do now, if it makes sense, is give folks a quick spin through the monitoring features, and along the way I'll talk about some issues that might make sense to pick up.

That sounds good.

Cool, all right, great. So I have a personal project here which, in the interest of time, I've already created. It's actually using the Java Spring template that we have in GitLab: I simply clicked New Project and picked our Spring project template, and you can get the exact same kind of project by doing the same thing. The other thing I did is attach a Kubernetes cluster to this project, as you can see here, by essentially clicking Add Kubernetes cluster and adding one pretty simply there. Those are the two steps I've done so far, as well as turning on Auto DevOps. Auto DevOps, if you haven't seen it (I think we had a session on it for a Hackathon last year), essentially provides an out-of-the-box, best-practice set of CI/CD templates to build and deploy your projects. It has some intelligence in there to detect the language type, build it, do the right thing, and then ship it. That's what I'm using here to get this deployed. One other quick note: I've also deployed our Ingress and Prometheus; you can simply click the Install button to get those going. This is the foundation for getting monitoring up and running in a relatively easy way: you attach a cluster, click Install for Prometheus, and you'll start getting things like CPU and memory utilization from the cluster.
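As a rough illustration of the language-detection step Auto DevOps relies on, the idea is to look for well-known marker files in the repository. This is only a sketch: Auto DevOps actually delegates detection to buildpacks, and the mapping below is illustrative, not GitLab's real list.

```python
# Sketch of buildpack-style language detection: check the repository's
# top-level files for well-known markers. Illustrative mapping only.
MARKERS = [
    ("pom.xml", "java"),           # Maven project, like the Spring template
    ("package.json", "nodejs"),
    ("Gemfile", "ruby"),
    ("requirements.txt", "python"),
]

def detect_language(repo_files):
    """Return the first language whose marker file is present."""
    for marker, language in MARKERS:
        if marker in repo_files:
            return language
    return None  # no recognised build marker

print(detect_language({"pom.xml", "src", "README.md"}))  # java
```

Once the language is known, the pipeline can pick the matching build image and carry on to test, package, and deploy, which is the "do the right thing and ship it" part.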
You also, of course, can change your window scope here to get more detail. But that's it, and I'm ready to start working with monitoring. For folks who would like to contribute, there's one other thing to keep in mind, if I can spell correctly, which is our GitLab GDK. If you're working on some of the Prometheus-related features, we have a documentation how-to on setting up Prometheus and getting it going for a given set of projects. That can help if you need Prometheus hooked up to the GDK to test your work. The GDK, for folks who might not be aware, is our GitLab Development Kit: it helps you install GitLab and start working with it locally on your machine. So those are some of the basics for getting started, if you're looking to contribute and need to test some of this stuff locally. Now let's take a look at some of the GitLab features around monitoring. I've pulled up our Environments tab here, and you can see I've got a deploy to production; this is the Auto DevOps feature that went and deployed it. You can see we have one pod running currently in our production instance. I can actually click on this. Remember I mentioned logging was one of our categories: you can see we have a direct pod log coming in from Kubernetes, so you get a sense of what that looks like. If I had multiple pods, I could also jump between the different pods in this little dropdown. That's a nice way to get access to these logs without having to run kubectl logs or use some other kind of service. In the future, we're looking to improve this by providing search capabilities as well as more persistent logging.
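To make the pod dropdown and the planned search capability concrete, here's a toy model of that log view. The pod names, log lines, and function are all made up for illustration; GitLab really fetches these logs from the Kubernetes API.

```python
# Toy model of the pod-log view: pick a pod (defaulting to the first,
# as the dropdown does) and, as a preview of the planned search
# feature, optionally filter its log lines by a substring.
def pod_logs(logs_by_pod, pod=None, search=None):
    if pod is None:
        pod = next(iter(logs_by_pod))  # default to the first pod listed
    lines = logs_by_pod[pod]
    if search:
        lines = [line for line in lines if search in line]
    return pod, lines

logs = {
    "web-6f7d9-abc12": ["GET /health 200", "GET /orders 500"],
    "web-6f7d9-def34": ["GET /health 200"],
}
pod, lines = pod_logs(logs, search="500")
print(pod, lines)  # web-6f7d9-abc12 ['GET /orders 500']
```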
We're also taking a close look at the great work being done with Loki, and how to have a nice, lightweight logging approach that we're hopefully looking to leverage here in the coming few months, so stay tuned for that. In the meantime, you can simply get these logs from Kubernetes itself. So that's pretty cool: you can get your logs there for that particular pod, or really any pod you have deployed. We also have a Monitoring button here which you can access for a given environment, which will pull up a monitoring dashboard, or you can simply click on the Metrics tab over here to see what these metrics look like. You can see here, for example, the error rate, the latency, and the throughput. What happens is that we have some out-of-the-box dashboards, some recognition for commonly deployed services, mainly around things like response metrics and system metrics. We took a look at the Prometheus server that was deployed, saw that it was picking up NGINX Ingress metrics as well as some Kubernetes metrics, and simply rendered these in the dashboard out of the box for you, so you didn't have to worry about doing any of this yourself. You can also change the timeline here to get a better sense of what things look like; we can zoom in to 30 minutes and take a closer look at some of these values. You can see our latency has gone up in certain cases. So that's a quick look at some metrics. You can also, of course, add your own metrics, as I mentioned before, whether they're business metrics or response metrics: you simply provide us with the Prometheus query that you'd like to use, plus some simple things like what the labels and so on should be.
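To make the custom-metric idea concrete, here's a minimal sketch. The field names and the `instant_query_url` helper are hypothetical (not GitLab's actual schema), but the `/api/v1/query` endpoint is the standard Prometheus HTTP API, and `rate(...)` is ordinary PromQL.

```python
from urllib.parse import urlencode

# Hypothetical shape of a custom metric the way the form asks for it:
# a PromQL query plus a title, axis label, unit, and legend.
def custom_metric(title, promql, y_label, unit, legend):
    return {"title": title, "query": promql, "y_label": y_label,
            "unit": unit, "legend": legend}

def instant_query_url(base_url, promql):
    """Build a standard Prometheus HTTP API instant-query URL."""
    return f"{base_url}/api/v1/query?{urlencode({'query': promql})}"

metric = custom_metric(
    title="Throughput",
    promql="sum(rate(http_requests_total[2m]))",  # example PromQL
    y_label="Requests / second",
    unit="req/sec",
    legend="Total",
)
url = instant_query_url("http://prometheus.example.com:9090", metric["query"])
print(url)
```

Dashboards then evaluate the saved query against the connected Prometheus server and render the result under the configured title and legend.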
And we'll add it here on the dashboard, so you can customize this to your own liking. In the near future we're also looking to add source-controlled dashboards, so you can define them in YAML in your repo and have different dashboards loaded for different services, in addition to these out-of-the-box detected ones. So that's a quick tour of the metrics. Oh, I also mentioned that if you have a higher tier of GitLab, you can take advantage of some of our alerting capabilities. For example, you can say that if the error rate goes over, say, 1%, go ahead and fire an alert, and you can see here that I'll of course start getting some alerts shortly, as we've exceeded that percentage on this deployment. I can remove the alert as well. So those are some examples of setting alerts. A couple of other quick features here as well: if we go into the Operations settings, you can see some other options around things like error tracking and incidents. You can see we can tell GitLab to create an issue, and pick a template if we want, when an alert is created. We've also configured our Sentry integration; I'll redact the screen, since I forgot that the token shows. As I mentioned earlier, we're looking to provide some contextual analysis of this data. The error tracking here, currently in our first iteration, is a bit of a list of the errors we've seen from the project, but we're also looking to leverage some of the cool Sentry integration that will try to find suspicious commits: for this error, this particular commit seems suspicious.
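The alerting example from a moment ago, firing when the error rate goes over 1%, comes down to a simple threshold comparison. GitLab actually translates these settings into Prometheus alerting rules, so this pure-Python version is only an illustration of the decision logic; the operator table and function names are made up.

```python
# Illustrative threshold check behind an "alert when > 1%" rule.
OPERATORS = {
    ">":  lambda value, threshold: value > threshold,
    "<":  lambda value, threshold: value < threshold,
    ">=": lambda value, threshold: value >= threshold,
}

def alert_firing(samples, operator, threshold):
    """Fire when the most recent sample crosses the threshold."""
    if not samples:
        return False  # no data yet: nothing to compare
    return OPERATORS[operator](samples[-1], threshold)

error_rates = [0.002, 0.004, 0.013]  # fractions; 0.013 == 1.3%
print(alert_firing(error_rates, ">", 0.01))  # True: we exceeded 1%
```

In real Prometheus rules you would also add a `for:` duration so a single noisy sample doesn't page anyone; this sketch omits that.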
We can then roll those up, and then, for example, on the merge request page or elsewhere, you can get the view that these errors seem to be caused by this particular merge request, have a more cohesive look at these things, and surface them where developers are already looking, in the MRs and so on, as opposed to having to jump into Sentry. So that's a quick tour of some of the monitoring features. One other key area for GitLab is what we call GitLab self-monitoring, listed here. What this is, really for all of our customers and for ourselves as well with GitLab.com, is that we want to provide a great experience in operating your GitLab instance. Whether it's GitLab.com or a GitLab deployment on a Raspberry Pi, we want to make that as good an experience as possible: give you a great out-of-the-box observability suite for your deployment, and hopefully also some proactive alerts in case there are upcoming problems you need to be aware of. Things like running out of disk space, or your Sidekiq queues getting a little too long so you're going to see some degraded performance, and things like that. That class of features is what we call self-monitoring, and we've been working on those over the last few months to provide a better experience in that regard. For example, with our release here in a couple of days, later in June, you'll have an out-of-the-box Grafana instance with a dashboard preloaded and configured, so you can just go there and get a sense of how your instance is performing and whether there are any problems, and we'll also start to email you if there are any alerts you should be aware of, again for things like disk space running out or other common scenarios.
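As a toy version of the proactive disk-space alert described above: the 10% threshold, function names, and message are all made up for illustration, but the policy, alert when free space drops below some fraction of capacity, is the whole idea.

```python
import shutil

def should_alert(total_bytes, free_bytes, min_free_fraction=0.10):
    """Alert when free space drops below the given fraction of capacity."""
    return (free_bytes / total_bytes) < min_free_fraction

# Pure-function checks, so the policy is testable without a real disk:
print(should_alert(total_bytes=100, free_bytes=5))   # True: only 5% free
print(should_alert(total_bytes=100, free_bytes=50))  # False: 50% free

# Fed real numbers for the current machine's root filesystem:
usage = shutil.disk_usage("/")
print(should_alert(usage.total, usage.free))
```

A real self-monitoring exporter would publish numbers like these as metrics and let the alerting rules from earlier decide when to notify you.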
We'll be working to pull this into GitLab itself, using some of the Monitor features I just showed you around error tracking, logging, metrics, and dashboarding, in the next couple of months as well. The current plan is to have a project within GitLab which you can use as your GitLab administration project, which will include all those metrics and dashboards but also serve as a home for operational tasks: how do we increase the disk space, who should it be assigned to, and so on. So that's some additional scope we have as part of the Monitor team. One other aspect of the Monitor team is that it can also be considered the GitLab charting team, in the sense that we're building out a lot of GitLab's charting capabilities. The line charts and single-stat charts you see are both built by our team, and they'll be reused across the entire product going forward. So if you look for, for example, the GitLab "Accepting merge requests" issues in the Monitor section, you'll probably find some around charting and things like that more broadly; those are our widget-type workflows. So that's a quick tour of the Monitor features and their surface area and scope. I'll pause there. I guess we're recording this, so it's not live and interactive, but Ray, any questions folks might have, or areas we should cover, David, as well?

Yeah, thanks for the overview, including the demo of monitoring. Just a couple of things I wanted to point out. Thanks for sharing the product categories page; one of the things I wanted to highlight there was that we have a list of people identified for each area, including product managers, back-end and front-end engineers, and engineering managers.
If you're working on an issue for Monitor, people should definitely feel free to ping any one of those folks in issues or MRs as they're working on them. The other thing I wanted to ask you, Joshua: on the direction page for Monitor, in addition to the planned items, I noticed you have a section called "other interesting items." I assume those are areas that, if any community members are interested, they could raise their hand and start contributing to?

Yeah, exactly. These are items we felt were interesting that, beyond the upcoming release plan, might be good for folks to work on. I mentioned the performance dashboards a little bit, and the logging features with Loki as well, so there's really cool stuff on the coming-soon list here that would be great for folks who want to get a jump on it. I also have a list of issues that are smaller in scope that would also be great for folks to work on, which we can perhaps talk through after this initial high-level overview.

Cool, yeah, that's a great slide; I'll probably post it on the Hackathon page later in the day.

Okay, great, I'll get that published as well.

Cool. But yeah, even beyond the Hackathon, if people want to start picking up these issues after the event, thanks for highlighting those.

Yeah, we have an "Accepting merge requests" label as well. This is actually a pretty broad label, and unfortunately there are a couple of iterations of it, so you have to find the right iteration of "Accepting merge requests," but if you combine it with the Monitor label you'll get a set. And I picked the wrong label here; let me get the right one. Hold on, we should fix this. Yeah, it's one of the green ones, I'm not sure which one.
Yeah, this gives me... this is a very different list. But you know, these are issues that have generally been triaged and are good candidates for potentially working on. If you want to work on one of these, just grab the issue if you're passionate about it. If it's not entirely clear, some might not have a totally fleshed-out description, just ping the product manager (you can find them from the stages page as Ray mentioned), or the engineering manager, or the person who opened it, and we can iterate on what the design should be, and then you can go forward and make an MR. But most of them should be pretty well defined if they have that label on them.

Josh, a quick question: I've noticed that there's "monitor" and there's the other "devops::monitor" label. Do you favor one over the other?

Yeah, that's a great question. To your point, "devops::monitor" is probably the better one to key off of: the stage label is going away in favor of the devops-scoped label, so going forward that's the one that should be more heavily utilized. They're both applied right now roughly equally, or should be, but going forward this one will phase out the "monitor" stage label. So for long-term usage you should use this pair of labels, and if you're also passionate about a particular group, you can see we have the APM group label as well as the Debugging and Health group label, so you can use those to refine further to that particular group within the stage.

Okay, thanks.

Yeah, absolutely. But again, as you can see here, we have quite a number of them. A couple that the team highlighted as relatively easy to work on and relatively fleshed out: one is in the self-monitoring category. As I mentioned before, we are sort
of bundling Grafana with GitLab, and we'll have it on by default here in 12.0, along with having it hooked into GitLab OAuth as well. What would be great is, in the admin workflow (I can't show it here since I'm not a GitLab.com admin), there's a little wrench icon; you go to the wrench and you see this Monitoring tab, and the idea is to have a metrics link there you can click on, which would then take you off to Grafana, where you can of course take a look at that dashboard. Pretty small change, I think, on the admin workflow, but a great way to help drive awareness of that embedded Grafana instance that's going to be available by default in 12.0. So that should be a relatively easy one to add, and thank you, Dale, for opening it up. Another quick and easy one would be some internationalization support in Monitor. Right now we have "Average" and "Max," for example, as the legend names, and these currently aren't available to be translated. It'd be great if we could update the widget to support localization. You can see here we actually have a GitLab UI project: as I mentioned, the Monitor team is building a lot of those widgets, and this is our little widget library. If you go over here, and I'll share this link, you'll see our widget library right here in the GitLab UI project. This is where a lot of the charts are defined, which are then reused across GitLab. So that's the internationalization legend one, which again would hopefully be pretty low-hanging fruit for folks to jump on. The other two are more on logging, and really about trying to increase the discoverability and usability of logging. We did that first iteration of logging where we had it, if you remember seeing it, underneath, really as
part of deploy boards. This feature here is what we call deploy boards, and folks might not be aware they can actually click on this box; for the first iteration that was the easiest way to build out the workflow. But it would be great if we could add logging to the Operations tab here, so you'd have it alongside Metrics and Tracing, and you could jump right in there without having to go through deploy boards to pull up that workflow. The first step would be to take the dropdown you see here and add an environment dropdown as well, so you could jump between production, staging, or a review application easily from this one single workflow. Once we have that, we can then hopefully simply add the little sidebar item for logging. It would default to production, and if that environment doesn't exist we'll just let you pick one: you'd choose the environment from the dropdown, then home in on the preferred pod as well. So those are a couple that might be great ones to get started with, which hopefully aren't really heavy lifting. And of course you're also more than welcome to grab any of the others here; I'll paste the filter query here and in the Gitter chat as well for folks to jump in and take a look.

All right, thank you. Awesome. Well, thanks for the great overview, and especially for highlighting issues where we're looking for help from community members. David, anything else you want to cover? Looks like we've got about four minutes left.

No, just perhaps a quick question: what skills should someone have to contribute to Monitor? It seems like a wide range; I'm not sure if there's one way to answer that.

Yeah, so I think it really depends on the feature
area. The self-monitoring stuff, I think, really just requires some fairly easy front-end work: add the item here in the list and have it open up a given URL. For GitLab we're going to default to really the root URL, then a dash, and then I think it's something called Grafana, and basically, if you click on that new item, it just goes there; that's really all we need you to do. So that one, I think, should be relatively open to anyone who has some basic front-end skills, for example, which is one of the reasons we picked it. This one is similar as well. I can't remember if this is an SVG diagram or not, but it might be a little more involved, since you're dealing with the GitLab widgets themselves; it might be more applicable for someone, I'm not sure if it's Vue or not, who knows our front-end toolkit and also has some understanding of how we localize strings in GitLab. And then these two for logging would probably be good for someone who is either looking to try Kubernetes or is familiar with Kubernetes, because you'll need to have a Kubernetes cluster up and running in order for this to work. That's because we currently ask Kubernetes for the logs for the pod; hopefully in the future we'll work with Loki, but for right now it's a Kubernetes-only offering, and so you'll have to have a cluster, either Minikube or something else. The documentation I mentioned earlier around the GDK can help get you going. It might be helpful to have some familiarity with Kubernetes, although we do use a library, a Kubernetes client, for this, so much of the API work is already there. But I imagine for testing and things like that you'll probably still need
to have an environment to make sure things are working as you'd expect. So it really runs the gamut: the self-monitoring stuff requires not much in the way of Kubernetes experience, just more general front-end and perhaps some back-end work, and as you get into things like logging you might need some of those extra tools. It is worth noting that although in the demo we utilized the Kubernetes-deployed version of Prometheus, we actually support allowing users to specify any Prometheus URL. So if you wanted to work on some of the dashboarding features, you don't need to have Prometheus deployed in, or even running in, Kubernetes: if you haven't deployed it to a cluster as GitLab-managed, we'll simply ask you to give us the Prometheus URL here, you can save it, and the same stuff will work. So you might not need Kubernetes at all unless you're doing something specific with it, like some of the cluster alerts and things like that.

Thank you.

Yeah, of course. And again, thanks everyone for the Hackathon, and thanks Ray and David for setting this up; this is awesome. If need be, as Ray mentioned, ping us, ping me, on any of the issues and things like that; we're happy to help. Thanks everyone for your enthusiasm and support of GitLab; it's great, and the community's enthusiasm is really a lot of what helps drive us along. So thanks everyone for paying attention and for looking to contribute to GitLab.

Cool, yeah, I couldn't have said it any better. And for people watching the recording: if you have any questions after you view it, feel free to post them on Gitter, and David and I will be happy to answer them, or ping
somebody like Josh if we don't have the right answers. All right, thanks again for your time, Joshua; we'll of course let you go. Thanks everyone.

Yeah, thank you. Bye.