All right, so for our last tutorial of the day, we have a product manager, Daniel, from our Configure team. So Daniel, without further ado, I'll turn things over to you. Just tell us a little bit about yourself and tell us everything about Configure.

Thank you, Ray. Okay, can you see my screen okay there with the full view of the presentation? Yep. Great.

All right, so we'll do an overview of the Configure stage of the DevOps lifecycle here at GitLab. My name is Daniel and I'm the product manager. Along with a great team of backend, frontend, and UX engineers, we deliver on a couple of categories that I'll go over shortly.

So here's the DevOps lifecycle. As viewers may have heard today from prior presentations, it comprises 10 stages. We sit in the Ops part of the DevOps lifecycle and we deal with everything that has to do with infrastructure: provisioning infrastructure, more specifically Kubernetes and everything related to it. We deal with operations, meaning features that empower operators, things like ChatOps and protected environments. And we also deal with the GitLab offering for serverless. What we have right now is the Knative deployment to Kubernetes that allows you to work with serverless workloads, and we'll talk about that a little bit as well.

All right, so moving right along, let's chat about the specific categories. First and foremost is Auto DevOps, which is a very popular category here at GitLab. Auto DevOps is basically a CI template that provides modern, out-of-the-box CI workflows that are very tightly integrated with Kubernetes. You get things like review apps, automatic testing, automatic security features, automatic container scanning, and deployment, all out of the box. When you pair them with a Kubernetes cluster, it's very powerful.
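If you want to try the workflow described above on a single project, one minimal way in (as of GitLab 11.x) is to include the Auto DevOps CI template in the project's `.gitlab-ci.yml`; the template name below is the one GitLab ships:

```yaml
# .gitlab-ci.yml — opt a single project into Auto DevOps
# by pulling in the template that GitLab ships.
include:
  - template: Auto-DevOps.gitlab-ci.yml
```

Pairing this with a connected Kubernetes cluster is what unlocks the review apps, scanning, and deployment jobs mentioned above.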
What we've seen with users out there is that it's hard to get started, and there's kind of an explosion of projects right now. Providing modern CI workflows for all of your projects can be hard, but with Auto DevOps it doesn't have to be.

One of the things we're thinking about for Auto DevOps is that we want to make it as smart as possible: we want it to run only when the necessary components are there, and only on projects where it's relevant. For example, Auto DevOps leverages Heroku buildpacks. If you have a Dockerfile in your project, we'll use that; if there's no Dockerfile, we'll use Heroku buildpacks. And if we know buildpacks only exist for certain kinds of projects, we only want to run on those projects, or on projects that have a Dockerfile, things like that. So we're working to make Auto DevOps smarter and cover more use cases for the projects that are out there.

The second category is the Kubernetes integration. What we aim to do here is make all the things that are hard about Kubernetes easier: being able to create a cluster from within the GitLab GUI, and being able to deploy applications into your cluster in the form of Helm charts as a single-click operation, so you don't have to concern yourself with a lot of YAML files and configuration; we take care of that for you. Some of the things we're looking at there are the ability to upgrade apps that are already running in your cluster (that's something we have for 11.8 and 11.9) and the ability to uninstall those apps. Things like that are on our radar, as well as covering as many use cases as possible. At the bottom of this slide, we have a link that will take you to all the categories, where you can drill down into the epics and the issues that we have planned.

The third one is serverless.
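(Stepping back to Auto DevOps for a second: the Dockerfile-versus-buildpack selection just described can be sketched in a few lines. This is an illustrative sketch of the rule, not GitLab's actual implementation.)

```python
from pathlib import Path

def build_strategy(project_root: str) -> str:
    """Mirror the Auto DevOps rule described above: if the project
    ships a Dockerfile, build from that; otherwise fall back to
    Heroku buildpacks."""
    if (Path(project_root) / "Dockerfile").exists():
        return "dockerfile"
    return "buildpack"
```

The "smarter" behavior discussed above amounts to extending checks like this one, for example only running buildpack builds on project types a buildpack actually supports.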
Our serverless offering is new, and it builds on the ability to deploy Knative into a Kubernetes cluster. Then, with a couple of configuration steps, you can deploy your app. If you have a serverless app, or an app that you want to run in a serverless fashion, it can take advantage of the features that come built into Knative. That would be scaling: scaling is very well done in Knative. Out of the box, you can scale up and down to zero without any configuration. And you also have Knative Serving and Eventing that you can take advantage of within your app.

The second thing that we offer for serverless is the ability to deploy functions. When you deploy functions, you only have to define a couple of YAML files in your repo, plus the function files themselves with your function code. We'll deploy those into your Kubernetes cluster via Knative and give you the URL where those functions are being served.

In the serverless space, we're looking to do quite a bit. This is something we have produced recently, so there's quite a bit of ground for us to cover. One of the things we're thinking about is that we want to abstract as much as possible away from the user: we really don't want you to concern yourself with defining YAML files and configuration files. If we know what those are going to be, we want to do as much of that for you as possible. So we're working on abstracting those layers.

The fourth category is ChatOps, and that's the ability to exercise actions on your infrastructure. Right now our ChatOps integration is fairly minimal: it allows you to run Slack slash commands that in turn trigger a GitLab CI job. You can configure those jobs to be anything you want them to be. But we want to be more opinionated on that, if you will.
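On the CI side, the flow just described (a slash command triggering a CI job) looks roughly like this. The job name and script here are made up for illustration, and the script assumes a runner with `kubectl` access:

```yaml
# .gitlab-ci.yml — a job that only runs in pipelines triggered
# from chat, e.g. a Slack slash command asking it to run.
deploy-status:
  only: [chat]    # restrict this job to ChatOps-triggered pipelines
  script:
    - echo "Current deployment status:"
    - kubectl get deployments --namespace production
```

Being "more opinionated" would mean shipping useful jobs like this out of the box instead of leaving every command for the operator to write.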
We want it to work out of the box for as many use cases as we can, and that's something we have planned for this year. Right now it's minimal and we want to expand it to cover more use cases.

The fifth category is Runbook configuration. (And I guess we can cover questions at the end of the categories here.) Some of you may be familiar with JupyterHub and Jupyter notebooks. Basically what we want to do here is give operators the ability to create runbooks that contain code snippets, so you can run things like queries on your database, or any kind of action on your infrastructure. We're leveraging Nurtch, a company that has an open-source library called Rubix, with pre-built actions for both AWS and Kubernetes that make it very simple to write those things in your JupyterHub notebook. What we have right now is minimal; where we want to take it is to add more security, and also the ability to version control the content of your runbooks.

The next category is PaaS, platform as a service. You may be familiar with offerings like Heroku that allow you to provision compute for your app. That's where we want to go with this, probably in conjunction with Auto DevOps: if we see that you don't have compute configured for your project, we want to automatically do that for you. We're thinking of maybe having a free tier, and after your free tier runs out, offering to let you take charge of that compute yourself. We want to show you the power of pairing your project with some compute, and we aim to make it easy.

Next we have cluster cost optimization. Our Kubernetes integration has become very popular.
Right now we see that some of our customers are hiring full-time people to manage their infrastructure costs. There are plenty of tools out there that let you monitor the usage and cost of your Kubernetes clusters, and we want to build that functionality right into GitLab. We're thinking that at a minimum we'll start with letting you know what's underutilized in your clusters and what efficiencies you can gain, in an easy way. Down the road, we picture maybe showing you dollar amounts: what you're paying for, how much money you could save, things like that.

The very last one is chaos engineering. This is something that was made popular by Netflix, and it's basically injecting unplanned outages into your infrastructure to test how resilient your configuration is. Netflix started this by just taking one instance down, then they went higher than that level, all the way up to taking a whole AWS region out. We plan to do something similar, of course leveraging our Kubernetes integration. If you enable chaos engineering, it perhaps could start with the minimal unit, a pod, and go all the way up to a cluster. We want to make that easy, build it into our Kubernetes integration, and make it configurable and easy to use.

As I mentioned, there's still the link there at the bottom of the slide. You can find out more information there and drill down all the way into the issues that are planned for both the short term and the long term.

Cool. Yeah, I think the question may have resolved itself, but Erin can chime in here. Originally the question was on Kubernetes as a service, but I think your discussion on platform as a service may have addressed it. Erin, just correct us if we're mistaken.
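The pod-to-cluster escalation described above can be sketched as picking one random victim at a chosen blast-radius level. Everything here (the inventory shape, the level names) is hypothetical illustration, not a shipped GitLab feature:

```python
import random

# Hypothetical blast-radius levels, smallest first.
LEVELS = ("pod", "node", "cluster")

def pick_victim(inventory, level="pod", rng=random):
    """Choose one random target at the requested blast-radius level.

    `inventory` maps each level to its candidate targets,
    e.g. {"pod": ["web-1", "web-2"], "node": ["node-a"], ...}.
    """
    if level not in LEVELS:
        raise ValueError(f"unknown blast-radius level: {level}")
    return rng.choice(inventory[level])
```

Starting at `"pod"` and only escalating once the smaller experiments pass mirrors how Netflix grew from single instances to whole regions.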
And yeah, a good point to bring up here is that we're not interested in making money on compute; what we want is the power of enablement. We want to show you what's possible and how powerful it is. Initially we'll start with Kubernetes, and once we have exhausted the free tier that we can provide for your project, we want to offer an easy way to hand that responsibility over to you, and maybe the cloud of your choice, and migrate all of those resources there so you don't have to start over from scratch. But it's still early: we're doing discovery right now for PaaS on what the best solution is there. We're thinking it may be a multi-tenant Kubernetes cluster, or it may be some flavor of Knative that will automatically use those resources in a smart way. So it's still early, but we do plan to have an MVC this year and then we'll build on top of that.

All right, so let's move on here. Also missing one, there we go. This is, at a very high level, what we want to focus on for 2019. First, we want to add depth to our flagship features: the Kubernetes integration and Auto DevOps are at a very usable state, and we want to add a little bit more depth so we cover more ground and more use cases, and make those features smarter. The second thing is that we want to empower operators. We want to make difficult things easy, such as standing up infrastructure, making changes to that infrastructure, and running downtime scenarios through runbooks, and we want to make those things easy to set up and configure. And the last thing is that we want to focus on the developer experience when it comes to infrastructure. Things like setting up clusters or deploying a Helm chart to your cluster: we want to make that seamless.
So we really want to have a good developer experience when it comes to infrastructure; that's the third part of our focus.

I also wanted to go over the major community contributions that we've had. It's worth noting that our team has merged at least one community MR per release for the last 12 weeks, and I linked some of those here that you can use as examples. I'll drop the presentation link in the chat so you can reference these at a later time, and Ray, maybe we can add that as a link if we publish the presentation later. We see that Auto DevOps is very popular, and we have a lot of features there that are up for grabs and accepting merge requests.

On my next slide, I have three resources that I think may be useful. The first one is the issue board with all of the Configure issues that are accepting MRs right now, so you can take a look at those. The second one is an overview of our roadmap, so if you see an issue that may be attractive to you, you know when we're working on it. Definitely take a look at that; when you look at an issue, you'll see if it has a milestone assigned, and you can ask any questions right on the issue. We love feedback, so please feel free to chat with us there. And lastly is my email: if you have any questions about any of the issues, or you want to start a conversation about something that's not out there yet, please drop me a line and I'll be glad to chat with you.

All right, that's all I had for the overview of Configure. If there are any questions, I'll be glad to take them. So Ray, are we at a good stopping point here? Is there anything else that you want to cover? I think you're on mute, Ray. Can you hear me okay? I can hear you. All right, sorry about that, a little Bluetooth issue. So yeah, thanks for your talk, Daniel. Just a couple of things I wanted to bring up.
I think for each of the product managers, we wanted to highlight an issue to encourage people to work on during the hackathon. I think the one we had was related to the Ingress deployment not supporting non-IP address formats.

Yeah, that's a great one, so let's take a look at that. Currently, let me see, that's EKS deployments. Let me just look for that issue. I can post it in the chat window too. Oh yeah, please, that would be great. Here we go.

All right, so as you know, right now when we deploy a cluster, we give you the ability to deploy Ingress to that cluster, and once Ingress is deployed, we show the IP address that was provisioned for that particular cluster. Maybe I can show an example of this. When you deploy a cluster to AWS EKS, we are currently not updating that field, so you'll see that question mark just remain there. The reason is that AWS is not providing IPs; they're using a full DNS name. What we see in the sample here is a full DNS address. If you query for that field directly on the command line, you will see the right DNS name, but because we're specifically looking for an IP, we're not populating the field properly. This is something small; we'll probably have to look for an external address or domain, so it may just take a little bit of digging into the code, but there's already some conversation on the issue that may help you. I think it's a hostname that's being returned here, and it's basically about updating the frontend so it supports both IP and hostname. So this is a very straightforward one. I think the AWS EKS folks are more used to working on the command line, so it's not super urgent, but we've seen some appetite for it; it has three upvotes.
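The fix sketched above boils down to a fallback from `ip` to `hostname` in the service's load-balancer status. This Python sketch of that logic is illustrative only (the function name is made up, and GitLab's actual code lives in its Ruby backend and JavaScript frontend):

```python
def external_endpoint(service_status):
    """Return the externally reachable endpoint of a LoadBalancer
    service. Some providers (e.g. GKE) populate `ip`, while AWS ELBs
    on EKS populate `hostname`, so check both fields."""
    entries = service_status.get("loadBalancer", {}).get("ingress", [])
    if not entries:
        return None  # load balancer not provisioned yet
    return entries[0].get("ip") or entries[0].get("hostname")
```

With only the `ip` lookup, the EKS case falls through to the permanent question mark Daniel describes; the fallback handles both providers.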
And it should be something simple to implement, and since there's a simple workaround, people maybe don't mind going through that workaround too much for now. So this would be a great simple issue to get started contributing to the Configure team, for sure.

Yeah, I think this one's still up for grabs; no one's spoken up to work on it yet. If people are interested, they can mention me or Daniel and we'll be happy to assign the issue to you. And I'm sure if you have more detailed questions you can type them in there; there are plenty of people, both within GitLab and from the wider community, discussing the issue, so I'm sure they'll be happy to answer your questions or address your comments.

Great, cool. One other thing while you're sharing your screen, Daniel: can you show us your vision page for Configure? Absolutely, and that's something that I'll go ahead and link in the presentation because I think it's definitely useful to see. Yeah, thanks for sending the link to the presentation. I don't think you heard me because I had mic issues, but I'll definitely post the link to your presentation on the hackathon page so that people can reference it. Oh, great, yeah, thank you.

So let me post this link, and this is going to be useful not only for the Configure stage but for everything. On our website, about.gitlab.com, we have a direction page, and on that direction page we have the DevOps stages. It's about halfway down, or maybe three quarters of the way. Here you will find the vision for every one of our categories. You can scroll to the right and you will see that we have the vision for each one of our 10 stages. Configure sits right here. Maybe I'll drop this link in the chat as well, because I think it's useful not only for Configure but for every single stage, and I'll add it to the presentation too.
Our vision page also has the roadmap overview that I linked in the resources on the presentation, and that's what our roadmap looks like in the short term and what our focus is; this is from January, so probably not much has changed. It also has each one of the categories that I just talked about, and it links to each of the things I mentioned, so when you click into each one you get to the epic. The epic has a description of what it is and what we're working on next, and if you scroll down, it has all the issues that are linked to that epic. If you want to know what we're working on next, you can look under the "What's next and why" heading and that will tell you.

Cool. Yeah, I get this question from people about our roadmap, and they're quite shocked when we say our roadmap is public. We like to do this in the open, and I just showed the link; I think that's refreshing for a lot of people, but this is how we work in open source at GitLab. Yeah, that's a great thing: we work out in the public, and that allows the community to interact with us, which is very helpful on many different levels. That's something, I guess, very unique to our company. Right, I think we've all worked at places where if you want to see a roadmap you need to file like three different legal forms to get approval to see it, but that's not the case here.

All right, so let me put up the vision for all the stages, and then we'll put the vision for the Configure stage here as well so people have access. All right. So I'll work on cleaning that up. Real-time updates, which is great. Oh yeah, that's how we like to work here for sure.

Yeah, awesome. So I'm looking at the chat window to see if people have any other questions. If not, as you can see here, you have Daniel's email address, and people know how to get a hold of me as well.
So for people who weren't able to join and are watching the recording: feel free to ping either one of us and we'll be happy to get back to you. All right, well, thanks Daniel, not only for leading the session but for all your preparation; I appreciate it, and I'm sure the community does too. Have a great rest of your day. You as well. Thanks everyone. Thanks Ray. All right, thanks. Bye.