Yeah, it's 30 seconds past, so I'm going to get started. Welcome to the first group conversation for the Configure team. We are under the Ops teams at GitLab. We work on all of the features related to the Configure stage in the DevOps life cycle. So who are we? There are currently seven of us, according to this, and we have four open positions. So something you can totally help out with is referrals for backend engineers at GitLab. There are a lot of open positions, and our team has, yeah, not most of them, but a lot of them. What we work on is most of the Kubernetes integration and Auto DevOps. We're also working on some cool new features around serverless and runbooks. And there's the product vision page, where we have a lot of really exciting stuff we're potentially going to work on in 2019. A lot of it is built around the Kubernetes integration as the heart of how everything fits together.

So, our accomplishments. Since we became a team, which was only maybe four or five months ago, we shipped Protected Environments, a GitLab feature that lets operators protect deployments to certain environments, say the production environment, and only allow certain people to trigger those types of jobs, et cetera. Something that's been in real demand is RBAC support: a lot of companies wanting to adopt GitLab and its Kubernetes integration have been skeptical of the fact that we didn't support RBAC. That's something we've been working on, and we managed to ship Auto DevOps support for it, too. There were some other Auto DevOps improvements as well. Database migrations came up at the summit; Sid mentioned that DZ couldn't use Auto DevOps because of database migrations, so we ticked off one more of those things. We're currently working on HTTPS support, which was another thing brought up at the summit. And we also onboarded a new team member, Thong.
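For context, the database-migration support mentioned above comes down to two Auto DevOps CI/CD variables, `DB_INITIALIZE` and `DB_MIGRATE`, whose values are commands run inside the application image before deployment. A minimal sketch of setting them on a project through the project variables API; the URL, project ID, and token are placeholders, and the Rails commands are just one example of what a project might use:

```python
# Sketch: enable Auto DevOps database migrations by setting the
# DB_INITIALIZE and DB_MIGRATE CI/CD variables on a project.
# GITLAB_URL, PROJECT_ID, and the token are placeholders.
import json
from urllib import request

GITLAB_URL = "https://gitlab.example.com/api/v4"
PROJECT_ID = 42  # hypothetical project

def variable_payload(key, value):
    """Build the JSON body for POST /projects/:id/variables."""
    return {"key": key, "value": value}

def set_variable(token, key, value):
    """Create one CI/CD variable on the project (network call)."""
    req = request.Request(
        f"{GITLAB_URL}/projects/{PROJECT_ID}/variables",
        data=json.dumps(variable_payload(key, value)).encode(),
        headers={"PRIVATE-TOKEN": token, "Content-Type": "application/json"},
        method="POST",
    )
    return request.urlopen(req)  # not executed in this sketch

# Commands Auto DevOps runs inside the application image:
migrations = {
    "DB_INITIALIZE": "bundle exec rails db:setup",
    "DB_MIGRATE": "bundle exec rails db:migrate",
}
```

With those two variables in place, Auto DevOps runs the initialize command on first deploy and the migrate command on each subsequent one.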
He is in New Zealand, and was actually the first hire in New Zealand for GitLab as well, which is quite cool. We've onboarded him to the team, and as a backend engineer he's now contributing a lot and delivering a lot of the things we're talking about. Another pretty important achievement: we've had Auto DevOps for ages, but basically no documentation on how to actually set up a local development environment and use Auto DevOps. Now we have that. It's extensive, and still not trivial to do, but we have documentation.

We are focusing on matching users' expectations for Kubernetes and Auto DevOps, much like what I said about DZ not being able to use the Auto DevOps we had. We're adding HTTPS support. We're adding the ability to set environment variables for your running application. Group-level clusters was something our Samsung customers brought up at the summit, saying they needed it. Instance-level clusters is also something on-premise customers will wanna be able to use. So we're working on those. Another common theme: because Kubernetes seems to be at the heart of just about every piece of software people wanna deliver these days, we have a lot of startups wanting to work with GitLab and build on top of what we've got in terms of our Kubernetes integration. TriggerMesh wants to build serverless support based on their ideas. Nurtch is working with us on a really interesting idea called executable runbooks. Netlify wants to build a static site generator, or a dynamic static site generator, on top of our Kubernetes integration. And DigitalOcean wants to ship the ability to create clusters on DigitalOcean through GitLab's interface.

So those are the slides. Let me open the chat and see where we were at with that. Okay, Dan, do you wanna jump in or do you want me to just read the question? Let's see: any plans to make Auto DevOps work with Kubernetes? Sorry, different Dan, I meant.
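A quick aside on the environment-variables feature mentioned above: the convention GitLab ended up using is that CI/CD variables prefixed with `K8S_SECRET_` are copied, with the prefix stripped, into a Kubernetes secret exposed to the running application. This sketch only illustrates that filtering idea; treat the details as an assumption and check the Auto DevOps docs:

```python
# Sketch of the idea behind application environment variables in
# Auto DevOps: CI/CD variables with a K8S_SECRET_ prefix are copied
# (prefix stripped) into a Kubernetes secret mounted into the app.
# The prefix convention is an assumption here; verify against the docs.
K8S_PREFIX = "K8S_SECRET_"

def application_env(ci_variables):
    """Filter CI variables down to the ones exposed to the app."""
    return {
        key[len(K8S_PREFIX):]: value
        for key, value in ci_variables.items()
        if key.startswith(K8S_PREFIX)
    }

ci_vars = {
    "K8S_SECRET_DATABASE_URL": "postgres://db/app",  # exposed to the app
    "CI_COMMIT_SHA": "abc123",                       # CI-only, not exposed
}
print(application_env(ci_vars))  # {'DATABASE_URL': 'postgres://db/app'}
```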
Sorry. Dan asked the question. Sorry, yeah, that was me. Yeah, I'm just wondering if there are any plans to work on making the AWS integration of Auto DevOps, with Amazon's EKS, as smooth as it is with Google's Kubernetes. Daniel, you can answer that. Thanks for the question, Dan. Yes, I think one of our long-term plans is to provide that experience for all major cloud providers. So while we couldn't give you a hard timeline right now, it is something that we have planned long-term. I would love to see the same experience in Azure and in AWS. And I think that as we ramp up our team and are able to add more people, take on more things, and have more capacity, we'll see more of those things happening. And of course, if anybody has any contacts at AWS, they are more than welcome to contribute this, as the DigitalOcean team is doing. So just a note there.

Kind of related, an interesting topic: because DigitalOcean is wanting to do the same thing Google's done in terms of being able to provision a cluster through our UI, we started to have conversations around what we would need in order to get an extensible API set up so that anybody could do this, and EKS could do this as well. It turns out we're basically most of the way there; we're already starting to build out an API so that anybody can create a cluster in GitLab. Couple that with OAuth support, and we might be able to have a fairly minimal implementation where any external cloud provider could provision managed Kubernetes clusters in GitLab. And that's a possible way forward to keep up with the demand from other cloud providers wanting to piggyback off of our success.

More questions? No more questions in the chat. We have lots of people talking about AWS. So I'll add one, then. You mentioned the API for that; does that include the configuration that happens as part of Auto DevOps? So it's not just creating the cluster, right?
It's also configuring the Ingress setup and all that, because I know that with AWS that's a bit rocky. And so being able to say we have that level of easy integration is more than just creating the cluster, right? So is the API you're working on going to cover that as well? Yeah, it's funny, I was just thinking about that as you were talking about EKS. The problem at the moment is it has a different Ingress behavior than Google's, and that's the main reason the integration doesn't work as well. But it does lead me to think that we could consider building an API so that people can create an Ingress through our API. Rather than us having to understand the Ingress implementation that every provider uses, if they are already integrating with our API to create the cluster, then all we really need to do is create another endpoint for them to create an Ingress on our end, as a possible option there. That's not part of the API issue we have now, but that's a good suggestion and something we should think about to make this more seamless. It'll be interesting, depending on how DigitalOcean implements this, whether or not we need to start thinking about that sooner rather than later, if their Ingress implementation doesn't work for us.

More questions? Gonna be such a quick group conversation. So, I can say it out loud if you want. I'm just wondering, because it's something I've encountered in the past, and if we don't have it I'm wondering if customers have asked for it: do we have any sort of automated user provisioning built in, sort of as part of RBAC, so that customers can manage their users not through an interface but through an API that can be hooked up to their own user management systems?
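The extensible cluster API and the Ingress endpoint floated above were still being designed at the time, so the following is purely a hypothetical sketch: the endpoint shapes and field names are illustrative assumptions, not GitLab's actual API. The point is that a provider integrating via OAuth would only need two calls, one to create the cluster and one to register its Ingress, for GitLab to drive deployments against it:

```python
# Hypothetical sketch of the extensible cluster API discussed above.
# All endpoint and field names here are illustrative assumptions.
def cluster_request(name, provider, **provider_settings):
    """Body a cloud provider might POST to create a managed cluster."""
    return {"name": name, "provider": provider,
            "provider_settings": provider_settings}

def ingress_request(cluster_id, controller, external_ip=None, hostname=None):
    """Body for the follow-up call registering the cluster's Ingress."""
    body = {"cluster_id": cluster_id, "controller": controller}
    if external_ip:
        body["external_ip"] = external_ip      # GKE-style load balancer IP
    if hostname:
        body["external_hostname"] = hostname   # EKS-style ELB hostname
    return body

do_cluster = cluster_request("production", "digitalocean",
                             region="nyc1", node_count=3)
eks_ingress = ingress_request(7, "nginx",
                              hostname="abc123.elb.amazonaws.com")
```

Allowing either an IP or a hostname in the Ingress call is what would absorb the EKS-versus-GKE difference raised in the question.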
I'm not aware of any such thing, and "it's complicated" is probably the challenge here. With RBAC, as I understand it, depending on where the Kubernetes cluster is running, the authentication is implemented through an authorization provider. For Google, I guess they have their own authorization provider for users to get access to the cluster and things like that. So yeah, I don't exactly know how that would work, but it's probably something worth exploring, and I'm not sure if it's come up before. We certainly have integrations with LDAP, SAML, and Okta and a few others. I don't know about a generic API type of user management. I assume we also have an API for user management as well, but we at least do those.

No more questions? Okay, we have one more comment in the chat: we haven't heard Chef, Puppet, and Ansible mentioned; is that basically what we're doing? I mean, I don't know exactly what Chef, Puppet, and Ansible are up to these days. Maybe they're doing a bunch of things beyond what I know them for, but what we are doing is different in the sense that we're not trying to build out something that provisions infrastructure for you. We are trying to build out an integration on top of Kubernetes so that users can more easily get access to Kubernetes clusters through our UI, but the provisioning, et cetera, is still managed and owned by the cloud provider. Google knows how to provision and run a Kubernetes cluster; GitLab doesn't. So we just call out to Google's APIs to do it right now. Maybe that's kind of related to your question. One way to look at it might be that we're looking at the cloud native way of doing configuration management, and, not to throw them under the bus too much, Chef, Puppet, and Ansible are sort of legacy: managing VMs, primarily.
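On the user-management API the speaker assumed exists: GitLab does have a Users API (POST /users, admin token required), which is the kind of endpoint an external user-management system could call to automate provisioning. A minimal sketch; the field names follow the public API, but treat the details as an assumption and check the docs before relying on them:

```python
# Sketch of automated user provisioning through GitLab's Users API
# (POST /users, admin token required). base_url and the token are
# placeholders for a real instance.
import json
from urllib import request

def user_payload(name, username, email, reset_password=True):
    """Build the JSON body for POST /users."""
    return {
        "name": name,
        "username": username,
        "email": email,
        # have GitLab send a password-reset mail instead of setting one
        "reset_password": reset_password,
    }

def provision_user(base_url, admin_token, payload):
    """Create the user on the instance (network call)."""
    req = request.Request(
        f"{base_url}/api/v4/users",
        data=json.dumps(payload).encode(),
        headers={"PRIVATE-TOKEN": admin_token,
                 "Content-Type": "application/json"},
        method="POST",
    )
    return request.urlopen(req)  # not executed in this sketch

alice = user_payload("Alice Example", "alice", "alice@example.com")
```

Note this only covers GitLab users; mapping those users into a cluster's RBAC rules would still depend on the cloud provider's own authorization, as discussed above.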
I'm sure they're all struggling with how to get out of that box, but for the most part their sweet spot is in managing virtual machines, standing up instances, and configuring those instances to all look exactly the same, whereas we're focused specifically on Kubernetes and cloud native. That's great. That's really helpful. Thank you. Just to put my cards on the table: I've been here for a year now, but I was with Chef for, I guess, three-plus years previously, so that's sort of my background. Not so much now, but that was the thinking. But that's really helpful, and it will help when we speak to customers. Yeah, I think it's kind of interesting, because we don't wanna re-implement the tooling that's needed to provision servers and things like that. So we're really leveraging Kubernetes a lot, because it's the leader in terms of containerized infrastructure. But yeah, there is a layer on top of that, of cloud providers like Google actually running the Kubernetes cluster for you, that we also don't wanna touch. For users with their applications, we want them to start thinking cloud native and deploying containers, and we want them to be doing that on top of our Kubernetes integration; we don't really wanna expose infrastructure configuration to anybody, really.

Okay, well, I will count down from five if we get no more questions. Five, four, three, two, one. Okay, thanks everybody. Have a great day. Bye bye.