So hi, everyone. I apologize for the delay and the technical problems. My name is Ashish Kutiela. I'm a director of product marketing at GitLab, and it's my pleasure to be here with Gene Kim, who is a leading researcher and author of all things DevOps. He's been doing this for many years now, and most famously you might have read his books The Phoenix Project as well as The DevOps Handbook. Next slide, please.

Yeah, so in this next hour I'll be presenting the highlights of the State of DevOps findings, and we'll be having some discussion around some of the key findings. For those of you who don't know, the State of DevOps report is the longest-running study of its kind. It's a five-year cross-population study, now spanning over 30,000 respondents. For the first four years it was done in conjunction with Puppet, and this year it was done in conjunction with Google Cloud. Next slide, Agnes.

The goal was always to understand what high performance looks like, and to really understand the factors that predict performance. The first comment I'll make here is that there's another construct added to the measurement of IT performance: we're now factoring in availability. Let's go to the next slide.

We took a look at performance through a slightly different lens, and of course we're always looking for the practices that predict performance. Some of the key areas we looked at were monitoring and observability, more factors around continuous testing, the practices that influence databases, which is actually one of the most petrifying places in the application stack to change, and then more on security.
On the next slide, we also looked at more factors around culture. We know there are really three factors that drive performance — the DevOps conditions that allow fast flow and high reliability — and they involve architectural practices, technical practices, and cultural norms.

The next slide is what's always the most exciting part of the study for me, which is really understanding the performance differences between high, medium, and low performers. Again, we see this massive difference between the best performers and everyone else. We know from the 2018 study that high performers are doing 46 times more frequent deployments — that could be deployments of code to the application or deployments of any changes to the environment. They are much faster in terms of being able to go from something put into version control through integration, through testing, through deployment, so that customers are actually getting value. And it's not just about doing more work; they're getting far better outcomes. When they make a change, high performers are seven times less likely to have the change go wrong, whether that's a service impairment, an outage, a security breach, or a compliance failure. And when things do go wrong, they can fix it 2,600 times more quickly.

This actually has a really interesting effect, which I'll ask you to comment on. If you go to the next slide, you can see the stark differences between high and low performers. Ashish, we've been at this for many years now — this certainly must resonate with your own experiences?

No, absolutely.
I think I'll share a recent example. I can't name who this is, but it's a large financial institution based out of New York, a global financial house, and they've started to accelerate their journey towards DevOps. They were already doing this, and they were able to deploy code to production once every two weeks. They have now gone up to six times a day. If you look at that over two weeks, that's 60 deploys every two weeks instead of one deploy, so it's actually 60 times better than what they were doing before. So it absolutely is working for those who are taking this approach and doing it right.

Oh yeah. In fact, I think that reinforces so many of the other really great findings from this year. And maybe just to set context: five years in, we always worry, are we actually going to see something interesting that we can learn from? And every year there are some incredible findings. So let's go to what those are on the next slide.

One comment is that, as I mentioned, we added another factor to what we used to call IT performance. We now call it software delivery performance, and that's because we're now looking at availability as well. So that makes sense, right?
We not only want to get to market quickly and be able to fix things when they go wrong, we also want to make sure that services are actually available when customers need them. And so the highest performers are three and a half times more likely to have strong availability practices. That's what we would expect.

On the next slide — this is one of the shockers for me. In the early years, the high-performing cluster was typically around 15% of the population, and it grew to maybe 25%. This year, high performers make up 48% of the population, and so we're actually now studying a subset of those, what we're calling the elite performers. What is so astonishing to me is really twofold. One is the fact that it reinforces a lesson I learned from the Software Engineering Institute at Carnegie Mellon University: the high performers are always getting better; the best are always accelerating away from the herd. That is absolutely true. But I think the other important point is that one could previously have made a case that it was okay to be mediocre, because everyone else was. What this is showing now is that high performers are almost half of the population — to be mediocre is actually to be behind. So I think it really makes the case that what was good a couple of years ago is probably not good enough today. Does this resonate with you, Ashish?

No, absolutely, Gene. In the one example I already gave you, deploying every two weeks was already good, high performance from where they started — I think it was once a quarter that they were able to deploy. And then they took it to the next level, which resonates exactly with what you say here: they went from being a high performer to an elite performer. I think the key to that — and we'll talk about it later in the webinar — is learning from what they were doing and constantly improving upon it.

Exactly right.
So, on the next slide — ah, this is actually now my favorite way to view the data. I would always present the 46 times, the 2,500 times faster; this is actually the more useful view for me, because for every one of the four key metrics you can now quickly see which cluster you're in. And again, I would focus on lead time for changes. Of all the things I recommend people measure, it really is lead time: how quickly can we go from something going into version control, through integration, through testing, through deployment, into production? All the other metrics go up and down with that. So actually, a version of this slide is now my main way of presenting the data. Any thoughts on that, Ashish?

So, a question for you — maybe reserve it for the end. I really like these metrics. A lot of customers and other leaders who are deploying DevOps and getting success are actually using a number of different metrics. Why these four metrics and not others — do you have a brief commentary on that?

Yeah, well, we're going into five years of the research. In the early years we were looking for the metrics that matter, and so we tried many, many metrics, and it was these four metrics that all correlated together. In fact, it is these four metrics that, when you combine them, are actually a predictor of organizational performance. So why these four? Because that's what the research showed. I think the theory is that the first two metrics, deployment frequency and lead time for changes, are agility metrics, and the two other metrics, mean time to repair and change failure rate, are the outcome metrics — the stability metrics.
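To make the four metrics concrete: they can all be computed from nothing more than a log of changes and deployments. Here is a minimal Python sketch; the record fields (`committed`, `deployed`, `failed`, `restored`) are hypothetical names invented for this illustration, not taken from any particular tool.

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical deployment records; field names are illustrative only.
deploys = [
    {"committed": datetime(2018, 10, 1, 9, 0),
     "deployed": datetime(2018, 10, 1, 10, 0), "failed": False},
    {"committed": datetime(2018, 10, 1, 11, 0),
     "deployed": datetime(2018, 10, 1, 12, 0), "failed": True,
     "restored": datetime(2018, 10, 1, 12, 30)},
    {"committed": datetime(2018, 10, 2, 9, 0),
     "deployed": datetime(2018, 10, 2, 9, 45), "failed": False},
]
days_observed = 2

# Agility metrics.
deployment_frequency = len(deploys) / days_observed  # deploys per day
lead_time = median(d["deployed"] - d["committed"] for d in deploys)

# Stability metrics.
failures = [d for d in deploys if d["failed"]]
change_failure_rate = len(failures) / len(deploys)
mean_time_to_restore = sum(
    (d["restored"] - d["deployed"] for d in failures), timedelta()
) / len(failures)
```

On this toy log the sketch yields 1.5 deploys per day, a median lead time of one hour, a one-in-three change failure rate, and a 30-minute time to restore. The point is simply that all four numbers fall out of the same deployment log, which is part of why they correlate so naturally.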
And so they're actually two orthogonal axes of performance. I find that extremely satisfying.

I think the key is, as you also said, that they're correlated, and the explanation of how they're correlated makes a lot of sense.

Yep. Awesome. In fact, for anyone who wants to learn more about that, I would recommend the Accelerate book by Dr. Nicole Forsgren, who is one of the three researchers behind the study.

Yeah, she does a phenomenal job of explaining all of that, and it's something I've flipped through; whenever someone asks me a very technical question that needs a well-backed answer, I take a screenshot from that book.

So on the next slide, Agnes, one of the other surprising and delightful findings was about cloud. We know that platforms are important, and we want developers to be able to work self-service. So we asked a set of questions. First: are you doing cloud? And then we asked five more questions about the characteristics of the cloud services they're using: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. These are exactly the definitions from NIST in its guidance on cloud computing. It turns out that only about one fifth of the people who said they were doing cloud said they're doing all of these things — which means only 22% of those doing cloud computing are actually using cloud computing as NIST defines it. But those who are using all five axes of cloud capability were 23 times more likely to be elite performers. So this shows how great cloud can be for developer productivity, and also how easy it is to use cloud and not actually get the benefits it promises. Ashish?

Absolutely, we are seeing the same thing.
We don't do the research as deeply as you do, but we do see the evidence that those who are using cloud are able to achieve their goals — if you go back to the four metrics you just talked about, we see more of those being realized as people adopt cloud rather than traditional infrastructure. And these five key capabilities are absolutely spot-on about why you should be using cloud and how you take advantage of it, as opposed to just lifting and shifting things into the cloud, which is probably not going to get you the desired benefits.

Okay, on the next slide — open source software. This I thought was super interesting. Teams that use open source were nearly twice as likely to be elite performers, and those same teams were one and a half times more likely to expand their open source usage in the future. I think this is super interesting because, for the services and applications that we deploy into production, it turns out we didn't actually write most of it, right? We are using open source libraries to a great extent, whether it's npm in the Node ecosystem, Maven in the Java ecosystem, or CocoaPods in the Apple ecosystem. Open source is a phenomenal accelerator of developer productivity. Ashish, you have a lot to say about open source.

I do. So, Agnes, if you could go to the next slide. Let me share a little story with you. As you know, GitLab is based on open source, and we actually have a philosophy that everybody can contribute, right?
That ties in really well with the open source philosophy. What is interesting is that, apart from our own engineering staff, we have about 2,200-plus external contributors actively contributing to our code base, and that has made us one of the highest-velocity open source projects. The idea here is that everybody has an input and everybody can contribute — contribute features that they would like to see, or fix things that they would like fixed. And this concept of co-creation includes our customers: when they want something that's on the roadmap, but earlier, they actually get in there and create it. One of our customers recently commented to us, "We are hiring GitLab developers onto our staff," and we asked why. The answer was, "Because we want some features faster than you are shipping them" — we ship every month — and that was just phenomenal to hear.

So if you go to the next slide, Agnes: just as an example of the velocity we achieve, we're doing about 67 deploys a day to gitlab.com, and for the past 87 months we have released on the 22nd of every month without fail. And that is definitely a big thank-you to all our co-contributors — the fact that it's open source, it's transparent, and everybody can contribute. Next slide, Agnes.

And just to give a flavor of how this is working for us: if you look at the product improvement in our own product, it's an exponential rise as more and more people have started contributing and customers have gotten involved. Just look at that phenomenal hockey-stick effect — how fast the product is evolving and adding features. So this is a good example of what happens when you collaborate with everybody, when you let everybody contribute and co-author. Next slide, Agnes.

Okay, awesome.
Yeah, it is just amazing how open source can lead to development models incredibly different from what we've traditionally been accustomed to. Thank you, Ashish.

And to reinforce something that you said: DevOps and high performance is not restricted to any industry. It's certainly not restricted to the FAANGs — Facebook, Amazon, Netflix, and Google. It is found across every industry vertical. Although we've been able to show pie charts over the last five years to say, hey, DevOps and high performance appear across almost every industry vertical, independent of company size, or even profit versus non-profit, we couldn't actually show that in a statistically significant way — we just didn't gather the data in a way that made it possible. This year, however, Dr. Nicole Forsgren went through and showed that industry absolutely doesn't matter. There is no industry that is exempt from high performance, whether it's regulated or non-regulated. So that's a very strong claim we can now make — high performance is possible everywhere — and that's a really neat finding to put into the community and the literature.

The next slide is about the technical practices around databases. From my perspective, if you look at the typical application stack — application, data, operating systems, networking, firewalls — you've seen this revolution in automation: using the same technical practices we use in development and extending them across the entire value stream. Except for databases. Databases were always very scary to change, and I think that's because of the irreversible nature of those changes, and because when you're doing schema changes, there are certain operations that you think will take minutes but actually take hours. So that has always been very petrifying, and one of the findings is that all the technical practices are equally applicable to the database as well.

So on the next slide — thank you. Of course, there's no way we can increase the number of deployments without causing carnage and disruption if we aren't integrating testing into daily work. Next slide. Ashish?

Gene, I just wanted to reinforce this with a point from another respected analyst firm, Gartner, in a recent paper where they talk about hacking your culture to drive quality and DevOps success. It's really about the feedback mechanisms, right? You want to make sure that you're investing in the right places, and the way you do this is by having automated testing — automate, automate, automate — and give developers confidence, as soon as you can, that what they're doing is right: whether they should keep working on something, or change direction and do something else. The earlier you get the feedback — you all know this in DevOps — the better it works.

Yeah, right. In fact, that's been one of my learnings: as I spend more and more time writing code, fast feedback is essential. Even six minutes is too long — being able to bring that down to seconds is like night and day in terms of how you feel about daily work.

Slide 23 is about observability and monitoring.
So again, this reinforces another finding: you have to have proactive production telemetry. We looked at two things, monitoring and then observability — we went into the observability community, a kind of new space, and tried to see how that would impact performance. Teams that have put great observability and monitoring in place were nearly 1.5 times more likely to be elite performers. This makes sense; it reinforces our previous years' findings. To me it's pretty obvious, right? You can't fix an outage if you can't see what you're doing. But another interesting fact is that our two instruments, monitoring and observability, actually loaded together. That really means that, as two distinct sets of practices, they go together: if you're doing one, you're doing the other. So we haven't been able to split those into two distinct sets of practices yet — maybe that's work for next year. Next slide.

Oh yes, this is what's called a structural equation model. This is actually one of my favorite diagrams to come out of the study, because every time you see an arrow, that is a predictive causal factor. That means, for example, that deployment automation does help with the achievement of continuous delivery, which in turn does increase software delivery and operational (SDO) performance, which does predict an increase in organizational performance. Everything that is in bold and capital letters — those are new instruments. These are just fantastic.
Boy, there's no better diagram than this to show how all the pieces fit together, and so it's one of my most cited diagrams. In fact, in other diagrams you'll see leadership on the left of this one, because leadership amplifies so many of these practices.

On the next slide is another aspect of the structural equation model: the need for organizational learning. One of the practices that increases the climate for learning is having retrospectives. Retrospectives simultaneously increase the climate for learning as well as the generative culture that was measured by the Westrum instrument. Dr. Westrum found, nearly 12 years ago, that patient safety in healthcare organizations was highly predicted by organizational culture. He had three categories: pathological cultures, which hide bad news, punish people for telling bad news, and are averse to seeking new information; bureaucratic cultures, which are rule-oriented; and, at the other extreme, generative cultures, which actively seek out bad news, train messengers to tell bad news, and are always looking for novel ideas to solve problems. So retrospectives help create generative cultures, which in turn influence and predict organizational performance. Next slide.

Let's see, where's the arrow here — right, giving teams autonomy. We also found that autonomy helps increase trust and voice. Trust is: to what extent do people trust the system and trust leadership? Voice is: to what extent do people feel that their concerns are heard, that people care, right?
And again, these two in turn drive organizational culture, which again drives organizational performance. I love this because it matches so well with the work of Simon Sinek, and with the work of the MIT researcher who talks about how what people really want is autonomy and mastery. So I think it really reinforces those notions.

The next slide is about misguided performers. Oh, hey, before we go there — did you want to talk about this?

Let's go to misguided performers first, and then I'd love to hear you talk about the minimum viable change.

So, misguided performers. One of the things we found was a super interesting cluster in the low-performing category. These are the ones that have low deployment frequency, extremely high lead times for changes, and low deployment failures, but they were also the ones with the longest MTTRs — MTTRs reported between one and six months. This is super interesting, because the conjecture here is that these are organizations trying to be super slow and super cautious, but they're also getting the worst, most protracted outages. And before you laugh at the notion — can anyone really have an outage lasting one to six months? — that actually happens a lot more than one would think. For example, whenever you corrupt a database, you can often get the service back up and running, but you spend weeks trying to restore data and doing direct data edits; we've corrupted data, and it takes weeks to restore. Healthcare.gov is another, probably spectacular, example. There's an entire category of incidents where from the outside everything looks fine, but it takes the organization weeks or even months to restore what we would call normal operations.
So this is interesting, because it says that slow and cautious has a hidden dark side: these are actually the ones suffering the worst outages. Just another reason why more frequent deployments, with more of the technical practices, are definitely better. Let's go to the next slide — actually, I feel like this is — oh, oh, I see, I get it. Tell us about a totally different philosophy of making changes.

So, just as you were saying: big, large batches get complicated. They not only take more time to get there, but if something goes wrong, it's extremely hard to fix — long lead times. But guess what else? You get the feedback only when the large batch reaches production. So our philosophy here is that we break down the minimum viable product, which everybody has heard of, into the minimum viable feature, and then further into the minimum viable change. The idea is that if you can ship something to production which is better than what you have today, you ship it. You test it, you ship it. In one of our previous slides, we saw that we do about 67 — 66.8 — deploys per day on our GitLab platform. And the benefit of this is multi-fold. We talked about how, if something is not out in production, you cannot get feedback — that is one result. The other is that it reduces the complexity of the large batch changes you talked about. It is really hard to think about shipping a minimum viable change versus a completely functioning feature or product, but we've been doing this for the past four or five years, and we see that it not only adds to the velocity, but the smaller batch changes and the feedback loops really help us make our product better. So I'll leave it at that;
we'll discuss more in the Q&A.

Super. The next slide — actually, let's skip this, unless you want to talk to it, Ashish?

No, I think we can skip this one.

Okay. All right, here's another interesting finding: low-performing teams were four times more likely to use functional outsourcing than elite teams. It's interesting, right? By functional outsourcing we mean the situation where we outsource all of development to one group, outsource QA to another group, outsource operations to yet another group, and, hey, just to make things interesting, outsource security to yet a totally different function. The logic of this is that it's a way to preserve accountability across all the parties. And yet, I think in all our experiences, what we've found is that it becomes very difficult to push changes quickly through the entire value stream when every coordination interaction requires an account manager, potentially lawyers, contractual change fees, and so forth. This is not to say that outsourcing makes DevOps impossible. In fact, it's been shown that if you have outsourced teams that own dev, QA, and operations together, you can get great outcomes. But what does not work is purely functional outsourcing. So that was a super interesting finding.

Other major takeaways are on the slide "Ways to improve SDO performance," slide 33. Here are the key takeaways. Again, the differences between high and low performers are large and continuing to increase. And the ways to achieve high performance are getting more and more clear: you definitely need great architecture.
You need great technical practices, you need the right cultural norms, and leadership is important. I think those organizations with all the practices that spread work between different teams and make it very difficult for them to work together — that is probably not a winning bet. And it is again so gratifying that for five years in a row we can see that it is possible to go more quickly and do things in a safer, more reliable way, while also preserving availability objectives.

On the next slide are the instructions to download the full State of DevOps report — you just go to the link there. Maybe we can paste that into the Q&A channel as well. I want to acknowledge the team that has been behind all five years of the State of DevOps report: Dr. Nicole Forsgren, who is the principal investigator, Jez Humble, and myself. If you're interested in this body of work, I would definitely recommend the Accelerate book — it goes into marvelous detail about the science, the causal relationships, and the statistical evidence that shows the effectiveness of all these practices. On the next slide, I want to acknowledge Google Cloud for helping underwrite this work, the continuation of a five-year journey studying high performers, and of course GitLab, who has been a phenomenal supporter as well. So with that, Ashish, I want to turn it over to you.

Thank you, Gene. The next slide after this, please. Come on — yes, thank you. So I wanted to talk a little bit about what GitLab actually does and what value it provides.
We've looked across the landscape of how teams are trying to solve for DevOps and adopt DevOps, and we found a lot of fragmentation of tools, along with a lot of the challenges that Gene talked about here as well. So what we've done is build one single application for the entire DevOps lifecycle, from planning all the way to monitoring. These are some of the key features that we provide. The difference here is that you don't need to integrate anything — you can, if you want, integrate with external tools you might already be using — but you can do it all from one single application. So what's the advantage of that? Next slide, please, Agnes.

Because we built it from the ground up as a single application, it provides many different advantages to the different teams working to implement DevOps. It's a single conversation — which is a key tenet you want in DevOps — with everybody working on the same work in progress, having one conversation around it, not in fragmented silos, so you have conversational development practices implemented. You have one single data store, the single source of truth, in one place. A single interface leads to a lot of other advantages, such as one data model; governance and security become easier; it's a single permission model; and you can do all your analytics in one place. Most importantly, what we find is that team collaboration increases a great deal, because it is not easy today to take large enterprises that are divided into different teams, put them all in one team, and say, "Go work on this project." So how can you keep them in different teams but have them collaborate from a single place? Because this is a single application, in one merge request you see the testing team, you see the security team,
you see the monitoring team — everyone sees the same things; it's one conversation. And we're finding that customers are adopting more and more features of this application in one place. They may start with source code management, then incorporate the continuous integration capabilities, but as they adopt more, they find they can accelerate up to 200 percent faster through the DevOps lifecycle. Next slide, please, Agnes.

This is just to give a sense of what we can do. One of the features we've built in GitLab is Auto DevOps, and simply put, it is what it says — the "auto" in Auto DevOps means automatic. The idea here is that with two clicks you can provision your infrastructure — in this case it could be Kubernetes — drop your code into GitLab, and it will automatically detect what language it is, merge it, build it, test it, run it through security testing, package and deploy it, and start monitoring it. Imagine having to do this across separate tools, stitching them together, trying to bring all the teams together. Instead, all your developers have to do is focus on writing the code that builds value for the company, not on the underlying tools, integrations, patches, etc. Get your development team to focus on creating value for the company that drives revenue. Next slide, please.

Gene talked about this — I think it's very obvious that the future direction is cloud native: going towards the cloud and adopting those five practices that we talked about. With that in mind, GitLab is built for cloud native, so you don't have to do much. We have integrations with the Google, Amazon, and Microsoft cloud platforms, and once it's configured, all you do, like I said, is point and click, and it will configure for you and manage for you.
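To give a rough sense of what opting into this pipeline looks like in practice, here is a minimal `.gitlab-ci.yml` sketch that pulls in GitLab's bundled Auto DevOps template. This is an illustration, not a definitive recipe — check the GitLab documentation for the exact template name and options available in your version, since Auto DevOps can also be enabled from the project's CI/CD settings with no configuration file at all.

```yaml
# Minimal .gitlab-ci.yml that opts a project into the Auto DevOps
# pipeline. The bundled template supplies the language detection,
# build, test, security scanning, package, deploy, and monitoring
# stages described above.
include:
  - template: Auto-DevOps.gitlab-ci.yml
```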
All you have to do is focus on your code. Next slide, please.

And this is getting traction — I know we skipped the slide — but we have more than 100,000 organizations, with millions of users, actually using GitLab. I highly encourage you to go take a look and download it. There's a free version that has almost 90 percent of what we offer, so give it a spin and let us know how you like it. Next slide, please.

So Gene, over to you. You are organizing and hosting a summit coming up. We're big fans, so we'd love for you to talk about that a little bit.

Yeah, just very briefly. My interest is really studying how DevOps is being used by large, complex organizations, and the prime mechanism for learning about this is the conference I've been running for five years now, the DevOps Enterprise Summit. Our fifth year in the US is being held in less than a week and a half in Las Vegas. So if you're interested in seeing technology leaders elevating the state of the practice at some of the most recognized brands across every industry vertical, this is the place to be, and I look forward to hopefully seeing you there in a week and a half. And by the way, I would love to thank GitLab for your phenomenal support over the years, and Ashish, your personal support as well.

I am a big fan of this summit and look forward to being there again. So, we have questions — should we just jump in? Absolutely. I'll moderate the questions, Gene, and have you answer them, and I can add color commentary. The first question here is for Gene and Ashish: are there any examples in IoT that have stood out to you as high performers or elite performers?
Yeah, so this is not an area I know a lot about, but it is one I'm actually interested in. In fact, I'm frankly surprised at what is being categorized as IoT, the internet of things. I saw this amazing talk by someone from Chick-fil-A where they were using Kubernetes to manage all the point-of-sale devices, and they were classifying those as edge devices slash IoT, which is not something I'm accustomed to hearing. But I think that is the way the industry is starting to treat edge devices. And the notion that you have thousands of point-of-sale devices out there, now managed as this massive Kubernetes cluster, is absolutely fascinating. So I think all the practices that Kubernetes encourages — immutability, automated deployments, certainly probably automated testing as well, the guaranteed consistency across environments, the monitoring that comes in — I think it's just fascinating. Let me see if I can find a link to that. But I would say I always equated IoT with small sensor devices, or PLCs that go inside industrial equipment. If you broaden the category of IoT to point-of-sale devices, then suddenly, boy, you see a lot of interesting things happening in that space. Let me find that link.

And I'd like to share an example which is a little bit different: consider satellites out there in space as internet devices. We have a large customer, a space agency, and they not only use GitLab for what they call launching rockets — they use the software to do that — but they're also deploying software onto their satellites that are orbiting the Earth, using GitLab. So they are doing DevOps.
They are actually deploying this out there in space. It's an extreme example of IoT, but we've seen it.

While you look for that link, I'll go to the next question. And I just posted it into the chat window — if you could copy that into the Q&A? Oh, okay, I'll do that. So the next question asks: how do you qualify a minimum viable change, and, relatedly, how can autonomous teams maintain alignment with the MVCs and the MVPs?

So on our side, I'll start and then see what color you want to add. We explicitly name something an MVC when we plan it: if you can't reduce it further, it probably qualifies as such. The alignment is the same as in our other workflows — you have to make sure that your MVC proposal is easily readable, understandable, and independent.

That's an intriguing thing, right? Because I think the theory says that the theoretical ideal in manufacturing, for lean, is single-piece flow.
Sometimes it's called one-by-one flow: a batch size of one, an inventory of one, almost like an assembly line. So that means you want to reduce the batch size to the lowest possible thing, and the MVC is a very startling notion because it creates a pressure to reduce the size of the change down to, as you said, the smallest change possible — which is actually, theoretically, the safest type of change. And I think that is a practice that answers a tough question, which is: how do you do big features in a way that fits inside a reasonable development interval? It obviously takes a different product planning discipline to define a feature in a way where it can be done in small chunks. So it sounds like the MVC is an extreme example of that, and one that's obviously very effective for GitLab. So that's very, very cool.

Thanks, Gene. Moving on to the next question — I'll pick this one: we are on our DevOps journey, and we are seeing a lot of benefits like you highlighted. So Gene, that's for you. Do you have any ideas to help onboard a traditional quality team to see the DevOps way?

Ah, yeah, that's interesting. So this is going to sound like a crazy, not-helpful answer, but it is one of the most profound things I've ever heard in my career. It actually came from a mentor of mine, Elisabeth Hendrickson. She pioneered so much of quality engineering over the last decade and a half, and she told me about the Los Altos Workshops on Software Testing. This is where they assembled the best QA people in the game, and among the workshops they held, one exercise went like this: what was the dev-to-QA ratio for your best project and your worst project? And for the worst projects, it was actually the ones with the lowest dev-to-QA ratios.
Sometimes, you know, even one-to-one. For people's best projects, one answer came up over and over again, and the answer was: for our best project, we had no testers. This is coming from QA professionals, the best in the game. And Elisabeth told me the lesson for her was that you often end up with the best quality when everybody knows there is no one out there who is going to find your problems for you.

So it's not saying QA is not important — QA is absolutely important — but quality needs to be fully owned by the developers; it can't be someone else's job. I think what it means is that the job of QA is then not about testing other people's code. Instead, it is: how do we take those quality engineering principles and integrate them into all the daily work of the developers? In fact, one other comment: a couple of years ago in the State of DevOps report we found a very peculiar result. We asked, on a scale of one to seven, to what extent developers are responsible for maintaining the acceptance test suite, and the less developers owned it, the worse the outcomes. So again, I think the role of QA is not doing the work but helping integrate quality into how developers do work. And I think it means that they need to be technical; they need to be familiar with code, and even better if they can code. I think that's a genuine challenge for many QA professionals. Ashish?

Yeah, I absolutely agree with you. We're seeing the same thing. I've seen it for a number of years — I've been in this field — and I think more teams are buying into it, and we see it getting better. Going on to the next question, Gene, I'll pick this one: how often are DevOps practices seen within the healthcare industry?
Well, in fact, one of the talks at DevOps Enterprise is from Alice Reya; she's a VP of digital at Kaiser Permanente. They've actually presented numerous times — Alice has presented for them for the second year. At Kaiser they transformed the patient-provider portal. This is in some ways probably the most shocking area in which to do DevOps, right? This is where all the PII is; this is where you have all the sensitive communication between the patients and the healthcare providers. And Alice is talking about how they're now elevating the technical practices within the pharmacy group. So, absolutely.

Again, I would further motivate that with the latest finding that industry does not matter — and holy cow, of all the places where we want faster, safer, cheaper, happier, it's healthcare, right? We're not just talking about economic gain; we're also talking about improving societal outcomes for everybody. So absolutely, and there are evidence points: we have great experience reports from the healthcare industry, and hopefully more coming. In fact, I just got to spend a day yesterday with one of my favorite people to study, Heather Mickman, who co-led the DevOps movement at Target. She's now doing the same thing at Optum in healthcare, which is, you know, a $250 billion market cap company. So I expect great things from her at Optum.

So thank you, Gene, for that. I'll go back to one question that I missed — it was actually the first question. Chris asks: do you have any tips for promoting retrospectives in a team culture
that is otherwise eager to adopt agile dev practices? It seems like our teams are not willing to try them out, either for fear of giving or receiving negative feedback, or because of the extra meeting added to their busy schedules.

All right, so a few things come to mind. One is, if they're eager to adopt agile practices, then one of the more well-known agile rituals is the daily stand-up, right? It's fifteen minutes, and people see the value of that. Basically, I would just try adding a 30-minute retrospective to the end of each interval, whether it's the end of the sprint or whatever — there's got to be something like that — to evaluate: how did we do against our goals for this week, or these two weeks, or whatever the interval is? And then I would gently add something to the agenda, which is: what are the things that went wrong? Especially if something led to a production incident, because those are as relevant, maybe even more important, than any difficulties during the design and development process. So I would find all the existing agile practices the teams are using, gently slip things into the existing retrospectives, and maybe even integrate a blameless post-mortem section into the agenda. And I think you'll find a grateful group of people as a result.

I think you hit upon something I was also going to say: make sure that the retros are blameless. This comes from our VP of Product, who leads our retrospectives.
We do a live YouTube stream every release — every month, on the 22nd, covering what we released. So for those of you who would love to see how we do it, it's streamed live, in front of everybody. Some of the key rules are: make sure that the retrospectives are blameless; go in with a mindset of continuous improvement; and most importantly, make sure that you're giving clear feedback that helps the person you're giving it to. Those three tips come from our VP of Product, who leads a lot of these retrospectives. And of course we invite you to come see ours and learn from them if you would like. Also, I think Etsy has a really good perspective and guide on how they do retrospectives.

The level of transparency is always amazing at GitLab. That is a fact. I need to send you an invitation to our next retrospective, I think.

So there was a follow-up question on the MVC: do you have good examples of an MVC proposal? I think a good example is when we introduced the concept of epics into GitLab. Our first iteration was nothing but a title and all the linked issues underneath it. It didn't do anything else — there were no notifications, system notes, or any other capability. We shipped that out as epics, and then we came back and added more capabilities with each release. Any comments on that, Gene?

So, I mean, it could be as simple as: I'm going to add a new screen that lets you create a new epic, with just a text field. That could be the minimal viable change that helps achieve a better way to organize actual issues, and so on and so forth. Would that be an acceptable MVC? I think it would be.

So, I have an interesting question for you, Gene. Is there a good DevOps best practice you recommend?
Maybe a book, maybe a reference? I don't know, Gene — are there any books you recommend people read?

Yeah, I would recommend The DevOps Handbook. It was five and a half years in the making, and its goal was really to create the prescriptive guide to the principles and practices that help you achieve the levels of performance we've been talking about. And for those of you who are interested in the research — who want to really dive deep and understand the causal links between these behaviors and outcomes — that would be the Accelerate book. So those are my recommendations, which is kind of weird to say as one of the co-authors of both books, but I'm so proud to be associated with them.

Just to keep it neutral, there are also books written by practitioners and executives that provide a different insight into the challenges they went through as they actually implemented these large-scale changes. I love the Handbook, and Accelerate, which I just read, but I would also recommend reading books by people who have written about their own experience.

Excellent. So — sorry, okay, go on — we have time for one more question, which I'll pick up: can you talk about AI, ML (machine learning), and big data as they pertain to GitLab? I would like to follow up on that answer separately, since it's not in the specific area of DevOps, but I did want to acknowledge the question. I'll send you my email and am happy to have a follow-up conversation on that; I did want to make sure we addressed it.

I think those are all the questions that we had, and I'd like to thank you, Gene, very much for sharing your research data with us and adding your commentary. Very useful. It's built on five years of research.
It's getting better every year, as I see it. Thank you very much for sharing your time.

Yeah, thank you again to you, Ashish, and all of GitLab for the support on the study, and see you in a week and a half at DevOps Enterprise. I'll see you there. Thank you. Thank you, Agnes, for hosting.

Thank you. Thank you, everyone, for the great questions. So I'm going to close it up. We'd like to invite you to sign up for a free trial of GitLab Ultimate — we're excited to see what your team can do with it. I'll chat that link. And that's all for today. Thank you, Agnes. Thank you. Bye.