Thank you, Dominica. So I've had the privilege of studying high-performing technology organizations since 1999: specifically, the organizations that had the best project due-date performance in development, the best operational availability and stability in operations, as well as the best security and compliance posture. Our goal was always to understand how these amazing organizations made their good-to-great transformation, because we want to understand how other organizations can replicate their amazing outcomes. So you can imagine that in that 18-year journey there were many surprises, but by far the biggest surprise was how it took me into the DevOps movement, which I think is urgent and important. The last time we saw any industry disrupted to the extent that ours is being disrupted was probably in the 1980s, when manufacturing was revolutionized through the application of Lean principles, and I think that's exactly what DevOps is: what you get when you apply those same Lean principles to the technology value stream. So in the next 30 minutes, what I want to do is share with you what I've learned since The Phoenix Project came out in 2013, and it is impossible for me to overstate how much I've learned. These are things I wish I had known before we actually put out The Phoenix Project. The first one is to what extent business value is created by applying DevOps principles and patterns. This is work that we did with Dr. Nicole Forsgren, who is here somewhere in the room, Jez Humble, a DevOps Handbook co-author, as well as Alanna Brown from Puppet. And what we found is that the high performers are massively outperforming their non-high-performing peers, based on what will be, by the end of the year, four years of research spanning 26,000 respondents.
So surprise number one is how high performers are just getting a lot more done. They're doing 200 times more frequent deployments, and that could be deployments of code or deployments of changes to the environment. More importantly, they can complete those deployments 2,500 times more quickly. Notice the framing: how quickly can we go from a change being introduced into version control (not just for development; version control is for everybody), through some sort of test process and some sort of deployment process, to actually running in production so customers are actually getting value? High performers can do it in minutes, or worst case hours, whereas lower performers might require weeks, months, or quarters. And it's not just that they're getting more done; they're getting far better outcomes. When they do a production deployment, high performers are one-third as likely to have that change blow up and cause a severe outage, a service impairment, a security breach, or a compliance failure. And when something goes wrong, they can fix those issues 24 times more quickly; in other words, the mean time to restore was 24 times faster. This was such a decisive finding when it first came out in 2014, because it confirmed our deeply held, common experience that the larger our deployments, the more things go wrong and the bigger the craters we make in the data center; the only way to get these sorts of reliability profiles is to be doing smaller deployments more frequently. This has been validated year over year. Last year we found another dimension of quality: because high performers are integrating information security objectives into every stage of everybody's daily work, they're spending one-half the amount of time remediating security issues.
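As an aside for readers: the lead-time clock described here, from a change entering version control to running in production, is just a timestamp difference. A minimal sketch in Python, with entirely hypothetical timestamps and a function name of my own invention:

```python
from datetime import datetime

def deployment_lead_time_hours(committed_at, deployed_at):
    """Clock starts when the change enters version control and
    stops when it is running in production, per the definition above."""
    return (deployed_at - committed_at).total_seconds() / 3600

# Hypothetical timestamps: a high performer's clock runs for minutes...
commit = datetime(2017, 5, 1, 9, 0)
deploy = datetime(2017, 5, 1, 9, 42)
assert deployment_lead_time_hours(commit, deploy) == 0.7

# ...while a low performer's clock runs for weeks (42 days here).
slow_deploy = datetime(2017, 6, 12, 9, 0)
assert deployment_lead_time_hours(commit, slow_deploy) == 42 * 24.0
```

The point of measuring from commit, rather than from idea or feature acceptance, is taken up again later in the talk.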
And because they're able to control unplanned work, they're able to spend nearly a third more time on planned new work: the higher strategic activities versus the lower-value firefighting, which is basically the first page of The Phoenix Project, right? This is what firefighting feels like. In 2014, we found that not only did high performers have better IT performance, as measured by deployment frequency, lead time, mean time to repair, and change success rate, they had better organizational performance. High performers were twice as likely to exceed market share, profitability, and productivity goals. And for the nearly 1,000 organizations that year that gave us a stock ticker symbol, the high performers had 50% higher market cap growth over three years. This last year we found another marker of organizational performance: in high performers, employees are 2.2 times more likely to recommend their organization to their friends as a great place to work, as measured by the employee Net Promoter Score. That's just a great proxy measure for an organization's ability to hire and retain great talent. So I think all of these give us a better ability to sell DevOps within our organizations. Nicole and I will be working on the 2017 State of DevOps Report; I can't tell you a lot about it, but it's freaking awesome. Just when you think you've learned it all, there's more to learn. So surprise number one was just to what extent high performers were outperforming their non-high-performing peers. Surprise number two is how DevOps is as good for operations as it is for development. One of the cases I got a chance to study with Jez Humble for weeks was this case study from back in 2008: the Facebook Chat launch story. Some of you may roll your eyes and ask what could be so interesting about that; chat servers are what undergraduate CS students write as part of their sophomore year of college, right?
And although that's true, what you may not know is that chat is inherently an order-N-cubed algorithm, and at Facebook, N is 70 million simultaneous users. So it was actually considered one of the most massive technical undertakings at Facebook: it took them one year to do, with one of the largest project teams ever assembled. And there were two technical practices that blew me away. One was the notion that they were testing in production. So how did they use that year? As soon as they constituted the chat team, even on day one, they were checking their code into a shared source code repository, and anything in trunk in that repo would be migrated to the production environment at least once per day. And they would do it in the middle of the day, 2 p.m. Pacific time: not at midnight, not on Friday, not working over the weekend. The second thing was that they were using every active Facebook browser user session as a test harness, and the reason they did this was so they could simulate production-like loads even at the earliest stages of the project. The result was that when they dark-launched this one year later, they went from zero users to 70 million users overnight. That's a dark launch: the launch was simply changing a configuration flag, and if something went wrong, they could just undo that configuration change. But for me, the more interesting practice is that notion of a daily deployment: they don't deploy on Friday at midnight and make people work under horrendous conditions all weekend long to get things running before customers notice on Monday morning. And I think the best verbalization of this came from Nathan Schemek. He told me in 2013, actually at Jez Humble's conference, FlowCon, at the bar: as a lifelong ops practitioner, I know that we need DevOps to make our work humane.
He said, over the course of my career, I've worked on every holiday, on my birthday, even worse, on my spouse's birthday, and even on the day my son was born. I think some of you may have friends who have been in that situation, where out of a sense of duty or obligation, or maybe because they didn't have a choice, they've had to do this. And some of you may be like me, where you've been part of the leadership that created these inhumane work systems. What makes DevOps so significant is that we now know there's a better way; it doesn't have to be this way. And in case you think this is only possible at open source hippie companies like Facebook, you should know about this case study from Scott Prugh at CSG, the largest bill printing company in the United States; they're publicly traded. If you get a paper bill from Comcast, Charter Communications, or DirecTV, chances are it comes from one of CSG's two bill printing plants in the US. And this thing is, in my mind, one of the most pathologically worst-case architectures to do DevOps on: this bill printing application runs on 20 different technology stacks, including a .NET thick client, a thin client, J2EE, COBOL, assembler, mainframe COBOL, mainframe assembler, mainframe VSAM, mainframe DB2; it's all in there, right? To execute a deployment required 20 simultaneous deployments on 20 different technology stacks, and it would take them 14 days to execute. So over the course of a year, they went through a DevOps transformation, and they went from two releases a year to four releases a year, but success was predicated on the notion of a daily deployment: every day, a team spanning Dev, Test, and Operations would deploy into a UAT environment. What were the outcomes? Within a year, incident count went down by 90%, and mean time to repair went down by 98%.
But most importantly, the code deployment lead time went from 14 days down to one day. So instead of 14 days of a release team trapped in a war room trying to get things running, with executives coming in every hour asking, are we done yet (to which they would have to honestly respond, no, we're not done yet, we have 13 more days to go), within a year it was done by 1 p.m. on day one. And the Xboxes come out, because there are no live-site incidents. So it's great for Dev, Test, and Operations, but it's also great for the business and customers, because they can often get the value of the features in half the time. And yet there's another side. One of the more interesting patterns for me in watching DevOps is the notion of developers being put on pager rotation. Patrick Lightbody said in 2011: what we found was that when we woke up developers at 2 a.m., defects got fixed faster than ever. And Werner Vogels said it even more succinctly: you build it, you run it. Now, I'm very well aware that jackasses like me, showing off jackass slides like this, are probably mobilizing an entire generation of developers to hate DevOps. They will sabotage every DevOps effort they see, because they'll say, we did not become developers to wear pagers; pagers are for ops people; the whole reason they became ops people is because they like pagers, right? And although I will recognize that there is an internal consistency to that logic, I think there is a more compelling narrative, and it comes from Tim Tischler, who for many years led the DevOps initiative at Nike.
He said, as a developer myself, the most satisfying point of my career was when I got to write the code, when I got to test it myself, when I got to push it into production myself, when I got to see happy customers when it worked, when I got to see their angry shaking fists when it didn't, and when I could fix it myself. The point is not just that I could have fixed it faster without opening a ticket and waiting a day, which is true; the point is that I could have learned something, so I didn't make the same mistake the next time around. And what he said was that our ability to self-test, self-deploy, and self-fix has diminished over the last decade, and paradoxically, it's being put on pager rotation that allows us to get that joy back. Kind of cool, right? It's great for Ops and it's great for Dev. And I think it's because, ultimately, we're both engineers. Increasingly, as ops professionals, I think we're going to be using development philosophies, development practices, and increasingly the same tools as developers, and ultimately it's because we're both engineers working in the value stream together. So, surprise number one was the business value of DevOps. The second surprise was just how great DevOps is for both Development and Operations. The third surprise is that there's this measurement that looks very tactical and easy to dismiss, but I actually think it's probably one of the most strategic measurements of any technology organization, and specifically it's code deployment lead time. Even though in the DevOps community, the one metric we love talking about is deploys per day, right? It's kind of embedded into the DevOps community: at Flickr, we do 10 deploys a day, per the famous Allspaw and Hammond slide from 2009.
But in the Lean community, especially in manufacturing, that is obviously not their favorite metric; their favorite metric is lead time. And there's this deeply held belief, going back almost 60 years, that lead time is the most accurate predictor of internal quality, external customer satisfaction, and even employee happiness. What we found in our benchmarking work, spanning 26,000 respondents, is that that property applies to our work as well. In manufacturing, they would measure lead time as how quickly we can go from a customer order, or raw materials arriving at the plant, to finished goods leaving the plant. In our world, in our research, we specifically measured lead time from the point at which changes are introduced into version control, through integration, through testing, through deployment, to the point where customers are actually getting value. And that begs the question: why do we start the lead-time clock there? Why don't we start it earlier, when a feature is accepted by development, or even when an idea is first conceived? The big learning for me was that the point at which a change is introduced into version control is the dividing line between two very different parts of the technology value stream. To the left of version control is design and development. The nature of design and development work, like the Facebook Chat story, is that we're often doing work for the first time, maybe never to be repeated again. So the lead time of that kind of work is longer and highly variable, because we never get a chance to practice it; I think that's just a fact of life. For everything to the right of changes being committed into version control, we want the exact opposite characteristics of work. That's testing and operations. We want testing and deployment to be happening all the time.
We want it to happen mechanistically, repeatably, the same way every time, and we want it to happen quickly. So code deployment lead time simultaneously predicts the effectiveness of testing and operations, but it also predicts how quickly we can create feedback for design and development. In other words, if I'm a developer and I make an error, and I only find out about it nine months later during integration testing, then the link between cause and effect has surely been lost. This is where we get into the blame game: who changed what? Not me, it must be that person. In the ideal, that error should be detected within minutes, because that's when quick automated testing kicks in. That not only creates fast feedback for developers, it also predicts how quickly design and development can get feedback from external customers; you can't do a lot of experiments if you're only deploying once a year. So code deployment lead time is a great predictor of testing and operations effectiveness, as well as how quickly we can create feedback for design and development. That was learning number three. So surprise number one was the business value of DevOps, two was how great it is for Dev and Ops, and three is that this tactical measurement is actually not so tactical at all; in my mind, it's probably the most strategic measure of any technology organization. Surprise number four was Conway's Law. I noticed around 2012 that it was very difficult to go to any DevOps event and not have someone talking about Conway's Law. For those of you who aren't familiar with it, I think the most popular incarnation of it was actually framed by Eric Raymond. He wrote the great book about open source, The Cathedral and the Bazaar, and he's also responsible for The New Hacker's Dictionary. His definition was this: if you have four groups working on a compiler, you will get a four-pass compiler.
And this is based on the famous observation that Dr. Melvin Conway published in 1968. I think I intellectually got that, but I'll be honest, there's no way I could have actually explained back to you how it would impact how we design our work and how we execute that work in the DevOps value stream. So one of the biggest aha moments for me was seeing Conway's Law at work in this famous story about Sprouter at Etsy. Let me tell you the story and show you how Conway's Law is the backdrop of it. Back in the bad old days at Etsy, in 2008, in order to ship any piece of business functionality, two teams were required to do work: you had the devs working on the front end in PHP, and you had the DBAs making changes to the stored procedures in Postgres. These two teams would have to coordinate, marshal, sequence, and prioritize their work. So they said, this is a problem, and their solution in 2009 was to create Sprouter, which stands for stored procedure router. The idea was to enable devs and DBAs to work independently and meet in the middle inside Sprouter. The problem is that they went from two teams having to coordinate, sequence, and marshal work to three teams having to coordinate, sequence, and marshal work. And as they said, this required a degree of synchronization and coordination that was rarely achieved; every deployment became a mini outage. So as part of the great Etsy transformation, they said, where we have to go is fully empowering developers to independently make changes just by working inside PHP. So they created a PHP object-relational mapping (ORM) layer, so that they wouldn't have to make changes to the database directly. The end result was that not only did reliability go up, but lead time went way down.
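To make the ORM idea concrete: a toy object-relational mapping layer (hypothetical class and field names here, not Etsy's actual code, and in Python rather than PHP) shows how schema-adjacent logic can live in application code that a single team owns, instead of in stored procedures owned by another team:

```python
# Hedged sketch: models declared in application code generate their own
# SQL, so changing a field is a one-team code change, not a cross-team
# stored-procedure change. All names are illustrative.

class Field:
    def __init__(self, sql_type):
        self.sql_type = sql_type

class Model:
    @classmethod
    def create_table_sql(cls):
        # Collect declared fields in definition order and emit DDL.
        cols = ", ".join(f"{name} {f.sql_type}"
                         for name, f in vars(cls).items()
                         if isinstance(f, Field))
        return f"CREATE TABLE {cls.__name__.lower()} ({cols})"

class Listing(Model):
    title = Field("TEXT")
    price_cents = Field("INTEGER")

assert Listing.create_table_sql() == \
    "CREATE TABLE listing (title TEXT, price_cents INTEGER)"
```

The design point is ownership, not syntax: once data access lives beside the feature code, one team can develop, test, and deploy the change end to end.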
And this shows how one of the goals of DevOps is to fully enable small teams to independently develop, test, and deploy value to customers. I think this is a great example, at least it was for me, of how Conway's Law can hurt us, from 2008 to 2009, as Etsy went from two teams to three, and how Conway's Law can help us, as they went from three teams down to one. So the real lesson for me was that our organization and our software architecture must be congruent. It's not enough to shuffle teams around; we must also have an architecture that enables those teams to independently create and deploy value to customers, and we can't do that if all the teams are tightly coupled together, where every time we want to do something, we have to coordinate with maybe hundreds of other developers, testers, and ops people. And by the way, that becomes almost impossible when we organize our teams by technology: DBAs here, Postgres there, MySQL there, Oracle there, J2EE there, Windows there; everything requires a lot of handoffs. So: surprise one, the business value of DevOps. Two, DevOps is great for Dev and Ops. Three, code deployment lead time. Four, Conway's Law has a lot to do with the outcomes we get. Surprise number five is that I think when historians look back at the DevOps movement, and technology in general, they'll say DevOps is probably a subset of something much larger, which they would probably call dynamic learning organizations. Dr. Steven Spear wrote one of the most famous Harvard Business Review papers, called Decoding the DNA of the Toyota Production System, based on the PhD dissertation he did at the Harvard Business School.
As part of that, he actually worked on the assembly line, on the production floor of a tier-one Toyota supplier, for six months. And before the Toyota executives let him do that, they said he must first work in a Big Three auto plant for 30 days, essentially saying: you will not understand the lessons that are going to be imparted on you until you work in a more conventional plant. He then extended that work beyond the Toyota Production System: to helping build a safety culture at Alcoa, to engine design at Pratt & Whitney, to the design and operations of the US Navy's Naval Reactors program, and so forth. And he said that while designing perfectly safe systems is likely beyond our abilities (and by the way, there is potentially no work more dangerous and complex than the work that we do), safer systems are achievable when four conditions are met. What I want to do is share with you what those four conditions are, and I'll highlight some of the technical practices they might remind you of. But I want to focus on one of those capabilities, because when I took this workshop at MIT, it hit me that there was a big blind spot within the DevOps Handbook authorship team. In fact, I would blame Dr. Spear for about a two-year delay in the five years it took to get the DevOps Handbook out. So Dr. Spear asserts that there are four conditions that must exist. One: you must see problems as they occur. In other words, any incorrect assumptions we hold must be quickly revealed, both in the design and operations phases of any complex work system. That's assertion statements in code; that's production telemetry; that's all the telemetry we want to put everywhere, so we can actually see whether the system is behaving as we think it is, and correct it when it isn't.
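Capability one, seeing problems as they occur, maps onto very ordinary code-level practices. A hedged Python sketch (the function, metric names, and log format are all hypothetical) combining an assertion that surfaces a violated assumption immediately with telemetry that shows whether the system behaves as we think it does:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("checkout")

def process_order(items, prices):
    # Assertion: reveal the incorrect assumption now, not nine
    # months later during integration testing.
    assert len(items) == len(prices), "items/prices out of sync"
    start = time.monotonic()
    total = sum(prices)
    # Production telemetry: emit what actually happened, so dashboards
    # and alerts can compare reality against our assumptions.
    log.info("order processed items=%d total=%.2f latency_ms=%.1f",
             len(items), total, (time.monotonic() - start) * 1000)
    return total

assert process_order(["book", "pen"], [12.50, 1.25]) == 13.75
```

In real services the assertion would typically become a validation error and the log line a structured metric, but the principle is the same: make the violated assumption visible at the moment it occurs.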
The second capability is that when bad things happen, we must swarm the problem, not only so we can restore service faster, but so we can create new knowledge. The paragon of this principle is the Toyota Andon cord: when something goes wrong, you pull the cord, the entire assembly line stops, and they do it 3,500 times a day. Essentially what they're saying is that we need to make systemic fixes then and there, because if we don't, we're going to have the same problem 55 seconds later. Daily workarounds happen in our world too, but because our work takes longer than 55 seconds, they're just less visible; they are just as destructive. In our world, this would be continuous testing, continuous builds, continuous deployment; dropping whatever we're doing when something goes wrong; helping peer-review other people's code, because getting their changes into production is actually more important than whatever I'm doing right now, and tomorrow it might be the other way around, when I need someone else to peer-review my code. Because lead time predicts effectiveness. But the really big surprise for me was capability three: there has to be some mechanism by which local discoveries can be integrated to create global greatness. In other words, how do we elevate the state of the practice so that genuine learnings are created and integrated everywhere? And number four is that leaders create new leaders. But let me share with you what capability three is all about; for me, this was the most profound. What are the ways we propagate learnings in the DevOps community? One is the notion of a single shared source code repository, and I think this is so important for operations and information security.
The whole idea is that we put our best expertise into code, so that anyone who pulls from it can inherit the best known understanding and expertise of the entire organization. The most famous example of this is the monolithic source code repository at Google: every engineer has access to all the Google properties, everything gets executed through continuous deployment pipelines and build systems inside the repo, and only one version of each library is allowed. Contrast that with a friend of mine at a large bank, who said: of the 93 versions of Java Struts, we are running 92 of them in production. So: consistency and conformity. Blameless postmortems: the idea that when something goes wrong, we create the conditions in which we can talk about problems. As Bethany Macri from Etsy said, prevention requires honesty, and honesty requires safety. How do we make it possible to create an accurate timeline of what actually happened, so that we can talk honestly about what the right countermeasures should be, so we can ideally prevent those bad things from happening again, and if we can't prevent them, at least enable quicker detection and recovery? Chaos Monkey: eventually, if you stop having severe outages, you run out of things to talk about in blameless postmortems. So you talk not just about customer-impacting incidents but about team-impacting incidents: of the seven safeguards designed to prevent a customer-impacting incident, six of them failed, right? And if you run out of those, eventually you have to create your own failures, like Chaos Monkey, where Netflix randomly kills production compute instances all the time. By the way, did you know that they only run it during office hours? In other words, you don't actually want to wake people up needlessly at 2 a.m.; you do it when everyone's in the office, just like when you would do a deployment.
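That office-hours detail is itself a design decision worth sketching. A minimal, hypothetical failure-injection gate in Python (not Netflix's actual Chaos Monkey code; the function names and the 9-to-5 window are assumptions for illustration):

```python
import random
from datetime import datetime

def should_run_chaos(now: datetime) -> bool:
    """Only inject failures on weekdays between 9am and 5pm,
    so responders are at their desks rather than woken at 2 a.m."""
    return now.weekday() < 5 and 9 <= now.hour < 17

def pick_victim(instances, now, rng=random.Random(0)):
    """Pick one instance to terminate, or None outside office hours."""
    if not should_run_chaos(now) or not instances:
        return None
    return rng.choice(instances)

assert should_run_chaos(datetime(2017, 5, 3, 10, 0))      # Wednesday 10am
assert not should_run_chaos(datetime(2017, 5, 6, 10, 0))  # Saturday
assert pick_victim(["i-1", "i-2"], datetime(2017, 5, 3, 10)) in {"i-1", "i-2"}
```

The interesting part is the gate, not the kill: deliberately injected failure is only useful when the people who can learn from it are present.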
Learning days, DevOps Days, internal technology conferences: another way we can have the people who are creating greatness spread and propagate it, and set the cultural norm that this is what we want within our workforce. And certainly I think open source is a part of that. One little side note: one of my favorite quotes, one I think about all the time, is this: you're only as smart as the average of the top five people you hang out with. And I think it's in communities like this one that we can actually create that peer group where we can learn. Oh, there's such a great story here; I actually put these slides in for you, but I don't have time. All right. Surprise number six: I think there's this misconception that DevOps is just for the unicorns, the Googles, Amazons, and Facebooks, and not for the horses: large, complex organizations that have been around for decades or even centuries. This has been my area of passion for the last four years: studying not so much the unicorns, but how DevOps principles and patterns are being used in organizations that have been around for decades or maybe even centuries. We're now going into the fourth year of a conference we call the DevOps Enterprise Summit, and the goal is really to collect these learnings. In fact, there are 48 case studies in the DevOps Handbook; 30 of them are from large, complex organizations, and they almost all came from this conference. We ask leaders of technology organizations to give experience reports: here's the industry we compete in, here's my organization, here's where I fit in the org chart, here's the business problem we set out to solve, here's what we did, here's what we learned, and here are the problems that still remain. And the reason for that is that as adult learners, we don't learn so much from theory and what people say we should do; we learn from what people did.
From that, we can draw the learnings that we need. By the way, we did one in London last year, and what was interesting about London was just the age of these organizations. Barclays was founded in the year 1634, right? UK HMRC, Her Majesty's Revenue and Customs, was founded in the year 1200. I don't think there's any code that goes that far back, but there are certainly traditions and values and practices that do. So there are many, many awesome DevOps outcomes; there should be no doubt that large, complex organizations are achieving the same outcomes that the unicorns have been achieving. But there's one thing that is astonishing to me, which is the level of courage being exhibited by these leaders. I think every one of them was given some degree of air cover, but almost every one of them, at some point in their journey, wildly exceeded the air cover they were given, essentially putting themselves into some degree of personal jeopardy. And the question is, why would they do that? I think the reason is that every one of them had a sense of absolute clarity and conviction that what they were doing was needed for their organizations not just to survive in the marketplace, but to win in the marketplace. As a little example of what courage looks like: I got to shadow Heather Mickman for many years. She was a senior development director at Target, and I noticed a certificate on her desk, this Print Shop Pro-looking thing, and it says: to Heather Mickman, Lifetime Achievement Award for Annihilating the TEP and the LARB. So I asked her, what are the TEP and the LARB? TEP stands for the Technology Evaluation Process, and LARB stands for the Lead Architecture Review Board. Whenever you wanted to do something novel and scary, like, say, use Tomcat, you would fill out the TEP form, and eventually you would get the right to pitch the Lead Architecture Review Board.
So you walk into a room, and there are all the dev architects on one side and all the ops architects on the other. They pepper you with questions, they start arguing with each other, and they assign you 50 more questions and say, come back next week, come back next month. Her reaction was: why should anyone on my team have to go through this? In fact, none of the 2,000 engineers at Target should have to do this. In fact, why does this even exist? And she said no one could really remember; there was some vague memory of something terrible that happened 16 years ago, but the details have been lost. So some months later, they actually abolished the TEP and the LARB, and I think that is one of the markers of the people driving these transformations. I've now learned how to talk about this more specifically. I was talking to a friend, Dr. Steve Mayner, who as part of his PhD program was studying transformational leadership, and I asked him, what's that? He rattled off these characteristics, and my jaw hit the floor when I heard them, because in my mind they exactly verbalized the behaviors and values I've seen in these leaders. So I just want to share with you what those are. The first one is inspirational motivation: can you articulate a clear vision, inspire passion, and help get other people on board? The second is idealized influence: can you be a role model and set the example? Can you be a lifelong learner and encourage that in others? The third is individualized consideration: can you coach others, enable others, keep lines of communication open, and recognize other contributors? And the fourth one I love: this notion of intellectual stimulation. Can you challenge the status quo? Do you have a relentless need for improvement? In other words: just because we did it the same way for 16 years, that's suddenly unacceptable now, and that's where I'm going to focus.
How do you empower decision-making, and so forth? We did a little experiment where we asked about 100 people to take an MLQ (Multifactor Leadership Questionnaire) assessment, and it turns out the DevOps Enterprise community self-identified as transformational leaders. We actually integrated that into the 2017 State of DevOps Report, and holy cow, there are exciting findings to come. So with that, why do I think this is important? Over the years I've thought about the DevOps mission in a lot of ways, and I think I've settled on this. IDC, the analyst firm, says there are about eight million developers on the planet and eight million ops people on the planet, and at best, I think the wildest optimists could say we're at 0.5% adoption. You take all the unicorns, you take the segments within the horses, and that says we have 99% left to go. I think the real goal is to get every one of those 16 million engineers to be as productive as if they were working at a Google, Amazon, or Facebook, and there's no doubt that when we do that, we will unlock trillions of dollars of economic value per year. That's not going to happen in the unicorns; that's going to happen in the largest brands in every industry vertical. So I think that's the mission at hand. And yes, many people still joke that it took five and a half years to get the DevOps Handbook out, but it's out, and the thing I'm proudest of is the 48 case studies, most of them from large, complex organizations; I thought it was definitely worth the time we took. So if you're interested in a 340-page excerpt of both the DevOps Handbook and The Phoenix Project, all the videos and slides from DevOps Enterprise (which are online), the white papers we've been doing at DevOps Research and Assessment (DORA), and a whole bunch of other stuff, just send an email to realgenecomentsenderslides.com with the subject line DevOps.
Don't take a picture, don't write it down; just send an email to realgenecomentsenderslides.com with the subject line DevOps, and you'll get an automated response in a couple of minutes. So with that, Dominica, thank you so much, and I guess I get to hand it over to Nell. Thank you.