 Let's get started. Thank you for attending this panel conversation. This is an interesting one. This panel is called The Continuous Delivery Paradox: How to Balance Speed with Value. And I'm hoping that we are going to have a very passionate, if somewhat controversial, topic come up, because I personally know all of these members on the panel, and they are an extremely fun bunch to have. It's hard to corral them, so it'll be interesting. So first, I'd like to introduce Garima Bajpai. I'm sure all of you know her. She's the founder of the DevOps Community of Canada, and she's also the chair of the ambassador program here at the Continuous Delivery Foundation. She does a lot of other things, in her day job, in her night job, travels all over the place. She's a published author. She's an amazing person to work with. The next one is Carl Martin. Carl has held several roles. He comes with over 22 years of experience improving organizations and their processes, helping with the people and cultural aspects of things. He's more into services around product-market fit, strategic processes, tooling, and modernizing organizations. In his most recent role, Carl has been the CTO at Delving. And I've had the honor of working with Carl at Pivotal and then VMware. Last but not least, we have Rick Clark. He serves as the global head of cloud advisory at UST. He's a technologist and a strategist, and he also has more than 20 years of experience leading in cloud, open source, Linux, and other things. He created the number one cloud operating system, Ubuntu Server, so that's one of his claims to fame. The next one is, he decided he was a little bored, went to Rackspace, and then co-founded OpenStack. Yeah, we don't talk about that one as much. For good reason. And then he's worked as the SVP of cloud infrastructure at MasterCard and then VP at Reliance Jio in India. 
And he helped build one of the largest telecommunications infrastructures. So that's an amazing thing, and I'm really happy to have you on the panel. Like I said, they have, I think, a combined 70 years of experience in the industry. I'm your moderator, Gautham Pallapa. I'm an executive advisor at VMware; I work with C-suite and executives. I've also led a lot of global teams with mission-critical workloads: things like Enhanced 911 for all of North America, various government organizations, notifications, conferencing, collaboration, and so on. So this kind of topic is something all of us are really passionate about. For me, at least, it's all about improving the quality of human life using technology, because that's one of the primary reasons why we are technologists and why we are passionate. So coming to this conference and listening to all the talks about continuous delivery is something amazing and dear to all of us here. Having said all that, I want to jump right in. And to prime the pump, I'm going to ask the first question to you, Garima: what are some pet peeves or common mistakes that teams make when adopting continuous delivery, and how can they be avoided? So thank you, Gautham. As Gautham mentioned, I am a community leader, so I would start with one of the biggest points I can refer to: we actually overlook burnout and frustration. It is a serious issue. And if you think about how and why we do that as leaders, it is the technology-first approach. Tools come before people. That is one of the biggest pitfalls I have seen through my journey of continuous delivery adoption. Another thing I would also like to refer to is competing priorities. As senior leadership, I would say that we spend 80% of our time in consensus building. And after that, we have competing priorities, which result in burnout and frustration. There's another aspect of it. 
This is my own experience through the continuous delivery journey over the years: once we align and build consensus, there is limited evolution happening during adoption. The approach toward a product-oriented mindset has not settled in. So people don't think about roadmaps. They don't think about the evolution journey. And this adoption, if you think about it in the context of a big enterprise, can take you years. What it leads to is that after four or five years, you're still on that journey and you have not achieved the business goals you laid out for yourself. So these are some of the adoption challenges that result in not creating that envisioned business value which you crafted for yourself at the beginning of this journey. That's great. Thank you, Garima. And I want to add onto it a little bit. As humans, we look at our goals and the things we want to accomplish as outcomes, as roadmaps, as things we want to strive toward. But somehow, when we go into the organization, or when we look at all these transformation efforts, they suddenly become projects. They become milestones and checkpoints. So there is that sudden shift in how things are viewed. So, Carl or Rick, who wants to jump in? I'll jump in. Oh, you were about to do it. So I like all this talk about people being important, but I'm going to, I guess, go against the grain a little bit. I think one of the biggest mistakes I see, and continue to see, is that pipelines are not self-sufficient. And I think Log4j really exposed that: people were building things that required someone from the outside to do something. They weren't declarative. Things need to be declarative and automated. So, back to the technical things. Everything needs to be declarative. If you built a CI and CD system and you're requiring developers to build images for you, then you've made a mistake. 
And when Log4j happened, you were waiting for people to do things. And there are many things like that which make it not self-sufficient, not isolated. So that would be my primary thing. I'm always grumpy when people haven't talked with security. They're like, okay, why don't we do continuous delivery? And they invest in building out automation infrastructure. And then at the end, they're like, okay, now we need to validate that it's secure. And it's like, well, that's still a quarterly review. And then people don't talk to finance. And it's like, okay, we fund these things on an annual funding cycle. So when we think about continuous delivery, what is your cycle time? Where's the start? It's like, well, when does that feature get funded? And then what's your lead time from when it's funded to when it can actually be built and when work can start? So... Yeah, and to finish this point, I take this back to the people aspect. Because when you think about speed, speed is a complicating factor in this journey too, and it leads to an innovation gap. People need time to think, innovate, experiment, and follow through on their experiments. And also, let's say you fail in your journey: you need to take that time out to retrospect. So I think one of the biggest challenges in the continuous delivery journey, when I talk about large enterprises, is the innovation gap. Yeah, there are a number of organizations that I interact with where sometimes there's no permission, or there's not even time, to have that innovation sprint, or to have time for yourself to actually do a retrospective and introspect on why the thing happened. You very much used the word failure. Obviously, one of the things about DevOps is to fail often, fast, and cheap. I like to not use the word failure, because for me, failure means that it was a waste. I like to use the term unintended outcomes, because that's truly what it is. 
You learn a lot from something that you thought you would get but that did not happen. So your hypothesis... It's rapid unscheduled disassembly. There you go, yeah. But you're learning a lot from it. And I think enterprises at least, especially those that are trying to switch and trying to catch up and deliver things faster, especially over the last 36 months, do not have the luxury of time. Which actually goes into my second question, and this is for you, Carl, because I know you're very passionate about this: how do you balance the need for speed with the need to maintain high-quality code and minimize risk? Well, first of all, I don't like to talk about speed, because speed is a scalar. The direction that the team is going is more important than any sense of the velocity with which they're getting somewhere. You need to have some confidence that you're building something useful before you can even talk about the speed of delivery. The second thing is, I don't like to talk about speed because I find it tends to build walls between people. I like to trust that the people working on the product are doing their best and being professional. And so it's like, oh, I want you to work harder, I want you to work faster. Well, if I knew how to work faster, I'd already be doing it. So you're insulting my professionalism and saying that I'm lazy, or that I'm not doing things I should be doing. So I find that talking about speed breaks the conversation in two ways. I think the more interesting thing is to talk about, well, what is the chunk size? How big a piece of work are we doing, and can we make that smaller? What is the waste? What are we building that we don't need to be building, and what activities can we get rid of? And so I find it useful to take the conversation around speed and ask questions like, what do you mean? 
There's a big thing going on right now where a lot of executives are feeling like, oh, my teams are really slow. And there are some very interesting things going on in software. It's like, why does it feel slow? I blame JavaScript frameworks in substantial part. They're terrible. And when you start talking about what is the pain you're feeling when you talk about speed, and start teasing that out into the chunk size, into delivering things sooner, then you start to talk about quality. And oftentimes delivering small pieces more frequently increases quality. But that also is ideally a conversation between the people building the systems and the people paying for them. Like, well, how much quality do you want? What is the appropriate level of quality for this? What is the appropriate level of risk, and the appropriate level of reliability? You know, speed is easier to measure, and easier to game, than quality. How do you measure speed in software? What units does it have? You can measure something that you call speed, but I'm getting into the next question already. No, but hang on. There are two different things, and I'd like us to at least make the differentiation of speed versus velocity. Speed is how quickly I can make a particular widget. It doesn't have to move from one place to another, but how quickly I can make that particular widget, or modify it, or transform it in some way. So how quickly I can run 10 tests that just assert, yep, I did it, cool: that is speed. But velocity is that I've approved it and it can actually go to the next stage, because I've had the right amount of test coverage and I'm confident enough that it can go to the next stage. That becomes velocity. Well, instead of speed, why don't I say quantity? Quantity versus quality. Like, how many times? Lines of code. Yeah, how many times you deploy versus the quality of those deployments? Counterpoint: YAML. 
I'm going to be amazingly productive because I have 10,000 lines of YAML now, if I work in a Kubernetes environment, right? So, okay, sorry. As you can see. The more YAML the better, that's the idea. I'm sorry, just to package up the answer to the question: I try to pivot the conversation and understand, what are your real business concerns? You feel a conflict here, and you're talking about speed and things feeling slow. Let's tease that apart into things we can actually talk to people about in a respectful way, and find waste. Like, are we building the wrong thing, or are we spending too much time trying to get React to work because we didn't turn on strict mode at the beginning of the project, for example. So that actually goes into the next question, which is about measuring. How do you measure, what are the metrics, and how do you know that you're successful? For example, we hear all these stats everywhere, like company X deployed a thousand times. Almost everyone has embraced the DORA metrics, and they think that they are the be-all and end-all of measures. But there are only some dimensions that the DORA metrics are going to measure, and one of them is deployment frequency, right? If anyone attended Lee's talks, he went through every one of them, all five of them. So company X says that it deploys a thousand times a day, and so it's super cool that it does that. And then another company says, we have automated 85 to 90% of our pipeline, so we hardly have any manual friction in between. Does that mean they're successful? What exactly are the true measures or metrics that we need to look at when we want to claim that we're successful in a continuous delivery journey? And Rick, you've had experience building this in a number of large-scale organizations, so I'd like you to start on this: what do you actually consider success metrics in CD? Well, I don't think there's one answer. 
I think that the metrics you should measure are based on the problem you're trying to solve. And that technical problem should be based on the business problem and business outcome you want. The reason I mentioned earlier about gaming things, about quantity over quality, is that I've seen executives at very large companies say, we're going to set our OKR as how many times we deploy. And then that goes down to the development managers, who now say, okay, you need to deploy your commits separately from your... I mean, they make sure they meet their goals. It had nothing to do with the real problem at this company: they had a quality problem. So how often you release a piece of crap doesn't tell you anything. And I want to accentuate and underscore that, because I've actually had developers who would commit string changes or front-end element color changes as a deployment and check a box saying, yep, I did a deployment. And I want to add to that: if you're using a feature flag, you can deploy one character at a time and then turn on the feature flag when you're done. And I've deployed a thousand times. I know it sounds like a joke, but if you're an executive and your $200,000 bonus is riding on that, that's what they do. People behave the way you compensate them to behave. That's a lot of Amazon instances to run all those tests. Your Amazon sales rep is very happy to support that test infrastructure. On a serious note, what we are alluding to is two aspects. The first aspect is context. When you measure things, it is very important that you have context for what you're measuring. I break it down into three parts when we talk about continuous delivery. Continuous delivery is an ecosystem of producers, consumers, and practitioners. If you think about the producers of continuous delivery tools, technology, and practices, I think their primary goal is to unlock new revenue streams. 
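[Editor's aside: the feature-flag trick just described, shipping changes dark across many small deploys and only exposing them when the flag flips, can be sketched roughly as below. This is a minimal in-memory illustration; the flag store, flag name, and checkout functions are hypothetical, not any particular flag service's API.]

```python
# Minimal sketch of a "dark deploy" behind a feature flag. Code for the
# new path ships to production across many small deploys (inflating the
# deploy count) while users keep seeing the legacy path until the flag
# is flipped at release time.

FLAGS = {"new_checkout_flow": False}  # toggled at release time, not deploy time


def is_enabled(flag: str) -> bool:
    """Return the current state of a flag (stand-in for a real flag service)."""
    return FLAGS.get(flag, False)


def legacy_checkout(cart):
    return {"flow": "legacy", "items": len(cart)}


def new_checkout(cart):
    return {"flow": "new", "items": len(cart)}


def checkout(cart):
    # The guard is what lets the new code ship dark: deployed, but inert.
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```

This is exactly why a raw deploy count is gameable: every dark deploy counts toward the OKR while delivering nothing to users until the flag turns on.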
When you think about the practitioners, the primary goal is to avoid waste and to build technology that is connected to the business imperative. So I would like to challenge a little bit how we have to go beyond DORA, and what changes need to happen in terms of measuring things. I think it's time for us to pivot and see what next-generation metrics we have to build. So Carl, hold on to that thought for a second. Please. I actually want to do an audience poll right now, a raise of hands. How many of you measure something other than DORA? What business KPIs or measures do you measure? Do any of you do anything beyond that? So feel free, raise your hands. Let's all participate. Two, okay. Three, I see three. Four, nice. Okay, any business KPIs in that? Non-technical. There's one. Okay. That's actually better than I expected. Yeah, so all right. So that's the point again. We're so focused on the technology portion of things that we forget the why. We forget why exactly we're doing all these things. What is the purpose? What are the drivers? How does this connect to the company's success, to the organization's success? At the end of the day, how does this actually improve things for the customer? And that's something that Garima also brought up, because there seems to be a disconnect somewhere. Carl, sorry, but thank you. Yeah, that's great. I just wanted to add to that: I love the DORA metrics as a hygiene metric. It's sort of like how often you brush your teeth or how often you take a shower. Okay, if you don't brush your teeth and you don't take a shower, you're likely to have real problems down the road. But at some point it's disconnected from, are you building a happy life, are you making money? And so it's like, all right, let's get this out of the way as a hygiene metric: are we doing these basic, fundamental things? And then we can start talking about what business value we're creating. 
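[Editor's aside: part of why the DORA numbers work as hygiene metrics, and part of why they are easy to game, is that they can be computed mechanically from a deploy log with no reference to business value. A minimal sketch of two of them, using an invented log format:]

```python
# Computing two DORA measures (deployment frequency and change failure
# rate) from a toy deploy log. The field names are assumptions for
# illustration, not a standard schema.
from datetime import date

deploys = [
    {"day": date(2024, 1, 1), "caused_incident": False},
    {"day": date(2024, 1, 1), "caused_incident": True},
    {"day": date(2024, 1, 2), "caused_incident": False},
    {"day": date(2024, 1, 8), "caused_incident": False},
]

# Window covered by the log, inclusive of both endpoints.
days_covered = (max(d["day"] for d in deploys) - min(d["day"] for d in deploys)).days + 1

deploy_frequency = len(deploys) / days_covered                              # deploys per day
change_failure_rate = sum(d["caused_incident"] for d in deploys) / len(deploys)

print(f"{deploy_frequency:.2f} deploys/day, {change_failure_rate:.0%} change failure rate")
# → 0.50 deploys/day, 25% change failure rate
```

Nothing in that arithmetic knows whether any deploy carried value, which is exactly the panel's point: the numbers are a floor, not a goal.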
Are we making money? Are we actually headed in a valuable direction, at speed? Nice. Delivering very small pieces of value on an incremental basis. Well, you know, we talked about delivering a character at a time. What is the size that is useful? I have an idea that, at least in the early stages, as you're becoming more mature in your CD capabilities, it needs to be small enough that you understand it, can learn from it, and can quickly find a problem; if it breaks something, you know exactly what it is. If it's smaller than that, you don't learn anything from it. And if it's too big, it's too difficult to figure out. That would be mine, but are there other ways people decide how big a chunk you deploy at a time? And how are those decisions being made? It is an interesting question. I'll give my take, then the rest of you, and then I'll open it up to the audience, because I'd like to hear from all of you. I'm happy to run around with the mic. What I've seen is that it eventually ends up in your user story. So the size of the chunk of whatever you're deploying becomes part of that user story. It depends on how you're writing it, be it a BRD, or something in Atlassian Jira where you're fleshing it out, or Pivotal Tracker if you're still using it. But that becomes the atomic size of something. It's usually a feature or a function that can be consumed by a customer. That's what I've seen. And I want to add this point here: probably we are not at the stage of generalizing the incremental size of deliveries, because we haven't been able to create the feedback loops which are required. If you talk about progressive delivery, I think there is more work to be done in that area to ensure we have the maturity to define that incremental size. Garima works with test infrastructure that involves warehouses full of radios, and so that feedback loop takes a long time. 
I spent a lot of time working in cases where the feedback loop is nearly free, where you're pushing to a Heroku-style infrastructure. And my answer is, I've been so happy on teams where we've been doing a deploy every pair-day or half pair-day. We do a lot of pair programming and try to wrap up the atomic pieces of work. It's somewhere between one and two times a day. Yeah, it comes back to the question: did you feel accomplished by delivering that increment and taking that stab at it? What is the quick win you established through it? At the end of the day, it can be different for different industry segments, different maturity levels, and different sizes of company. So I'd like to tweak your wording a little bit, only because he started with the feature flags and the individual characters: feeling accomplished is probably not enough. I think, did you generate value, is probably going to be the qualifier. Not just, did you accomplish, because every time you deploy as a developer... I mean, I've been a developer; every time I check in code, I feel accomplishment. I'm like, oh shit, I did something awesome, right? And I get that serotonin, I get that dopamine, and so I feel happy and I feel accomplished. But if I deploy something that actually generates value, I think putting that as a qualifier up front is probably going to be much more effective. And this is really interesting: you can offer value to some of your users, and there's a real temptation, and I've experienced this oftentimes, of, I want to hold off until I have something that's a more complete product. And by doing so, you are depriving those humans on earth who would have benefited from that incremental product. And many of these people exist. 
And it's like, okay, that really shitty one-tenth implementation that I got done in three days? I can put it in their hands in three days, and they can start driving value from it now, rather than waiting six or eight weeks until I get a bigger package that gives more users value. And then I can incrementally deliver on top of that. I would challenge the idea that most developers care about value. I think everyone here is an outlier, and I have data. The fact that we're sitting in here, and we came to this, and we care enough to be here learning about this, makes us outliers, right? There are many, many, many developers in the world. Most of them don't come to these. So we have a desire, and we're enthusiasts; this is something we care about. But one of the things I encourage when I talk to enterprises is: don't listen to only your noisy developers. Do a survey of all your developers and find out what they want before you do something about it. And what most of them want, and I've done this now at two companies, is to get their paycheck, do their job, get their bonus, and go home. If they get their bonus, a good review, and a raise, that's what they care about. But some of us, obviously, care more about doing the right thing and feeling that value. I think we're outliers, and if we design systems for ourselves, we're designing them for one or two percent, not everyone. You were engaged with a lingerie manufacturer and took their development team out to talk to the customers buying lingerie. And that was quite an experience for all of us. There was a fair bit of resistance early on to that field trip, but it proved to be very valuable. But also, I've seen a lot of cases where the business is hiding the customers from the developers. They don't want the customers talking to them. 
That is... Michael Coté and I did a series of podcasts about some of this, the divide between business and technology. It's because we hate each other. You've got the business side that thinks the developers are arrogant, and you've got developers who think the business guys are stupid. And I think that's the driver of a lot of this. Okay, so I... I'm going to get the panel back. We're going into philosophical waters that I don't want to go into right now. Sorry, Mr. Moderator. Yeah. So actually, this is a good opportunity. Garima, thank you. I want you to start off with your closing thoughts, because we are, unfortunately, out of time, but we can continue the conversation in the hallway afterwards. So I would like to say a few things, two of them actually. Coming back to the discussion about business and continuous delivery initiatives not being aligned: I think we will remain second-class citizens from a continuous delivery perspective if we do not connect ourselves back to the bigger picture. And how do we do that? This is something I would challenge the forum with: probably it's time for us to move beyond the measurements and metrics we have been looking at in the past. Can we take a better stab at it? Can we have flow optimization, and real-time, dynamic criteria, to ensure that we measure the right things? Then probably we will have a better chance to succeed, we will build a better connection with our business stakeholders, and we will probably become first-class citizens. Awesome. One minute. 
I love continuous delivery because I've seen over and over again that once you get over that hill of activation energy, on the other side of it is a massively positive-sum game, where the people writing the software have a better life, the people operating the systems that the software developers make have a better life, the users of the software have a better life, and the people paying for the software have a better life. And there aren't very many games in business that are as positive-sum as what I've seen with continuous delivery, so I'm so glad you're all here, and I'm having a lot of fun. Thank you. 60 seconds. 60 seconds. So I agree with what my fellow panelists just said. I think the most important thing is connecting things back to the business, choosing OKRs that are understandable by the business, that mean something to them as well. In fact, I would probably have my own metrics, and then a net promoter score with the business, just like, am I doing worse when I do this? But connect things back to the company, remember why we're doing this, and make sure that we add value. Thank you, and this is so great. We're ending the entire panel with the why, the purpose behind having CI/CD, especially the continuous delivery portion, because at the end of the day, what we're trying to do is use all this technology to improve the human quality of life, to reduce the pain, the manual toil, and the friction that people are going through, so that people can go home, be happy, and have lives. That's truly why we use technology. And so I want to thank the panel for this great conversation. We'll be in the hallway, but I think we have one minute. Does anyone have a question? Yes, Lee. Does anybody have some spicy quips on, like, Westrum-style organizations? Go. Some what? Westrum-style. Like generative, or something. So you want me to talk about the three? Yeah, yeah, go, go. 
Okay, so Ron Westrum. He was a sociologist, and he had this theory; he proposed a taxonomy of organizational culture. There are three kinds. The first one is a toxic culture. Pathological. Yes, pathological. It's very power-oriented, and it focuses on punishing people who fail. The next one is bureaucratic, which is very, very rule-oriented. In the rule-oriented one, you have a playbook for almost everything, you have manual intervention, and in case you fail, you're probably scapegoated. So that is that. But where you actually want to go, where we want to go, is the generative organization, a performance-oriented organization, where everything is happy. It's like a utopia. If you fail, or if you have an unintended outcome, you actually investigate it, you learn from it, you share the learnings, and people have a much higher level of psychological safety. And so the goal is to move from that pathological organization all the way up to a generative organization. Thank you. All right, well, thank you everyone. To the hallway. Thank you for attending the panel. Thank you very much, Gautham.