Welcome back to OpenShift Commons and today, as we like to do on Fridays, we're going to talk about transformational things, organizational, cultural, very DevOps-ish things as well. And today we have with us Jeff Sussna from Sussna Associates, but you may know him as the author of Designing Delivery: Rethinking IT in the Digital Service Economy, which came out in 2015, I think, Jeff? Yes. And it's a great combination of a lot of things we've been talking about, from DevOps to systems thinking to new things like promise theory. So, last week we did a great talk with Kevin Beard around raging against silos, and Jeff stepped up and said, hey, I've got a talk for you. So, I'm going to let Jeff introduce himself and then talk for as long as you'd like, because it's always a pleasure. And then we'll have a conversation with some of the members from the Global Transformation Office who are here with us today and take questions from the audience. So please, Jeff, take it away and show us what you do with silos versus raging against them. Thanks, Diane. Thanks for having me. It's great to be here. So, yeah, I'm Jeff Sussna, founder and CEO of Sussna Associates. We are a Minneapolis-based consulting agency, and we are very much hunkering down for winter here. So far it's been fairly pleasant and we've been able to survive the pandemic by going for lots of walks. But the weather forecast for the next five to seven days is that it's not supposed to get above zero Fahrenheit. So it'll be very much walking on the treadmill. Anyway, our clients are cloud software teams and executives that are struggling to meet the demand for software delivery that is not just continuous, but also customer-centered. And they typically engage us when they are doing some form of agile and/or DevOps and not getting the results they expect or hope for. And they say things to us like, well, we did the Scrum training, we do stand-ups, we do retros, we have an SRE team, we're running Kubernetes. 
What are we doing wrong? And what we find often, particularly in the area of DevOps, is there's a great deal of confusion about this idea of breaking down silos. You know, it's right at the heart of DevOps. It's right in the word. But unfortunately, the details, the practicalities of how you do it still remain something of a mystery. And the reality is that in any kind of complex organization, which to be honest is anything more than about 30 people in engineering, a literal approach doesn't really work. On the one hand, you can't just put everyone on one team. You know, otherwise you end up with 50 or 100 or 500 people in a stand-up every day. And on the other hand, if you look at one of the DevOps ideas of these cross-functional, self-sufficient teams where you have everything you need in order to build and run an application, that doesn't scale either. You know, the idea of having a network expert and a database expert and a security expert on every team, it doesn't scale and it starts to bloat your two-pizza teams. And I was thinking about this and suddenly I realized something, which is that the early approaches to DevOps made an assumption that they didn't tell anyone about. And that assumption was that there's a cloud. There is something that abstracts away enough of the low-level infrastructure and operations details that it actually becomes feasible to have these self-sufficient teams that can build and run applications on their own. And that's incredibly ironic, because the public cloud is the biggest silo in the history of IT. You don't know exactly where their data centers are. You don't know exactly how they do change management or what tools they use. You have this very, very opaque interface between you and them. And on the one hand, DevOps was telling us, well, stop treating IT like a commodity. Bring it in-house, stop outsourcing and offshoring things because that's your competitive advantage. 
But at the same time, there was this assumption that we would engage in the biggest outsourcing arrangement ever. But even aside from that, the cloud still doesn't solve all of your problems. Public cloud providers are very careful to make it clear that they provide building blocks, but you still own things like scalability and resilience and security and compliance. So if cloud is an enabler, what's different about it? Why isn't it just another silo? Well, it's different because it focuses on being an enabler instead of a controller. So instead of making you fill out forms and ask permission in order to get access to scarce resources like servers or VMs, the public cloud approach is, well, you want servers, let's figure out how to give you as many as you want, for as long or short a time as you want, as fast as you can ask for them. Secondly, and this is a really important point, is they're focused more on helping you than on what it is they do. Public clouds don't really care whether it's infrastructure as a service or platform as a service or software as a service. Their approach is helping you solve your problems with whatever expertise they can contribute. And that leads to them not just being helpful, but being continuously more helpful. So I remember the first consulting project I ever did, about 10 years ago or so, where I was helping a small company migrate a fairly straightforward LAMP application from a bunch of failing hardware in a colo onto AWS. And it was a pretty straightforward lift-and-shift kind of thing. We wanted to take advantage of some of the basic public cloud capabilities like multiple availability zones and elastic load balancing and so on and so forth. But one thing we couldn't figure out was how to deploy a high-availability memcache server. And after kind of futzing around and looking at products and struggling for a while, we decided not to bother. It's just a cache. 
If it falls over, it's not the end of the world. Let's finish the project and go on with our lives. So we did. And literally about three weeks after we finished the project, AWS announced a new service, which was ElastiCache, in other words, on-demand, high-availability memcache. And I remember seeing this and thinking, well, they must have been reading our emails. But whether they were or not, what they were definitely doing was watching their customers struggle with things on top of their platform and figuring out, how can we use our expertise to make that struggle simpler? So we can see an example of how this idea of focusing on helping and not just doing is important. A lot of organizations I work with, we help them with some sort of platform team. They're operating something like OpenShift. And they think that their job is to stand up and run an OpenShift farm. But what they figure out fairly quickly is, in order to recoup that investment, you know, somebody has gone to the CIO or the CTO or the CFO and said, well, if we invest in OpenShift, it will make things better in all of the following ways. But to actually realize that, it also means that in addition to the platform, application teams need to be able to consume it. And they need things that they may not have. They probably don't, because they haven't had an environment to do them in before. You know, they need to understand and have expertise in CI and CD. They need a modular architecture that fits in containers and Kubernetes and microservices, which they probably don't have. They need the skills and tools and even just the mindset to think about monitoring applications for themselves. And what these platform teams start to figure out is that their job isn't just to operate the platform, but to actually function as a mentoring organization to help application teams with onboarding and mindset change and tool migration and so on and so forth. 
So to understand where this is leading, I think we need to take a step back for a minute and understand the underlying business driver. I had an epiphany one day when I was reading a marketing website from an early software-as-a-service company. It might even have been Salesforce. And right on their homepage, they were talking about things like multiple data centers and offsite backup and advanced security practices. And I realized that they were spending marketing dollars talking about IT operations. And then they said something really interesting, which was: we update the software so you don't have to. And the epiphany was my recognition that part of what this company was selling, part of what their customers were paying for, was operations. And what happens is that software as a service transfers the cost of change from the customer to the service provider. You know, it used to be with on-premise software that the customer was always serving as a brake on the rate of change. Well, we can't deploy your new version because we have to go through a three-month change management process, or it needs an OS upgrade, which we're not doing until next year. But when software moves into the cloud, the conversation is completely different. It becomes, well, why is it taking so long to give us this feature? Where is this bug fix? Why haven't you made stability better yet? And what happens is that customers begin to demand not just value, but continuously increasing value. On the one hand, they become very impatient with delay, and taking too long to deliver improvements can actually lose you customers. But at the same time, what they're asking for is not just random change, not just this spray of features, but evolution, continuous improvement in value. And what value means in the cloud is, first of all, usefulness: that your software helps them get something done. But secondly, usability in the largest sense. Can I adopt it? Can I migrate to it? Can I get help with it? 
Can I navigate through outages? And finally, dependability, which is not just technical operability, but also things like how quickly and how successfully you respond to support calls. And the thing we have to understand is that this demand for what we could call customer-centered delivery doesn't end with the customer-facing product team, because they can't deliver it on their own. One of the things that we hear is the idea of creating teams that map to some value stream or customer journey as opposed to a functional silo. But if you think about complex customer journeys, it's pretty hard to fit them into a single team. Imagine that what you want to do is buy supplies to renovate your bathroom. There's a whole bunch of things that have to happen, that are involved. A website, a store, multiple departments in the store. So you're going to need lumber and you're going to need plumbing and you're going to need electrical, and you're going to need to check out and deal with the checkout team and the cashiers and all of the back-end systems that support all of that. So that's something that you're not going to fit into a single small agile team. And one of the questions we have to address is that agile and DevOps are very much about decomposing things into smaller and smaller pieces, whether it be user stories or deployments or microservices or microservice teams. But we still have to figure out how to put them back together, because customers buy services, they don't buy microservices. And what this tells us is that these customer-facing product teams need customer-centered delivery, they need continuous evolution, from the rest of the organization as well. So what they need is not just performance and scalability and resilience, etc. They need all of those things to be continually improving in terms of their usefulness and their usability and their dependability. So how do we do that? 
Well, the answer is by mutual service: each part of the organization treating other parts of the organization as customers, just like paying customers. And this has the effect of turning silos inside out. You know, the problem with silos isn't that things are separate, it's that they're sealed off from each other. So what we have to do is figure out how to kind of flip them inside out and connect them to each other in some way that delivers coherent service but also maintains agility and autonomy. And we can do that by asking three deceptively simple questions. The first is, how are we promising to help each other? How am I promising to help my internal customers? Second, how well are we fulfilling those promises? And finally, are they even the most useful ones to make? And we can use that as a driver for continuous value evolution and continuous service to each other. So let's look at some simple examples of what it means to think in terms of making promises to each other. If you're in support, you make a promise to end users, which is that you will help them solve problems with using the application. And more and more support organizations are thinking about their promises not just in terms of how many tickets can we close, but how often can we help the customer solve their problem on the first phone call. But you also make promises to application teams, which is that you will amplify the customer's voice. One of the best ways to understand how your system is working, how people are using it, and how it isn't working, is from support. If you're in networking, what are you promising? Well, what you're promising is high-performance, secure, and compliant data flow, so that I can connect my application to my database and get high throughput and low latency, but also be sure that I'm not violating PCI or HIPAA requirements. If you're on the platform team, what promises are you making? 
Well, you're promising to minimize the friction associated with delivering code to production. And you're also promising to maximize the ability of application teams to observe the behavior of their applications. And the interesting thing here is we haven't said anything about features. We haven't said anything about switches or routers or Kubernetes or not Kubernetes or ServiceNow or JIRA or anything like that. It's all about who we are helping and how we're helping them. So how do we do this? How do we put it into practice? Well, DevOps has an acronym, which is CAMS, which stands for Collaboration, Automation, Measurement, and Sharing. And it's a set of kind of high-level practice areas to think about when you are implementing DevOps. The problem with it is that collaboration in particular is kind of vague, and it sort of throws us back to where we started at the beginning: well, how do we collaborate? How do we organize to do this? What is this silo-breaking thing? So we haven't really made any progress. And in our work with practical, day-two DevOps tuning and repair, if you will, we've developed our own acronym at Sussna Associates, which is CHARC. And it's really about a set of principles for understanding how to organize yourself and how to interact with your surroundings. If you're going to be a mutual service provider, how do you actually do that? So the first principle is making the customer your compass, which means aligning your team with your customer's goals, going back to this idea of promises, and centering your work around what they need, not what you build. So if you're AWS, what they need is high-availability memcache without a lot of manual work on their part. What they need is not a set of VMs and a set of templates. And the critical thing here is that this principle leads us to organize ourselves for end-to-end service as opposed to feature delivery. 
So that we think about some coherent set of promises, some coherent value that we can offer, and think about everything that's needed in order to build and test and document and support it, all together. So imagine as an example a data warehousing team inside of an enterprise that is siloed along functional lines. So you have a business analysis team, you have a development team, you have a test team, you have a documentation team, you have an operations team. And there are handoffs and delays and misunderstandings and bugs every step of the way. And then what you do is you realize that there are some coherent sets of data, and some coherent, different customer bases for that information. And so you organize around those, almost like products. One way to think about all of this is that it's kind of everything-as-a-product. So you take all of the pieces of the puzzle, the business analysis, the development, the test, the documentation, the ops, and you put that in one cross-functional team around that product area, and you have them all in stand-ups together every day. And what happens is that friction and delay and errors start to melt away, because people talk to each other. As another example, one that is very deep in the legacy world, which supposedly is not really the domain of DevOps, is electronic medical record systems. They tend to be large, complicated, very old architectures, proprietary databases, very specialized hardware requirements. It's not the kind of thing that you can easily port to Docker containers. And this is an area where it makes a lot of sense. You have sufficient complexity and sufficiently specialized needs that it makes a lot of sense to have all of the low-level and high-level expertise you need together in one team. So that you have networking and database and hardware and storage and application software all working together, deploying together, debugging together, dealing with outages together. 
Again, when you do that, you start to melt away the friction and delay and waste that tend to plague functionally organized teams. The second principle is to see the horizon, which means focusing your work on fulfilling your promises, so that what you're delivering is value, not just work. I see a lot of agile teams that have this kind of heads-down, tunnel-vision approach: I'm working on this user story, I'm working on this user story. And there isn't any understanding of, well, what are we actually delivering? Who cares? You know, maybe we are on our way towards continuous delivery, but delivering why? What's the actual benefit, and does everybody understand that benefit? And we can use that principle to improve the way we do classic agile ceremonies. And I've done this both with development teams and infrastructure teams. I see too many stand-ups where everybody goes around and says, this is what I'm working on today. Well, on the one hand, that's nice to know, but on the other hand, so what? It's more interesting to talk about, what promise are we fulfilling today? What value are we delivering to our customers today, or not delivering, that we have to adapt to? In retros, I like to talk about something I call the not-enough-muffins effect, where you have a team and there's a tradition that somebody brings in muffins for the whole team every Monday morning, and there's a problem that there are never enough muffins. So you have your retro and you talk about what didn't go well, and somebody says, well, we still don't have enough muffins. You should bring in twice as many as you think you'll need. Now on the one hand, that's a good problem to solve, right? It's good for everybody to get muffins on Monday morning, but it doesn't really have anything to do with how effectively your agile process is working. And it's more interesting and more useful to ask questions about, how well did we deliver on our promises? How do we improve them? How do we repair them? 
Are we making promises that we shouldn't, because we have no hope of fulfilling them? And then when you look at things like planning and sprint goals, instead of saying, well, our sprint goal is we're going to do five stories that add database indexes: no, your sprint goal is to make search 15% faster. And it's funny, one of the things I watch a lot is friction between agile teams and DevOps teams and executives and leadership teams. And the executives are always saying, well, this is great, you did 15 stories, but so what? What's the benefit to the business? And you should be able to say, everyone should understand, the benefit to the business. And when I work with engineers and I ask them, what are you promising to deliver? They say, well, I'm promising to deliver this Kubernetes port. And one of the things I'll say sometimes is, well, what if the CEO walked over to your desk and said, why am I paying you to do this work? Typically, there's a really good answer. Well, it's going to make it much easier for engineers to test their changes quickly and easily against the whole application. Well, that's great, but we need to learn to talk in those terms. The next principle is applying small, frequent inputs. I often joke that I should offer a fixed-fee consulting service where all I do is walk around your organization and say one sentence over and over and over again: make your work smaller, make your work smaller, make your work smaller. It's amazing how flow and quality start to improve simply when you start thinking in terms of, what's the smallest next piece of value I can deliver? And it's ironic, because engineers deal in decomposing problems. That's what code is, right? You want to accomplish something, you have to break it down into its algorithmic steps. But they seem to have a really hard time thinking that way about their own work. I had an experience with a storage team that I was coaching. And I was helping them make their user stories smaller. 
And one engineer said to me, well, I can't do that. My task is, we're changing backup software. So I have to decommission the old software and install the new software on 200 servers. I can't break that down. And I said, okay, what are you doing today? And he said, well, I'm doing the first 10 servers. Boom, there you go. And why is that valuable? Because either he comes back at the end of the day and says, I did the first 10 servers, and you know everything is on track and everything is going well. Or he says, well, I only got five servers done. I got stuck with this problem. And someone else on the team says, oh, I know how to fix that. I'll help you after the stand-up and we'll get back on track. Or he says, well, I only got five servers done. It turns out that this is harder and slower than I thought. We're only going to be able to do five servers a day, not 10. In which case you know that you have to adapt your plan. Maybe you need to deprioritize something else or allocate more resources. The point is that you are able to make decisions about your work more frequently. It also means doing lean optimization: always looking for ways to take excess work in progress and handoffs and toil out of the process. Again, if you go back to that data warehousing example, it was as simple as getting the right people in a room together every day that waste started to disappear. And one of the challenges I see, one that operational groups in particular struggle with in lean optimization, is shifting unplanned work to planned work. And they say, well, we can never get out from under our unplanned work. We're always having these things come in. That's the nature of being in operations. How are we supposed to become more agile or more strategic or more proactive when we're constantly being bombarded with things? And what I teach them is a way of slowly digging their way out. How can you spend 20 minutes today that will give you 40 minutes back of free time? 
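The engineer's re-planning above, 10 servers a day and a re-forecast the moment reality says five, is easy to make concrete. Here is a minimal sketch, not anything the team actually ran; the server names, batch size, and ceiling-division forecast rule are all illustrative:

```python
# Sketch: break a 200-server backup migration into daily batches so
# progress (or trouble) surfaces at every stand-up, not at the end.

def plan_batches(servers, batch_size):
    """Split the full job into small, independently reportable chunks."""
    return [servers[i:i + batch_size] for i in range(0, len(servers), batch_size)]

def reforecast(total, done_today, planned_per_day):
    """After each batch, compare actual vs. planned throughput and
    re-estimate the days remaining -- the decision point the team
    now gets back every single day."""
    rate = done_today if done_today else planned_per_day
    remaining = total - done_today
    return -(-remaining // rate)  # ceiling division: days still needed

servers = [f"srv{n:03d}" for n in range(200)]  # hypothetical fleet
batches = plan_batches(servers, 10)
print(len(batches))               # 20 planned days of work
print(reforecast(200, 10, 10))    # on plan after day one: 19 days left
print(reforecast(200, 5, 10))     # only 5 done: 39 days left -- adapt now
```

The payoff isn't the arithmetic, it's the cadence: the plan gets corrected after one day's evidence instead of three weeks in.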
And then next week you can invest that 40 minutes to get yourself 80 minutes back. And you slowly dig out from under the dirt. A database group, as an example, was trying to automate database provisioning, but they were constantly being interrupted by user questions. And they realized that part of the reason for that was that there was confusion about how to use their services. And if they took a couple of hours to write some better documentation, instead of being interrupted, they could just refer people to the documentation. And that bought back, across the whole team, several hours a week, which they could then start to invest in more strategic things. The fourth principle is to respond to your surroundings, which means making feedback the heart of your process. One of the misunderstandings we have with agile and DevOps and things like continuous delivery is that the purpose is to deliver things, right? Working code is the measure of progress. But really, in an adaptive organization that's continuously evolving, the most important thing is not what you're delivering now. It's figuring out where you want to go next. And part of the point of continuous delivery is that it allows you to make that decision more frequently and more continuously. So on the one hand, you want to maximize your own visibility into: we did this thing. Did it work? Did it fulfill the promise? Did it help us deliver the value that we wanted to deliver, or not? And based on how people are using it, and based on its effect, where do we want to go next? But the more interesting bit is when you start enhancing visibility for others. I worked with a team where I asked the product owners to start going to the weekly operations meetings. And they stopped going, because they were bored out of their minds. And the reason was that the operations meetings were talking about, well, this server crashed and this is why it crashed. 
And the POs said to me, this is really irrelevant to us. And then I sat down with them and I showed them a Splunk graph. They had these nice dashboards on monitors up on the walls around the building that showed Splunk graphs. And I said, you see this graph? What it means is that this morning from 10 to 11, on average, it took 15 seconds to load the login page. And the POs went, ooh, that's really bad. And this other graph shows that between 9 and 10 this morning, 10% of the page responses were 500 errors. And they went, ooh, that's really bad. And suddenly they understood something technical in terms of its impact on the business and the customer experience. And the manager of the PO group said something to me in response that was one of those moments where you go, okay, I've actually added some value here. I can feel good about myself. He said, well, maybe we should go to these weekly meetings, and maybe we should be the ones reporting on these graphs. So where you really start providing mutual service inside an organization is when you start giving other teams visibility into what you're doing that allows them to do what they need to do better. Now, one of the funny things about feedback is that it's really hard. And I think these sort of agile feature factories and continuous delivery factories tend to get us into a trap where we're so focused on delivering the next thing that we forget to actually ask ourselves, well, how did that thing work? I did a workshop in Germany a few years ago for an organization that actually had a very sophisticated agile and DevOps practice. And going in, I was unsure as to how much I was actually going to be able to help them. And during the morning I asked them to do an exercise, which was to take one of their linear processes and reimagine it as a circular, feedback-driven process. 
And they all kind of smiled and nodded and winked, and one of them raised his hand and said, well, we don't really have any linear processes anymore. We've made them all feedback-driven. I said, okay, well, indulge me, let's go ahead and do the exercise. It may be very brief and then we can go on with the rest of the day. And I'd broken them into four teams. And at the end of the exercise, three of the four teams independently came to the same conclusion, which they very sheepishly reported to me. They realized that they were really good at collecting feedback. And then they didn't do anything with it. They didn't actually change based on what they heard back from their customers or their infrastructure or whatever the case may be. And they realized that they were wasting a tremendous amount of time, energy, and money gathering all of this information but not using it. So the final principle is that when you put these four practices into play, they give you the ability to turn with confidence. So that your process becomes a matter of asking, well, are we still headed where we want to go? Are we fulfilling our promises effectively, or do we need to adapt in order to achieve our aims? And do we still want to go there? And what this allows you to do is to start to pass the puck to where your customers are skating. Sorry about that. I'm using my iPhone as my webcam and I thought I had Do Not Disturb turned on, but somebody is trying to make a spam call to my phone. So if there was an interruption there for a moment, I apologize. Anyway, talk about responding to change and interruptions. So what this allows you to do is to start to flip the equation. Many organizations, they do agile and they do DevOps and they feel kind of overwhelmed. Suddenly they feel like they don't have any excuse to not do exactly what the customer demanded exactly when they demanded it. Pressure from executives and pressure from sales and pressure from product. 
But what this is about is starting to get out in front and starting to continuously understand: what is it that your customers need next? What are they struggling to accomplish? And how can we actually point them to what they want next, so that we're ahead of it? And this is the thing that allows you to really achieve this idea of amplifying continuous customer-centered delivery, and allows your internal teams to start to become, or to live up to, the inspiration that DevOps got from the public cloud, and that led it to make that assumption that we've been deconstructing as we go. So to finish up, if we go all the way back and we ask ourselves one more time, what do we really mean? What does it really take to break down silos in a software delivery organization? I think what we could say is that it's about helping each other create more value in order to help customers create more value. So on that note, I will hand it back to Diane, and thank you very much. Well, thank you very much for that, Jeff. As always, your talks are what I would call incredibly pragmatic and practical. You take things and you distill them down into really nice chunks. I was joking offline that in some ways you're kind of like the Tim Ferriss of DevOps. You turn things into chunks that we can actually do and make happen. So really, thank you for that. I hadn't seen the CHARC acronym explained, and that was very, very helpful. I have a couple of other folks on the call. Jabe is from the GTO office, and I'm just going to unmute them, if you give me a second. And if folks have questions, and Jabe and John, questions for Jeff, please go ahead. 
But I think one of the things that I really loved about this, the R part of CHARC, responding to your surroundings, is a lot of what we're doing with the OpenShift community stuff, and the things we do at Red Hat, is really about taking in all of this feedback, massive amounts of it from, you know, thousands of customers, and trying to distill it into where the innovation is going. And so I wonder if you could talk a little bit to, when you have overwhelming amounts of input from customers, distilling some of that down, how you would coach people for that. Well, one piece of advice, I don't remember who said this, and it's one of those things where I wish I had bookmarked it at the time, but there was a software company that published an article where they said that when your customers ask for something, you should forget it. And then when you read the article, what they really mean is you should write it down and then just leave it. Because if it's actually important and valuable, they'll ask for it again. So there's a certain amount of just kind of filtering signal from noise. But beyond that, the other piece of advice, and this is where I get to give my pitch for involving design: one of the things I like about the promise-based approach is it's truly customer-centered, and it gets you thinking about what your users are trying to accomplish. And understanding that and coming up with solutions to that is really what design and design research are all about. And when you do that, you know, AWS, I'm assuming, didn't just sit back and wait for somebody to call up and say, hey, we're struggling with memcache. Can you build us a high-availability memcache service? They were actually proactively going out and looking at, let's forget about what our customers are asking us for. What are they struggling with that they haven't thought to ask for yet? 
I would bet that there were a ton of questions about Memcached and configuring it in their support lines as well. So there was a lot of fodder for that conversation.

Right. But the thing that I find really fascinating, and we used to have this conversation about AWS and now we have it about all the public clouds because they've all really figured it out, is continuous innovation. I like to use a phrase, which is overwhelming your customers with goodness: when you start to use continuous delivery not just to satisfy the demand, but almost to create the demand. And again, this is where research is really important, because research gives you insight before the customer asks. What is the customer likely to ask for next week, next month, next year? And how can we give it to them before they have time to complain that we don't have it? That's the Holy Grail.

Yep, absolutely. So there's one quick question that came in from the audience, and it's about the make-work-smaller mantra that you have, which I think is what triggered the Tim Ferriss idea; I was like, yeah, I've heard this before. The question is: does it always require a consultant to make work smaller? How does one avoid constantly oversimplifying and putting too much in a sprint? And since this is planning season at Red Hat, boy, can I commiserate with that story.

It doesn't always take a consultant to make work smaller. The reason that consultants like me help is that it does take a switch in how you think about your work. What we're used to is building big things. And the agile folks back in the 80s and 90s figured out that trying to build big things in big chunks doesn't work very well. Agile was actually a very pragmatic solution to a really serious problem, which was that you had these IT projects that were going on for five or 10 years and costing tens of millions of dollars.
And when they were delivered, they didn't work and they didn't actually meet the need. So agile was a response to that: well, how do we validate whether it works and whether it's the right thing much more incrementally? The problem is that we tend to get stuck in this idea of incrementalism, which is: I want to build a big thing, how can I break it down into smaller pieces? But we live in a new world where you can't know for sure that you even want to build the big thing. Today you think you want to build the big thing. And when you look at things like Lean UX, what they're really about is: let's make sure that we want to keep building the big thing. And the best way to do that is to build a little thing that leads you in the direction of the big thing, and then to use that little thing to ask, do we still want to keep going there? One of the things that I do with clients is I use a visual charting technique that allows them to lay out the promises that they are making and the promises that they could or should make, and to think about how they want to evolve them in an iterative fashion. So instead of saying we're going to build this new feature, and I like to use Slack as an example because everybody knows Slack, instead of saying we want to build this new feature in Slack, we put it in terms of: we want to make this capability better. What's the next small thing we can do that makes it a little better? And then how can we use that as feedback to ask where we go next? So there's this important difference between incremental and iterative. And that's a very long-winded way of saying that making your work smaller isn't just about building a thing in smaller pieces. It's really about building smaller things. And delivering them.

Does that bring you a little bit to the MVP stuff, Eric Ries? That's what's ringing in my head right now when you say iterative: get something out there and see. Is that what you're espousing here?

Sort of.
So the reason I'm laughing, and I'm guessing that Jabe is probably dying on mute in the background, is that I like to refer to MVP as the worst-named good idea ever. Because if you read the Lean Startup book, the purpose of an MVP is not to create a viable product. It's to create the simplest possible test of whether what you think you should do next is viable or not. And the poster child for MVP from that book is the story of Dropbox. When they were first getting started, they didn't even know if they could get money to start a company. So the first thing they did was create a video that showed how their interface might look. They made no attempt to make it realistic. The only point was to explain: how is this different? So there was no code. There was none of this, well, we'll write a front end and collect people's money before we build the back end. They didn't write anything. It was simply: how can we ask the question of whether this is viable from a funding perspective? So there's a huge amount of confusion, in my opinion, about what MVPs are. And there's a difference between what we would call an MVP and actually delivering a viable piece of value. To deliver a piece of value, it has to work. It has to be useful. It has to be usable, and it has to be dependable. So maybe instead of viable, we should just be saying valuable: minimum valuable.

But I think that's actually the confusion you just named. A lot of people, because of the nature of the Lean Startup book and the entrepreneurial context it's set in, hear viable as just: find the valuable thing. But I think viability in fact has much more to do with things like operational burden and the ability to upgrade. Finding something valuable and finding something viable are different problems to me.
So in contemporary language that a lot of people are throwing around right now, there are outputs of a system and then there are outcomes. Part of the conversation about value is saying the outputs aren't valuable, the outcome is valuable. That's the first shift. The second shift is to ask: is it impactful? And impact often means, hey, you could create those outputs and those outcomes once, but can you create them over and over again in a sustained way? That means you have created an impact at a particular scale, a particular size. That's the whole argument about viability: the sustainability, the longevity, the holding power of being able to reproduce the value. That is viability to me.

Hey, Jabe. I think it's John. Hey, Jeff, it's John, man. Long time. Hey, how are you? Good. You know, to me there's nothing wrong with the V in MVP. In fact, Eric Ries told me in a conversation once that he would basically fire somebody if they didn't start out with something like a simple Heroku app. Right. So it isn't that; it is viable. The problem is we don't do the classic, and you know me, I'm going to bring Deming into the story, but we don't do the PDCA. We do the PD, the PD, the PD, PD. We don't follow the scientific process, a lot of the stuff you think about, the word I always struggle to pronounce: epistemology. We don't apply that process. So the problem is that these large organizations who've seeded this idea of lean startup, they've just got train wrecks of MVPs all over the place, because they're on these tight budgets: get it up as quick as possible, let's use lean startup. And then you just have this minefield of things that were never built in a continuous motion, and primarily they're not using continuous experimentation. I'll just throw in one more thing, because you and I talk about Deming so much.
Right. One of the things about Deming's loop is that it's an extension of the Shewhart cycle, which Shewhart came up with. And Shewhart basically had this observation. He said: we keep doing this thing where we put design, production, and operation in a row, these three things in a straight line. What if we make it into a circle? What if we actually hook the system back together again? And that's interesting. And then the Shewhart cycle becomes the Deming cycle, where he adds a moment of reflection and things like that, all also very interesting. The most interesting thing, since we have Jeff on the phone here, is that Shewhart came up with the Shewhart cycle 10 years before cybernetics became a thing.

Yeah. You just couldn't leave it alone, could you, Jabe?

The things we describe in cybernetics... I'm not arguing that it all comes from Shewhart, but the language that we apply and overlay on top of Shewhart and Deming, the cybernetics language, sometimes hides some of the original thinking around why it was a cycle, why it was a scientific cycle, what we were trying to get done with it. What I think is really interesting, and the last thing I'll say about it, is that it's also fascinating that Shewhart came up with the cycle inside of a telephone factory, one of the first high-production factories that created a network device, so that quality had something to do with the way in which these devices were deployed into a network. And that's why quality and that loop become such an important part of what they were trying to do at the Hawthorne factory. That's me randomly ranting about other things.

So to bring this back a little bit down to earth, I think what we're all violently agreeing about here is that the ultimate purpose of Agile and DevOps is not to deliver more stuff faster; it's to learn more continuously. And the way that you stay, quote unquote, ahead of your customer in a highly dynamic environment is by constantly learning.
And you need that loop back of: we did a thing, well, was the thing actually good or not? Do we want to do more of the same thing or do we want to do something different? So if you have a backlog, don't assume that the thing that's in the middle of your backlog should still be in the middle of your backlog. And to the point I made earlier about getting a customer request and then forgetting it, there are really interesting organizations that don't work from backlogs at all. I've seen teams that do mob programming: the entire team builds one useful, usable, and dependable thing, they release it, and they say, okay, what should we build next? They don't really care where a thing was on the backlog. They're just asking: based on what we did last, what should we do next? And the point of Agile and DevOps is that it allows you to do that much more fluidly, and it allows you to do it for your customers. In a large, complex organization, you need to take exactly the same approach to doing it for your internal customers.

And that's the key, right? You break out of the prescription and you break out of the fear. I think the reason I love PDCA, or just scientific thinking in general, is that everything's an experiment. So you just get rid of the whole concept of fear of doing something. You know: should we do this? Do you have enough experience to do this? Or, shut up because you haven't been here that long. No, no, I'm just going to do an experiment, and the truth will bear out, right? So that decimates the whole fear of failure, and it also allows us to bust prescription: we did it last time, we've always done it this way. So to me that scientific thinking is the get-out-of-jail-free card that starts that continuous improvement. I made Jeff laugh. Hey, Bangs.
There's a question in the chat that I think is related to this. The question is: how do we push back against POs that constantly oversimplify and put too many things into each sprint? I have an idea. Do you have a thought on that, Jeff? I'll let you go first.

So my thought, connecting these two things together, is that if you start sprints with the idea that you're going to learn something at the end of the sprint or during the sprint, you would always build capacity in for processing and understanding what that learning was and the implications of it. So I think a lot of product owners who overstuff sprints are focused on the outputs and not on understanding what the output means.

By the way, I do agree that it's important to learn. But I think the actual activity is balancing the production of value, in the form of some saleable, high-value good, with learning. It's the balance of those two things that goes wonky. To me, if you over-focus just on learning, you also get bad effects. Learning is expensive, and you can't sell learning directly unless you're a consultancy, and most people aren't. So it's a balance between the two. You need to learn a little bit, you need to produce a little bit, you need to go back and forth, and you need to know when to tick-tock between those two different forms of value: information and value-added goods.

So the way that I resolve that tension, and I think you're absolutely right, and this is where the designers tend to get on my shoulders a little bit, is that one of the best ways to learn is from people using your software. I can't learn everything. I need my t-shirt that says: I don't always test my code, but when I do, I test it in production. There are certain things you can only learn in the real world. But if you use the real world, if you use your customers, as a place to learn from lousy software, you won't have any customers anymore.
And this is where I come back to iterative versus incremental: what you're continuously delivering has to be software that they can use and software that you can support. And by making it small, by making not just the increments of work but the increments of value small, you allow yourself to do that.

The thing I'd add there really quickly, because I agree with the test-in-use idea, is that when I think about architecture, when I think about designing software or designing systems, I actually think there are two types of risk, two moments of risk. There is the risk that we designed incorrectly, and then there is the risk that's exposed in use: is it being used incorrectly, or did it not do what we wanted it to do? And there are actually two different ways to recover from each. I think a lot of people get obsessed with the we-designed-it-wrong theory of risk, like we didn't design it completely. And there's a loop there that says we should spend more time designing, more detailed design, and so on. That can cause really bad problems, right? But there is this way of filtering what should be designed well, of deciding where to focus on eliminating the risk in design as opposed to the risk in use. The risk in design is: if we know that this thing needs to support the traffic for, I don't know, 20,000 users, there's probably some design math that we can do to make sure that whatever we design can sustain that. I usually say it's like designing a bridge. If someone says I need a bridge and there are going to be 15-ton trucks driving across it, there are also social criteria that we can talk about: will the bridge be used to drive the right types of things, will it improve the connection between the two towns, will people like the bridge, will it be pretty? All sorts of in-use questions.
Yeah, but there's also just straight math: this is the minimal structure that will be required to ensure that a 15-ton truck won't cause the bridge to collapse. And so balancing those two, the quantified and foreseeable parts of architecture and design engineering against the stuff we can't know until someone tries to actually use the thing, and teasing out the difference between them, is I think a really important thing to notice. But again, that's just me ranting.

Well, thank you for that. We do have to close at the end of the hour just to respect Jeff's time. And I also wanted to say to Jeff, thank you very much for coming, because I think CHART, and I love the reference to passing the puck to where your customers are skating, is helping us see where to look and how to look. I think that was very, very helpful for today. Jeff, if you want to have the last word here.

Yeah, I'll just say thanks, everyone. It was great to see you all again virtually, and thanks for having me. If folks want to talk to me more, you can find me at sussna-associates.com, or I'm on Twitter and LinkedIn at Jeff Sussna, and Sussna is spelled S-U-S-S-N-A.

We'll put a link to that book that's in the background there, and it's a great read as well: Designing Delivery. Awesome. So thanks again, Jeff. I think we're definitely going to have to do another follow-up on this, because I feel a debate coming on. So, well played. Thank you all. All right.