Thanks, Christina. Very excited to be here and to help people learn what it means to experiment with software. So who are your guides today? I'm Ravi Lachman, a lead evangelist here at Harness. I worked at several distributed systems firms before in various roles. And along with me is my good friend Ethan. Ethan, for folks who don't know you, maybe a little bit of background about yourself?

Yeah. Hey, everyone. Thank you for joining today. I'm a product manager here at Harness. I've been working in DevOps tooling for six or seven years at this point, and in the startup industry for about a decade, consumer stuff, internal stuff, and I'm excited to talk to you today.

Cool. Thanks, Ethan. So what are we going to be talking about today? The first thing we're going to talk about is the scientific method. Go back to middle school or even elementary school, when you learned about the scientific method; well, now it's for software. We're also going to be talking about this term called progressive delivery, and Ethan will dig into some of the pillars of progressive delivery and how you can start implementing it for the first time. And finally, what does it take to actually implement a feature flag? A big part of progressive delivery is this concept of feature flagging. Feature flags can be for application features, they can be for infrastructure, or anything in between. We'll walk through, when deciding when to flag something and when not to flag something, what some of your options are. So, science. Let's start with one of my favorite topics as an adolescent: science. Who remembers the scientific method? The scientific method is basically an empirical way of acquiring data. So first of all, imagine you're doing an experiment.
So you say, you know what, part of this experiment might be: if I raise the temperature of ice above 32 degrees Fahrenheit, or 0 Celsius, it will melt. So it involves some sort of observation, and the scientific method also makes some cognitive assumptions. The observation: I'm looking at the ice. The cognitive assumption: it will melt above a certain temperature. And that's core to having a hypothesis, saying, hey, I hypothesize that if it goes above a certain temperature, the ice is going to melt. And really, you're basing your deductions on that: you're challenging, and validating or invalidating, your hypothesis based on an experiment. We all run some sort of experiment in our days, but it's kind of lacking in the software world. As an application engineer or software engineer, there are reasons why experimentation with real data or with real users can be a little bit difficult, which I'll get into in a bit. But as consumers, we're subject to experimentation all the time. So let's talk about why. I live in the Southeast United States, and we're prone to getting hurricanes. So let's say a hurricane is coming, or a typhoon or a monsoon, depending on where in the world you are. There are a few things you would do to prepare. Especially out where I live, the first thing we do when we hear a hurricane is coming is run to a grocery store and a home improvement store and procure several items. But there's one item I always find really awkward that they move to the front of the store when a hurricane is coming: beer. At my grocery chain, it's right at the front of the store. And the thinking is, we're going to be stuck in the house for a day or two until the hurricane passes. You might not need food, but I'd better get some beer and wine to pass the time.
And if I think like that, a lot of my neighbors around here must think like that, because beer is front and center; it's front and center only for the Super Bowl, too. Basically, the grocery stores realized that if they move items you tend to impulse-buy or panic-buy to the front of the store, by hypothesis and by retail data, they're actually able to sell more beer. So why did they move it? This is an experiment, right? Someone must have had an idea: hey, why don't we move beer to the front of the store? And what ends up happening is that, instead of me meandering through the store and picking up other things like loaves of bread and deli meat and other things I need, I just end up buying beer because it's top of mind. But take stock of all the decisions that had to be made there. The grocery store chain has to have business analytics. Are we selling more beer? That's the hypothesis. How can we check it? Or even more so in your day-to-day life: if you live here in the United States, you might get flooded in the mail with credit card offers. Those issuers are constantly making hypothesis-driven business decisions. If we lower the required credit score for a platinum card, would more people be approved? If we raise the credit score for our silver or gold card, would fewer people be approved? This is basically what hypothesis-based business decisions are built on, right? And they all equate to conversion: in this case, more beer sales; in the credit card example, it might be more folks signing up for a credit card and paying more monthly fees. Because again, how exactly are they making the decision to go one way or the other? It's like an A/B test, which we'll get into a little later. But basically, this is it: feedback is crucial in the technology landscape, in the software development life cycle, and even in the infrastructure development life cycle.
Feedback can come in many forms, right? But as an engineer, depending on where you sit, whether you're an operations engineer, an infrastructure engineer, a systems engineer, or a cloud engineer, sometimes you don't want the feedback, because the feedback would be fairly difficult to hear. A personal anecdote here: I'm not diabetic, but I have an electronic glucose monitor on me, and one of our customers is actually the manufacturer of this device. They're extremely worried about feedback from a user, because by the time I have a problem, it could be detrimental. Maybe it's not such a severe case for your organization, but it might be, you know what, we're seeing downtime or slowness, or today's internet outage from Fastly's CDN again, right? You don't want that type of feedback coming to you because it's negative. But again, it's crucial. And Ethan, as a product manager, values all feedback, not to put him on the spot here, because it helps you learn. But sometimes feedback is negative because, well, how do we get feedback in software, right? I'll argue that software has both objective and subjective feedback from the people using it. So how do we go about rating software? It's all about, if your users are unhappy, trying to figure out sentiment, right? Why are they unhappy? And this is part of the risk of running an experiment: with an experiment, you're trying to validate the unknown. What ends up happening is, if you're a little more risk-averse, you might say, you know what, we don't want to experiment because we don't want detrimental feedback; we're concerned that will be the case, right? This might be more relevant if you're an infrastructure engineer deviating off the playbook, right?
You're doing things for the first time and you just want some sort of controlled feedback. So let's take a look at the two typical ways you get feedback on infrastructure and software. The first is objective feedback. Objective feedback can be performance information: latency, error rates, uptime percentages, pick your metric, right? And, you know, the cliché saying is that "slowness is the new downtime," but you can get really objective measures of software. This is what we typically fall back on: empirical, objective data that something is performing or converting like it's supposed to. Where it gets more difficult, and this is the one that's worrisome, is the subjective feedback. I'll play devil's advocate and role-play here. Say I was a mobile app developer, and this is actually where a lot of feature flags and this type of A/B testing came from, though it later proliferated through the entire technology stack. What if I wrote an app, I'm the producer, and I gave it to Ethan, and he's like, man, Ravi, your app sucks, gives it a thumbs-down in the app store, and tweets about it? That subjectivity has a lot of feeling in it, right? So part of what you want when you're gathering feedback is a little bit of A and a little bit of B: you want to capture the objective feedback, but you also want to capture the subjective feedback, like, hey, can we work out why someone is feeling this way? We're human, we're emotional; there might be no answer for that, but we have to try. Because in today's day and age, we're living in this agile way, right? Agile, if you're unfamiliar with it, is an iterative development methodology: you're making incremental changes all the time.
Now, for those who have experienced waterfall, where you completely finish the requirements, then completely finish the application, then completely finish the infrastructure, and then kind of start over from the beginning, that's becoming more and more rare. There are certain things it's good for, but it's becoming more and more rare, right? A lot of things are rapid development. Hey, we need to make a change to the app. Hey, we need to add additional capacity to our cloud infrastructure. Hey, we want to try increasing our density, let's say in the Kubernetes cluster, or hey, we want to try XYZ. That's a big part of being agile, right? This funny business-and-technology term, agile, really allows you to have quick iteration. You're incrementing, making smaller and smaller changes. And if you make smaller and smaller changes, that lends itself to experimentation, right? Instead of making some wide conjecture about some groundbreaking change, you can start making itty-bitty little changes that you can test, building confidence incrementally. It boils down to this concept called a blast radius, right? If you want to make a particular change, and this comes straight out of the Site Reliability Engineering handbook, the blast radius of a technology is how many users or systems are impacted when you make a change. So if we were to make an overall sweeping infrastructure change, we have a very big blast radius, right? If we hit all users and all the infrastructure, we're going to add another endpoint, or we're going to add or take away a particular node, or add or subtract a feature for everybody, that is a significant blast radius. You are exposing all your users to that.
Now, a core tenet of progressive delivery is that you have the ability to focus on specific users, specific entitlements, specific traits, or specific aggregations of any of the above. Use your imagination, right? For anybody coming from the infrastructure side: if you have someone coming from Europe, send them to your European data center; anybody coming from North America, send them to the North American data center, and so on. Because by limiting your blast radius, you're able to act more agilely, but you're also able to produce quicker feedback, right? Core to making adjustments, core to delivering software, infrastructure, and technology, is appropriately processing feedback. I forget what the usual acronym for good feedback is, Ethan, but you want feedback to be specific. It's like giving feedback in a one-on-one to a person; the same things still apply to feedback in technology. You want it to be relevant. You want it to be actionable. You want it to be concise, right? These concepts transcend human-to-human interaction and system-to-system interaction, but basically that's it. So, broaching into the next part, I'll pass it off to Ethan in about 15 seconds, but these are all tenets of progressive delivery. How can we make smaller changes? How can we make more hypothesis-driven changes? How can we isolate and limit the blast radius? And how does that look in your technology organization or your team today? Ethan, why don't you take it away from here and take folks on a progressive delivery journey?

Yeah, awesome. Thanks, Ravi. And it's important you covered all that, right?
Because when we talk about how you turn around and start applying that type of thinking, that scientific-method approach, to your organization, progressive delivery is one of the newer, emerging ways you can do that. I recognize a lot of folks have not heard of progressive delivery yet, so we'll clear up what it means. And if you have heard of it, it might be kind of confusing; it's a little buzzwordy, and you might have heard different definitions in different places. Some of the things we get asked a lot are: Is progressive delivery part of CD? Is it CD? Is it the same thing as feature flags? Is it a DevOps practice? Or is it more for application people building software for end users? Is it supposed to help product managers? And the answer to all of these questions is the same, because when we talk about what progressive delivery is, the answer to all of these questions is yes. It is all of these things, because progressive delivery is really a way of working as much as it is any specific feature or practice. It's a handful of things you can do, and you can do them together; we'll talk about feature flags as a cornerstone of that. But progressive delivery is really applying everything Ravi just talked about to your software development and delivery process. So if we look here at how progressive delivery might deviate from normal CD, that's a pretty good starting point, I think.

And for those of you who don't know what CD means, that's continuous delivery.

Yes, good catch, thanks, Ravi. So when we talk about CD, continuous delivery, what we're talking about is taking your code and getting it into production. You have a new change and you're trying to put it out into the world, whether you're doing that in a rather traditional way, where it happens over a longer duration, or in what we think of as the more modern, cloud-native way, where you're trying to do it constantly.
When we talk about continuous delivery, we're talking about taking that code with the change and getting it live in production. And in this world, even in the modern world of CD with cloud architecture and microservices, deployment is basically still the moment the code goes live to end users. What progressive delivery asserts is that you can decouple that deployment from the release to your end users. You move the code artifact to production, but then, as you'll see on the right side of the screen, there's a step after that. The code going to production doesn't necessarily have to be the moment everybody gets the change, because you can't learn that way. If everybody in your audience gets the change with every deployment, how do you iterate? How do you do cohort studies? How do you start to figure out what the impact of the change is? It's what Ravi was talking about: control the blast radius and learn from it. So what progressive delivery does is, after that deployment, you can start to turn on the new code for specific people based on criteria you choose, and then you can learn from what happens. To give you one concrete scenario like the one we see here: let's say I had a new UI and I wanted to first turn it on for 10% of my UK users. Maybe I have a much lower traffic footprint in the UK, and I'm a little worried about performance of the new UI, so I'm going to start with a small subset of my users in my lower-traffic geo. Let's say that looked good after a few days. So I'm going to do two things: I'm going to turn it on for 10% of my US users, and I'm going to ramp it up to 25% of my UK users. And here the paths can deviate. Maybe, because of something specific to my US-based cloud architecture, I'm getting an exception. I can turn the 10% of US users back to zero while I fix that. But my UK cloud architecture is not throwing the same exception, so I can actually ramp that up to 50% and keep learning.
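The ramp-up just described can be sketched as a tiny targeting rule. This is only an illustration of the idea, not any vendor's API: the country names, percentages, and function names are all hypothetical. The key trick is a stable hash, so each user's answer stays consistent as you ramp percentages up or down.

```python
import hashlib

# Illustrative rollout percentages per geo. Ramping the US back to 0
# while an exception is fixed, and the UK up to 50, mirrors the scenario above.
ROLLOUT_PCT = {"UK": 50, "US": 0}

def bucket(user_id: str) -> int:
    """Hash a user id to a stable bucket 0-99, so each user's result is consistent."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % 100

def sees_new_ui(user_id: str, country: str) -> bool:
    """A user sees the new UI if their bucket falls under their geo's rollout %."""
    return bucket(user_id) < ROLLOUT_PCT.get(country, 0)
```

Because buckets are deterministic, ramping 10% to 25% to 50% only ever adds users; nobody flip-flops between the old and new experience on each request.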
And the thing I would highlight about this path, right, powerful on its own, is that to do this with progressive delivery, in most cases (we'll talk about feature flags in a moment), you can actually do it without any new deployments or any new rollbacks. You can do this just with the code live on the server, never having to change the code itself at any point to accomplish this end-to-end scenario. You can even have very non-technical people, like PMs or other folks, doing this and only going back to engineering when there are issues, so engineering is not having to babysit the end-to-end rollout process. And if you look at what this gives you as an organization, it ultimately gets you a whole bunch of things, but to summarize so it's a little easier to take away: you're going to reduce your risk this way. You can start with small changes, and you can expand the blast radius of those changes as you gain confidence or learn from them. And again, you can do that without the overhead and time of engineering having to coordinate new deployments, new releases, rollbacks, exceptions, all those kinds of things; you just do it separately from your deployment process. Ultimately, that means you can learn a lot faster, because if you can turn things on and off in a controlled way, for audiences you define, without complex releases to manage, you're going to do it a lot more. You're going to feel a lot better saying, yeah, let's give this to 10% of the UK users and see what happens, because there's no pain or cumbersome process in doing that. Your disincentives for trying to learn, your disincentives for experimenting, are greatly diminished with progressive delivery. And this ultimately leads to an outcome that probably matters most to all of your stakeholders, and hopefully to y'all as well: you're going to end up shipping a better user experience, and it doesn't matter who your users are.
Whether your users are internal people on your team or end customers out in the world: because you can learn faster, because you're more comfortable making changes, you're going to make more of them, and your experience is just going to get better. Some of the blockers to improving things, to learning, to making the right calls, or to just trying things out and seeing if they're better, with those gone, your end customers are really going to benefit. Now, touching back on the scientific method here: progressive delivery is ultimately a means to an end, so you don't want to do it just for the sake of doing it. "Now we've got progressive delivery, aren't we great?" You're trying to accomplish something, and you've got to keep that in mind. So when you think about the right way to do progressive delivery, you really want to be clear with your team: What are we going to get out of this? How do we know this is helping us? How do we do this in a way that's useful? What do you want to learn? Are you trying to find out if a change is stable? Are you trying to figure out how it impacts user behavior? That could be things like your conversion rates, or funnels through your applications. You could see if a change reduces the cost of your application; an API refactor can make a huge change in cost in either direction, and it'd be great to learn that quickly. You can also make sure your support team is not overwhelmed. If you're going to roll out a big change, say you're moving a button from this place to that place, and if you've ever worked on consumer applications at scale, moving buttons results in a ton of tickets where people tell you they don't like that you moved the button. Your poor support team has to deal with that, and they do a great job, but you can let them know what to expect if you roll out in an incremental way, or you can control that customer feedback by doing it incrementally.
So when we encourage people to talk about what progressive delivery is, don't think of it just as an engineering outcome: "Now we can do this great thing where 10% of our users get the code artifact." Think about the user outcome, the business outcome, the team outcome you're trying to achieve, and then work backward from there to the right way to do progressive delivery. Now, another concept that is really tightly coupled with progressive delivery in a lot of cases is feature flags. If you haven't heard of feature flags, you might have heard of feature toggles, or of the kind of app configuration files you keep in your repo or your database. There's a good chance you've been exposed to something like feature flags even if you've never heard them called feature flags. For some context, a feature flag is really, you can imagine, a fancy if statement in your code around two potential code paths. Say one makes a button red and one makes a button green. They both exist in your code, right? You're not taking the red one out to ship the green one. Then you have that if statement in the code, which communicates up to a server or a config somewhere, and that lets you choose which of those code paths to light up, for whom, and under which conditions. So it's really just a way to ship multiple versions of something so that you can turn them on and off and learn in production. The reason feature flags and progressive delivery go hand in hand is that feature flags are ultimately such a great way to enable progressive delivery. When we talk about enabling progressive delivery with feature flags, it's because feature flags let you make those changes with no new deployments and rollbacks, because all of the code is there in the deployment.
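That "fancy if statement" might look something like the minimal sketch below. In a real system, `is_enabled()` would call a flag service or SDK; here a local dict stands in for that server-side state, and the flag and user names are made up for illustration.

```python
# Stand-in for the server-side flag state a real SDK would fetch remotely.
# "green-button" is a hypothetical flag turned on only for one user.
FLAG_STATE = {"green-button": {"enabled_for": {"user-42"}}}

def is_enabled(flag: str, user_id: str) -> bool:
    """Stand-in for the SDK call that evaluates a flag for a given user."""
    rule = FLAG_STATE.get(flag)
    return rule is not None and user_id in rule["enabled_for"]

def render_button(user_id: str) -> str:
    # Both code paths ship in the same deployment; the flag picks one.
    if is_enabled("green-button", user_id):
        return "green"  # new path
    return "red"        # old path stays in the code, ready to fall back to
```

Turning the green button on for more users is then just a change to the flag state, not a redeploy of the code.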
Because the code is already branched with those flags, those if statements I talked about, you don't have to do a new deployment every time you want to change which code path you light up. The external service, or the config mapping if you're using your own kind of solution, tells the server (or the client, when it's evaluating on the client side) which version of the code to light up. And so feature flags are really that cornerstone. They also let you control those changes so that you can correlate them with metrics and outcomes, right? Say you're going to change a feature flag because you want to know if the red button has a different impact than the green button. You want to know exactly when you turn that on. You want that to correlate with events in your metrics and logs so you can easily see the outcome of the change. Maybe you want to fire some events off to analytics. Maybe you want to let people in the organization know you're making the change. Again, flags let you make that change in a very, very controlled way, separate from the release process. The other thing flags do, and this last point is very important, is let you see what's actually live for people. One thing that starts happening when you have feature flags, experiments, and progressive delivery in this modern world is that you go from having one version of your application live to 20, 30, 40 versions live for different subsets of users. You end up in a kind of multi-dimensional state where different people have different configurations. Feature flags let you easily see who's got what. This user had a problem, this geo had a problem: what configuration of our application is actually affecting them? Now, we talk about flags being a cornerstone of progressive delivery, but it's also important to know that flags are not just for progressive delivery. They enable some other good engineering practices that are worth knowing about.
With feature flags, you can work dark behind a flag so that you can ship constantly. What I mean by that is: if I'm a developer, I can start every new feature branch by instantiating a new flag, so that all my code is by default behind a flag that is turned off. This means my team can ship or merge that code to master every single day, and it's not going to take effect, because it's behind a flag and the flag is turned off. So it's a really safe way to constantly keep my work merged in, where you don't have to wait until I'm done and ready to actually stress-test it to do any kind of merge or CD event. It also lets you coordinate launches and things like that internally, because engineering can complete their work, ship it off to production, and move on, while marketing may not be ready to turn it on and talk about it until three weeks later. That can be a pretty painful scenario normally for engineers. With flags, you don't care, right? You ship it to prod, the flag is there: here's the link, marketing team, turn it on whenever you want. Flags can also be a great way for engineering teams to provide kill switches or other operational controls, for cases where you always want the ability to turn something off instantly, and in particular, to turn it off instantly for certain audiences. The last thing they can do is help you manage which customers get access to which features, and where. One way to think about it: if you're building a new product and you launch a new feature, put the feature behind a flag, and then you can tell your support teams or your account teams or your PM teams, hey, if you've got any customers you want to give access to the new feature, just go add them to that feature flag. You might have seen that work show up on engineering's plate day to day: can you turn this on for this account? Now can you turn it on for this account? It's very distracting and very interrupting for engineering teams.
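The dark-launch and kill-switch patterns just described can be sketched like this. The in-process `flags` dict stands in for a real flag service, and every flag and function name here is illustrative, not from any particular product.

```python
flags = {
    "new-checkout": False,   # merged to master daily, but dark until launch day
    "search-service": True,  # kill switch: flip to False to disable instantly
}

def new_checkout(cart):
    return ("new", cart)

def legacy_checkout(cart):
    return ("legacy", cart)

def checkout(cart):
    # Default off: unfinished work ships safely because this path stays dark.
    if flags.get("new-checkout", False):
        return new_checkout(cart)
    return legacy_checkout(cart)

def search(query):
    # Operational kill switch: degrade gracefully instead of erroring out.
    if not flags.get("search-service", True):
        return []
    return ["result for " + query]
```

Note that both `checkout` paths and the search fallback all live in production at once; the only thing that changes at launch (or during an incident) is the flag state.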
So on top of the progressive delivery workflows, there are all these operational, way-of-working things that feature flags give you. One other thing I'd note here is that feature flags are not just for user-facing features. We've been talking about red button versus green button, or moving beer to the front of the store, right? Feature flags can also help with some less obvious use cases you might not be thinking about, such as refactoring your API: roll that change out incrementally to stress-test it, or gradually roll over to a new service. I worked for a startup a while ago where we were migrating to a new logging backend, and we did that over the course of several months, day by day, with feature flags, incrementing it live so that we could iterate in production without constantly deploying and rolling back when there were issues. That's an entirely engineering-focused use case, but feature flags really came in handy there. They also let you handle things like data residency: laws might be a little different in some countries than others, so maybe there are some features where you need a slightly tweaked version in Europe versus the US. Flags are a really, really easy way to do that, where you can visualize and control it across your organization in a less engineering-centric way than having all those configs live exclusively in a repo. So then, ultimately, why feature flags? We talked about why progressive delivery earlier, and this is similar; it's adjacent, not exactly the same thing. If you think of it in terms of a hierarchy, feature flags bring all this flexibility to engineering organizations, and one of the great things engineering organizations can do is leverage them to start doing progressive delivery, to start working in that scientific method Ravi was talking about earlier. That's because feature flags let you go really fast: no change is risky if every change is behind a flag, right?
Put it dark behind a flag, ship it constantly, merge your feature branch to master every single day. You're never going to be at much risk with any change, because with feature flags they ship dark. Flags take a lot of work, coordination, and things like that off of engineering's plate day to day, including for things like launches and rollouts, but also for things like testing and, as Ravi mentioned, getting that feedback. Wouldn't it be great if I, as a PM, could turn that new UI on for people as I talk to them, so I can make sure I'm capturing that feedback and learning? Instead of turning it on for a bunch of people and just seeing who raises their hands, I can reach out to them directly: I've got a new UI, I want your feedback on it, and then I turn it on for them with progressive delivery and feature flags. And the third point is identical, right? You're going to end up with a better experience. If you're letting engineers do their work more, if you're letting engineers do their work faster, if you're letting the organization learn, what you end up with is applications that keep getting better. You have less downtime, bugs get fixed faster, and early adopters are giving more feedback to drive the mature versions of features. It's a virtuous loop once you bring feature flags in and start using them to build out progressive delivery workflows. I'm happy to have Ravi jump in if there's anything I failed to cover, but I think that's kind of it. We're happy to take some questions.
Yeah, it looks like there are a few questions coming in. And feel free to ask away, since this is the question-and-answer session. If you're curious about getting a copy of the slides or just learning more, you can give a quick scan of the QR code here, hit the bit.ly link, and we can certainly connect afterwards. But why don't we go through some of the questions, and if you have any further questions, keep them coming. So the first question is: do you have a list of prerequisites of some sort? It's funny, like "you must be this tall to ride the roller coaster." Do you have to be "this tall" to enable progressive delivery? I can take the first half of it, or you can. Your choice.

No, I'd love to hear your take, so go ahead.

Sure, I'll give a more technical answer to this question. At the heart of feature flags, there's remote communication that's needed, right? There needs to be a way to check whether a feature is enabled or not, like an if-else statement, but usually it's a remote call. So I'll give you an example, and this will flow really nicely into the second question that was asked. Usually, how you check whether a feature is enabled, if you have a distributed application (it might not be the same for a piece of infrastructure, where the flag might live in a configuration file somewhere), but with a very modern feature flag in a distributed app, you're making a remote call to check, every time a user traverses something, whether it's enabled. So if I were a platinum card holder, or a Diamond Medallion on Delta, and I log into the Delta Air Lines website, it's constantly checking my entitlements: is Ravi a Diamond Medallion?
And that manifests itself as that remote call getting fired off 50 times to double-check my entitlements, display the diamond colors, display a personal message from the airline. And that takes a lot of overhead. You made the remote calls go from one to 50, right? For that personalization, for that entitlement. So if you're able to support that type of remote call, sure: you can make some sort of remote if/else check, checking a database, checking a config management database, checking some sort of backend, even processing some sort of logic for a particular user. It's additional overhead, and you have to be able to make code changes from the get-go to roll that out, right? So while there's no minimum height required, just understand that it's gonna add additional overhead. Ethan, anything to add or subtract? Yeah, the only thing I'd add, and I agree with all of that, is it can seem a little overwhelming. It can sometimes seem like there's this nirvana state, and if you can't get there, you can't do this at all. I would say you can probably bite off some use cases almost no matter where you are. I've seen versions of progressive delivery work for on-premise software, I've seen it work in some pretty legacy places, even if that's not what they were calling it. I would maybe work backwards from the core question of: what are we trying to learn, and how can we learn it a little faster, in a little more controlled way? And not worry so much about whether the end state looks like the perfect slides for progressive delivery, 'cause you can get there over time, right? You can get there step by step. Awesome answer, okay. So the next question is about maintaining feature flags, so kind of TLDR: what are some of the challenges in maintaining feature flags? Like if you had thousands of flags over a period of time, when do you drop a feature flag? So I can take the first stab, you can take the first stab, or rock paper scissors. 
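The remote if/else check and the one-to-fifty overhead Robbie describes can be sketched in a few lines. This is a minimal illustration, not any particular SDK's API: the "remote call" is stubbed with an in-memory dict, and all names (the flag key, the client class) are hypothetical. A small cache shows the usual mitigation, where many checks on one page share a single fetch:

```python
import time

# Stand-in for the flag service; a real system would make a remote call
# here. All names below are illustrative, not any particular SDK's API.
REMOTE_FLAG_STATE = {"diamond-medallion-banner": {"robbie", "ethan"}}

def fetch_all_flags():
    """Simulates one remote round trip; returns flag -> enabled user ids."""
    fetch_all_flags.calls += 1
    return {k: set(v) for k, v in REMOTE_FLAG_STATE.items()}

fetch_all_flags.calls = 0

class FlagClient:
    """Caches flag state so repeated checks don't each fire a remote call."""

    def __init__(self, ttl_seconds=30.0):
        self._ttl = ttl_seconds
        self._cache = {}
        self._fetched_at = float("-inf")  # force a fetch on the first check

    def is_enabled(self, flag_key, user_id, default=False):
        now = time.monotonic()
        if now - self._fetched_at > self._ttl:
            self._cache = fetch_all_flags()  # one remote call per TTL window
            self._fetched_at = now
        enabled_for = self._cache.get(flag_key)
        if enabled_for is None:
            return default  # unknown flag: fail safe to a default
        return user_id in enabled_for

client = FlagClient()

def render_banner(user_id):
    # The if/else gate described above, checking the entitlement per user.
    if client.is_enabled("diamond-medallion-banner", user_id):
        return "Welcome back, Diamond Medallion member!"
    return "Welcome back."

# Fifty checks on one page now cost a single remote fetch, not fifty.
banners = [render_banner("robbie") for _ in range(50)]
```

Note the `default` fallback: if the flag is unknown (or, in a real client, the flag service is unreachable), the application degrades to a safe behavior instead of failing.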
Sure, I'll jump in on this one. It's a really good question, right? There are lots of great thoughts out there. Here at Harness we're working on feature flags and we're trying to take on some of this work. A thing to note about feature flags, right, is nothing is all roses. So when you get a lot of feature flags in your system, what you have in some cases is a lot of tech debt, because maybe the feature flag stopped being useful six months ago and you're done with that feature. What you have is a lot of different versions of things; different developers might standardize slightly differently. So there are definitely maintenance and lifecycle challenges with feature flags. Right now, most of the solutions tend to honestly be hygiene and process. Good teams should clean up their flags as they're done using them. When you're done with a flag is kind of up to you, but it's essentially when you're not gonna need to turn that feature on and off dynamically anymore. If that new UI is at 100% in all geos and you don't anticipate that you'll need a kill switch to suddenly revert to the old UI, you should probably go in there and deprecate that flag and just make that the code, instead of having it continue to live under a flag; that's tech debt. Right now, again, that tends to revolve around good practices and hygiene. I've seen some really smart stuff built out in JIRA, in terms of there being flag cleanup states associated to tickets three months later. I've seen some good Slack-ops type stuff with reminders. We're trying to help with this problem through some automation and cleanup work we're looking at. If you're interested, Uber has a very good system for this. They have a white paper out called Piranha that I think is very inspirational to folks like us at Harness who are working on feature flag products, really trying to automate that cleanup so that you don't end up in a big mess of tech debt. 
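The cleanup rule of thumb Ethan gives, a flag pinned at 100% that you never expect to flip again is tech debt, can be turned into a simple report. This is a hypothetical sketch: the flag records, field names, and the 90-day grace period are all illustrative, not how Harness or Piranha actually model flags.

```python
from datetime import datetime, timedelta

# Hypothetical flag records; a real system would pull these from the flag
# service's API. "rollout" is the fraction of traffic the flag serves.
flags = [
    {"key": "new-ui", "rollout": 1.0, "last_changed": datetime(2020, 1, 10)},
    {"key": "beta-search", "rollout": 0.25, "last_changed": datetime(2020, 6, 1)},
    {"key": "dark-mode", "rollout": 1.0, "last_changed": datetime(2020, 5, 20)},
]

def stale_flags(flags, now, grace=timedelta(days=90)):
    """Flags pinned at 0% or 100% and untouched for `grace`: the prime
    candidates for deletion under the hygiene described above."""
    return [
        f["key"]
        for f in flags
        if f["rollout"] in (0.0, 1.0) and now - f["last_changed"] > grace
    ]
```

A report like this is where the JIRA tickets and Slack reminders Ethan mentions would hang off: each stale flag becomes a cleanup task for the owning team.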
So know going in, it's a problem; talk with your team about the kind of hygiene and practices you want, but pay attention to it for sure. Yeah, and just deciding on that, awesome explanation. We've actually faced that problem at Harness right now. Our feature flag count is in the thousands, right? Because of entitlements and R&D and just every single possible reason to use a feature flag, we have it. Now we're solving that by baking it more into our platform. It was born out of necessity for us, because as a little background, Harness is a SaaS provider. And so we had a large number of flags where you didn't know if they were still alive or already deprecated. Going back to Ethan's direction there: hey, if it's on for all users, why is it even behind a flag? So we're solving that with our platform itself, but that could be another talk for another day. So, good question. The next question here is: I think progressive delivery is more specific, or has a more specific target, than CD. Is it so? Yes, it is. A feature flag is extremely specific. So I gave a talk a while ago called "Do You Flag or Do You Deploy?", and we went through several fictitious scenarios of whether it's a flag or a deploy. But to the whole point, a feature flag is specific. If it doesn't impact the entire user base, it is a good candidate for a flag. If it does, it's a deployment. So to Ethan's proposed workflow: to enable feature flags, you have to deploy the flaggable application at some point. If it's a greenfield application, it's there from the get-go. If it's a brownfield application, you're making changes and deploying a version of the application where features can be toggled on and off, or blocks of code, or methods, or the invocation path changes, right? So it is pretty specific there. Ethan, anything to add to that? 
The only thing I'd say, and I might not be interpreting the question quite the way you intended, but just to throw it out there: I would say progressive delivery is very problem-specific in a way that CD is not. CD, delivery or deployment, whatever you wanna call it, at the end of the day its goal is to get your changes onto production. Progressive delivery is really a way of designing scenarios against specific problems, specific things you're trying to learn. Whether that's infrastructure things or customer things, it's a way that you can build a learning loop around your changes instead of just shipping them. Awesome, awesome. I think for the next question, you actually answered part of it already; that was my first time hearing about that Uber project. Maybe just restating it: any advice on resources to learn from, for next steps? Yeah, so the Uber white paper is called Piranha. It's a very cool system. I think anywhere you look at kind of the DevOps Institute type stuff, the Accelerate type stuff, you're gonna find a lot of information around progressive delivery. You're gonna find a lot of information about feature flags in that general world. Awesome. The next question is more of a statement, just saying that this person agrees with us, that similar patterns are awesome. Yeah, keep on pushing with progressive delivery. The next question, so reading it: what about clusterization? This is a good one for you, Ethan. So you're working with a significant amount of feature flags; how do you keep track of what customer gets the flag, or if they get multiple flags? And the last point is a pretty good open discussion: how do you identify specific feedback? How do you disseminate that feedback? Yeah, so that's a really good question, right? 
We didn't cover it, but one of the things that's pretty common is teams will start out by building their own sort of feature flag solution internally, config switches and things like that. One of the reasons we think it makes sense to graduate to commercial feature flag solutions, the basis of progressive delivery workflows like the ones we're building here, is exactly to solve that problem. So for instance, you need your feature flag solution to have really good UIs, so that you can put in different accounts or targets and see exactly what the state of the world is. Which 15 flags are impacting them, and what state are those flags in? You also need some evaluation precedence, because to your point, I might have one user that, through three different flags, is in a conflict situation where one flag rule conflicts with another flag rule, right? The system needs to know how to handle that precedence. What rules get evaluated in what order? If the rules disagree with each other, who wins? Commercial solutions like ours and like others tend to have a lot of thought and a lot of maturity put into this, to handle exactly those kinds of scenarios. For instance, we've got a screen where you can put in any identifier for one of your targets and instantly see every flag impacting them across your whole application. That can be hard to do in your own solution, and it can start to loop back around and cause problems instead of helping you, if it's allowed to scale up too much without solving that. That's an awesome answer. I can hear the PM in you, very laser-focused on this problem, I love it. So for the next question, I would like to take a stab at this and have the heavy hitter, Ethan, come in behind me. And so: what kind of resistance to adoption of progressive delivery do you foresee? 
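The evaluation precedence Ethan describes, rules checked in order, with a defined winner when they conflict, can be sketched as a first-match-wins evaluator. The rule shape here (a predicate plus a value to serve) is illustrative, not how any particular commercial product models its rules:

```python
# A sketch of first-match-wins evaluation precedence, as described above.
# The rule shape (a predicate plus a value to serve) is illustrative.

def evaluate(rules, target, default=False):
    """Evaluate flag rules in listed order; the first rule whose
    predicate matches the target decides what gets served."""
    for rule in rules:
        if rule["matches"](target):
            return rule["serve"]
    return default  # no rule matched: fall back to the flag default

rules = [
    # Order encodes precedence: individual targets beat segment rules,
    # which beat the catch-all rollout.
    {"matches": lambda t: t["id"] == "robbie", "serve": True},
    {"matches": lambda t: t["plan"] == "free", "serve": False},
    {"matches": lambda t: True, "serve": True},
]

# "robbie" is on the free plan, so two rules conflict; the individual
# target rule wins because it is evaluated first.
decision = evaluate(rules, {"id": "robbie", "plan": "free"})
```

The design choice is that conflicts never need ad hoc resolution: rule order alone answers "if the rules disagree, who wins?"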
Well, my biggest concern, if I was in the other seat and working on a brownfield, an existing project, would be: was it even built to make these kinds of remote calls? For instance, pull us back to the very first question, like you have to be this tall; is there some sort of marker for how tall you have to be? It's a code change, right? Any sort of commercial or open source or even homegrown solution requires you to make calls, right? So you're making a bunch of if-then-else statements, or some combination or aggregation of those statements, to decide if you're gonna enable something or not. It could be the way that something is calculated, it could be whether a code block is executed, it could be that different types of UI items are presented, right? It takes more engineering expertise, and any time you add a line of code it's like: is it maintainable? Does it meet our standards? Those are essential questions. But you know, you've spent years focusing on this problem; what are some common adoption problems? Yeah, the question mentions resistance, right? So I'll focus on maybe three things I've seen a few times. One is: we can't do that, right? We're not that kind of company, we're traditional, we're legacy, or a bank, or on-prem, right? And that's often just not the case. Sometimes setting aside the trendier language, like feature flags, will help facilitate that conversation internally, and focusing more on technical implementation and outcomes, because sometimes the buzzwords can actually set people's alarms off real quick. "That's a new term, that's for these cloud-native modern companies, we're not that," end of conversation. So if you really focus more on what's happening underneath in the application that makes all this possible, you can very often have a healthier conversation if you're getting that kind of resistance. 
Another form of resistance is: we're gonna add stuff to our code. People don't like to do that. How is another tool gonna help us? We have to change how we work. I'm an engineer, I know what I'm doing, I don't wanna start working behind flags. And this is really about maybe isolating a small project, a POC kind of thing, to prove to everyone it works. Talk about some really safe ways you can do it. Are there places in the app people aren't worried about, to try something new? It's about finding a way to get people internally to see that value, usually at an engineering team level. The third barrier, security is maybe the right way to say it: you're changing stuff in production, and that's always scary, and there are very valid reasons people would have resistance to those kinds of things. The way you really wanna talk about it is how safe it is. Look at the commercial tooling you're using to do it, talk to them about how they make it safe, what their policies are, how they govern your data, all those kinds of things, so you can show people that even though we're making some changes in production, actually what we're gonna end up with is a safer app, an app that works better. We're not adding risk, we're removing risk. Awesome, thanks Ethan. I like that; I'm gonna steal some of your words there for future conversations. I think that was all the questions that we have. We have one or two more minutes if anybody has any more questions. If not, on behalf of Ethan and I and the Linux Foundation, we really enjoyed our time here and hope to catch up with you. You can reach us on Twitter, our Twitter handles are below, and we always love to converse with people. Christina, anything else? No, that's great. Just wanna thank you both again for your time today, and thank you to all the participants who joined us. As a reminder, this recording will be on the Linux Foundation page later today, and we hope you're able to join us for future webinars. Have a wonderful day.