OK, we're going to get going. Thank you very much for turning out. Welcome to Beyond the Buzzword. So my name's Duncan Nguyen. I'm a platform architect for Pivotal. And joining me today, we have Sean Keary and Keith Streeney, also from Pivotal, some of my colleagues I've worked with for a number of years. So just so we understand our audience, who here has been using Cloud Foundry for a couple of years? And who here would say they're a Cloud Foundry operator, down in the weeds actually operating Cloud Foundry? Anyone? A couple of people. And so last question, who here feels the need to justify the value of using Cloud Foundry up the line to somebody else in the business? Is anyone in that position? Perfect. So this talk is absolutely for you. So the premise of this talk is around a what-if. And that's: what if everything you do, everything you build, every function that your developers deliver, every bit of technical debt that you remove, you can measure the value? And by value, we're not talking about sentiment, as in, for example, "my customer is delighted with this feature." We're actually talking about a quantitative value, a hard return on investment or revenue number. And obviously, we're at a Cloud Foundry conference, so we're going to talk about how Cloud Foundry plays into actually delivering that value, and how, using service level indicators and feedback loops, you can start to measure and quantify the value that you deliver to your organization. And so we've spent some time, Sean, Keith, and I, working with a number of companies. And for anyone who's seen the talk I gave last year around buzzword bingo, we spend a lot of time talking about buzzwords and explaining what these terms mean. And we don't just mean the literal definition; we actually dig into the sentiment, the value, that each term is trying to deliver. So for example, with agile, we do standups and we do sprints.
But the key thing about agile is getting features into the hands of end users very quickly so you can get that feedback. A lot of people talk about DevOps. And for us, it's really establishing a culture of DevOps, not just a set of tools. And so once you understand these terms and the value that these terms are trying to bring, assuming you've taken these things on board and you've been through this cloud native journey, your environment probably looks something like this. You have Cloud Foundry. You've defined the IaaS layer that you want Cloud Foundry to run on. Maybe it's a hybrid strategy of public and private cloud. You have a container strategy. You have your buildpacks locked down. Maybe you're using Docker images. You're deploying your applications to Cloud Foundry. You probably have some set of microservices. You've defined your messaging tier, your database tier. And then with your organizational structure, you have agile delivery teams. You have a culture of DevOps. You have a centralized platform operations team. And you pipeline absolutely everything. You pipeline your applications, and you pipeline the deployment of your platform itself. If you get here, if this is where you're at, you probably feel really good about yourself, because you've effectively reached cloud native Jedi status at this point if you've done all of this. But getting here isn't free. It actually comes at a cost: it's time consuming and it uses resources. And so when you go down this journey and arrive at this environment, it's right that you ask the question: what is the value of doing all of this stuff? And this is a question you should ask every day for everything you do. Whatever you work on, whatever you produce, you need to understand the value you're creating. And if you can't answer that question, or at least get to the answer through assessing it, then you should really evaluate whether you should be spending your time on that activity.
So going back to our cloud native environment: the platform operators are often really pleased about this environment, and pleased with the way they've structured it. So they go and tell the developers (maybe this is a single line of business, maybe this is central IT), they talk to the other lines of business, and developers start to use it. And at this point, the executive team stand back and they ask: what's the value of this environment? What other apps and what other lines of business should come onto this platform? And how do we justify the work to lift and shift some of our applications over to this platform? It's a really legitimate question. So to answer it, we have to spend a bit of time exploring what value really means. Value is both a noun and a verb if you look up the dictionary definition. The noun is to do with importance, with worth. Some things are really hard to quantify; we all have self-worth, but how do you quantify worth? And when you listen to other CF Summit talks, and I've been to a number of CF Summit conferences, most talks stop at this level. They talk about faster time to market, innovation, delighted customers. These are all key tenets of Cloud Foundry. They're all really important aspects of why we adopt and use Cloud Foundry, but they don't go on to the next stage of quantifying the return on that investment. The verb is to do with estimating the monetary worth of Cloud Foundry. And that gives you that tangible, hard return on investment. And that's to do with things like removing spend, taking out cost, consolidating your hardware, removing middleware licenses, removing OS licenses, getting better automation. On those aspects, you can start to put a dollar amount on the value, and that's what this talk is primarily focused on. So within Pivotal and the companies we've worked with, we've seen this cloud native ROI continuum.
And with each of these pillars, we'll be delving in deep, so don't worry too much about the detail at this stage. But it starts with waste reduction and the activities that go into that. It looks at things like pair programming and the fact that if you find a defect during development, it's a lot cheaper to fix it at that point versus in production. And it also looks at a platform, at consolidation and automation, all that good stuff. You couple waste reduction with continuous experimentation, and that gives you that feedback loop. It lets you look at service level indicators. It lets you assess the customer feedback so you can identify whether you're building the right thing. Couple those two together and that should give you an effective use of CAPEX. Now, when we built these slides, Keith and I went back and forth on this use of CAPEX. We've worked with a number of CFOs and CTOs, and they tend to interchange OPEX and CAPEX depending on how they've structured their budget. So I wanted to abstract up and just call it spend: forget about CAPEX, effective use of spend. The reason we kept it in is that when you look at your operations, your operations is really the left-hand side. That's your waste reduction. For argument's sake, let's say it costs a million dollars to deploy your applications today. Using a platform, using automation, getting a DevOps culture, you can get that down to 100,000. That's an amazing saving and it should be celebrated, but there's a floor to how low you can go. You can't go below zero on your operational expenditure. But your CAPEX, the investment into your development teams and your designers, can have this exponential effect as you start to really resonate with your customers. So your waste reduction's really good, but what we're really trying to do is progress to the effective use of CAPEX. So let's start with waste reduction. Waste reduction comes out of lean theory.
So lean theory, for anyone who's not familiar with it, is a management methodology, and it allows us to map out all of the steps and processes responsible for delivering something, software included. You start with a request, you move on to delivery, and it lets you map out the current state, identify areas of waste, and try to get to a more desirable future state. So as we look to identify waste, within lean theory there's this concept of muda. Muda is really any activity that consumes more resources than needed, and so produces waste. Now, the key thing here is there's two different types of waste. Type one is non-value-add activity that's still necessary for the end user, and that's to do with things like packaging your application and the way you release it. Arguably you can get rid of that waste through things like automation. Type two is non-value-add activity that's unnecessary for the end user, and that's the stuff you absolutely need to remove from your organization. So lean started in manufacturing, but within DevOps communities it's become really, really impactful. And that's because of this anti-DevOps pattern, the siloed organization. When it comes to software delivery, there are many different teams involved in delivering that software. Look at something as simple as provisioning a VM, and there are all these teams involved. When they're siloed, you get these crazy flows of interaction: you get tickets, handoffs, delays, and a lot of waste. By realigning those skills into a cross-functional team, you immediately develop a culture which eliminates a lot of that waste. And so lean manufacturing, lean methodology, has become really impactful within these communities. You couple that with Cloud Foundry, and you get further waste reduction. There's nothing new here, but I wanna cover some of the high-level topics.
So the waste reduction we see with Cloud Foundry starts with speed of deployment of your environment. If I was to ask, in your organization, please can I have a brand new VM, how long would it take for you to give me one? Anyone wanna shout out? Timescales? Four weeks. Four weeks. Any advance on four weeks? Two days. Two days. Anyone else? Either end of the spectrum. So you speak to an operator, and often I speak to operators, and they say they can provision a VM in minutes or several hours, depending on how hardened that VM is, what they're putting onto it, how much middleware, et cetera. You speak to a developer, and they tend to measure getting a VM in days, weeks, or months. The worst I've ever seen: I was in a meeting with a customer, and someone walked in, spoke to their colleague, and said, that VM you requested 18 months ago, it's finally arrived. I'm thinking, 18 months lead time just to get a VM. That's really painful for productivity. And so by having pre-provisioned environments, self-service, and containers, you get rid of a lot of that operational waste. And that theme carries on throughout Cloud Foundry. So, stability. The fact that Cloud Foundry self-heals means the operator doesn't have to be as involved with bringing applications or components back up. And it also gives developers the ability to adopt patterns like blue-green deployments, canaries, feature flags, and routing services, so they have more control over how and what they deploy. Scalability. The fact that Cloud Foundry has dynamic routing in place means there's less operational concern. You can autoscale the applications, and you can autoscale the platform. And the same for security. We have the three R's. The fact that you can rotate and repave with a new stemcell, and get Cloud Foundry to do that rolling upgrade so you can easily roll out a CVE fix, means there's less operational overhead there.
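As a rough sketch of the feature-flag pattern mentioned above, with entirely hypothetical flag names and percentages (not any particular flag library's API), a canary-style rollout in application code might look like this:

```python
import hashlib

# Hypothetical in-memory flag store; a real system would back this with
# a config service so flags can change without redeploying the app.
FLAGS = {
    "new-checkout": {"enabled": True, "canary_percent": 10},
}

def flag_enabled(name, user_id):
    """Return True if the flag is on for this user's canary bucket."""
    flag = FLAGS.get(name)
    if not flag or not flag["enabled"]:
        return False
    # Deterministic bucketing: the same user always lands in the same
    # bucket, so their experience is stable across requests.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < flag["canary_percent"]

# Roughly 10% of users get routed to the new code path.
enabled = sum(flag_enabled("new-checkout", f"user-{i}") for i in range(1000))
print(f"{enabled} of 1000 users saw the new feature")
```

Blue-green and canary deployments apply the same idea at the router level instead of in application code.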
So all of those combined are loosely focused, for savings, on resource consolidation, software and license reduction, and more automation. Going back to our value stream, what we're doing is eliminating waste, as we discussed, and trying to get our lead time to match our process time as closely as possible. That's really the end state we're trying to achieve. And when you start to factor in all the different disciplines that go into delivering software, you find that you have different value streams for different stages within the process. Provisioning an environment, coding, releasing, data operations: each of these aspects has its own value stream, and you can spend time mapping out the detail that goes into it and what it would look like using Cloud Foundry versus doing it outside Cloud Foundry. We've actually been through this process with a number of the companies we've worked with. So just to give you an indication, this is some feedback on provisioning a VM. In this case, there were something like 30 people involved in provisioning the VM, a huge lead time of roughly 20 weeks, and approximately 50 to 90 hours of effort just to get a VM. The cost involved was $15,000 per VM. Now if you have six apps or six different environments a year, you can start to scale that up, and you get these crazy scaling issues just in terms of sunk cost and sunk resource. Compare that to what it's like in Cloud Foundry: within Pivotal at least, we do something called a platform dojo, where we stand up a hardened version of Cloud Foundry and integrate it into backend services, and in somewhere between six and eight weeks we can get a hardened version of Cloud Foundry. You do that once for all of your apps. So you have a demonstrable time saving there. And at the other end of the spectrum is patching.
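To make the arithmetic behind those provisioning numbers concrete, here's a back-of-the-envelope sketch using the figures quoted above; the VMs-per-environment count is an assumption purely for illustration:

```python
# Figures quoted in the talk: $15,000 of effort per VM provisioned,
# six apps/environments per year. VMs per environment is assumed.
cost_per_vm = 15_000
environments_per_year = 6
vms_per_environment = 4  # illustrative assumption

annual_sunk_cost = cost_per_vm * environments_per_year * vms_per_environment
print(f"Annual provisioning cost: ${annual_sunk_cost:,}")  # $360,000
```

Against that, a one-off platform build amortizes across every app that lands on it.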
So if you look at a typical environment, you have maybe 130 CVEs per month. For this particular company I worked with, it was five hours per CVE to roll out across the organization. You basically gave it to an operator with an Excel spreadsheet. They took a script, ran the script, the script failed, they raised a support ticket. It's very, very time consuming and very painful. Couple that with BOSH, where the release engineering toolchain can now dynamically roll out the new stemcell with a rolling upgrade. So this is taken from the CF Summit earlier this year. This is one of the companies we've worked with. When you look at these value streams on aggregate, you see some fantastic results: something like 283% transaction growth and a 50% reduction in time to market. And so I think it's absolutely worth spending the time to look at how you map out your process, where you can eliminate waste, and how you can get to these tangible savings. So Sean's now gonna talk about measuring the value and how we do that. Thanks, Duncan. So my name is Sean Keary. Guten Morgen. Bonjour. G'day. I am the Minister of Chaos at Pivotal. I've been working with customers for the last five years in the industrial, financial, and healthcare spaces, helping them measure operational value. One of the things that we prefer to do, instead of just measuring that value, is to avoid the waste. So I'm gonna talk about how we can do that. So how many of you are familiar with the concept of site reliability engineering? Nobody? Oh, a couple. All right. It's really a system operator, a system admin; it's just a crappy name, sorry. Site reliability engineering is the new buzzword. We're talking about buzzwords here, if you guys weren't paying attention. It's something that Google has put together; they like everyone to be an engineer.
So what it involves is your operations team not just being button pushers and ticket readers, but going out there, being proactive, and making sure your systems are doing the things that Duncan talked about: secure, scalable, and some of those other things. We'll come back to it. I'm gonna tie back to his concept of waste in the value stream with the seven types of waste from the Toyota production system in lean. The Google site reliability book talks about four golden signals for a platform. Anybody know what those are? No, just a couple of people. We'll get to them. I'm gonna map those back to waste to kind of bring you guys into the 21st century, out of the lean 20th-century stuff. But first we're really gonna focus on monitoring and incident response here, okay? So, monitoring: we have our service level indicators. These are our lowest-level measures of some aspect of the level of service provided. The number of people in this room might be an indicator. Anybody have a guess on how many people are in this room? Come on. 40, okay. So then I have a service level objective, right? It's the target value, or the range of values, for a service, as measured by an SLI. So Duncan thought we could get 40. Our stretch goal was 50, right? And then we have a service level agreement, right? It's the contract, between Duncan and myself in this case, for the consequences of meeting or missing the SLOs. So what I have to do now is shave my beard, because we didn't get to 50. 60? All right, good, thanks. Thanks to someone for actually counting. Okay, so I like to talk about test-driven operations. Moving forward, we have our site reliability engineers, or our platform reliability engineers, depending upon where you are in the process chain. And those guys use a process of continuous experimentation, right? I don't know what those SLOs should be, right? I'm gonna monitor and measure the SLIs all the time.
But when I compare the SLIs to the SLOs, I really have to decide whether I need to take some action, right? If I got to 50, I have to shave my beard. If not, I need to do something else, right? If action is needed, we need to figure out: is it an automated action? Is it paging someone? What's the case? Take the action and then review. It's all part of this continuous experimentation feedback cycle. Okay. So within the whole value stream, the part I'm gonna talk about now is more your internal business or operations side of things, platform specific. Keith's gonna bring you more onto the customer side a little bit later. Okay, so avoiding waste is gonna give us value. Our first waste, golden signal number one, is latency. We're gonna tie this back to the muda type of waiting, or delays in the process. Okay, so our service level agreement would be something like real-time readiness of the platform. Our indicator would be cell rep time, and I'm sorry for those of you who aren't familiar with these very technical terms, but these are just examples. Please do not use these numbers in production. Okay, auctioneer task placement failures, right? The number on the right is gonna be our objective, right? We want this number to always be greater than 0.5. The next one we'll talk about is errors in our system. In this case, we're gonna talk about errors specifically to help us avoid security issues, right? Anybody ever have any security issues with their platform? Okay, I did once; it cost me a $300,000 Amazon bill. Okay, let's monitor the security stuff. Number of authorization errors, failed logins, failed SSH attempts, right? And on the right, we've got our objectives, okay? In this case, 10 attempts could be 10 attempts per minute, 10 attempts per second, 10 attempts per hour. That's something you guys need to work out for yourselves, right? Anybody have any other examples of a security indicator that they use? And none of you guys are operators.
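The monitor-compare-act loop can be sketched as a tiny check over a table of SLIs and objectives. The metric names echo the examples above, but the readings, thresholds, and the "above the objective means act" direction are all made up for illustration; as the talk says, don't use these numbers in production:

```python
# Illustrative SLIs with made-up readings and objectives.
slis = {
    "auctioneer_task_placement_failures": {"reading": 0.2, "objective": 0.5},
    "failed_ssh_attempts_per_minute":     {"reading": 12,  "objective": 10},
    "unhealthy_cells":                    {"reading": 0,   "objective": 0},
}

def needs_action(reading, objective):
    """Assumed direction: a reading above its objective needs action."""
    return reading > objective

alerts = [name for name, s in slis.items()
          if needs_action(s["reading"], s["objective"])]
print("Action needed on:", alerts)  # ['failed_ssh_attempts_per_minute']
```

Whether "action" means paging someone or triggering an automated response is the decision step in the experimentation cycle.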
You're killing me. All right, moving on: saturation of our system, okay? So we wanna proactively scale our platform. If the number of unhealthy cells is zero, that's pretty good; I like that number. A company that it maybe cost many, many millions of dollars because they didn't scale their systems correctly was Target, on Black Friday; their system went down. That's the day after Thanksgiving, for those of you who don't know, a very big shopping day in the US. All their systems went down; I'm sure it cost them millions of dollars. Anybody have any examples here, scaling, anything else you'd like to see up here on this slide? And then the final one is gonna be traffic, right? So we wanna proactively scale the apps that are running on the platform as well. So we wanna look at our router throughput, right? In this case, I have 10,000 requests per second, a little more specific than the first couple of slides, right? But with Cloud Foundry, with our platform, we can also look at the requests per application instance to make sure our traffic is going the right way. And with microservice architectures in our cloud native world, we can look at each application function. An example of a traffic problem: you guys familiar with the circuit breaker pattern? In your house, nobody has a circuit breaker? Come on, I know you guys have circuit breakers. In software, the circuit breaker says: if my behind-the-scenes system is down, flip all the traffic and send it someplace else, right? If you don't have that pattern in place, you can DDoS yourself, deny yourself service, because you're making so many requests to the same point that you just overload it. The system can't even scale fast enough. So this is something you wanna try and avoid; that's waste, right? Traffic waste, or transportation, is the traditional muda term there. So I've just talked about a small part of our continuous experimentation cycle, the top part, right? Your process.
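The circuit breaker described here can be sketched in a few lines. This is a minimal illustration of the idea (trip after repeated failures, fail fast to a fallback, retry after a cool-down); production libraries such as Hystrix or resilience4j add half-open probing, timeouts, and metrics on top:

```python
import time

class CircuitBreaker:
    """Trip open after max_failures consecutive failures, then fail
    fast to the fallback for reset_after seconds before retrying."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, fallback):
        # While open, skip the backend entirely so we don't DDoS
        # ourselves with retries against a struggling system.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback()
            self.opened_at = None  # cool-down elapsed, try again
            self.failures = 0
        try:
            result = fn()
            self.failures = 0
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return fallback()

def flaky_backend():
    raise RuntimeError("backend down")

breaker = CircuitBreaker(max_failures=3)
responses = [breaker.call(flaky_backend, lambda: "cached response")
             for _ in range(5)]
print(responses)  # five fallbacks; the breaker opens after the third failure
```

After the breaker opens, calls four and five never touch the backend at all, which is exactly the self-inflicted-DDoS protection described above.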
Keith is gonna take it now to the next level and talk about how that fits into your product development process and your enterprise value stream. Hello, I'm Keith Streeney. I'm the Pivotal Federal Practice Lead. So this literally has nothing to do with my day job, but it's something that's very interesting to me and I really like this stuff. And Duncan's brought me in on a couple of customer engagements in which we worked out some of these different pieces. So one of the things we wanna do is revisit this slide and ask why. We dropped down into the weeds, we looked at indicators, we looked at objectives, we talked about overall waste reduction, and that's great. But as you heard Duncan say, there's a floor there, right? You can never drop below it; there's always gonna be some minimal operating expense. And if, let's say fictitiously, you do go from a million down to 100,000, that 900,000 looks great year one, but then year two, there's not that big drop anymore, right? So how do you continue to justify the use of the platform and all this effort to make things dynamically scalable? Where is this all going? The idea here is that we look at what the future value is. So how many of you feel like operating expense is something sexy to talk about? Right, it's not. It's important, absolutely a fundamental step, but it's not that sexy, right? It happens in the beginning, you get your operations stable, and then what? That's always the question that comes. And so this is what we say: we need to innovate, we need this new Cloud Foundry piece, buck the status quo, but then we go right back to how do we reduce OPEX? And then we run into more brick walls. So the idea is we need to move the conversation forward, from the reduction of OPEX to the effective use of CAPEX. And what that means is... is everybody familiar with what capital expenditure is? I mean, it is loosely used.
It also varies depending on your accounting method, but does everybody kind of understand what it means? Okay, great, awesome. In general, what it means is that for every new investment, I have a certain number of resources I have to use just to get that initial idea up and running in the software world, right? And so what we really wanna do is look at what the true promise of this conversation is. So what happens when we get this, right? We get the platform, your company has bought it, let's say, they love it, and they're like, okay, this is gonna solve world hunger, right? So here's the what-if: now that you have the platform, we want you to acquire and retain new customers. But we're not gonna give you any new money. You got the platform, it's magic, make it work. Well, we have to do that first fundamental step, which is, let's say fictitiously, we could get a 40% reduction in OPEX, which would be phenomenal, right? And then let's say by eliminating the waste in the delivery of the software, we're able to see 20% more efficient use of our CAPEX for new customer acquisition. So what do we do with that money, right? Yay, we saved all this money, but what do we do? Any company that stops there quickly disappears. So we wanna have a plan. We wanna have a plan for how we're gonna use that first fundamental step to feed growth and capture and retain new market share. And so here, we see the company comes out with a second objective. Now that you have this cloud-native architecture and you can move at the speed of the market and do all these great things we heard in the sales talk, we want you to do three new revenue-generating products per quarter. Revenue-generating, meaning we're not counting the failed ones; we want just things that are gonna make money, that are actually producing.
And we also wanna prove that we get 10% lower churn in the new user base versus our existing churn numbers. So normally you would freak out: how are we even supposed to do this, right? You just want fairy dust, and everything is flying off and magical. So we have to really look at this. It becomes work, right? All this is, is simply work. And what we wanna get to is... how many have heard of Blue Ocean Strategy? All right, so, great concept, right? Don't fight in existing markets. Go find new markets and be the only fish in that market. Awesome, right? It's a great idea. The problem is that that jump right now seems magical, right? It's very difficult to get into a new market that nobody's in; otherwise everybody would do it. So what we wanna do is make continuous experimentation on the product side a first-class citizen in the actual development cycle, in the platform, and in the way we look at the user base. And what that means is we have to observe users dealing with problems. We have to create assumptions about what we think they're doing to solve those problems. And then we have to continuously validate those assumptions to understand: is our hypothesis right or is it wrong? Either way is valuable. But how do we steer so we find that actual new market? And then from there, suddenly new revenue streams open up, new customer acquisitions are possible, and we can see more than that 900,000 savings in year one; we see open-ended revenue growth across all the different market verticals. And there's a flip side to this: technical debt. Well, one aspect of technical debt is features that don't resonate, but that we still have to maintain, because if we rip them out, something bad's gonna happen to the rest of the actual software.
So what cloud native gives us is this kind of clean modularity that minimizes that ripple, so we're able to sunset things that we see in the data aren't generating any resonance. And by getting rid of those and reallocating those resources and teams, suddenly we have new resources to put on new hypotheses, to drive for the new markets we wanna expand into and generate new revenue. By pruning this technical debt early, we can reallocate those resources to new hypotheses. And what this really allows us to do... you can see the problem with opinions, informed or uninformed, and going down paths that don't necessarily lead anywhere. So what we really wanna do is make it so that product development can remain objective and more focused on what the user is telling them, rather than guessing product directions based on unvalidated assumptions. And that's really the key here, right? What's the difference between speed and velocity? Anybody? Velocity is a vector, right? So if my goal is to exit and I'm running in this direction, it doesn't matter how fast I go, because I'm going in the wrong direction. So the idea is: how do we know we're going towards a new market, the correct market direction, and how do we know the product is evolving along the right lines? We need actual velocity, not just speed. We need to know that what we're doing is the right thing, based on what our customers are telling us they want in our product and are willing to pay for. So when we look at this, that's what we're trying to get to: effective use of CAPEX, of expenditure. And this is what we have today, right? This is typically what happens: the product teams get together, they huddle up, we've got this great new product, we're gonna put it out, and suddenly it's not doing anything, right?
And we see this all the time, and that's why it becomes more art than science: because we're guessing. I mean, ultimately that's what happens. You have all these product surveys and things out there, and you have all these different biases that come into play, but what really ends up happening is you're not really talking to or engaging with the right folks. And it doesn't necessarily mean actual user-centric design; we do a lot of user-centric design, which is awesome and builds great products, but not everybody can get that level of interaction with a customer. So what we also wanna do is instrument the platform, instrument the microservices, and instrument end-to-end in the cloud native architecture to really understand these types of user interactions. And what that does is liberate the product, or the company, and enable them to evolve with customer changes in both their values and preferences. And the reason this is important is because even after product features have been delivered, the granularity of microservice design promises clean modularity that lets us continuously collaborate with that customer as they evolve. We all know circumstances change, and that means user needs change. And instead of having to constantly restart with a new product, we're now able to evolve the existing product, which costs less money because you have a core baseline there. And that initial product can remain relevant long after its initial launch. So we look at this and say, well, how do we do this? Does everybody know what Elf on the Shelf is? Anybody? So in the States, it's this creepy little thing. You put it on your shelf and it's supposed to watch you all year, and that's how Santa knows who gets presents and who doesn't. That's the general synopsis.
But the idea here is that we actually have the ability to monitor the traffic that comes through the platform, to look at what the customer interactions look like, and to try to derive things from those. And we have tools, very specific tools, for this, like A/B deployments or blue-green deployments, that allow us to do some of these things. But those really tell us yes or no, right? That's what we're looking at: yes or no. But by building these up, they can also start to tell us the why. We can use these in conjunction with the granularity of cloud-native architectures to build up distributed tracing, so we can see the shared context across interactions and really look at what is going on in the platform. And these primitives help us build up very complex hypotheses, with which we can look at a number of different factors and validate or invalidate the behavior that we expect. So what is this like? Anybody know what Choose Your Own Adventures are? Anybody remember those books? All right, so the object is a book, right? It's a book; everybody knows what a book is. But the difference is the way you navigated the book. You'd read something in the adventure, and then it'd say, if you decide yes, go to page 13; if you decide no, go to page 30. And then it would take these alternate courses. And the idea was that now I'm using the book how I want to use the book, which is what made them so popular, versus somebody telling me and dictating how that story is going to occur. So it worked like this, right? You had these pathways that you could go down. In this particular example, let's say we're a company and we have basically three kinds of things: we sell physical media, we do electronic purchases of physical products, and we stream audiobooks and movies. So we have streaming media.
And then we have these different pathways. Let's say in streaming media we have two hypotheses we want to test: automated recommendation engines and manual recommendation engines. Let's say on the e-purchases we look at tailored sales versus non-tailored sales, manual versus automated recommendations, targeted ads versus non-targeted ads. So we have all these things, and inside the company we don't know which way to go. We say, you know what? Streaming is the way of the future. We think everybody's going to do streaming, so let's just drop our e-purchases. And big data is awesome, it solves all the problems in the world, so let's do nothing but automated recommendation engines. But then we run the trace and look at the data, and we say, wait a minute. Assumption one and assumption two are almost even. Maybe physical products are not going away; maybe they're just as popular. But then we see the secondary level, where we see the blue-green deployment, and we see that 60% use manual recommendation engines and 40% use automated recommendation engines. What's going on there? But then we also see that if they pass through the automated recommendation engine, we're seeing almost 100% conversion. What is really going on here? It immediately points us to possibilities and new hypotheses that we can then test with new layouts. For instance, in this particular case, if we think about streaming media, streaming media is sort of immediate gratification: I want to watch a movie, let me put it on. So maybe our recommendation engines aren't very accurate, and if we spend a little more R&D on how those recommendation engines should work, we're going to get higher conversion rates. Maybe not. Maybe it's that down here, everything is split about even. Maybe when people look at physical products, they look at more than one source of research, because I don't need it right away. I'm buying a bicycle.
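The 60/40 split and the near-100% conversion the speaker describes fall out of a simple aggregation over traced events. A rough sketch, assuming each interaction has been reduced to a record with hypothetical `variant` and `converted` fields (real data would come from the trace store, not an in-memory list):

```python
from collections import defaultdict

def conversion_by_variant(interactions):
    """Group traced interactions by experiment variant and compute each
    variant's share of traffic and its conversion rate."""
    seen = defaultdict(int)
    converted = defaultdict(int)
    for rec in interactions:
        seen[rec["variant"]] += 1
        if rec["converted"]:
            converted[rec["variant"]] += 1
    total = sum(seen.values())
    return {
        v: {"traffic": seen[v] / total, "conversion": converted[v] / seen[v]}
        for v in seen
    }

# Toy data mirroring the talk's numbers: 60% of users hit the manual
# engine, 40% hit the automated one, but the automated path converts
# far better, which is exactly the anomaly worth a follow-up hypothesis.
sample = (
    [{"variant": "manual", "converted": i < 18} for i in range(60)]
    + [{"variant": "automated", "converted": i < 38} for i in range(40)]
)
stats = conversion_by_variant(sample)
```

The point isn't the arithmetic; it's that once the platform is instrumented, questions like "which pathway actually converts?" become a query rather than a guess.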
I would go on a bunch of different sites, looking at different things. So whether or not you target me, whether or not it's an automated recommendation engine, is kind of irrelevant, because I've already done my research. But for streaming that night, I want to sit down and watch a movie, so I'm going to be more likely to look at recommendations. And so it gives us pointers as to how the user wants to use our application, and we can do further hypothesis testing to figure out where we can get the most ROI out of every dollar we're spending on engineering, on the platform and on the actual product line. So this opens up a whole new world of customer understanding. Things like: where are our highest bounce rates? Where do we see the most people exiting the conversion funnel? Where do we see what's most valuable in the product? Where do we see the highest conversion? What about the layouts? If we try A/B deployments of different layout structures and workflows, maybe it's just a user experience issue. Maybe certain countries are different from other countries, and we need to do some sort of localization, but on the entire workflow rather than just language localization. All different things that the data points us to, and by instrumenting the entire platform, we can see a shared context across the entire interaction. And the idea is this: we don't want to chase fads. When we do product development, it's significant R&D, significant investment dollars, and a significant level of effort that we put in to try to figure out what's going on. What we don't want is for something to get hot and then sunset, with all that investment wasted. So we really want to avoid fads.
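Questions like "where are our highest bounce rates?" and "where do people exit the conversion funnel?" reduce to measuring drop-off between ordered funnel stages. A minimal sketch, with an entirely hypothetical funnel and made-up counts (real stage counts would be aggregated from the same traced events):

```python
def funnel_dropoff(stage_counts):
    """Given ordered (stage, users) pairs for a conversion funnel,
    return the fraction of users who exit between each adjacent pair."""
    dropoff = {}
    for (stage, n), (nxt, m) in zip(stage_counts, stage_counts[1:]):
        dropoff[f"{stage} -> {nxt}"] = 1 - m / n
    return dropoff

# Hypothetical funnel: landing page -> product page -> cart -> purchase.
funnel = [("landing", 1000), ("product", 400), ("cart", 120), ("purchase", 90)]
exits = funnel_dropoff(funnel)
# Here the steepest exit is landing -> product, so that's where the next
# layout or localization experiment should be aimed.
```

The output ranks the stages by leakage, which is what turns "maybe it's a user experience issue" into a testable, prioritized hypothesis.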
We want to see long term how people are using it, whether it needs to evolve, and maybe whether it needs to keep evolving through a continuous experimentation process, where these feedback loops tell us how to evolve. And this is really the panacea of all the cloud native buzz: how do you get into those new markets and expand that customer base, which is much greater than a $900,000 savings in year one? You're talking about millions and millions of dollars that you could potentially bring back to your CFO and say, this is why you want to invest in cloud native architecture. This is why we bought the platform. This is why we do the things that we do, and why we changed our whole DevOps culture. The whole point is that we know we can get to these blue oceans, and we now have a practical way to do it. There's an actual step-by-step process to get there; it's no longer magic. And so the ultimate takeaway is this. Step one, waste reduction: let's get our deliveries more streamlined. Step two: let's get closer interaction and feedback with our customers. And then ultimately what we really want is data-driven decisions going forward, higher customer spend per investment dollar, lower overall subscription churn if you're a subscription model, and fewer restarts and more evolution. It's ultimately less costly to evolve a core base, and to sunset and remove technical debt, because microservices give you the ability to segment those things and get rid of them without causing a ripple effect through the baseline. And so these are all the tenets that you've heard throughout the conference in these different pieces, and this is ultimately where we want to get to. That's all I have. Thank you. Questions? Thank you.