Hello, welcome to the Project Mini Summit on FinOps. My name is J.R. Storment. I'm the president of the FinOps Foundation, and I'm joined here today by a rockstar team of folks who've been doing this type of work for years. I'm going to start a screen share here; hopefully everybody can see my screen in a second. All right, can I get a thumbs up from the co-presenters if you can see the black screen now? Great. This black screen you're seeing is exactly what we saw in the keynote on the first day of this event, right at the moment the FinOps Foundation's integration into the Linux Foundation was announced. Jim, the executive director of the Linux Foundation, unfortunately had his internet drop right at the moment of our big, highly anticipated announcement. So I wanted to bring that slide back, because it was meant to introduce what we're talking about here today. The FinOps Foundation, which everyone presenting on this call is a member of, just announced that we are merging to become part of the Linux Foundation. I'm super excited about this organization for what they do around open source, neutrality, and the building of standards, and a lot of what we're going to talk about today is really the kernel of what we'll be building on over time as part of that larger organization. Now, when we talk about FinOps, there's often confusion: what is FinOps? In fact, anybody who looked in the Slack channel will have noticed there is a FINOS and there is a FinOps, two different projects within the Linux Foundation now. We are FinOps, which is really another term for any of these things on the screen: cloud financial management, cloud cost management, what some people call cloud economics. These are all synonyms for the same practice.
And the practice is one of: how do we help get the most value out of every dollar being spent in cloud? You'll note, if you look at the list of speakers on this call, that all of them do FinOps in their organization, but they may not call it that, and their titles are all over the place. We've got Mike as a principal systems engineer, Ashley as a senior FinOps manager, Sasha as head of global cloud operations. This is very similar to the mix we see across the FinOps community, which is really a combination of different types of people doing this work. And the work we're going to talk about today is really this: FinOps is essentially a set of best practices, culture, and prescriptive actions that are there to bring together disparate teams (business teams, finance teams, engineering teams) so that they can work together to get the most value out of their cloud. It's not necessarily about saving money; it's about making the right business decisions about where to invest more or less. That can go all the way down to tactical things, like we need to right-size this resource, or we need to buy this reserved instance or committed use discount, and all the way up to how we structure our chargeback and our investments in infrastructure, all supporting the business decisions and bringing these groups together to collaborate. So the FinOps Foundation, which we're all a part of and which, as I said, I'm now excited to say is part of the Linux Foundation, is here to do three things. First is to be a central community for this group. All these people, the now 1,500 practitioners who are part of the FinOps Foundation, have different job titles and work in different teams, so it's really hard for them to find each other at an Amazon re:Invent or a Google Next or any of those events.
So we've created a set of now-virtual events for these people to come together and really help advance their careers. Second is education, which is core in this space, because people don't always know what FinOps is, or they're new to it. You can't yet go to a college and hire somebody who's graduated with a FinOps degree, right? So we need the ability to help people level up their knowledge, their training, their certifications. And the third thing we're all working on together, and now open sourcing via the Linux Foundation, is best practices and standards. What are the metrics that really matter in this space? What are the core capabilities you need? How do you structure teams? All of these things are being built out now and will be open sourced on GitHub. I need to throw in the requisite Dilbert comic; I'll give you a second to read it. This is basically the challenge and the interactions that happen in companies around cloud. You've got business users who don't understand the cloud technology. You've got finance people who don't understand the cloud that well either. You've got engineers who don't necessarily understand things like OPEX and CAPEX and their spend's impact on the business. And so you end up in a world that's ripe for miscommunication, with people not knowing what changes to make or who to talk to. That's really what the FinOps practice is about: bringing together all these disparate groups. So that's the engineering and ops teams who are actually deploying infrastructure; the business and product owners, whoever is responsible for an application or workload; the executives,
who are looking for more input into where technology decisions are made and money is invested; and the finance and procurement teams, who are in a completely new, crazy world, where they historically could report quarterly and retroactively and now have to keep up with per-second billing from cloud providers and constantly changing forecasts. The FinOps team is really the group that comes together to help all these individuals make the necessary changes to their own internal processes, and also to interact with the cloud providers as things are being deployed. Today we're going to focus on that first set, because this is the open source summit. I don't think there are a lot of finance folks on the call, and apologies if I'm wrong about attendance, but we wanted to focus today's content on engineering and ops in this world of cloud spend. The winning title in my mind was the LiveRamp team's: doing cost-aware software development in the cloud. As we walk through this agenda, we want to talk about how engineering teams have now had to start thinking about money in a way they haven't in the past, and how that affects their behavior. Mike's going to dig into that, and into how to start teasing out the metrics that matter to engineering teams. Josh, Sasha, and Patrick are then going to talk about how they structured and built their own FinOps practice and some of the learnings they had from that. And then we're going to wrap up with Ashley showing us more of a run stage, if you think about the crawl-walk-run methodology of starting simple, advancing, and ultimately getting much more mature in your processes.
Ashley has built an amazing, well-documented practice at Pearson, with a lot of process built out to ensure that all these different groups can come together with the right conversations and the right data, so they can eliminate confusion and ultimately make better decisions about their cloud spend. We've got about 90 minutes total, and we're about eight minutes in. We're going to do most of this content in about 60 minutes and then hopefully leave 25 minutes at the end for answering your questions and a bit of a panel Q&A. So with that, I'm going to pass over to Mike to give you the story of what engineers face when they start to deploy in cloud. [Brief audio and screen-share troubleshooting as Mike connects.] So, my name is Mike Fuller. I'm a principal systems engineer at Atlassian. I've been working there for a bit over eight years, and about six of those have been focused on the Cloud Center of Excellence at Atlassian, where one of the things we support is the FinOps practice within the company. Today I'm going to talk about how things change from an engineer's perspective as you move into cloud, and then how FinOps can help engineers as that landscape changes. Originally I was a platform team member at Atlassian, and we hear this story a lot: platform team members dream about having servers in the data center, and for those servers to become reality, they have to draw up plans, go to procurement, and pitch why these new servers are going to help the business. If that pitch goes well, they get access to the money, and then they can get servers into the data center. That's the traditional model.
The engineers are requesters, but there's a gated approval for the actual spend of money in the organization, and the spend is fairly predictable: outside of those large purchases, it's fairly known month to month. There are usually long procurement cycles to get new equipment, and if the engineers come up with a bad plan, there's a high cost of failure. When we move to the cloud, we move to a DevOps model: we have lots more service teams instead of platform teams. Rather than all of them asking procurement for access to money, the reality is they're moving to infrastructure as code and automation, things like auto scaling, so now we've got machines needing access to spend money. What really happens is that service teams are able to spend money in the cloud without any sort of gated process in front of the money. At first this seems okay, but eventually the spend becomes material to the business, and that's usually the point where companies start to feel cloud cost as a pain point and reach out for help solving the problem. So when we look at the cloud dynamics: engineers have free rein to spend money with code, and finance loses visibility. As I said, at first this is not too much of a problem, but it bubbles up to a tipping point. There are no large upfront expenses, and the month-to-month spend is very dynamic and very hard to predict. There's also a much lower cost of failure for teams to try out proofs of concept and so on. But the big thing here is that there's now a lack of communication between engineers and finance around spend.
What we want to do when we move into FinOps is make it so that engineering and finance are working together. We don't want to put in a procurement gate that slows the organization down; we still want to allow fast-paced, agile experimentation and infrastructure as code, while bringing some predictability to these cloud costs month to month. When I've given metrics-driven cost optimization talks before, a lot of them focused on the practitioner. There are things we'd like a single practitioner within the organization to manage, like your committed use discounts and your reserved instances, and there are a lot of metrics that can help that practitioner measure how they're doing on those things and drive the way they manage those resources. But I wanted to take a step back and ask: how does this look from the engineering team's perspective? There are metrics available that can help engineers work out how their costs are going and how efficient they are. We have a pile of FinOps principles within the FinOps Foundation, and what I'll try to do is tie today's content back to some of them. One of them is that accountability is pushed out to the edge. The idea is that when you move to DevOps plus cloud, your teams move to a "you build it, you run it" model, and we just want to add "you optimize it" on the end of that. When we talk about optimization from an engineer's perspective, we're really talking about usage optimization: if I can halve the size of a server instance, I can halve its cost. So if I take a pile of EC2 instances, there are a couple of main ways we can start to optimize that usage.
The first is idle resource removal. Over time, no matter how good you are at keeping the house clean, there will be resources that become idle and forgotten about, so we want to be able to identify which resources have been forgotten and have engineers clean them up. The second is to realize that some resources may be oversized. Especially when you're doing greenfield deployments or new migrations into the cloud, it's very hard to predict up front exactly the size of the services or resources needed. So we can identify the ones that look grossly oversized and pick them out to resize down, reducing the cost. Now, that whole pile of recommendations is divvied out to teams around the organization, and this is something the FinOps practitioner can help with: generating the recommendations, or getting them from some other platform, either from the cloud provider themselves or from a third-party vendor, and then getting those recommendations out to the teams responsible for the individual resources and asking them to investigate. Another FinOps principle is that benchmarking provides context. If we take the wastage, first we can look at how many dollars of spend sit on resources we could probably avoid paying for, and put that in the context of how much we're spending on cloud overall. That gives us a feel for how much we're wasting. But really we want to look at what this looks like over time: are we getting better or worse with wastage? The wastage itself is made up of all the different teams responsible for those components of waste, so we want to be able to measure teams across the organization.
We could do that in pure dollar terms: how much each team could avoid spending every month if they cleaned up their resources. If we did that in this example, we'd see that team A has the largest amount of potential savings, and we could say that team A is worse than team C. But we should probably put that in context of how much each team is spending on cloud. A team that's spending a lot more than others but carrying only a slightly higher dollar amount of waste may actually be wasting a smaller percentage of its spend, which is the better indicator. As a team continues to grow and scale what they're doing on cloud, that wastage should hopefully reduce; if it doesn't, it's going to become a problem for the organization. It's also important to realize that recommendations, whether idle resource removal or resizing, are not all equal in size: some can save a lot more than others. So what we really want to do is work out what this looks like when we lay it out across our teams. Every individual recommendation is going to take a certain amount of effort to investigate and to action, changing deployments and release cycles and so on, and that needs to be balanced against the amount you can save. You can see that while team A originally had $3,000 worth of savings, there are a lot of recommendations behind that $3,000, whereas teams B and C have smaller amounts to save but a lot less effort in looking into their handful of resources. So we're finding the balance between effort and potential savings. And then "real-time visibility drives better decision-making" is, I think, the last principle I'm going to talk about today; there are more than these. We used to call this the Prius effect; now we call it the Tesla effect. Apparently that's way cooler.
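The per-team comparison Mike walks through can be sketched in a few lines. The team names, spend figures, and recommendation counts below are invented for illustration (only team A's $3,000 savings figure comes from the talk); the point is ranking teams by waste percentage and by savings per recommendation rather than by raw dollars.

```python
# Hypothetical per-team data: monthly cloud spend, potential savings from
# cleaning up idle/oversized resources, and number of open recommendations.
teams = {
    "team_a": (30_000, 3_000, 120),  # big savings, but spread over many recs
    "team_b": (8_000, 1_200, 6),
    "team_c": (5_000, 900, 4),       # small savings, very little effort
}

def waste_pct(spend: float, savings: float) -> float:
    """Potential savings as a share of spend: waste in context."""
    return 100 * savings / spend

def savings_per_rec(savings: float, recs: int) -> float:
    """Average savings per recommendation: a rough proxy for savings vs. effort."""
    return savings / recs

# Rank teams by savings per recommendation, not by raw dollar savings.
ranked = sorted(
    teams,
    key=lambda t: savings_per_rec(teams[t][1], teams[t][2]),
    reverse=True,
)
for t in ranked:
    spend, savings, recs = teams[t]
    print(f"{t}: {waste_pct(spend, savings):.0f}% waste, "
          f"${savings_per_rec(savings, recs):.0f}/recommendation")
```

With these made-up numbers, team A has the biggest dollar savings but the worst savings-per-recommendation ratio, so teams B and C are the quicker wins.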
The idea is this: think about a 1970s car. We fill it up with gas and drive down the freeway, and to work out how our driving affects the car's efficiency, we really have to wait until the tank gets empty, look at how many miles we drove, and back-calculate how efficient our driving was. When you move to electric vehicles, they give you these beautiful dashboards where, as soon as you put your foot down on the gas pedal (electric pedal, I guess), you see immediate feedback on how your driving is affecting the charge on your battery. The car doesn't literally say "drive better," but with that feedback, people naturally drive more sensibly. With an electric car you're probably also worried about making it to the other end of the trip if you drive really aggressively. We translate that to cloud spend by presenting cloud costs to teams as close as possible to the time they're taking actions, so they can connect those actions to the impact on the cloud bill. Traditionally, what we've seen with companies doing FinOps successfully is that they become more efficient just by showing engineers the cost impact of their actions; they become leaner just because they're made aware of it. And so, coming back: engineers are really good at generating metrics to measure the performance of their applications, things like memory and CPU usage or requests per second, with dashboards that have really helped them dial in the performance of their application. What we're really talking about here is adding cost as one extra metric they can optimize as well.
That transitions us to being able to have a business conversation around how good, fast, and cheap you want to build your services. For your tier-one or tier-zero services, you're probably going to spend a bit more to make sure they're highly available and as fast as possible. But for the lower-tier services in the org, you can make trade-offs between how available and performant they are and how much they're costing. Where we really want to get everyone to is what we currently call the final stage, the nirvana state of FinOps, which is unit economics. We want to transition people from thinking about the cost of the service to the cost of serving their customers, or of delivering some form of business benefit. Now, forecasting costs is really hard. We hear this story as well: finance asks how much you're going to spend, and it's very hard for engineers to work out exactly what that will be, especially for longer-term forecasts over quarters or years, or even three-year forecasts. It's almost like shaking a magic eight ball. Even worse is when finance tells you how much you're going to spend, and that's not at all where the engineers had pegged their spend; they look at all the projects they want to get done, and the spend doesn't fit. These conversations really come out of a focus on exactly how much is being spent on cloud. So when we look at forecasting, the example I want to draw on (for which I have no actual data, though I'm sure this is roughly how it worked) is this: if we go back to January this year, I'm sure companies like Zoom and Slack had nice forecasts of what the year was going to look like for their cloud spend.
Then COVID hit, everyone started working from home, they signed up way more customers than they ever predicted, and their cloud spend would probably have been way higher. Without moving to unit economics, those would just have been conversations about cloud spend being too high and above forecast. If we look at how unit economics enables a better conversation: yes, cloud costs would have been way up, but the number of users and accounts these companies were serving would have been way higher than forecast too. If you combine these two metrics, you can see in this example that the cost of serving each customer is actually getting more efficient as they scale up. So it becomes a little less about total cloud spend and more about the efficiency of what they're getting out of that spend. So that was a quick run through some of the metrics available to engineers and generated by a FinOps practitioner in the org, but they're all geared around this idea of having the business, the finance team, and the engineering team each put their piece of the puzzle into cloud costs. Often an engineer is tasked with a massive migration or a net-new deployment and is focused on the technicalities of the migration, but there comes a point where cost is important, and a FinOps practitioner within the org enables the three sides to come together nicely, without that angst about having to deal with costs. And I think that's me, so I'll pass over to the next folks, at LiveRamp I think it is. Hello, can you hear me? Well, I think that should work. So, hi everyone.
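The Zoom/Slack-style example can be made concrete with a toy unit-economics calculation. The monthly figures are entirely made up, as Mike notes he had no actual data; the takeaway is that total spend rises over forecast while cost per user falls.

```python
# Hypothetical monthly figures: (month, cloud spend in dollars, active users).
# Spend grows far past any January forecast, but users grow even faster.
monthly = [
    ("Jan", 100_000, 50_000),
    ("Apr", 400_000, 250_000),
    ("Jul", 700_000, 500_000),
]

for month, spend, users in monthly:
    # Unit economics: divide total spend by a business driver (users here).
    unit_cost = spend / users
    print(f"{month}: ${spend:,} total, ${unit_cost:.2f} per user")
```

Total spend grows 7x in this sketch, which looks alarming in isolation, while cost per user drops from $2.00 toward $1.40, showing the service getting more efficient as it scales.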
Just while Patrick gets our slides up: our presentation will be on, sorry, I'm waiting for the screen share to close down on Mike's side, I think. Can you see the slides, Josh? No, you still have to screen share. Sorry, thanks for your patience, everyone. Great, yes. So our segment is going to be about cost-aware software development in the cloud. What I'll be doing, along with Sasha and Patrick, is going into a more specific case study of our experience with FinOps at our company. The way all of us know each other is through LiveRamp. I served there as a DevOps/SRE engineer and now do so as an independent consultant; I'm a former sysadmin, and you can find me on Twitter. Sasha is the head of global cloud operations at LiveRamp, and Patrick is a senior product manager in infrastructure, also at LiveRamp. We've spent a lot of interesting time working together, and you'll get to learn about our experience with cloud, LiveRamp, and FinOps in just a little bit. A quick announcement: there will be a little interactive segment during this presentation that we're going to try out. Everything is new right now, obviously, because of COVID, but if you humor us and give it a try by joining the track-two FinOps channel in the OSS/ELC Slack, you'll be able to participate in the presentation later on. Great. So, just to recap: we're going to share our experience with LiveRamp, cloud, and FinOps, talking specifically about LiveRamp's scale. Sasha will pose a rhetorical question.
I will talk a little about how, from an individual developer's perspective, everything has really changed about building your app, powering it, and operating it, and what responsibilities you may not know you've gained in the intervening time, but surprise, you're going to be held to them now. Then we'll jump into some examples of mistakes made during LiveRamp's journey to the cloud, just so you can learn from them, and a quick wrap-up. So this was our experience at LiveRamp. The project that Sasha, Patrick, and I all worked on together, in our respective roles, was to take LiveRamp's data center, which was hosted in San Francisco, and fling the whole thing into GCP within 12 months. This was a coordinated effort across five countries and 40 engineering teams. LiveRamp is in digital advertising, so there's a lot of data: we read and write 13 petabytes of data per day and throw it all at Hadoop running on 80,000 CPU cores, a mix of VMware and bare-metal machines, and lots and lots of memory. Before, in the data center, this was a Chef-on-VMware sort of build, which meant that people had to ask for servers and wait for provisioning. We moved all of that to Kubernetes, and we moved to infrastructure as code while doing so. And yes, we did it all in 12 months, and we have many more gray hairs and many more stories because of it. So I'd like to hand it to Sasha now to pose this question. Thanks, Josh. Before I pose this slightly saucy question, I wanted to do a little bit of meta-conversation and say that I'm personally very excited that the FinOps Foundation has joined the Linux Foundation. I'm looking forward to contributing in a meaningful way and to sharing with and learning from the other members of the group.
You're going to hear us talk about some of the same things that J.R. and Mike talked about, and we want to reinforce those topics because we think it's really important for developers to understand them. When the three of us, Josh, Patrick, and I, talked about building our presentation, we asked ourselves who it was for and who it could be most relevant for, and there are specific things I think all developers should understand about both migrations and spending in the cloud that will help them succeed. Part of what I'm going to be asking, and part of what I'm going to be telling, is the insider's perspective from, let's say, the executive leading the migration, who has specific goals in mind that perhaps some developers are aware of and some are not. The more you know about this, the more likely you are to be successful in the cloud. So the question I want to ask all of you, and again it's rhetorical, is: is a cloud migration successful if you wildly overspend your budget? I think the natural inclination of engineers, myself included, is to say, well, the technical part was successful, but the budgetary part was not. That makes sense, but at the same time it's not how the VP of finance thinks of it, and it's not how the CEO thinks of it. And this in fact happened to LiveRamp. We had this extraordinary migration: we pushed 100 petabytes of Hadoop data into the cloud within 12 months. It was organized in the most elegant way by Josh and Patrick and all the other development teams that were part of it. We did it, and we felt fantastic about it. And then, about a month or two into our time on GCP, finance came knocking on our door and said, hey, you're wildly overspending, like a drunken sailor, what's going on? That was the first indication I had of how vastly important it was to meet the budgetary goals, because we're a public corporation. We give guidance to the street.
Many of you on the call are in the same position, and we want to share the pain that we had so that you can avoid it in the future. The first bit of perspective to understand, in case it's not clear, is that when we move from on-premise to the cloud, the decision to spend money moves from a combination of finance and higher-level engineering leadership, who meet perhaps quarterly and decide, hey, we need to buy 20 more racks of servers, directly to the developer. And that is a fantastic thing; it's exactly what we want, because that's how we go fast. But quite often people don't fully understand that along with that power to spend money by command line, essentially via API, comes a corresponding responsibility. From a finance perspective, they've gone from a process they call CAPEX, a consistent, planned quarterly and yearly spend they can forecast very effectively, to monthly operating expenses, or OPEX for short. And to Mike's point earlier in the broadcast, that could be limitless, right? We've gone from a situation where spend is tightly controlled within a central org to one where it's at the edge, and that can be a very exciting but dangerous place. We know developers love having that control, but it became very obvious to us at LiveRamp that we hadn't talked about this shift and we hadn't given developers the tools to know what they were spending and how they were spending it. We didn't give them the alerts. We didn't give them the training to understand what was going on. Let's switch to the next slide. So when Patrick and I were trying to figure this out, we spent a lot of time thinking it through and talking to people, and we heard about this FinOps movement and wanted to know more.
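As one hedged illustration of the kind of guardrail Sasha says LiveRamp initially lacked, here is a minimal month-end spend projection and budget alert. The team names, budgets, and spend figures are hypothetical; in practice the month-to-date numbers would come from your cloud provider's billing export rather than hard-coded values.

```python
import datetime

def projected_month_end(mtd_spend: float, today: datetime.date) -> float:
    """Naively project month-to-date spend linearly to the end of the month."""
    # Find the last day of the current month: jump past day 28, snap to the
    # 1st of next month, then step back one day.
    last_day = (today.replace(day=28) + datetime.timedelta(days=4)).replace(
        day=1
    ) - datetime.timedelta(days=1)
    return mtd_spend / today.day * last_day.day

# Hypothetical monthly budgets and month-to-date spend per team.
budgets = {"ingest": 40_000, "matching": 25_000}
mtd = {"ingest": 30_000, "matching": 10_000}
today = datetime.date(2020, 7, 15)  # halfway through a 31-day month

for team, budget in budgets.items():
    projected = projected_month_end(mtd[team], today)
    if projected > budget:
        print(f"ALERT {team}: projected ${projected:,.0f} vs budget ${budget:,}")
```

A linear projection is crude (it ignores scaling events and one-off charges), but even this level of feedback gives developers the early warning that a quarterly finance review cannot.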
So we flew down and took the first class we could; it happened to be taught by Mike Fuller, actually. The moment I started to hear what FinOps was, that is when I understood it was a perfect framework for addressing the challenge. It's really a combination of governance, tooling, and a cultural shift in engineering that allows you to solve the problem. You first make the cost visible to all engineers by giving them a tool of some kind, whatever it is, that says: here's how much you're spending across your environment for your product. Then you let them see what opportunities there are for optimization; the engineers make the change, and then they see the information once again, and it becomes this beautiful, virtuous loop in running the environment. I think that's the context I wanted to provide, so that as developers you can better understand how the rest of the folks in your organization think of the challenge. With that said, let me hand it back to Josh so he can tell you more about how he saw this problem and tried to address it as a developer. Yeah, thank you, Sasha, that was great. So I'd like to do a little then-and-now comparison to think about how we got here, what has changed, and how we need to adapt. Let's take a bit of a trip into the past and look at what things were like before public cloud. It's a lot like what Mike and Sasha said, from the perspective of a developer. The most important thing is that you were never involved in the decisions, right? There was very little contact between engineering and finance, because there was probably a VP of your division who was talking to finance for you and securing resources for you; you weren't the person.
And also something that's really important as a developer, that maybe isn't appreciated most of the time, is that sysadmins built and operated our infrastructure, and there was usually, I don't know, I haven't worked in that many companies, but there was probably a constrained set of ways to express how you could build your stuff. For example, we were limited by what you could build in Chef, and auto scaling wasn't easy. So we sort of had to design projects based on what we could imagine within those constraints. And of course, any time you wanted to change something, next slide please. If you wanted to change something, if you had a great idea but your data center was just not set up for it, then too bad, you had to wait until the next capital expenditure cycle. As an engineer with a great idea, you didn't even know who to talk to to say, I have an idea that can revolutionize our business, but we need to make a change in the data center. Now in the cloud, all of that changes. In the cloud, you're able to take all the little building blocks they give you, and they're pretty agnostic, right? They're meant to appeal to a wide variety of developers. If you have a great idea that can change your organization, then you can build it. The cloud provider takes on all of the things that you don't know or care to build and operate, like database servers, networking, firewalls, all the stuff that literally required hardware in the data center, and you can move super, super fast, and it's so great for development. But unfortunately, the costs now move just as fast, right?
So because now you're responsible for purchasing decisions, if you're not paying attention while executing your idea, you'll probably make some interesting decisions that are great from a technical perspective but not from a financial one. And believe me, if you're doing this under the auspices of a larger company, finance will notice that you have just spent a million bucks on something, or a hundred thousand bucks, or something like that. All right, and so with that, time to try our great participation experiment. If you would all join the track-two finance Slack channel, I'm about to post a prompt and some potential answers. And I'll read the prompt: how much does the app or product that you work on, whatever team you're on, whatever company you work at, cost every month to run in the public cloud? And if your company isn't in the public cloud, just give a guess at what it would cost. I'm going to post a couple of potential values here that I think might be fairly common, and if you could leave emoji reactions next to those options, we can see which one is the most popular. And if you feel your answer isn't accurately reflected here, feel free to just throw it in. I'll give a couple of seconds for people to fill in their votes. There are some votes coming in. It's working, guys, it's working. Okay, so a lot of people think they're over 200K. A lot of people think they're under 10,000. They're very happy people. So maybe I'll put one more option here: not the slightest idea. I'm curious to know if anyone else may think that way. Y'all are very cost aware, that's great. There we go. Okay, great. So what I'm trying to get at here is that the scale of costs goes up very surprisingly for individual developers.
It was very common at LiveRamp to see projects that were spending $100,000 a month, that order of magnitude. And I think it's very hard as an individual developer to reckon with that amount, because it's probably a significant amount relative to your paycheck. So all that is to say that as developers, we need to start understanding those numbers, feeling comfortable with them, and designing our software with cloud costs in mind. With that in mind, what I want to do is think about what we all consider as developers when we architect a brand-new solution from scratch. Great software consists of a bunch of good decisions made along various dimensions of choice: a choice of tooling, a choice of performant algorithms, a choice of high availability, error handling and resilience, a choice of frameworks and toolkits. These are all the things we usually think of as developers, but what I'd like to propose is that cost simply becomes another dimension on that list. And remember what I said about trade-offs, decisions being made along an axis? You could decide to throw this many compute instances at your problem, but what if you used fewer instances and only took a 30% performance hit compared to the original number you were thinking of? If you think of cost, then you'll start making that trade-off and choosing thoughtfully where to land on that axis, as long as you include the axis of cost in your design principles. So let me throw out another saucy question. Let's say you're given, metaphorically, the keys to the cloud, and you're asked: go build a car. You could build anything you want; you have an open budget to do so. And you can either build a Lamborghini, which is beautiful and goes really, really fast.
And it's something you feel you can be proud of. Or you could build a Toyota Camry, which is cheap, easy to operate, has a low cost of repairs, and can be replicated infinitely. What would you feel more proud of building? And more on the nose, bringing the car and app analogy together: what's the point of a beautifully architected app, in all other ways, that simply costs too much to run effectively? Should you have built it at all if that was the cost of it? So hopefully by this point I've made an impression on you, the listener, that we have to pay attention to these costs. And I think that as engineers, we can be inclined to say, okay, this is a problem, we have to deal with it, and I'm going to do so much in-depth research to make sure that when we do execute, we build it right the very first time. In my experience, that almost never happens. You can try your darndest to look into the cloud crystal ball and figure out, okay, these are all the things that are going to affect my performance and costs, and prepare for them. But I'm here to say that you're probably just going to be wrong the first time you do it. So when you're in the cloud, think about iteration and embrace it. Put some forethought into cost, but sometimes you just have to play the game to figure out how everything works. And don't worry about making small mistakes as long as you're in a tight feedback loop. Just like Mike said, you have the Prius or Tesla view of it: you're able to see, okay, how am I doing this week or this month, and try to make improvements and take it seriously. And I think with that, you will be as healthy as you could reasonably be expected to be. So don't stress out too much about it. But yeah, sometimes you do have to learn the hard way, right?
And when you learn the hard way, it's really great to share all the hard lessons so others don't hit them. So here are some simplified examples of things we did at LiveRamp that burned us quite a bit. Little mistakes do add up, right? I don't know if any of you are familiar with the AWS CLI, but this is a command to make a data storage bucket. It looks like the most innocent thing in the world; how could this possibly be a cost concern? But, if you could click next, please, yes: if you think about putting a couple of gigabytes or terabytes every week or every month into that bucket and never removing any of the data, you start to see not only a mounting incremental cost, but a mounting year-to-date cost that starts rising almost like a parabola. And this is because, now that you're in the cloud, you're paying for Google, Amazon, or whomever to buy hard drives for you to stick data on. You have no limits, so as an engineer you're inclined to stop thinking about it outright and just say, okay, Google will handle that. And yes, they will handle it, and they will gladly write the invoice for your storage bill. In this case, all of the major cloud providers give you functionality to automatically expire data from a cloud bucket, and in order to keep costs predictable and consistent, we definitely had to make sure that we put those lifecycle policies on all of our buckets. Another way we burned ourselves, in excruciating detail, was experimentation. Basically, there were many cases where we told teams, hey, go experiment with running your app on the cloud. Use your best judgment to keep the experimentation limited, but do what you need to do to learn.
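As an aside on the lifecycle policies mentioned above: a minimal sketch of the configuration document that S3's `put-bucket-lifecycle-configuration` API expects, expiring objects after a number of days. The 90-day window, prefix, and rule name below are illustrative values, not anything LiveRamp actually used.

```python
def lifecycle_expiration_config(days, prefix=""):
    """Build the lifecycle configuration document that S3's
    put-bucket-lifecycle-configuration call expects, expiring
    objects under `prefix` after `days` days."""
    return {
        "Rules": [
            {
                "ID": f"expire-after-{days}-days",  # illustrative rule name
                "Status": "Enabled",
                "Filter": {"Prefix": prefix},
                "Expiration": {"Days": days},
            }
        ]
    }

# With boto3 you would apply it roughly like this (not run here;
# bucket name is a placeholder):
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="my-bucket",
#       LifecycleConfiguration=lifecycle_expiration_config(90))
```

The same expire-after-N-days idea exists on GCS and Azure Blob Storage under their own lifecycle-management APIs.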
And so as a developer, I say, okay, this afternoon I'm going to create a huge group of compute instances, like m95.enormous, right? And then, oops, I got paged, and I had to go deal with an issue in our existing production environment. And after that fire was put out, which took days, you go back to your secondary project, and wait: you totally forgot to clear out your experimental instances, and now you've just blown through your entire R&D budget for the whole month, perhaps in weeks or even days or even hours. It is possible to do that. So think harder about unused resources as a developer and keep an eye on your environment. And now I'd like to hand it to Patrick to talk about what we actually did to try and get in front of all of these emergent cost problems. Thank you, Josh. So this was actually covered by Mike earlier as well. The reason these things happened to us time and time again was that initially we didn't think about visibility of our costs. We didn't structure our migration around the idea that these things we're spending money on are going to add up. And if we don't give visibility to our developers, how can we blame them for spending too much? So what we needed to do initially, once these things started to happen, was find a way to give visibility into these costs to our teams. And this is where I started to really deeply partner with the engineering teams, and where I started to realize how good they were at saving me when things went wrong. I deeply appreciate all the help they gave me. What we did was I just asked the teams: what are you using today that we might be able to quickly whip something together with, to show everyone, in a way that works for our company and our teams, how things are being overspent?
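The forgotten-experiment failure mode described above is the kind of thing a periodic sweep can catch. A rough sketch of the logic, assuming you can list instances with a launch time and tags; the field names, tag convention, and three-day window are all illustrative, not a real cloud API or LiveRamp's actual tooling.

```python
from datetime import datetime, timedelta, timezone

def stale_experiments(instances, max_age_days=3, now=None):
    """Return instances tagged as experiments that have been
    running longer than max_age_days and should be reviewed."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [
        i for i in instances
        if i["tags"].get("purpose") == "experiment" and i["launch_time"] < cutoff
    ]

# Example: an experiment left running for a week gets flagged,
# while long-lived production instances are left alone.
fleet = [
    {"id": "i-123", "tags": {"purpose": "experiment"},
     "launch_time": datetime.now(timezone.utc) - timedelta(days=7)},
    {"id": "i-456", "tags": {"purpose": "production"},
     "launch_time": datetime.now(timezone.utc) - timedelta(days=30)},
]
print([i["id"] for i in stale_experiments(fleet)])  # → ['i-123']
```

Run on a schedule, a sweep like this turns "oops, I got paged and forgot" into a Slack ping a few days later instead of a blown monthly budget.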
And so the teams came back with a solution: take our cost data from BigQuery (this was in GCP), run it through Datadog, which has a forecasting function, and then alert teams in Slack at predefined thresholds of spend. This was prior to this existing within GCP; it's now more native functionality and you can get email alerts, but this was something the engineering teams really put their heads together on, and I think we solved it in the course of a day or two. The amount of visibility we gained, going from whoever had access to the billing console to every team being alerted whenever spend hit a certain level, in a couple of days, was just incredible. So this is an example of how we solved the problem, driven by the fact that we continually kept burning ourselves. We also needed at that point to come up with a better way to visualize all of our spend. We hadn't thought about this at the time, but we started to really get into what was available to us. Did we want to build something ourselves? Did we want to hire another company to do this for us? Did we want to find a platform that could manage this for us? Or did we want to use the native functionality? We ended up landing on a combination of these things. We started out using a free service that Google provided called Data Studio. We built some dashboards that looked very similar to the one on the screen. But what we found as we started to operate the environment was that we didn't have enough detail. So the finance team came to our rescue and provided some Tableau dashboards, but we found that no one knew how to use Tableau. It was great for the financials they needed to represent, but the engineering teams didn't want to spend a bunch of time in it, and nobody had Tableau access.
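The BigQuery-to-Datadog-to-Slack pipeline described above is mostly glue; the core alerting decision is simple. Here is a rough sketch of that decision, with a naive month-to-date run-rate projection standing in for Datadog's forecasting function; the budget and threshold numbers are made up for illustration.

```python
def projected_month_spend(daily_spend, days_in_month=30):
    """Project end-of-month spend from the month-to-date daily run rate."""
    if not daily_spend:
        return 0.0
    run_rate = sum(daily_spend) / len(daily_spend)
    return run_rate * days_in_month

def should_alert(daily_spend, monthly_budget, threshold=0.8):
    """Alert once projected spend crosses threshold * budget."""
    return projected_month_spend(daily_spend) >= threshold * monthly_budget

# Ten days at $500/day projects to $15,000 for the month,
# which trips an 80% alert on a $12,000 budget.
print(should_alert([500.0] * 10, monthly_budget=12_000))  # → True
```

In the real pipeline, `daily_spend` would come from the GCP billing export tables in BigQuery and a `True` result would post to the team's Slack channel.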
So we started partnering with the engineering teams again, saying, okay, we need a lot more information here. We need alerting that goes to people outside of the Slack channel, like executives who want to see this cost. We need to be able to look deeply into our GKE usage so that we can right-size our clusters. We need it to be easy to use, plus a whole raft of other ideas we had. So the obvious choice was to build our own. Once we started getting into that, we realized how complicated it was, how many people we would need, and that we would need a dedicated PM. And that's when we started to realize that there were entire companies built around this problem, with entire engineering teams and teams of PMs and sales folks. So we ended up going with a cost management platform. I don't want to say we were uninformed, but we definitely spent more time on the technical side than on focusing on our cost. I believe that had we investigated our cost a lot more at the beginning of our migration, we would probably have found out about some of this, we would have been a lot more prepared, and we would have given this visibility to our teams, so that when things like Josh talked about were happening, they would have been limited in scope. We would have started thinking about them in more detail because we would have had alerting, and we would have been able to do more to start optimizing our environment. And so I want to hand it back over to Josh really quickly to wrap up. Great, thanks, Patrick. So just to add on to the developer visibility and engagement piece: one thing we saw during our process was that once you gave the engineers visibility into how they were doing, they really got into it as an optimization problem. And that happened organically, right?
Once they were able to monitor their costs, just like they monitor their apps (requests, disk space, and whatnot), with very little further prompting there were many people on certain teams who got into it organically. So that's the second point of my conclusion here. I'd also like to revisit the thesis of my own section, which is simply to consider cost as an architectural consideration, and something you can be proud of getting right when you architect new software that's destined for the public cloud. And the new reality of cloud is that you'll be collaborating with finance more; embrace the relationship and see it as a partnership. They're not around to punish you, they just want to make sure that the blast radius of any given team's cloud spend is within some margin of error. And it's actually a pretty fun problem. And speaking of fun, remember to keep having fun, right? Because after all of this cautionary stuff: cloud makes everything move faster and evolve faster, and it's a great time to be a software developer. And so with that, thank you all very much. Let's continue the conversation on the Slack channel, and hope to see you there. And thank you so much. I'm going to be covering a few different topics here: what Pearson has gone through, moving from migration to operation, and how we implemented some FinOps and governance to help us get through that with all of our projects. So just to start with an introduction: my name is Ashley Hromatko. I've been with Pearson about six years and in the assessment and testing industry for 10 years. I started as a technical project manager, and now I'm leading the charge to manage a new global FinOps team at Pearson. For those who don't know who Pearson is, we've traditionally been known as a textbook company, but we're so much more than that.
We're leading the industry in online assessments, including certification, and online curriculum. Going back to what Mike was talking about, you can imagine how COVID has impacted us in both of those sectors in the last couple of months. So, going through our story: we had rapid growth over four years. We doubled our engineering team. We went from six AWS accounts to over 125 accounts, and got into the GCP and Azure spaces. We migrated over 106 applications, doing lift-and-shifts, tech upgrades, and even building brand-new applications straight in the cloud. We decommissioned four data centers, and we ended up spending three times the amount in AWS. So very, very quick growth in a short period. Then about six months ago, we faced another challenge: we merged our entire cloud management under a centralized team. Instead of having different siloed IT groups, it became one cloud management organization. And then we had the fiscal challenge of managing two master payers. We had many applications, different business owners, more engineering teams, regionally based in different areas, with different levels of cloud maturity. Our cloud financial scope was now almost three times more than where we were even six months ago. And it was a challenge to close about 12 data centers on the committed timeline. So: rapid growth, plus centralizing our siloed IT groups. That raised quite a few issues. One, we couldn't get a handle on how much staff we had, because there was so much rapid growth. We found it really hard to break down our bill for products that shared accounts. We didn't really know how to prioritize work requests as they came in, what was revenue generating, what was more important to work on next. We couldn't predict cloud spend even six months out, and that included predicting what it would cost to do the data center migrations.
We would build something only to find out it didn't actually have any funding behind it. And many of our stakeholders would experience sticker shock at the end of the month: they were now being charged for their cloud usage, and they were used to the CIO paying for their entire bill. They weren't used to that. And then overall, we were just lacking a cloud-conscious culture, spending money without thinking about what the cost would be. We knew we needed to handle this. [Brief audio difficulties; Ashley recaps.] So we had a lot of massive growth at one time, and we had also moved from siloed IT groups to a centralized organization. That led to a lot of issues: dealing with staffing, prioritization, not being able to predict our spend anymore. Our stakeholders were seeing sticker shock; they hadn't paid for anything before, and now they were paying for all of their own usage. And in general, we just weren't a very cloud-conscious culture at all. So that led us to take some actions. We formed a FinOps team, a global team. We created a gated onboarding process for all new cloud applications, and we created an inclusive cloud governance board that reviews and passes all policies.
So a common question I get asked is: what does your FinOps team look like? We have a group of nine people. We have what we call FinOps practitioners; they're very focused on the education side, and they're billing specialists doing our RI management, savings plans, marketplace, and billing management. We also have data analysts. They're looking at our cloud trends, looking at what we forecasted versus actuals, focusing on anomaly detection, and writing business optimization cases to present to the SRE or development teams. We also have two automation engineers focusing on internal automation: how do we make our FinOps processes better and scalable? And then we have ops and cost management automation, services that we provide out for everybody to use. Maybe a team wants to clean up their sandbox every three weeks, so we help build automation they can leverage. We also have a BI developer who focuses on a lot of our Tableau reporting: internal KPIs we want to track, a scorecard measuring those KPIs at an application level, and a lot of the executive reporting that we do. Another question I get is: where does our FinOps team sit? We are embedded in our cloud and hosting organization, which is under the CIO, but we work directly with Pearson Finance Services as well. I'm going to do a mic check: is everyone still hearing me okay? Yes, sounding a lot better. Thank you. All right, I wanted to check. So, going into the gated process that we developed. We realized that we can't just give anybody and everybody an AWS account, nor do we want them putting things on their corporate credit cards. So we really wanted to create this gated process. And I will say, at first, I think a lot of our engineers were a little bit hesitant. Any time you use the words process and gate, there's a little bit of fear, right?
So we really engaged them and asked, how do we make your lives easier? And we learned things: scope creep was a big one. Being told to build something when there wasn't money for it was a big one. Being asked to build something in two weeks when they already had four other projects in flight wasn't working for them. So we worked a lot with the engineering teams to build this out. And this is what it looks like. We do an engagement phase. We define what we're going to build. We get everybody to commit to building it on that timeline. Then they build it, we launch it and go operational, and we provide operational support. Just to give you a sense of what that looks like: all product owners fill out a form; this is how they engage our team. They get assigned a technical project manager. We collect information like the project description, security requirements, and any timelines it's due on. Then we do an engagement period. You can imagine this as a group of people now virtually sitting together: an engineering lead, security lead, governance lead, FinOps lead. We sit down and talk about the project: what is the objective? We figure out what this thing is going to cost. What is the schedule? What needs to be built and signed off, and by when? What are some of our constraints? And we document all of that in a statement of work. Then we come back and do an internal review. Our hosting team decides, is this something we even think should move to the cloud? I think longer term we're still heavily AWS, but we'll also consider which cloud provider is the best fit for this thing. We'll also push back, saying we don't want to move this until it's off Oracle, or we don't want to move this until they've refactored the database. So we do have a little wiggle room to say no at this point.
During this internal review meeting, we also create a solutions diagram. We'll put in things like: we recommend serverless; we recommend that your lower environments run on Spot. We document assumptions and risks; this is really useful to help protect the engineers who are going to work on this. And then our FinOps team uses the diagram to actually calculate what the infrastructure and labor cost is going to be. After we have this meeting, we get back together and we all decide whether to commit to this thing. Security also presents any risks they've flagged, so if the product owner still decides to go forward, at least that's documented. FinOps presents the cost-saving opportunities in infrastructure and labor; usually we give them several options and they're able to choose the one they want. The goal of this meeting, which happens in only a few days, is that we've committed to a timeline and committed to resources. That's been another benefit for our engineers, because we've actually been able to go ahead and hire more resources for the projects slotted six months, nine months, a year from now: we know they're coming and we know we need those resources. We've committed to responsibilities. If a product owner is coming in, we let them know that they own security, they own costs, they own tagging compliance, and we let them know what we're going to provide for them: GuardDuty, Config rules. And then both teams are aware of what the estimate is; they've both talked about it and both committed to the targets they're going to stay on. Once everybody has agreed, we move into providing them access. Right now we give about three different accounts per product, so they get disaster recovery, non-production, and prod. We also provide the development team a sandbox account.
So going back to what Josh said earlier, we don't want to keep these engineers from building and being creative. So they do have a sandbox for those activities, but it has a limit of $500 that can be spent in it. So they're going to need to shut things down on the weekend; they can't leave their sandbox running all the time, right? We still want to inspire that creativity, but set a dollar limit on it as well. And then our FinOps team, once those accounts are created, automatically creates budget alerts for those accounts that alert the engineering team that's building it and the product owner that owns it. All of our accounts get linked under our payers, so they're all getting our EDP discount applied, they're getting all of our Config rules, and we as a company have more insight into what we're spending. This is just an example of the budget alert; this is very native AWS functionality. If you can imagine, we now have over 500 accounts, so we've had to develop some processes to automate creating these budget alerts. We also allow teams to re-forecast every quarter, so we've had to figure out a method to go and change these budget alerts every quarter if they've changed their threshold and gotten sign-off. That goes back to what I was saying about our FinOps team having some automation built into it. All right, so during the pre-operational stage, what our FinOps team does is reach out and do a one-hour meeting with the team. This includes explaining how to contact us and how to use our services. We'll do some training on how to use the cloud financial management tool. We'll go over workflows for how to make RI purchases, how to forecast, and how end-to-end chargeback is going to work for their accounts, so there are no surprises before they get started. We also host something called FinOps Friday Learning Sessions.
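On automating those budget alerts across hundreds of accounts: it mostly comes down to generating the same AWS Budgets request per account. A hedged sketch of the payload shape that boto3's `budgets.create_budget` call takes; the account IDs, $500 limit, email address, and 80% threshold below are placeholders, not Pearson's actual configuration.

```python
def budget_request(account_id, limit_usd, alert_email, threshold_pct=80):
    """Build the arguments for AWS Budgets' create_budget call:
    a monthly cost budget plus an actual-spend notification."""
    return {
        "AccountId": account_id,
        "Budget": {
            "BudgetName": f"monthly-cost-{account_id}",  # illustrative name
            "BudgetLimit": {"Amount": str(limit_usd), "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
        "NotificationsWithSubscribers": [
            {
                "Notification": {
                    "NotificationType": "ACTUAL",
                    "ComparisonOperator": "GREATER_THAN",
                    "Threshold": threshold_pct,
                    "ThresholdType": "PERCENTAGE",
                },
                "Subscribers": [
                    {"SubscriptionType": "EMAIL", "Address": alert_email}
                ],
            }
        ],
    }

# Fan the same template out over every account (IDs are placeholders):
requests = [budget_request(acct, 500, "finops@example.com")
            for acct in ["111111111111", "222222222222"]]
```

Quarterly re-forecasts then become a matter of regenerating these payloads with new limits and calling the corresponding update API.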
These are typically one-hour sessions where we pick a specific topic; maybe we're going to talk about the different storage tiers of S3. These are hosted either by FinOps or by a cloud vendor, and in these sessions we train the engineers on how to use that service most cost-effectively. We also do bi-weekly training on Cloudability, with Q&A sessions. We also keep up their budget alerts, like I mentioned, and route any monitoring alerts to them. If any of our budget alerts go over a certain dollar amount, or go off a certain number of times, like three months in a row, then we'll get together with that team and do a cost cadence with them to get them back on track. So that's all the added service our FinOps team provides to help both the engineering and the product teams. While the engineers are building, we give them space, right? We want them to be able to build the infrastructure within the timelines they were given. We're there to advise and to help them understand cost as they need it. Maybe somebody says, actually, we decided to switch from this RDS to serverless Aurora, can you tell us how that changes the forecast? Sometimes they'll say, we've decided to run the data migration service for 14 days, what's that going to cost? That's usually something we'd otherwise get a budget alert for, and then we can tell them, hey, did you know you did this? Perf environments are a big one there, right? So during that build phase, it's very much letting them do what they need to do, but we're still here to advise and to flag any anomalies that may happen. Finally, when we go post-operational, FinOps still stays connected to the product. We provide monthly costing reports to show actuals versus what was forecasted. We track any accounts that are going over budget, and we track why they went over budget.
We're often that buffer to help defend them to finance, right? We can explain to finance what's going on. We also track tagging compliance auditing, mostly because a lot of our tagging relates to how we do chargeback as well. We do quarterly forecasts: we have over a thousand products, and we forecast them all. We check in with the managers to see if anything is going to change that would make things significantly different from the forecast, we get them to agree to the forecast, and then we help submit it on their behalf to finance. We make all RI and savings plan purchases, and we also have FinOps analysts assigned to each business tower. A business tower for us is anybody spending more than 500K per month in cloud spend, and we provide them cost optimization business cases. And that's what I want to show you next. Here's an example of a business case that we provide. The analyst gathers all this information from various tools. We meet with the engineering and product teams and sit down and look at the recommendations. We allow them the opportunity to decline a recommendation or opt into it. If they opt in, we typically ask: when are you going to be able to do this? Let's get it on your backlog, and then we'll follow up with them on the agreed-upon timeline. So we're not there to pester them about resizing every little thing, but we try to put enough of a case together to say: here are 12 things you can do in this account to reduce cost, go ahead and look at them, let us know what you think you can take action on, and then we'll come back, show you the difference, and help you write a success story. So here's a good example. This team, we recommended that they move to Spot. They'd had negative experiences with it.
This is one where I like to call up our AWS friends and say, hey, can you have a one-on-one with this team about Spot? I wanna see what's going on there that gave them negative experiences with it. The other one is trying to get them to reduce cost on the weekends. So they accepted and completed it, and you can see here that this is them scaling down on the weekends. Now we'll probably come back and say, you know, we could probably do that a little bit more, right? We tried that and were able to scale down; can we scale down even more? So this is just a good example of how we use our business cases, but at the end of the day, it's up to the engineering teams and the product teams to accept or reject those recommendations, because they're ultimately doing the work and supporting the work.

And then lastly, I mentioned governance. So we've developed a governance board. It's been about a year now. We host bi-weekly calls, and anybody in the company can bring issues that they believe require further discussion. The outcome is usually a rejected policy, or we create a formal policy, or maybe there's some technical implementation that has to be done. There are 12 voting members on it, ranging from our FinOps team to our SRE leads, the CISO, and QA teams. So just a couple of examples of what might come to our governance board. FinOps brought to the board: we don't want you to repurpose AWS accounts. It's very confusing on the financial end when you shut an account down and then spin it back up, and there are also costs associated with it. So that was something we brought forward, and they executed it as a formal policy that we don't do that anymore.

Another one is a little bit more technical. We felt that elastic IPs that were unattached greater than X days should be terminated. This was interesting. We got together with the SRE teams. We actually went in at 14 days.
The SRE teams were like, no, it should be five days. And so we were able to get this implemented as a policy with actually a shorter duration, and then go ahead and build out the technical implementation, which means we're putting out a Cloud Custodian policy to do this. So I feel like as you mature as an organization, having this type of governance that's very inclusive is really important.

And that goes into my conclusion, which is, whether you're a large corporation or a small one, I think having some processes and procedures to get started in the cloud is very healthy. It allows you to educate people, it gives people a gate to go through, and it gives people others to consult with that they may not otherwise have. I also think you should try to utilize your FinOps teams: they can help do the data gathering, they can do cost analysis, they can help drive conversations. The same way that as an engineer you set up a PagerDuty alert so you can go to a movie on Saturday and not think about whether your application's going to alert, our FinOps team is here to make sure that if anomaly detection goes off, you've got us to help do the research for you. And then I also think another key to a more mature cloud estate is to make sure you have collective governance that has people from all over the organization as part of it, and to be open to that. With that, I'm going to pass it back over to you, J.R., for some Q&A.

Excellent. Thank you, Ashley. That was really good content. Appreciate you going back and covering some of those bits. I particularly like the process you put in place for a lot of this; I think many are thinking about this culturally but haven't gotten to that level yet. So thanks for sharing that. We have about 15 minutes left, and we've got a number of questions we want to go through in a panel discussion format here.
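The unattached-elastic-IP rule from Ashley's governance example (release addresses left unattached for more than five days, enforced in practice with a Cloud Custodian policy) boils down to a simple predicate. This is an illustrative sketch only; the data shape and function name are assumptions, not the actual policy:

```python
from datetime import datetime, timedelta

# Illustrative version of the governance policy described above: elastic IPs
# unattached for more than five days get flagged for release. In practice this
# is enforced with a Cloud Custodian policy; this sketch just shows the rule.

UNATTACHED_CUTOFF = timedelta(days=5)

def eips_to_release(addresses, now=None):
    """addresses: list of dicts like
    {"allocation_id": "eipalloc-123", "attached": False,
     "unattached_since": datetime(...)}."""
    now = now or datetime.utcnow()
    return [
        addr["allocation_id"]
        for addr in addresses
        if not addr["attached"]
        and now - addr["unattached_since"] > UNATTACHED_CUTOFF
    ]
```

The interesting part of the story is not the code but the governance loop: the FinOps side proposed 14 days, the SREs tightened it to five, and only then was it automated.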
And so for the first one, if we can bring everybody up, that'd be great. Perfect, thank you. I think I'm going to go to Mike Fuller at Atlassian. Atlassian is obviously kind of at the center of a lot of engineers' processes, right, with Jira and those types of products. And one of the questions that comes up a lot is how to actually work this type of FinOps work into sprints, right? How to get it set up so that people are either planning for it or they're using the backlog and grabbing pieces out. In your own company, how have you gotten engineering teams to make this a regular part of the workflow, and gotten it flowing in an iterative fashion?

Yeah, so for us it was having management buy-in to the need for FinOps, and showing teams that it's important to have cost focus in amongst all of the other things that they've got to balance. And so we have teams that usually try to run sort of balanced sprints where they're introducing a certain number of points: some of those points will be security-related tickets, some of those will be cost-related tickets. So there's that focus of a few points in sprints. The other part comes back to the content I was talking about with being metric-driven. We're looking for teams that have the biggest opportunities with the lowest effort to reduce spend on wastage. We can identify those teams and get a conversation happening with them. We can ask them to roadmap it into a near-term future sprint, and they'll get those extra few points where they might spend a few hours during the sprint to focus on cleaning up some of that wastage. And then it's really key for us to show the impact that that has had, so they feel like it's worth putting that time into their sprints.

Excellent. Anyone else want to join in on that one?
Oh, Sasha, looks like you're muted. There we go. Okay.

Mike provided a pretty full answer. The thing I would add, at a 10,000-foot view, is to present and frame the FinOps work and the cost optimization work as simply an extension of a developer's thinking and workload about their product, right? Cost is simply another dimension of an application's efficiency. And if you can do that, then everyone can begin to slot in how to frame it, how to include it in the regular work, how to include it in the Jira tickets. It just becomes an extension, the same way DevOps was once a bolted-on idea, and now FinOps is as well.

Yeah, that extension of the work I think is spot on. I've heard it described as getting them to be good citizens, right? Part of being a good engineering citizen is starting to think of this as a dimension of the work that they're doing. So on that topic, FinOps isn't really something that a single person does, right? It's really a cultural change where you want to get engineers thinking about it alongside their uptime and their app sec and all these areas. For the LiveRamp folks, who I think started your practice in the last year or so, what were the initial things that you saw were most effective at getting that culture picked up and adopted? How did you bootstrap it in your organization?

Yeah, so maybe I can start to tackle that question, and then Josh and Patrick can jump in. Our very first step was just figuring out that we needed it and that it existed. We had little pieces of this understanding, but it was really when we attended the session with Mike and heard about FinOps that we thought, this is definitely something we need to do. And then I came back from that session, put together a presentation for the execs, and had them sign off on it.
And of course that started with getting the engineering VP bought in as well. And then once we had that, it was a matter of working with Josh and with Patrick to figure out how we should approach the engineers themselves and how best to talk about it. So on that note, let me pass the baton to Patrick to talk about what worked from his perspective.

Sure, yeah. So I was more focused on the product side, and I'll pick this up because I just heard Mike talking about roadmapping. This became a really hot topic for us, because when you present to an entire team of product managers that the roadmap they've been waiting 12 months to build, because you used their engineering resources to migrate, now needs to be diverted again to save cost, it was not well received. And so there were a lot of conversations that needed to happen around why this was valuable and how, in the end, it would actually be helpful to them as product managers: it would give them advantages in how their app functioned, and get them to a point where it was more cost-efficient so they wouldn't have to worry. They liked that tack. And in terms of working with the engineers, I'll let Josh speak to how that became more useful to him.

Yeah, so one thing that jumped into my mind while hearing you all talk about this was a similar concept that's used in DevOps, which is blameless postmortems: when something goes wrong, you try to find the best solution to the problem collaboratively with all of your stakeholders, and describe exactly what happened and how we can try to prevent it from happening again. I think that blameless attitude should also carry over when you implement FinOps; you can do the same thing, right?
I think when bad things happen, try not to make it feel like it's somebody's fault, and try to include them in the solution. That's the attitude we took. We took some hits at LiveRamp with cost, and we realized that we couldn't just drive it down on our own. We would have to have the cooperation of the developers who knew about their applications and how they ran in order to get the best solution, and compromise across all the different knobs we could turn to improve cost. And then beyond that, like I said before in my segment, once people could see the data and the problem statement, and a statement of, here's what doing the right thing looks like, can you help us get there? Everyone on our engineering team was really happy to move the ball forward towards that right thing.

Excellent. I was gonna pass one to Ashley, but she's having an audio issue, so sorry to leave you out of the conversation, Ashley. I'll pass this one to whoever wants to pick it up. We've talked a lot about the engineering teams specifically, but a big part of this, right, is getting those teams working with the other teams, and I've had so many conversations with finance folks who are needing to wrap their heads around cloud concepts and reporting. So what things did you do in your organization to bridge the gap into their reporting processes, and what were the initial early wins or challenges you ran into? Interfacing engineering with finance, if you will.

Let me take that first and provide a slightly saucy take on it. When we were trying to align everyone together, we put a lot of thought into where the FinOps practice should sit. Should it be in finance, or should it be in engineering? We thought that part of what was required to solve this problem was perspective and context, and being as close to the problem as possible.
And so we ended up putting it in engineering. Now, having spoken to lots of the folks in the FinOps Foundation and at other companies, we have learned that people take different approaches on this one, and I can see where putting it in finance makes perfect sense if finance is the central decision maker, they're quite interested, and they have an idea of what they want their reports to look like. In the large consumer internet companies that we've talked to, it's usually part of engineering, but I'm actually quite curious what everyone's experience has been, and whether anyone is a strong advocate of one approach or the other.

I'm hoping you can jump in on this one. Yeah, we can. Yeah, we can hear you.

Yeah, so with our finance org, we had to do kind of a FinOps-finance summit, to be honest. We had to level-set terminology. I didn't understand a general ledger any more or less than they understood an RI purchase, right? So we came up with an agenda together of the things we felt they should know and the things they thought we should know, and we really had to get terminology right. One common thing was they'd say, this is how much you spent in January, and then we'd later learn that actually, because they use accrual methodology, some of that didn't show until February, so we were talking about numbers in different terms and different periods, right? So terminology was number one for getting on the same page with the finance team. We're still probably like a middle person between our finance and engineers. They somewhat appreciate that, right? Things come through the FinOps team, and then we'll work with the finance team. I hope over time we can close that gap. But I'd actually say some of the teams that we realized needed to work together more than anybody were our own engineering teams.
We were so siloed that we had people who were really good at Spot over here and really good at using Fargate over there, and they were just sitting there, experts in their areas, but not cross-talking to each other. So that's another area where we got to come in and say, you wanna do Spot and you're confused? I've got this team that's at 100% Spot, let's have you two meet, and then we just got out of the way and let them talk. I think that's powerful in itself too.

I always wonder about that. I've met these, I almost call them unicorn engineers, over the years who really get into the cost stuff, right? Josh, I hear you talking about it, and Mike, and you get it. But that seems to be the exception, right? So I'm wondering, is that a personality thing, or were you exposed to something others weren't? And could we figure out what that magic bullet is? Because I think that's what everybody's trying to get to. I've got 500 engineers: what is the motivational piece? We can say visibility, we can say giving them the metrics, but can you get everybody into it? Do you need a certain type of more business-minded engineer? What's the right profile? What do you think, Mike? Or does everybody even need to get there?

Yeah, I think that was probably my point: maybe you just need certain people. At Atlassian we talk about the idea of security champions, where we try to get people within teams around the org to champion security amongst their colleagues. And we're trying to adopt that model for FinOps within Atlassian, where we get these FinOps champions, one or two people amongst teams all around the whole organization, who are on the lookout for the things that are causing pain for spend, are aware of their budgets, and are able to help their colleagues keep in line.
And even on my team at Atlassian, we're about six engineers, and there's only a couple of us that actually get into the FinOps stuff; the others would rather jump off a cliff than do FinOps. So it's about balancing that out and finding the right people within the org. And I think trying to train thousands of people on FinOps might be impossible, but training individuals spread out through the org is probably much more achievable.

Yeah, and if I could add on to that: our experience at LiveRamp is that some engineers had a fixed idea of what they felt they had signed up to do, and finance was not one of those things. Exactly, right? And I empathize with that feeling, especially as an engineer. For better or for worse, many engineers tend to see the world in blacks and whites. It's like, okay, I'm here to be an ML engineer, and if I have to do anything besides ML, it's a waste of my time. I don't think that attitude can be resolved overnight, but it is something I empathize with. And I think that keeping a steady drumbeat of framing it as part of your optimization problem, and of course optimization tends to be part of every engineer's job at some level, that's the sort of wedge you can keep making bigger and bigger to get people interested in taking care of that mindfulness when building their own apps.

I think that's spot on. It's that wedge; it's not a one-time process, and it's not gonna happen very quickly. You gotta start slowly, iteratively, over time, and really scale it up. And with that, we are out of time. I did want to say, if anybody wants to continue this conversation, the FinOps Foundation is there with a bunch of folks like the ones on this call, in Slack channels and meetings. Check it out: finops.org.
Not FinOS, but finops.org, to avoid ambiguity. So thank you all. The content was phenomenal. I appreciate all the presenters taking the time to work through this; it was a fun experience. Hope we all get to have beers together at some point, and everybody have a great weekend. Thank you. Thanks so much. Thank you all for attending, and thanks everyone on the call. Bye. Thanks. Bye-bye.