launch, and hopefully you are still able to follow what I'm saying today. So, cost is a cornerstone of pretty much most of the decisions we make in our daily life. We try to optimize or maximize the value of pretty much whatever we are trying to get, even in our personal lives. It's pretty much the same for the cloud, and that's what I'm going to talk about today: how you can maximize your business value through cost optimization, the AWS Well-Architected way. My name is Mohammed Wadi and I work as a solutions architect in the AWS solutions architecture team, based out of Amsterdam, the Netherlands, mainly serving the enterprise and retail customers within the Benelux region, which is Belgium, the Netherlands and Luxembourg. So before getting started, I'm curious if any of you is already familiar with the AWS Well-Architected Framework. All right, that's great to see. And have any of you actually used the Well-Architected Framework to run architecture assessments for your workloads? That's still good to see. So for those of you who are not familiar, or have not run any architecture assessments, I'm going to quickly cover that today as well. Before I get started, I have a confession to make. This session is not really about the tips and tricks you can apply in your environment to save some cost on services. Instead, it's more about the culture and the processes you need to embrace and incorporate into your organization so you can eventually be a cost-efficient organization. This is our agenda for today. I'm going to first introduce the Well-Architected Framework: what it is and what it really means to be well architected. Then I'm going to dig deep into the cost optimization pillar, which is one of the pillars of the Well-Architected Framework.
Afterwards, I will dive deep into the design principles of the cost optimization pillar and how we can put them into practice in order to eventually be a cost-efficient organization. And last but not least, we're going to have some time for Q&A, so please hold your questions till the end, and in case you didn't manage to ask your question, or for whatever reason we run out of time, please feel free to reach out to me right after the session; I will be in the building till the end of the day. So with that, what does it really mean to be well architected? Over the years we have been involved in tens of thousands of architecture reviews with our customers, and even internally, and throughout that time we managed to see some successful patterns: what really works well and what doesn't work out. Based on that, we extracted a set of best practices that covers different aspects and works for most of the workloads we have worked with. These best practices are grouped into pillars that together form the Well-Architected Framework. The Well-Architected Framework is not meant to be an audit process; it is meant to be a review process, so it should help you regularly evaluate your workloads against the best practices. It's also not a one-time thing: you might start by evaluating your workload and making sure it's aligned with what we have, but then maybe after six months you need to revisit it, because we keep launching new services and new features which, if you incorporate them into your design, might help you be more optimized over time. The Well-Architected Framework is built around six pillars: security, operational excellence, performance efficiency, sustainability, reliability, and the pillar we're going to talk about today, which is cost optimization. The cost optimization pillar has five design principles.
If you follow these design principles and manage to implement their best practices, you can eventually be cost optimized. The first one is to implement cloud financial management. To achieve financial success and accelerate business value realization in the cloud, you need to invest in cloud financial management. It's pretty much the same as any team you have in place, such as security or operations: over time you have dedicated time, energy and resources to make these teams mature, and nowadays we can see how mature our security and operations teams are. That's mainly because over time we have invested heavily in them. You need to do pretty much the same for cloud financial management. This new domain of technology and usage management needs your time and effort, through knowledge-building programs and resources, so you can eventually become a cost-efficient organization. The second principle is to adopt a consumption model, and what that simply means is to pay only for the computing resources you need, and increase or decrease your usage based on your business demands. A clear and great example is the different test environments we pretty much all have. You really don't need to run all of these environments all the time, right? We work eight hours a day, 40 hours per week, so why run them for 168 hours per week, which is the whole week? Make sure you are shutting down the resources you don't need when you don't need them, and that could end up with savings of up to 75 percent. Number three is to measure overall efficiency, and here you need to measure the business outputs and the associated delivery costs. Think about it as: I'm getting this kind of output from the workload on a business level; however, I invested this much time, effort, energy and money to get it. Is it really worth it?
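That consumption-model example is easy to quantify. A minimal sketch of the arithmetic behind the "up to 75 percent" figure, assuming a test environment that is only needed during working hours:

```python
# Sanity check of the savings claim: compare the hours a test
# environment actually needs against an always-on week.
working_hours_per_week = 8 * 5   # 8 hours a day, 5 days a week
total_hours_per_week = 24 * 7    # always-on: 168 hours per week

savings = 1 - working_hours_per_week / total_hours_per_week
print(f"Shutting down outside working hours saves about {savings:.0%}")
```

With on-demand pricing, where you pay per hour of usage, this comes out to roughly 76 percent, which is where the "up to 75 percent" figure comes from.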
This kind of data is quite instrumental in helping you understand the gains you can make, maybe from increasing the output or the functionality while in the meantime reducing cost. Number four: stop spending money on undifferentiated heavy lifting. The cloud nowadays takes care of the heavy lifting of data center operations, such as racking, stacking and powering servers, and it also removes the operational burden of managing operating systems and even some applications when you use AWS managed services. This allows you to focus more on business projects and on your customers, rather than on the IT infrastructure. And last but not least, analyze and attribute expenditure. The cloud has made it way, way easier nowadays to attribute the right cost and usage to the right workloads, which then allows transparent attribution to the right revenue streams and workload owners. This helps to measure the return on investment, which is very important, and can help the workload owners optimize their resources and reduce costs in the long term. So how can we actually run this kind of well-architected assessment? How can I do it? It's pretty straightforward and very easy. For that we have the AWS Well-Architected Tool. It's very easy to use. All you need to do is navigate to the AWS console, search for "AWS Well-Architected Tool" in the search bar, and you will be taken to the AWS Well-Architected Tool console. From there you can start a well-architected review for your own workload. You get a bunch of questions, and these questions have a set of best practices that you need to comply with in order to say that you are well architected. However, not all of these questions or best practices are necessarily relevant to what you're looking for.
And if that's the case, you can still exclude some best practices and some questions if they are not really relevant for you. When you're done with the set of questions, you will be assessed against them. If you're looking for further information about the best practices or the questions, there is normally a hyperlink next to every single question and best practice; make sure to click on it to gain further information. Once you run the well-architected assessment using the tool, this is exactly what you get. What I have done here is go ahead and run an assessment, and this is the result I'm getting just for cost optimization. I got 10 high risks, because at this stage I apply none of the best practices for cost optimization. So what do I need to do? First, you need to address these 10 high risks. You might also be getting medium risks, which it is a best practice to address, but for the sake of time in this session we won't really be talking about medium risks at all. What we're going to discuss from now onwards are the high risks; in particular, we're going to discuss five high risks, for the sake of the session's time. As you see on the left side here, these are 10 red circles. For every high risk we address, the red circle will turn from red to green. As you see here, these are the best practices, and on top of them is a question. The ones that are grayed out down here are the medium risks, which we're not going to address today. So let's start off with the cloud financial management part, where first you need to establish a cost optimization function. What that simply means is that you need a cost optimization team. You can do that by first identifying some key stakeholders from finance as well as technology. These two departments in particular need to work together in order to eventually be a cost-optimized organization.
To address this best practice, all you need to do is identify the key stakeholders, align with them that you're starting this new team, and then create an email distribution list that includes them, so you are able to communicate with them further. And with that, you can just check the very first best practice here. Next, you need to establish cloud budgets and forecasts. And I know the cloud is kind of different, because on premises we used to put huge investments up front into gear and equipment in order to eventually meet the peak level, which we might reach at some point, or we might not. That's a huge investment, and that's the perfect thing about the cloud: you really don't have to do that. However, it's tricky, because you don't know how much money you're going to pay eventually. That's why you should use a mechanism for forecasting how much you're going to pay for the resources you provision on AWS. For that, there are a couple of ways you can follow. One is to follow the trends, where you can use a tool that I'm going to talk about shortly called AWS Cost Explorer. It can use machine learning behind the scenes and give you estimations of how much the upcoming six months are going to cost you. This kind of information is extracted from your trends while working on AWS, the patterns of spend that you are following. However, this might not be sufficient, because on the other side you still have a pipeline of projects that you need to work on. Sometimes these projects are small, while sometimes they are big, which also means a big investment that you need to put in place. So you need to ask yourself: which way would work for me? Normally, for any customer at the beginning of their cost optimization journey, I highly advise merging both approaches in order to have a more accurate estimation.
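Merging the two approaches can be as simple as adding the expected cost of the project pipeline on top of the trend-based forecast. A minimal sketch, where the function name, project names and numbers are illustrative only:

```python
def combined_forecast(trend_forecast_usd, planned_projects):
    """Trend-based forecast plus the estimated cost of planned projects."""
    return trend_forecast_usd + sum(planned_projects.values())

# The trend number would normally come from the Cost Explorer API,
# for example (requires boto3 and AWS credentials):
#   ce = boto3.client("ce")
#   resp = ce.get_cost_forecast(
#       TimePeriod={"Start": "2024-07-01", "End": "2024-12-31"},
#       Metric="UNBLENDED_COST", Granularity="MONTHLY")
#   trend = float(resp["Total"]["Amount"])
trend = 12_000.0                                           # placeholder, USD
pipeline = {"data-lake-poc": 3_000.0, "mobile-backend": 8_500.0}
print(f"Estimated spend: ${combined_forecast(trend, pipeline):,.0f}")
```

The point is only that the pipeline numbers live outside Cost Explorer, so someone has to bring them into the estimate explicitly.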
Otherwise, if you just follow the trend, you might be surprised with a big bill at some point. That's why I really would like to introduce my second favorite cost optimization tool, AWS Cost Explorer. AWS Cost Explorer is a free-of-charge service that you can use whenever you provision your AWS account. Back in the day, when you provisioned an account, you needed to navigate to the AWS Cost Explorer console and enable it yourself. You don't need to do that anymore; nowadays it gets enabled automatically. However, it still takes 24 hours to start populating cost data for you. And once it starts to populate costs, you can slice and dice those costs by service, by region, by account. You can filter however you'd like and gain more insight into what you have in place. That can give you some decent insight into how to handle the forecast. But in the meantime, what can I share with the cost optimization team? Can I just ask them to log in to the AWS console and open AWS Cost Explorer? Well, that could be a way, but it's going to be tricky, especially as we have people from finance, and we need a more efficient way to communicate with them. That's why you should also be using the AWS Cost and Usage Report. The AWS Cost and Usage Report gives you granular insight into the cost and usage of your resources, so you can use it and even import it for visualization by other tools such as QuickSight or Redshift. What I have done here, as you see, is use QuickSight to analyze the data within the CSV file, where I'm getting, for example, the monthly cost. I'm filtering based on how much the services are consuming and to what extent; I'm filtering based on the instance types. So you can utilize such tools to gain a decent visualization of what you have in place, and that could be a great report to present to finance stakeholders.
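Even without QuickSight, a few lines of scripting can slice a Cost and Usage Report export. A minimal sketch that aggregates unblended cost per service with the standard library; the column names follow the CUR naming convention, and the sample rows are made up for illustration:

```python
import csv
import io
from collections import defaultdict

# A tiny stand-in for a downloaded Cost and Usage Report CSV.
sample_cur = io.StringIO(
    "product/ProductName,lineItem/UnblendedCost\n"
    "Amazon Elastic Compute Cloud,120.50\n"
    "Amazon Simple Storage Service,14.20\n"
    "Amazon Elastic Compute Cloud,80.00\n"
)

# Aggregate cost per service, then print the most expensive first.
cost_per_service = defaultdict(float)
for row in csv.DictReader(sample_cur):
    cost_per_service[row["product/ProductName"]] += float(row["lineItem/UnblendedCost"])

for service, cost in sorted(cost_per_service.items(), key=lambda kv: -kv[1]):
    print(f"{service}: ${cost:.2f}")
```

The same grouping idea works for any CUR dimension, such as account, region, or a cost allocation tag column.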
So in order to enable the AWS Cost and Usage Report, you need to navigate to the AWS Cost Explorer console, click on Create Report, and from there specify the S3 bucket to which the report will be exported. It takes a few hours to start publishing to this S3 bucket, and then you can download it, maybe visualize it, maybe use it right away in your meeting with the cost optimization team. And now you have a cost optimization team and also a Cost and Usage Report. Next, you need to establish a partnership between finance and technology by setting up a cadence for this team: a cadence where they will regularly meet, could be bi-weekly or monthly, depending on the size of your organization and how important these matters are for you at this stage. Then you can introduce the Cost and Usage Report you just got, either the visualized one or the CSV file that you can export. And from there you can start discussing: are we really on the right track? Are we meeting our business outputs, looking at the costs we have here? Does it really make sense, or are we going in the wrong direction, putting in too much money and getting no value, even in the long term? That's the kind of discussion you'd be going through with the team over time. Next, you need to implement cost awareness organization-wide. You can, of course, go ahead and create your own resources on the cloud, but there should be a process where the cost is shared across different levels. So at least have a mechanism whereby, whenever you are provisioning a new resource, especially if it exceeds a certain threshold, maybe $50 per month or so, you share the cost estimation in the ticket you create for provisioning that resource.
That way the right stakeholder, when he or she looks at it, is able to identify whether it's really a good investment to proceed with, or whether it's really too much, or whether it needs an exception, and so on. That's the kind of awareness you need to spread across the organization. And with that, we just managed to switch the very first high risk from red to green. Next, usage governance. You know, you cannot really be cost-optimized if anybody can just go ahead and create stuff, right? So you need to develop policies. These policies can segregate what kind of permissions certain teams have and what kind of permissions other teams have. A good example could be the development team and the operations team. The operations team might have access to many accounts with wider permissions, so they can maintain the environment, while development teams might only need to access the resources in which their application is up and running, and so on. So identify your business requirements, identify roles, and create groups which you can use to segregate them. An easy way to do it is by using AWS Identity and Access Management (IAM), which can help you create your users and groups and then associate the right policy that restricts their access. So instead of letting them just go ahead and create whatever they would like to create, you restrict their permissions and what they can do. And from there, you can also restrict certain regions so resources cannot be created in them, because over time, if you're not using a region but are allowing users to create resources in it, you might end up being charged for costs that you're not even aware of. Then you need to implement goals and targets. Remember the meetings you have with the cost optimization team? There should be some goals and targets that you discuss for every workload in those meetings.
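Picking up the region-restriction point from a moment ago: one common pattern, sketched here under the assumption that only two EU regions are in use, is an explicit deny on all actions outside an allowed region list, using the `aws:RequestedRegion` condition key. The policy name and region list are illustrative:

```python
import json

ALLOWED_REGIONS = ["eu-west-1", "eu-central-1"]

# Deny every action requested outside the allowed regions. A few
# global services (which are not tied to a region) are exempted via
# NotAction; adjust both lists to your own requirements.
region_guard_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideAllowedRegions",
        "Effect": "Deny",
        "NotAction": ["iam:*", "organizations:*", "sts:*"],
        "Resource": "*",
        "Condition": {"StringNotEquals": {"aws:RequestedRegion": ALLOWED_REGIONS}},
    }],
}

print(json.dumps(region_guard_policy, indent=2))
# The document could then be attached via IAM, e.g. (requires boto3
# and AWS credentials):
#   iam = boto3.client("iam")
#   iam.create_policy(PolicyName="RegionGuard",
#                     PolicyDocument=json.dumps(region_guard_policy))
```

An explicit deny like this wins over any allow, which is why it works as a guardrail on top of whatever permissions the groups already grant.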
Some of these goals could be that by next quarter I would like to increase my business utilization, or my business consumption, for this workload, while only slightly increasing its cost. That's the goal. The target would be that the consumption increase should be 20 percent, and the cost increase 5 percent. So make sure you have accurate data points that you can count on and assess yourself against. Then you need to implement an account structure. It's really quite hard to apply governance, and it gets quite complex, if you have tons of resources in a single account. So you need an account structure in place where you can segregate what the different teams are doing and where the different workloads live. It could be that a team is working on workload one and has nothing to do with workload two. Or even if it does, you can still, using other tools, enable them to communicate together somehow. But at least you make it easier for yourself to segregate the kinds of policies and rules you're going to assign to the different people who will access these resources. And for that, you can use AWS Organizations. AWS Organizations is an AWS service that can help you with account provisioning and also with governing all your accounts. So you can create an account with certain policies in place that restrict what can be done and what cannot. A few of its benefits: first, shared identity and access management. Instead of having to set up identity and access management for every single account, assigning a certain set of permissions to a certain user in every account, which is a tedious and long process, with AWS Organizations you really don't need to do that.
What you can do instead is centralize it in a single account, and this account becomes your hub for assuming roles into the other accounts. You can also consolidate the billing. Having a dedicated bill for every single account is tedious as well, especially when you try to extract the cost and usage reports and have to go through a very big number of documents to review your costs. Number three, shared usage credits. If you have any usage credits that AWS has granted you, and you have a single account that is not part of an organization, then the credits are dedicated just to that account. However, if you're using AWS Organizations, they can be spread organization-wide across the different accounts you have. And last but not least, sharing Reserved Instances. A Reserved Instance is a commitment you make to use a certain type of EC2 instance for one or three years. If you're making it for a single account, it's dedicated to that account, but if you're using AWS Organizations, you can share it organization-wide. And that makes sense here mainly because, if you decide to shut down an account where you still have some EC2 Reserved Instances, you can still put them somewhere else and invest them somewhere else. So with that, we managed to flip the second high risk to green as well. Third is usage and cost monitoring. For the time being, we have AWS Cost Explorer, where we are monitoring our costs, but it only gets updated every 24 hours. It would be nice if we could gain more detailed insights: instead of every 24 hours, maybe every hour, especially if you're deploying a new workload and you don't want to be surprised that the cost has increased insanely after 20 hours or so.
And you can easily do that via the AWS Cost Explorer console, where you can navigate to Preferences and enable it, which is pretty straightforward as you see. It's going to take 24 hours to start updating the costs every single hour. Next, you need to identify cost attribution categories. You can do that by having a meeting with the key stakeholders in finance, who can help you identify these categories. They could be the kinds of tags that you eventually assign to your resources, such as a cost center, the environment name, the workload owners, and so on. Then you need to establish workload metrics. That's very critical here, mainly because if you can't track the business value of a workload, it might turn out not to be efficient after a long time, which also means huge losses. So here you need to identify what really matters business-wise, such as maybe the number of page views, the active subscriptions, the seconds of interaction with your application. It really depends on your case; I'm just mentioning some examples here. For that you can use Amazon CloudWatch, where you can publish these kinds of metrics and set up alarms to be notified whenever a certain threshold is crossed. For example, if the number of active page views drops to less than 50 percent of the normal level, then clearly there's something wrong. Maybe you're under attack, maybe your application is facing technical difficulties, and so on. That would be the moment you really need to start engaging. Then you need to configure billing and cost management tools. If there is one thing I really want you to take away from this session, it's exactly this one: make sure you are creating budgets. Normally, when I start having conversations with a customer, one of the first things I make sure of, besides using a work or company email for the root account and enabling MFA, is to create a budget.
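On the workload-metrics point above, publishing a business metric and alarming on it can be sketched as follows. The namespace, metric name, alarm name and threshold are all illustrative; the dictionaries mirror the parameters of the CloudWatch `put_metric_data` and `put_metric_alarm` calls, which are commented out because they require boto3 and AWS credentials:

```python
# A custom business metric: active page views for a hypothetical workload.
metric = {
    "Namespace": "MyWorkload/Business",
    "MetricData": [
        {"MetricName": "ActivePageViews", "Value": 1234.0, "Unit": "Count"},
    ],
}

# Alarm when page views fall well below the normal baseline,
# e.g. to roughly half of the usual 5-minute total.
alarm = {
    "AlarmName": "active-page-views-dropped",
    "Namespace": "MyWorkload/Business",
    "MetricName": "ActivePageViews",
    "Statistic": "Sum",
    "Period": 300,                       # evaluate in 5-minute windows
    "EvaluationPeriods": 1,
    "Threshold": 600.0,                  # assumed ~50% of the baseline
    "ComparisonOperator": "LessThanThreshold",
}

# cw = boto3.client("cloudwatch")
# cw.put_metric_data(**metric)
# cw.put_metric_alarm(**alarm)
print(f"Alarm fires when {alarm['MetricName']} < {alarm['Threshold']:g} per {alarm['Period']}s")
```

In practice you would also attach an SNS topic via `AlarmActions` so the drop actually notifies someone.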
For creating budgets you can use AWS Budgets, which can help you set limits, and if you exceed them you are notified, so you really know what to do next. By default, it doesn't take any action, but you can integrate it with other AWS services, which can help you, for example, shut down resources if you'd like. You can define budgets based on an actual amount of money or based on forecasts. If you're using forecasts here, it uses the forecasts of AWS Cost Explorer and notifies you that by the end of the month you're going to exceed this amount anyway, so maybe you need to take action. And you can be notified by SMS, by email, or by chatbot notification, which can stream these notifications to your Slack channels or Chime rooms. Creating a budget is very straightforward as well. Just search for Budgets, or go to the AWS Cost Explorer console, and under the Budgets section, that's where you can create it. The steps are very straightforward, as you see here, and once you're done with the parameters you need to provide, you can just create it, and from then on you are notified whenever these thresholds are exceeded. And now we have managed to flip the third high risk from red to green as well. Next, you need to decommission resources. And that's one of the most important things, because I have never seen a resource that's meant to live forever. At some point you need to decommission a resource, and for that you need to track your resources over their lifetime. If you're just creating a resource and not really aware of where it is or what it is, when should you decommission it? That's a bit of a tricky one. That's why you should be using tagging. Tags are simply labels that you assign to your AWS resources: key-value pairs that you can use to gain more insight, or as metadata about the resource. For example, which environment this resource exists in.
Maybe what cost center should be charged back for that resource, and so on and so forth. And then you can even visualize these resources: for example, how much are all of the resources associated with a certain cost center costing? So you can use tagging as well to gain more cost insights at the tag level. After identifying a way to track the life cycle of your resources, you need to implement a decommissioning process. For that, you really need to make sure you have a way to decommission the resources eventually. To make it easy for you, I highly encourage you to use infrastructure as code, if you're not already. Using it means that whenever you decide not to proceed with certain resources, you can just go ahead and decommission them. However, if you have a hand-created resource, you still need to decommission it; to make that easier, and maybe because it's a new resource type that doesn't support infrastructure as code at this moment, at least try to track it via tagging. This will be helpful later on, so you are able to find the resource and decommission it. And now we have managed to flip the fourth high risk to green. The fifth question is about evaluating services. You need to identify organization requirements for cost. How can you do that? Think about it with an example where you need to provision a workload that needs a database. This database could be provisioned on EC2 instances, but it could be provisioned on RDS as well. Which option should you go for? The EC2 instance sounds more appealing because it's cheap, while Amazon RDS sounds more expensive, but that's not really true, because there are also some hidden costs here.
If you are under time pressure, or if you don't have an experienced DBA who can really manage the database clusters and maintain them over time, that's a lot of cost and stress for you, which, although it's not a visible cost, is going to bite you at some point. So use your judgment here: do you really have the skills and the time to proceed with the option that sounds cheaper, or should you go for the option that sounds more expensive but is going to help you in the long term? Then you need to analyze all of the components of your workloads and run a thorough analysis of them. You can do that by going through every component you have identified in your architecture and evaluating whether it still makes sense. Should I replace it? Should I decommission it? Because sometimes at the initial stages we just add some components as a workaround, but over time they might no longer be needed, or could be done in a better way. My advice here is: if the resource or component is not costing you much money, let's say five dollars per month, and you are under time pressure, and it doesn't have any severe impact, like a security impact, then maybe it's not worth your time; but if it's costing too much money, or causing too much pain, then you should really go through it. Last but not least here: select software with cost-effective licensing. I normally advise customers to use open-source software, but even open-source software, which can result in significant workload optimization from a cost perspective, is sometimes licensed as well. Well, if that's the case, and if you really need the tool, at least make sure it's not bound to an arbitrary attribute such as CPU.
I do remember a customer who asked me if they could provision an EC2 instance with 512 gigs of RAM and just two CPUs, because their license was bound to the CPUs. So make sure you don't fall into this trap, and instead, if you're going for a license, make sure it's oriented more toward the business outcomes and results you are trying to achieve. With that, we managed to flip our fifth high risk, and we are gaining momentum here, but we are only halfway: we still have five high risks remaining, but we don't have the time, and as I mentioned at the beginning, I'm only going to address these five. That's why I highly encourage you to visit the cost optimization pillar documentation, where you can gain more insight into every single best practice and question we have discussed so far. So with that said, I really would like to wrap it up. These are a few key takeaways that you should really take out of this session, starting with creating a cross-functional team that has people from finance and technology. This team will be the foundation for eventually becoming a cost-efficient organization. Next, create a billing alert. Create a billing alert. Create a billing alert. You really wouldn't like to be surprised by a bill four times more than what you initially estimated. By having budgets in place, you are notified right away when this happens, so you can address the problem as soon as possible. Create a life cycle for your resources. This is a very important one as well, so you really know what these resources are meant for and what to do with them whenever you would like to decommission them; otherwise it's going to be a long journey trying to search for orphaned objects and resources here and there. And last but not least, regularly check costs against expectations.
Make sure the associated delivery costs of the workload you're architecting are really bringing value to the organization. Otherwise, you might need to revisit it. That's it. I really thank you for listening. In case you have any questions, I would be more than happy to answer.