Hello, everyone, and thank you for attending this session. I know you have many options in these time slots at the Linux Foundation event; this presentation is part of the Cloud Open track. My name is Victor, and I will be discussing some details that you should consider in order to run the right migrations, or the right migration projects, if you are in an environment that you could consider traditional. In my case, this is mostly a tale of modernization. For the previous two years, I've been working with many government and enterprise institutions that have been making the switch from traditional, monolithic systems to cloud native, and not to any specific cloud, because I will discuss more on that later. This tale is about the mistakes that I made, a possible set of solutions, and some common practices that you should be aware of in order to make a good effort and, most importantly, a successful one. So let's go on with the presentation. First of all, I am a consultant, and I work for a consulting firm in Guatemala, in Latin America, in Central America, right behind Mexico. If you consider my environment, it is mostly traditional in regard to cloud migrations. Of course, many enterprises are already running some workloads in the cloud, but you could be surprised when you go to traditional institutions like banks, government agencies, and some big players in the industry, because they have their own way of doing information technology management. Along these lines, when I talk to IT managers about cloud native, I find a misconception in their perception of it, because most, if not all, of the interviews that I did in the past started with a false assumption. The false assumption goes like this. 
Many of them believe that by migrating to newer cloud technologies, be it serverless, backends as a service, or microservices, companies will achieve scale, because in recent times, as you probably know, many organizations are interested in scale, as the number of concurrent users they receive on their software and mobile platforms grows continuously. Why is this a misconception? I think many market professionals tend to oversell their products. And as I work as a consultant and software architect, I have to fight back, let's name it like that, in order to establish what can be done with a migration to cloud-native technologies and what you can actually expect, because the promise that surrounds these technologies, for instance, is that infrastructure will be self-healing, that it will auto-scale, that costs will be easier to manage and, most importantly, cheaper. But you shouldn't take that as a fact, because while it could apply to some projects, it is not the case for every project. I've participated in successful projects, but also in unsuccessful ones, and I want to share with you some of the lessons that I learned during this journey. Anyway, I promised a tale about migration, so this tale starts like this. A well-established IT manager, IT director, or CTO has been creating technology for many years. I think that the average experience for any CTO in Guatemala is between 10 and 20 years, and they've created a lot of successful systems, implementations, and use cases in scenarios in which monolithic applications, mostly internal ones, are a good fit. It doesn't matter if you are an IT manager for Java houses, .NET houses, or PHP houses. 
The traditional software development environment works like this: you create value for the user, and this value is implemented as an application deployed over a web server, be it IIS, a Java application server, or probably the Apache web server if you're using PHP. And after that, you start to receive a lot of requests like "we need to scale this system". We need to scale the capacity of the system, because this system that was designed to provide services for 500 users will, from one day to the next, have to support the load of 5,000 users. And you have to do it in a proper way, with a limited budget, and you have to take into consideration all the laws that surround your organization, especially if you are in government. So an IT manager starts down a road like Alice in Wonderland, in which he digs into the internet in order to find out what cloud native is and which opportunities he can take advantage of in order to provide this scale. That's my experience, but also the experience of many of the people that I've worked with in these past years, and this tale continues like this. When you start to learn about cloud native, one of the shocking experiences is that everything is changing, and I will summarize the experience as "everything is changing" because you have traditions for building software that have been working for the previous 20 years, but when you face new software stacks, software implementations, and market presentations, you will start to find a lot of new terms and keywords. And at least in my experience, the first shock you will have is: what's going on here? So, when you look for a cloud-native definition, a proper definition would be: this is an approach for building modern computing systems in dynamic environments such as private and public clouds. You could ask yourself: well, I've been doing that forever, since the first versions of AWS were available. 
But if you keep digging and you continue down the Alice in Wonderland road, you will start to find terms like reactive systems, the twelve cloud-native factors, cloud-native design patterns, domain-driven design, microservices chassis, container orchestration, and most importantly, the newer clouds. So, if you want to catch up with the new marketing terms in order to be successful in a new project implementation, you will start a process of crossing all of these concepts in order to create a good software architecture. And most of the time, the marketing promise says that if you go from traditional to cloud to cloud native, you can actually attain scale. So, what's this approach, or what's, in my opinion, the right approach? The right approach goes like this. We want reactive systems. We want systems that give the impression that if some component fails, the system will continue to work; in other words, it is fault tolerant. It also scales depending on the demand, so it's elastic. And it is, of course, message-driven, because the traffic that the system receives will actually drive the scaling and the operation of the entire system. In order to implement these reactive systems as described in the Reactive Manifesto, we could look for a methodology, and a valid and tested approach is the twelve cloud-native factors. These twelve factors, which were defined by Heroku, describe the minimal rules that any software development implementation, and most importantly any distributed system, should comply with in order to be cloud native. After you apply the twelve factors, you start to have a lot of issues that weren't present in traditional monolithic development, like distributed processing, compensations, and messaging between distributed systems. So common mistakes and solutions are described by cloud-native design patterns. 
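To make one of these cloud-native design patterns concrete, here is a minimal sketch of retry with exponential backoff, a classic pattern for calls between distributed services. This is illustrative code, not taken from any of the projects in this talk; the sleep function is injected so that the backoff schedule can be observed without real waiting.

```javascript
// Sketch: retry with exponential backoff, one of the resilience patterns
// mentioned above. All names are illustrative.
function retryWithBackoff(operation, { retries = 3, baseMs = 100 } = {}, sleep = () => {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return operation(attempt); // success: return immediately
    } catch (err) {
      lastError = err;
      if (attempt < retries) {
        // Exponential backoff: baseMs, 2*baseMs, 4*baseMs, ...
        sleep(baseMs * 2 ** attempt);
      }
    }
  }
  throw lastError; // all attempts exhausted
}
```

A production version would add jitter and asynchronous timers, and would usually come from the microservices chassis or a resilience library rather than being hand-rolled.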
Hence, if we are creating this distributed architecture in order to obtain better ways to scale up our applications, we tend to divide the system by functionality, each piece of functionality being a specific domain of knowledge; hence we divide the system using domain-driven design, among other ideas. After that, you should start to dig into microservices chassis, tools like Spring Boot, Node.js, or .NET Core, in order to implement distributed systems that communicate with each other, deploying them over a container orchestration platform, be it a complete managed service, a Kubernetes implementation, or something simpler like Docker Swarm. Hence, the cloud-native area is not only about taking your application and lifting it to the cloud; it's actually an approach for building a good project over cloud-native implementations and cloud-native technologies. With this, the tale continues in the second chapter. Now I have the right knowledge in order to bootstrap my cloud-native strategy, but this seems pretty difficult. And actually, that's why, in my opinion, a cloud-to-cloud-native or on-premise-to-cloud-native migration should be treated as a macro project, a macro project that is divided into small steps that we will discuss in the following slides. In my experience, if you focus only on the microservices implementation or only on the domain-driven design area, your project may not be successful, because you have to cover all of these areas in order to do a proper implementation. So what's the next chapter? These kinds of projects are easy to describe but difficult to implement. And when you are the consultant, as I am, you have to guide the efforts, guide the investment, and guide the technology selection. 
The first successful project that I did was the result of a really deep discussion with my client, but most importantly, a sincere discussion about what's attainable or not, considering its actual state: the actual developer capabilities, the actual IT department budget. And we did some benchmarking about what's going on in different enterprises of different sizes. And in this third chapter, we found the Butterfond Cloud Native Journey. The Butterfond Cloud Native Journey that I am presenting here is a small slide that was actually shown in a Butterfond presentation, of course. What I liked about this slide is the different levels that you could define as the cloud maturity level. This is pretty important: this is not an academic definition of cloud maturity, but here you can observe that your organization could be at four different levels of maturity in regard to cloud-native implementations: legacy, virtualized, cloud-ready, and cloud-native. Most of the time, if you are working in a traditional environment, you will be working at the first two levels. You could be deploying applications directly on a server, which nowadays is mostly reserved for really legacy applications like mainframes or web servers that have been running for a long time. But the first revolution that impacted our ecosystems of server development and deployment was virtualization. With virtualization, the promise was to effectively divide the computing power of the hardware in order to isolate my applications, to reserve resources like memory and CPU, and most importantly, to provide an isolated environment in which I could run my application. In the virtualization scope, you actually have this advantage over the hardware, but you are not effectively running on a cloud. Of course, there are some cloud providers that offer this kind of lift and shift of whole virtual machines in order to keep running your current applications. 
But a traditional application developed for a specific operating system is not cloud-ready, at least not at this point. So you have to ask yourself, what's the next level? And the next level is cloud-ready. Being cloud-ready, as discussed by Butterfond, is like virtualization or, defined in another way, like defining the limits for CPU and memory consumption, but you have to take into account not only where my application will be running; you also have to consider the best practices for delivering my software artifacts in a safe way. So, to summarize, a cloud-ready implementation is one where you have a proper, secured CI/CD flow, in which you have a fully automated app build-and-run cycle. You don't rely on a specific operating system level; you redeploy instead of restarting your application, and you start to create your infrastructure as code. So you have an application. This application was, first of all, deployed on bare metal: that's legacy. You deploy that application over a virtualized implementation: that's virtualization. And if you have a process to compile the code, test the code, and deliver the code automatically to an infrastructure that is also provided as code, you are cloud-ready. At this level, you see tools like Ansible, Chef, and Puppet, and traditional CI/CD servers like Jenkins. And you are right behind the cloud-native area. So what does it take to be cloud-native? It takes into account all of the cloud-ready implementation; please notice that, because it's important. You have to be stateless. That means that your application could run on any server of a Kubernetes cluster, or could run in a different instance of a lambda implementation, a lambda function. But besides that, it has automation in the deployment, and in the monitoring and control, of your application. 
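As a hedged illustration of the infrastructure-as-code idea just mentioned (not a playbook from any of these projects; the host group, package, and path names are invented), an Ansible sketch could look like this:

```yaml
# Illustrative Ansible playbook: the app server is described declaratively
# instead of being configured by hand, so it can be rebuilt at any time.
- hosts: app_servers
  become: true
  tasks:
    - name: Install the Java runtime
      ansible.builtin.apt:
        name: openjdk-11-jre-headless
        state: present
    - name: Deploy the application artifact produced by CI
      ansible.builtin.copy:
        src: build/app.war
        dest: /opt/app/app.war
      notify: restart app
  handlers:
    - name: restart app
      ansible.builtin.service:
        name: app
        state: restarted
```

The point is that the server's desired state lives in version control next to the code, which is what makes the automated build-and-run cycle repeatable.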
So it is stateless and self-healing, it auto-scales, and it separates the metering of application usage from the billing. And it is basically self-recoverable and self-discoverable; it doesn't matter which implementation you use in the end. What's the main problem here? Most of the time, being cloud-ready is a matter of culture. Because if you have an established way of doing things, as I've observed in many software development teams, this is pretty difficult to implement. When a developer has found their preferred framework, their preferred way to do things, it is difficult to switch to a new paradigm in order to implement things like automated testing and CI/CD. And that's a prerequisite for a successful cloud-native migration. You cannot go cloud-native without being cloud-ready first. And that's the main issue that I found in my implementations: people want to go from legacy, or at best virtualized, directly to cloud-native without changing the culture of the teams. The question now is, how do I change the technology that I use to serve my users, and how do I change the culture of my entire IT department? And the answer is pretty simple: you don't. You will fail if you try to do both at the same time. So, as I stated in a previous slide, my recommendation is to do this as a macro project. You have to tailor your project to do a proper migration of technology, but also a proper migration of culture. So I want to continue my tale by talking about two implementations that I did in the past: the migration projects. These were projects on which I'd been working as a consultant, especially a Kubernetes consultant, since 2020. These migration projects just finished during 2021, and one project finished about three months ago. So these are the migration projects. 
In the migration projects, I tend to nickname these projects "the government wants to go cloud native", because I did them in the government sector of my country. As in many other sectors, the Guatemalan government tends to require NDAs, but I can discuss these implementations a little bit. The first source of the difficulties that I faced in these implementations was that government institutions in Guatemala are required to store their data in Guatemala; local data storage is required by law. These institutions are actually part of the same government sector. Despite being in the same sector and having relations with each other, there is no standardization of technologies among them. At one institution, I found a Node.js-based solution, a recent implementation, let's name it like that. After doing all of this development over Node.js, they had a Docker-based monolith. Despite being on Docker, this was actually a monolith, because they were deploying between 20 and 30 services in Docker containers on one big server without virtualization: a bare-metal Linux installation running a lot of containers. And they had this culture of doing CI/CD with GitLab, mostly CD, without much testing, but this was enough for their actual workload. On the other side, I had a more traditional institution that had been developing a solution for about seven years using a Java Enterprise Edition-based stack, specifically Apache TomEE, with a monolith and no CI/CD at all. Both systems were working just internally, but they were facing the same challenge: we have to provide services for all of the people in Guatemala, 16 million potential users, the actual population of Guatemala. I know this sounds like a lot to some of you and like a small load to others, but this is the potential load on these systems. 
But most importantly, they estimated that between 1,000 and 5,000 concurrent users would be using the systems at any time. When I faced this implementation, I started with the easy one, because I was hired as a contractor by the first institution. At this institution, I started to tailor a custom project in order to go from the traditional Docker monolith to another, more formal orchestration platform. Spoiler: it was Kubernetes. And after doing the migration, to be fair, I received good feedback about it. Then I started the second project in the same sector, but not under the same conditions, because the other institution had no CI/CD at all. And that's when I basically discovered that I had to face these kinds of migrations not as isolated efforts, but as projects, as macro projects. And I don't have a manager profile; I am mostly a software developer, a software consultant, but I was a software consultant who needed to create my own project and to provide a good service to my clients. So what was my approach? My approach was the PMI approach, the Project Management Institute approach. When I faced this challenge, I started to ask myself, how do I tackle this problem? And most importantly, how do I do it in a successful way, considering that I've done some pieces of this in the past, but this looks like an entire implementation? So I did my research to find this approach, and I matched many of the PMI project phases, which are universal, to the actual migration project. The classic phases, concept and initiation, definition and planning, launch or execution, performance and control, and project closing, were matched to the different phases of a traditional-to-Kubernetes migration, or a not-so-traditional Docker-to-Kubernetes migration. 
And here I found some interesting facts that I want to share with you about how you could run your migration and how you could face it as a project in order to have a fairy tale and not a nightmare, let's name it like that. In the beginning, as you can probably guess from the DevOps keyword, you have to involve the important stakeholders in order to define a proper roadmap. If you are a consultant like myself, you have to create this roadmap in order to give a map of investments. So when you bootstrap these kinds of migration projects, I recommend running brainstorming sessions, no more than two of them, and you have to be sure that the important stakeholders will be participating in the meetings. I run these bootstrapping sessions in at most two meetings of no more than two hours on average, but I secure the following stakeholders. First of all, despite all the DevOps hype in which software development and software infrastructure are one unit, that's not true for most institutions. So in my brainstorming sessions, I had a software architect, who is a tech lead in some teams and a senior developer in others; most importantly, this role has the power of decision to enforce standards and practices in the software development team. You also have to include the infrastructure director, be it a sysadmin or a site reliability engineer, in order to match the expectations of the software development team and the infrastructure maintenance team. Besides them, you also have to secure a direct contact point. After the negotiation that I did as a contractor, my direct contact point was a person who was always reachable, because you are creating infrastructure and you are creating culture. So you have to have this direct contact point in order to ask for acquisitions, trainings, and infrastructure. 
And the key questions that you should be clarifying in these brainstorming sessions are about the motivation for the migration. In order to decide which technology could be better for this migration, you have to understand the actual motivation. Why is that important? Because if you are actually running a successful system, the cloud-native approach is not the only way to obtain scale. You could take traditional approaches, in which you create, for instance, application server clusters, or you could scale horizontally with the technology that you are already running. So your motivation to go cloud native should be strong enough to justify securing the resources and doing the proper migration. In my migrations, the motivation was, first of all, to obtain scale in order to provide services for the entire population of Guatemala. But most importantly, they have a roadmap from the current year to, let's say, 2031, in which they plan to go to the cloud, or at least a hybrid cloud. So the applications being developed today have to be prepared to run in the future over Oracle Cloud, AWS, or GCP. They don't have clarity yet on which provider will be selected, but they want to be sure that they will be able to run the workloads on any cloud. After that, you have to assess the actual current team size and skills, because being a cloud-native software developer is not the same as being a traditional developer. Finally, not every technology is equal, and you will find that out the hard way, because it's not the same to migrate Node.js services that are already isolated and stateless versus migrating a Java application created with JavaServer Faces, which keeps a lot of server-side state. So you will find particular challenges depending on the technology that the organization is actually using. 
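To illustrate the statelessness point just made (a sketch with invented names, not code from either institution): the practical difference is whether session state lives inside the process, pinning a user to one server, or behind a store abstraction that any replica can reach, such as Redis or a database.

```javascript
// Sketch: a handler that keeps no state in process memory. The store is
// injected, so it could be backed by Redis, a database, or (in tests) a Map.
function makeSessionHandler(store) {
  return {
    login(userId) {
      const token = `t-${userId}`;            // illustrative token scheme
      store.set(token, { userId });            // state lives outside the process
      return token;
    },
    whoAmI(token) {
      const session = store.get(token);
      return session ? session.userId : null;  // any replica can answer
    },
  };
}
```

With the store injected, the same handler works on one server or on thirty Kubernetes pods, because no replica holds state that another replica lacks.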
In this line, the first deliverable that actually helps me as a consultant is an architecture review. Depending on your situation, this could probably also be done as an early architecture revision. In this architecture review, you have to be specific on four points, starting with the actual issues that the IT department or IT professionals are facing inside the organization. Once you find the issues, you have to define a possible set of solutions, approaches, and most importantly, actions. And these actions, when you are a consultant, take different forms, be it "hire another consultant", "train your actual software development team", or "please outsource all of the infrastructure management, all of the infrastructure-as-code management": you have to provide options. And why do you have to provide options? Because in these kinds of migrations you have a limited budget. Let's face it, you don't have an infinite budget that lets you say, okay, let's hire a lot of consultants and migrate everything to AWS in a period of one month. That doesn't happen in real life. So you have to provide these kinds of options, because with them you offer an opportunity to create contracts based on deliverables. With these contracts based on deliverables, you can actually provide options to different teams, and if you do it in the proper sequence, you can create a migration by phases. You can evaluate, from phase to phase, the actual results of your migration. What's my recommendation for this definition and planning? 
I know that there are a lot of tools for doing a software architecture review, be it TOGAF or the C4 model, but for my use case I found the software architecture review template by Atlassian to be the easiest approach, because this template actually considers the previous factors, the issues and the possible solutions, it allows you to establish a roadmap, and most importantly, later on you will be creating the contracts by deliverables or phases by using these possible solutions. I know this slide is in Spanish, so I will translate it, because this is the actual result that I've been using in one of these projects, but I divided the Atlassian template into mini-projects that are part of a macro project. Each of these mini-projects has its own description of issues, challenges, and possible opportunities, and what managers like to see here are estimated implementation times. I know full well that these won't be the actual implementation times, and they know it too, but when they have estimations, they can say, okay, this whole mega project will take between one year and one year and a half; I think we could tackle the first two mini-projects and see what happens, and if the results are positive, we could later go on with the next steps. Once they have this whole panorama of project descriptions and estimated times, they also have opportunities for outsourcing: let's hire a consultant, let's grow the team with a site reliability engineer, what are the actual responsibilities of my team. So this is a good roadmap for defining where I will go with my investments and, most importantly, how I decide in which areas I will invest my money in order to go from legacy, to virtualized, to cloud-ready, and on to cloud-native, if I really need cloud-native to solve my problems. 
After the definition, and in my case after being hired as a contractor, I had to launch the second phase of my project and execute each of these phases, in which I have a direct contact point for communication, and with this contact point, a direct means of communication, in my case WhatsApp or Telegram, depending on company and organization policies, but also a non-repudiable means of communication. Because when you are interacting with different software development teams, you have to be accountable for the things that you do right, but you also have to be accountable for the things that went wrong. So having a non-repudiable means of communication gives you a risk management area in which you as the consultant, but also they as a software development team, have an audit log, let's name it like that, in which you can see which decisions were made and, most importantly, at which time each decision impacted the system. These kinds of projects involve two main cultural changes, the first being the DevOps implementation, because to go from virtualized to cloud-ready you have to be a DevOps-aware institution; but most importantly, to do DevOps properly, you have to be a test-driven development team, not necessarily with the full TDD culture, but you have to have some automated testing in your implementation. And if you can switch the culture of how people are creating software nowadays, you can actually go on with the infrastructure. Otherwise, you will be providing a lot of infrastructure that will be like a Ferrari without a proper driver: you will have the tools, but people won't be using them, because they don't know how to use them properly. On the infrastructure side, as a third-party consultant, or as you hire more people onto your team, you have to secure things like: do I have a policy for remote access? Do I have security policies? Do I have the proper NDAs? 
This is also cultural, in order to allow people to work remotely, because you are doing this on the cloud. You have to secure the VCS infrastructure, namely Git; at least that's the gold standard nowadays, and I'm not sure what will be next. You also have to make sure that your current CI/CD server will work with the newer cloud technologies; if you need to do an upgrade, just plan for it. Actually, in my case, I had to implement a CI/CD server in order to be cloud-ready well before being cloud-native. Only after that, after you have the culture and the basic infrastructure for being cloud-ready, can you go to the cloud-native state. In the cloud-native step, you will be making decisions like: which will be my cloud-native platform? Will it be OpenShift, vanilla Kubernetes, or will I go with a cloud provider offering like EKS or Google Kubernetes Engine? And when I start the migration from cloud-ready to cloud-native, I will have to tackle new issues, because there will be new issues; you can be sure of that. The most important issue, at least in my opinion, is observability. Because when you go from cloud-ready to cloud-native, the services will be self-healing and self-scaling, which means, in practical terms, that the copies of your service will be growing without your intervention. But the logs, the metrics, and the actual state of your application will be so dynamic that an actual human being cannot manage this by hand. So you have to tackle the actual infrastructure implementation and the culture that surrounds this implementation. And I want to highlight and emphasize that the observability part won't be so easy to do, and most teams neglect it on their first implementation. Now, continuing with the tale, I want to share a fact that I learned the hard way. All of these phases will have a direct consequence. 
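As a tiny sketch of the instrumentation this observability step requires: services expose their metrics in a machine-readable form, for example the Prometheus text format. A real service would use a client library such as prom-client; this hand-rolled renderer, with an invented API, only shows the shape of the output that a scraper would collect.

```javascript
// Sketch: render counters in the Prometheus text exposition format.
// Each metric gets HELP and TYPE comment lines followed by its value.
function renderMetrics(counters) {
  const lines = [];
  for (const { name, help, value } of counters) {
    lines.push(`# HELP ${name} ${help}`);
    lines.push(`# TYPE ${name} counter`);
    lines.push(`${name} ${value}`);
  }
  return lines.join('\n') + '\n';
}
```

A service would serve this text on an endpoint such as /metrics, and Prometheus would scrape it from every replica, however many the platform has scaled up.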
And the direct consequence was that, in this ever-changing environment, you will need to document in order to be able to support this in the future. I did a previous implementation in which we neglected the documentation, and as of today, that infrastructure is difficult to maintain. So you have to assume that all of these execution phases will produce documentation. My recommendation, based on experience, is to tailor a living documentation, a documentation that keeps changing along with the actual infrastructure implementation, the first trial-and-error tests that your users and your developers will be doing, and the actual tuning, updates, and maintenance work that you will do over this infrastructure. This living documentation should be kept in a wiki, or in a knowledge base like the one offered by Atlassian; I do prefer wikis versioned as living documentation in a Git repository. This living documentation also has a complement: well, I decided to train my team. The training and development will include the production of some of the following artifacts, like bootstrapping archetypes, template projects on which you can base your following products. You will also have to secure proper software configuration management tools, for instance Maven in Java or the Node package manager (npm) in Node.js. And you have to document your TDD practices, your domain-driven design decisions, your microservices chassis implementation, and your infrastructure as code. My suggestion is that you don't have to kill the monolith right away. You can start with your actual monolith, and in order to bootstrap your knowledge and, most importantly, to try your new infrastructure, you can create satellite services that consume information from the monolith. Only after that should you actually divide the monolith into a distributed software implementation, because you won't have enough knowledge in the beginning. That's at least my opinion. 
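The satellite-services idea can be sketched as a simple routing rule in front of the monolith: requests for functionality that has been extracted go to the new services, and everything else falls through to the monolith. The prefixes and names here are illustrative, not from the actual projects.

```javascript
// Sketch: per-request routing decision for a strangler-style migration.
// Paths matching an extracted prefix go to a satellite service; the rest
// stay on the monolith.
function routeFor(path, satellitePrefixes) {
  const match = satellitePrefixes.find((prefix) => path.startsWith(prefix));
  return match ? { target: 'satellite', prefix: match } : { target: 'monolith' };
}
```

In practice this rule would live in a reverse proxy or an ingress controller, and the list of extracted prefixes would grow as the team gains confidence with the new infrastructure.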
That's where I failed a lot, because I tried: okay, let's go with Quarkus in Java. But in the end, it wasn't about the tools, it was about the culture. So in the next, successful projects, I started with the monolith: let's keep the monolith, try to create satellite services around it, and actually gather knowledge in order to do it properly. So new projects should be created with the mandatory cloud-native factors, but you can actually live in a hybrid environment in which you are communicating with the actual monolith. It doesn't matter if it is on a private cloud, or you could probably run a lift-and-shift virtual machine in order to interact with the actual monolith, but you have to change the culture at the same time that you are learning the new technologies. In this line, I won't go too deep into DevOps, but you have to define a proper DevOps way to do things. Probably you are agile, using Scrum or some other methodology. That's good, but that's not the entire DevOps thing. You have to actually implement a proper way of integrating and testing your software; that's only continuous integration. You have to automate your releases, whether these releases store your artifacts in some kind of repository or actually deliver them to a testing or certification environment or to the actual production environment. If you are able to secure your agile culture, your continuous integration culture, your continuous delivery culture, and your continuous deployment culture, you are DevOps. Otherwise, you cannot go cloud native, because you cannot go cloud native without being DevOps. I know you could give me counterarguments about it, but that's my opinion and my experience: if you are not doing DevOps properly, you cannot go to the next step. Remember, DevOps is part of the cloud-ready journey and is actually a prerequisite for the cloud-native journey. So, for instance, in one of my projects, I had to bootstrap a whole CI/CD platform. 
I strongly recommend GitLab because it is easy to set up and, most importantly, it is open source. So it is probably good for in-house development teams: you can run it the open source way, or you can also acquire contracts at different support levels. GitLab is a good way to start if you need to do this on-premises because, as I stated previously, I had to do this on-premises due to the particular characteristics of this project. What's the next step? Once you have your infrastructure, once you have your DevOps culture, and you have probably decided on Kubernetes or another proprietary implementation, you have to take into account the performance of your application and of your team members and, most importantly, control it. On the tech side, you will be controlling things like code quality, coverage of your integration tests, code smells, bugs, and vulnerabilities. You will be controlling the performance of the communications, like network latency and failures of your services. And to do that, you have to create some kind of instrumentation. The most common instrumentation for code quality is probably static code analysis, and for performance you have to create some kind of metrics gathering or observability inside your platform. In my projects, the platform was Kubernetes. So I had these performance and control metrics through SonarQube, with which I discovered a lot of things that could be better implemented in the source code. This is actually a good practice, not only for cloud native, but for any project in general. As you are probably guessing by now, like other tools, SonarQube has an open source version and an enterprise version.
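A CI/CD bootstrap like the one described could start from a pipeline similar to this hypothetical `.gitlab-ci.yml` for a Maven project. Stage names, container images, and the SonarQube variables are assumptions to adapt, not a drop-in file:

```yaml
# Hypothetical GitLab CI sketch for a Java/Maven project with a static
# analysis stage. Adjust images, stages, and variables to your setup.
stages:
  - build
  - test
  - analyze
  - release

build:
  stage: build
  image: maven:3.9-eclipse-temurin-17
  script:
    - mvn -B compile

test:
  stage: test
  image: maven:3.9-eclipse-temurin-17
  script:
    - mvn -B verify   # unit and integration tests, coverage reports

sonarqube:
  stage: analyze
  image: maven:3.9-eclipse-temurin-17
  script:
    # SONAR_HOST_URL and SONAR_TOKEN are assumed CI/CD variables.
    - mvn -B sonar:sonar -Dsonar.host.url=$SONAR_HOST_URL -Dsonar.token=$SONAR_TOKEN

release:
  stage: release
  image: maven:3.9-eclipse-temurin-17
  script:
    - mvn -B deploy   # publish artifacts to your repository manager
  rules:
    - if: $CI_COMMIT_TAG   # release only on tagged commits
```

The point is not this exact file but the shape: integration and testing automated on every commit (continuous integration), and releases automated behind an explicit trigger (continuous delivery).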
But after that, I also took advantage of tools like Grafana, like Prometheus, and most importantly service meshes like Linkerd, in order to run this performance and control phase: first of all, to verify my implementation efforts, to actually have metrics answering questions like, am I doing things right, are the users receiving a good service, or do I need something else in order to do the proper migration? Going back to empiric metrics that could be useful to you, a good one would be how many services you have developed with the new culture, because you will be going from a particular culture, the traditional software development culture, to the new cloud-native culture. And in this project, as you are probably also guessing, you have to control the budget, because everything is possible with money, but the budget is limited most of the time. So you have to control your budget, and you have to control deliverables versus deadlines, and the users' and developers' perception. What's my final message on performance and control? You have to be sure that your developers understand the new culture, that they are actually being productive with the new tools, that you are delivering a new and good service, and that all of this is working as it is supposed to work. The performance and control phase, as described by the PMI framework, is actually possible to implement by using the proper tools. For instance, and again, I did the code coverage verification with SonarQube, and the Kubernetes monitoring and metrics gathering by implementing Prometheus as the metrics storage, Grafana as the dashboard for my cluster, and Linkerd as a service mesh, in order to monitor and to create particular rules for the network to be resilient, but most importantly to know for sure that I am taking the right migration steps. So what's the final phase in this kind of practice?
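To wire up the observability stack described above, meshing a workload with Linkerd is done by annotating the pod template. The Deployment below is a hypothetical sketch: the service name, image, and port are assumptions, and the `prometheus.io/*` annotations are a common scrape convention that depends on your Prometheus scrape configuration, not a Kubernetes standard.

```yaml
# Hypothetical Deployment for a satellite service, meshed with Linkerd and
# exposed to Prometheus scraping. All names, images, and ports are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer-satellite
spec:
  replicas: 2
  selector:
    matchLabels:
      app: customer-satellite
  template:
    metadata:
      labels:
        app: customer-satellite
      annotations:
        linkerd.io/inject: enabled    # inject the Linkerd sidecar proxy,
                                      # which reports latency and success-rate metrics
        prometheus.io/scrape: "true"  # scrape convention used by many configs
        prometheus.io/port: "8080"
    spec:
      containers:
        - name: app
          image: registry.example.com/customer-satellite:1.0.0
          ports:
            - containerPort: 8080
```

With the sidecar in place, Linkerd gives you per-service success rates and latencies without touching application code, and Prometheus plus Grafana turn those numbers into the dashboards used in the performance and control phase.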
Implementation should, and I emphasize this strongly, implementation should transition to support. You have to consider that once this migration is finished, or at least starting to take off, you have to be prepared to pay for support, be it an internal site reliability engineer or contracted third-party support for your infrastructure. You have to maintain a living documentation, and this living documentation has to transition to support as well, because once the migration is complete, you have to record in the documentation which changes are meaningful in this project. And most importantly, you have to register what could be done better in your implementation. In my case, as I stated previously, I did this in a wiki. Again, this is in Spanish because I did this in Guatemala, but the final message is this: migration projects, like any other project in IT, will be considered successful if the project keeps on living. When you say an application is legacy, that is another way of saying the application was successful enough to stay alive all these years. So my main intention, using the PMI framework and acting as a consultant, is to go and assess an institution properly. This institution has to understand the right motivations to go, or not to go, cloud native. After that, you have to create a project, even a micro-project, in which you will be changing the culture and the technological stack. All of these changes, if possible, will require acquiring resources, licenses, and support plans from various providers. And in the end, you have to create your knowledge base in order to make the transition from the actual migration project to the support project, in which you are already able to run this cloud-native service. So that's my presentation. I know this was pretty theoretical, but I wanted to share with you these lessons that I've learned. I hope you find this useful.
And I also hope you enjoyed this presentation. Please stay and enjoy all the presentations of the Cloud Open track at the Open Source Summit in Latin America.