Well, hello and welcome everybody again to another OpenShift Commons briefing. I'm really pleased today to have Doeve Khan from Red Hat. He's the practice lead for application migration and modernization, which is a big title for someone who has a lot of hands-on experience working with customers and helping them make their deployments on OpenShift work. We've asked him here to give his thoughts on moving from legacy to cloud native, and he has identified three patterns of modernizing applications that I thought were really insightful. I saw him give this talk internally, and I'm really pleased to have him here to give it to the OpenShift Commons community. You can ask questions in the chat; there will be room and time at the end for Q&A. So without any further ado, I'll let Doeve introduce himself and take it away. Thank you, Diane. Hi, everyone. As Diane said, I am the practice lead in Red Hat Consulting for application migration and modernization, and today I'm going to be talking about the journey from legacy to cloud native, especially when you have a large portfolio of applications, and how we can actually navigate through that and modernize the portfolio. These are three patterns that we have identified by working with various customers in the field, and I'm happy to present on this topic for you. So let's get started. First, a little bit of background with respect to the adoption of emerging technologies by enterprise customers. This comes from research on enterprise IT adoption that tracks how enterprises actually adopt emerging technologies. One of the key trends it points out is that in most cases, enterprises adopt toward the mature end of a technology's adoption cycle. The reason is that while some companies experiment with a technology early, most enterprise IT shops want to adopt technologies only when there is a clear ROI in front of them.
So why does this happen? Why do we see enterprise IT behave this way? I don't want to call them laggards, but obviously they are not the first movers either. Typically the dynamics in enterprise IT work like this: IT is mostly aligned with the business mission, and it is generally considered a cost center that acts as an enabler. For most emerging tech, the ROI is unknown in the beginning. So if you set aside Facebook, Google, Yahoo, and Amazon, for example, the rest of the companies usually want to adopt something only when they know pretty much which use cases they can address with it and what the ROI is. So the majority of the time, these shops adopt emerging tech at a significant point of its maturity, if not towards the end. There are a few other things that are also critical inside an enterprise IT organization. People have skill sets that they want to continue to use and apply. Whenever we talk to customers, they want to do the same thing even in the cloud realm: use the same languages, the same procedures, but obviously with a new flavor. There are existing processes that add to organizational inertia. And there are existing systems that introduce inertia when we talk about adopting newer technologies. On the other hand, the business is constantly pushing IT to move faster with less. There's pressure to reduce cost, and the general trend is toward doing more with less. This creates a fundamental conundrum in IT shops: if you look at the resources and money they have, generally 70% is allocated to maintenance and 30% to innovation or new things. This is a generalization; we have seen 80/20 and 90/10, but we haven't seen 50/50.
The reason is that most IT spend generally goes towards ELAs, towards maintaining the current systems, keeping the lights on, that kind of thing. So this equation tells us that most of the money and resources are dedicated to keeping the status quo. One of the things that prompted me to think about this is: what if we can help the customer change this equation, such that even if we move the needle maybe 10% towards innovation, those are real resources — real money, real stuff — that customers can use to develop new processes, develop or enhance systems, take care of technical debt, and start looking at modernization as a real way to improve the productivity of their shops. That was one of the key drivers for introducing modernization at that scale, for the entire IT portfolio. I'm going to briefly talk about CIOs and CTOs. Among their main challenges: how do they rebalance toward and sustain innovation? Based on the previous slide, this is obviously a struggle. Another is avoiding vendor lock-in in the licensing model. Granted that you bought a piece of software 10 years ago — it's now commoditized, but you're still locked into a proprietary vendor. There's no reason not to move to an open source equivalent after such a long period, when you have already paid the proprietary vendor a lot. Another is becoming more productive with lightweight technology, especially technology that is more nimble and allows you to move faster to a cloud native architecture; you cannot necessarily take your heavy systems to the cloud if you want to go more agile. And finally, adopting new processes and technology. These are the key challenges that CIOs and CTOs face today.
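To make the reallocation idea concrete, here is a toy calculation. All figures are hypothetical; it only illustrates the 70/30 split described above and what moving the needle 10 points toward innovation buys:

```python
# Illustrative only: shifting a slice of a fixed IT budget from
# maintenance to innovation, per the 70/30 split described in the talk.
budget = 10_000_000            # hypothetical annual IT budget ($)
maintenance_share = 0.70       # typical split cited above
innovation_share = 1 - maintenance_share

innovation_before = budget * innovation_share            # ~$3.0M
# Modernization moves 10 points of the budget toward innovation:
innovation_after = budget * (innovation_share + 0.10)    # ~$4.0M

uplift = innovation_after / innovation_before - 1
print(f"Innovation budget grows by {uplift:.0%}")  # ~33% more to invest
```

The point is that a modest-sounding 10-point shift is a one-third increase in the resources actually available for new systems and paying down technical debt.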
But the underlying theme of all this is: how do they actually rally their troops and put a structured plan in place to modernize their portfolio, one that will allow them to address these challenges? So why should we even care about modernization? What's really the value of it? First of all, modernization enables an experimental approach to product development. This has long been a challenge in a typical enterprise, where we see typical release cycles anywhere from six to nine months. We generally see one release a year, maybe two, for a large portfolio of systems, and they generally go through a very large funnel of synchronization where a lot of environments are locked down for a single release of multiple systems at the same time. Contrast that with an approach where organizations can actually experiment, get something into the hands of the user quickly, and get the feedback to refine their offering; that's usually considered a monumental task, and most companies are not even ready for it. So one of the key benefits of modernization is that it enables them not only to think in that direction but also to realize that potential. It also has a good side effect of creating high-performance teams, which improves the quality of work — and frankly, people like to do this kind of work, which helps with recruiting and retaining good talent and generally drives overall improvement in quality. Some of the key things modernization drives: the rate of deployment goes up, and there's faster recovery from failures. In a typical six-month release cycle, if something critical goes down or doesn't work out and we realize it during the production deployment, then usually the entire release is backed out. So if that release touches 7, 8, 10, 15 — however many — systems, all of those systems are impacted one way or another.
Whereas modernization gives you a framework — and I'm going to show you how — that allows you to isolate your failures and recover from them very easily. It also leads to shorter lead times: if the business comes up with an idea that they want to get out quickly, they really don't have to wait for six months. And it lowers the change failure rate. So these are some of the key reasons why we should consider modernization. There are organizational aspects to it as well; it's not going to be just technology-enabled. The whole idea is that we use technology as the enabler and put a structure in place that allows organizations to do this in a more structured fashion. This is one of my favorite slides that I talk about with my customers. On the left, we started with application lifetimes where it took months or even years to develop the systems, which then ran for years or decades. Those are systems that were developed 20, 25 years ago or even longer ago, and their life cycles were very long before they were considered for any significant change. As we move forward, we have gone from years and months of development to weeks and months of development, and systems now run from months to years before they are considered for a big change. And the latest trend is that the business wants everything out in days or a matter of weeks — even three or four weeks, say. And then the system is out there — we're talking about iPhone apps, mobile apps, integration points, web apps that real customers are using — and suddenly we want to introduce new changes as a result, in a matter of weeks. So these are the dynamics that exist today.
And inherently, they are pushing us to look deeply across development processes, application architectures, deployment packaging, and application infrastructure. If you look across the board: back in the day, we were using waterfall, developing monolithic systems, deployed on physical servers, running in a single data center. As we made the transition from decades to years, we changed a few things. We adopted virtual servers, for example — data centers are still around — we moved from monolithic to n-tier systems, and most organizations either are in the process of adopting agile or have a flavor of it in place. But as we continue to push the envelope, and the demand increases so much that we need to get something out the door in a matter of days, we need to look at what processes will enable that, which application architectures align with that kind of approach, and similarly at packaging and application infrastructure. This is not meant to be an all-or-nothing proposition. For example, we still have customers deploying monolithic applications on virtual servers; that is still possible, still viable. But the general idea is that when we have to move in this direction, we have to look across the board and see how we can align our efforts across all these verticals so that we can maximize our potential. It is a journey, and every organization has to figure out their own path through it. So let me give you an example of how the typical release cycle works. Here we are looking at monolithic release cycles, where generally we have a release plan that governs multiple systems.
Part of that is, when the release plan is scheduled for analysis and development, several teams go in parallel and start the development effort. Whether this is one system or multiple systems, the tendency is that we can staff one or more developers to make modifications to one or more systems at the same time, given that this release plan touches all these components. But then something weird happens: everything has to be synchronized for integration testing, QA testing, and UAT, which means there are environments that are locked down — nobody is allowed to make a change — and things have to be synchronized and sequenced very carefully. So much so that if one database is not available, it has the potential of holding up the entire release train. And once these activities are done, the deployment can finally go to production. Now, if you find bugs during development, that's the cheapest place to fix them in terms of time and money. But if you find bugs in integration, QA, or UAT — which is usually the last step — then it has to go back to the developers and get fixed, and there's a cost associated with that: cost in time, cost in environments, cost in people's time. So the cost increases if bugs are found after the development cycle. And extrapolate that: if we find bugs in production, the cost is even higher, because then we are looking at a hot fix or some kind of patch deployment that will hold up the subsequent release and impact people's schedules, because now this has to be fixed in production right away, everything else has to drop, and this becomes the priority. By the way, this entire set of activities typically spans six to nine months in a typical enterprise. And that's what we continue to see.
Now contrast that with a microservices-based release cycle. Obviously, I'm making a lot of generalizations here. But the idea is that when we talk about modernization and going cloud native, one of the key things we consider is microservices-based architectures. What they allow us to do is something phenomenal, called independent deployability. Systems can be governed by their own individual release plans — that's number one. We basically decouple the releases of systems and take out the unnecessary dependencies so that each system can march at its own pace. Granted, they may have to implement similar mandates from the business, or new functionality, but the releases can be decoupled, so there's no need to synchronize at the release-planning stage. On the development side, there's a plethora of options for development languages. Some systems can continue to remain in Java if that's the key technology; others can be developed in Node.js or Python. For example, if you want to experiment quickly and get something out the door, sometimes Node.js can give you those cycles quickly and get you to a 1.0 release quite fast — and that's probably what you want when getting ideas out quickly to test them and get feedback. Once it matures to the point that it becomes an enterprise system, you can then decide to rewrite it in Java or any other technology you deem appropriate. But the whole idea is that these systems can now go to production independently of each other while the whole system continues to run. And one of the benefits we see as a result: if a bug is found in one of these systems, it can be patched for that particular system without impacting the others.
So there's no need to synchronize the environments. As you can see, there's no need to hold up other systems' releases, because we don't have one single database that locks everybody down, we don't have one environment anymore that locks everybody down, and we don't have one team anymore that locks everybody down. We have potentially scaled up by adopting this new architecture — this new way of writing cloud native applications — in such a way that it gives us the flexibility to deploy independently. This is one of the key benefits that most customers don't realize at the onset: in order to be able to release quickly, independence is key. So I just wanted to highlight that this is one of the key benefits of modernization. Now, we all have hundreds of systems running in production, right? We're not going to shut them down and suddenly start rewriting them from scratch and tell the business, hey, we are not going to do anything for the next nine months or a year — you just pay us and we'll continue to build the new systems. That's never going to happen. So what do you do when you have a large portfolio of legacy applications that is delivering value to the business and is critical to business flows, but you still have to modernize? You still have to move forward; you still have to leverage the benefits of modernization to deliver faster, and everything else that comes with it. So let's talk about a structured approach for moving forward. This is a distillation of the work we have done with various customers in the field. There are three main patterns we have identified, which I'm going to talk about in detail.
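The independent deployability described above can be pictured with a minimal sketch. Everything here is illustrative — made-up service names and versions — but it shows the essential property: each service carries its own release history, so one service can be released or rolled back without any other service's release train being held up:

```python
from collections import defaultdict

# Per-service release history: no shared release plan, no global lock.
# Service names and version numbers are hypothetical.
history = defaultdict(list)

def deploy(service, version):
    """Release one service; all other services are untouched."""
    history[service].append(version)
    return version

def rollback(service):
    """Back out only this service's last release; others keep running."""
    history[service].pop()
    return history[service][-1]

deploy("orders", "2.3.0")
deploy("customers", "1.8.2")
deploy("customers", "1.9.0")   # bad release: a bug is found in production

current = rollback("customers")
print(current)                  # 1.8.2 — and 'orders' was never touched
```

Contrast this with the monolithic cycle described earlier, where backing out a bad release meant backing out every system on the same release train.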
The idea is that when we deal with an application portfolio, we can analyze that portfolio and apply one or more of these patterns, so that in a structured fashion we develop a capability that lets an app team, an app dev manager with 10 or 15 systems, a VP with hundreds of systems, or a CTO with hundreds or thousands of systems put a very structured approach in place. The three patterns are lift and shift, augment, and complete rewrite. So let's see what they look like in action. Moving to this slide: the whole idea is that you have a defined starting point and a defined point of maturity for your modernization needs. On this slide you can see the three steps I presented. These happen to be three steps, but they could be four or five; the point is that there's a starting point, some intermediate points on the way, and a point of maturity that we want to reach. Generally, for lift and shift, we start with a set of systems that are cleanly architected, with clear separation of concerns. And in terms of the target state for cloud native, we recommend developing towards microservices-style application architectures deployed on some kind of container platform that will allow you to grow dynamically. The idea is that you wrap your microservices in self-deployable units that carry the data and configuration, so everything goes together. There are no more shared databases, no more shared configurations; the entire unit is independently deployable. So let's see what that looks like and how we actually go from the starting point to the point of maturity. The first step in lift and shift is that we lift the existing runtimes into containers, but we leave the harder bits — the data, the configuration, the messages — where they are.
In the first step, in this particular example, I'm showing an application that is cleanly architected with UI, business logic, and persistence. That was the typical architecture everybody was developing, say, 10 years ago. If you have this kind of separation — or, even better, clean separation into vertical entities — you can easily lift those pieces and deploy them into containers while continuing to use your Oracle or Sybase or SQL Server, wherever it is. So we are being very explicit in the first step: you make an effort to wrap your application runtimes and run them in containers on a PaaS platform, but you don't touch the harder bits, so that you do not introduce a massive change upfront. This paradigm gets you into a position where you can quickly iterate and improve your system while continuing to deliver business value. We are not telling the business that we are shutting down the system for eight months, nine months, a year, whatever, and not delivering any value. In fact, we are showing them that we are continuing to deliver new functionality and new business value, or maintaining existing value, while at the same time taking a step-by-step approach to modernizing the system. One of the key benefits of this first step is that we introduce a new development and ops methodology that trains the development team and the ops team to deal with cloud-native application deployments down the line. This is where we introduce a fully automated CI/CD toolchain. This is where we introduce container-based development workflows. This is where we introduce disposable environments that can be stood up, say, with OpenShift.
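Leaving the harder bits where they are usually means externalizing the connection details, so the containerized runtime still points at the existing database. A minimal sketch of that idea — the environment variable names, defaults, and hostnames here are all made up for illustration:

```python
import os

# Sketch of lift-and-shift step one: the runtime moves into a container,
# but it keeps talking to the existing external database. Connection
# details come from the container's environment (in OpenShift these
# would typically be injected via a ConfigMap or Secret) instead of
# being baked into the image. Names and defaults are illustrative.

def database_url():
    host = os.environ.get("LEGACY_DB_HOST", "oracle.example.internal")
    port = os.environ.get("LEGACY_DB_PORT", "1521")
    name = os.environ.get("LEGACY_DB_NAME", "ORDERS")
    return f"jdbc:oracle:thin:@{host}:{port}/{name}"

print(database_url())  # same image, per-environment connection details
```

The same container image can then be promoted unchanged from the disposable dev environment to QA to production, with only the injected environment differing.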
And that allows teams to scale independently, without choking everyone at the same time. With just this single step, we have eliminated so many choke points in the release train that the business can see the value right away. Then comes step two. Since we are already running on the container platform and have the automated tooling in place, we are now able to take the system piece by piece and deploy it in a microservices architecture. This approach requires some deliberation and some re-architecture; in some cases we have to be a little careful and rewrite the pieces that don't make sense as they are. But the whole idea is that, now that we are enabled to move faster, we can take the time to do this even on a regular release schedule, because we can open a change request that says, OK, we are going to develop microservice X as part of this release cycle, and that release cycle will also provide the business new functionality. As you can see, this becomes a repeatable approach, and the beauty is that it becomes a very fast way for us to move forward. I'm going to compare our approaches in a minute. The next pattern is what I call augment. Have you heard about augmented reality? Well, we are sort of augmenting our reality here by applying this pattern. Imagine we have systems that are monoliths — systems that everybody's scared of. Sometimes these are mainframe systems; sometimes they are systems developed over a period of 20, 25 years that people are scared to touch. They don't want to make changes. Some of them even have business logic in stored procedures, so if you touch one part of the system, something else breaks somewhere else.
But the business is demanding that we integrate with mobile, or with our partners, or with Salesforce and other systems. So we are in a situation where we cannot move forward, but we have to move forward, right? The way we approach this — let me go through the steps. The first step, while we are augmenting our reality, is that we recommend you do not do anything to the currently running system. Leave it where it is, because people are generally familiar with how to manage it, how to start and stop it, and how to handle its errors; it has reached a level of maturity in the organization in terms of management. What we recommend is to continue to run the system where it is but do two things — make two incisions. The first incision is to expose the functionality, and the second is to expose the data. This allows you to call the functionality from the outside. It could be a web service you write that wraps a call to the mainframe. It could be a messaging-based invocation, where you send a message and an adapter you wrote talks to the mainframe and gives you the response back. Say your mainframe is responsible for order booking or patient records or whatever is relevant to your business need. Suddenly you have exposed a set of services that can be called from the outside using an integration technology such as Fuse, and you have enabled yourself to use the legacy system as a backend. The second part is that you can also submit query requests — give me all the customers with property X, for example — which go back to the mainframe, query its data, and return the result. So you have essentially enabled reads, writes, and updates.
These are the critical operations you want once you are developing new functionality with the mainframe — your legacy system — as a backend. You then develop your new functionality on a container-based PaaS platform and provide the value the business needs. And again, you are not telling the business that you are doing a massive rewrite of the mainframe. Everybody said 10 years ago that we were retiring the mainframe, and here we are: the mainframes are still around; legacy systems are still around. But by taking an iterative approach such as this, you isolate yourself from the changes that will continue to happen in the mainframe/monolith application and start delivering new value to the business. You are not telling them that you are shutting down and retiring that system — everybody would laugh at you. The second step: now that we are delivering new functionality to the business by enabling these integration points and developing in the new paradigm, we can take one piece at a time and deploy it as a microservice as part of our regular release. So if there is, say, a customer feature in the mainframe — give me all the customers, update a customer, create a customer — we can model that as a microservice now, move that entire set of functionality into one or more vertically aligned microservices, and stop using the mainframe for any customer needs. We will still use the mainframe for other needs, but for customers, we go to the microservices. Similarly, we can do this for orders, for health records, for patients.
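The two steps above amount to routing each feature either to a new microservice or to an adapter that wraps the legacy system, and flipping features over one at a time. A sketch of that facade, assuming entirely hypothetical feature and service names:

```python
# Sketch of the augment pattern: a thin facade routes each feature
# either to a new microservice or to an adapter wrapping the legacy
# mainframe/monolith. Features migrate one at a time by updating the
# routing table. All names here are hypothetical.

def legacy_adapter(feature, request):
    # In reality: a web service or message-based call into the mainframe.
    return f"legacy handled {feature}: {request}"

def customer_microservice(feature, request):
    # New cloud-native service that now owns the 'customers' feature.
    return f"microservice handled {feature}: {request}"

# Routing table: which features have been carved out so far.
routes = {"customers": customer_microservice}

def handle(feature, request):
    handler = routes.get(feature, legacy_adapter)  # default to legacy
    return handler(feature, request)

print(handle("customers", "get #42"))  # served by the new microservice
print(handle("orders", "book #7"))     # still served by the mainframe
```

In practice the facade role is typically played by an integration layer or API gateway (the talk mentions Fuse), but the routing-table idea is the same: moving a feature is a one-entry change, invisible to callers.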
You get the picture: you can now take the system feature by feature and move it into the new world, because you have made yourself independent — you are not latched onto the mainframe/monolith's release cycles anymore, right? And eventually, if we get to that point, you can potentially look at retiring the mainframe, no pun intended. The third pattern is what I call rewrite-based modernization. This is sometimes necessary because we have systems that are end of life, that were written so long ago that nobody is familiar with them anymore. People don't know how to change them, so the next step — the only step forward — is to start rewriting. Or say the platform the system runs on is reaching end of life and won't be supported anymore — Solaris, say, if you're running on Solaris, or anything else going out of support — then you have to get out; you have no choice, right? If you find yourself in this kind of paradigm, it's obviously a hard situation to deal with. But the structured way to deal with it — what we recommend — is to develop feature parity. Instead of just saying, I'm going to start rewriting the new system, make a list of the features in the old world that are critical to the business, and then start mapping those features into the new world of microservices-based, cloud native applications. So if there were 10 features the business wants to use, make sure those 10 features are implemented — you achieve feature parity. The reason this is important is that it's easy to start writing new systems but hard to finish, because of scope creep, because of the new things the business continues to request.
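Feature parity can be tracked as a simple checklist: the rewrite isn't done until every business-critical feature of the old system is mapped to the new one, no matter how many new requests arrive in the meantime. A sketch, with made-up feature names:

```python
# Sketch of feature-parity tracking for a rewrite: the old system's
# business-critical features must all be ported before retirement,
# even while new feature requests keep arriving. Names illustrative.

critical_features = {"create order", "update order", "customer lookup",
                     "monthly report", "audit export"}
ported = {"create order", "customer lookup"}
new_requests = {"mobile integration", "salesforce sync"}

def parity_gap():
    """Features still blocking retirement of the old system."""
    return critical_features - ported

print(sorted(parity_gap()))
# Each release cycle makes the trade-off explicit: port some old
# features AND add some new ones, rather than only chasing new requests.
next_release = sorted(parity_gap())[:2] + sorted(new_requests)[:1]
print(next_release)
```

The useful part is that `parity_gap()` makes the remaining retirement work visible, so scope creep becomes a deliberate prioritization decision instead of a silent one.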
Once you start writing — once you announce that you're writing a new system — guess what: everybody is going to come and say, can you do mobile integration for me? Can you do Salesforce for me? Can you give me this report? Can you do X? You will continue to receive new requests; that's the reality of life. But now you are making an explicit decision. You're saying: yes, I will add a new feature, but I also have to maintain feature parity. So maybe I will port five features from the old system and add five new features in this release cycle, then the next release cycle, and the next. That's how you can approach the retirement of the older system in a more structured way. You can prioritize what makes more sense for your business needs — adding new functionality or retiring — and optimize for that. That becomes very important. So let's see how these patterns compare with each other. Generally, people think rewrite is the easiest one, but what we have found is that a rewrite is easy to get started and very hard to finish, because of the things I just described. Lift and shift turns out to be the easiest and fastest to finish, because you are not actually making any application changes: in the first cut, you are lifting the binaries, deploying them into a new paradigm, and enabling yourself for further modernization as a result. So lift and shift can actually give you the best bang for the buck if you have an application that is a good candidate for it.
And what we have seen is that augment is one of the more common patterns, applying to a lot of applications, because there are portions of applications that can be lifted and shifted, but some portions require you to peel the onion — take that piece, shrink-wrap or rearchitect it, and then deploy it in the new world. So chances are we will see a combination of these patterns applied together for one or more systems, and that will be the vast majority. There will also be a class of systems that are good candidates for a rewrite. But the whole idea is that, on the scale of migration cost and time, we can see how these patterns play out and divide our portfolio in a way that lets us prioritize. So when we start marching towards modernization and have inventoried the systems, we can make some educated decisions. One of the key questions is: what does that roadmap look like? If we have a portfolio of applications and now have to move forward across, say, hundreds of applications, what roadmap can we suggest? This is obviously a generic roadmap that I present to customers; the whole idea is that we sit down with each customer and develop a specific roadmap for their own needs. As you can see, the starting point for lift and shift is usually a customer coming from non-open-source middleware applications — applications running on a proprietary stack. The first step we recommend is to enable yourself on open source: get onto the Red Hat JBoss stack if you are not already using it, because, first of all, we have clearly architected the JBoss portfolio to be nimble and cloud ready. And you get the benefit of paying less, because we are a lot more affordable than other vendors.
Chances are that we will be a fraction of even your maintenance cost, so for that alone it's worth the price of admission. That's the first step we recommend. The next step is usually to set up a container platform such as OpenShift in a non-production, lab-type environment so that we can train the team, get our hands dirty, see what it looks like, develop the development workflows and a CI/CD type of pipeline, and put that infrastructure in place. And then we march toward the desired state, where we actually identify the applications and start moving them toward this new architecture. Similarly for the augment piece, the customer is generally coming from mainframe-type or monolith applications. The first step, as I described, is to enable them to create those integration points using, let's say, the Fuse stack or some other integration technology, just so that you can expose the data and features. Once that is done, get yourself enabled on a container platform such as OpenShift, because that is going to be the key for us to move forward, and then start developing your applications. And similarly for the rewrite, chances are that you are dealing with a monolith or mainframe-type application that is about to be retired. We will work with you to develop the feature parity, the list of features, and a capability mapping from where you are to where you want to go. We will then set up and run hands-on workshops so that you can decide which open source technologies you want to go forward with, and part of that is developing cloud native applications as part of your effort. We also have some tools at our disposal that we developed as part of our consulting effort and that are now generally available.
So our methodology is that we sit down with the customers, catalog the applications, and start categorizing them into different buckets. The way we do that is with a tool called Windup and a set of workshops. Windup has traditionally been used to identify key points for migrating applications from WebLogic, WebSphere, or any other Java container to the Red Hat JBoss stack. We are now enhancing Windup to also analyze an application so that it can tell you how good a candidate the application is for being cloud native, how cloud ready it is, and we actually assign a score as part of this. The result is a very good starting indication that this application could be a good candidate for lift and shift versus augment versus something else, and we use that data to develop a plan of action. That plan encompasses your entire portfolio and gives us very good starting points for how we actually proceed forward. It's a structured approach that we can put in place as a result. We have started to field test this with a lot of our customers and we are seeing good results with this approach, and we are continuing to move forward with it. The initial challenge was, when we have a massive portfolio of applications, how do we actually deal with that and put some structure around it? So we now have a methodology, we have a roadmap, and we also have a set of tools that we use. There are some real life examples that I wanted to show you. This is an example of a real customer that had a very large web application, and the problem they were going through was that at Black Friday their servers froze, and they had no way to add more capacity.
So we worked with them using lift and shift, and we were able to deploy key components on the container-based platform, which allowed them to scale very fast. Second, there was a middleware application with huge data in the mainframe, and the challenge for this customer was that user sessions were equal to the DB connections. So once it reached a certain point and there were no database connections available, the system came to a complete halt. In this particular case, we applied the augment pattern: we took the features, ported them into a microservices layer, and introduced database caching. We then moved from fewer than 50 concurrent requests to more than 5,000, a ridiculously large improvement. The whole idea is that we applied this pattern and were able to quickly move forward with it. One of the key things I wanted to highlight here is that we have a very structured approach. We generally start with a discovery workshop, which is generally at no charge, and then we engage with our customers in a design workshop. Remember the slide with the list of applications? We categorize them, run the Windup analysis, and then make a plan; that's what the design workshop is. Then eventually we work with you to scale up in terms of migration and modernization. So it's really a no-big-bang approach, very structured and very flexible. We have been doing this for the past two to two and a half years very successfully. Before we put this structure in place, we had mixed results, but now we have fine-tuned our process into a very structured approach moving forward. So that brings me to the end of my slide deck.
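The caching change described above can be sketched as a simple cache-aside read path: repeated reads are served from memory, so user sessions no longer consume a database connection each. This is only an in-process illustration (a real engagement would use a distributed cache), and the class and method names are invented for the example:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Illustrative cache-aside layer: the "database" function stands in for
// a real DB call, and is only invoked on a cache miss, decoupling the
// number of concurrent readers from the number of DB connections.
public class ReadThroughCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> database;
    private final AtomicInteger dbHits = new AtomicInteger();

    public ReadThroughCache(Function<String, String> database) {
        this.database = database;
    }

    public String get(String key) {
        // computeIfAbsent only falls through to the database on a miss
        return cache.computeIfAbsent(key, k -> {
            dbHits.incrementAndGet();
            return database.apply(k);
        });
    }

    public int databaseHits() { return dbHits.get(); }

    public static void main(String[] args) {
        ReadThroughCache cache = new ReadThroughCache(k -> "row-for-" + k);
        for (int i = 0; i < 5000; i++) {
            cache.get("customer-42");     // 5,000 reads of the same key...
        }
        System.out.println(cache.databaseHits()); // ...but only 1 database hit
    }
}
```

The same shape explains the jump from under 50 to over 5,000 concurrent requests: the scarce resource (DB connections) is no longer on the hot path.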
Thank you. There are some other slides that I won't talk through because there are too many details, but I would be happy to answer any questions at this point. So Diane, back to you if you see any questions or anything you want me to address. Diane, I think you are on mute. So let me start to take a look at some questions here. Do these migration paths have security implications or challenges? Very good question. We try to work within the same security confines as the customer. For example, across the different levels of security, if they have single sign-on, procedures for network security, firewalls, and so on, we try to work within those same confines. If the customer is also moving to the cloud as part of this effort, and traditionally they have not been in the cloud, then obviously those security concerns have to be evaluated as part of this migration effort. But generally I have not seen the security considerations change. What does happen is that when we move to the container-based paradigm, customers are not sure how to security-scan or patch containers, and those are new processes that we put in place. So yes, there are some additions and new things as part of this, but generally we try to work within the existing security infrastructure. Another question: ideas on how to convince customers to get onto these paths, and how to clarify ROI. Frankly, if the customer is not already thinking about modernization, then maybe this is the type of customer who has not really thought through moving to any kind of cloud-based architecture yet. For that type of customer, what I see is that they have this thing in the back of their head, but they don't know how to get started.
And when we present this, the general feedback I get is, yes, this is a very structured approach and we can actually work with you on this. For a customer who has not decided to move to the cloud for various reasons, this could be just the catalyst they need. I have seen several customers who were traditionally on the fence and hadn't moved forward for various reasons, but when we give them this kind of approach, which starts small and can scale up according to their needs, they feel much more comfortable. I wish I had a general answer for you, but it varies from customer to customer. On ROI, we are not recommending a big-bang approach; we recommend starting small, even tackling just the lift-and-shift class of applications, so that customers can see the benefit of lift and shift right away. In some cases, in less than three months, we have successfully moved workloads from non-container-based to container-based environments, and that has allowed customers to realize dev workflow automation and efficiencies in standing up multiple environments quickly. So when we develop the roadmap with them, we try to maximize the quickest wins first, and I think that's where the ROI can be fine-tuned for the customer's needs. Diane, do you see any specific questions that you want me to address? I'm just going to that list right now. Hi guys, I've lost my network connection multiple times the past few minutes and you've handled it very nicely, Zoëb. So thank you. Do you have a plan to ensure that changes to one microservice don't affect other services that depend on it? This seems to be difficult when you aren't always sure who's using your service and how. Great question. We don't just blindly move stuff to microservices.
We actually use methodologies from domain-driven design to model and define the bounded contexts, and we use those as our transaction boundary, process boundary, or microservice boundary when moving functionality from the legacy paradigm to the cloud native paradigm. You're absolutely right: if you are not careful, you can end up with microservices that act the same as a monolith because of the dependencies. So when you decide to move something from legacy to microservices, you have to model where the transaction boundary is and what the key aggregates are that define that transaction boundary, a typical bounded context. Using that knowledge, you then start to port your code or system into the microservices world. The whole idea is that each service should be self-contained, so that you are not unnecessarily crossing transaction boundaries from one microservice to another. Now, there will be cases where multiple microservices depend on each other for data and other things, and that's where event messaging comes in: the microservices that control the main transaction notify the downstream microservices, let's say via domain events. That way you can implement a set of microservices that are fully independent in their own transaction contexts but act as a group when it comes to key business functionality. So Zoëb, I'm not sure if you can hear me? Yes, I can. Okay, cool. My network has been going up and down, but your recording is perfectly fine and I can hear you. And I think you've answered most of the questions in the chat, but there's a good set of conversations going here. The entire deck will be made available for everybody who is listening to this. Are there any other questions? If so, pop them into the chat and we can continue.
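The event-driven decoupling described in that answer, each bounded context committing its own transaction and then publishing a domain event for downstream contexts, can be sketched in a few lines. This is a deliberately minimal in-process stand-in for a real message broker, and every name in it is illustrative:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Illustrative domain-event dispatcher: the owning service publishes an
// event after its own transaction, and each downstream bounded context
// reacts independently, without sharing a transaction boundary.
public class DomainEvents {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    public void subscribe(String eventType, Consumer<String> handler) {
        subscribers.computeIfAbsent(eventType, k -> new ArrayList<>()).add(handler);
    }

    public void publish(String eventType, String payload) {
        subscribers.getOrDefault(eventType, List.of())
                   .forEach(h -> h.accept(payload));
    }

    public static void main(String[] args) {
        DomainEvents bus = new DomainEvents();
        List<String> log = new ArrayList<>();

        // Billing and shipping are separate bounded contexts; they never
        // touch the order service's data directly, they only react to events.
        bus.subscribe("OrderPlaced", p -> log.add("billing charged " + p));
        bus.subscribe("OrderPlaced", p -> log.add("shipping queued " + p));

        // The order service commits locally, then notifies downstream contexts.
        bus.publish("OrderPlaced", "order-1001");
        System.out.println(log);
    }
}
```

In production this role is played by a broker or event stream rather than an in-memory map, but the boundary rule is the same: services coordinate through events, not shared transactions.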
Otherwise, Zoëb, can you put up your final screen with any contact information you might have and how people can reach you and find out more information? Yeah, I can definitely do that. And I apologize for the network; it just decided it was going on vacation too. You can reach me on Twitter, or I have a blog where I write about this stuff, so you can check that out. Those are the two channels I commonly use, and you can reach me there. All right. Well, thanks Zoëb. I'm glad we got to have this presentation today, and I'm sure we'll have a few more patterns in the future, but I do love the augmented reality one, because that's the world we all live in, I think, or will be soon. So take care everybody, and we will talk to you all soon. If you're coming to Berlin, travel safe; that's March 28th, and hopefully you'll get to see Zoëb in person. I'm pretty sure he's got sessions at the Red Hat Summit coming up in Boston on May 2nd and 3rd; I think you're co-presenting with Verizon there on some of these use cases, so that'll be fun. There will also be an OpenShift Commons meetup or gathering on May 1st, the day before Red Hat Summit, and you can register for that now. The Berlin one is sold out.