All right, so what we're talking about today is modernizing complex legacy applications. And I have to say, my colleague, Pranjal Bathia, absolutely deserves the top billing here. I apologize, you guys are getting the second stringer. Pranjal was the architect behind all of this work. She's a tremendous colleague, and unfortunately she couldn't be here for reasons involving our State Department's inability to process visas. But let's jump right into it. Legacy application modernization. What do we mean by legacy? I've got a couple of definitions up here, and I'm not gonna read through them, but there are a couple of things that really jump out at me that I want to emphasize. First, the idea of something being outdated. What does that mean? It could mean a lot of different things. It could mean that it doesn't have the look and feel that people like, or that it's missing features that you expect. It could mean that it doesn't mesh well with the other applications that you want to put into place. Or it could mean that it actually has security or other flaws that were acceptable back in the day but are no longer something we're willing to accept. The real takeaway here is that all "outdated" can mean definitively is that it isn't what you would put in today. Given the choice of building something and putting something in now, this is not the thing you would choose, right? The other major point for something to be considered legacy is that it's difficult to replace because it's in wide use. And I can't stress this enough: that wide usage of these legacy applications is a tremendous factor, and in some cases a barrier, that you can't overlook. And since there are people who are using this application, people who stand by it, people who like it, people who rely on it day to day, it really raises the question of whether you should modernize at all. And that's not a straightforward question.
So here we have my personal favorite tool user, the ape from 2001 with his bone smasher. And he loves his bone smasher. It's great, it does the job for him. He takes real joy in using his bone smasher, as you might remember from the movie. And if you offered him a gelatin factory, he would not be excited by that. He wouldn't know why you were giving him this. It wouldn't give him the same visceral joy that he gets from smashing bones with his club. And talking with stakeholders, I have to say, can sometimes feel like talking to this ape with his bone smasher. But they have a very valid point. Why modernize? Why replace something that's working? There are some reasons, some better than others. Improved functionality: any time you're looking at something that is decades further along, it often takes into account many things that weren't necessarily on people's radar back in the day. So the new thing is going to have more features, better functionality, more bells and whistles. That's a reason, but I'll be honest, it's not the best reason. Because with the functionality that's in place, that people are used to, aware of, comfortable with, unless they are coming to you and asking for more functionality, then no matter how much better the new application is in its initial release, there's going to be some resistance. People are going to be uncomfortable; there's going to be pain as people have to change. A better reason, or an additional reason, is the expansion of system capabilities, and these are the ilities, right? Those non-functional aspects: scalability, performance, maintainability, flexibility, all those things that architecturally make a system flexible, easy to work with, easy to work into future plans. That said, the most compelling reason that I've found in the many modernizations that I've done really comes down to opportunity management and risk management.
And if you're asking, why should we modernize this application? Those are the places that I would look to first, and here's what I mean by those. Risk management is probably the easiest one to explain. You've got that COBOL application, it's been running like a champ for 40 years, but it runs on specialized hardware, it's getting kind of hard to find COBOL programmers, and if something goes wrong, are you going to be able to bring it back up? If your business relies on something that is not easy to support, then it's creating risk, it's very easy to quantify that risk, and as you're doing business continuity exercises, it's easy to bring up and spot places where something might need to be modernized. Opportunity management is the flip side of the same coin. There are things that you can do with systems today that weren't necessarily available back in the day. One of the easiest ones to mention is of course scalability, right? By modernizing, and this is where I'll get into containers, by modernizing to a container-based solution, you can potentially scale up in a way that the legacy application can't hope to compare with, and this can create great opportunities for your business. That said, there are also reasons for caution when you're thinking about or proposing modernizing a legacy application. First of all, I already mentioned the resistance, the fact that there's going to be some pain associated with change, and you can count that in terms of cost, and you can also consider it in terms of business disruption. Those are huge elements to consider. Another element that often gets missed is drift and decay: strategic drift and feature decay.
Feature decay is when you scope something out, you scope it to perfection, and then you start implementing, and in the time it takes you to implement, the features that you scoped out so carefully become no longer relevant, for any variety of reasons. This happens in really any business context. There is feature decay, and the thing that makes features sticky is actually use. So the longer it takes you to get something into production, the more acute your feature decay is going to be. Strategic drift is a very similar factor, where because the company is looking at other options, other ways of doing business, it's entirely possible for a proposed migration to come out the other end and discover that the entire function you've been looking at is no longer relevant to the business, for whatever reason. Those are some of the large pitfalls that you should be wary of. That said, let's assume for the moment that you've decided to go ahead with migration. You're going to modernize, which is great. It's a great choice, but that application needs to be updated. Well, how do you do it? It's like any project. You figure out where you are, you figure out where you want to go, you figure out how you get there, and then you execute. In this case, you actually have a real leg up in figuring out where you are, because you have an existing system. And for that existing system, the technical side certainly matters, but more important is the business function that the system is serving. That's where you really want to focus your audit: on understanding that business function, the business value that's being created, and the people that rely on it. Then you move forward, consider what you would replace it with, compare the technologies, evaluate your potential targets. Get ready. Get your technologies aligned. Make sure that you have a good sense of what you're replacing. And I'm going to go into each of these in detail.
Make sure you have a good sense of where every factor within your system is going to move to. Make your plan and then deliver on it. It's just that simple. Well, okay, it's not actually simple. Auditing the system. There are a lot of things that go into a system that often get overlooked, and this series of steps is a pretty good way of capturing all of them. First, before you even look at the architecture, I would say you need to understand the business. This is what I was referring to before: understanding the business, understanding the use of the application, understanding the business users and the value that they're receiving from that system. Until you understand that functional aspect, you're really in a terrible place to propose a replacement solution. But once you've done that, then you can start identifying the specific aspects of applications and infrastructure, and move to assessing and validating what you've come to understand. Then things get interesting. The mapping and dependency steps are where things get pretty technical. Mapping the relationships between applications, and between applications and value, is where a lot of the major challenges happen in planning out that longer deployment. And then you work through all of the dependencies on those applications. You can think of this as breadth and depth; the dependencies are the depth. Understanding not only the technical dependencies, but the ways in which different corners of the business rely on the system, can often reveal things that are very unexpected. Any questions? Just a comment: I'm used to giving this in a much shorter period of time, so if you have a question, please holler. I will probably cover all of the material in less than 45 minutes otherwise, but I'm really happy to dig into some of these points.
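That mapping-and-dependency step lends itself to a concrete sketch. Here is a minimal example of ordering business functions for migration by their interdependencies using a topological sort; the function names and dependency edges are hypothetical placeholders standing in for whatever the audit actually uncovers:

```python
# Sketch: order business functions for migration so that anything a
# function depends on is migrated before the function itself.
# Names and edges are hypothetical, not from a real audit.
from graphlib import TopologicalSorter  # Python 3.9+

# Each function maps to the set of functions it depends on.
deps = {
    "openstack_cert": set(),
    "hardware_cert": {"openstack_cert"},
    "cloud_cert": {"openstack_cert"},
    "admin_functions": {"hardware_cert", "cloud_cert"},
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # dependencies always appear before their dependents
```

If the graph has a cycle, `TopologicalSorter` raises `CycleError`, which is itself useful information: it flags clusters of functions that can only be migrated together.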
And if we get to the end, I can just tell you more stories about different migrations that have gone horribly wrong. Evaluate. So once you know what you're trying to do, you start figuring out what your options are. And this is a basic Q analysis, right? You say, we have a bunch of things that we need for our target system, these are the various qualities they're going to have, here are our requirements, and how can we meet all of those requirements? Evaluation is fairly straightforward. The preparation step is less so. When I talk about preparation for a complex system, you probably have many, many subsystems and different aspects. Some of these are going to be very clear candidates for migration. Some you may be able to replace with something that's commercially available now but wasn't originally. Some you might want to keep exactly as they are. For every aspect of the system, you need to look at it independently and ask: is this an opportunity for rehosting? Is this an opportunity for something to be phased out completely? Is this something that needs to be re-architected from scratch? And then we get to planning. There are a lot of factors when you start trying to say, how are we going to bring this all together into a plan? The users and the user disposition are critical, as we'll get into a little more when we start talking about one of the real-world experiences that we've had. Another aspect, which isn't actually listed here but certainly falls under stakeholders, are the developers and other people associated with the current maintenance of the system. They need to be factored into the plan as well. They're going to be critical for modernizing your system, retiring the old components and bringing the new components up. And you ignore those people at your peril. Sensitivity of data.
Often, these legacy systems contain data that, well, let's just say isn't necessarily being handled the way the industry would like, but it's been handled that way for so long that it's kind of grandfathered. People have forgotten about it. That's not good, and it's not an excuse. But once you start turning over those rocks, you're going to raise some real questions about what you're going to do in the future. I would strongly suggest bringing InfoSec, or whatever your security team is, in very early. They can often provide a real case for modernization and help get the buy-in that's required. They're also going to be extremely important as you try to figure out how to manage this data going forward. Often, because of the state of the legacy system, people may be used to a degree of access which is really inappropriate. That's something that is going to need to change, and you're going to need the help of allies in your security team in order to make that change. Understanding the business functions, and the way in which you can tackle them, is one of the most concrete ways to approach planning. Some business functions are going to be very simple, some will be more involved, and there will, of course, be interdependencies between them. So mapping these out and figuring out what order you can approach them in is going to be critical for your success. Application size and complexity of data play similar roles. And then, once you've got your plan, you can move on to delivery. And here we have your basic iterative movement, your OODA loop, or, by any other name, your agile iterations. And that note at the top, I want to stress: you should try to get to an MVP as quickly as possible. A minimum viable product, something that you can put out there and have people actually start using, as soon as possible. Sooner than is comfortable. Sooner than is good, right?
You should put something out when it barely works, when you know people are going to complain. Why? Well, a couple of reasons. First of all, feedback is absolutely critical for this kind of project. You are going to, and I know I've said it a few times, I'm gonna say it a few more, you are going to upset people. There will be people who do not want this to change. And the sooner you can get their feedback and start responding to it, the better. Also, and this is maybe a little underhanded, but it's psychologically valid: they will be happier. If they see something bad and then see rapid improvement, they will be happier than if you release something that is so-so and then can't respond very quickly to them. Getting the users using your system, giving you feedback, and responding to that feedback early is one of the most critical paths to success in a project like this. I'll tell you, just anecdotally, I've seen any number of these efforts fail due to the developers trying to get everything perfect before they had anyone see it. That's a trap. And it falls into the same problems that we were talking about earlier with feature decay. As they work and work and work to try to get everything exactly right and exactly according to the specifications they were given, those specifications become out of date. And they eventually release something that they've spent multiple years on to an audience that can see no value in it whatsoever. So go to an MVP as soon as possible. All right, so now I have gone through how to do this in the abstract. Let's talk about a concrete example. My team has done quite a few of these, and one that we did fairly recently was for Red Hat's certification engine, the certification platform. So what is solution certification? Well, certification is a very overloaded word everywhere, but especially at Red Hat.
So I'm not talking about the way in which we recognize engineers who have achieved a certain skill in a particular area, and I'm not talking about a number of other things related to the word certification. What I'm specifically talking about here is when we work with partners to develop a solution together, or when partners develop a solution using our products, those solutions can be certified. And this happens in a number of different domains. At the time that we did this migration, there were three particular domains that we were interested in. Hardware, where the certification was for Linux, for RHEL, running on hardware produced by these partners. OpenStack software, where partners were creating solutions that would work on top of OpenStack. And cloud providers. So those are the three areas, and you'll probably see them mentioned later on. Three big areas where we're working with partners, those partners are developing solutions, and then those solutions need to be certified for support, to make sure that we can support our mutual customers well on those platforms. The certification system, the CWE, had been around for just a little bit less time than we've been working with OEM partners on hardware, so more than a decade. And the way it had been built out, and I'll get to this in the next slide, was with multiple systems connecting back to this certification workflow engine to allow testing the partner solutions against a number of different test suites. Make sense? Any questions about this? So it's a pretty critical part of Red Hat's business, right? Working with partners, absolutely essential. Not something that we could shut down for several months.
And it was also something that, with every release, as you might imagine, every release of OpenStack, every release of RHEL, these certifications would need to be renewed and reconsidered, so the system needed to be constantly updated with those new requirements and be available to work through those new certifications. The legacy system: so, more than a decade old. It was built on top of Bugzilla, which must have made sense back at the time, actually. I know the guy who did it, and it made sense when he did it. But as it grew organically over many years and was expected to reach out to more and more systems, this became pretty unwieldy. Perl is good for many, many things; it's really not the easiest language in which to support long-standing features at an enterprise level, in my opinion. But the bigger problems with the system were not that it was built on top of Bugzilla or that it was built in Perl. The real problems were that it had developed so organically over such a long period of time. It was undocumented in terms of its functionality and scope. It had many aspects that people were using that were never really intended by the implementing team; since it was built on top of another platform, people found interesting ways to leverage the platform, like users will do. Supporting it was difficult and performance was bad. First of all, because it was an old system that had never really been scoped for the kind of traffic it was receiving. But secondly, because as time had gone on and more and more requirements had come for it to interact with other parts of the Red Hat ecosystem, this old platform had had stuff just sort of added on top of it, glommed on, and it never really had an opportunity to be evaluated for scalability. Let me just show you the numbers we're talking about. Requests it could handle per minute: 25. Response time of the API: over three minutes. Now, no, it's not good.
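To put those figures in perspective, Little's Law (average concurrency equals arrival rate times time in system) turns them into a rough picture of load on the legacy engine. This is my own back-of-the-envelope arithmetic on the quoted numbers, not a measurement from the talk:

```python
# Little's Law: L = lambda * W.
# Taking the quoted figures at face value: ~25 requests handled per
# minute, with an average response time of ~3 minutes, implies roughly
# 75 requests sitting in flight inside the system at any moment.
throughput_per_min = 25   # lambda: requests completed per minute
response_time_min = 3     # W: average time a request spends in the system

in_flight = throughput_per_min * response_time_min  # L
print(in_flight)  # 75 concurrent requests on average
```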
I could say, well, that API is talking to other systems; this isn't serving a webpage to someone sitting in front of a browser waiting for three minutes. But still, it is bad. So the good news is, this is a good target for modernization. On scalability alone, you look at it and you say, you know what, we can do a lot better here. However, I'll say the first step for modernization here was not to say, you know what, we can do better, we're gonna promise the world to everybody, we're gonna go out on a limb and do this. Let me talk about why we didn't really want to make those promises. There were a couple of concerns. One, we knew we had to continually support the business. We didn't have the option of putting this into maintenance mode, developing the new thing, and swapping it over; not an option. This had to be continually kept up to date with the latest Red Hat releases, otherwise the business couldn't function. Also, this was a new business domain for us. This came over to my team, and none of us had a really good sense of how certification as a business was run. So that was the first step: we get into the audit. And you know, it's interesting, in some ways, coming at the audit from a position of ignorance actually probably helped. We really could look at it with a fresh set of eyes. But the steps are the same regardless, whether you're coming at something fresh or whether it's something that you have been maintaining for years, and we've done both. So the first thing: stakeholder analysis and looking at the business, doing interviews with all of those stakeholders, talking to them not only about what they're doing today but about where things were going, trying to protect against that strategic drift, and doing our architecture discovery. That's the biggest section of this.
Now, once we'd done that, we actually had the ability in this case to go through, audit the existing logs, and validate what we'd heard, validate what we had understood. And there were some discrepancies. There were things that people had said were absolutely critical, we cannot survive without this, that it turned out nobody was using. Nobody had been using them for years. But we wouldn't have known to look for that if we hadn't first done the interviews. So you go through the logs, and then you continue with the mapping and dependency work. Now, one of the biggest challenges here was that there were so many different ways in which people were using the systems, through various interfaces that weren't really well documented. And so one of the decisions we had to make pretty early, in this process analysis and particularly in the dependency analysis, not on the technology but really on the business, was what we were going to support, and what part of that long tail we were going to lop off. One of the things that really helped there, and that I really want to call out, was the dataflow diagram. I wish I had it to show you, but I don't. By mapping out the dataflows, we were able to identify the points of leverage that we could use in our migration. And that strongly affected our decision about where we were going to continue to support things, and what sort of things we were going to say: you know what, that's not crucial to the business, it's not widely used, and we're going to phase it out. We'll talk a little bit more about this later. Yes. How long did the audit take? Yeah, the question was, how long did the audit take? And I would say, since we were coming in fresh, there was a little bit of getting our feet under us. I would say we took four months.
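That log-validation step can be sketched in a few lines. Here is a minimal, hypothetical example: the log format, the endpoint paths, and the claimed-critical list are all placeholders, but the idea of checking stakeholder claims against observed usage is exactly this:

```python
# Sketch: validate stakeholder claims against real traffic by counting
# how often each endpoint actually appears in the access logs.
# Log format and endpoint names are hypothetical placeholders.
from collections import Counter
import re

sample_log = """\
2013-04-01 10:02:11 GET /cert/openstack/submit 200
2013-04-01 10:05:40 GET /cert/hardware/status 200
2013-04-01 10:07:02 GET /cert/openstack/submit 200
"""

endpoint_re = re.compile(r"(GET|POST)\s+(\S+)")
usage = Counter(m.group(2) for m in endpoint_re.finditer(sample_log))

# Features stakeholders called critical in the interviews.
claimed_critical = {"/cert/openstack/submit", "/cert/legacy/export"}

# Anything claimed critical that never shows up in the logs is a
# candidate for the long tail to phase out.
unused = claimed_critical - set(usage)
print(usage.most_common())
print(unused)  # {'/cert/legacy/export'}
```

In practice you would run this over months of real traffic rather than a sample, but even a crude count like this surfaces the "we cannot survive without this" features that nobody has touched in years.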
We took four months, and that was understanding the business and really digging into the modernization effort. Now, it's an interesting question, because this came over to us and we didn't immediately say, you know what, this is an outdated system, it's crap, and we're going to modernize it, you'll thank us. That was not the approach we took. We were talking with stakeholders, of course, from day one, and we got to a point pretty quickly where stakeholders were coming to us and saying, we really need these changes. And that's the greatest way to approach this, right? When the stakeholders are telling you what you need to do. So we sort of started our audit before we had made a decision to modernize. At the same time, I have to say, we had it in the back of our heads from the very beginning, because there were some pretty significant problems. Okay, evaluation. Evaluation for us was easy because, first of all, we had access to all this fantastic Red Hat technology, which I can't say enough good stuff about. I'm obligated to. But also, we had actually already done some modernizations, and we were using a lot of this stack already as a result of those. So we were really moving on a proven solution. And there'll be a, actually, I think my next slide. Nope, there's my slide. Just to talk a little bit about that solution. It's really a hybrid-cloud-deployed microservices platform and integration platform, based on Fuse, an AMQ messaging cluster, and then interfaces using React and PatternFly, backed by Redux. So you can see our stack listed out here, but it really all comes back to microservices deployed on Fuse, something we've used in a bunch of different places, and it has proven very flexible and powerful for us. But again, I have to say, when you're planning your own migration, you have to do your own assessment.
You can't just say, well, this worked for Mike and I trust him; you have to do your own assessment and make sure it works for your needs. Preparation. So there were a bunch of architectural things that we really wanted to do, but we weren't sure we'd be able to, and this was the point where we considered all of those. Moving from a monolithic database to a distributed data store. Moving from a file system on NFS to a distributed file system. Moving off of Bugzilla, which, like I said, made sense in the past, but all of this information was really customer relationship information, vital for our customer relationships. And we have a CRM, Salesforce, so we wanted to look at using Salesforce as our backing system and system of record, not for everything, but for some of that critical customer relationship data. And of course, moving from a monolithic application to microservices. These were all open questions when we started. Now, we ended up being able to do all of these, but they all had to be evaluated as part of the preparation step. And then, going back to that earlier slide, there were all of these questions about, well, what are we going to do with the different pieces? Some things we actually kept. The parsing logic that existed for taking apart the information we were receiving back from partners as part of this testing, we kept that pretty much entirely. There were also some features in that long tail I was talking about that we could just retire, and we communicated that out and we removed them. And this, as I was mentioning before, is the target system that we moved to: a microservice-based architecture based on Fuse. Now, one of the decisions that we really needed to make: there were three different interfaces that customers and partners, and also internal associates, were using to interact in some ways with the CWE.
And this was hardware.redhat.com, which is a website that was essentially a thin veneer over the Bugzilla interface, and which no one was supposed to have been using for, I think, five years, but it had never been retired and it was very, very actively used. So we had to make a decision here about what we were going to support and what we weren't. And one of the decisions we made early on was that we would just retire hardware.redhat.com. We would kill it, and no one would be able to use it anymore. Even though it was officially supposed to have been phased out before, there's a reason why it wasn't, right? This created a lot of angst, a pretty significant outcry, and it took six months to go from, okay, we're going to retire this, to actually getting approval to retire it. A little more on that in a minute. Let me talk about our plan. So we used a phased approach, and you may recall I talked earlier about the different aspects of certification. The software, this is the OpenStack software, hardware, the cloud program, and admin functions were the four big areas that we identified, not just as functionality, because actually a lot of the functionality was the same, but as stakeholders. Each of these had a very distinct stakeholder set. So what we could do is say, for phase one, for our MVP, we're going to target one of those stakeholder groups, and we chose the OpenStack functionality and the OpenStack stakeholders. We did that because it was the simplest business case, because it was something where we felt like we could get to an MVP quickly, but also, and critically, because we had very close and active relationships with the people who were administering the whole OpenStack program, and they were used to things moving quickly. So they were very open to working in that MVP environment. Hardware was the toughest. Hardware was the long-standing program, and it was difficult.
So phase two became hardware. We're going to get over the hump. One of the things you might ask about is admin functions. Why did you leave admin functions to the end? Are you just trying to make your life hard? And that's a fair question. Honestly, we ended up doing a lot of stopgap solutions between phase one and phase four, but the reason we left admin functions for the end is because admin functions have no direct business value. We could not at any point say, yes, yes, business, we have successfully created admin functions, which you don't actually care about. It wasn't a win for the business, and keeping your momentum going is largely, if not entirely, about demonstrating wins to the business incrementally. But this isn't the whole plan. There are two other crucial things before we get to the execution. Change management: developing those champions, aligning the teams, and starting your communication. This has to be at the beginning of every migration project. And then process transformation, which dovetails with this. And here's something that I want to stress which is a little counterintuitive: I strongly recommend front-loading impacts. What I mean by that is, you're going to change things for people, right? You're going to change their processes whether you want to or not. Their lives are going to change. Make them change earlier than they need to. Make the changes to their systems; retire systems that they're relying on. That's one of the things we did. We retired hardware.redhat.com, killed it. People were using it. We communicated it widely, but we retired it before we needed to. What that does is it forces them to engage with you, and it allows you to skate to where the puck will be. It allows you and your stakeholders to start focusing on the future rather than focusing on the current state.
Now, obviously this has to be done pretty carefully, because disrupting the business is something no one should do lightly. You should not be cavalier about disrupting the business. That said, front-loading the business process impacts is one of the best ways to make sure that you're getting that critical feedback from the business early, and that it is strong. If people can just keep doing what they've been doing, you may get some feedback, but it's not gonna be very strong. It's going to be, here's what I think, here's some stuff. But if you front-load those business impacts, people will come back to you and they will tell you exactly what you need to do, and they will tell you loudly. Again, something I strongly recommend, but not to be done without an element of caution. And then delivery. Delivery in phases and iterations. We do those phases; each phase has to represent a major business win, and each iteration has to represent an incremental business win. I strongly advocate agile methodologies within the phases. And yet a lot of the stuff that you've seen probably looks a little waterfall-y. Well, that's because you have to chunk these things out and deliver the major pieces of functionality, but underneath those major pieces, we were always running agile. Every three weeks, we were delivering something, trying to deliver a win for our customers. And the results. So we're not setting the world on fire; you'll see our API response time is still 20 seconds. But a lot of that has to do with those backing systems. The point is not just where we've gotten to. The point is that we are now on a system where we can continue to scale, continue to innovate, and continue to improve, without the drag of being on a system that requires a large team just to maintain the status quo. So we're continually improving now, and the improvements so far have been quite dramatic. So, takeaways. Thorough impact analysis.
Can't stress it enough. Containerization, I didn't spend a lot of time talking about, but containerization was absolutely critical for us to get to that scalability and to move into that hybrid cloud space, which has been so effective. Building smaller services, really decomposing them into logical small services, gave us a lot of flexibility, and that's something I would certainly recommend. Not only flexibility in terms of deployment, but flexibility in terms of implementation order, allowing us to develop a small piece, release it, and then move on to other things that were dependent on it. Focus on business processes and users, especially as you're mapping out your transitional states, those major phases. And a phased migration, strongly recommended, allows you the flexibility to back up as necessary and avoid pitfalls. And of course, always look for opportunities for automation to ease your journey. Thanks very much. Now, how are we for time? Can I overrun? Questions? Yeah. Thank you.

I just want to ask whether your legacy system had unit tests or integration tests before, so that when you did the new one, it was easier to test. In our case, we also have a legacy system, but we don't have any kind of test environment; testing was done by QA people before, and those people are leaving. So we don't have a clear way to test the correctness, and it's not easy for us, so I just wanted to ask.

Well, yeah, it's actually not that different. So, one of the things I didn't mention about this was the existing team that was maintaining that legacy system. When we came on, the team that was maintaining that legacy system was fairly large.
It was 10 people, but none of them had been working on the system for more than two years, none of them had been involved in any of the decisions, and none of them knew where the bodies were buried. So even though there was an existing team, in doing that analysis we turned over a lot of rocks. And it sounds similar to what you're describing, where when you plunge into that analysis, there are a lot of unknowns and there's a lot that you can't always rely on the team to tell you. And as you're looking for a migration target, yeah, as I mentioned, we've done this a few times. We had a target system, but the first time we implemented the target system was, obviously, the first time, right? And we had to do that analysis, look at every possibility, and spend some time on it. So yeah, every migration is a little different, but I think that these steps would work for you in the same way that they've worked for us. Other questions? Yeah.

So you mentioned your team. I got the impression that your team kind of comes in, helps to do this, and then rides off into the sunset. Did you enable the existing team to maintain the system over time? That's my first question.

Interesting question. So, every one of these is a little bit different. In this particular case, the existing team had been working on this system for a while, and none of them were happy. They were working on a legacy system, trying to maintain it, and it was grueling for them. One of the things that we did as part of this was that every single member of that team moved on to a different role within Red Hat. And that was interesting, and something that really had to be communicated carefully, because they would come to us, and, I mean, they were all happy to have a job.
So they'd come to us and they'd say, it seems like my job is going away. And we had to work closely with them to explain: yes, your job is going away, but you're going to have a better job, and we're going to work with you to make that happen. And we were able to make good on that promise. That was critical, because to complete the migration, that team was absolutely essential. I was involved, not directly, but I was on the periphery of a migration that went horribly, horribly wrong, where it came out after the fact that the implementation team had been sabotaging the effort from day one because they were afraid for their jobs. It didn't go well for them. It was bad. So my team actually took over the ongoing functionality. This microservices architecture, my team runs that. The existing CWE is retired; it went away. All of those CWE team members are now working in other areas, and they're all much happier. But it was an interesting challenge, just from a management perspective, to make sure that we got there cleanly.

So now that you've gotten over that hump, your team is responsible for the ongoing support and improvement?

Of the function, yes. My team manages that function, correct. As well as many other functions.

And how long did the project take?

I should know this. Two years, a little less than two years.

So when you were introducing the new, like, phase one, I'm assuming that there were still parts of the legacy system in production?

Absolutely. The legacy system remained in production until the end of phase three.

So you're kind of, what, surgically replacing parts of the legacy system with new parts?

Well, let's see. This was why we were targeting stakeholders rather than systems. So we moved over stakeholders, right? And you can think of it as the business function. So the entire OSP OpenStack certification function moved over. Now, theoretically, they could have gone and tried to do things on the old system.
Well, they couldn't, because we shut down those endpoints. But we had moved all of the users to the new function. And the way we did it was transparent to the partners, by abstracting out all of the endpoints. We were basically able to say, at one point, okay, all of these APIs no longer go to these endpoints, they go to those endpoints. But we were able to do that for one function at a time.

And maybe I missed it at the beginning, but can you describe your team again, what your team does? And that's my final question.

Describe what my team does? Everything, they're awesome. So, yeah, I probably should have mentioned this. Workflow Enablement is the name of my team, and what we do is manage and run the internal tooling for customer experience and engagement within Red Hat. So that includes support delivery functionality, customer success functionality, and now, as of about two years ago, certification functionality. Does that answer your question? Other questions? All right, well, thanks very much. And if you have any other questions, let me know; I'll be very happy to talk to you. Thank you.
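The per-function endpoint cut-over described in the last answer, where APIs stop pointing at the legacy system one business function at a time, can be sketched as a small routing table in the strangler-fig style. This is a minimal illustrative sketch, not the actual Red Hat implementation; all paths, hostnames, and function names here are invented:

```python
# Hypothetical sketch of abstracting endpoints behind a routing table.
# Partners keep calling the same URLs; we decide per function which
# backend serves them, so one business function can be cut over at a time.

LEGACY = "https://legacy.example.com"   # invented hostname
MODERN = "https://services.example.com" # invented hostname

# Start with every function routed to the legacy backend.
routes = {
    "/certifications/openstack": LEGACY,
    "/certifications/hardware": LEGACY,
    "/admin": LEGACY,
}

def resolve(path: str) -> str:
    """Return the full backend URL for a request path (longest prefix wins)."""
    match = max(
        (prefix for prefix in routes if path.startswith(prefix)),
        key=len,
        default=None,
    )
    if match is None:
        raise LookupError(f"no route for {path}")
    return routes[match] + path

def cut_over(prefix: str) -> None:
    """Point one business function at the new system; partners see no change."""
    routes[prefix] = MODERN

# Phase cut-over: move only the OpenStack certification function.
cut_over("/certifications/openstack")
```

Because the routing table is the only thing that changes at cut-over time, backing out a phase is just restoring the old entry, which matches the "flexibility to back up as necessary" point from the takeaways.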