So welcome to our talk, in which we're going to give a bit of an insight into a POC that we did about a year ago to migrate a Java EE legacy application to Cloud Foundry.

As a brief agenda for today, we're going to introduce ourselves, of course, tell a little bit about where we work, and then dive into the topic of the migration. We'll give a bit of background on what the initial application looked like, the technical and business context, why the situation called for migrating that application, and what kind of benefits we expected from that. Then we're going to describe the migration path, the individual steps that we took to make the application ready to run on Cloud Foundry. In detail, we're first going to look at the application and what it took us to get it into a cloud-ready state and make it deployable and runnable in a container environment. After that we're going to look at the platform, then at persistence and messaging, and basically how we handled the backend services.

As a quick introduction, I'm going to start. My name is Matthias Heusler, and I'm here with my colleague Thorsten. We both work for Novatech, a consulting company based in Stuttgart but operating pretty much Germany-wide. I mostly do things like this: helping clients get their applications and workloads running in the cloud, be it from a legacy perspective or with a greenfield approach. Besides that, I also run the Cloud Foundry meetup in the Stuttgart area, so having said that, everybody who is close to Stuttgart is invited to come and join. Thorsten, do you want to say something?

My name is Thorsten. For me, a lot of things are the same. I also work on cloud migrations and cloud platforms, but I don't organize any meetups or anything like that. You help me a lot. I help you a lot. Okay, good.

Now, we're not going to talk much about what Novatech does in total, just a few things on how it can be described best. We do a lot of community work, as I already said: I organize the Cloud Foundry meetup, and colleagues of ours also run the Docker meetup. Besides that, we have a pretty strong pillar of agile development and agile coaching, so there's also a Scrum- and agile-based user group that colleagues of ours organize. And that pretty much reflects on the outside what we also do on the inside of the company. If you want to know more about that, it's probably better if we talk offline afterwards.

Okay, let's get into the topic of the presentation. At the time we started the migration POC, this was the landscape the application was in. We had basically three different apps: we started with one, but then migrated two more. And they were in an environment under constant heavy load, about a million requests per hour, which brought in additional requirements on how to handle the migration and avoid any downtime on the switchover. In total it's about 230,000 lines of code. The application was at the time deployed in three geographies, and it was running on a WebSphere environment, as you can see here: a typical Java EE stack running on WebSphere, connecting to DB2 as the persistence backend and using MQ as the messaging component.
To the outside world, it exposed endpoints via SOAP, via JMS, and via REST. The WebSphere version was eight, which basically meant it was at Java EE level six at that time. Most of you will probably be aware what that means with this kind of monolithic application: whenever there was a code change, not just a release but even something like a hotfix, the entire application had to be built and redeployed. The concept used to ensure high availability during the switch was a line-switch concept: there were two WebSphere environments, you would deploy the new version on one side and then switch over. Needless to say, all of this took quite a bit of time for every iteration, and the idea was to reduce that complexity, modernize the infrastructure, think about further splitting the application into microservices, and then be able to deploy faster, deploy more often, and get a faster feedback loop.

This slide basically describes the approach we took and the things we're going to talk about today. At kickoff, as I said, the application was in a self-hosted, private environment running on a WebSphere server, and the idea was to bring it onto a managed Cloud Foundry environment and split it into individual services. To get there, the first step was to get rid of the WebSphere application server, because that is not really the component you want to run in a container. It's technically doable, but it's not really the right way. So the first approach was to look at the WebSphere Liberty profile. For those of you who don't know it, this is basically a very lightweight Java EE runtime that IBM also uses a lot for their own Cloud Foundry and container services, as a migration path for the heavy WebSphere applications. And this was still something we tried on the same operating system level as before, so no cloud yet at that point.

The second step was to use a local development Cloud Foundry environment and deploy the application, which hopefully was running successfully by then, there, but still locally. That meant we would be able to reuse existing infrastructure components like DB2, MQ, and so on. The next step was a bit bigger: going from the local development environment into a hosted environment. That basically means the backing services we had in our local development environment would go away, and we would have to make up our minds about how to deal with those legacy services, what needs to be migrated, and how we can handle that. And the final step after that was transforming the application: building new functionality not by putting it into the monolith but as separate services, and trying to decompose the application.

Okay, now speaking of the application, a couple of things. As I already said, if I say make the application cloud-ready, that technically means making it able to run in a container in a proper way. That also meant trying to take the state out as far as possible, or validating that there is no state, in order to make the application scalable.
And of course, as you saw before, the application was in a critical environment because there were so many requests on it, so we had to choose the path with the least amount of risk. This was not an application where we could just try things out and play around. We had to ensure that the migration goes as smoothly as possible and does not affect the end users at all, and ideally enable a direct migration from the old platform to the new one.

Okay, so when we started, and I said something about that initially already, there were basically three options. We knew from our local development environment that it is technically possible to run the full WebSphere application server in a Docker container, but we ruled that out very quickly. The second option was, as I said before, the Liberty profile, and the third approach would have been to completely rewrite the application from scratch and implement it in a clean cloud-native way, with domain-driven design, microservices, and all that. Now, we got a lot of support and advice from both IBM and Pivotal: Pivotal helped us on the Cloud Foundry side, IBM helped us with the Liberty part. The first option, as I said, was ruled out very quickly; not even IBM recommended it, even though it is an IBM software product. It takes a long time to boot and it doesn't support a WebSphere cluster, so we didn't spend much time thinking about that. The other one, the re-implementation as microservices, resembled the state we wanted to be in at the end. But still, as I said before, we had about 230,000 lines of code. It would have taken a huge development effort, combined with a lot of risk, because it would have been difficult to test it in parallel to the existing application still running in production on the hosted WebSphere. In the end, that would have meant maintaining the old code and keeping it up to date until go-live, and in parallel re-implementing everything as microservices. That led us to the conclusion to go on with Liberty.

On the reason why we didn't want to maintain parallel branches: we actually did try that on one of the three applications. We tried to implement the application as a Spring Boot version instead of continuing with the Java EE approach. But we very soon realized that this actually more than doubles the effort, because, as I said, the old master branch that you see here at the bottom still needs to be maintained, and on every development or release cycle you would have to merge all the changes into the Spring Boot version as well, and then you have the test effort on both sides. That just didn't turn out to be a good solution. So we discarded that idea and said, well, we stick to the 12-factor app guideline: stick with one code base and deploy out of that code base onto different platforms using only differences in configuration. The result for us was one application code base that we could run on both the traditional WebSphere server and the WebSphere Liberty server. This required a few changes, but it gave us the advantage of one code base that was constantly being tested, both in real-world tests and in production, because that code was actually running.
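Just to illustrate what "one code base, only configuration differs" can mean in practice for a Java EE application like this one: the code can refer to backing resources purely by JNDI name, and each server then provides that resource in its own way. The unit name and JNDI name below are illustrative assumptions, not the actual values from the project.

```xml
<!-- Sketch of a persistence.xml that stays identical on both servers;
     the persistence unit and JNDI names are illustrative assumptions. -->
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence" version="2.1">
    <persistence-unit name="appPU" transaction-type="JTA">
        <!-- The application only knows this JNDI name; traditional WebSphere
             and Liberty each map it to their own DB2 data source definition. -->
        <jta-data-source>jdbc/AppDS</jta-data-source>
    </persistence-unit>
</persistence>
```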
And we just had changes on the configuration side, where we said, okay, this is the configuration we need to apply when we go for WebSphere, and this is the one we need to apply when we go for Liberty.

Good. So you saw that diagram before. Most of you in the enterprise application development space will probably have seen a diagram similar to that, if you haven't already fully migrated to microservices by now. The first step was basically just to drop WebSphere and exchange it with Liberty. I can't go into much detail here for the sake of time, but for those who want to try that path, we can definitely say that it worked very well for us. The way you configure Liberty is that you basically have a set of features. It has the full capability of a full-blown Java EE server, but you configure manually which of the Java EE features you actually want to enable or disable. We knew we had some heavy EJBs in there, we had JSP, JDBC, and so on, and then you just configure the individual features that you want to have. We also used some migration tooling that IBM provided to us, which took the configuration of all the backing services, the queues, the JDBC connections and so on, from the traditional WebSphere and applied it to this configuration as well. In the end, it's all packed into one server.xml file, and that is used when Liberty starts; you'll see a small sketch of what such a file can look like at the end of this part.

Okay, with that, I'll pass on to Thorsten for the platform and backend part.

Okay. Now that we know how to bring the application onto Liberty, we need to talk about how we bring it to the cloud platform. In customer projects, it is quite common that you get some requirements; we got the requirement to run cloud-ready applications on Cloud Foundry. So we decided to try first with a local PCF Dev and use the existing database and messaging services, and if that worked out, we would go to PCF on Microsoft Azure and use cloud-native database and messaging services.

So how do we bring Liberty onto Cloud Foundry? On the left side, we have the application running on our local machines, our notebooks, and on the right side we are using Cloud Foundry. We need to wire the persistence and messaging services to the Cloud Foundry instance. That is pretty easy, but how do we bring the application there? There are some changes to the application landscape: you will notice that we remove SOAP, and MQ also needs to disappear; I will talk about that later on. So only DB2 stays for now. Instead of running the application locally, we run it on Cloud Foundry, using the Liberty buildpack instead of anything else.

So what does the Liberty buildpack do here? We use a self-contained deploy artifact that we hand to the Liberty buildpack. First of all, we need to create that deploy artifact. That means we can use Liberty to package our application, our configuration, and our libraries into one archive, which we can put onto a PCF or CF instance. Then we reference that archive in our manifest file, you see that here with the path to the zip file (a sketch of such a manifest follows below), and start the famous cf push command, which reads that manifest file, pulls the Liberty buildpack, creates an image, which eventually becomes the droplet, and puts our deploy artifact on top of the Liberty buildpack. That means the Liberty buildpack does things for us; we will talk about that later on.
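To make the feature-based Liberty configuration a bit more tangible, here is a stripped-down server.xml sketch. The feature versions, the data source, the library path, the queue-less setup, and the port are illustrative assumptions only, not the configuration from the POC.

```xml
<!-- Minimal Liberty server.xml sketch; feature versions and names are
     illustrative assumptions, not the project's actual configuration. -->
<server description="migrated legacy app">
    <featureManager>
        <feature>ejbLite-3.2</feature>
        <feature>jsp-2.3</feature>
        <feature>jdbc-4.1</feature>
        <feature>jpa-2.1</feature>
        <feature>jaxrs-2.0</feature>
    </featureManager>

    <!-- The application keeps looking up the same JNDI name on both
         traditional WebSphere and Liberty; only this wiring differs. -->
    <dataSource jndiName="jdbc/AppDS">
        <jdbcDriver libraryRef="db2Lib"/>
        <properties.db2.jcc databaseName="APPDB"
                            serverName="db2-host" portNumber="50000"/>
    </dataSource>
    <library id="db2Lib">
        <fileset dir="${server.config.dir}/lib" includes="db2jcc4.jar"/>
    </library>

    <httpEndpoint id="defaultHttpEndpoint" host="*" httpPort="9080"/>
</server>
```

Everything the IBM migration tooling extracted from the traditional WebSphere configuration, queues, JDBC connections and so on, ends up as entries of this kind in that same file.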
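And to make the packaging and push step concrete, here is a rough sketch of what such a manifest could look like. The application name, memory size, instance count, archive name, and the way the buildpack is referenced are all placeholders for illustration, not the values we actually used.

```yaml
# Hypothetical manifest.yml sketch; all names and sizes are placeholders.
applications:
- name: legacy-app
  memory: 1G
  instances: 2
  # Self-contained archive created beforehand with Liberty's packaging
  # command, e.g. "server package <serverName> --include=usr".
  path: legacy-app-package.zip
  # How the Liberty buildpack is referenced depends on how it is installed
  # on the platform (as a system buildpack or via a Git URL).
  buildpack: liberty_buildpack
```

With a manifest like this in the current directory, a plain cf push reads it, stages the application with the Liberty buildpack into a droplet, and runs it; backing services bound to the app (for example via cf bind-service) then show up in the VCAP_SERVICES environment variable, which is what the credential injection mentioned later relies on.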
So now we have our image on a PCF instance, and that worked well, it was nice. The next thing is: how can we get that onto a cloud-deployed PCF instance? There is more migration needed; there will be no DB2 there, there will be no MQ or anything like that. We need to migrate database and messaging, just as I said before. Currently we have this running on the local PCF, and we will change to PCF on Azure, where you only have to change the very bottom layer. So there is no change to the application or its configuration; we only change the hosting layer. We will talk about that later on, but now we will talk about persistence. Pretty fast, because it was a huge topic and we have only 30 minutes, with 10 minutes left, so I will hurry up.

We of course had requirements there, too. We need to use a cloud-native database, we need to continue using Flyway for database migrations (there's a small sketch of that below), and continue using JPA, since we still have a Java EE application. And that database also needs to be as good as DB2 regarding all the things our customer knows and is familiar with: scaling, and backup and restore mechanisms that are pretty much the same. You may have guessed it, you saw it on the slides before: we chose Azure SQL. There were other candidates, like MySQL or PostgreSQL, but in this particular case Azure SQL was just the right decision for that application. We created a benchmark catalog where we benchmarked all of those against DB2, and Azure SQL won the deal. We wrote about that, because it was a pretty huge topic, and in the end you can say it was easier than expected; you can read about it on our Novatech blog. And since we don't have much time left, I will continue to messaging.

So we have SOAP, we have JMS over MQ, we have REST. We need to use cloud-native communication services, but the application landscape still needs compatibility with non-cloud-native messaging applications. So what can we do there? We need JMS, and JMS is not cloud-native. We could use AMQP and put a JMS interface in front of it, which would be, for example, Apache Qpid, but we couldn't use that, since we had some technical incompatibilities with our Java EE stack. So we introduced something we called async REST, which means we put a callback URL into the REST request (there's a small sketch of that idea below as well), and we integrated an AMQP-based component right in the middle. That was our solution there, and we also learned something: we could use async REST to clean up the messaging stack, so no SOAP anymore, since it was not used anymore in the rest of the application landscape. And async REST will be removed again once the application landscape allows it; then we will introduce AMQP overall. We wrote a blog post on why we left JMS on the ground instead of taking it to the cloud; you can read more details there, or of course you can reach out to us after the talk.

Using AMQP is not the only option. There are a lot more services you can discover and use in your application if you have PCF on Azure, for example, and it's also possible on AWS and other providers: you can find a lot more services on the CF marketplace if they are connected via a service broker.
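As a rough illustration of what keeping Flyway in place looks like, here is a minimal sketch of a programmatic migration against an Azure SQL connection. The JDBC URL, credentials, and script location are hypothetical placeholders, not taken from the project, and assume Flyway's standard Java API.

```java
// Minimal Flyway sketch, assuming Flyway's standard Java API and a
// hypothetical Azure SQL (SQL Server) JDBC URL; values are placeholders.
import org.flywaydb.core.Flyway;

public class MigrateDatabase {
    public static void main(String[] args) {
        Flyway flyway = Flyway.configure()
                .dataSource(
                        "jdbc:sqlserver://my-server.database.windows.net:1433;database=appdb",
                        "appuser",
                        System.getenv("DB_PASSWORD"))   // never hard-code the secret
                .locations("classpath:db/migration")    // standard Flyway script location
                .load();
        flyway.migrate();   // applies pending versioned SQL scripts in order
    }
}
```

In the POC the migrations ran as part of the normal deployment rather than a standalone main class; this is just the smallest self-contained way to show that the existing Flyway scripts keep working once only the JDBC URL points at the new database.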
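And to make the async REST idea a bit more tangible, here is a small sketch of what such an endpoint could look like with JAX-RS: the caller passes a callback URL, the request is acknowledged immediately, and the result is later POSTed back to that URL. All class, path, and header names here are hypothetical, and the thread-pool hand-off simply stands in for the AMQP-based component we actually placed in the middle.

```java
// Hedged sketch of the "async REST" pattern: accept a callback URL,
// acknowledge immediately, and POST the result back later.
// Names, paths, and the header are illustrative assumptions.
import javax.ws.rs.*;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import java.util.concurrent.CompletableFuture;

@Path("/orders")
public class AsyncOrderResource {

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    public Response submit(@HeaderParam("X-Callback-Url") String callbackUrl,
                           String orderJson) {
        // Hand the actual processing off; in the POC this is roughly where
        // the AMQP-based component in the middle would sit.
        CompletableFuture.runAsync(() -> {
            String resultJson = process(orderJson);
            // Push the result back to the caller-provided callback URL.
            ClientBuilder.newClient()
                    .target(callbackUrl)
                    .request()
                    .post(Entity.json(resultJson));
        });
        // 202 Accepted: the caller receives the result asynchronously.
        return Response.accepted().build();
    }

    private String process(String orderJson) {
        return orderJson; // placeholder for the real business logic
    }
}
```

The point of the contract is only what the callers see: send a request with a callback URL, get a 202 back, receive the result later, independent of whichever message provider sits behind it.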
So, for example, using the database in the classical, local wiring way would be like the upper half of that screenshot; on the platform, the buildpack takes care of injecting the values, the credential strings, and also of rotating the credentials from time to time, just by using those variables. So that is pretty easy: using the Azure services only through the buildpack.

What we learned there: in general, using PCF Dev was very helpful. We could simulate how the cloud migration applied to our application, and we could, and did, migrate the database and messaging systems, or rather their usage, while the app was still being developed. We could test that and successfully migrate to PCF, and in the end it was very impressive to see how straightforward the push from PCF Dev to PCF on Azure was. It was like just changing the API target and pushing up to PCF on Azure. That was pretty easy.

So we're heading to our Q&A session. I will step in real quick. When I look at the lessons learned, they're all correct. As you probably see, we did that migration POC over the time span of about a year or so, and it's kind of tricky to get all the technical details into half an hour. But what I would really like you to take away: if you're in a similar environment, running on WebSphere, and you want to migrate to the cloud, Liberty can definitely help you as a vehicle to get your application cloud-ready. And as Thorsten just said, we were really impressed by how well PCF and Azure work together. Like that marketplace picture you saw before: you have the same cf push, cf bind-service experience that you know, and all the Microsoft services are basically tied in as native services. You don't have to worry about anything there at all. So I think that's pretty much it to sum it up. If you have any questions, you can ask them now, and we can of course try to help you and advise you on migration strategies, too.

Did you also consider in your analysis applying something like the strangler pattern, to move certain web services or even certain endpoints only to the new environment, rather than the whole thing?

Well, we definitely considered it. We're not in a position yet where everything has been decomposed and is running as microservices. What we did was: whenever we had additional development efforts, new items coming in, we did not put them into the monolith anymore. We started writing microservices in Spring Boot instead. So it was more like evolving into a composition of one monolith and a couple of microservices around it, and in this way we tried to starve the monolith and start taking functionality out. It was similar to the strangler pattern, but not applied in the sense that we rewrote functionality that is in the monolith on the outside and let it die out in the monolith. That is something we will apply. At the moment, the situation is still that the monolith is running on WebSphere in production and the microservices are running in the cloud, connected via the network. But as soon as we get to the strangler pattern, it's one we will definitely apply. Thanks.

First question: how much time did it take you to do the entire transformation, the migration work? And secondly, have you encountered or done similar work where the whole monolithic app was instead deployed on WebLogic? Okay.
Well, I can answer the second one first. We have not done anything with WebLogic so far, but we have had experience with a couple of different WebSphere environments; that seems to be a very popular application platform in our area. The first question, how long did it take: that is kind of difficult to say, because we tried a couple of different paths, providers, and platforms, and we narrowed it down to this one for this talk. I think the majority of the work was on the development side, making the overall application runnable on both WebSphere and Liberty while ensuring that it is continuously up and running and doesn't have any defects; that was probably about three months or so for each application part. The database migration towards Azure, from a schema perspective, just getting the tables and the structures in the database ready, that was your task, what did you do? That was about one or two days. That happened really quickly. We were lucky enough that the legacy database we took over didn't use any really DB2-specific features that wouldn't have been available on the Azure SQL side, so that helped us there. For the POC we only used the standard test data; we haven't done a full data migration of the productive data to the cloud yet. Does that answer your question? Yeah. Okay. Thank you.

Sorry, do we have one more? Did you have any two-phase-commit type of challenges with the database, where you have to synchronize? You mean the transaction problems? Not really. We didn't use an XA data source in the past, so we don't use one in the future either. This was another thing we didn't have to worry about much. The application was big, but it proved from our tests to be pretty stateless already, so we didn't have to rewrite anything there. It was a backend component under heavy use, but the design of the application was pretty good for a monolith, so shifting it into a stateless container was not much of a problem.

How much of the code did you touch, did you refactor, on the WebSphere side? That's question one. And the second question: we see exactly this initial architecture at many customers, and the MQ piece is very deeply embedded, even in the source code. There's a lot of stuff in WebSphere, there are helper classes for MQ and so on. How did you get around that?

Okay. Well, the first question was how much code did we have to touch on the WebSphere side. There's a short and a long answer. There's an analysis tool that IBM provides: you submit your application archive and it detects all the changes you would have to make. That came back with a number of around 10,000, so at first we said, okay, maybe this is not the right approach. However, there were many, many duplicates in there, and the tool didn't work very well for us because we actually had to do a bit of a mix and match between Java EE 6 and 7 features, and the tool did not support that; it just went from one version to the other. And on the roughly 200,000 lines of code, how much did we have to change? Maybe a thousand lines or so. The time it took was more about testing: does it work on either side? It was pretty much deployable straight away.
But we had to go through various test cases and see: okay, here we get an exception that a class is not found, so we have to add that feature; but that feature collides with another one, so we have to tweak the versioning there. So it was more of a configuration effort in Liberty than actual code changes on the WebSphere side. We had, I think, three or four stories in development that our team had to handle to move forward with that.

And the other question, about MQ: it took a bit of time to prepare the migration away from MQ. We only had one or two slides on that, but this was not the only application in that overall environment that will migrate to the cloud, and we did not know whether all of them would migrate at the same time. We said, okay, if one is in the cloud and doesn't have MQ, and another one is still not in the cloud and has it, then we need something that is independent of the message provider. And that's why we came up with the async REST approach with the callbacks. We basically decoupled it completely from the messaging backend to make the transition into the cloud, with the goal that once all of the applications are in the cloud, we use a common message provider and switch it on again. Again, it's been a while, but I don't remember the messaging part being particularly hard to get out of the application code. But every code base is different, so it's not easy to give a general answer there. I think we're about two minutes over time. Maybe one last question, and then we'll still be around if you want to know anything; just feel free to pick it up.

You said the requests per hour were one million before. Yes. And after the migration, what's the impact, given that Liberty is the lightweight version? Do you see any increase in the response time?

No. Of course, we didn't get the chance to run it in a productive environment with the same amount of real-world load. But we used Gatling load tests to simulate that, and this is something we missed to show in the slides here; it was actually one thing that worked very well. We could scale the application to multiple instances, and using Azure SQL as the bound service, we could trigger a certain amount of invocations. Then we could see that the application stopped responding because the database couldn't handle it, and you could basically, live, with just a slider bar, increase the capacity of Azure SQL, and it would just recover from that and scale on demand. So this was actually something that worked better and was easier to handle than in the non-cloud environment. Did this answer it? Okay. Good. All right. So with that, we say thanks for listening, and we're going to be here.