Okay. So first, the fire exit announcement — you probably saw it several times today; the fire exits are out there. So, I will go on. Okay. Migrating a large legacy ASP.NET application is a big challenge, and one that most companies and developers are afraid of facing. So today I will go with you step by step through that process and show you my way of doing it. My presentation has two goals. The first is to show you the migration that I did a few months ago, and to show you that it is possible, that it is not so hard to do, and that there is nothing to be afraid of. The second goal is to share all the knowledge that I gained during that process.

But first I would like to introduce myself. I'm a .NET geek. I have been working with .NET for more than 11 years, and the last few years I have spent in the cloud. I work at Grape Up, where I help our clients with digital transformation and moving to the cloud. Before I start, I would like to ask a few questions. First: who of you would call yourself a .NET developer? Okay, that's good. And second: who of you has ever run a cf push? Okay, so it's not so bad.

Okay. Unlike the other presentations you saw today, I will forget about .NET Core. For us, .NET Core doesn't exist. I would like to show you that even an old legacy .NET Framework application — even a Web Forms application — is good to have in the cloud; it gives you a lot of benefits, and there is nothing, nothing hard about doing it. Unfortunately, we couldn't use containers, as the application needs Windows Server 2012, which most applications of this kind are running on now. Because of some dependencies we couldn't use Windows Server 2016 — or rather, we were afraid that our application would not work correctly on 2016, since it is not supported there. But Cloud Foundry still allows us to run that application.

Okay. First, when going through a migration process, you need to set some goals. We had three milestones. The first one was continuous integration.
So we wanted to implement a full pipeline for our application, starting with building the application and running some unit tests. The next part was focused on Cloud Foundry: we wanted to deploy our application dynamically to our Cloud Foundry environment and, using that environment, run some acceptance tests against it. The second milestone was to allow the pipeline to create a release — a reusable package — and then, with one click (this is the best solution), give the operator a way to deploy it to any environment they want: UAT, QA, or production. The last goal was to let the operator go to production easily and ensure that any issues found there would be easy to roll back. So we got the requirement that we should have blue-green deployment and also a pre-prod environment. As you can see, our goals were not so hard. We didn't force ourselves to make the application fully twelve-factor, as we wanted to avoid the huge refactoring that would be needed to go to .NET Core, especially with this type of application.

First of all, we also went through the existing process. Unfortunately, nothing could be reused as it was — I see the smiles on your faces, I like it — as is standard with this type of application. All the steps until now were fully manual: manual builds on developer machines, then developers moved the application manually to some dev environment (sometimes not), sometimes it went to QA, again manually, and even deployment to the production servers was manual. Even though it was scaled out to three instances, all the steps were manual.

So now let's go through the application. The application is a teenager — it's 12 years old, written in ASP.NET Web Forms, and some of the code dates back to .NET 1.1. The application requires some custom providers to be installed on the machines: in this case Oracle, Crystal Reports, and a custom archiving tool.
Our hope that it would be sessionless, of course, went away, as the application uses the database to store session state. So yes, we can scale the application out, but we could still have an issue with the database. And the last issue we found, the biggest one, was the usage of Active Directory. In the solution we have several applications, and one of the internal ones was using Windows identity. And those were not all the issues. We had other, smaller ones — for example, some of the third-party web services used mutual authentication and required certificates, even with private keys, to be installed on the machine, in the local store. So, as you can imagine, someone could say at this point: this application is unable to go to the cloud. But I don't agree. It could go to the cloud with only really small refactoring needed.

Okay. As we already knew the process we were starting from, and we knew the application, we could start the migration process. But to do that, we first needed to choose a CI/CD tool. Because of some internal policies and licensing, we had only two to choose from: on-premise TFS, and Concourse. When we thought about TFS, we knew it had a few advantages and some disadvantages. The most important advantages from our perspective were the integrated work tracker, integrated NuGet, and Active Directory integration — so everything the developers knew until now could be reused. But the biggest disadvantage of on-premise TFS was the lack of a CI/CD process written as code — a script, or anything that could be stored in Git or anywhere else and reused. The good news for everyone is that VSTS, the online version of TFS, already has that support, so there is a pretty good chance that soon even the on-premise version will have it. Of course, it was released three months ago, so, you know, everyone needs to test it and check how the first version works — but there is hope. Then we talked about Concourse.
It has totally different advantages than the ones we knew from TFS, but from our perspective it provides very similar functionality. The first advantage of Concourse is Docker: because it uses containers, we are sure our builds are immutable, and each build runs in a totally new container. The second, very important from our perspective, is the Cloud Foundry integration. There is a dedicated Cloud Foundry resource in Concourse that we can use, so we do not need to write any custom PowerShell scripts to push our application — the integration is very smooth. We can also use the same type of authentication as we have in Cloud Foundry, for example OAuth. Unlike TFS, Concourse allows us to use multiple resources and trigger the pipeline if any of them changes. So we can have our source code in one Git repository, the pipeline definition in a different one, and, for example, some other build scripts in yet another. We cannot do that in TFS — we are not even able to do that in VSTS, the online version. And the last and biggest advantage compared to TFS is the CI/CD definition itself: it is written in YAML. These definitions are very nice, very easy to compare and to maintain. The biggest disadvantage at this time is that there is no integrated NuGet and no ready-made NuGet resource. So when we push our application to NuGet after building it, Concourse is not able to observe that and trigger the next pipeline — for example, a deployment when a new version appears there. After considering all the pros and cons, we decided to use Concourse, especially since after a small POC we verified that we could easily use the Maven resource to store our versions, for example in Nexus or inside TFS. Okay. So, as we had all the tools, all the processes, and our migration path chosen, we could start the migration.
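To give you an idea of what such a definition looks like, here is a minimal sketch of a Concourse pipeline using the Git and Cloud Foundry resources. All names, URLs, and paths are illustrative, not the real ones from our project:

```yaml
resources:
- name: app-source            # application code in one Git repository
  type: git
  source:
    uri: https://git.example.com/our-org/legacy-app.git
    branch: master
- name: dev-env               # the Cloud Foundry resource built into Concourse
  type: cf
  source:
    api: https://api.pcf.example.com
    username: ((cf-user))     # pulled from the credential manager
    password: ((cf-password))
    organization: our-org
    space: dev

jobs:
- name: build-and-deploy
  plan:
  - get: app-source
    trigger: true             # run whenever the repository changes
  - task: build               # runs MSBuild, unit tests, precompilation
    file: app-source/ci/build.yml
  - put: dev-env              # cf push via the resource, no custom scripts
    params:
      manifest: app-source/manifest.yml
```

The whole file lives in Git next to (or separate from) the source code, which is exactly the "pipeline as code" property that on-premise TFS was missing.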
So, the first thing we needed to do — if we are talking about an ASP.NET Web Forms application and Windows Server 2012 — was to configure our virtual machines. Unfortunately, we have no Docker, so we cannot do it automatically when we deploy the application; we need to install some things on the virtual machines ourselves. In the case of Concourse that was MSBuild, Web Deploy, and the custom tools we use — Crystal Reports, the Oracle data provider, and any other custom tools required by the application. We also needed to install IIS and Google Chrome, but I will talk more about that later. In the case of PCF, our virtual machines needed to have only the custom tools required by the application.

Okay. So everything is prepared and we can execute cf push. But before we do that, we need to build the application. As is common with old applications, we were not able to build it easily, because Concourse requires that the application builds on a totally clean machine without any specific configuration. Our application, of course, required that other projects sit in some exact folders, and all the developers' machines were set up that way. That is unacceptable with Concourse, so it all needed to be cleaned up. After the cleanup, the next thing we had to handle was using different versions of MSBuild. In our solution we needed several of them, and I have to say Microsoft didn't make it easy: even when you install them all, each MSBuild version sits under a different path, and there is no store that tells you the path to each version. So we did a very small workaround — we set up our own custom environment variables inside Concourse, and that way we were able to manage it.

After we pushed the application, the first issue we saw was that it warmed up very, very, very slowly. So our acceptance tests would give us a lot of failures, and scaling out was very hard to do, as it was not smooth.
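The MSBuild workaround can be as simple as one environment variable per installed version in the Concourse task configuration; the paths below are examples and depend on what is actually installed on the Windows worker:

```yaml
# Excerpt from a Concourse task config for a Windows worker.
# One variable per MSBuild version, set once, reused by all build tasks.
platform: windows
params:
  MSBUILD_4: C:\Windows\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe
  MSBUILD_14: C:\Program Files (x86)\MSBuild\14.0\Bin\MSBuild.exe
run:
  path: powershell
  args:
  - -Command
  - '& $env:MSBUILD_14 .\Legacy.sln /p:Configuration=Release'
```

This keeps the version-to-path mapping in one place instead of hard-coding paths into every build script.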
Because even when we got a new instance, we would get timeouts, or even two-minute responses, because of the warm-up. So we decided to precompile all the ASP.NET pages. To do that in Concourse we needed IIS — that's just how it works, as you probably know — so we installed it. The next issue we found after that: you need to remember that precompilation goes through all the ASP.NET pages. Even the ones that are blocked by feature toggles, or were implemented 10 years ago and are not used any more — all of them need to compile. So, again, some cleanup. But, you know, it was half a day or something like that, so not a big deal.

Okay. So our application builds; we are at the point where we can upload it to our PaaS. In Cloud Foundry, to run a legacy ASP.NET application, we use the HWC buildpack. This buildpack uses Hostable Web Core — an API that hosts the IIS runtime inside a standalone EXE — and that EXE runs your single application. I will tell you more about that in a few moments. To push an application to CF we also need the YAML manifest files, with definitions of routing, URLs, and a lot of other stuff. Of course, we don't want to hard-code this type of file in our source code; we would like to generate it dynamically from environment variables. Unfortunately, there is only one PowerShell module that could support us with that, and after a day of testing we saw that it does not give us the functionality we need, and the YAML it produces is not in good shape. So we decided to use Python. Again, we had to install it on Concourse if we wanted to use only Windows machines; in some cases you can use Linux machines instead, and then it is already there. Okay. So we have the manifest prepared, we have our YAML files, so we could deploy to PCF.
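A minimal sketch of what such a Python manifest generator could look like. Every name here — the environment variable names, the stack, the domain — is illustrative; the real script obviously reads the variables your pipeline actually sets:

```python
import os


def render_manifest(env=os.environ):
    """Build a Cloud Foundry manifest.yml for the app from environment
    variables set in the pipeline. All names here are illustrative."""
    app_name = env.get("APP_NAME", "legacy-app")
    route = env.get("APP_ROUTE", f"{app_name}.apps.example.com")
    instances = env.get("APP_INSTANCES", "2")
    lines = [
        "applications:",
        f"- name: {app_name}",
        "  stack: windows2012R2",   # the Windows stack for Web Forms apps
        "  buildpacks:",
        "  - hwc_buildpack",        # Hostable Web Core buildpack
        f"  instances: {instances}",
        "  routes:",
        f"  - route: {route}",
    ]
    return "\n".join(lines) + "\n"


if __name__ == "__main__":
    # In the pipeline these come from the Concourse task params.
    os.environ.setdefault("APP_NAME", "orders")
    print(render_manifest())
```

Emitting the YAML line by line like this avoids any dependency on a YAML library and keeps the output in exactly the shape cf push expects.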
The integration between Concourse and Cloud Foundry worked so well that there were no issues at all — the application was pushed out and started working. But, unfortunately, it only started. We ran some manual tests and we found several issues. Still, this was not as bad as we thought at the beginning. On this slide I'm showing the most important issues — the ones I think you will hit with any Web Forms application you push to Cloud Foundry.

The first issue appeared when we had several instances — say, two — and a postback went to a different instance than the previous request. Because we don't have sticky sessions, and we don't want them, what we saw was that the ViewState couldn't be decrypted. It couldn't be decrypted because the instances were using different machine keys. So, again, a workaround: set the machine key at the application level. There is no other way around it.

The next issue was with the request context. You need to know that the router used by Cloud Foundry messes it up quite significantly: all the ports will be totally different, all the IPs will be totally different. So all the redirects in our application that are built from the request will have totally wrong URLs. You need to remember that every place using them needs to be refactored.

The next thing is the store location — for example, for certificates. Every place in your application where you use the LocalMachine store needs to be changed to CurrentUser, and that is enough, because when Cloud Foundry starts a new instance — and in the case of Windows 2012 the "virtual container" it creates is nothing more than a new user — it just creates a totally new user. So the next time the application starts, it will run as a totally new user.
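The machine key fix is a few lines of web.config. A sketch of what it looks like — the key values below are placeholders, of course; you generate your own and make sure every instance gets the same ones:

```xml
<system.web>
  <!-- Pin explicit keys instead of the default AutoGenerate, so ViewState
       encrypted by one instance can be decrypted by any other instance. -->
  <machineKey
      validationKey="0123456789ABCDEF...PLACEHOLDER"
      decryptionKey="FEDCBA9876543210...PLACEHOLDER"
      validation="HMACSHA256"
      decryption="AES" />
</system.web>
```

With AutoGenerate, each instance invents its own keys at startup, which is exactly why postbacks across instances failed.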
So, if we install all the certificates, put in some registry entries — anything our application requires — in the context of that user, we can easily use them. The next issue was the biggest one. We spent a lot of time on it, and unfortunately there is no way to work around it: we couldn't use Windows identity in Cloud Foundry. For us, there was a good decision from the PO that this is not a problem, because it is only an internal application and only admins use it. So we could change it to forms authentication and write some custom code: we just hit the Active Directory via an LDAP provider and queried it for all the data.

The next thing is custom management and health information. I won't call this refactoring, because it is not something we had to change in the application. But if you want to have any application in Cloud Foundry, you need to remember that there is no way to log on to the server. If you would like to debug an issue in your application, the ways are limited, so we need to create these custom pages ourselves. We can create them as REST endpoints, like it was shown in the previous presentation. The only thing you need to remember is that if you have an issue in three places — web.config, Global.asax, or any handlers — you are not able to debug your application through managed endpoints at all, because the managed endpoints will not start in that case. So all the debugging of web.config problems needs to be done a different way.

Okay. The next part, once we had the application started and manually tested, was the acceptance tests. I have to say I was very surprised that there were no big issues with Selenium running on Concourse and connecting to an ASP.NET Web Forms application. There were only very small issues. First, Selenium doesn't like UpdatePanels — I think no one does.
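Registering such a health endpoint is mostly configuration. A sketch — the handler type name and assembly are hypothetical; the class itself would be a small IHttpHandler that writes out the application's status:

```xml
<system.webServer>
  <handlers>
    <!-- "Legacy.App.HealthHandler" is a hypothetical IHttpHandler that
         reports database connectivity, version, etc. as a REST response. -->
    <add name="Health" path="health" verb="GET"
         type="Legacy.App.HealthHandler, Legacy.App" />
  </handlers>
</system.webServer>
```

And remember the caveat from above: this registration lives in web.config, so if web.config itself is broken, even the health endpoint will not start.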
Second, the latest version of the headless mode that we use in Concourse — which we are forced to use — is not able to deal with our checkboxes. I don't know why, but it just doesn't like them. There is a workaround: simply send a space bar keystroke or something like that, and it will work. Selenium was also the reason we needed to install Chrome inside Concourse: we were using Chrome's headless mode, so Chrome had to be there.

Okay, so we have reached our first milestone, right? The application is working in PCF on the dev environment; it passes some acceptance tests and it also passes some manual tests. But this is not our final goal, as we would like an application that can be published to several types of servers. So we need to ensure that our configuration is removed from the source code. In ASP.NET Web Forms we have the web.config file, which contains everything when we talk about configuration. There were two ways. The first: massive application refactoring — for example, read all the settings from environment variables inside Global.asax and change the values there. That requires some refactoring, but in some cases it is not even possible. You need to remember that you cannot change, for example, the session configuration from Global.asax, because it will restart your application — you can end up in a loop, you can have a lot of issues. The same goes for certificates: they need to be set up before your application starts. So we decided to use a custom buildpack. A custom buildpack is nothing more than a way to prepare our application before it is started, and to prepare our — let's call it virtual container — that is, to prepare our user to work with our application. Our custom buildpack was nothing more than a set of PowerShell scripts that run before HWC starts. So we just took the HWC buildpack and customized it to run some code.
So first of all, we needed to prepare the configuration. First we read it from environment variables, then we download some files from Git — certificates, and XDT files, the XML Document Transformations that we use to set up our web.config correctly. So the web.config was still there in the application, but it was totally empty, and everything that needed to be there was put in by the PowerShell and XDT scripts, so the application could use it. The next thing was to install our certificates into the user store. And the next one — to avoid refactoring the application a lot, because this application was using the local file system, like probably most legacy ASP.NET applications — the easy workaround was to map a network drive in the context of the user during buildpack startup, and then just change the paths to that network drive. For ASP.NET there is no difference between the local file system and a mapped network drive. And we also don't need any credentials for it in the application, because everything is done by the PowerShell scripts. With these few workarounds we were able to separate our application source code from our configuration.

So we can go to the next part of our migration. To do the release, there were only a few small steps left. The first one was correct versioning. In Concourse there is a way to use the semver resource for version bumps; you only need to remember that to connect it with, for example, your assembly info, you again need a PowerShell script that changes it, or a change in your MSBuild script. For versioning we were using semantic versioning. We also needed to produce some smoke tests — production-grade ones that could even be run in the production environment. And the last part was to create scripts for deployment that change our environment variables and switch our routes to the correct server, depending on whether we know the previous one is working or not.
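For those who haven't used XDT outside of Visual Studio's Debug/Release transforms: a sketch of what such a transform file could look like. The element names and the `#{...}#` token convention are illustrative — in our case the token was replaced with the real value from an environment variable by the buildpack's PowerShell scripts before the transform was applied:

```xml
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Inserts the connection string into the otherwise empty web.config;
         the token is substituted from an environment variable first. -->
    <add name="MainDb"
         connectionString="#{CONNECTION_STRING}#"
         xdt:Transform="InsertIfMissing"
         xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>
```

This way the repository contains no environment-specific values at all, only the shape of the configuration.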
Okay. So it's true that the application is migrated. The application is working in several environments; we can set up as many environments as we want, we can have as many instances as we want, and everything works in Cloud Foundry. But we couldn't say it's a cloud-native application. There is a lot more to do to get to all twelve factors. But even if we don't want all twelve factors, there are still some things that are very easy to do with ASP.NET applications. One is logs and telemetry — we can use Grafana, or any tool that you know and like. It is also good to add SonarQube to the CI/CD. Also, splitting up the big solution: ours was not so big — only 17 projects depending on each other — but splitting it into multiple solutions that use NuGet for the dependencies is very important. Why? Because if you have automatic deployment, if you have continuous delivery, then whenever you make a code change you will want to deploy the application. If you have five, six, seven applications in one solution and you make a change in one of them, you don't want to deploy all of them at the same time. So it should be handled via NuGet dependencies. Also — the total minimum, and I don't say it is the ultimate solution, but the total minimum — is to move your session from the database to, for example, Redis. It will make the application about twice as fast; the latest statistics show that. And for all new features: make them totally sessionless, and put them in separate microservices. And of course, if you have the budget and the time, start refactoring the existing application — but we know how that goes. Okay. Thank you very much for coming here. I know it is very late for you and you are probably very tired. If you have any questions, I will be happy to answer them now.
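(A footnote to the Redis suggestion above: for full-framework ASP.NET, moving the session store can be almost pure configuration, using Microsoft's RedisSessionStateProvider NuGet package. A sketch — host and access key are placeholders:

```xml
<system.web>
  <sessionState mode="Custom" customProvider="RedisProvider">
    <providers>
      <!-- Type comes from the Microsoft.Web.RedisSessionStateProvider
           NuGet package; host and accessKey are placeholders. -->
      <add name="RedisProvider"
           type="Microsoft.Web.Redis.RedisSessionStateProvider"
           host="my-redis.example.com"
           accessKey="PLACEHOLDER"
           ssl="true" />
    </providers>
  </sessionState>
</system.web>
```

No application code changes are needed for the switch itself — Session[] keeps working — as long as everything stored in session is serializable.)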
Also, if you would like to talk to me later or tomorrow, you can find me at the Grape Up booth.