Hey, good evening all, and a warm welcome to everyone for our talk. We know it's the last session of the day and many of you are getting ready to head back home and enjoy the weekend, so we'll keep the session as interesting as possible. I'm Poonima, from Cognizant. I'm an engagement lead: I strategize and deliver transformation engagements for the enterprise. And this is Alphas, my architect, who partners closely with me on the journeys I'm working on for my clients. Today's talk is all about how you rapidly transform .NET legacy apps onto Cloud Foundry, and how you really do it at scale. Many a time we start the journey but get stuck, because there is so much legacy baggage that we carry (if I take insurance as an example of an industry), and the moment we start, if we end up failing, it becomes "why should I progress with this journey at all?"

With that, a quick fire-exit announcement: in case of any fire, please go to the closest fire exit and leave the conference room.

So, we are talking about rapidly transforming legacy apps. We cannot just do that; we need to know the key drivers that are going to drive the transformation itself. Why do we need to know those drivers, given that this is a technology transformation we are planning? Because when we say we are going to do it at scale, we have to understand what that means to the enterprise. We really need to focus on three key drivers: the business drivers, the economic drivers, and the technical drivers. Needless to say, the business driver is the key for any transformation you do, because you need to deliver value to the business.
And when we say deliver value, what it means is that your technology transformation should deliver the change or the expansion the business is looking for pretty seamlessly. And of course the value proposition should not be based purely on the technology or the infra cost or aspects like that, but on what it means to the end customers. We need to identify that first when we are talking about thousands and thousands of applications to transform.

By default you get good, optimal savings on infrastructure and platform operations when you move away from your static infrastructure into cloud. Automatically your developers can focus on developing things rather than carrying the overhead of managing operations, and of course the development cost itself comes down, because you automatically course-correct on your CI/CD pipelines and other things, and the way you deliver change to production becomes quick and seamless as you move into the real transformation journey. And last but not least, there is the technology uplift you get: you move away from vertical scaling to horizontal scaling, and from IaaS to PaaS, because of which you get self-healing and auto-monitoring of the applications you host, and of course immutable infrastructure.

Why do we say we need to look at these key drivers as a key element? Because then you can really strategize your roadmap. We are talking about thousands of applications, and you cannot just pick an application randomly and start the transformation. We need to categorize them; we need to identify the answers for these three drivers.
And then we even go to the extent of classifying them as simple, medium, and complex based on the changes they would need: whether you need to re-platform, re-architect, or really modernize the applications. Then you can build a holistic roadmap that you can baseline and use to measure the transformation journey itself, as opposed to picking applications in pockets and transforming them into cloud one by one.

We have had this experience. For one of our enterprises we tried doing it in a very unplanned fashion, and all we could do was make progress on some three or four apps that we were able to push into cloud. But when it became "okay, now we are looking to move thousands of apps: what is the cost benefit going to be, and which apps should we be moving?", all those questions came in. That's when we thought it made sense to do extensive discovery and scoping sessions with the enterprise: identify their imperatives, what they want to achieve as their end goal (it can be anything from customer experience to infrastructure savings), determine that, and then go marching into the cloud transformation journey itself.

And what does it take to re-platform? When we say re-platform, all it means is that you apply the critical 12 factors and create a cloud-ready, cloud-friendly app. And which are the candidates that are really good for this kind of treatment?
It is mostly the batch applications, the back-end systems, and the less frequently changing apps. They are pretty stable, they have a good, solid code base; all they require is to be re-platformed, made cloud-ready, and given the ability to operate on the platform. And cost is definitely a big deal there, even if you do only that, because of the same aspect: your static infrastructure is gone and you are on cloud.

Then, on the modernization front, which candidates are suitable? The heavy-tech-debt applications: the ones with old, heavy code bases that really require a lot of dynamic scaling because they are data-intensive apps. They need to be re-architected for this digital era, so that the changes you apply to them go out to production quickly and thereby really impact the end customer. And of course there is the same value of more secure applications, because they are going to be on the cloud, and on the static infrastructure itself you are going to do a lot of saving.

With that I will hand over to Alphas, who is going to talk more about re-platforming. The focus for today's session is on re-platforming, not really modernization, so we are going to talk about how you can actually go about re-platforming the apps. Over to you, Alphas. Thank you.
Good evening, everyone, and thanks, Poonima. So, re-platforming: how do we do it, especially since we are focusing on .NET applications here? We divided the re-platforming process into three phases: discovery and scoping, build and test, and scale and operate. Poonima covered most of the discovery and scoping part, so I will go over the build phase, go more into the technical side, and see how we can achieve the re-platforming of those applications. Then, once we are done with build and test, we get into scale-and-operate mode, which is the end state of the system: we scale and tune our applications and handle production readiness. All the later things, after delivering the application to production, happen in that third phase.

We identified eight key enablers. They come out of the critical 12 factors: the factors we focus on in re-platforming, because there are certain mandatory factors your application should comply with so that it can run in cloud. These are the minimum we are talking about. And we came up with strategies and approaches for how to do the re-platforming of a .NET application (this can also be generic) in smaller stages, so that even if you get stuck somewhere, it is a smaller failure; you can turn around, and you get a quick turnaround time following these processes.

The first key enabler we will focus on is code base, CI, and delivery. You have your application in your legacy repository, whatever it is; take that application into a new repository, and see if it already has automated integration and regression test suites. If those are already available, then we are lucky and can just go to the next step; if not,
identify the breaking points. We have to identify the integration points that may break while re-platforming this particular application; identify those and build automated test suites. Why are we doing this? Because we will go over multiple stages of re-platforming, and we cannot rely on manually testing the application every time. If you build something at the very beginning, at a minimum level, you can leverage it at every stage as we move on, and that saves a lot of time and effort.

The next thing we would suggest is to upgrade the framework of the application to the latest version. In .NET we may have legacy frameworks, right from 1.1 onwards; we have to upgrade to the latest ones, at least 4.6 or above, so that we can leverage the standard libraries. Today all those libraries and APIs are available in .NET Standard, so we can leverage them; later in the session we will see where. So we have to upgrade to the latest framework. Then, obviously, we have to build the CI pipeline and the delivery pipeline for the application, so that we can push it to Cloud Foundry and test it out as we go through the stages.

The next key enabler we will discuss is dependencies: making the application self-contained. Here we are talking about dependencies like app and registry dependencies, and internal or third-party client utilities that the application uses, for example any database clients. Usually, in our traditional way, on our static infrastructure, we install our operating systems with all the necessary applications and drivers up front before running our application; for example, you need to have IIS installed on the operating
system to have a web application running. Now think about Cloud Foundry: we are running small containers, which are a thin image of an operating system, and you won't have anything pre-installed on them. So you have to carry everything within your application. Identify those kinds of dependencies, bring them into your published binary folder, and modify your application so that it uses the dependencies from there. That is the first step you have to take, and once it is done your application is ready to push to cloud, because you no longer need to think about any dependencies inside the container.

Your application is all set; you have everything within the application package. Now you think about creating a manifest, and then push using the Cloud Foundry command, cf push. If you look into the manifest, most likely you will deal with one of the two buildpacks you see on this slide: the Hostable Web Core (HWC) buildpack or the binary buildpack. For console applications, and non-web applications generally, you can go with the binary buildpack. It does nothing special: it takes whatever you give it as part of your binary bin folder, hosts it in the cloud, and makes it run there. The Hostable Web Core, on the other hand, is a thin version of the layer underlying the IIS framework, and it allows an application to run on top of it. The buildpack gives you that runtime capability, so when you push an application with that buildpack, it will be able to run on top of it in cloud; remember, you don't have IIS on the containers.

The third enabler we talk about is configuration. In cloud, think of it this way: configuration should be injected via environment variables.
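A minimal manifest for the push described above might look like this. This is a sketch only: the app name, memory, path, and stack value are illustrative, and the exact attribute names (for example `buildpack` versus `buildpacks`, or the Windows stack name) vary between platform and cf CLI versions.

```yaml
# manifest.yml (illustrative values)
applications:
- name: legacy-web          # hypothetical app name
  memory: 512M
  stack: windows            # Windows stack name depends on the platform version
  buildpacks:
  - hwc_buildpack           # use binary_buildpack instead for console apps
  path: ./publish           # folder containing the published binaries
```

With a manifest like this in the published folder, a plain `cf push` is enough; the environment-variable injection discussed next also lives in the same file.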
In our normal development methodology, what we practiced before was to create a static XML file, put all the configuration in it, and make the application read it from there. When we go to cloud we cannot do that anymore; we have to treat every configuration value as an environment variable. For that we have certain things like Microsoft's IConfiguration, which is one of the coolest things to use, and we have another option in Cloud Foundry: user-provided services. You can see some samples in the next slides.

The first one is the old state, using ConfigurationManager. If you look into the code you can see how we traditionally get configuration for a web application from web.config and a custom configuration file. The simple change is to keep the ConfigurationManager usage as it is and make one small code change: take the value from Environment.GetEnvironmentVariable instead. That takes you out of the problem. And how are you going to make these environment variables available to your application in cloud? For that you can make use of the manifest. I have put a small screenshot here showing how you can inject environment variables into your application as part of the manifest. You can also use cf commands to do it, but the manifest is a nicer way, so that the application's deployment configuration is all in one place.

I am not going to spend much time on the next slides; they cover the next enhancement, which is how you can leverage Steeltoe. Steeltoe gives you, out of the box, the ability to read configuration from the Cloud Foundry environment, so you can very well use it. I have two slides here explaining how it all works together: one shows what the end state of the configuration file looks like.
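The small change described above, reading from the environment first and falling back to the old config file, might be sketched like this (a sketch only; the wrapper class and key name are made up for illustration):

```csharp
using System;
using System.Configuration;

public static class AppSettings
{
    // Prefer an environment variable (injected via the manifest in Cloud Foundry);
    // fall back to web.config/app.config so the app still runs on the old infrastructure.
    public static string Get(string key)
    {
        var value = Environment.GetEnvironmentVariable(key);
        return string.IsNullOrEmpty(value)
            ? ConfigurationManager.AppSettings[key]
            : value;
    }
}

// Usage (hypothetical key name):
// var conn = AppSettings.Get("ORDERS_DB_CONNECTION");
```

Because the fallback stays in place, the same build can be validated on the legacy servers and on the platform during the transition.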
The second shows that you just have to create a configuration model (for example a SQL connection string, an output file path, anything environment-specific can be brought together externally) and then a configuration manager within your application that builds the configuration for the whole application using Steeltoe's Cloud Foundry support. The code shown is almost complete; anyone who is interested can look at it later and take the code from it. The only remaining thing is that you have to invoke this manager where your application starts. Our web applications start from the Global.asax file, as everyone knows, and from there we use the Application_Start event. That is where application start-up fires, and from then on you have all your configuration available anywhere in the application. So that is using Steeltoe.

Then one of the factors: treat logs as event streams. Logs are nothing but application events, so we have to treat them as streams. In the old way we used to write them to files: we had a lot of logging frameworks that we used to write logs as files, in some file system or somewhere. Here, instead, we have to treat everything as standard out or standard error. That is simple to implement, and we will show the code snippet for it. Once you implement it, you can use tools like Splunk or Graylog, the ones you see here, to drain the logs out of Cloud Foundry and do whatever analysis you need. Cloud Foundry has its Firehose, which aggregates all these streams of events coming out of the applications.
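Treating logs as an event stream, as described above, can be as small as this (a sketch; the wrapper class and message format are illustrative, not part of the talk's slides):

```csharp
using System;

public static class Log
{
    // In Cloud Foundry, anything written to stdout/stderr is collected by the
    // platform's log aggregation (the Firehose), so no file appenders are needed.
    public static void Info(string message) =>
        Console.Out.WriteLine($"{DateTime.UtcNow:O} INFO {message}");

    public static void Error(string message) =>
        Console.Error.WriteLine($"{DateTime.UtcNow:O} ERROR {message}");
}
```

Calls to a file-based logger can be redirected here first; swapping in Microsoft.Extensions.Logging later is then a local change.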
We can drain the logs from there using any of these tools; which tool to use is up to us. If you see the code snippet here, on the left-hand side I have option one: I am just using Console.WriteLine and Console.Out.WriteLine. That is simple logging; you can replace all the third-party frameworks with it, and it is good enough for Cloud Foundry. On the other side you have Microsoft.Extensions.Logging, which is available as a NuGet package. With it you get an abstracted logger you can implement in your application, and later, if you decide to move to log4net or Serilog or any other logging framework, it is very easy to swap in, because the application only uses the Microsoft ILogger abstraction. You don't need any big code changes; in one place you say "my logging provider is this", and Microsoft's logging abstractions give you that kind of benefit.

Then file systems: this is another key enabler. Most of the applications we deal with use files, so we come across this a lot, and it is almost similar to the logging case. We used to write and read files from distributed file systems, or somewhere within the local server, or some file location. Here we cannot do that. Our container is going to be stateless and it can crash at any time, so you would lose all the files; it is an anti-pattern to write any of this kind of data within the container.
So we should not do that. What we do instead is leverage S3-style object storage, that is, cloud storage, for writing outside the container; or you can use NoSQL databases like Redis or Mongo, anything like that, for this external data. This also helps other applications lying outside of Cloud Foundry to access the data: they can consume the data output from one of the cloud applications, and vice versa. I have put a sample model here as well. You can abstract the file handling behind a helper, so that the rest of your system need not know whether it is writing to a file system, to Redis, or anywhere else. Create an abstract helper class of that kind and inject it wherever you need it. You see here I have written the old implementation versus the new implementation; the code change is as simple as this, and the only thing to think about is how to inject it into the required classes.

The next one is stateless processes. We already touched on this, but a good example I can bring here is the session. Look at the picture: we have three users working on an application, and the requests are distributed across all the instances of the application. So where will you store your session? An instance cannot hold the state of the application; it should be stateless. The good thing about Redis is that we have it as a Marketplace service we can leverage, and there is a Microsoft NuGet package available, the Redis session state provider; there is a link provided here showing how easily you can implement it. I have also put some code samples for it: two changes, one in the web.config file, as you see here.
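The web.config side of the change described above might look roughly like this, based on the Microsoft.Web.RedisSessionStateProvider NuGet package; the host, port, and access key are placeholders to be replaced with the values from the bound Marketplace service.

```xml
<system.web>
  <!-- Replace in-process session with the Redis provider
       so any app instance can serve any user's request -->
  <sessionState mode="Custom" customProvider="RedisSessionProvider">
    <providers>
      <add name="RedisSessionProvider"
           type="Microsoft.Web.Redis.RedisSessionStateProvider"
           host="my-redis.example.com"
           port="6379"
           accessKey="CHANGE-ME"
           ssl="true" />
    </providers>
  </sessionState>
</system.web>
```

No application code changes are needed: Session reads and writes go through the provider transparently.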
You just have to mention the provider configuration, as you see here, and drop in the connection string of the Redis instance. This is an exact little working example; you can take it and use it for any of the .NET applications, and it will help with that.

Then think about scalability. If your applications are bulky, say 4 or 5 GB containers, that is not good for scaling; you need them as thin as possible. What do we do in our legacy applications? To meet our SLAs and handle the load, we create concurrency within the application: we use multiple threads, child processes, worker processes, and we keep the application bulky and heavy. Here, though, we need the application as thin as possible so that we can scale horizontally. The picture here clearly shows horizontal versus vertical scaling from two perspectives, an infrastructure perspective and an application perspective. On the left-hand side of the picture you are increasing the size of the servers and the size of the application to deal with multiple concurrent processes. On the other side you are not changing the size of the servers or the size of the application; you are just cloning the application, increasing the instances horizontally. That clearly shows what horizontal scaling is versus vertical scaling. So in our legacy application we may be handling multiple threads; what do we do there?
Do some simple code changes to make it single-threaded, push it to cloud, and then think about scaling just by increasing the number of instances, instead of playing around inside the code to handle multiple threads. Those are some simple techniques we can use. And we have the App Autoscaler available as a tile in PCF; you can use it so that your application scales based on, I think, CPU, HTTP latency, and HTTP throughput. A very simple configuration helps you dynamically scale your application up and down based on demand.

And last but not least here: security. The platform gives you checks out of the box that take care of most of the runtime vulnerabilities, and it gives you UAA and single sign-on service providers, so you can leverage those for authentication and authorization. And if your organization is already using third-party gateways or identity providers, what do we do? You can look at this code sample for how you can simply make it happen. You can write your own middleware or HTTP module, as you see in the top figure; there is a simple way of writing middleware for a .NET application. You write it, add your authorization implementation where I have put the comment, then add these lines in the web.config, and that's it, your story is done. You have your security, and you can make use of any external security provider while re-platforming.

Before going to the end, we have a couple of case studies, samples of how we came through this journey through all the stages we were describing before. I took a simple application; you see it at the top. Basically, what the application does is this: the user submits some
request in the web application, and the request creates a trigger file that lands in a distributed file system share. There is a Windows service up there monitoring the file share, looking for that trigger file; once it receives the trigger file it starts processing and then lets the user know asynchronously that it is done. The user can go and check the status of whatever he requested at any time.

So how are we going to do that? We apply all eight enablers that we came across to the application. There are really three applications here: a web application, a backing service for that application (an API), and a Windows service. How do we tackle the situation and make all that run in cloud? We leveraged two things here: RabbitMQ as our message broker, and object storage. We converted the Windows service, which was monitoring the file system for the trigger file, into an event-driven service subscribed to a particular RabbitMQ queue, and the back-end service you see here, the API, publishes a message as soon as the user makes a request. So the event-driven microservice starts processing immediately. We achieved the same behavior by making use of some additional services, RabbitMQ and object storage, plus the re-platforming. That is a simple example of how we leveraged them.

The other one is a little more complex, a little tricky. We have an application that runs only a couple of days a month, but the catch is that it has to run within those two days and it processes millions of transactions. We don't have time: we cannot run it for ten days to get all those millions of transactions complete; we have to finish within the two-day SLA. On our static infrastructure at the time,
we had around 150 CPU cores working on that application to process those loads within those two days. That is how it was before, the whole state of the application. And think about the whole year: 98 or 99% of the time this whole infrastructure is idle. It is not used by anything else; it is dedicated to that application, and we have three or four generations of servers, those 150 cores, just sitting there. What are you going to do with that? This is a good candidate to pick and run in cloud, so that you can make use of a pay-as-you-go model.

So what we did: we went through the same phases, discovery and scoping, and then came up with an MVP zero. We took the most CPU-oriented portion of the application (you can see it had been scaled onto a larger instance), re-platformed that portion, and brought it up in cloud. Because these are all console applications, as you can see from the picture, we made use of CF tasks. A Cloud Foundry task is a good thing: you launch your task, and it goes away as soon as it is done; it is not a long-running process. We just have to talk to Cloud Foundry to start the task, and the rest is done by itself. We made that happen by re-platforming just a portion of the application. So that is MVP zero.

Next, we still had the remaining portion of the application outside. If you look into the picture, you have a primary server and secondary servers, multiple servers working together for that application; after the MVP zero stage only one piece still relied on static infrastructure, whereas the rest, the most CPU- and memory-intensive parts, had been brought into cloud. And what is the next MVP? We took the remaining portion of the system and brought it into cloud.
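For reference, launching one-off work as a Cloud Foundry task, as described above, looks roughly like this from the cf CLI. The app name, command, and task name are illustrative, and flag syntax differs slightly between cf CLI versions.

```shell
# Run the batch portion as a short-lived task inside the app's container image
cf run-task batch-app "MonthlyBatch.exe --partition 3" --name monthly-run

# Check task state (RUNNING / SUCCEEDED / FAILED)
cf tasks batch-app

# Stop a runaway task by its id
cf terminate-task batch-app 1
```

Because a task's container is destroyed when the command exits, nothing is billed or kept warm between the monthly runs.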
That was the simpler part, not as complicated as what we did in MVP zero. But still, is it done? No. We had a limitation in scaling, because, as I said, the application, the container, should be as thin as possible when we run in cloud, so that we can granularly scale the application as needed, and we didn't get that from just re-platforming. The container instances were four to six gigabytes, so even though we had enough infrastructure, we couldn't scale to that level. So then we had to think about modernizing the first portion, the one we moved in MVP zero. We took it and modernized it: we broke the application into smaller microservices, and as you can see, they are all connected by event-driven queues, and we can scale them granularly as much as needed. We can find a ratio between these services, work out which one needs more power, and just scale that one. After our modernization, each service comes to around 256 MB, down from the four and six gigs, so you have much more room to scale. That is why we went with this hybrid, phase-by-phase approach: MVP zero, one, two, and now we are done.

And there are some standard recommendations (I think I took a lot of time) for .NET developers. Make use of Steeltoe wherever needed. If you come across a situation where you need to synchronize between the scaled instances of a particular application, you can simply make use of Redis atomic functions; there are functions available for that in Redis, make use of them. And similarly, we have to think about DP, data protection: how is data protection used in a .NET application?
The framework handles it by itself, but when you scale your applications, how are you going to manage it? Think about how we used to handle web-farm configuration; it is the same kind of thing. I put a screenshot over there: just by having that configuration in place, you can handle it, and it is given to the framework to make use of. And for debugging a .NET application locally, simply use HWC.exe, the same Hostable Web Core that the platform is using; you can make use of it. For the rest, you can go over these slides; the slides are available, I think, in the shared location, so download them and see the code in detail. Thank you; I will hand over to Poonima for the conclusion.

Yeah, we are almost done. We will just quickly cover the benefits to the business, because typically you do the transformation and then ask what it meant. With just the re-platforming we have seen significant infrastructure savings for the enterprise where we did this work. On deployment, from the months it used to take, we were able to move to a weekly deploy, because we moved from a legacy code base to a microservice architecture. And then disaster recovery: any IT firm puts a lot of focus on disaster recovery, and by the sheer act of moving to cloud infrastructure we were able to manage it better and optimize its cost. These are real quick wins you can get if you just start rapidly going on the cloud journey. With that our session ends; we hope it was useful and interesting. Any questions for us, please?