Thank you, Diane. Hello everybody, my name is Stefan Fütterling, and I'm here with Patrick to present our work at Daimler: how we did the application modernization of the global car configurator and how we brought it onto the OpenShift platform. Our intention was to be here with our customer, Benny Rossiart from Daimler, who is our product owner in this engagement, but he had such a heavy workload that he could not join us. We are from Capgemini. We are building some of Daimler's large business applications, and I have an overarching role as technical account manager, focusing on innovation topics like application modernization, cloud, and big data. Patrick, do you want to introduce yourself? — Yes, I'm a developer at Capgemini, and I'm mostly focusing on PaaS platforms and on developing applications there. — Okay, so let's start into the content. Many of you might know this web page, the mercedes-benz.com web page. If you enter it, you can see the car models, and you have the possibility to enter the configurator and configure your own personalized vehicle. These websites are some of the front-end applications that we have for our global car configurator back end. So what we are talking about is the back end behind these consuming applications. It holds the vehicle master data, so it knows all about the car models, the equipment, the prices, the taxes, and the descriptions in many different languages, and it can check whether a configuration can be built in a plant. So, more than a year ago.
We started a journey here with this global car configurator. If you look at the left side, you can see the starting point. We had a quite traditional Java application: we had fixed release cycles, with two releases per year; it was built on a licensed software stack with DB2 and WebSphere; and the development team handed over the software as a product to different other departments. Those other departments had their own infrastructure, they deployed the application, and they also did the operations for the application. As you can see, we even have different platforms, with Linux and mainframe z/OS, where the application is running. Where we are today is in the middle of this slide. We now have a central GCC service, and this central GCC service is running on a private OpenShift environment in the Daimler data center; it is set up and operated by T-Systems. All the consuming applications are now starting to connect to the central GCC, and by the end of the year we want to have all consuming applications migrated to the new central GCC. There is a large business benefit: we now have continuous delivery, we have zero-downtime deployments, we have scalability, we have very good performance, and we have a cost reduction, because we no longer need all these operational teams, and we no longer have the mainframe and the licensed software. So we are completely on open source.
We are cloud ready. We have REST interfaces that are published in the corporate API management, that's the small Apigee box on top, and we have a new third-party system that is called WLTP, the new regulation for vehicle emissions. What is now starting is the third part here: GCC and microservices. We have already started to implement selected functionality from the global car configurator as separate microservices, so the GCC itself will get a little bit smaller and will talk to these microservices. With these microservices we can avoid redundant functionality across different business applications. On the next slide, on the left side, you still see the monolith, the big blue box, with the functionality inside it, and you can also see that we have one big database for this monolith. That is still our status quo. On the right side you can see the microservices architecture, where specific functionality is implemented in microservices, each with its own database and data supply. We made some architectural decisions during that journey, and we also had some lessons learned that we have put in the gray box. One learning: we started with JBoss for a fast migration of the monolithic Java EE application into Docker, and we decided to use Spring Boot when implementing these microservices here on the right side. We also decided to use the base containers supplied by Red Hat for JBoss and Postgres, because we do not want to build our own base containers and do all the maintenance work ourselves. So with those decisions taken, I now hand over to Patrick, who will dive deeper into the component model of our application. — Thank you, Stefan. Now going more towards the technical part of the application: what did we do in our journey? We started off with a lot of questions in mind. The first question is: how do we actually integrate the application into the OpenShift
platform? And the second question is: how does OpenShift actually help us, is there anything there that helps us? To that last question I can now, after the journey, easily say: yes, it actually helps a lot. The OpenShift platform gave us a lot of things that we could easily utilize in our application. We started by doing a kind of assessment of the component model of the application, and there were a lot of components in there that were suddenly not needed anymore; we could easily replace them with smaller and simpler components. The one component we wanted to keep, and that's the component you can see in this light blue, teal-ish box, is the business kernel, the business functionality. OpenShift allows us that: we keep only the business functionality, with just some adapters around it, which are fairly simple to implement. And of course we also implemented REST interfaces, which are not really related to OpenShift, but that was really helpful for simplifying our application. Some examples of what we changed are listed in this table. For instance, we changed the configuration. Beforehand, the configuration had to be done differently in each environment; there were different scripts running, or people just changed the configuration manually. We changed our component model so that it now uses the Kubernetes and OpenShift platform capabilities, like config maps, secrets, and environment variables; that's how we configure our application now. Then, as Stefan already said, we introduced REST interfaces. The most important part about that is that we use those REST interfaces as a common standard throughout the Daimler universe and integrated them into the API management solution of Daimler. Then we changed the
logging. That's a pretty common problem when moving to a container platform: your containers will probably die, or rather, you want them to be able to die easily without any losses. So we changed the way we log and introduced JSON logging, so that we can now aggregate the logs fairly simply. Another very important topic for us, and for the performance of the application, is caching: we introduced distributed remote caches. They were fairly simple to set up on the OpenShift platform; we got JBoss Data Grid from Red Hat, and we could easily set it up and get a whole distributed cache cluster running. Lastly, the monitoring topic, which in the end gives us a lot of insight into the application; and on the other hand, implementing those liveness checks actually helps me to have really silent nights, so that I do not have to wake up every time a container dies. So there are a lot of things the platform helps us with. Our main achievements are that, first of all, we can keep the business functionality and just put some small adapters around it that are really simple to implement; we could replace a lot of complexity in our application with Kubernetes or OpenShift specific components; and in the end we also introduced blue-green deployment, to have zero-downtime deployments for the application. Now going a little more in depth into the REST interface. I do not want to stretch that too much, I mean, REST is pretty common by now, but the interesting part, as I already said, is that we introduced it into the API management solution of Daimler, which is called One API. There we could again utilize a concept of the platform: we now have a small API gateway running in front of our container, and this small API gateway does all the authentication.
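The configuration change described above comes down to the application reading plain environment variables at startup, which OpenShift can populate from config maps and secrets. A minimal sketch in Java; the variable names (`GCC_DB_URL`, `GCC_CACHE_HOST`) and the defaults are invented for illustration, not the real GCC settings:

```java
// Sketch: configuration via environment variables, as injected by
// OpenShift from a ConfigMap or Secret. Names here are hypothetical.
public class EnvConfig {
    /** Returns the environment value, or a fallback when it is not set. */
    static String get(String name, String fallback) {
        String value = System.getenv(name);
        return (value == null || value.isEmpty()) ? fallback : value;
    }

    public static void main(String[] args) {
        String dbUrl = get("GCC_DB_URL", "jdbc:postgresql://localhost:5432/gcc");
        String cacheHost = get("GCC_CACHE_HOST", "localhost");
        System.out.println("db=" + dbUrl + " cache=" + cacheHost);
    }
}
```

The same container image then runs unchanged in every environment; only the injected values differ.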
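The JSON logging idea mentioned above can be illustrated with a tiny sketch: each log event becomes one self-describing JSON line on stdout, so a log aggregator can parse fields instead of free text. The field names are assumptions for illustration, not our exact schema:

```java
import java.time.Instant;

// Sketch: one JSON object per log line, written to stdout so the
// platform's log collector can pick it up and aggregate it by field.
public class JsonLog {
    static String line(String level, String logger, String message) {
        return String.format(
            "{\"timestamp\":\"%s\",\"level\":\"%s\",\"logger\":\"%s\",\"message\":\"%s\"}",
            Instant.now(), level, logger, message.replace("\"", "\\\""));
    }

    public static void main(String[] args) {
        System.out.println(line("INFO", "gcc.configurator", "configuration checked"));
    }
}
```

In the real application a logging framework with a JSON encoder does this; the point is that a container can die at any moment without losing structured history, because every line has already been shipped off the node.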
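The liveness checks work by the platform periodically probing an HTTP endpoint and restarting the container when it stops answering. A minimal, hypothetical sketch using only the JDK's built-in HTTP server; a real readiness check would also verify dependencies such as the database and the cache before reporting up:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Sketch: a /health endpoint of the kind an OpenShift liveness or
// readiness probe would call. This minimal version always answers 200.
public class HealthEndpoint {
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "{\"status\":\"UP\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = start(0); // port 0 = any free port
        System.out.println("liveness probe on port " + server.getAddress().getPort());
        server.stop(0);
    }
}
```

The probe path and period are then declared in the deployment configuration, so the platform, not an on-call human, reacts when a container goes bad.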
That gives us a pretty good separation of concerns, and the gateway also gathers a lot of metrics: which APIs are called, how often, and so on. We were actually coming from SOAP and RMI, and that was a real problem for us in our journey: there were a lot of different interfaces in use, and the consuming applications used different ways to communicate with the application. Now, having all that in place, we had the question: how do we actually test it? We did not want to just go live; we wanted to see how it would perform in a productive environment. And this is one thing that I guess is very interesting for everyone who wants to move towards such an OpenShift platform: we introduced load test scripts, which are run from outside the cluster against our cluster. These load test scripts emulate the usage of the car configurator: how would a common user use the configurator in our case?
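Such a simulated user session can be sketched as a small randomized journey. The markets and model names below are invented placeholders, not Daimler's real data; a real load-test script would translate each generated journey into the corresponding REST calls against the cluster, many journeys in parallel:

```java
import java.util.List;
import java.util.Random;

// Sketch: one randomized configurator session for a load test.
// Market, model, and the number of equipment changes are picked at random.
public class UserJourney {
    static final List<String> MARKETS = List.of("DE", "US", "JP", "FR");
    static final List<String> MODELS = List.of("A-Class", "C-Class", "E-Class", "GLC");

    final String market;
    final String model;
    final int equipmentChanges;

    UserJourney(Random rnd) {
        this.market = MARKETS.get(rnd.nextInt(MARKETS.size()));
        this.model = MODELS.get(rnd.nextInt(MODELS.size()));
        this.equipmentChanges = 1 + rnd.nextInt(5); // 1..5 changes per session
    }

    public static void main(String[] args) {
        UserJourney journey = new UserJourney(new Random());
        System.out.println(journey.market + " " + journey.model
                + " changes=" + journey.equipmentChanges);
    }
}
```

Running many of these concurrently from outside the cluster is what lets you watch auto-scaling kick in and find the bottlenecks before real users do.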
He would probably come from a certain market and have a certain language; we randomly select that for him. He would choose a random vehicle, configure it, change the equipment, and in the end maybe get alternatives and select one of those alternatives. What we could see there is very interesting, because we could watch our cluster performing in an almost-live productive scenario, see the auto-scaling working, and see where the bottlenecks are. That was also a journey for a lot of us: we actually helped in building up the cluster, and there were some settings that had to be adjusted so that everything worked fine for us. So we created these load tests, and we even ran longer-duration tests with them. I would recommend to everyone: try to stress your application, and of course also the cluster, but mostly the application, and really try to bring it down; that's the interesting part here. Lastly, I want to speak about what I guess is the most important and also the most difficult part of our journey, and one we are still working on: the cultural change that we are going through right now. It concerns a DevOps operating model, because when we moved to this platform, there was suddenly a new separation of tasks between several teams. Beforehand, we gave the application to an operations team, and they operated the virtual machine, they operated the application server, all of it for us, and they also operated the application for us. But now, on this platform, those guys do not operate our containers anymore; they just operate the platform, and that's where they stop. Now there is a gap that has to be filled, and that is also a learning.
We are recognizing that right now: we have started to introduce DevOps into our processes, so that the developers actually start to operate the software and, in the end, become more responsible for the application. I guess that is something that helps a lot in the end, but it needs a lot of cultural change. The important thing to say here is: when you start moving towards such a platform, think about this early, so that you tackle the problem before it has an impact on the culture. And plan for your team: do some knowledge transfer so that they gain the operations knowledge they need to operate the applications on the platform. Lastly, that is the one single thing to say here: move towards a product team organization, a DevOps organization. And with that, in regard to the time: thank you very much for your attention.