Thanks, Ralph. Hi everyone, good morning. It's quite exciting to be here. Let's talk about how we adopted a DevOps culture at Porsche and how GitLab helped us get there.

But first things first, this is who we are. Here with me is Dennis; I am Alberto. We are both software engineers, and we work in smart mobility at Porsche. For those of you who are not familiar with the company, here are some figures: we're quite a lot of people and, to be honest, we are not a software company. I guess you already figured that out. But in recent years, IT and smart mobility have become one of our main pillars, and the company is really committed to developing these businesses further.

So what is it that we do? Well, nice cars. But not only, and this one in particular really represents a lot to us. It was a real game-changer: our first electric vehicle. Together with this vehicle, Porsche committed to further developing and really investing a lot of resources into building up a portfolio of digital services, not only worrying about making nice cars but also rounding out the experience of our customers with additional digital services.

So what are these digital services? One of our latest ones is Apple Music integration in the car, so you don't need your phone to play your music. There are phone applications that let you honk and flash, lock and unlock your car, or check the battery level. We also have other types of businesses and services, such as Porsche Drive and Porsche Passport: sharing and subscription services that let you drive Porsches without even owning them. And then there are things that are not so visible to end customers but equally needed to make all these products what they are, such as our Gravity platform. This is our microservices platform, and it's where Dennis and I spend most of our time.

So far, so cool, right? Just cool stuff. So why is there a story?
Well, the truth is it wasn't always like this. There was a time when it was really painful to work on these products, because the setup that we had in place didn't really cope with our needs.

Software development at Porsche didn't start a few years ago when we committed to developing these digital services. Software development had existed for a while: we had after-sales systems, we had production systems, we had HR systems, a lot of things coming from a big company; I guess you can relate. To serve these needs there was a central IT department taking care of all of it. They looked at a project and said: let's set up the infrastructure for this project, let's set up a code repository, let's set up a build tool, let's do everything for them, taking care of their needs. And they did. They built a whole infrastructure, a whole solution, based on the needs that existed some years ago, which was the state of the art for software development back then.

Then what happened? Well, there was nothing wrong with this at first. It worked for a while, and for those projects and those teams it was enough. But then software development started changing. People like us started joining the company, we started going into new businesses such as the digital services I talked about, and so teams started popping up. We came with nice ideas and new requirements: we wanted to go to the cloud, we didn't want to deploy on-premises, we had different ideas of how artifacts should be built and different requirements regarding the build tools, and so on. So we came back to this IT department and said: hey, we have these needs, how can we do it? And then...
This happened. They were not prepared for that many teams coming and asking for new things, and we quickly ended up in a burden trap that basically led to frustration and poor performance. We were basically investing most of our time in fighting against this traditional department, which was reluctant to change, rather than focusing on our products and our services.

To give you a bit more detail on how it looked: this is a very simple diagram of our infrastructure. On the right you see our Gravity platform, the microservices platform I talked about before, and on the left the CI/CD, build, and code repository setup that we had on-premises. One of the problems we had, for instance, was availability: it went down at least twice a week, and it could take even half an hour to come up again. We had a very limited set of agents for running our builds, and that wasn't the only problem: each of the agents was also differently configured, so you were never sure whether your build was going to be successful or not depending on which agent was picked. We didn't have any admin permissions to configure them either, so we were basically stuck. On top of that, we wanted to deploy to a cloud platform from an on-premises infrastructure, and we were forced to go through a reverse proxy that also had a lot of performance and availability issues. So basically we said: this is not scaling, this will eventually kill us; we cannot give proper service to our customers with this setup.
So what did we do? We looked around, and we thought: it cannot be that we are the only ones at Porsche, such a big company, having these problems. But we looked around and we couldn't find any other team that did something different; they were all accepting the troubles and living with them. We didn't want to do that, so we decided to build it ourselves. And of course we didn't build yet another CI/CD tool, but we decided to set one up ourselves.

So Dennis and I and some colleagues sat together and started thinking about how the solution should look. We said: we want to promote collaboration between teams, because in the past, with the previous infrastructure and with this IT department in control of all the repositories, there was no collaboration between teams, and we thought that in order to make the company successful in digital services, that was one of the main key features we had to achieve. We also wanted a reduction of tooling, because we had a set of tools, each of them for a very specific need, but we felt they were not properly integrated with each other; some logins were not shared, and it was painful to use as well. We wanted to establish an open-source-like approach at the company, but the truth is, and I guess you've figured it out as well, we are a very traditional company, and we are very big.
So going fully open source takes time. What we achieved, at least, is getting to an inner source model: now, unless someone is really against it or there are confidentiality aspects that have to be taken into account, everybody can check out other teams' code, which is pretty cool.

We wanted to have a single source of truth: we didn't want to go to different tools to check who committed what, when something was deployed, what the version was, or which agent was running what. We also wanted stateless builds, because, as you remember from the problem with the agents, we had differently configured agents, irreproducible builds, and random behavior that we could not control; stateless builds were basically the answer to that problem. And we were tired of seeing people clicking through UIs to configure builds. We thought that wasn't scalable: whenever you had to create yet another project, you couldn't look at code or at a file and get ideas from it; you had to go to a web UI and eventually miss a checkbox, and it was a mess.

With this set of features we looked around yet again, and this time we did find what we thought was a really good fit. That's basically the beginning of the story of how we jumped into GitLab. Now I pass the word to Dennis, who will tell you how we did it. Yeah, thank you very much.
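The pipeline-as-code idea described here, keeping the build configuration in a versioned file instead of clicking through a web UI, can be sketched with a minimal `.gitlab-ci.yml`. The talk doesn't show their actual config; the image, stages, and commands below are illustrative placeholders for a Java/Maven project:

```yaml
# Minimal .gitlab-ci.yml sketch: the whole pipeline lives in the repo,
# reviewable and reproducible. Every job runs in a fresh container,
# which is what makes the builds stateless.
stages:
  - build
  - test

build:
  stage: build
  image: maven:3-jdk-11        # illustrative build image
  script:
    - mvn -B package
  artifacts:
    paths:
      - target/*.jar

test:
  stage: test
  image: maven:3-jdk-11
  script:
    - mvn -B verify
```

Because the file is code, creating "yet another project" becomes copying and adapting an existing file rather than re-clicking checkboxes in a UI.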
Yeah, thank you, Alberto. Now that we know why we had to act and what we had to do: we looked around again and said, okay, we want to use GitLab, but we don't really want to operate it ourselves. That's why we searched for a hoster, and we found a GitLab hosting provider; I think they're also here in the audience today, and props to them, they're doing a really nice job.

What we did in the beginning was just set up a plain instance with a static set of runners and play around with it. We took a relatively simple project, migrated it to GitLab, and tried to get used to the GitLab concepts, and that went really well. It was really nice for us, so we soon started migrating all our platform projects there, and we also got high demand from the external projects building the apps and services you've seen in Alberto's presentation. It was a really nice experience for us.

So afterwards we opened this instance up for those projects, and we quickly realized that we couldn't build another silo system. We had to think about how to integrate GitLab with the existing on-premises infrastructure. We talked to the people who operate the on-prem infrastructure and got a really nice solution to integrate their IdP, so we provided single sign-on for GitLab and didn't have to mess with user management, creating users, deleting users; it was all given to us. We also enabled repository mirroring, so all the projects could just try GitLab out and still use the old tooling they had. It was really smooth for us.
It was done in about one to two weeks with about ten projects, and the feedback from the external third-party projects was amazing.

Then, as more and more people came to the GitLab instance, we quickly realized it made a lot of sense to think about how to share tooling. Our tech landscape, as you've seen before, is based on Cloud Foundry, and most of the projects use Java and Spring Boot, so it was really beneficial to introduce shared build images and, with them, shared scripts. Everyone could participate, and that was the first occasion where we could really enable this idea of an inner source model and foster collaboration. It worked out nicely: every other day another project came and said, hey, we want to have this, can we build it? We just said, PRs welcome, and they did. It was really nice.

Afterwards more and more projects joined, and we really invested a lot of time in infrastructure automation, user creation, providing technical users, all that stuff. We heavily invested in automating all our infrastructure. We came up with the idea that a project just provides a configuration file with its data, and then everything is set up automatically: a build job on GitLab is triggered, the space creation on Cloud Foundry is done, all the permissions are set up, and we also use the GitLab API to register some secrets that enable them to deploy directly to their respective infrastructure. That was really beneficial for us, and we saved a lot of work and time there.

Then the big requirement came along: deploy globally. Maybe you know that China is the biggest market for us. We started in Europe, which was quite easy, but then we got this requirement to deploy to China, and that really changed the game for us, because we had to think about how to enable multi-regional deployment from within one project, how to parallelize that,
and how to avoid too many dependencies between the regional deployments. We came up with a really nice solution: the project triggers itself and runs parallelized pipelines which deploy to the different regions, and it works really well. Along the way we adopted more and more GitLab CI features, different pipeline triggers, all the different conditions you can set up, and we made more use of the GitLab API, which is really good.

So we set up our multi-regional deployment, but then the Great Firewall came around the corner. When you try to deploy an artifact of around 30 MB to China, it can take hours, if it goes through at all. It was a huge problem for us. We could manage it somehow by doing night shifts, trying to deploy, and leveraging the Cloud Foundry caching mechanisms, but for the projects it didn't work, so we had to think about how to transfer data reliably to China. We tried a lot, and we wanted to avoid setting up a separate, parallel GitLab instance there, because all the Chinese colleagues also collaborate and work with us in our European instance, and that works quite well. Then we came up with the idea of just transferring the source code and setting up some dedicated runners there, which worked out of the box. It was quite challenging to set up the infrastructure there, but once you have it, it works really nicely to have a central instance in Europe talking to the agents in China.

Also, what happened? More and more people kept joining the instance all the time, and we really ran into high queue times for runners to pick up build jobs, sometimes even 20 or 30 minutes, especially when a sprint was ending for certain projects. So we set up an autoscaled runner cluster on AWS, which enabled us to have a sort of infinite pool of runners, depending on how much we are willing to pay for the compute resources.

And this is what it looks like now. It's not completely different.
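The parallelized regional deployment described here could be sketched in GitLab CI as independent, region-tagged deploy jobs, so Europe and China don't block each other. The runner tags, variable names, and Cloud Foundry endpoints below are illustrative, not taken from the talk:

```yaml
# Sketch: one pipeline fans out into independent per-region deploy jobs.
# `tags` routes each job to runners in the right region (e.g. the
# dedicated runners behind the Great Firewall for the China deploy).
deploy-eu:
  stage: deploy
  tags: [eu]                       # picked up by European runners
  script:
    - cf login -a "$CF_API_EU" -u "$CF_USER" -p "$CF_PASSWORD"
    - cf push my-service
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'

deploy-cn:
  stage: deploy
  tags: [cn]                       # picked up by the runners in China
  script:
    - cf login -a "$CF_API_CN" -u "$CF_USER" -p "$CF_PASSWORD"
    - cf push my-service
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
```

Since both jobs sit in the same stage, they run in parallel, and a failure in one region doesn't stop the deployment to the other.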
You can still see the on-prem infrastructure, which is used, for instance, as the IdP. We still use the static code analysis tools, the artifact store, and the issue tracker from the on-prem infrastructure. But the cool thing is, we have our central instance in Frankfurt, this environment of runners for the European region to deploy to the European Gravity instance, and the connected runners in China, which enable us to deploy easily within China.

So let's talk about some results. These are the components we have heavily adopted. We have used this GitLab instance for one year now. One really nice feature is the permissions model: you can set up a group structure and share secrets and permissions along it. What's also really nice is giving an expiry date to permissions, because we often have changing personnel, and it's really convenient to have this permission model. We also store all our source code in GitLab, we build everything in GitLab, we completely use that feature, we do code reviews in GitLab, and we heavily use the release features, such as defining environments, so that we can centrally see which version of which software is deployed where and on which stage. We also use a little bit of infrastructure automation for the runner setup and for configuring some settings on GitLab itself.

And here are some hard facts. As I said, we have had this instance for one year now. We have more than 660 repositories and more than 250 active users, which means active developers, not just registered project managers. We also have this feature of infinite runners, which really sped up our build jobs, and more than 80k pipelines have been triggered to date. And finally, a look ahead.
These are the features we would really like to adopt in the near future. We'd like to also track issues within GitLab, so that we can measure cycle time reliably and conveniently, which we can't do that easily right now. We'd also like to get rid of our on-prem Artifactory, our package management tool, and use GitLab instead; we are especially looking forward to the dependency firewall feature and hope to adopt it. For the secure and monitoring parts, it all comes down to integration again: not having an additional tool for monitoring your services. On the security side we would really like to use the static security and code analysis features of GitLab, but unfortunately they're behind the Ultimate paywall and we just have Premium, so maybe we can talk later.

And yeah, that's it. If you have questions or anything else, just come to us. Thank you very much. Thank you.
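The infrastructure automation described earlier, registering deploy secrets per project through the GitLab API, might look roughly like the sketch below. The instance URL, project id, and variable names are illustrative assumptions, not details from the talk; only the API endpoint (`POST /projects/:id/variables`) and the `PRIVATE-TOKEN` header are standard GitLab API.

```python
# Sketch: creating a masked CI/CD variable (e.g. a Cloud Foundry password)
# on a project via the GitLab API, so the project's pipeline can deploy
# without anyone handling the secret manually.
import json
import urllib.request

GITLAB_URL = "https://gitlab.example.com/api/v4"  # illustrative instance

def build_variable_request(project_id, key, value, token):
    """Build the POST request that registers a masked, protected variable."""
    payload = {"key": key, "value": value, "masked": True, "protected": True}
    return urllib.request.Request(
        f"{GITLAB_URL}/projects/{project_id}/variables",
        data=json.dumps(payload).encode(),
        headers={"PRIVATE-TOKEN": token, "Content-Type": "application/json"},
        method="POST",
    )

# Sending is then a one-liner (requires a reachable instance and a real token):
# urllib.request.urlopen(build_variable_request(42, "CF_PASSWORD", "s3cret", token))
```

Separating request construction from sending keeps the automation testable without network access, which matters when the same script provisions many projects.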