Welcome, my name is Peter Röhnle. I work with Ericsson, and I'm going to talk about some experience we gained with GitLab, kpt, and a Flux-based workflow we set up as part of a proof of concept.

Just briefly on my background: I work as a senior expert in deployment architectures at Ericsson, based in Aachen, Germany. I have been working for quite a while with cloud-native applications and infrastructure. As a telecom company, we have basically the full stack: a Kubernetes distribution, applications running on top, and the respective management systems, and I typically work with all of these applications, helping them in their cloud-native transformation. That also means I'm excited about all things automation, even before going into a Kubernetes-based or cloud-native type of deployment. Outside of work, I'm passionate about being in the mountains, both on foot and on two wheels.

So, software delivery in telecom: listening to the talks around here, it sometimes feels like we're quite a bit behind other industries. Software delivery in a telecom environment very much looks like this: software suppliers like us at Ericsson develop our software. We have actually applied CI/CD concepts to a large extent in our own R&D process, but then we package the software, we ship it, and the customer deploys it. To support this, we have service delivery organizations which take our software, customize it for customer needs, and then deploy it at the communication service providers, which are basically the network operators around the world. There is a lot of harmonization ongoing; we're trying to standardize this process much more, automation is coming into play, and we're slowly moving towards a more automated service delivery where we aim to have integrated pipelines, or CI/CD loops, both on our side with our service delivery for the customization part, and then going into our customers.
Now, this process is at a very early stage. For a few years we have been rolling out pipeline technologies into our service delivery. Sometimes we work with customers to make this happen, but many of our customers have not adopted this process yet, so it's very much a traditional process where our service delivery adopts some automation mechanisms, but we're not deploying into our customers' environments. Eventually, what we'd like to get to is to close this loop completely, so we can seamlessly deliver from us as a software supplier into customer environments, with our service delivery focusing on supporting that automation and on getting feedback from our customers. Today these are very much two separate worlds, with dev on our side and ops very much on the customer side, so there's a strict handover in between, going over these packages.

So what did we come up with when applying GitOps to this world? As I said, we're at a very early stage of that transformation; we're basically trying out the tools available from the industry, and we're here to learn more about them. The idea is to apply the same principles at each of these handover points: from Ericsson as a software supplier into service delivery, and then from service delivery into the customer. We still have to deliver packages; we're not yet at the point where we can roll out very small, fine-grained software changes to our customers very frequently. That typically doesn't fit their process yet. Only very few customers would actually be able to take software updates that frequently, so we're talking about slightly bigger timeframes between software deliveries. We typically have a number of software components which have changed and which we need to package and deliver. Of course, the aim is to move to very frequent, microservice-type deliverables eventually.
This means what we deliver is container images, Helm charts, and some sort of metadata describing the package we've just delivered. The metadata typically goes into some sort of management system or orchestration system, or OSS system as we call it in the telco industry, which in turn populates the Git repository. So it really helps the operator manage their applications without having to deal with the data in the Git repository directly: not editing manifests manually, but using a more user-friendly front end which lets them focus on what they need to do, which is managing the actual network applications, and not necessarily how the software is composed. And then, in the setup we're currently working with, we have Flux to reconcile the content of the Git repositories and eventually deploy into the clusters. What's very important here is the abstraction we gain through the management systems we deploy in the field.

This is where kpt comes into play. kpt is a tool initially developed by Google for configuration as data. What it allows us to do is package a number of changes we make, in our case to Flux manifests and the Flux folder structure, into a package and ship it to the customer. It works on the Kubernetes Resource Model (KRM) to represent that configuration, and it offers some very flexible ways to modify that configuration and update manifests in the repo. You can also extend it with your own configuration modifiers, so you can build container images which are loaded by kpt to update or modify certain configurations. We used it for two reasons: it gives us the packaging aspect, so we can bundle a number of changes we need to ship together, and we get a very powerful three-way merge for KRM-based configuration.
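To make this more concrete, a delivered package in a setup like this might look roughly as follows: a folder in a Git repo containing a Kptfile (the kpt package metadata, including the upstream it was cloned from) alongside the Flux manifests. All names, URLs, and versions here are illustrative, not taken from the actual POC:

```yaml
# Kptfile: kpt package metadata (illustrative names and repo URL)
apiVersion: kpt.dev/v1
kind: Kptfile
metadata:
  name: myapp-01
upstream:
  type: git
  git:
    repo: https://example.com/vendor/packages
    directory: /myapp-01
    ref: v1.1.2
  updateStrategy: resource-merge
---
# helmrelease.yaml: a Flux manifest shipped inside the package
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: myapp-01
spec:
  chart:
    spec:
      chart: myapp
      version: 1.1.2
      sourceRef:
        kind: HelmRepository
        name: vendor-charts
```

The `upstream` section is what allows `kpt pkg update` to fetch a newer version of the package later and three-way merge it with local changes.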
If we translate that into the POC setup, what we've basically done is simplify the management system I've just introduced into a simple GitLab pipeline calling kpt, to visualize changes and provide a diff when we do software updates via the kpt package mechanism. The metadata is then basically the metadata we get with the kpt packages. In this case, the pipeline scans for any deployed kpt packages in the repo. If a new kpt package version is available, we create a new branch and update that branch with the `kpt pkg update` command, which gives us the three-way merge to properly visualize the changes. Then we create a merge request from that branch with the updated content, where we see which Helm charts and which Helm configurations are being changed, visualized against the main branch. And then of course we go into the regular review process for that merge request, and finally let Flux reconcile the changes once the request is approved.

Now, if everything goes right, I should have a little demo for you. This is a very simplified demo. One thing I should say: what you can see here on the left is a sample deployment of a Helm chart using Flux. Typically, when we deploy an application, we're talking about 50 to 100 microservices as one application. When we deliver a change, a significant share of these microservices, and thus of these Helm charts, have changed. So this is what we typically need to manage when we ship an update of an application. This folder structure here on the left would typically contain a number of subfolders with sub-charts and so forth, which depend on each other. What I'm now going to do is edit this definition in my kpt repo down here, which contains the manifest describing the deployment of my application. As you can see, there's plenty of sample data in here, and what is called a kpt setter.
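The scan-and-update step of the pipeline described above can be sketched in a few lines. This is a simplified illustration, not the actual Ericsson pipeline; the branch name and directory layout are made up, and the `git`/`kpt` invocations are only assembled as a plan rather than executed, so the merge-request creation step is left out:

```python
from pathlib import Path

def find_kpt_packages(repo_root):
    """Return the directories of all deployed kpt packages, i.e.
    every folder in the tenant repo that contains a Kptfile."""
    return sorted(p.parent for p in Path(repo_root).rglob("Kptfile"))

def plan_update(repo_root, branch="pkg-update"):
    """Assemble the command sequence the pipeline would run:
    branch off, run `kpt pkg update` per package, then push the
    branch (from which a merge request would be opened)."""
    cmds = [["git", "checkout", "-b", branch]]
    for pkg in find_kpt_packages(repo_root):
        cmds.append(["kpt", "pkg", "update", str(pkg)])
    cmds.append(["git", "push", "origin", branch])
    return cmds
```

In a real pipeline each command list would be handed to `subprocess.run`, and the final step would call the GitLab API to open the merge request against main.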
That's a mechanism in kpt to replace these variables, or parameters, at rendering time of the kpt package. So I'm stepping up the version of the Helm chart I want to use here, and I'll just commit that change. What happens now is that kpt uses a Git repo to maintain the packages; each kpt package is basically a folder within a Git repo. They're planning to provide this via an OCI registry instead; however, that's not in place yet. So the package we deliver is exposed as a folder in a Git repository, which the kpt tool then uses as its upstream.

What you see here is the GitOps tenant repo, where my applications are being deployed. We have our folder structure here with tenant one, which currently has three apps deployed. What I'm going to do now is start the pipeline, which scans for deployed kpt packages and then runs a `kpt pkg update` on them to give me a merge request. So I'm executing that pipeline now. It should hopefully take only a few seconds, and then I should see my merge request created from this. So basically I'm scanning my repository for Kptfiles; whenever I find a Kptfile, I run a `kpt pkg update` on it, and then I should see... okay, that didn't look good; let's check the merge request. So I had two packages deployed here for my two applications, both in my production cluster, and what I can see now, if I open this one, is a merge request which shows me the changes being applied by this particular package update. In this case it's very simple, since I only have one Helm chart, but here you would typically see a list of different Helm chart versions and which Helm charts are being affected. The idea is that this is the base for further information to be supplied. I just heard the talk from Kostas, with a very detailed analysis of the exact Helm parameters and manifest parameters being changed.
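For reference, a kpt setter is just a comment marker on a field, combined with an `apply-setters` function entry in the Kptfile's pipeline; rendering the package replaces every marked field with the configured value. A minimal sketch, with made-up names and versions:

```yaml
# helmrelease.yaml: the field carries a setter marker in a comment
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: myapp-02
spec:
  chart:
    spec:
      chart: myapp
      version: 1.1.2 # kpt-set: ${chart-version}
---
# Kptfile excerpt: the function pipeline applies the setter values
pipeline:
  mutators:
    - image: gcr.io/kpt-fn/apply-setters:v0.2
      configMap:
        chart-version: 1.1.3
```

Because the setter lives in an ordinary YAML comment, the manifest stays a valid KRM resource; this is what allows deployment-specific parameters to be set without forking the package.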
We expect that for some of our users this may be relevant information, but what this enables us to do is see which Helm charts have changed, and provide additional release information for those Helm charts, for example as part of the comments on the merge request, in addition to what you see here. So if I approve and merge this request, we should now see that the tenant... oh, that was a very quick one. So unfortunately I skipped past the update; in this case you see that MyApp02 is now stepped to 1.13, which I've just pushed out. Let me briefly get back to my presentation.

The learnings. The kpt three-way merge is one of the features we can use here to merge the upstream packages we deliver with customizations customers have done for instance-specific data. I have one example here as well where, for instance, we set the Helm repo names, the account names, the app name and so forth. Those are some of the customizations we do at deployment, and we need to merge them with the upstream package we deliver in the update. That only works for KRM-conformant configuration, which rules out values.yaml, since it does not follow KRM. There are other means in Flux to encode values, but as soon as you use values.yaml, this doesn't work anymore. As I just said, kpt helps to customize these Flux manifests as long as they follow the KRM model, which lets us set the repo names, accounts and so forth. What we've seen in our tests is that kpt overrides the changed parameters we get from the upstream package in certain cases. The way we could mitigate that was to re-render the kpt package after the update: we keep our customizations in a separate file and simply render the kpt package again with those customizations. We don't know yet why this happens, but it seems to be some sort of preference for the upstream version over the local changes.
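The update-then-re-render mitigation described above boils down to two kpt commands. A sketch, where the package path and version tag are illustrative:

```shell
# Fetch the new upstream package version and three-way merge it
# with the local copy (resource-merge is kpt's KRM-aware strategy):
kpt pkg update tenant-1/myapp-02@v1.13 --strategy resource-merge

# Re-run the Kptfile's function pipeline (e.g. apply-setters with the
# instance-specific values kept in a separate file) so that local
# customizations are re-applied on top of the merged result:
kpt fn render tenant-1/myapp-02
```

The second step is what restores the local parameters when the merge has preferred the upstream values.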
kpt can be used to deliver and manage Helm-based deployments with Flux, provided you generate or update the values.yaml with separate tooling; as I just said in the first item, you can't really use kpt to do that. What we've found really helpful is generating these merge requests visualizing the changes imposed by a package update. That really improves the confidence in deploying these packages, because you know exactly what is going on, and we still need these packages which collect a number of changes into one deliverable.

We've also seen, and this may be a bit specific to telco environments, which are sometimes air-gapped and sometimes under very strict security regulation: as of today, kpt requires a container runtime to execute certain configuration modifications, which we don't have in all cases. It also pulls the built-in functions directly from the gcr.io registry, and pulling from a public registry is not allowed in many of the environments we deploy into. Typically, container images are replicated into a private registry which is monitored and maintained by the CSP, and you can't pull from a public registry in a telco environment. What we've also seen is that kpt, in the version we used, does not support authentication towards the upstream Git package repository, meaning I had to build an open repository, which is of course not practical if you can't have an open repository. Nor does it support authentication towards a container registry, which is also a common requirement in telco. We expect that some of these things will be solved down the road, but for the moment it makes kpt really impractical for us.

So, in summary: as part of the POC, where we've also done a few other things (and I think there were other talks at different conferences showing this), a GitOps-based deployment can be adopted for telecom applications as well.
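On the values.yaml limitation: one KRM-conformant alternative Flux offers is inlining the chart values under `spec.values` of the HelmRelease resource, where they become ordinary KRM fields that kpt's merge and setters can reach. A minimal sketch with made-up values:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: myapp-01
spec:
  chart:
    spec:
      chart: myapp
      version: 1.1.2
  # Values inlined as KRM fields instead of a standalone values.yaml,
  # so kpt's three-way merge and setters can operate on them:
  values:
    replicaCount: 3
    helmRepoName: vendor-charts # kpt-set: ${helm-repo-name}
```

A standalone values.yaml, by contrast, is an arbitrary YAML document with no `apiVersion`/`kind`, which is why kpt cannot merge it.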
In the end, we're shipping software, and that problem has been solved by the broader industry. Our applications are fairly big, but the problem can be solved with GitOps, and we also believe this is a good way forward. For larger applications, where we have a number of Helm charts which we need to deliver together and which typically form a slightly more complex structure, we need additional metadata and packaging to deliver into a GitOps environment. We've seen that kpt can be used in combination with Helm and Flux to do so, but it comes with a lot of limitations. We've also seen that, for us at least, kpt lacks some of the critical features to allow operations in a typical telco environment: authentication to the registries and so forth. So overall, I think the telco industry is on a transformation journey towards GitOps, and we're looking to adopt it. We have some specific challenges to be solved, and we're trying to find good approaches to do so. That's why we're here, and that's why we're listening to this community.

In the end, I want to thank you. I've just seen that we've released a user story around CI/CD pipelines which we've done together with the CD Foundation; that's the link on your left. And we've written an Ericsson Technology Review article where we explain a bit of our overall vision around GitOps in the telco industry. So if you're interested, please check out these links. And with this, I'd like to thank you for your attention, and if there are any questions, I'm happy to answer them. Please. I think we have a mic somewhere.

[Question] Thank you. So I'm also kind of from the telco industry, and a big fan of Erlang, so thank you. And I have a question: so you're definitely packaging your manifests and trying to do it with kpt, and with Flux as well.
Did you try to explore the approach of OCI images: rendering the Helm manifests into plain YAML, packaging that YAML inside an OCI image, and also signing it to ensure the supply chain, these kinds of workflows? Because, for example, for us it worked well: when we tried it, we found a lot of opportunities, it's easy to use, it supports the necessary compliance aspects, and it's very easy to collate all the artifacts. So have you tried to explore this direction?

[Answer] Just to make sure I understand: you basically said you would render the Helm charts into Kubernetes manifests and then package everything into an OCI image?

[Question] Yes.

[Answer] I think there has been some work done around this. I'm not sure how far they got or which state it's at at the moment, but I think we started to look into it. I don't know the exact outcomes of that.

[Question] Okay, thank you.

So, another question over there, I think.

[Question] Yeah, I was just wondering what value the Helm charts are actually providing. In my experience with kpt, the runtime requirement was also just a blocker for us. And I wonder whether you could just inflate the Helm charts and use kpt setters to alter your native, vanilla manifests instead.

[Answer] Could be an option. One thing is, we're in a transition phase with regards to Helm charts. At the moment we deliver into ETSI NFV environments; ETSI network function virtualization basically describes a management concept, and they make use of Helm charts as of today. So what we're trying to achieve is a deliverable which we can deliver into both types of environments, and for that we certainly need Helm charts. That disqualifies, at least short term, the OCI alternative, and pre-rendering the Helm charts and then using kpt setters.
Long term, it's certainly something to be investigated, but it was not part of the scope of this POC. Thank you again, and enjoy the booth crawl.