All right. Thank you very much, Diane, for having us here at the Red Hat OpenShift Commons. My name is Dominic Kahn. I'm a cloud engineer at SIX Group. Let me introduce myself: I'm 27 years old and have been working for SIX for one and a half years now. And here is my colleague, Marcel.

Yeah, my name is Marcel Harry. I work as a cloud architect in the Swiss consulting team of Red Hat. I got into the SIX engagement last summer, I think. I took over from another colleague who originally started setting up OpenShift there, and now we're moving forward with it.

Yeah, and what are we doing? We engineer and also operate the SIX private cloud based on Red Hat OpenShift. Currently we are running on 3.9, looking forward to the release of 4.x. We also do the onboarding for our customers, so we have a big engagement with our internal customers to get them onto our cloud. We also have to integrate a lot of things into the existing SIX environment, which is really challenging, as we will show you afterwards. And we drive the agile way of working inside SIX. On top of OpenShift we do a lot of things: we develop extensions and bring OpenShift into an enterprise-grade environment such as SIX.

Yeah, so what is SIX? We are not the rental car company. We develop and operate the infrastructure of the Swiss financial center and the Swiss banks. As an example, the stock exchange runs on SIX. Let us show you a quick introduction video which we made for our internal customers.

Today's world is changing fast. The age of digitization is redefining how we live and work. In this world everything will be software, and speed is becoming crucial. For us as SIX, time to market is important. We have to be able to adapt to new requirements and conditions. We need to be capable of delivering software fast and safely. We are continuously improving and learning, and we are getting ready for the future.
The container platform of CIT is the next generation of IT infrastructure, providing instant software deployment, pay-as-you-go, self-service, integrated monitoring, instant provisioning, automation, fast time to market, cloud-native readiness, and no downtime on deployments. Join us to build and run the future together. The future belongs to containers.

But let's talk about the timeline we had within SIX. We started in 2017 with OpenShift, putting a cluster into non-production. The funny thing is that we started with one engineer and one project manager, so it was basically a one-man show. If Oliver Guckenbill sees this video: credits to you, it was really cool work. Then in 2018 I joined the company, and Mark, our software engineer, who also has a talk about the operator later on, joined the team as well. Then we had the first go-live app on OpenShift, after January, I think. The next thing was that we really had to improve and extend the cluster inside SIX. OpenShift is a really great product, but we have to extend it to fit into our environment. So Mark created a project provisioner that automatically creates projects on OpenShift. In August we had the network zone implementation. We have about 30 network zones, I think, so it's quite a task to implement all of those and to get in touch with the security guys, the firewall guys, and so on. The next thing was the Tufin integration; maybe you are aware of it, it's firewall security software, and we did our own integration between OpenShift and Tufin. Then we did an onboarding expo inside SIX for our internal customers, with booths where they could get demos and so on. And now we are going for a public cloud approach; we are currently doing a POC in Azure. We also had the opportunity to give a talk at the Red Hat Summit in Boston, which was quite cool. And now we're here in Barcelona.
So when it comes to running environments like the stock exchange, or running IT infrastructure for the Swiss financial market, security is really strict, and you get lots of different requirements regarding governance, audits, and so on. These were somewhat different things we had to tackle when integrating OpenShift into such an environment, which is not a flat-network public cloud. Network segmentation is one of the areas where you need to do a lot of talking with traditional firewall people, talking with them about SDN, and having the security people involved. And one of the things a lot of people ask is: I'm okay with going this way, but I need an audit trail. Who did what, when, and how? And also things like: I want an audit entry when something is being denied. Then, when you start talking about onboarding, you don't want everything done manually; you already have existing approval processes within the company. Dominic mentioned our Tufin integration. The whole company handles approval of who can talk to whom, and where, through Tufin. OpenShift with network policies has a very nice and automated way to steer and manage traffic within the cluster, but OpenShift doesn't come with any kind of approval process. So what we did is hook, or glue, these things together. The procedure for making apps talk to the outside is the same as if they ran on a VM, in terms of approval processes. Hooking in there lowered the cost of implementation, but it also raised the adoption and acceptance among all the other existing teams. And self-service very quickly became a cornerstone, because you want to reduce the operational overhead. So when we talk, as an example, about onboarding new projects, for sure you want them to follow certain naming conventions.
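Such a naming convention can be enforced with a simple check at onboarding time. The sketch below is purely illustrative; the `<unit>-<app>` pattern is an invented example, not the actual SIX convention.

```python
import re

# Example naming convention: a short lowercase unit prefix, a dash, then the
# app name. This pattern is an assumption for illustration only.
NAME_RE = re.compile(r"^[a-z]{2,8}-[a-z0-9][a-z0-9-]{1,29}$")

def is_valid_project_name(name: str) -> bool:
    """True if the project name follows the (example) naming convention."""
    return NAME_RE.match(name) is not None

print(is_valid_project_name("pay-settlement"))  # True
print(is_valid_project_name("POC"))             # False
```

A check like this would typically run in the provisioning path, so a non-conforming request is rejected before any namespace is created.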
Not every project should be named POC, or test, or my-app, and so on. And when it comes to integrating into a larger enterprise, questions come up such as: okay, who within the company will pay for that? With such a segmented network: in which network zone are you going to run? Do you need to talk to anybody outside? Things like that become very important. Or even: where should your logs end up, for audit purposes? So a project is usually not only this umbrella within OpenShift that maps to a namespace in Kubernetes; it usually becomes much more. The standard, or at least a very common, approach most people take at first is to disable automatic project provisioning and set up a template. But then comes the question: okay, where do I handle all that kind of information? How do I know which projects are actually there, and how do I look that information up? And how do I maintain that over time, when new requirements come in for something as simple as the project definition? So last summer people started using operators for that. I guess that's kind of a recurring theme today: operators are everywhere. The benefit is that you can really write a piece of software into which you integrate your business logic. At the same time, the state is fully controlled within Kubernetes, all the same RBAC rules apply, and you can react very quickly. It doesn't get asynchronous, because you can really watch the CRs and then just automatically change things. Certain definitions, like the contact details, are just plain fields within the project that even normal users are able to change. But the operator is controlling them, so if someone goes in and changes them, they are just changed back, because the source of authority is the CR. And we started small there. The first thing was: okay, let's make sure we have the RBAC under control.
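The "changed back" behaviour described above can be sketched as a tiny reconcile step: the CR spec wins over whatever someone edited on the live object. Field names and the use of annotations here are illustrative assumptions, not the actual SIX operator.

```python
# Fields the (hypothetical) project operator owns; anything else on the live
# object is left alone.
CONTROLLED_FIELDS = {"contact-email", "cost-center", "network-zone"}

def reconcile(cr_spec: dict, live_annotations: dict) -> dict:
    """Return the annotations the project should have after reconciliation.

    The CR is the source of authority: manual edits to controlled fields are
    reverted, while user-owned fields survive untouched.
    """
    desired = dict(live_annotations)
    for field in CONTROLLED_FIELDS:
        if field in cr_spec:
            desired[field] = cr_spec[field]  # revert any manual change
    return desired

live = {"contact-email": "someone-else@example.com", "team-note": "keep me"}
cr = {"contact-email": "owner@example.com", "cost-center": "4711"}
print(reconcile(cr, live))
```

In a real operator this runs on every watch event for the CR, so drift is corrected within seconds rather than at some audit interval.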
We are controlling the resource limits that people get, because this is how they are being charged. And we enforce certain validations. At some point we started sending them emails, and over time there is now even more integration. You can even just toggle whether you want your own dedicated Tiller in your project, so you're able to deploy Helm charts with or without the shared cluster-wide Tiller. This is going away now, but for the current stable version of Helm this worked quite nicely. In the end this also allowed us to integrate the onboarding process into the overall company-wide order portal, because the order portal just had to talk to the OpenShift API. It got a very narrowed-down service account, with RBAC only on the CR of the custom project definition, which the operator picks up and then applies onto the various objects. From there on, when people order a new project, the CR is written and afterwards fully managed by the operator. And actually, not only one operator is running and doing things like that: as I mentioned, there are operators for network policies, and operators managing all the egress services through which traffic leaves the cluster. Every connection to the outside of the cluster needs to go through a controlled point; one way to do that is using egress services in OpenShift, and the way they are provisioned is also through operators, where people get a stripped-down definition. There are also lots of defaults, as was mentioned in the previous talk, so you can really steer your environment towards something. But operators were not the only thing done at SIX to integrate things on top of OpenShift.
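To make the approval-process gluing from earlier more concrete, an operator of this kind essentially renders an approved firewall request into a NetworkPolicy object. The sketch below builds such a manifest as a plain dict; the request fields and annotation key are invented for illustration, and a real operator would of course submit this through the Kubernetes API.

```python
def approved_request_to_networkpolicy(request: dict) -> dict:
    """Translate an approved flow request into a NetworkPolicy manifest."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {
            "name": f"allow-{request['source_project']}",
            "namespace": request["target_project"],
            # Keep a trace back to the approval ticket for the audit trail.
            "annotations": {"approval/ticket": request["ticket_id"]},
        },
        "spec": {
            "podSelector": {},  # applies to all pods in the target project
            "ingress": [{
                "from": [{"namespaceSelector": {
                    "matchLabels": {"name": request["source_project"]}}}],
                "ports": [{"protocol": "TCP", "port": p}
                          for p in request["ports"]],
            }],
        },
    }

policy = approved_request_to_networkpolicy({
    "source_project": "frontend",
    "target_project": "backend",
    "ticket_id": "CHG-1234",
    "ports": [8080],
})
print(policy["metadata"]["name"])  # allow-frontend
```

The point of the annotation is the audit requirement mentioned before: every generated policy stays traceable to who approved what, when.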
Yeah, of course, we had to integrate monitoring and metrics into the SIX environment, which is strictly regulated, so we had to engage with the existing setup. So here is an architecture overview. On the left side you see the endpoints where the customer comes in. There is Grafana, which has integrated authentication. It goes to the Trickster service, a really cool piece of software which caches the Prometheus HTTP API and makes everything pretty fast inside Grafana. Trickster then goes to Thanos Query. Thanos is also a great product, which gives you federation of the Prometheus instances and lets you query them through the Thanos sidecar. We also gave our customers the possibility to access the data not only through Grafana but also directly via Thanos. There we had the challenge that Thanos and Prometheus don't have an authentication layer, so we put Keycloak in front of them, so that in the end everything is authenticated. What we also had to do, and this is the most important thing: how do we alert? The Alertmanager part was missing, and at SIX we have a strict process for how we alert in case of a service impact. So here is the big picture of how we do the alerting workflow now. The developer has the opportunity to label his objects; that can be a pod, a deployment configuration, a persistent volume claim, and so on. He labels his objects to say: hey, I want to enable alerting. That's the first thing. The second thing is that he can say: I want a phone call, I want email alerts, or I want SMS as well. So he labels his objects. After that, we automatically scrape his objects into Prometheus. The next thing we do is relabel and enrich those metrics with, for example, the business unit, his service codes, and things like that. So we enrich all those metrics based on his labels. After that, Prometheus evaluates the alerting rules.
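The enrichment step just described can be illustrated as a simple label merge: the developer's opt-in labels are combined with business metadata looked up per project. The lookup table and field names below are invented placeholders for whatever the relabelling actually pulls in.

```python
# Hypothetical per-project business metadata, e.g. maintained alongside the
# project definition CR.
PROJECT_METADATA = {
    "payments": {"business_unit": "cards", "service_code": "SVC-042"},
}

def enrich(metric_labels: dict, project: str) -> dict:
    """Merge business metadata for a project into a metric's label set."""
    meta = PROJECT_METADATA.get(project, {})
    return {**metric_labels, **meta}

labels = enrich({"alerting": "enabled", "severity": "email"}, "payments")
print(labels["business_unit"])  # cards
```

In the real pipeline this corresponds to Prometheus relabelling at scrape time, so every series already carries the routing information the downstream alerting system needs.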
If there is an alert, of course, we fire it. Next, and this is a really important point: we re-validate that all those labels are valid, so that we can send the alert on to the existing alerting system. And at the end, the operator, or the developer as well, gets the alert. So we did not have to reinvent the wheel. As I said before, we have a strict alerting process. That means we had to adapt, or let's say customize, the short-lived things that containers are, and bring them into our alerting system. As I said: don't reinvent the wheel. We have a lot of departments; we even have one floor dedicated only to alerting, so you have to get in touch with them, of course. Also, as we heard before: self-service. Developers do not have to get in touch with us to enable their alerting, which is pretty cool; they can just enable it on their own. We have predefined alerting rules, and we have also open-sourced them inside SIX, not publicly. So they can even send us a pull request to improve an alert. For example, if they have a JVM metaspace alert which they want to integrate, it can be reused by another project or team, which is pretty cool. So: don't repeat yourself. Another good point is that we do not have a lock-in on an alerting system. That means if we ever change that old system, or rather the good system we have at SIX, we can just adjust our configuration and send the alerts to the new system, which is pretty cool. And of course, as I said before, self-service is really important for us: the developer can enable alerting on his own and can even contribute to the alerts of the whole company. Yeah. And to summarize a little bit about the SIX story: as Dominic mentioned, everything started with a POC, with one person doing all the work and one person managing all the things around the project. But people started onboarding things very early, listened to the customer feedback, and had a very iterative cycle.
So things then grew, and people adapted as they went on. For example, as it became clear that there are so many network zones to be managed, things needed to be automated, so operators also became important for a lot of the network management. And I think one important thing, if you come with new technology to a company where a lot of things already exist, is that you build bridges into the different existing environments, because this, A, lets you enable things that are already there, and B, also builds bridges between new and old teams, and maybe lets them move things over. One of the interesting things, which now becomes challenging for the platform, is that the team hosting the internal container registry is actually now hosting it on top of OpenShift, which is a bit of a chicken-and-egg situation. But they were so happy with how OpenShift runs that they said: okay, for the rest of the company, where we're offering these hosting services, we're just going to run them on it. And you can really tailor OpenShift to your established processes when you use CRDs and operators to streamline things on the outside. Yeah, some words for the end: we are actually hiring, so get in touch with us. And of course we always try to contribute to open source and such; within SIX that's not easy, but we are really working on open-sourcing things like the project provisioner. Yeah, join us on our path to OpenShift 4.x. Thank you very much.