Good afternoon, and welcome everybody. I'm really pleased that you have joined this session to see how dormakaba leverages and deploys on Cloud Foundry. My name is Adriano. I'm working at dormakaba on the most thrilling projects, and in the remaining time I contribute to and maintain some open source projects, most of which are used within dormakaba too. Quite often I'm an unconventional thinker; I always have the drive to challenge the status quo. And last but not least, I'm happily married to the best woman in the world, and we have two beautiful kids. And this is Michael, not my kid, right?

Yeah, I'm Michael, CEO of a newly founded consultancy called Wölkli GmbH, a Swiss term for "small cloud", mainly focused on cloud technologies such as Cloud Foundry. I don't have any kids, but two beautiful cats, so that's something.

Well, who of you has heard about dormakaba so far? Raise your hands. Who knows dormakaba? At least half of you should know them. OK. These digital components on the doors that you see at this conference, these are Kaba locks, digital Kaba locks. I will try to tell you a little bit about dormakaba, and then we switch to more technical stuff.

The story of dormakaba is a long one. It starts more than 150 years ago. On the Kaba timeline that you see there, in 1862 the first locksmith shop opened. And on the Dorma timeline, in 1908, Mr. Dörken and Mr. Mankel started the journey on the Dorma side too. All those years of experience merged together last year, in September 2015, to form the new dormakaba group from the Dorma group and the Kaba group. The merger has created one of the top three companies in the global market for security and access solutions.
With pro forma sales of more than 2 billion Swiss francs and around 16,000 employees, dormakaba is active in over 50 countries and has a presence, through both production sites and distribution and service offices, in all relevant markets.

The growth drivers that shape our industry are: urbanization, with infrastructure that determines security needs and an increasing number of megacities; increasing prosperity in emerging markets, with tourism on the rise all around the world and people demanding protection for themselves and their property; demographic change, with an aging world population and its consequences for building requirements; the increasing need for security caused by globalization and more geopolitical risks; and the more obvious one that we focus on today, technology. For example, the whole digitization with the Internet of Things, cloud computing in general, and changing business models, like software-as-a-service offerings.

Our innovation portfolio is driven by market trends and customer requirements, technological trends, and corporate strategy. Intellectual property management and product design are part of our innovation management. We are committed to investing in research and development in order to leverage the opportunities of dormakaba and the digital transformation of our industry. We are investing four to five percent of our annual turnover in innovation and product development.

One of our latest innovative solutions is Exivo. Perhaps you have seen this logo somewhere on the Internet. But what is Exivo? To give you a rough idea of what it really is, I will show a little explanation video.

So, this is Kaba Exivo. Kaba Exivo is the access solution for enterprises that need an access solution but don't want to waste time on it. How does that work? Very easily and conveniently: the doors in question are secured by electronic locking components.
And you decide, quite simply, who is allowed to open them and when. The access rights of every single medium can be changed or withdrawn at any time. Your Exivo partner does that for you with Kaba's web-based solution, which is part of Exivo, leaving you to fully concentrate on your business. Or you can do it yourself: you choose which jobs are to be performed by your partner or, if you prefer, by yourself. You have full cost control and transparency at all times. The monthly amount depends on your requirements; once defined, it stays the same every month.

Pretty cool, don't you think? I think so. For those who know Kaba from earlier, this is really a big step: from 150 years of history as a traditional company to such a platform. As you see in this picture, it's not just a new access control solution. It's really one platform to rule them all. The whole customer experience chain is on that platform. For example, the partner can plan the doors, offer them, and order them. The market organizations can review the order and accept it. The connected factories can immediately produce the hardware and ship the door parts. And the customer has immediate access to the platform, to the customer application part of the system.

So in cloud-native-app speak, we can say our front-end apps are on Cloud Foundry. Customer, partner, market organization, administration, support, and a lot more: most of them are single-page applications written in React, and all of them are on Cloud Foundry. But our back-end services, our back-end apps, are on Cloud Foundry too. The whole business domain stuff, the identity management, web servers, a lot of APIs and workers, all of that is on Cloud Foundry. Our IoT stack is completely on Cloud Foundry.
So the whole end-to-end real-time communication between the business part and the IoT devices, all of that is on Cloud Foundry: messaging, signing, device authentication, firmware updates, visualization, and a lot more. And you can trust me, there are a lot of security topics in this area. I think we can proudly say that this IoT stack is in production on Cloud Foundry today.

So yes, the whole business model, the whole customer experience chain, is on Cloud Foundry, from acquisition to lifecycle management. Everything is a service, right? We do not just deliver products anymore. We deliver, maintain, and sell services.

So we can say everything is on Cloud Foundry, right? But why? Who does this? Yeah, we are motivated to do everything on Cloud Foundry. It's fun, it's cool, and it's not a joke. It's really not a joke; we're serious. We at dormakaba really wanted to focus from the beginning on the application part and nothing else. We don't want to grow know-how in topics like OpenStack, and we don't want to maintain or operate our own PaaS either. No, we want to concentrate on our applications. That's why we were first of all looking for a great PaaS solution, and this is obviously Cloud Foundry, and then for an extraordinary partner, not just a common service provider. And for us, this is Swisscom.

OK, let's take a little step. Can you raise your hands if you deploy more than five apps in parallel to production? More than five, OK? More than 10, 15, 20, 50? OK, perhaps we can talk a little bit more afterwards. We have approximately 80 cloud-native apps per stage, divided into two bigger parts: the whole business domain part, all the front-end and back-end apps that are related to the business, and the IoT stack, kept separate. That way, the customer and business information does not leak into the IoT stack.
So the IoT stack knows nothing about the whole business part; it is just focused on the end-to-end communication, the security stuff, and so on. All these yellow points that you see on these illustrations are our apps, really apps, and the black lines are our app-to-app connections. You don't see any services visualized here, but in the numbers you have perhaps noticed that we have roughly as many services as apps. And this is, for us, the nature of microservices.

But we have more than just microservices. We have patterns, we have concepts. Stuff like event sourcing and CQRS gives us great power to do really magical things. Because everything is loosely coupled, this enables us to even replace the programming language, app by app, without touching everything. The 12-factor app methodology is a great starting point for making cloud-native apps; I really recommend it. The shared-nothing architecture approach gives us the possibility to quickly replace a database or a service without touching the whole system, because we have pretty much a dedicated service for each microservice app. And being tolerant of failure forces us to write reliable code. For example, if an app detects that a connection to a service is broken, we kill the app immediately. This keeps the consistency of the system safe.

Then there is stuff like distributed domain-driven design. This triggers us to think about the intended behavior of what we have to do, and not directly in data or code. And I have to be honest, this is an approach that not every developer is good at. We tend to directly imagine stuff and model it in our brains into: OK, I have to write this function, this abstraction, and this data structure. Domain-driven design really pushes us not to do this. And in everything we do, we have to be really concerned about security, because we make security, right?
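The fail-fast rule described above can be sketched in a few lines of Node.js. This is an illustrative sketch, not the actual dormakaba code; the function name `failFast` and the injectable `exit` callback are assumptions made for the example.

```javascript
// Fail-fast sketch: if the app detects a broken service connection,
// it exits immediately and lets Cloud Foundry restart it cleanly.
// `connection` is anything that emits an 'error' event (e.g. a DB driver).
// `exit` is injectable for testing; in a real app it defaults to process.exit.
function failFast(connection, exit = (code) => process.exit(code)) {
  connection.on('error', (err) => {
    console.error('service connection broken, exiting:', err.message);
    exit(1); // crash fast; the platform keeps the system consistent
  });
}

module.exports = { failFast };
```

Crashing and letting the platform restart the app is what keeps state consistent here: the app never limps along with a half-broken service connection.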
Access control is important. And not just one layer of security, but multiple layers of security.

OK, let's have a look at our CI landscape. It's more or less traditional. We have Mr. Jenkins, and Mr. Jenkins does a lot for us. For example, by having access to our code, he can run unit tests, integration tests, and end-to-end tests and produce reports. With the help of BrowseTech he can do the end-to-end test stuff nicely, and the results can be written back to JIRA, Slack, mail, and so on. Another job is the whole deployment, so he deploys to our Cloud Foundry instances. We use the Swisscom App Cloud as well as a virtual private Cloud Foundry installation, also by Swisscom. Both offer the possibility to use MongoDB, Redis, Atmos, RabbitMQ, and our preferred app runtime, Node.js. Currently, all our apps are written in Node.js.

We have six different stages, and each stage is also a Cloud Foundry space. For example, the develop stage is fully automated to ensure that our continuous integration pipeline works. This means every time a developer commits code to our develop branch, our integration branch, Jenkins runs the unit tests, the integration tests, and the end-to-end tests, then deploys it to the public instance of Cloud Foundry. Afterwards, it executes further integration tests and end-to-end tests directly on Cloud Foundry.

The test and review stages, also deployed on the public cloud, are used to review feature branches and to test specific stuff manually, because we don't have just software. We have hardware too: the doors, different firmware versions, and other systems in the back end. And this stuff can hardly be tested automatically, so we need a sort of manual testing from time to time for different parts of the system. Then we have this stage called next.
So I can say we have a continuous integration pipeline that runs automatically, but our continuous delivery is not fully automated. For this, we have the next stage, which is deployed on the virtual private installation. There, we choose the next release candidate, and normally this happens every two weeks. The release candidate is deployed there, and we run deployment tests and stress tests specifically for this release candidate. Afterwards, we push it to the staging stage, also on the virtual private installation. On this stage we have our in-house sites: everything that dormakaba uses internally is deployed there, and all the doors connect there. This is where we eat our own dog food, right? It is simply a measure to make sure that, before we push to production, we can feel first-hand what we have coded and whether the features work. After that, we deploy to production, also on the virtual private installation.

But now, let's take a deeper look at this blue deploy arrow. So I'm now going to talk about the whole journey dormakaba has experienced in deploying to Cloud Foundry. Let's first check out where it all began. It started out fairly traditionally, with Cloud Foundry manifests and the good old cf push. In the initial development cycles there weren't 80 apps yet, more like 50. For each app, you pushed it, you contextualized it, you bound it to services, you mapped some routes, everything manually. As you can imagine, that doesn't really scale well. Well, it scales with humans, but humans are hard to scale. So we, like probably everybody else, just wrapped the CLI in a shell script. We started writing shell scripts, and scripts that execute other scripts, and that grew kind of fast. And it was kind of slow when you do all those actions and always wait for each action to complete. It took quite a long time for an 80-app deployment.
So, next logical step: parallelism in shell scripts can be done. Each app was deployed in a parallel fashion, but then suddenly we started experiencing failures in deployments all over the place. Try to create 50 service instances of the same type on a service broker, on any provider, and wonder whether that will work out. So we saw some errors. The next obvious step was to wrap it in a higher-level language to get some proper error handling capabilities. So we converted our shell scripts to Node.js, wrapped the CLI there, and had some nice abstractions around the CLI so we could get some reliability in. However, we still had unexpected failures. The CLI didn't give us the proper HTTP error code, so you had to parse the text in the debug mode of the CLI. It wasn't really happy days.

At the end we said: stop, we need something more sophisticated, something reliable, something flexible for our application management. So we started looking around for how to do this. Quick question to the audience: who had a similar journey so far with deploying to Cloud Foundry? Who started with cf push and then wrote shell scripts? One, two, yeah. And who then went further and did the same thing in a higher-level language? One, one and a half, two, three, four, OK. For those who went even further and came up with something they call super sophisticated: please tell us, because we were looking for stuff and didn't really find anything.

So what we did is we created what is now called Push to Cloud. What is Push to Cloud? Push to Cloud is two components. It's our open source software that allows us to do, in separate steps, a proper state definition of all our Cloud Foundry entities: apps, environment variables, routes, everything. On the other side we have workflows, so that we can remediate that state on our current deployment. We separated those for reasons I'm going to talk about later.
So let's start looking into what we call state definition. Of course, you start with your app. My app is called foobar, needs a couple of instances, this much RAM, a buildpack, the usual stuff you have in a CF manifest, right? We added environment variables to contextualize your application. You add service bindings: you want your app to be bound to services. All of this just describes the state you would like to have. You bind some routes, and then we have, for instance, a nice feature where you can define app bindings. You define that your app needs to talk to another microservice. It's a usual case in a microservice architecture; however, it's not often a first-class citizen in application configuration. We implemented it, for example, like this. It's just a plug-in, but yeah: if app A needs to talk to app B, app A gets in its environment one of the routes exposed by app B as a value, as well as a username and password for basic auth. So by just declaring that I need to talk to the other app, I automatically get where to connect to and how to authenticate, so I know I'm talking to the right party.

So now you have your applications, one or, in the dormakaba use case, 80. And oftentimes versions still matter. We all still dream of continuously deploying each individual microservice directly into production, but sometimes version numbers are important, as there are breaking changes. So you need to somehow orchestrate what a release is that we know works. This is what we call a release: you just define your apps, where to find them in Git, and which versions to use.

And now we have this release and want to deploy it. This is then what we call a deployment. For all the BOSH people in the room, this probably starts sounding very similar. A deployment is just a definition of a target, where we want to deploy this release to, and which actual release to deploy.
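A state definition in the spirit described above might look like the following. The field names and structure here are illustrative assumptions, not the exact Push to Cloud schema; they just mirror the concepts from the talk (app, env, service bindings, routes, app bindings, release).

```javascript
// Hypothetical state definition: an app and a release that pins versions.
const app = {
  name: 'foobar',
  instances: 2,
  memory: '512M',
  buildpack: 'nodejs_buildpack',
  env: { LOG_LEVEL: 'info' },        // contextualize the application
  serviceBindings: ['foobar-db'],    // services the app wants to be bound to
  routes: ['foobar.example.com'],
  appBindings: ['billing'],          // "I need to talk to the billing app"
};

// A release orchestrates which versions of which apps belong together.
const release = {
  name: 'exivo-release',
  version: '1.4.0',
  apps: [
    { name: 'foobar', git: 'https://example.com/foobar.git', tag: 'v2.3.1' },
  ],
};

module.exports = { app, release };
```

With the `appBindings` entry, the tooling would inject one of `billing`'s routes plus basic-auth credentials into `foobar`'s environment, as described above.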
It also has some defaults that we can polyfill into our applications: in production we want three instances as a default and a gig of RAM, whereas in development one instance and a maximum of 512 MB is perfectly fine. For the deployment we also do what we call service mappings. The app developer just says, I need a service binding to my foobar-db. And here you can translate: foobar-db for this deployment translates to a large MongoDB plan, or in the case of development, a small MongoDB plan. This is done on the deployment level, as when you switch deployments you usually have a different mindset about what you want to achieve there. And we also have secret stores that you can configure; currently the only supported one is HashiCorp Vault. You define where to find your secrets, so the passwords we saw before in the app connections are gathered there.

Now we have all those definitions. What do we do with them? We just feed them into a compiler. We've written this compiler: you feed in the deployment, it retrieves all the required files and gets you a deployment configuration. This deployment configuration is basically the same data, just structured in another way that's easily machine readable. And it is a complete definition of everything you want to have happen on your Cloud Foundry.

When we now go back to the workflow component: we usually have an actual state, and now we changed something, so we have a new desired state. This new desired state is exactly your deployment configuration. You do your state definition, compile it, and you have your desired state. We've chosen the format in a way that we can easily retrieve the exact same data structure from Cloud Foundry. And everybody who does a little bit of programming knows that diffing two data structures is kind of easy. So what do we plug in between? Workflows, to remediate the state. How does this look? Now comes a slide with some code, watch out. Can anybody read this? Cool.
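The "diffing two data structures is kind of easy" idea can be shown in a minimal sketch. This is not the Push to Cloud implementation, just the core concept: compare desired state against actual state and derive what is missing and what is old.

```javascript
// Diff sketch: given a desired list and an actual list of entities
// (e.g. route names), compute what must be created and what must go.
function diff(desired, actual) {
  return {
    missing: desired.filter((x) => !actual.includes(x)), // create these
    old: actual.filter((x) => !desired.includes(x)),     // remove these
  };
}

module.exports = { diff };
```

For example, diffing desired routes `['a.example.com', 'b.example.com']` against actual routes `['b.example.com', 'c.example.com']` yields `a.example.com` as missing and `c.example.com` as old; the workflows then remediate exactly those deltas.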
So this is what our workflows look like. What you see here is an example of a blue-green workflow that's available up on GitHub. I've color-coded some elements I'd like to talk about in a little more detail. First up, in green at the very top, we have the deployment config. This is the parameter you pass in: I want to use this workflow, and this is my new desired state. In blue we have our API. The API here is basically your target: we put your target information into a Cloud Foundry adapter so we can talk to one Cloud Foundry. And if multi-cloud is something for you, this could also just be an array of APIs, and then you talk to a whole set of Cloud Foundry endpoints and deploy your apps on multiple ones. Push to Cloud is, in theory, completely ready for this. In yellow we have what we call workflow utilities: your traditional waterfall, do one step after another, or a map function. This was heavily inspired by functional programming. You just map a function, like create the routes, over missing.routes. Missing, again, is just a helper that gives you what is in your desired state but not yet in your actual state. And we have the same thing with old.

This kind of looks like a DSL, but just to be perfectly clear, this is JavaScript. Due to its nice dynamic nature we can make the workflows look like a DSL. But if you at any point want to call out to your own service, like your ITSM, or want to send an email, you can easily do that: it is JavaScript all the way through the workflow.

We're running out of time, so to quickly summarize: we have the two components in Push to Cloud. We have the compiler, which looks after the state definition and merges it together into a readable format for the workflows, which remediate your state. Of course we have a CLI to instrument the whole thing, and at the very bottom we have a kind of awesome CF adapter. So if you are ever in Node.js and want to do anything against Cloud Foundry, at least have a look at the adapter.
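A toy version of that workflow style might look like the following. The helper names (`waterfall`, `mapAll`), the `api` methods, and the `missing`/`old` shape are assumptions for illustration; the real blue-green workflow on GitHub is richer, but the idea of plain JavaScript that reads like a DSL is the same.

```javascript
// Workflow utilities in the functional style described on the slide.
const waterfall = (steps) =>
  steps.reduce((p, step) => p.then(step), Promise.resolve());

// mapAll returns a step: when run, apply fn to every item in parallel.
const mapAll = (items, fn) => () => Promise.all(items.map(fn));

// A toy blue-green-ish workflow: `api` stands in for the CF adapter,
// `missing`/`old` are the deltas between desired and actual state.
function blueGreen(api, { missing, old }) {
  return waterfall([
    mapAll(missing.routes, (route) => api.createRoute(route)), // new routes
    mapAll(missing.apps, (app) => api.push(app)),              // new versions
    mapAll(old.apps, (app) => api.delete(app)),                // retire old
  ]);
}

module.exports = { blueGreen };
```

Because each step is just a function returning a promise, you can splice in anything between steps, such as a call to your ITSM or sending an email, exactly as noted above.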
Very briefly: we believe that with Push to Cloud we have written something very sophisticated for application configuration. You've seen the whole picture we have at dormakaba: we can deploy all those complex microservices easily with Push to Cloud. Once the state definition is there, changes are easy to make, and the workflows allow us to easily apply them. For us, the strong notion of having a release and a deployment was also important. Release: it's the whole Water-Scrum-Fall story, right? We usually deploy only about twice a year, so you need to be able to fix which versions to use; and deployment, obviously, if you want to deploy to multiple stages. The whole thing is target-platform agnostic, so it works on any certified Cloud Foundry platform.

The workflows are easily customizable. A little example, although we're almost running out of time: Swisscom, for instance, just offered a new MongoDB service plan with a newer version, and instead of migrating our data ourselves, we just wrote a workflow. A workflow that pushes a Docker app, creates the new service of course, binds the app to all the new service instances, migrates our data over with mongodump and mongorestore, and then maps the services back to our apps, and we're done. So we can use Push to Cloud's workflow engine even for normal operations tasks; it's not just for your actual app deployments.

The whole thing, as we mentioned, is extensible. We've taken a plugin-based approach throughout the whole piece of software. If you want to use another secret store than, for example, HashiCorp Vault, your own proprietary one or whatever, it's just a matter of writing a plugin. We have APIs; just implement those APIs, put the plugin in, and you're set. And it's all open source, of course.

Since Push to Cloud was presented at the last summit, there are some features we've been working on. First of all, Docker support: you can now just push Docker images instead.
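The plugin idea for secret stores can be sketched like this. The interface shown, a single `getSecret(key)` method, is an assumption for illustration, not the actual Push to Cloud plugin API.

```javascript
// Plugin sketch: anything implementing getSecret(key) can act as a
// secret store. A Vault plugin would call the Vault HTTP API here.
function createSecretResolver(plugin) {
  if (typeof plugin.getSecret !== 'function') {
    throw new Error('secret store plugin must implement getSecret(key)');
  }
  return (key) => plugin.getSecret(key);
}

// An in-memory plugin, handy for tests or local development.
const inMemoryStore = (secrets) => ({
  getSecret: (key) => Promise.resolve(secrets[key]),
});

module.exports = { createSecretResolver, inMemoryStore };
```

Swapping Vault for a proprietary store then means writing one such plugin and configuring it, without touching the rest of the tooling.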
TCP routing is in there. Something we're really proud of is our retry and error statistics. Even though Push to Cloud has retryability, so we can redo failed API calls until they succeed, it is nice to have some statistics on which calls fail and how often. We now collect these statistics and can easily provide them to our provider. And there are other minor things, such as the custom retry handler. Sometimes one provider has a different load balancer in front of their API nodes, and some calls return a slightly different error status than in the spec. You can now have a different retry handler per provider.

So who is all of this backed by? We have three partners at the moment in the whole Push to Cloud project. There is the HW university from Zurich, so academia is in there. We have dormakaba from industry, and we have a service provider with Swisscom. So we have the three, I think, pillars of writing cool stuff. So this is it. Thank you for your attention. Any questions? Yep, very back.

So, how do you avoid manual changes to your application configuration? For example, maybe I'm in operations and I see this app is running out of memory, so I manually change the memory allocation of this app from the configured value to 1024. The next time you deploy, obviously it's going to be reset to the value in the file. Do you have an ops guide for the application, or something in place for running it in your landscape?

At dormakaba, the ops guy is basically that guy over there. So it's not a lot of people, and we currently have no safeties in place there. But the whole idea with the workflows is: don't use the ones on GitHub directly; customize them to your needs. So if you often have operators manually changing things, just do a pre-flight check: if the memory is set higher than I have configured, take the current value instead of my configured one. Exactly this example we have enabled at dormakaba.
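The retry-with-statistics feature described above can be illustrated with a small sketch. The function name and the per-error-code counting scheme are assumptions for the example, not the Push to Cloud internals.

```javascript
// Retry sketch: redo a failing API call until it succeeds, counting
// failures per error code so they can be reported to the provider.
async function retryWithStats(call, { retries = 5, stats = {} } = {}) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return { result: await call(), stats };
    } catch (err) {
      const code = err.code || 'unknown';
      stats[code] = (stats[code] || 0) + 1; // collect failure statistics
    }
  }
  throw Object.assign(new Error('giving up'), { stats });
}

module.exports = { retryWithStats };
```

A per-provider retry handler would then just be a different policy deciding, based on `err.code`, whether a given status is worth retrying at all.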
If instances, memory, or other stuff were changed on the target and are higher, then we take those values and do not change them back down. Yeah, that was not a question.

How big are your releases? How many applications are in a release, all of them? Or are there different chunks? So, as you have seen before, on the complete stack we have 80 cloud-native apps. And normally, if we really hit this two-week release cycle, there are mostly five to ten apps that change. But if we made something groundbreaking on the architecture side, or, I don't know, a month ago we changed the whole logging library, then we have to update all apps simultaneously. So it depends a little bit, but we can do everything. Other questions?

Yeah, we discussed it and finally thought: OK, it would be really, really complex if we introduced something like that, because everyone would do it a little bit differently, and we would have to deploy additional services, something like Consul. So we said: for our use cases, we don't need it right now. It's OK that way; it's flat. Everything we need from the services is already injected into the environment by Cloud Foundry, and what remains on our side is the credentials part for these app-to-app connections, of which our many little apps have quite a few. So right now there is no built-in functionality for this, but it could be done with a plug-in. This approach also has the benefit that it runs everywhere: we have no external dependencies, we just use native Cloud Foundry features for what we call our service discovery.

Monitoring: do you have a dashboard attached where you can see which apps are on which landscape, the current state, and maybe some more information? So, we have two types of monitoring.
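The pre-flight check mentioned in the answer above, keeping operator-raised values instead of resetting them, can be sketched in a few lines. The function name and field set are illustrative assumptions.

```javascript
// Pre-flight reconciliation sketch: if an operator manually raised
// instances or memory on the target, keep the higher value rather
// than resetting it to the configured one on the next deploy.
function reconcileScaling(configured, actual) {
  return {
    instances: Math.max(configured.instances, actual.instances),
    memory: Math.max(configured.memory, actual.memory), // in MB
  };
}

module.exports = { reconcileScaling };
```

So a deploy with `{ instances: 1, memory: 512 }` configured against a target an operator already bumped to 1024 MB would keep the 1024 MB rather than scaling the app back down.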
The one that we usually use is the one that Swisscom offers: the whole portal with all the apps that are there. This is a first step. For more details, the details that come from inside the applications, we have another monitoring system where, with our custom-defined alerts, we can do monitoring really specific to what we have there. We have done nothing with the Push to Cloud tooling itself right now. But what we did some time ago is a little prototype using only the Cloud Foundry adapter of Push to Cloud: we speak directly with the API, retrieve the information at a certain interval, and can show that through WebSockets in a little web interface. But it was just a little prototype, nothing that we use right now, because Swisscom does the job for us.

Anything else? It's OK. Just if somebody is interested: dormakaba is hiring. Yeah? Well, thank you for having us. Thank you.