Hi, everybody. Welcome to my talk, Beyond CF Push: Managing Large Microservice Applications Like a BOSH. My name is Christof Marti. I'm a senior lecturer and researcher at the Zurich University of Applied Sciences, and I'm the lead of Cloud Platforms at the ICCLab, the InIT Cloud Computing Lab. The ICCLab has existed since 2012, and we work at the forefront of cloud computing research, active in several European research projects and also working with Swiss SMEs, implementing solutions in the area of cloud computing. In this talk, I would like to take you on a little journey of what we experienced, what it means to manage complex microservice applications, and I will also show you some of the tooling we used and developed during this journey.

Managing applications in Cloud Foundry is simple. If you are using the Cloud Foundry command line (everybody loves the CF command line, everybody loves CF push), especially if you combine it with a manifest, it's quite simple to manage simple applications. But is it also suitable for more complex applications? Specifically, if you go to microservice architectures with a lot of components you have to deploy, a lot of services you have to bind, consisting of maybe 70, 80, or more than 100 applications you have to push and manage, where each of these components can be available in multiple versions running on different environments. In this talk, we will show you how we approached this. On this journey, the ICCLab was not alone. We worked together with dormakaba. dormakaba is one of the top three companies in the global market for physical security and access solutions, formed by the merger of Kaba in Switzerland and the Dorma Group in Germany, which happened last year in September. In addition, we were supported by Swisscom, Switzerland's leading telecom provider and one of its leading IT providers.
And you may know Swisscom because it's a gold member of the Cloud Foundry Foundation and also a certified provider of Cloud Foundry. It all began when Kaba decided to develop an application which should disrupt the market for access solutions for your premises. They decided to implement a new, easy, convenient, and secure solution to plan, monitor, and get access to your premises. This product was called Kaba exivo. The idea of exivo is to integrate the ordering, the presales, the offering, the installation, the support, and also the operation of your access solution, your doors, your devices, your keys, et cetera, by integrating everything into one central solution. By keeping the data central, it was much easier not to lose any data, and to give the partners of Kaba access to the same data the customer has, the same data Kaba is using to provide you with the keys, the locks, and all the security systems. It is a completely web-based solution. You don't need any server, any hardware, or any software on-premise. The only thing you need is a small box which keeps the system running if the internet connection goes down, as the whole solution runs in the cloud. In addition, it was designed from the beginning as a microservice-based, cloud-native application. It uses command and query responsibility segregation (CQRS) and an event-sourcing architecture, which is very suitable for this kind of application where high security and reliability are required, and it also uses a distributed domain-driven design approach. From the beginning, all the components were designed to be twelve-factor-app compliant. But this also means each component consists of several units of deployment, that is, several applications are required for each microservice.
As you see on the graph on the slide (I'm not sure if you can see it), you need several components for each microservice, and you need bindings to several services. And this was one of the major problems in the end, because it raised the number of services and applications required to much higher numbers. Just to give you an overview of the system itself: the system is split into two layers. It consists of an IoT layer, which is the back end handling the connections to your periphery, like the devices, the locks, and this little box, which is called the access manager. And there is the application layer, which consists of the front-end applications: specific applications for the customers, specific applications for the partners of Kaba, and also applications for managing the whole system, reporting, connecting to the billing systems, et cetera. The whole business domain of the domain-driven approach runs in the application layer.

To give you a feeling for how large the system had gotten: when we joined the project, there were about 50 applications running in the application layer, using 48 services, with 150 instances of the applications running, so roughly three instances of each application. The IoT layer was a little bit smaller: it contained 30 applications, 18 services, and 90 instances. Still quite a lot of applications to manage, specifically if you consider that applications can move from version 1 to version 1.1 to version 2, et cetera, and you have to keep an overview of the whole system. And this was one of the main challenges for Kaba at this point in time, when they were developing the system.
The first point was keeping an overview of apps and services, because the number of apps was large, the number of services was large, the naming of services was not consistent, and it was hard to track the service bindings and which application is available on which routes. Then, as I said, you have different versions of each component. And you also have application interdependencies: one application may depend on another application, or on a specific version of another application. Even if you are using 12-factor apps, this can be the case, and you have to consider it. And it was difficult to coordinate updates of the apps and services; coordinating this took time to plan and implement. The system they used was based on shell scripts and the CF command line tool, and the increasing number of applications also increased the effort to maintain these scripts, and to maintain them for different environments. There was an environment for the developers, developing locally. Then there was an environment for the CI/CD system to deploy to the testing, staging, or production environments. And all these scripts had to be maintained; this effort was getting very large. Also, with the rising number of applications and services, the deployments got slower and more unreliable: services not coming up within a specified or reasonable time, applications dying because a service was not available, applications dying because another application was not available. These kinds of problems, I think, many of you have also experienced. Because Kaba wanted to focus on their main business, developing their application, they asked the ICCLab to create the concept and engineering of a continuous deployment system for exivo. The ICCLab had several years of experience using Cloud Foundry, installing it, maintaining it, and also teaching it in classes.
So it was a good opportunity for us to get more experience with this kind of application, because usually, within a research environment, you don't have such large applications. We started exploring what tooling is available for the kind of problems Kaba had. We also started thinking about how we could improve the shell scripts: could this framework of shell scripts be extended, et cetera? But in the end, we saw this is not really the solution. It works for smaller implementations, smaller applications, but as soon as you get larger, it's almost impossible. Then there are the cloud application management and orchestration tools, like Chef, Puppet, Ansible, et cetera. But they provide almost no support for the platform layer. They focus on infrastructure, with only minimal support for the platform layer. Specifically, if you want to migrate applications from version 1 to version 2, there's almost nothing available. And what most companies do nowadays is build their tooling on top of a CI/CD system. They create scripts or workflows to deploy the individual components of their applications within, and specific to, a CI/CD system. In this case, that was not an option, because it would mean rebuilding or mapping the system architecture on top of the CI/CD system. If you change your system architecture, then you also have to change the architecture of your CI/CD implementation and workflows, and they wanted to keep the CI/CD system as simple as possible. But in the end, we also knew there is a tool around which does this very well. That tool is called BOSH. BOSH is proven to manage large distributed cloud systems; I think there have been a lot of talks about BOSH at this summit. It is infrastructure agnostic, using the CPI. The most important concept for us was the separation of a release and a deployment.
And it also provides reliable and seamless upgrades and downgrades of applications. But here too, BOSH focuses on the infrastructure layer. BOSH is not built to manage your applications on top of Cloud Foundry, at least not today. So we came up with a concept, and we decided on three main points. First, we wanted to reuse the successful concepts of BOSH; I'll come to what these are. Second, we decided to apply these concepts to the application layer, so we manage the applications on top of Cloud Foundry and not the infrastructure. And third, what was also required for Kaba is to have very flexible, reusable, and extensible workflows. Each application behaves differently. Each has different needs when you upgrade it, specifically if you consider migrating your data from version one to version two; then you need a different workflow. You need different workflows for testing environments. Maybe you do a clean install in testing environments, but on staging and production, you would like to have blue-green deployments. And even within blue-green, you may have specific applications which behave differently; you want specific solutions for specific applications.

After some time working on this and implementing a prototype, we could improve Kaba's deployment time by more than a factor of two. The application release and deployment management was much simpler than before. And we implemented a workflow engine which provides these kinds of flexible, reusable workflows. The prototype was so successful that Kaba decided to reimplement it, to clean it up and refactor it in a product-ready way. And for this reason, I'm proud to announce today that Kaba is open-sourcing the tooling they implemented. We are open-sourcing it under the name push2cloud.
You see the web page, push2cloud.io, and also the Twitter handle, where you can reach us and get more information about it. What is push2cloud? push2cloud is an application management and deployment toolkit. It has sophisticated application configuration management, which I will show in a second. We implemented the same concepts as are available in BOSH: defining a release, an application release, consisting of applications and versions of applications, and separating the release from the deployment. The implementation is target-platform agnostic; we implemented a Cloud Foundry adapter, a kind of platform adapter. Currently, only the Cloud Foundry version of this platform adapter is available, so the first release is for Cloud Foundry, specific to the Kaba requirements. And we implemented a flexible, customizable workflow framework. The new implementation is very extensible and very modular. You can create your own plugins, add your own functionality, create your own workflows, extend existing workflows, provide workflows, et cetera. And last but not least, it is open source, and we hope that by open-sourcing it we can also help other companies make use of it for their deployments.

Next, I would like to show you some of the main concepts we used. First of all, one of the concepts was the definition of the configuration, the application configuration management. This is the schema we used; sorry, this one. This is the schema showing how we structured the configuration definitions. The basic unit of deployment is the application, on the right side. Applications can have dependencies on services, as we know it from Cloud Foundry, but applications can also have dependencies on other applications. An application is defined using an application manifest.
All the application-specific configuration, environment definitions, how much memory is used, et cetera, is defined in the application manifest, similar to a manifest you use with the CLI. What's specifically new is that we created a release. A release is a composition of applications, a list of applications with specific versions. Comparable to a release in BOSH, it is nothing else than a list of which applications belong to this specific release. A release is defined in a release manifest, maintained by a release manager who knows which applications work together. These components are still independent of the target environment. None of the configurations so far contain any information about the target environment, so we can use the same release for multiple target environments. A target environment in push2cloud is a CF organization and space, so an environment is defined as a space within an organization on Cloud Foundry. What makes the connection between the release and the running system is what we call the deployment. A deployment contains all the information required to run a release on a target environment. And here too, following the BOSH notion, we created a deployment manifest describing all the information required to do this. Based on this information, we can generate a deployment configuration. Starting from the deployment manifest, which references the release we want to deploy, which in turn references the applications contained in this release, we can create one large file, in memory or on disk, containing a normalized version of our desired configuration, the desired state of our system. Just to give you a short overview of what an application manifest may look like: it's currently a JSON file.
Because Kaba's implementation is based on Node.js, the whole system they are building is based on Node.js, it was a natural choice to use JavaScript and JSON for this. It could easily be adapted to use YAML, if you like. The application manifest defines some basic metadata (sorry, wrong slide), some application-specific configuration, like memory, et cetera. Then environment specifications and application connections: if you depend on another application, you can define which applications you depend on, and whether the system should inject connection credentials into your environment, so you can read from your environment how to connect to the remote system, or whether you do this in a different way. And you also define which services you would like to bind to. If you look into this configuration, you see there is nothing platform-specific; it's only names and numbers. The release manifest is even simpler. It's basically a list of applications which belong to this release. You can give one source where to get the application manifests, or you can define the source for each application individually, so it's also very flexible. Last but not least, the deployment manifest does the binding. Here you say what the target environment is, and you give all the information required to connect to Cloud Foundry. You define which release has to be deployed. You can define some application defaults: if an application does not define how much memory it uses, it will use the defaults from the deployment. Here you also define the service mappings, where we support wildcards. So if you have a naming scheme for your services, you don't have to define each service individually; you just define a wildcard on the service name, and then the type and the plan for all services matching that wildcard. And you can also configure some application-specific settings for this deployment.
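To make this concrete, here is a small, hypothetical deployment manifest in the spirit just described. All field names and values are illustrative and not the exact push2cloud schema:

```json
{
  "target": {
    "api": "https://api.my-cloud-foundry.example.com",
    "org": "demo-org",
    "space": "staging"
  },
  "release": "./release-manifest.json",
  "applicationDefaults": {
    "memory": "512M",
    "instances": 3
  },
  "serviceMappings": {
    "*-redis": { "type": "redis", "plan": "small" },
    "*-rabbit": { "type": "rabbitmq", "plan": "standard" }
  },
  "applications": {
    "example-api": { "memory": "1G" }
  }
}
```

The wildcard entries express the naming-scheme idea from the talk: any service whose name matches `*-redis` would be created with the given type and plan, while `example-api` overrides the memory default for this deployment only.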
So we can override some application-specific configurations. As I explained before, the deployment configuration in the end describes the full normalized configuration of a deployment. And we have two ways to build a deployment configuration. The way I described so far: using the deployment manifest, release manifest, and application manifests, we can compile the desired deployment configuration. What's more interesting is that we can also go the other way: go to Cloud Foundry, retrieve all the information about the applications, the space, the services, et cetera, and from that side also create a deployment configuration, which then describes the actual state of the running system. And the neat thing is, because the deployment configuration is normalized, we can now compute the difference between the two. We can see: these are new services, these are new applications, these are new routes which were not there before, these are new service bindings, et cetera. These differences can then be used in the deployment process to take only the minimal steps from the actual deployment state to the desired deployment state. And to do this migration from the actual to the desired state, we implemented workflows. A workflow in push2cloud is a collection of actions which are used to migrate from the actual to the desired state. As input to the workflow, we give the desired deployment configuration. The typical workflow starts by getting the actual deployment configuration from the system and creating the differences. Then, within the workflow, we define what steps to go through to deploy the new release of the application. We decided to use an imperative design, so workflows are code.
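The diffing idea can be sketched in a few lines: because both the desired and the actual deployment configurations are normalized, new and removed entities fall out of a simple key comparison. This is an illustration only; the names and shapes are invented, not the real push2cloud API.

```javascript
// Compare two normalized deployment configurations and report which
// apps, services, routes, and service bindings are new or removed.
function diffDeployments(desired, actual) {
  const diff = { added: {}, removed: {} };
  for (const kind of ['apps', 'services', 'routes', 'serviceBindings']) {
    const want = desired[kind] || {};
    const have = actual[kind] || {};
    // Present in the desired state but not in the running system.
    diff.added[kind] = Object.keys(want).filter((k) => !(k in have));
    // Present in the running system but no longer desired.
    diff.removed[kind] = Object.keys(have).filter((k) => !(k in want));
  }
  return diff;
}

// Example: the desired state adds one app and one service binding.
const desired = {
  apps: { 'example-host': {}, 'example-api': {} },
  services: { 'redis-1': {} },
  routes: {},
  serviceBindings: { 'example-api:redis-1': {} },
};
const actual = {
  apps: { 'example-host': {} },
  services: { 'redis-1': {} },
  routes: {},
  serviceBindings: {},
};
const diff = diffDeployments(desired, actual);
// diff.added.apps is ['example-api']; the workflow only needs to create
// and bind that one app instead of redeploying everything.
```

A workflow can then drive only the minimal actions from this diff, which is exactly why incremental deployments get faster.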
A workflow in push2cloud is JavaScript code, which is asynchronous and functional, and we also implemented some tooling to control the workflows and to run things in parallel. So we can deploy a number of applications in parallel, and we can make sure that the services are created one after the other, so as not to overload the service brokers, which was a common problem in the early stages. We also implemented some reliability features like timeouts and retries: you can specify the timeout and how many retries the system should do. And one extremely useful feature was a grace period. When an application starts, we wait for a certain amount of time to see if the application dies, and you can configure this time and how many retries it should do. If the application is still up after the grace period, we consider the application as successfully running.

Just to give you an idea of what such a workflow looks like: if you read it from top to bottom, it's quite easy to follow. You initialize the workflow, you package the applications, you create the service instances, you create the routes, you create the missing applications, you upload the missing applications, you stage the missing applications, you bind the services to the new applications, you start the new applications, you switch the routes from the old to the new applications, and you stop, unbind, and delete the old applications. Workflows are quite easy to read, and because it's plain JavaScript code, you can also easily extend it to your own needs. In the remaining time, I would like to show you how push2cloud really works and give you a demo. Deploying an application with push2cloud is basically three steps. Step one: define the deployment manifest, using any editor you like. Step two: compile the deployment configuration from the deployment manifest.
And step three is to execute the workflow, which then uses the deployment configuration and starts working. So I will give you a demo; I have to switch out of the presentation here. What you see is the deployment manifest for a demo application consisting of a front end and an API tier, where the API tier uses a Redis database for storage. What we defined here is the target environment and the release; we have the service mappings and also some application defaults in the deployment manifest. So it's more or less a few lines. Sorry, I'll go back a bit. Next is the release manifest. You see two applications, push2cloud-example-host and push2cloud-example-api, with their specific paths within the Git repository. This example is also available in the push2cloud documentation project on GitHub; you can download it and try it yourself. Next is the application manifest of one of the applications. It's only a small part, because Kaba is using Node.js; they said, OK, we are using the package.json file itself as the application manifest, and they just added a deployment section within package.json, which is the application manifest. But you could use any other file; you can configure which file to use as the application manifest. That's application one; then we have application two, which also defines a dependency on a service. This is the blue-green workflow, roughly 50 lines of code. The first step is to compile the deployment configuration. We sped the video up a bit; it takes about six minutes. Most of the time is used to build the application: the npm build process took about five of these six minutes, and the rest was compiling the deployment configuration. Then, when we have built the deployment configuration, we execute the blue-green workflow.
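A blue-green workflow of this kind can be sketched as plain sequential async JavaScript. This is an illustration only, not the real push2cloud code; the adapter methods and parameters are invented names, but the shape (ordinary code calling platform actions top to bottom, plus a grace period) follows what the talk describes.

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Poll `isRunning` every `intervalMs` for the duration of `graceMs`;
// the app counts as successfully started only if it stays up the
// whole time.
async function survivesGracePeriod(isRunning, { graceMs, intervalMs }) {
  const deadline = Date.now() + graceMs;
  while (Date.now() < deadline) {
    if (!(await isRunning())) return false; // died during the grace period
    await sleep(intervalMs);
  }
  return true;
}

// A workflow is just ordinary code calling adapter actions in order,
// comparable to the ~50-line blue-green workflow shown in the demo.
async function blueGreenWorkflow(adapter, config) {
  await adapter.createServiceInstances(config);
  await adapter.createRoutes(config);
  await adapter.createApps(config); // the new (green) apps
  await adapter.bindServices(config);
  await adapter.startApps(config);
  const ok = await survivesGracePeriod(() => adapter.appsRunning(config), {
    graceMs: 50,
    intervalMs: 10,
  });
  if (!ok) throw new Error('apps died during grace period');
  await adapter.switchRoutes(config); // green takes over the routes
  await adapter.deleteOldApps(config); // retire the blue apps
  return 'deployed';
}
```

Because the workflow is code, inserting an application-specific step (say, a data migration between starting the green apps and switching routes) is just another `await` in the sequence.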
You see the workflow being initialized, getting the actual deployment configuration from the system. It's creating the two apps, uploading the applications, and staging them. After staging, you see the output of the staging process; because we are running this in debug mode, it shows all the details, so it's not very readable. Then we start the applications. These red messages you see are the grace period, where we check whether the application is still running. Finally, when it's deployed, we can see it works. What we do now is an update. We change the application, only the host part, switching it from version one to version two, committing the new changes and pushing them. Then we again compile the deployment configuration with the new version and execute the blue-green workflow. Now you see it is only creating version two of the example-host application, only binding it, and at the end staging only the new version; the other application is not involved. Then it stages it, starts it, and again goes through the grace period, checking that the application is still running, before switching to the new application and removing the old one. And it still works, now with version two in the title. That's what it's like to use push2cloud.

Only two slides left. As I mentioned, push2cloud has a modular design. We provide some basic workflows, and everybody can create their own. You can add your own plugins for the compiler if something specific has to be done while preparing your application. What we're also planning for the next step is to run push2cloud as a REST-based service. Currently, it's a command line tool, easy to integrate in CI/CD systems.
We also plan to add runtime monitoring, and one option we are considering is adding new platform adapters so we can support different cloud platforms. OK, good. As I mentioned, because many organizations have the same challenges as dormakaba, they decided to make push2cloud open source, hoping to help many projects succeed in deploying microservices. The web page is available starting from today, and the GitHub repository, push2cloud, is online from today. If you want to contact us, feel free to follow our Twitter account. And feel free to contribute: try out push2cloud, give feedback on what's good, what's bad, what's missing. We are also happy to get pull requests on GitHub if you would like to contribute new workflows or code to extend it. And I hope that I can stand here next year again and tell you where the project has gone and how it developed. Any questions?

When you presented this exivo app for the access control service, you mentioned that for the presentation layer you had 50 apps and 47 services. That seems like an excessive number of services. Was it because it was hard to generalize them, or did each app use a whole lot of services?

Usually, if you build cloud microservices, each microservice has its own data storage, using its own data storage services. Some of the services are shared: you have messaging systems which are shared between the services and used for messaging. But most of the components of a microservice use separate databases for their view model, as it is called in these kinds of applications. So you have many different kinds of data services that each app uses. And they also have specific applications for users, for their partners, for themselves, et cetera.
And an application can be only part of a UI; one UI web page can consist of several components which are implemented as separate services.

It seems kind of complex. Do you think that's a shortcoming of Cloud Foundry itself? Because you complained that you had so many apps and services to manage. But it's a common use case.

This is necessary. With today's microservices, it's a common use case to have these kinds of applications. Cloud Foundry is only the basic tooling; I think there is a lot of space on top of Cloud Foundry to implement this kind of tooling. You could implement it in a CI/CD system. We decided to implement a separate tool, which can be used with any CI/CD system, so we are not bound to one CI/CD system.

OK, thank you. More questions?

How can push2cloud workflows work in a multi-cloud environment, like private and public?

In a multi-cloud environment, in the deployment manifest you declare which installation of Cloud Foundry you want to target for this deployment. So by just changing the URL of the API endpoint in the deployment manifest, or by creating your own manifest for each deployment, for the public or the private one, you can push to the specific target environment. A release is independent of the target environment, and only the deployment manifest specifies where to push it. Kaba is also doing this: they push several instances of their system for testing, staging, and production, and for each deployment, they have their own deployment manifest. One of the systems is internal; the staging environment is on the Swisscom cloud, and the production environment is also running on the Swisscom cloud. So we have multiple deployments. Currently, one deployment is bound to one instance of Cloud Foundry, to one space. The option to extend this exists, but it was not a requirement for Kaba. OK, thank you very much.
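The multi-cloud answer boils down to one release plus one deployment manifest per target. A hypothetical pair of manifests (endpoints, org, and space names invented for illustration, not the exact push2cloud schema) would differ only in the target section:

```json
[
  {
    "name": "staging",
    "target": {
      "api": "https://api.private-cf.example.com",
      "org": "demo-org",
      "space": "staging"
    },
    "release": "release-1.2.0.json"
  },
  {
    "name": "production",
    "target": {
      "api": "https://api.public-cf.example.com",
      "org": "demo-org",
      "space": "production"
    },
    "release": "release-1.2.0.json"
  }
]
```

Both deployments reference the same release, so the same tested set of application versions is pushed to the private and the public Cloud Foundry alike.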