Hello everyone! Today I'm going to show you how to achieve zero downtime updates of Cloud Foundry applications with database migrations. My name is Tsvetan Tsokov and I work at SAP on the MultiApps project, which is a Cloud Foundry incubator project. So let's imagine that we have a business application with the following architecture: a backend service on top of a relational database, where the data is stored. We have a backend application, which has some business functionality and uses the persistent data. And we have a web application, which serves the web user interface. And let's also imagine that we have a lot of business customers. They are happy with our application, and after some uptime they want us to implement a new feature. When we implement the feature, the question comes up of how to update the application. Our business customers also want the application to be highly available, and they don't want any downtime during the update. And on top of those problems, the feature requires an incompatible change to the database. Everybody knows that every developer just loves incompatible changes. So the whole problem becomes: how to make a zero downtime update of our application with incompatible changes to the database. This is a rather complex problem that doesn't have a solution in Cloud Foundry, or in general, and we at SAP are working on this problem and trying to provide a solution for it. So prepare to see how you can make a zero downtime update of an application with an incompatible database change.

So how can we update an application in Cloud Foundry? We can simply update it with the cf push command, or we could use the blue-green deployment approach. I assume that every one of you knows how the blue-green deployment approach works, so let's directly apply it to our application. As you know, the first step is to deploy the blue version of the application. The next step is to deploy the green version of the application, while also updating the database with the incompatible change. Having this incompatible change in the code and in the database means that our blue version will stop working. And this leads to downtime, which is not so good for our customers.

What is the change? Let's go deeper into the database and see it. We have a contacts table with ID, first name, last name and some further columns. And we want to refactor the database: we want to merge the first name and last name columns into a full name column. This database evolution requires a migration of the existing data. So, applying the blue-green deployment approach to our application, we have two possible sub-approaches. The first approach is the usual one: we deploy the backend blue application, then deploy the backend green application and update the database to the green database structure, merging the two columns. In this case we break the backend blue version, which means downtime. The second approach is, after the deployment of the backend green application, to not update the database. In this case our blue version will continue to work, but the green one will be broken, which again leads to downtime.

So what are the problems here? Both versions of the application cannot work in parallel with one common database. Also, we cannot revert to an older version in case of a bug in the new version. How can we solve this? Instead of going with one giant step, we can apply several small steps to go from the initial to the final state.
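To make the incompatibility concrete, here is a minimal sketch of the refactoring in Postgres-style SQL; the table and column definitions are illustrative, not taken from a real schema:

    -- Blue schema: the structure the blue version of the backend depends on.
    CREATE TABLE contacts (
        id         BIGINT PRIMARY KEY,
        first_name VARCHAR(100) NOT NULL,
        last_name  VARCHAR(100) NOT NULL
    );

    -- The "one giant step" change: merge the two columns into one.
    ALTER TABLE contacts ADD COLUMN full_name VARCHAR(200);
    UPDATE contacts SET full_name = first_name || ' ' || last_name;
    ALTER TABLE contacts DROP COLUMN first_name;
    ALTER TABLE contacts DROP COLUMN last_name;

Once the old columns are dropped, every statement in the blue version that references first_name or last_name fails immediately, which is exactly the downtime problem just described.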
And we have to be sure that the consecutive versions are compatible with each other. In this way we enable the two versions of the application to work in parallel with the common database, and we enable the revert to an older version in case of a bug in the new one.

So let's see what the steps are here. The first step is to deploy the first version of the application, with the table in the database and with the first name and last name columns set to NOT NULL. There is an API in the application for reading, writing and modification of the data, and after some time some data is generated in the database. The next step is to deploy the next version of our application, which adds a new nullable column, full name. In this new version of the app, the POST and PUT requests write to the old and to the new column in parallel, but the GET requests still read only from the old columns. The next step is to run a one-off task on instance zero of our application, which migrates the data from the existing columns to the new one. After this one-off task is finished, we get a notification with the result, and if we are sure that the migration is okay, we can continue to the next step, where we start to read from the new column and set it to NOT NULL. The next step is to stop writing to the old columns and to remove the NOT NULL constraint from them. Now these old columns are not necessary anymore, and we can delete them in the final version. (A SQL sketch of these steps follows below.)

In this way we solve those problems: we have several consecutive versions which are compatible with each other, enabling both versions of the application to work simultaneously, and enabling the revert to an older one. But as you see, these steps are a lot. They require custom migration scripts, which makes them really error prone and difficult. Actually, nowadays most operations teams are doing exactly this to achieve zero downtime updates of their applications.

So how can we solve all this complexity? Maybe you know who David Wheeler is. He is a computer scientist who said that all problems in computer science can be solved with another layer of indirection, except for the problem of too many layers of indirection. So obviously we need another layer of indirection: a database abstraction layer. The database abstraction layer will separate the application layer from the persistence layer. It will provide a database API gateway for the applications to use the persistent data, and it will also hide complex database stuff, like migrations, from the applications. How can we create a database abstraction layer in different databases? Well, in SAP HANA, which is an enterprise database, we have the so-called projection views and synonyms. Projection views are the same as simple views, with the extension that they can access tables located in external databases. Synonyms also provide access to external objects, but not only tables: all kinds of objects, like sequences, procedures, stuff like that. OK, but what about other popular databases, like Postgres? Well, in Postgres we can use foreign data wrappers with foreign tables, plus an additional database user with special permissions. So we can implement a database abstraction layer in different databases; it is only necessary that the database provides functionality for building the abstraction layer.
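Here is a minimal sketch of those incremental steps in Postgres-style SQL, assuming the same illustrative contacts table as before; the dual writes of step two would live in the application code of the new version, not in the database:

    -- Step 2 (expand): add the new column as nullable, so old rows stay valid.
    ALTER TABLE contacts ADD COLUMN full_name VARCHAR(200);
    -- From now on, v2 POST/PUT handlers write first_name, last_name AND
    -- full_name; GET handlers still read only the old columns.

    -- Step 3: a one-off task on instance zero backfills the existing data.
    UPDATE contacts
       SET full_name = first_name || ' ' || last_name
     WHERE full_name IS NULL;

    -- Step 4: reads switch to the new column, which can now be tightened.
    ALTER TABLE contacts ALTER COLUMN full_name SET NOT NULL;

    -- Step 5: stop writing the old columns and relax their constraints.
    ALTER TABLE contacts ALTER COLUMN first_name DROP NOT NULL;
    ALTER TABLE contacts ALTER COLUMN last_name DROP NOT NULL;

    -- Final version (contract): the old columns can be deleted.
    ALTER TABLE contacts DROP COLUMN first_name;
    ALTER TABLE contacts DROP COLUMN last_name;

Each statement keeps the schema compatible with both the version being retired and the version being rolled out, which is what makes the parallel operation and the revert possible.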
And having an abstraction layer, we can make a schema separation with two schemas: a data schema and an access schema. The data schema will contain the database objects which store the persistent data, like tables, sequences and indexes. The access schema will have objects which are interfaces, like the views and synonyms which we mentioned, and also logic, like procedures, functions, stuff like that. So in this case the applications are bound only to the access schema and use it as a data source; they access the data schema indirectly.

OK, so now comes the interesting part: how to handle the incompatible changes. In order to handle them, we should deploy into the data schema both versions of the database structure. In our case, we should leave the first name and last name columns in place and add the full name column. Views in the blue access schema will provide an API to the blue version of the database structure, so only the first name and last name columns will be visible from the blue access schema. Views in the green access schema will provide an API to the green database structure, so only the full name column will be visible from there. In this way, both versions of the application can work in parallel. We should also deploy some triggers into the data schema, which make the migrations between the columns in real time. (A sketch of such a trigger follows below.)

Probably you are wondering who will deploy and handle these triggers. For them, we need a changelog file. We currently have a proof of concept of it, and it currently has a JSON format. It has some metadata, and in it we describe the specific database evolution which we want to achieve. In our case, we describe that we want to merge the first name and last name columns into the full name column, and with SQL statements we define the migration triggers. So now probably you are wondering who will handle this changelog file. It should be handled by a database module. The purpose of the database module is to deploy database objects into the data and access schemas, and to handle the database evolutions which we want to achieve. This database module will be bound only to the access and data schemas.

OK, but how to implement it? In SAP there is the so-called HANA deployment infrastructure, or HDI. It provides a declarative approach for defining database objects, handles the dependency management between these objects, and provides a consistent deployment model based on transactions. You can think of it as something really similar to Liquibase, to Ruby on Rails Active Record migrations and to other similar technologies. So for the implementation of our database module we can use one of these technologies.

So now we know all the necessary parts; let's see the overall process. The overall process of the blue-green deployment update is composed of three phases. The first phase is the install phase. In it, first we create our data schema, then our blue access schema. Then we push our blue database module, bind it to the data and access schemas, and run a one-off task on it, which deploys the blue version of the database objects into the blue access schema and into the data schema. Later we push our backend blue application and bind it to the blue access schema, so it uses it as a data source. And finally we deploy our web blue application and map an official route to it, so it is publicly available and our customers start to use it.
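To make the runtime migration concrete, here is a minimal sketch of what such a trigger could look like in Postgres-style plpgsql. The names are hypothetical, and in the real setup the triggers would be generated from the changelog rather than written by hand; note also that splitting a full name on a single space is lossy in general:

    -- Keep the blue columns (first_name, last_name) and the green column
    -- (full_name) in sync on every write, whichever version wrote the row.
    CREATE FUNCTION sync_names() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'UPDATE' AND NEW.full_name IS DISTINCT FROM OLD.full_name THEN
            -- Green version changed full_name: derive the blue columns.
            NEW.first_name := split_part(NEW.full_name, ' ', 1);
            NEW.last_name  := split_part(NEW.full_name, ' ', 2);
        ELSIF TG_OP = 'INSERT' AND NEW.full_name IS NOT NULL
                               AND NEW.first_name IS NULL THEN
            -- Green version inserted a row with only full_name set.
            NEW.first_name := split_part(NEW.full_name, ' ', 1);
            NEW.last_name  := split_part(NEW.full_name, ' ', 2);
        ELSE
            -- Blue version wrote the old columns: derive full_name.
            NEW.full_name := NEW.first_name || ' ' || NEW.last_name;
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER contacts_sync_names
        BEFORE INSERT OR UPDATE ON contacts
        FOR EACH ROW EXECUTE FUNCTION sync_names();

Because this is a BEFORE trigger that only rewrites the incoming row, it does not issue a second UPDATE and therefore cannot recurse.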
The next phase is the update phase. In it, we create our green access schema, push our green database module and bind it to the schemas. Then we run a one-off task on it, which deploys the second version of the database objects: it parses the changelog file and creates all the necessary objects in the data and access schemas, including the triggers which do the migrations at runtime. Also, during this step, the old existing data is migrated between the columns. After these data migrations are done, we push our backend green application, bound to the green access schema. And finally we push our web green application, which is mapped to a temporary route that is only internally available. So after this phase we can test our green application. If it contains a bug, we can revert the process; or, if we are okay with its quality, we can resume the process. Let's assume that everything is okay and resume the process to the final phase, where we switch the routes. Now our green versions are available on the official route; they are publicly available. The UI of the customers is internally refreshed, new JavaScript is loaded, and they should not see any glitches or any downtime during this switch. Next we start to undeploy the unnecessary blue applications. Here we run another one-off task on our green database module, which cleans up the unnecessary artifacts from the data schema. And finally we remove the blue access schema.

So as you see, here we have a lot of cf commands, a lot of steps. And luckily, all these steps are automated by the MultiApps project. So let's explore it. But first, as you know, current distributed applications can be composed of many polyglot microservices and backing services, with complex dependencies between them. The developer teams are responsible for the installation, update, uninstallation and monitoring of all these distributed applications on different platforms with different versions. This is a difficult task, and it is addressed by the multi-target application model, which provides a declarative approach for the description of distributed applications which share a common lifecycle during development and deployment. On the left side of the screen you can see our applications with the backing services, and on the right side you can see the definition of these applications and services in a so-called multi-target application deployment descriptor, which has a YAML format. The multi-target application model also requires all of the binaries of the applications, together with the descriptor, to be archived in a multi-target application archive. Having this archive, MultiApps handles the lifecycle of our multi-target application.

The MultiApps project consists of two parts. The first part is the Cloud Foundry command line interface plugin, or the MTA plugin. The second part is a backend application running in the platform. The operator provides the MTA archive, together with the deployment descriptor, to the command line interface via the cf deploy or cf bg-deploy command, and MultiApps installs all of our applications into the platform and manages their lifecycle. So our overall blue-green deployment update process is automated in just three steps. The first step is to call cf bg-deploy with the first version of our archive, which deploys the blue version of the application. Next we call cf bg-deploy with the second version of our multi-target application archive, which deploys the second version of the application; now it is available on the temporary route and we can test it. And if we are okay with the quality, we resume the process with a final command. For sure, we can also revert the process if it is not okay.
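For a feel of the format, here is a minimal sketch of such a deployment descriptor and of the three commands. The module and resource names, the module types and the version numbers are purely illustrative, and the exact plugin syntax should be checked against the MultiApps documentation:

    # mtad.yaml: multi-target application deployment descriptor (sketch)
    _schema-version: '3.1'
    ID: contacts-app
    version: 2.0.0

    modules:
      - name: backend              # business functionality
        type: java
        requires:
          - name: database         # bound as its data source (access schema)

      - name: web                  # serves the web user interface
        type: javascript

    resources:
      - name: database             # backing service instance
        type: org.cloudfoundry.managed-service

    # The three-step update, assuming archives built from v1 and v2:
    # cf bg-deploy contacts-app-1.0.0.mtar   -> install phase (blue)
    # cf bg-deploy contacts-app-2.0.0.mtar   -> update phase (green, temp route)
    # cf deploy -i <process-id> -a resume    -> final phase (switch the routes)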
So with these three steps we achieve the zero downtime update of our application.

Okay, so let's go to the key takeaways. As we saw, database evolution with incompatible changes is a complex task, which has two solutions. The first one has many steps and custom migration scripts and is really error prone. The second one needs an abstraction layer. And luckily, the MultiApps project implements and automates the whole process. Okay, together with my colleague I have hands-on sessions. Today's session is already finished, but tomorrow we have another hands-on session, a lecture and also a project office hour. So if you want to know more details about the MultiApps project, you can find us at these sessions. So, thank you very much for attending my presentation. We have five minutes, so if you have questions, you can ask them.

Can you repeat the last part of the question? Okay, so the question is: could we integrate Liquibase into the application itself, not using a separate database module, and achieve the same thing? Okay, good question. The good part of the database module is that it has a separate purpose and functionality which it should handle, and we can manage it with a different lifecycle, separated from the applications. So it is easier for this logic to be implemented in a separate module, not in the application itself: the application's purpose is to provide business functionality, not to handle database stuff. This functionality should be done by a separate module whose only purpose is to handle database stuff. Also, the applications should know only about the access schema; they should not know anything about the data schema, because in the data schema we are doing complex logic, like migrations and stuff like that. They should know only about the abstraction layer, not the database internals. So it is better for the handling of the database objects to be in a separate module, which knows about the two schemas and knows how to handle the database objects in them, in the access schema and in the data schema. And the database module is bound to both schemas, so it knows about both, not only the abstraction.

So the question is: what happens if the database is used by external services or other applications? There are two apps running, the current version and the next one; how do you deal with that? As long as the database changes are truly compatible, they can be spread over a longer period of time. I think the presentation simplifies things to keep them clear; a lot of the changes you can make will be easy, but sometimes you really do have to deal with the data, and it's hard to manage that. And sometimes, when you have an Active Record kind of database layer, you cannot even state these constraints; other times you have to enforce them. So if you do have, say, NOT NULL constraints, it's bad if the application comes online and starts writing to the column over all of your data. So you can always write everything so that it stays compatible, and at some point you have to go back and delete the columns, because they might be out of date. Yeah. Okay, do you have other questions?
Is the database module a separate app? Yes, it is a separate module; it is not good for it to be part of the application. Yeah, it is a separate app in Cloud Foundry. It is not necessary for it to be mapped to a route or to have an IP. But it is separate from the applications, the applications which serve some business functionality.

Could you repeat? Yeah, the triggers. Actually, they are triggered on writes by the applications; actually, only on writes. So, for example, if we have writes from the green version, which write data into the full name column, this data is populated also into the first name and last name columns. And we have one trigger for this and another trigger doing the reverse step, for the writes coming from the blue version: writes from the blue version to the first name and last name columns populate the full name column. For the reading, we have the data there, which is properly migrated.

If you only have write triggers, then let's say on the green side you need to read a full name; but for an already existing record there is no full name, the full name is empty. So if you read empty... Ah, you mean the data which is not migrated yet. Yeah, then we read from the old columns. Okay, one moment. The point here is that we migrate the data: the existing data is migrated first, then we push the green apps, and then they read only migrated data. First we migrate the existing data; this is a batch job, the one-off task. And then the triggers are doing only the runtime migrations. On all writes. All writes. Thank you. Thank you too.

Okay, do you have any other questions? Okay, we are out of time. Thank you very much.