Cloud Foundry components by availability zone in OpenStack. On this slide we provide planning sheets for Cloud Foundry jobs; I have placed only some of the jobs, to give an idea of how the planning is done. The values in the role total column represent the total number of instances across all availability zones, and there are also totals for each of the three availability zones, as well as totals for memory and CPU. The cells highlighted in yellow are the Cloud Foundry jobs that we recommend placing in all three availability zones: the service registry, etcd, which should have three instances to meet HA requirements; the Loggregator traffic controller, which we recommend running with at least one instance in every zone; and, as I mentioned, the application runners, which may be DEAs or Diego cells. The application runners are the major resource consumers in a Cloud Foundry deployment.

Some components of a Cloud Foundry deployment do not support an HA configuration by default, and on this slide I have highlighted several of them. One is the database instance holding the Cloud Controller and UAA databases. To run it in something close to high-availability mode, we can configure the BOSH resurrection process for the database instance, or we can use an external MariaDB Galera cluster. Others are the BOSH directors for Cloud Foundry and for cloud services automation. BOSH is not directly part of Cloud Foundry, but it is used to manage the Cloud Foundry deployment, so we have to provide a plan for recovering this single-instance virtual machine. Another well-known component is the blobstore: by default, the NFS blobstore in a Cloud Foundry deployment is a single instance, and we can use object storage, OpenStack Swift in our case, to hold the blobs instead.

Let's look at some details for these components. What does a plan for recovering the BOSH director look like? The process is quite straightforward. You need to locate the BOSH state file and the deployment manifest, and the persistent disk of the BOSH virtual machine must still be available. First we edit the BOSH state file, leaving only several properties; then we redeploy BOSH and attach the persistent disk (a sketch of this flow is given below). In our tests of this scenario, the BOSH director virtual machine was recreated in around 25 minutes. As an alternative, we can use OpenStack VM migration when the persistent and ephemeral drives are stored in Ceph: during the VM migration or recovery process, the ephemeral and persistent disks can be attached to a new virtual machine. Ceph also makes it possible to live-migrate a virtual machine within one availability zone.

As a last example of configuring HA support for a non-HA component in Cloud Foundry, I provided a sample deployment manifest that sets OpenStack as the provider for the blobstore (sketched below). We have to define the credentials and the URL used to connect to OpenStack, and also set a temp URL key. The temp URL key is one of the values that has to be unique for every Cloud Foundry deployment, so that if, say, we have two installations of Cloud Foundry on one OpenStack, both still work.

On the next slide, I would like to highlight how we can configure the process of restoring the database virtual machine. In our case we have a single database instance that contains the Cloud Controller and the UAA databases. BOSH resurrection is a feature that allows a virtual machine to be recovered automatically by BOSH.
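To make the director recovery plan more concrete, here is a minimal sketch of the flow. It assumes the BOSH CLI v2 `create-env` workflow (the same idea applies to the older bosh-init tool), and the file names and state-file details are illustrative rather than taken from the project:

```sh
# Hedged sketch of the BOSH director recovery flow described above.
# Assumes the persistent disk of the old director VM still exists.

# 1. Start from the original deployment manifest (bosh.yml) and state file
#    (bosh-state.json). Trim the state file so that it keeps only the
#    properties referencing the surviving persistent disk and drops the
#    references to the dead VM; the exact keys vary by BOSH version.

# 2. Redeploy: BOSH recreates the director VM and re-attaches the existing
#    persistent disk, so the director's data is preserved.
bosh create-env bosh.yml --state=bosh-state.json
```

The manifest sample for the Swift-backed blobstore is along these lines. This is a sketch using the fog OpenStack provider consumed by the Cloud Controller; the property names and placeholder values are illustrative and vary across cf-release versions:

```yaml
# Hedged sketch: pointing the Cloud Controller blobstores at OpenStack Swift.
properties:
  cc:
    packages:
      fog_connection: &fog
        provider: OpenStack
        openstack_username: blobstore-user                # illustrative
        openstack_api_key: some-secret-password           # illustrative
        openstack_auth_url: https://keystone.example.com:5000/v2.0/tokens
        openstack_temp_url_key: change-me-per-deployment  # must be unique per CF installation
    droplets:
      fog_connection: *fog
    buildpacks:
      fog_connection: *fog
    resource_pool:
      fog_connection: *fog
```

As for resurrection, the feature is enabled through the Health Monitor job of the BOSH director. A hedged sketch of the relevant director-manifest properties, with illustrative thresholds, looks like this:

```yaml
# Hedged sketch: enabling the BOSH Health Monitor resurrector plugin.
properties:
  hm:
    resurrector_enabled: true
    resurrector:
      minimum_down_jobs: 5    # illustrative threshold
      percent_threshold: 0.2  # illustrative: back off if more than 20% of VMs are down
      time_threshold: 600     # illustrative look-back window, in seconds
```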
When we tested the resurrection process, it took around two minutes to mark a virtual machine as unresponsive and around three or four minutes to recreate the virtual machine using the BOSH resurrection process. But we have to take into account a side effect: when we stop a physical machine intentionally and the BOSH resurrection property is configured for the virtual machines running on that physical node, BOSH resurrection tries to recreate all of those virtual machines in the same availability zone. So there should be enough resources within one OpenStack availability zone to recreate the virtual machines that are configured with BOSH resurrection. As an alternative to this type of recovery for the database instance, we can use an external MariaDB cluster for all CF databases.

Now let's take a look at the Cassandra storage. In our case we use Ceph with replication, and the data blocks are distributed among all storage nodes. This means that a single data read request triggers several network operations. First, the application (on this slide it is an API service) calls a Cassandra coordinator node. Second, the coordinator node contacts the Cassandra data node that should have the requested data row; that data node runs on a specific compute node in OpenStack. Third, the compute node talks to the Ceph controller. And finally, Ceph reads the data blocks from the OpenStack storage nodes. So there are four steps, and four network operations, to retrieve a single row of data.

That brings us to the pros and cons of using Ceph as the storage option for Cassandra. With Ceph, we can deploy all cloud services in OpenStack, which simplifies deployment and management: service deployment can be automated, for example with BOSH, and the approach to deploying and managing cloud services is unified. Ceph is also distributed, scalable, replicated storage, so the failure of one physical drive or one physical node does not affect the availability of data blocks. And, last but not least, the price of such storage is quite low compared to hardware storage area network systems.

What about the cons? In Ceph storage we have an additional replication factor on top of Cassandra's: with a Cassandra replication factor of 3, which is the recommended value, a single data block ends up replicated six times in total. And the performance of the cluster depends heavily on network performance, so in OpenStack deployments it is recommended to use 10-gigabit networks for the storage services.

In our case, we decided to benchmark Cassandra in OpenStack to understand whether it could satisfy the project requirements. We used the cassandra-stress test tool and a sample cluster of six nodes, with the SimpleStrategy replication strategy and a factor of 3. The OpenStack network connecting all compute and storage nodes was a one-gigabit network. Every Cassandra node was configured with eight virtual CPUs and 32 gigabytes of memory; this memory-to-CPU ratio of four is one of the recommended ratios for Cassandra nodes. The test was conducted with just one object in Cassandra, and the approximate test duration was around five minutes. On the next slide I have provided some figures from this test. You can see three types of tests: write, read, and mixed read/write operations. The stress test tool measures throughput as the number of operations per second, along with several latency metrics.
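For reference, invocations of the kind used in these runs might look like the sketch below. This assumes the newer cassandra-stress syntax (Cassandra 2.1 and later); the node addresses, operation counts, and thread counts are illustrative, not the project's actual values:

```sh
# Hedged sketch of the benchmark runs described above (illustrative values).

# Write test against the six-node cluster, SimpleStrategy with factor 3:
cassandra-stress write n=1000000 cl=ONE \
  -schema "replication(strategy=SimpleStrategy,factor=3)" \
  -rate threads=100 \
  -node 10.0.0.11,10.0.0.12,10.0.0.13

# Read test over the same data set:
cassandra-stress read n=1000000 -rate threads=100 -node 10.0.0.11

# Mixed workload, e.g. three reads per write:
cassandra-stress mixed "ratio(write=1,read=3)" n=1000000 -rate threads=100 -node 10.0.0.11
```

At the end of each run the tool prints the op rate together with the latency distribution, which is where figures like the ones on the slide come from.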
These latency metrics show the distribution of response times during the test. On the slide we put the number of operations per second and the average, 99th-percentile, maximum, and minimum latencies, all measured in milliseconds. In terms of latency deviation, we are mainly interested in the 99th-percentile and maximum latencies: these numbers give you an idea of what to examine in detail in the storage and Cassandra node configuration. Tests of this type can be executed very quickly after the cluster is installed, and they give an insight into what kind of performance to expect from the Cassandra cluster. Say our requirement is to serve 10,000 operations per second with an average latency of less than 10 milliseconds; then this Cassandra deployment in OpenStack can easily satisfy it. But we also have to consider the Cassandra data model and the application access patterns, because they too heavily influence the performance of the cluster and the applications.

Other recommendations for Cassandra planning: the effective data size for one Cassandra node is from 3 to 5 terabytes; the total number of tables should stay within, let's say, 500 to 1,000 tables to keep the compaction process in Cassandra effective; and the free space on every Cassandra node should be around 30 to 50% to allow the compaction process to complete. As for the recommended storage, DataStax recommends running Cassandra on bare metal, using SSD drives in JBOD mode: we just provide the drives to the Cassandra nodes, configure them for Cassandra, and all the data distribution is handled by the Cassandra process.

So these are some of the technical aspects of the project that I decided to share. In the last part of my presentation, I would like to say a few words about the upstream contributions from this project. Even when we work in an area as restricted as healthcare, we can find a way to spread ideas and experience.

During the project we created a Cassandra service broker that supports authentication and keyspace-based provisioning. At the beginning of the project we researched what was available on the Internet; there were several projects at the time, but they did not provide all of the service broker functionality. Right now we are updating it on a regular basis to accommodate the changes in the latest Cassandra versions.

We also continuously improve the ELK stack. Specifically, we added a number of inputs and outputs, and we provided extensions to Logstash. As an example, there is a Logstash extension that merges the multiple lines of exceptions and stack traces into one message in Elasticsearch, which makes it easy to find the full context of any application error in Kibana (a minimal sketch of this is shown below).

We also developed a web tool that allows developers to work with Cassandra: to run any Cassandra statements and also to store these statements and their history. This type of web tool is useful in a private cloud where there is no direct access from a developer machine to the Cassandra data nodes. For this project we were inspired by DataStax DevCenter, which is a desktop tool; to work with DevCenter you have to connect directly to the Cassandra cluster, but in the case of a private cloud located behind a firewall there is no way to connect directly to the Cassandra data nodes. We made this project Cloud Foundry-ready, so it can be deployed to Cloud Foundry as a regular web application.

This is some of the information that I wanted to share. Thank you very much, and I will be glad to answer your questions.
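Here is the sketch referenced earlier for the Logstash multiline merging. It uses the stock Logstash multiline codec rather than our internal extension, and the log path and pattern are illustrative; the idea is simply to fold continuation lines of a stack trace into the preceding event before it reaches Elasticsearch:

```
# Hedged sketch: merging Java-style stack traces into a single event
# with the standard Logstash multiline codec on a file input.
input {
  file {
    path => "/var/log/app/application.log"  # illustrative path
    codec => multiline {
      pattern => "^\s"    # continuation lines start with whitespace
      what => "previous"  # append them to the previous log event
    }
  }
}
```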