Hello everybody. I would like to welcome everybody here at Cloud Foundry Summit. Today I will be talking about the latest features and technologies that have emerged in the Cloud Foundry ecosystem over the last year. I will be talking about their good parts and their bad parts, and I will also try to share some stories from my personal experience with these technologies and from the experience of my colleagues.

Let me introduce myself. My name is Alex Zalesov and I am a Cloud Foundry engineer at Altoros. For the last two years I have been doing Cloud Foundry deployments: I design them, implement them, and operate them. After a deployment is complete, I often run trainings to educate our clients about operating Cloud Foundry and making the best use of PaaS solutions in their daily work. Before Cloud Foundry, I built management systems for traditional infrastructure, such as enterprise server systems and telecom systems, so I know different types of infrastructure, from physical servers to the cloud infrastructures we see now.

The first feature I want to talk about is the cloud config. A cloud config is a way to extract IaaS-specific configuration from your manifest. Typically there is quite a lot of information that describes your cloud and is the same across all deployments. One example is availability zones: typically you use the same availability zone layout for your services and for Cloud Foundry. The next type is networks. In the reference Cloud Foundry architecture you have a network for the Elastic Runtime and a service network that is used by all your services, like MySQL, Redis, and PostgreSQL. In some types of clouds, like AWS, you have predefined VM types that you can't change, but in OpenStack deployments, for example, which we do when a customer requires Cloud Foundry in a private data center, you can define your own virtual machine types, called flavors. This information is a good candidate for sharing across all deployments, and when you extract it into the cloud config, you don't repeat yourself by defining it in every manifest you use to deploy your systems. It reduces duplication and gives you deployment portability. So if you have, for example, two Cloud Foundry deployments on AWS, you maintain two manifests and two separate cloud configs, and these manifests can be pretty similar or even identical.
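To make this concrete, here is a minimal sketch of what a cloud config can look like for an AWS-style cloud. All the names, subnet ranges, and instance types here are illustrative, and a real cloud config usually also defines disk types.

```yaml
azs:
- name: z1
  cloud_properties: {availability_zone: us-east-1a}
- name: z2
  cloud_properties: {availability_zone: us-east-1b}

vm_types:
- name: small
  cloud_properties: {instance_type: t2.small}
- name: large
  cloud_properties: {instance_type: m4.large}

networks:
- name: cf                 # network for the Elastic Runtime
  type: manual
  subnets:
  - range: 10.0.16.0/24
    gateway: 10.0.16.1
    azs: [z1, z2]
    cloud_properties: {subnet: subnet-aaaaaaaa}
- name: services           # shared network for MySQL, Redis, PostgreSQL, ...
  type: manual
  subnets:
  - range: 10.0.32.0/24
    gateway: 10.0.32.1
    azs: [z1, z2]
    cloud_properties: {subnet: subnet-bbbbbbbb}

compilation:
  workers: 4
  az: z1
  vm_type: large
  network: cf
```

Once something like this is uploaded to the director, every deployment manifest can simply refer to z1, small, or services by name instead of repeating the IaaS details.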
As you know, before provisioning BOSH... Is something wrong? Okay. Before provisioning BOSH, you need to create some kind of server infrastructure: networks, security groups, and so on. We typically use Terraform for this purpose. Terraform is an awesome tool: you define the configuration of your cloud in a JSON file, and then you can check it into a version control system. You can change it, and Terraform is smart enough to apply these differences to your infrastructure in a resilient manner. Before BOSH Bootloader, we took the outputs of this Terraform script, which contained network IDs, subnet names, and so on, and we had to put this information into our BOSH manifest so that BOSH understood where it was deployed and was ready to deploy other software. BOSH Bootloader automates this workflow: it creates the infrastructure with a Terraform script and then automatically inserts the information about your particular cloud into the cloud config, so you get a BOSH director that is already ready to deploy your software. I like this technology because it saves my time in two ways. First, I can just start the deployment, it proceeds unattended, and in about 40 minutes I get a BOSH deployment. Second, since it is fully automated, it can be tested, and I don't need to verify the infrastructure after the deployment. Before, I had to verify it in case I had made typos while copy-pasting values from the Terraform output into the manifest. Right now it is limited to AWS and Google Cloud Platform, but it is developing rapidly, and I think you will soon be able to deploy BOSH on vSphere and OpenStack with BOSH Bootloader too.

I would like to say that the technologies in the Cloud Foundry ecosystem depend on each other. For example, BOSH links is a service discovery mechanism, and it allows you to use another technology I have already talked about, the cloud config, more effectively. If you have IPs in your manifest, you can't truly separate the networking configuration from the system, and you can't truly use the cloud config. With BOSH links, you can reference one job from another in the manifest, and BOSH does all the work of making these jobs talk to each other. So it is a static service discovery mechanism. We use this technology to remove all the static IPs from the releases we maintain, for example our Elasticsearch release, and some releases, like Concourse, have already migrated to it. We can deploy them using the cloud config and BOSH links, without rewriting static IPs.

Now something about security. The components of a Cloud Foundry installation, I think the majority of them now, can talk to each other using TLS encryption, so nobody except those two instances can understand their traffic. That is a big deal when you want to deploy your Cloud Foundry infrastructure in a public cloud. In a private cloud you can have a rack with a switch in it, and you can physically secure access to your network. But in a public cloud like AWS, your network is shared between several tenants, and you can't guarantee that nobody else is tapping into your traffic. I have a story about a customer who had a main installation of Cloud Foundry on OpenStack in a private data center and wanted to add an AWS installation for elasticity, because when you have an installation in a private data center, it is elastic only at the application level; at the hardware level you still need to buy new hardware and manually install it in the data center. In AWS you can buy resources on demand and then release them. Because we had no such feature at the time, about a year or two ago, we could only move testing to the AWS environment. That was pretty elastic: you run tests and then tear the environment down, and since it doesn't work with customer data, it has pretty loose security restrictions. But now it is possible.

Oh, this is my favorite one. This is the BOSH CLI v2, which is written in Go. When I first read about it, I thought, okay, what is the big deal about rewriting the BOSH CLI from Ruby to Go? But when I got hands-on with this technology, I understood that it is a complete rewrite with its own set of features. It allows you to replace manifest-management tools like spiff and spruce with the native CLI, so you don't need a separate tool to merge several parts of a manifest together. It can also generate default secrets for you. Cloud Foundry uses a lot of secrets internally. For example, it uses a secret that is shared between the Gorouter and the NATS server, and they use this secret to talk to each other.
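Going back to BOSH links for a moment, here is a trimmed-down sketch of how the wiring can look in a manifest. The release, job, and link names are made up, the instance groups are missing their usual fields (azs, vm_type, networks, and so on), and it assumes the jobs' specs declare a link called db_conn.

```yaml
instance_groups:
- name: db
  jobs:
  - name: postgres
    release: my-release
    provides:
      db_conn: {as: main_db}    # expose this job's link under a custom name
- name: web
  jobs:
  - name: web-app
    release: my-release
    consumes:
      db_conn: {from: main_db}  # BOSH resolves the db instances' addresses at deploy time
```

Inside the consuming job's templates you read the addresses from the link instead of from a hard-coded static IP, which is exactly what lets you drop static IPs from the manifest and rely on the cloud config for networking.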
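And here is roughly how the generated secrets work with the v2 CLI: you declare variables, reference them in the manifest as ((name)), and let the CLI fill in anything that is missing. The variable names and common names below are illustrative.

```yaml
variables:
- name: nats_password
  type: password
- name: internal_ca
  type: certificate
  options:
    is_ca: true
    common_name: internalCA
- name: gorouter_tls
  type: certificate
  options:
    ca: internal_ca
    common_name: gorouter.service.cf.internal

# In the manifest you then write ((nats_password)), ((gorouter_tls.certificate)),
# ((gorouter_tls.private_key)), and so on. Deploying with something like
#   bosh -d cf deploy cf.yml --vars-store creds.yml
# generates the missing values and stores them locally in creds.yml; with CredHub
# configured, they are generated and kept on the server instead.
```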
As an operator, I don't care how my Gorouter talks to the NATS server, as long as it is secure. I do care about some secrets, like the administrative password I use to log into Cloud Foundry, and I care about the certificates that I place on a load balancer or on the Gorouter, because they encrypt client traffic. But the internal certificates between the Cloud Foundry components can be generated automatically, and as long as they are managed correctly, I don't care what they are. Before the BOSH CLI v2 was available, to enable TLS encryption of traffic in Cloud Foundry you had to generate certificates manually and then insert them into the manifest, and the manifest became huge. I think I have seen manifests of about 5,000 lines in practice, and they were very hard to manage. Now you can have these secrets generated for you, and they are stored locally in a file.

The CredHub project takes this technology even further. It automates credential generation, storage, and lifecycle management, and it adds another feature. Local credential management is fine when you are the only person operating a deployment, but when you have a team of DevOps engineers, you need a secure way to distribute credentials between them and to keep those credentials consistent. You can use GitHub to collaborate on configuration, but for credentials, the best available solution was something like passing a YAML file around between the engineers. CredHub provides a centralized server where you can store these credentials, and you can pull them out of your BOSH manifest, so the manifest becomes easy to manage. You can just put it into your GitHub account and not worry that some AWS access key leaks to the wrong place.

I want to share a personal story about BOSH manifests. When I had just started working with Cloud Foundry, I deployed some stuff with BOSH and wanted to share the manifest with other people to ask for help. I uploaded it to a GitHub gist and didn't delete my AWS credentials. Then I discovered that there are two kinds of bots constantly searching GitHub for keys. One kind are the good bots: they will tell you that your credentials are compromised when they find your key. The others are the bad bots: they will spawn about 500 VMs in your AWS account and start mining Bitcoin. I was lucky. I received the emails that my credentials were compromised and I quickly rotated them, but it was not a pleasant experience. So I think CredHub will make credential management less painful for future generations of DevOps engineers in Cloud Foundry.

Sorry, how much time do I have? Ten minutes. So we have ten minutes; I will leave five minutes for questions and answers, and in the last five minutes I will talk about two other interesting technologies. The first is cf-deployment. It is a way of deploying Cloud Foundry using many of the previously mentioned technologies. 11:50? Yeah. Okay, that's great. One moment. It uses many of the previously mentioned technologies: BOSH links, cloud config. It separates the releases of individual Cloud Foundry components into separate entities, and you build the deployment out of these many releases rather than from the single monolithic cf-release. We use it for our proof-of-concept deployments now, and we also use it for training purposes. For example, yesterday my colleagues gave a training here on BOSH, and they used cf-deployment and the BOSH CLI v2 to install Cloud Foundry.
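To give an idea of the workflow, here is a sketch of the kind of ops file that can be used to customize such a deployment, for example scaling it down for a proof of concept or a classroom. The paths follow the cf-deployment manifest layout, but the file name, values, and domain here are illustrative.

```yaml
# scale-down.yml: an ops file applied on top of cf-deployment's base manifest
- type: replace
  path: /instance_groups/name=diego-cell/instances
  value: 1
- type: replace
  path: /instance_groups/name=router/instances
  value: 1

# Applied at deploy time with something like:
#   bosh -d cf deploy cf-deployment/cf.yml \
#     -o scale-down.yml \
#     -v system_domain=cf.example.com \
#     --vars-store creds.yml
```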
It is not production-ready yet, because there is no way to migrate a deployment from cf-release to cf-deployment yet, and upgrading from one cf-deployment version to another is not necessarily a good experience; you may want to delete it and start from scratch. But it is very promising, and what I love about this technology is that it makes the deployment process of Cloud Foundry understandable. That is a big deal when you speak with students in a training: you want to make them understand how Cloud Foundry is deployed, and to make them understand it fast, without diving into the specifics of spiff manifest generation or the specifics of setting up networking in the IaaS. That can be dozens of separate steps for provisioning infrastructure in OpenStack or AWS, and a lot of steps merging manifests together and then adding the Diego runtime into them. With cf-deployment you can draw pretty diagrams and actually follow the process, and it consists of three or four steps. The manifests you get with cf-deployment are pretty small, so you can just go through them with your students, and they will understand each line of the deployment manifest. So it's cool.

On top of your cf-deployment you can put container networking. What is it? It is a policy server and an add-on to Garden-runC. Before container networking, if one of your applications wanted to talk to another application, it had to go through the Gorouter. On OpenStack it was even worse, because the Gorouter typically has a floating IP on a separate VLAN, so you have a lot of traffic between the physical machines and across different layers. The reason was that policies were enforced at the Gorouter level, and it was hard to do it any other way. Now the policies are enforced at the Diego cell level, and your containers can talk to each other without going through the Gorouter; I will sketch what such a policy looks like in a minute. This decreases the amount of traffic going through the Gorouter, so you possibly need fewer VMs, and it decreases the latency of your microservice communications. Sometimes you have a pretty deep stack: a client request hits the first microservice, it fans out to other levels, those microservices reply back, and the final answer goes to the client. Now that latency is lower.

Another story was a performance story. One day we discovered that a client of ours had timeout problems: engineers tried to push applications to Cloud Foundry and the pushes timed out. We looked into the deployment and found that etcd was about 100% busy, mostly waiting on disk I/O, and this deployment was on pretty slow HDD drives. We tried to tune etcd to be faster, but it turned out that the real solution was simply to replace the HDDs with SSDs, which didn't suit us. So we did a hack: we created a RAM disk and made etcd write to it instead of the actual hard drive to get around the problem, but it was a nasty hack and I didn't like it. Some time later, Diego became able to store its data in a relational database like Postgres or MySQL; we moved to Postgres, and now we don't have this problem.

This is the last technology I want to speak about, and I wish I had had it one year ago: isolation segments. Isolation segments allow you to define a group of applications that run on a particular set of hardware, so you can efficiently separate different environments, not only at the container level, but at the VM level.
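Going back to the container networking policies I promised to sketch: conceptually, a policy records which application may talk directly to which other application, on what protocol and port. The application names here are illustrative, and in reality the policy server stores this through its API (driven by the cf CLI), not as a YAML file you edit.

```yaml
policies:
- source:
    app: frontend        # traffic may originate from instances of this app
  destination:
    app: backend         # and go straight to instances of this app over the cell network
    protocol: tcp
    port: 8080
```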
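As a sketch of how the BOSH side of an isolation segment can look, here is a dedicated cell instance group tagged with a placement tag. The property path follows diego-release conventions but may differ between versions, and the segment and org names are made up.

```yaml
instance_groups:
- name: isolated-diego-cell
  instances: 3
  jobs:
  - name: rep
    release: diego
    properties:
      diego:
        rep:
          placement_tags: [my-segment]

# On the Cloud Foundry side the segment is then created and bound to an org,
# with something like:
#   cf create-isolation-segment my-segment
#   cf enable-org-isolation my-org my-segment
```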
Combined with routing groups, you can send traffic from these isolation segments to separate Gorouters. Typically clients want to separate environments like production, staging, and development. Before, we used to deploy several Cloud Foundry instances for this purpose, but that increased the effort required to maintain them and required more resources, because you are duplicating things like the Cloud Controller, UAA, and everything else that can be shared pretty easily. Now this technology is generally available. We haven't used it in our projects yet, but we are looking forward to doing some proof-of-concepts with it and making future deployments use it.

This is just a small part of the technologies that have emerged in the Cloud Foundry ecosystem during the last year. When I did the research for this talk, I outlined about 50 separate technologies. If you want to talk to me about the other technologies or share your stories of operating them, please come to our booth, the Altoros booth in the hall. Now it is time for questions.

A question about mutual TLS. Okay. Is it TLS between BOSH components, or between BOSH and other deployments, for example CF? This is TLS between... Within the BOSH director? No, no, no. I am speaking about TLS between Cloud Foundry components, like etcd and Diego, like the Gorouter and NATS. Got you. Is there any additional functionality? Because I think in the current deployment we do have certificates; for example, etcd has its own CA and certificates, and similarly Consul. I just wanted to understand what new functionality mutual TLS brings. Here I am speaking about the earlier deployments where you don't put certificates on your VMs, so they talk using unencrypted traffic, which was pretty easy to debug, for example, because you could capture packets. Now you put certificates on the different VMs deployed by BOSH, and when they communicate, you can no longer see that traffic in the clear. Makes sense.

We have time for another question. The isolation segments that you mentioned last: what are the criteria you can use to segment your apps? Can you do it by org, by space? Do you have any information on that? You have to define your isolation segments in two places. First, you define them in your BOSH deployment, where you mark the cells that belong to one segment or another, and then you can bind your isolation segment to your organization. We have one more. I guess you can do space-level isolation. Thank you very much, Alex. I appreciate it. Thank you.