Good morning. Let's wait a few more seconds for folks to come in. I'm Dekel, and with me is Ferran, from the Cloud Foundry team. We're going to walk you through an overview of Cloud Foundry, the open platform as a service, and then a deep dive on the integration we've done on top of OpenStack with Cloud Foundry and Cloud Foundry BOSH. And let's make this interactive: if there are any questions, please ask them. This is not intended to be death by PowerPoint, so really, ask questions.

So Cloud Foundry is now part of Pivotal. Pivotal is a company formed from assets of VMware and EMC, officially launched April 1 as an independent entity; we've been working as a virtual org for a few months now. The basic idea is to create a platform for next-generation applications that has data, big data, platform as a service, and can run on any cloud. So different assets from VMware and EMC are coming together under one company, which is Pivotal. We are all one happy family now, and Cloud Foundry is part of Pivotal. We're going to explain how that works and specifically talk about OpenStack. So that was the big announcement.

So what is Cloud Foundry? Cloud Foundry is a PaaS, a platform as a service, which focuses mainly on making developers such as yourselves more productive: enabling you to focus on writing code and building great apps, not on plumbing middleware and infrastructure, dealing with routers, load balancers, databases. The idea of Cloud Foundry is to let you write code, deploy in seconds, and let the PaaS do the rest. We said we focus on being productive; we also want to be open. Cloud Foundry's main differentiator is that it can run on any cloud: vSphere, vCloud, AWS, OpenStack. The idea is that it's very easy to port applications running on top of Cloud Foundry between clouds. You will see in a few minutes how we built the integration with OpenStack using Cloud Foundry BOSH, which is the underlying mechanism that allows us to be multi-cloud. It's also very easy to extend Cloud Foundry and to scale apps running on Cloud Foundry; we'll talk about that as well.

The idea of this slide is not to get into definitions of what is PaaS and what is infrastructure as a service. It's more to tell you how we look at the mission of platform as a service. As I said, this is about making developers more productive, and the idea is to empower developers to build great apps, not plumb middleware and infrastructure. So PaaS is an abstraction layer that sits on top of infrastructure as a service. The unit of deployment is the application, and that's the key thing to understand. You are deploying an app to a PaaS environment. You don't see VMs, you don't see databases, you don't see app servers or web servers. All of this middleware layer that sits on top of the infrastructure as a service is abstracted from you as a developer. That allows you to, A, just focus on code, but, B, really separate the concerns between what it means to develop an app and what it means to deploy an app. And that's the key in today's world of complex architectures, where you really want to focus on just deploying.

In terms of where we sit in the stack: Cloud Foundry is this abstraction layer on top of infrastructure as a service. Again, what's really unique about Cloud Foundry is that it can run on multiple clouds, so when you choose your PaaS, you don't choose the cloud that it's running on.
Today, we're going to focus on how we're building and deploying, showing you the integration we did on top of OpenStack. This chart is actually taken from one of our current customers that is running Cloud Foundry on-prem, behind their firewall, building a private cloud with a PaaS on top of it with Cloud Foundry. Don't worry, it's not your eyes: you're not supposed to be able to read the letters. It's blurred for a reason, because this is the actual deployment model. The point here is that every colored box is a manual process. For this specific customer, it takes between five and six months to get an application deployed, from the moment the developer finished writing the basic code until the app hits staging — and this is all the way to staging, not even production. This is crazy, right? In today's market, we can't wait five to six months until an app is actually running.

Why does it take so much time? You need to open a ticket for someone to provision a server. Then you need to open another ticket for someone to change some configuration in a firewall, and another ticket to have a database provisioned for you, and then maybe another ticket to have a schema, and so on and so on. And you need to do this over and over again when you're moving from development to QA to staging. What's even worse is that you're usually not testing the thing that you actually wrote. My background is years of engineering in the J2EE environment, so I had my share of 30,000-line Apache config files, which is probably fun for some of us, but others who just want to write great code and build great apps probably don't want to deal with Apache config files.

So what we aim to do is turn this nightmare into something that is a bit simpler. The idea is that you deploy a cloud — and you'll hear all about how you actually build the Cloud Foundry instance using BOSH in a few minutes. Then, as a developer, which is the blue arrow here, you target that cloud, you push your code, you bind a service, and you scale your application. These are basically the four verbs that you use with Cloud Foundry: target, push, bind, scale (I'll show a small sketch of these commands in a moment). Target means I can target my private cloud, a public cloud, or a micro cloud, which can actually run on this nice little laptop here. Within my private cloud, I can target several environments: development 1, development 2, QA 1, QA 2. I can even integrate this into a CI system, so the output of a CI build will say, now move my application to the next environment. The key is that the app itself doesn't change when you move between environments. Think about push: I'm just pushing my WAR file, in case you build Java or Java Spring, or I'm pushing my .rb files if you're building Ruby. It's really: you finished your app, and you're deploying it into the PaaS environment.

Services: you don't deal with building messaging systems or creating databases; you bind into all of that as a service. And if you want to scale — part of the reason that Apache config file was 30,000 lines is, as I'm sure you know, that when you get into scaling, you need to scale all of these tiers separately. The database has one scaling scheme, your web servers and app servers have another, your load balancer needs additional DNS entries defined, and so on and so on.
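To make those four verbs concrete, here is roughly what that workflow looks like from a terminal. This is a hedged sketch: the exact command names have changed across client generations (the original vmc gem versus today's cf CLI, shown here), and the API endpoint and app name are placeholders.

    # Target a Cloud Foundry instance (private, public, or micro) and log in.
    cf api https://api.my-cloud.example.com
    cf login

    # Push the finished artifact -- a WAR file, a Ruby app, whatever you built.
    cf push myapp

    # Scale out by adding instances; the platform updates routing for you.
    cf scale myapp -i 4

Binding a service, the remaining verb, is sketched a little further on.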
So the PaaS — or Cloud Foundry specifically — really aims at making this much simpler. This is what Cloud Foundry basically looks like: a combination of frameworks, services, and clouds. We talked about public, private, and micro clouds. Out of the box, you can provision Postgres, MySQL, Redis, RabbitMQ, MongoDB, and you can plug in your own services. Cloud Foundry is an open source system with more than, I think, 7,000 contributions now. We launched Cloud Foundry two years ago, and less than 72 hours after our launch, the open source community had already added Erlang, PHP, and Python. At this conference it's very easy to preach to the choir about the power of an open source community. So, A, you're getting a lot of contributions because it's easy to do and to extend — and you will see that in a minute — and, B, it's open: it's designed to be pluggable and extensible. If you want to use your own service, you can plug it into Cloud Foundry. Frameworks: out of the box, we have support for Java and Java Spring apps, Ruby on Rails and Sinatra, Node.js, Scala, and other languages. And, as I said, you also have contributions from the community, so you can run PHP apps on Cloud Foundry and so on. It's all open source under the Apache 2 license.

In the next few minutes we'll talk a lot about how you integrate all of that into OpenStack. The idea is that once you have Cloud Foundry deployed on any infrastructure as a service, the rest looks exactly the same. From a developer perspective, the target, push, bind, and scale all look the same regardless of the underlying infrastructure.

One of the major use cases we see people using Cloud Foundry for — including the way we develop Cloud Foundry as part of the Pivotal company now — is this idea that you can progress the app between environments without changing code. You can move between your development environments, whether you develop in teams or on your own, all the way to QA and to production, and all of that is done by this target command: you're just moving the target of your cloud. The power of this is, A, you are not changing the application between the environments. For example, if you are developing and deploying a Java Spring app as a WAR file, you can bind to different databases in the process. You can develop against a MySQL database, just for the sake of example, and then when you move into a staging environment, you can change that database to a Postgres database. Your app doesn't change, because the level of abstraction is in the right place in the stack: everything is bound as a service (you'll see a small sketch of this below).

That really brings the true agility, if you like, because the agility of building an app comes not only from your development processes, but also from not wasting time waiting for new environments to be set up and moving apps between environments. An example I can give you, from another big customer of ours that is building a private Cloud Foundry: they made a huge effort in adopting agile processes. They really changed the way they did development, and almost got all the way to pair programming, like Pivotal Labs does, if you're familiar with that. But when they had to deploy the next stage of the app, they had to wait three weeks for their IT to set up the next environment.
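As a sketch of that service-swapping idea — using today's cf CLI verbs; the service offering and plan names below are placeholders that vary by marketplace:

    # In development: provision a MySQL instance and bind the app to it.
    cf create-service mysql small mydb
    cf bind-service myapp mydb
    cf restage myapp        # pick up the new binding

    # In staging: same app, same binding name, Postgres underneath.
    # The app reads its credentials from the environment, so no code changes.
    cf create-service postgresql small mydb
    cf bind-service myapp mydb
    cf restage myapp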
So all of that agility kind of went to waste, because they were ready with the code, but then it was three weeks to set up the next environment. That's the kind of thing you can solve with Cloud Foundry. Why are people using us? Developer productivity — we talked a lot about that. The ability to build web apps and social apps faster. Letting developers like yourselves do what they do best, which is build great apps, not 30,000-line Apache config files — sorry for coming back to that all the time.

These are some of our customers that are running Cloud Foundry on-prem today; there are many more. Intel is building a huge deployment of Cloud Foundry for all of their internal developers. They built Comic Relief — if you're familiar with the fundraising event that was done in the UK a few weeks ago, where I think they raised 75 million pounds or something like that — all of that was done on Cloud Foundry; that was the system that supported it. I think what's really cool about Comic Relief is how fast they built the application: they actually built it in less than a week. And they used Jenkins to progress their application between the dev, QA, and staging environments. So this concept of CF target — targeting a cloud — they integrated into their Jenkins environment, and once the first stage was over and the CI build was successful, they would target the next environment. They were able to progress their app pretty quickly, all the way to production.

You can't have a session without a big logo slide, so this is our big logo slide. The point here is that Cloud Foundry has a huge ecosystem. We have 60 technology partners — a lot of them are at this conference today — building different frameworks and marketplace services for email, for logging. It's very easy to integrate, so it's very compelling for a technology partner to work with us. And obviously multi-cloud: deploying on OpenStack, AWS, vSphere, vCloud, different environments.

Just to talk about a few numbers: there are thousands of members on our mailing lists — you will see the OpenStack-specific mailing list in a few seconds — and a lot of contributions. I'll just skip ahead a little. These are the major use cases that we see: agile transformation, dev/test trials. I think it's pretty compelling that you can experiment more, do more testing, fail fast — all those nice things you can actually do if environments are easy to set up.

So, as I said, Cloud Foundry can run in your data center, on hosted public clouds, on developer laptops. At Pivotal we operate one instance of a public Cloud Foundry cloud, called cloudfoundry.com. We have other partners, like AppFog, Tier 3, and others, that are building other instances of public clouds based on Cloud Foundry. And obviously you can deploy it in your own environment, which is very popular for OpenStack customers using Cloud Foundry BOSH. The idea here is that you're really not locking yourself into any specific environment: you can go to a choice of public clouds and then bring the app back internally, or you can start internally and then go to the public cloud. At one of my previous employers, which I will not name here, I was running a project that actually started on AWS, and once it was successful, we tried moving it back into our data center. After four or five months, we basically stopped the project, because it didn't work.
There was just too much hassle, too many issues getting the app from the Amazon environment back into our data center. With Cloud Foundry, it's hopefully a little bit more than just CF target, but it really works — we have customers doing this today, actually moving between environments. If you're interested, after the session I can show you a live demo of how this is done. The reason it's easy to do — and it's not slideware — is that the abstraction is in the right place. You are not tying into the specifics of the infrastructure when you're building an app; you are writing your app at the right level of abstraction, which really allows you to move it between environments, public or private. And we think that's the true promise of cloud computing.

So this is roughly what the Cloud Foundry logical view looks like. You have routers, so we take care of load balancing your app; when you're adding an instance, we make sure we update DNS. You have an authentication mechanism and a health manager. So, for example, if I'm scaling a Cloud Foundry app to 20 instances — or 1,200 instances, in the case of one of our recent customers — we actually update the load balancer. And not only do we update the load balancer, we maintain an SLA: if one instance fails, we will automatically restart another instance to make sure you keep your SLA. All of that sits on top of Cloud Foundry BOSH, and that's what allows us to be portable across environments. And with that, I'll hand over for an explanation of how we build Cloud Foundry on OpenStack and what Cloud Foundry BOSH is.

OK, thank you, thank you. We're going to get a bit deeper now, and explain how you can deploy Cloud Foundry on premise using whatever IaaS you want to use: if you want to deploy it on vSphere, on AWS, or on OpenStack. Now, cloudfoundry.com: this is our public PaaS environment, and we are running it on a vSphere environment. This is a big environment. Depending on the load, the number of applications, and the number of users that we have, we can have about 5,000 VMs running. We have a lot of different node types: some VMs are targeted at running databases, other types of nodes run web services, et cetera. We have more than 75 unique software packages. You saw in the previous slides that we run a lot of frameworks — if you want to run a Rails application, a Play application, a Go application, whatever kind of application you want to run — so we have to deploy all of those frameworks in our production environment, plus several different web servers. Our environment runs 24/7, all year. And the most important thing about our production environment is that our developers, our engineers, deploy the bits that run Cloud Foundry — any updates to those bits — by themselves, onto the production environment, twice a week. And the way we want to deploy those bits is without any downtime. So we need reliable, robust, repeatable deployments: a tool that can help us deploy all of our Cloud Foundry bits without any downtime. We have a lot of Cloud Foundry engineers, and we need the kind of tool that empowers them to do that.
When we started with cloudfoundry.com, one of the things we tried to find was a tool chain that would help us deploy Cloud Foundry without downtime. We looked at a lot of tools — Chef, Puppet, a lot of different tools — but we always found that we needed to add some kind of logic layer between those tools and what we wanted to do. We didn't feel comfortable enough with those tools, so we decided to start from scratch and create our own tool. This tool is called BOSH, Cloud Foundry BOSH.

What is BOSH? Cloud Foundry BOSH is a tool chain for release engineering, so we can cut releases when we update our code for cloudfoundry.com. It's a tool that helps us deploy Cloud Foundry in whatever environment we are using: our development environments, our QA environment, or our production environment. And it's also a life-cycle management tool. What does that mean? It means we don't want a tool that just sets up a new environment; we want a tool that is able to make updates on running production systems, or to move an environment from one hypervisor to another as an easy task. It's a tool that is really optimized for large-scale distributed systems — it's what we run cloudfoundry.com on. What we need is to enable the systematic and prescriptive evolution of services: we want to define what the services are, what the environments are, and which resources are needed to run each of the different node types, so we are able to deploy them in a proper way. And the most important things are, as I said before, service updates with consistent results, and minimal to no downtime. BOSH also facilitates operations on any large service or infrastructure. We have used it, and are still using it, to run cloudfoundry.com, so it's a proven technology. And it supports different hypervisors and public cloud infrastructures, like AWS, OpenStack, vSphere, or vCloud.

Before going deeper into how we integrated Cloud Foundry BOSH with OpenStack, there are a few concepts. The first component we have is the source code. When you want to deploy a new environment, you have some kind of source code — it could be your application, in Python, Ruby, C, whatever you want. Sometimes you don't have the source, because what you want to deploy is, for example, a database or a web server; instead you have a tar file, an RPM, whatever — this is what we call the blob files. When you have these sources or blob files, what you need to specify is how you are going to install that code, or that server or database, et cetera. This is what we call a package. A package is nothing more than instructions on how you are going to install that software, how you are going to monitor it, and how you are going to start and stop the different services. For example, if you have some kind of database, before you stop it you may want to flush the cache, or drain some kind of buffer, or anything else — this is something you can specify in the package. The package also lets you pin the version of the source code that you're going to deploy.
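To give a feel for what that looks like on disk, here is a minimal sketch of a BOSH packaging script. It assumes a hypothetical "myservice" source tarball that was added as a blob; BOSH runs a script like this on a compilation VM, and BOSH_INSTALL_TARGET is the directory where the compiled bits must land.

    # packages/myservice/packaging -- executed by BOSH on a compilation VM.
    # "myservice" and its build steps are illustrative, not a real package.
    set -e

    # Unpack the blob (the tar file we put in the blobstore).
    tar xzf myservice/myservice-1.0.tar.gz
    cd myservice-1.0

    # Compile and install into the directory BOSH hands us; the result is
    # archived and reused by every VM that needs this package.
    ./configure --prefix=${BOSH_INSTALL_TARGET}
    make
    make install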
On a higher level, we have a job. A job is the way we describe how we're going to roll out packages on a VM (you'll see a small sketch of a job in a moment). You specify which templates you're going to use, because sometimes a package has several configuration files, and you need to specify the contents of those configuration files. Those contents should change between the different environments: if you deploy a database in a development environment, the configuration file will not have the same contents as when you deploy that service in a production environment, because you'll use a different number of workers, different memory sizes, et cetera.

And then, when you have all of the jobs, you create a release. A release is what ties all of those different components together. For example, for cloudfoundry.com we have a release that has several jobs: the databases we're going to use, the frameworks, et cetera.

Then we have another concept we call stem cells. A stem cell is nothing more than a VM template, a VM image. Instead of spinning up a new VM from a vanilla image, what we have done is create a kind of pre-baked VM image. The stem cell has several components that are used by the director — for example, an agent that is able to communicate with the different components and report the state of what is running inside that VM: the network configuration, the disks attached there, and so on.

And then we have the deployments. A deployment is nothing more than a configuration file that says: you have a release — for example, the Cloud Foundry release — and depending on which environment you are going to deploy it to, this is where you set the values of the configuration files that you specified in the templates defined before, in the jobs.
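To make the job and template ideas concrete, here is a minimal sketch of what a job might look like inside a release, written as shell here-documents. The job name, the property, and the file layout are illustrative; the key point is the ERB template, which the director fills in per deployment.

    # jobs/myservice/spec -- declares the templates and packages this job uses.
    cat > jobs/myservice/spec <<'EOF'
    ---
    name: myservice
    templates:
      myservice.conf.erb: config/myservice.conf
    packages:
    - myservice
    EOF

    # The ERB template: <%= ... %> is rendered from the deployment manifest's
    # properties, so development and production get different configuration
    # files from the same release.
    cat > jobs/myservice/templates/myservice.conf.erb <<'EOF'
    workers = <%= properties.myservice.workers %>
    EOF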
OK, so what is the architecture of Cloud Foundry BOSH? Cloud Foundry BOSH has several components. The first one is a CLI, the BOSH CLI — this is how users interact with the system — and it talks to a component we call the director. The director is nothing more than an orchestrator of that whole environment. The director also has a REST API, so if you don't want to use the CLI, you can interact directly with the REST API, using, I don't know, a web dashboard, or integrating it with your own systems. We also have a blobstore, which is where we store all the different packages: it could be the source code, it could be the tar file for MySQL or for the web server, whatever you want. BOSH provides a simple blobstore, but if you want to use your own store, you can use several: Amazon S3, the OpenStack Swift object storage, or, I don't know, Rackspace Cloud Files or an HTTP object store. We also have some workers, which perform the actions that the CLI submits to the director.

We also have the message bus — we are using a component called NATS — which is responsible for talking to the different VMs. And we have a health monitor. The health monitor is responsible for checking the state of the different VMs that are running, and alerting the director if, for example, some VM has stopped, is not running, or the agent running inside the VM is not responding. The health monitor has an extensible architecture, so if you want to integrate it with your own monitoring system — whatever you are using — you can write a plugin and extend the health monitor to integrate with those kinds of services.

Then we have the IaaS CPI. The director itself and all of the other components are agnostic in terms of which hypervisor or cloud provider you are using; the component responsible for talking to your cloud provider is always the IaaS interface. I will show you in a few seconds how it's built. And finally, we have the agents. An agent is something that runs inside each VM and is able to communicate with the different components of Cloud Foundry BOSH.

So, I have said that Cloud Foundry BOSH is neutral in terms of which IaaS you are using. Nowadays, we support three main players. The first one is VMware: we have a CPI for vSphere and a connector for vCloud Director. The vSphere one has been tested a lot — this is what we are using to run cloudfoundry.com. For vCloud Director, we have the code, but we have not tested it. For AWS, we have the code and we have been testing it at scale: we are running about 400 VMs in AWS, and we are planning to run about 5,000 VMs in AWS to check that it really works. And for OpenStack, we have the code complete, but we have only tested it on small environments right now.

OK, the cloud provider interface. The CPI, as I said, is responsible for talking to the IaaS. It has a well-defined contract — this is the only thing that separates the director and all of the BOSH components from the IaaS. So this is the contract we have: a contract for stem cells, which are nothing more than images; a contract for how to spin up new VMs and how to delete VMs; how to configure the network for the VMs; and also a contract for disks. The OpenStack CPI interacts with several OpenStack components. For the stem cells — the images — we interact with OpenStack Glance, the image service. For the VMs, we basically interact with the OpenStack Nova API; this is the primary API we are targeting, but we also talk to other components. For example, if you want to deploy a complex network topology — say you want to use static IPs for a VM — the CPI can use the OpenStack Quantum API to set up the networking of that VM. We also talk to the OpenStack Cinder API to create new volumes and to attach those volumes to the VMs. And for the blobstore, if you want to run your own private blobstore rather than some kind of public blobstore, you can use the OpenStack Swift API.
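As a small, hedged example of what this looks like from the operator's side (using the v1-era bosh CLI syntax; the director address and stem cell file name are placeholders): uploading a stem cell goes through the CPI's stem cell contract, and on OpenStack that means registering the image with Glance; every later create-VM call then goes through Nova.

    # Point the bosh CLI at a director and authenticate.
    bosh target https://192.168.1.10:25555
    bosh login

    # Upload an OpenStack stem cell; the OpenStack CPI pushes it into Glance.
    bosh upload stemcell bosh-stemcell-openstack-kvm-ubuntu.tgz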
Now, let's look at a simple deployment file. I want to show that once you have a release, it is not tied to where you are going to deploy it — a development or a production environment, vSphere, Amazon, or OpenStack; the release is completely agnostic. The deployment file is where you specify the settings that the deploy is going to run with.

The first thing you specify is the name of the deployment. In this case, this is a basic example of a WordPress deployment; you can check that example on GitHub. Then there is the director UUID. In your environment you can have several directors, because, I don't know, you may want to separate different organizational units, and each one should have a different director — for example, one director targeting Amazon and another one for OpenStack, whatever you want. Then you specify the release, and the version of the release you are going to deploy. For example, in the production environment you deploy, I don't know, version number one, while in your development environment you are already using version two or version three, whatever you want.

Then we have compilation. Compilation VMs are interesting, because before we deploy a release, we compile the packages. What we want to know is that what we are going to deploy will not have any problems when we deploy it. So the first thing we do is compile the packages: compiling a package is nothing more than taking the source code and the package and installing it on a vanilla VM, with nothing else on it. If that works, then we go on with the next step, which is the deploy process. Here you can define the number of compilation VMs we are going to use, et cetera, and there are some cloud-specific properties you can define — for example, the instance type you are going to use. If you know it's a big package, you can use an m1.large, or you can specify a custom instance type that you have in your own OpenStack environment.

The next thing is the update. When you have a running deployment and you want to update it, instead of rolling out all of the updates at once, what we do is create some canary instances. On those canary instances, we deploy the bits, and we wait the number of seconds specified here to check that the deploy is working. If it is working, then we proceed with the next steps.

The next thing you should define is the network topology. We support three kinds of networks. We have dynamic networks: basically, if in your OpenStack environment you are using the flat DHCP manager, your VMs are going to ask DHCP for an IP address; this is what we call a dynamic network, and you don't know which IP address each VM is going to get. And we have manual networks: if you're going to use manual networks, you need to use OpenStack Quantum. And the third one is floating IPs, which can be combined with dynamic and static IPs. For example, you have a private environment without any access from the outside, but for one of the VMs you want it to be accessible from the outside, so on that VM you can set a floating IP. If you're using a manual network, this is where you define the range of IP addresses you're going to use and the gateway;
these are the addresses that are reserved, that you don't want BOSH to use; and this is the range of IP addresses you are going to use for your deployment. And then you have the cloud-specific properties: you can define the security group you are going to use and the network ID — the OpenStack Quantum network ID — that you are going to use. You can have several networks. For example, here I have a default, which is a manual network, but I could have several network topologies: if I want, I don't know, some kind of web service deployed on a specific IP range, or a database service on a different network subnet, I can define whatever networks I want here.

Then we define the resource pools. A resource pool says which stem cell you are going to use — so we can have different stem cells if we want — and which cloud properties you want to use. For example, we know that, I don't know, a database service should run on a large instance, and perhaps another kind of service should run on a small one.

And then we have the jobs. This is where we set the different jobs — the different packages — that we are going to deploy in our environment. Here we define the template of the job we are going to use, the number of instances we want to deploy, the resource pool we want to use (in this case the common one, but we can have several resource pools), the network we are going to use, and, for each instance, the IP address we want to assign.

And then we have the properties. Properties can be specified for each of the different elements we are deploying. In this case, for example, we have a MySQL server: this is the IP address we want to use for that MySQL server, this is the password we want it to use; the same for the NFS server, for the WordPress, et cetera, et cetera.

So what is the workflow when you need to create a new release? What you do is target your development environment, create a deployment manifest file with all of the different properties that you will have in that deployment, and then you write the code. When you have written all of the code, you create a release. Create release takes all of the source code and all of the different packages, ties them together, and creates a tar file. You then upload that tar file to the director with the upload release command, and then you can deploy it. You run all of the tests, and if it works, you commit the changes and proceed to the next step; if not, you iterate until your deployment is running. When that's done, you can pass it on, for example, to the QA department. In QA, you do the same: you create a deployment targeted at that QA environment, you pull the code the developers created, you create a release, upload that release to the director, deploy, run tests, and iterate. If it doesn't work, you report back to the development guys; and if it works, you create a final release, and then you commit that deployment. And in production, it's exactly the same.
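Pulling that walkthrough together, here is a heavily condensed sketch of such a manifest, plus the release loop just described, using v1-era manifest keys and CLI verbs. Every name, UUID, address, and credential below is a placeholder, and a real WordPress manifest has more jobs and properties than this.

    # A condensed, illustrative deployment manifest -- not a complete one.
    cat > wordpress.yml <<'EOF'
    ---
    name: wordpress
    director_uuid: <director-uuid>        # ties the deploy to one director

    release:
      name: wordpress
      version: 1                          # pin a release version per environment

    compilation:
      workers: 4
      network: default
      cloud_properties:
        instance_type: m1.large           # a bigger flavor for compiling packages

    update:
      canaries: 1                         # update one canary instance first
      canary_watch_time: 30000            # how long to watch it, in milliseconds
      max_in_flight: 2

    networks:
    - name: default
      type: manual                        # static addressing, via Quantum
      subnets:
      - range: 10.0.0.0/24
        gateway: 10.0.0.1
        reserved: [10.0.0.2 - 10.0.0.9]   # addresses BOSH must not touch
        static: [10.0.0.10 - 10.0.0.20]
        cloud_properties:
          net_id: <quantum-network-uuid>
          security_groups: [default]

    resource_pools:
    - name: common
      stemcell:
        name: bosh-stemcell-openstack-kvm-ubuntu
        version: latest
      network: default
      cloud_properties:
        instance_type: m1.small

    jobs:
    - name: mysql
      template: mysql
      instances: 1
      resource_pool: common
      networks:
      - name: default
        static_ips: [10.0.0.10]

    properties:
      mysql:
        address: 10.0.0.10
        password: <secret>
    EOF

    # The iterate-until-green loop just described:
    bosh create release              # bundle sources, blobs, and jobs into a tar file
    bosh upload release              # send it to the director's blobstore
    bosh deployment wordpress.yml    # select the manifest
    bosh deploy                      # compile packages, then roll out canaries first
    bosh create release --final      # once tests pass, cut the final release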
You pull all of the code from the QA guys, upload the release, deploy the release, run tests. So it's exactly the same pattern, and it can be used by the development guys, by the QA guys, or by the people responsible for release engineering.

And that's all. What's next? You can sign up at cloudfoundry.com — if you want, use that promo code — and you get free access to the Cloud Foundry platform; you can test it and deploy your applications, whatever you want. If you want to deploy Cloud Foundry, for example, on premise, on your own installation, we have several links here: this is where you can find all the code for Cloud Foundry and for Cloud Foundry BOSH. On GitHub we also have documentation that explains how to deploy Cloud Foundry on your installation, and if you have questions about how Cloud Foundry or BOSH works, there are several mailing lists. And that's all. If you have any questions, we are happy to answer them.

[Audience question about tenancy.] So, this is completely configurable using BOSH. When you're creating the environment, you can decide if you're running completely single tenant — a pair of one app per container — or multi-tenant. This is configurable per instance that you run. When we run cloudfoundry.com, we have a completely multi-tenant deployment: we have 5,000 VMs and we support hundreds of thousands of apps. But that's one way to do it. When our customers are deploying this on-prem, they usually run single tenant, or close to it, and they actually change this between their dev environment and their QA environment. So for every BOSH deployment that you've seen here, you can control the tenancy of apps. Does that make sense? So what you're asking is whether on cloudfoundry.com you get multi-tenancy out of the box: on cloudfoundry.com, our public service, we are multi-tenant. You have isolation by account: when you sign up with your account, you're isolated from the other tenants based on the account, but there is no physical isolation — the same VM can host apps from multiple accounts.

[Audience question:] The health monitor that you discussed — does it also have mechanisms to scale the underlying infrastructure, the VMs, et cetera? So, yes, you can do that, but the health monitor right now doesn't provide any mechanism to auto-scale your application; we're working on that. You can extend it via a plugin if you want, and we will provide some kind of auto-scaling mechanism for VMs. And note there is scaling of the instances, which the developer does, and there is scaling of the cloud capacity. From a developer perspective, the cloud has infinite capacity; you never know that you're approaching any kind of limit. You can scale your app, based on your account profile, up and down as much as you like, and the health manager automatically restarts an instance if an instance fails. Now, scaling the cloud itself, using BOSH, is something that currently we do in a much more manual way. We are doing this twice a week on cloudfoundry.com: we scaled from zero users to hundreds of thousands of users, we keep adding capacity, and you never see us say, "next Tuesday, two AM, you will have 10 minutes of downtime." It's all done behind the scenes. [Audience question:] So, like you said, you scale the app —
what's the mapping to an IaaS instance, to a VM? What's the equivalent? That's the beauty: there is a complete abstraction. There is no relationship between a VM and an app; the concept of a VM doesn't exist here. From a developer perspective, you see an instance, you see an app. You never see a VM, you don't start a VM, you don't stop a VM, as a developer. This is a different model than infrastructure as a service. This is PaaS, which means you, as a user, only deploy apps. We have a pretty short break here, so we'll have to stop. Okay, go ahead, you guys. All right, thank you.