Mic's on. Hello, everybody. Thanks for joining us. I'm Jesse Proudman, IBM Distinguished Engineer and CTO of what's now called Bluemix Private Cloud, and this is Animesh. I'm Animesh, an architect for the IBM Bluemix platform as a service. So we're here today to talk about two of the IBM offerings — not about the offerings themselves, but specifically about what we've learned operating both Bluemix, which is our Cloud Foundry-based platform as a service, and Bluemix Private Cloud, which used to be the Blue Box OpenStack-as-a-service offering. At the OpenStack Summit in Austin, Mark Collier shared some stats on the workloads running on OpenStack, and after Kubernetes, Cloud Foundry is the second most popular workload deployed on top of OpenStack. So what is Cloud Foundry? Maybe that's the first question. How many here have used Cloud Foundry or have heard of it? Quite a few. How many have deployed Cloud Foundry? Around 30% to 40% of the room, I'd say. For those of you who haven't used it, Cloud Foundry is the de facto platform as a service. Just as OpenStack is the 100% open source option for IaaS, Cloud Foundry is its counterpart for PaaS. You take your code to Cloud Foundry, and it's smart enough to detect what runtimes and what services you need. As you deploy your application, it provisions those services in the background, binds them to your application, and brings it up and running within minutes. The Cloud Foundry Foundation was established in December of 2014 under the Linux Foundation umbrella, and just like OpenStack, it has a very strong and vibrant community — more than 60 member companies are currently driving the foundation.
So names like IBM, Cisco, HP, Pivotal, Docker, Huawei, and Swisscom are all participating and deploying their own Cloud Foundry production clouds. I think the interesting thing about the Cloud Foundry Foundation and its membership is that you've got a lot of actual users of the offering as part of the foundation, whereas OpenStack's is weighted much more heavily toward the vendors. So how do we actually deploy Cloud Foundry on OpenStack? We use a tool called BOSH. BOSH is a release deployment and lifecycle management tool for Cloud Foundry. It has Cloud Provider Interfaces — the term we use internally is CPIs — which bind Cloud Foundry to different IaaS infrastructures. So, for example, you can take Cloud Foundry and deploy it on OpenStack, AWS, Google's cloud, or VMware. These Cloud Provider Interface methods actually use basic OpenStack operations like uploading images, provisioning instances, provisioning volumes, et cetera. How does BOSH work? BOSH takes a release, which is a collection of software packages — for example, your MySQL packages, your Cloud Foundry release package, and any of the services you're deploying along with it. Then there's the concept of stemcells, or base operating system images. And then there's the deployment manifest, which tells BOSH where we're deploying Cloud Foundry, what credentials we're using, what network we're targeting, and what the authentication and authorization information is. BOSH takes all that information from the deployment manifest, deploys the VMs, and turns them into the different Cloud Foundry components on your particular IaaS. So here's the first lesson: the learning curve for BOSH, as the 30% to 40% of you who have deployed Cloud Foundry know, is incredibly steep. You're learning a whole new set of terminology and really a whole new approach to deployment.
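To make the releases/stemcell/manifest relationship concrete, here is a heavily abbreviated sketch of what a BOSH deployment manifest for an OpenStack target looked like in that era. Every name, UUID, and value below is a placeholder — real manifests run to hundreds of lines and the exact schema depends on your BOSH version.

```yaml
# Hypothetical, abbreviated BOSH deployment manifest for an OpenStack CPI.
name: cf-openstack-demo
director_uuid: REPLACE-WITH-BOSH-DIRECTOR-UUID

releases:                      # software packages to deploy
- name: cf
  version: latest

resource_pools:
- name: common
  network: cf-private
  stemcell:                    # base operating system image
    name: bosh-openstack-kvm-ubuntu-trusty-go_agent
    version: latest
  cloud_properties:
    instance_type: m1.large    # an OpenStack flavor

networks:                      # where the VMs land
- name: cf-private
  type: manual
  subnets:
  - range: 10.0.0.0/24
    gateway: 10.0.0.1
    cloud_properties:
      net_id: REPLACE-WITH-NEUTRON-NETWORK-UUID
```

From this one file, BOSH uploads the stemcell, boots the VMs against the named Neutron network, and converts them into Cloud Foundry components.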
And so that's often the first hurdle: getting folks in the organization comfortable with that new technology. Now let's talk a bit about the problems we faced while doing Cloud Foundry deployments on OpenStack for close to two to three years now, along with the survey we did last year to find out the hurdles the community is facing. Instability: when we started, the OpenStack distributions or releases we were getting were definitely very unstable, and there were also deployments where two components came from one release while, for example, Cinder and Swift came from another. The combination made those OpenStack deployments pretty unstable and led to a lot of API incompatibilities. Also, the plugins you use — for example, some of the Neutron plugins — can change the API behavior at times. Capacity is another thing we've realized matters. If you don't size it properly, some of the errors you get from Cloud Foundry will lead you in totally different directions, and you'll be debugging things that aren't even relevant at that point in time. Network: that's definitely one of the important things. You need to decide whether your management components, your DEAs, and your services should be colocated or sit on different networks, et cetera. OpenStack support in enterprise software: this one matters for anybody who is building a full platform and deploying things beyond Cloud Foundry itself — for example, the caching, queuing, and database services you might be providing. What we found is that a lot of enterprise software is already mature for VMware: there are VMware OVA images you bring in and stand the software up. For OpenStack, we found a lot of that was lacking. That's one of the hurdles we faced.
The other thing we definitely wanted was to build this "Cloud Foundry plus plus" we're deploying in a generic way, so we could deploy it on OpenStack, on VMware, and on SoftLayer, which is our public IaaS cloud. Combined OpenStack and Cloud Foundry usage: more often than not, when you go into enterprises and deploy Cloud Foundry on OpenStack, users don't only want to interact with Cloud Foundry — they want to interact with the same OpenStack your Cloud Foundry is running on. So you need to take measures to make sure the Cloud Foundry workload isn't impacted by side-by-side usage of the OpenStack. Cloud Foundry HA: though BOSH is really good — you can increase the number of component instances, et cetera, and do HA deployments — one of the things we've noticed is that a lot of these databases are not synced. So if you have a PostgreSQL or a MySQL database under the covers, you have to come up with additional measures so that, in an HA deployment of the databases, you are syncing them in the background. From an OpenStack perspective, with these first four, I think one of the things that was particularly interesting is that BOSH, particularly during a Cloud Foundry deployment, really tests the OpenStack API. It throws a ton of concurrent connections at that API and expects OpenStack to respond very quickly, as you would expect from an IaaS. In earlier versions of OpenStack in particular, a lot of the default configurations would fall over or fail under that request volume. So one of the pieces we had to pay very close attention to was tuning those settings, because, as Animesh mentioned, the error messages you get during those failures aren't always clear as to what the originating problem is. Proxies or firewalls in customer environments.
Now that's something we learned as we went — we learned it the hard way — that 99% of the customers where we're actually deploying Cloud Foundry have proxies or firewalls in place that block any outgoing connection. Something like Cloud Foundry — which is a collection of the Cloud Foundry releases plus a lot of other software you need to support the services, as well as any applications being deployed there — needs to reach out. So you need to architect things so that Cloud Foundry, as well as the applications that follow, can be deployed in a seamless way. And last but not least, the constant release cycles: both CF and OpenStack have frequent releases. Cloud Foundry more so — it has a very aggressive release schedule, every two to three weeks. So how do you actually maintain the patches, updates, upgrades, et cetera, in a consistent manner? For example, one of the problems we faced was that as OpenStack was moving toward Mitaka, Cloud Foundry was moving from MicroBOSH to bosh-init. We went through a lot of permutations and combinations, and you can only stick with one particular combination for so long, because the community had already moved to bosh-init, which was what worked on Mitaka. And on that last point, I think this is one of the key pieces that ultimately led to the decision to focus on this as-a-service model. In OpenStack, obviously, we have a six-month release cycle, which a lot of organizations already find to be a challenge. When you then add on top of that the two-to-three-week release cycle of Cloud Foundry, what we often see is organizations just installing a version and leaving it there without ever getting the updates in place.
And then when you finally try to go do that update, if you're skipping multiple versions, there's a lot of associated pain and headache, because those upgrades are meant to be done sequentially. So keeping those environments up to date, whether it be Cloud Foundry or OpenStack, becomes really critical, particularly when you're trying to work with both of them in parallel. From a user-experience perspective, this survey comes from — we did it as part of the Cloud Foundry community last year. The first question was: what is your level of experience with OpenStack, for folks using Cloud Foundry on OpenStack? Most of the respondents were intermediate or expert users of OpenStack, so they felt they had a good handle on OpenStack itself. But then, what pain points were they experiencing — or how much pain, I should say — deploying CF on OpenStack? The large majority had significant issues with that installation. The other finding is that close to 50% of the users had to actually customize their OpenStack for the Cloud Foundry deployment. So the idea that you can take a vanilla OpenStack, install it, and be able to run Cloud Foundry on it — no, that doesn't hold true. There are certain steps you need to take to make sure your OpenStack is ready for a Cloud Foundry deployment. If you also look at why users experienced problems: 70% of them said the instability of OpenStack has been one of their issues, and more than 50% said that the initial setup — getting that initial configuration right to get Cloud Foundry on OpenStack — has been very hard. Most of the users were on Juno and Kilo at the time we surveyed last year.
So one thing we've noticed: as OpenStack moves through releases, most enterprise users are typically about two releases behind — so roughly one year behind the latest open-source release. And close to 50% of the users run their own OpenStack rather than getting it from a service provider, a services-and-support arrangement, or a managed service. But this problem depicted on the left — enterprises lagging behind on versions — is by and large one of the biggest challenges with Cloud Foundry/OpenStack compatibility, because Cloud Foundry moves so much faster, and the development of Cloud Foundry is done on the latest versions of OpenStack. So keeping it all in sync becomes very key. With that, let's briefly talk about our OpenStack and Cloud Foundry offerings. The first one is Bluemix Private Cloud — we renamed Blue Box to Bluemix Private Cloud this week. It is an OpenStack-as-a-service offering that can be purchased as dedicated capacity on SoftLayer or as local capacity in a customer's data center. IBM manages the full lifecycle of the OpenStack installation and its operational capabilities for the user, and the user gets back an API and an SLA. Bluemix is our platform-as-a-service offering, and there are three deployment models: there's the public Bluemix offering; there's Dedicated, where you essentially get your own Bluemix on SoftLayer, our public IaaS cloud; or you can get Bluemix in your own data center, which is the Local model. Currently, on our public platform, we have 1 million-plus registered users, 100,000-plus running applications, and a catalog of 500-plus services. These include services from our Watson portfolio, a lot of big data services, and serverless capabilities provided on top of Bluemix. What essentially is Bluemix?
It allows you to build, deploy, and manage your applications while tapping into a large ecosystem of services. Those services come not only from IBM, but also from third-party providers and from open source. And it's not only a platform to run your applications — it also gives you the capability to build them. There is browser-based Eclipse tooling we provide where you can code in the cloud and run your CI/CD from the cloud; once you save your code, it's picked up directly and deployed on top of Bluemix. With that, let's go through some of the lessons we learned. The first thing you should do is test whether your OpenStack is fit for a Cloud Foundry deployment. There's a link I've added here where you can find all the manual steps you can take to make sure OpenStack is currently fit — some basic things like: can you access the OpenStack APIs from within an instance? Can you reach out to the internet from within an instance? Can you get back into the instance from outside? Can you provision and mount a large volume, et cetera? If you don't want to do it manually — and most of us wouldn't — there's a Cloud Foundry incubator project with automated tests you can run on top of your OpenStack to test its fitness. It tests all the basic CPI methods: creating a VM, deleting a VM, creating a volume, attaching it, et cetera. Beyond that, it also tests a lot of non-functional characteristics. It checks your API rate limits: can you take a large invocation of VMs at once? It tests that the versions of the required OpenStack projects work with this particular Cloud Foundry. And it tests all the network connectivity: can you go out from a VM? Can VMs ping each other? Can you get in from outside?
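The incubator project referenced here is cf-openstack-validator; it's driven by a YAML file that points at your OpenStack and the resources to test with. The fragment below is only a rough illustration of its shape — key names and layout vary between versions, so treat every field as an assumption and check the project's README for the real schema.

```yaml
# Illustrative validator configuration (schema varies by version).
openstack:
  auth_url: https://keystone.example.com:5000/v3   # placeholder endpoint
  username: validator
  password: REPLACE-ME
  domain: Default
  project: cf-validation
validator:
  network_id: REPLACE-WITH-NEUTRON-NETWORK-UUID    # network the test VMs use
  floating_ip: 192.0.2.50                          # used for outside-in checks
  public_image_id: REPLACE-WITH-GLANCE-IMAGE-UUID
```

Running the validator with a stemcell and a config like this exercises the same CPI calls a real BOSH deployment would make, which is why it works well as a pre-handover gate in CI/CD.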
So I definitely encourage anyone deploying Cloud Foundry on OpenStack to include this as part of your CI/CD validation, to ensure that when your OpenStack is handed over, it is fit and can seamlessly take a Cloud Foundry deployment on top of it. As I mentioned, sizing is a big issue we faced, so make sure your sizings are correct. There are some public links I've added there that give you sample sizing configurations to get started with Cloud Foundry. Setting your quotas in OpenStack is very important: a lot of the time we actually get out-of-quota errors for the number of ports, et cetera. And again, as I said, sometimes the errors you see from Cloud Foundry don't exactly tell you this is a quota issue — you'll probably spend three to four days debugging before you come back to this. Also, when you create the flavors, the recommended OpenStack disk sizes should actually go into the ephemeral disk rather than the root disk; that's something to keep in mind. We still recommend around 10 GB of root disk. Another thing: when we started, a lot of the OpenStack environments we got had the default scheduler configured to pack hosts one by one — that means you fill one host machine with VMs and then move to the second host — and we saw a lot of Cloud Foundry deployment failures there. So make sure you're using a scheduler that distributes the load, so it can take the large burst of VM creations that's coming. Essentially, the failures you're seeing there: if it's packing that one host and trying to spin up 10 or 12 or 20 VMs all at the same time, you're going to run into I/O contention on that one host, and you're going to run into timeouts as those VMs take longer and longer to spin up and report back to BOSH.
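Because the quota errors surface so indirectly, it's worth doing the quota arithmetic up front rather than discovering it mid-deploy. This is a back-of-the-envelope sketch — the helper and its per-VM numbers are hypothetical, not part of any BOSH tooling — of sizing instance, port, and volume quotas from a planned VM count, with headroom for BOSH briefly running old and new VMs side by side during updates.

```python
# Hypothetical quota estimator for a Cloud Foundry-on-OpenStack deployment.
# Tune the per-VM figures from your own deployment manifest.

def required_quota(vm_count, volumes_per_vm=1, ports_per_vm=1, headroom=1.25):
    """Estimate the OpenStack quotas needed for `vm_count` BOSH-managed VMs.

    `headroom` pads for transient extra resources during rolling updates,
    when BOSH may create replacement VMs before deleting the old ones.
    """
    def pad(n):
        return int(n * headroom) + 1

    return {
        "instances": pad(vm_count),
        "ports": pad(vm_count * ports_per_vm),
        "volumes": pad(vm_count * volumes_per_vm),
    }

# A mid-sized deployment of ~40 VMs needs quota for noticeably more than 40.
print(required_quota(40))
```

If the numbers this produces exceed your tenant's current quotas, raising them before the first `bosh deploy` saves days of chasing misleading errors.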
You can also change the OpenStack settings to configure it for a large number of API calls. As Jesse was mentioning, when BOSH deploys Cloud Foundry, the whole deployment goes at once. Depending on the size of your environment, that could be anywhere between 40 and 200 VMs being created at once, apart from the volumes you're provisioning. So make sure OpenStack is configured to handle that number of API calls. Avoid name-based security groups: what we've seen is that they require lookup activity — on the message bus and in database updates — proportional to the number of VMs you're provisioning. So avoid name-based security groups if you can. If your Neutron is configured with VXLAN or GRE tunnels, make sure you're using the right MTU settings under the covers. The other thing we noticed — and we kind of stepped onto it — is that one of the environments where we were deploying Cloud Foundry didn't have enough space on the root disks of the hosts the VMs were landing on. While working through that, we figured out you can actually configure BOSH and Cloud Foundry to use block storage instead of the root disk for the VMs — there's a setting in BOSH you can set. That's very handy in case you're running into environments where you want to take some load off the root disk and shift it onto your block storage. Also, most OpenStack environments are set up to serve metadata from the HTTP service, but if your OpenStack is configured to use a CD-ROM config drive instead of the HTTP metadata service, make sure you tell BOSH in the deployment that the OpenStack config is coming from a CD-ROM drive. When you deploy Cloud Foundry with BOSH, there's a lot of NATS message bus activity happening, and sometimes you'll see failures when BOSH is not able to ping certain VMs to check whether they're up and running.
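The two BOSH settings mentioned in this chunk — booting from block storage and using a config drive — both live in the OpenStack CPI's global properties. A sketch of the relevant fragment (where exactly it goes depends on your BOSH version; surrounding keys are omitted):

```yaml
# Illustrative BOSH OpenStack CPI properties.
openstack:
  boot_from_volume: true   # put VM disks on Cinder block storage instead
                           # of the hypervisor's local root disk
  config_drive: cdrom      # agent reads metadata from a CD-ROM config
                           # drive rather than the HTTP metadata service
```

With `boot_from_volume` on, the scheduler-packing and root-disk-exhaustion problems described above largely move to Cinder, which is usually easier to scale out.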
You can actually go into BOSH and change the NATS messaging timeout or ping interval to get around that. That's particularly true of environments under heavy load: at times you'll see deployments failing just because a VM couldn't be pinged within a certain time frame. Also, to optimize routing and bandwidth, we suggest putting the Cloud Foundry management-plane components into their own network, and the DEAs and the services into their own networks as well. If you're using any supporting services like logging or report generation, make sure that any communication between the VMs sending logs and the component accepting logs happens over the private network. In some of our environments, the floating IPs — the externally accessible IPs — were used as the target for sending logs. What happens then is a roundabout communication: you're paying twice the cost, because the logs leave your private network for the public network, so to speak, and then come back into the private network. Make sure you're using one internal network for sending those logs. Then there are the common-sense things, like only opening the ports that are needed; providing only tenant credentials in the manifest rather than the full OpenStack cloud admin credentials; and, if your OpenStack is using a self-signed certificate, configuring BOSH to tell it which self-signed certificate is in use. The other thing we highly recommend is minimizing the number of floating IPs. Conceptually, apart from your ingress device — with a native Cloud Foundry deployment that's HAProxy, but in our case, for example, we put a DataPower in front, and some customers put an F5 in front — that should be the only device on the public network.
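The network split described above — management plane, DEAs, and services each on their own network — can be sketched as a manifest fragment. All names, CIDRs, and UUIDs here are placeholders:

```yaml
# Illustrative network layout for a Cloud Foundry deployment manifest.
networks:
- name: cf-management        # BOSH director, Cloud Controller, routers
  subnets:
  - range: 10.10.1.0/24
    cloud_properties: {net_id: MGMT-NET-UUID}
- name: cf-dea               # application runtime (DEAs)
  subnets:
  - range: 10.10.2.0/24
    cloud_properties: {net_id: DEA-NET-UUID}
- name: cf-services          # databases, caches, queues, log aggregation
  subnets:
  - range: 10.10.3.0/24
    cloud_properties: {net_id: SERVICES-NET-UUID}
```

Log traffic then stays on the `cf-services` network end to end, instead of hairpinning out through a floating IP and back in.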
All the Cloud Foundry fabric components should be on your private network behind the firewall. As we mentioned, the other thing we noticed when we started doing some of these deployments is that in almost every customer environment we encounter, there is a proxy or a firewall in place. In our case, since we take the Cloud Foundry release and package it, the Cloud Foundry release itself was not a problem, but a lot of the other supporting packages, services, and runtimes we deploy — for example, Node.js — need to reach out to the internet, because as the deployment is happening they're downloading patches or the latest binaries, et cetera. So we need to make sure that when we go into a customer environment, we bring a list of all the outgoing URLs we're supposed to reach. We now have a standard set — it's been minimized a lot, down to just six URLs we need — which we hand over to the customer to make sure they're allowed in the proxy or firewall configuration. Some other customers, on the other hand, are a lot more restrictive than that: it's not only the outgoing URLs, they also want to know the source IPs of everything sending traffic outside. In that case, if your VMs have IPs from an internal Neutron network, those IPs will not be presented to the external firewall. So if your VMs don't have a floating IP or a customer-accessible IP, the IP you give the firewall admin as the source should be the gateway IP of the Neutron router. If a VM does have an IP from a customer-accessible network, apart from its internal IP, then that IP will be presented to the firewall. The other thing: Cloud Foundry doesn't support SSL inspection out of the box. A lot of our customers actually insert their own SSL certs as the packets come in from outside.
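The source-IP rule in this chunk is easy to get wrong when building a firewall allowlist, so here is a small sketch of the decision as pure logic. The function and its inputs are hypothetical — in practice you'd discover the addresses from Neutron — but the rule it encodes is the one just described.

```python
# Sketch: which source IP will a customer's external firewall actually see
# for a given VM? (All names here are illustrative, not a real API.)

def firewall_source_ip(vm_ips, customer_visible_ips, router_gateway_ip):
    """Return the source IP to put on the firewall allowlist for one VM.

    If the VM holds a customer-accessible (e.g. floating) IP, traffic is
    presented from that address. Otherwise it only has tenant-internal
    Neutron addresses, and outbound traffic is source-NATed at the
    Neutron router, so the router's gateway IP is what the firewall sees.
    """
    for ip in vm_ips:
        if ip in customer_visible_ips:
            return ip                 # presented as-is to the firewall
    return router_gateway_ip          # SNATed behind the Neutron router

# Internal-only VM: give the admin the router gateway address.
print(firewall_source_ip(["10.0.0.5"], {"192.0.2.10"}, "192.0.2.1"))
```

Running this over your VM inventory yields the minimal, accurate set of source IPs to hand the firewall admin.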
Cloud Foundry doesn't support that, so make sure you have the conversation with the customer that custom SSL certs won't be supported in this particular case. The other thing that is very important — and Jesse will talk about it — is that we need to make sure our OpenStack updates and upgrades are seamless and don't affect the Cloud Foundry experience. So that's one of the key tenets of the offering we have and of the code we've implemented to do that, which I'll talk about a little more in a moment. As mentioned, there's a release every six months, and during that release we've got to do an upgrade for these customers, so being able to do that in a way that doesn't disrupt the Cloud Foundry implementation is very important. And even within a release cycle, when there's a security patch or minor releases to the OpenStack services, being able to apply those without interrupting the Cloud Foundry implementation is very important too. The minor releases tend to have a much smaller impact than the major releases, which obviously have a much larger one. And then there's being able to change configuration variables — OpenStack filtering preferences for Nova, those types of things, or load balancer configurations — on the fly for customers, while thinking about how that's going to impact your Cloud Foundry implementation. The one comment I do want to make is that if you look at BOSH and Cloud Foundry, they were architected from day one for updates and upgrades. Unfortunately, that has not been the case for OpenStack. OpenStack is mostly getting in that direction now, but we have seen that updates and upgrades have been a hiccup.
So definitely, with Blue Box coming into IBM, we saw a 100% improvement on that front, because they follow a very disciplined model, and recently all our Bluemix deployments running on top of Blue Box were updated and upgraded without impacting Cloud Foundry even once. The other lesson: you definitely want to automate everything. For that, Blue Box uses Ursula and Rally — Jesse, can you talk about how they actually do that? Yeah, so the core differentiation of our offering is the operational capabilities, not the OpenStack technology itself, and so we actually make the OpenStack code that we use publicly accessible. If you go to GitHub and look at the blueboxgroup organization, Ursula is our project, and Ursula contains all the Ansible code that we use to deploy and upgrade OpenStack itself. Having that set of technology allows us to do both our dedicated and our local installations. Every one of our implementations runs the exact same set of code, all powered by that one automation framework, which allows us to ensure consistency in those environments. We like to think about it sort of like the iPhone versus Android model: while we have less flexibility in our deployment — it is a very opinionated set of OpenStack technologies and a very opinionated configuration of those technologies — we know exactly what's on every one of these implementations. So we know that when we validate an upgrade in our lab and it works there, it'll work everywhere else. This is one of the most challenging things we see when we're interacting with customers that do OpenStack on their own, particularly customers that have multiple environments.
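Ursula itself is on GitHub under blueboxgroup, so the real playbooks are the reference; purely to give a flavor of the approach, an Ansible-driven, rolling OpenStack change looks roughly like the sketch below. The hosts, variables, and task details are illustrative assumptions, not lifted from Ursula.

```yaml
# Illustrative only -- see blueboxgroup/ursula for the real playbooks.
- hosts: controllers
  serial: 1                       # roll one controller at a time, so the
                                  # API stays up throughout the upgrade
  tasks:
  - name: check out pinned service code
    git:
      repo: "{{ nova_repo }}"     # placeholder variables
      version: "{{ nova_ref }}"
      dest: /opt/openstack/nova
  - name: render service configuration
    template:
      src: nova.conf.j2
      dest: /etc/nova/nova.conf
    notify: restart nova services
```

The key properties are the ones called out in the talk: one opinionated code path, pinned versions, and `serial` rollouts, so an upgrade validated in the lab behaves identically everywhere.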
Ensuring that you've got consistency between those environments, both from a code perspective and a configuration perspective, becomes really key as you think about ways to update OpenStack consistently and to ensure your Cloud Foundry implementation operates efficiently across your suite of deployments. The other thing we do is use Rally for validation of the OpenStack environments. Today we use it predominantly in the lab during our development process, as we validate each release of our product. We use Rally both to measure performance from release to release and to validate that the API behavior has not changed from the previous release. Rally has a neat capability that lets you track the historical runs from each execution, so you can see over time how your implementation has changed from a performance and consistency perspective. The other thing we do, once OpenStack is deployed, is use automation under the covers — leveraging the fog gem, which BOSH also uses — to do a lot of discovery on the OpenStack environment: discovering security credentials, flavors, subnets, and security rules. We discover all that information from OpenStack in an automated fashion, and if some of these things don't exist, we go ahead and create them in the background — the security credentials, the key pairs, the different flavors for the DEAs, controller, health manager, et cetera — and create the network security rules. Finally, a combination of BOSH and Ruby is used to automate the whole Cloud Foundry deployment on top of OpenStack: uploading the stemcell, uploading the Cloud Foundry releases using BOSH, and then doing the Cloud Foundry deployment itself.
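A Rally task that mimics the burst a BOSH deploy puts on the API — many concurrent server creations and deletions — can be sketched like this. The scenario name is a real Rally scenario; the flavor, image, and counts are placeholders to size for your own environment.

```yaml
# Illustrative Rally task: exercise the Nova API the way a BOSH deploy
# does, with concurrent boot-and-delete cycles.
NovaServers.boot_and_delete_server:
- args:
    flavor: {name: m1.small}       # placeholder flavor
    image: {name: ubuntu-trusty}   # placeholder image
  runner:
    type: constant
    times: 50                      # total iterations
    concurrency: 10                # parallel API callers, like a deploy burst
  sla:
    failure_rate: {max: 0}         # any failure fails the validation run
```

Running the same task against each release, and comparing Rally's stored historical results, is what gives you the release-to-release performance and API-behavior signal described above.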
Now, as you've seen from all these lessons learned, and as Jesse has been saying, we decided it makes sense — because this model requires constant care, with frequent updates and upgrades — to offer it as a service. From a Cloud Foundry perspective, you have a new release every two to three weeks, but our PaaS is not only Cloud Foundry: it's Cloud Foundry plus 150 other services, and if you look at the permutations and combinations they can create with version mismatches, it can become huge version sprawl. We also want to make sure our public, dedicated, and local deployments stay in sync, so we follow a cycle where the Bluemix we have running in the public environment is kept in sync with the dedicated and local deployments. Similarly, with OpenStack you have two releases a year, and the complexity really requires expertise in many operational areas. Also, a lot of our users want to work with OpenStack, not on OpenStack — they don't want to be configuring and maintaining it. So with that, Jesse will talk about the Relay and the BoxPanel component of Bluemix Private Cloud. And I think this is particularly interesting for organizations thinking about NFV deployments, or other organizations that have to manage multiple environments. If you go back to the OpenStack Summit in Austin, you saw that AT&T created their own internal management system to handle the complexity of having all of those individual installations. Our offering, again, provides those individual OpenStack environments to many customers, so instead of managing one or two OpenStack installations, the plan and goal is to manage thousands. To do that, we needed a mechanism that allowed us control and automation across that entire platform, and we've done that with a technology we call BoxPanel and a technology we call Relay.
BoxPanel has become sort of our system of record for all of the data you need to track a release. Think of every object involved in an OpenStack deployment — the data center, the racks, the networks, the switches, the machines, the subnets, the IP addresses, the customer contacts. Every little piece or object exists in the system, and it gives us a very flexible way to join together data about a deployment in a way that can be modeled out as we do that deployment. So if you're thinking as an organization about running these types of technologies, think about how you're going to plan for this data. If you're going to have many installations, you might want to come up with a system or methodology to keep that in an actual usable point of record; if it's just a small number of deployments, you can do it in something as simple as Excel. But the key is ensuring you've got that data recorded, so that as you think through the transitions and upgrades of those environments, the data stays consistent. This BoxPanel screen shows the experience our operators have. Today, in our offering, this view is exposed to our — excuse me — to our operators who are managing all of these clouds, not directly to users; our goal is to take that headache away from the individual organization. To do that, BoxPanel is, for us, a central software-as-a-service offering that then talks to what we call a Site Controller. In Dedicated, that's running in SoftLayer, and SoftLayer has 40-plus data centers around the world, so we needed a mechanism within each one of those data centers to control each individual deployment there. And in the Local context, the cloud sits on a customer's premises, so we needed a methodology to actually update and maintain that capability.
Our goal with dedicated and with local is, again, to keep the code as consistent as possible. The Site Controller technology sits in a specific geography, can manage multiple environments behind the scenes, and talks back to BoxPanel to get its instructions. It handles things like network automation at the site, power distribution automation, and all of the monitoring and telemetry. So things like log aggregation and telemetry aggregation go into that one Site Controller, rather than coming all the way back to a centralized BoxPanel site. We've found this methodology to work very, very well for managing a multitude of deployments. And again, looking at the work that AT&T has done, I think they've done similar implementations. So thinking through, as an organization, how you are going to manage all of those disparate environments becomes particularly important.

So with that, let me talk a bit about the Bluemix relay. As you can see, at the bottom is a customer data center where Bluemix, with Cloud Foundry, is deployed. On top is the whole backing infrastructure running in IBM, which manages this remote Bluemix deployment, or the satellite Bluemix deployment as we call it. One key piece, apart from the CI/CD process I mentioned, which first moves the code to the public Bluemix, then to the dedicated Bluemix in SoftLayer, and finally into customer data centers, is a technology we use called UrbanCode Deploy. That, in a nutshell, is connected back to a lot of other IBM repositories as well as some external repositories. So we pull the Cloud Foundry code from an internal IBM repository, plus any other supporting services. All the IBM software available on Bluemix as a service, for example our big data suite, our Watson set of services, caching, queuing, all of those live in different repositories in IBM locations across the world.
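The multi-repository pull just described amounts to pinning every component into one release manifest; here is a small sketch in that spirit, with repository names and versions invented for illustration.

```python
# Sketch of assembling one consistent release from several repositories,
# in the spirit of what UrbanCode Deploy does here. All component names
# and version strings below are invented for illustration.
from typing import Dict

def build_release(name: str, components: Dict[str, str]) -> Dict:
    """Pin every component version into a single release manifest, so the
    exact same bundle can be pushed to every satellite location."""
    return {"release": name, "components": dict(sorted(components.items()))}

manifest = build_release("bluemix-local-example", {
    "cloud-foundry": "v245",
    "watson-services": "2.3.1",
    "big-data-suite": "1.8.0",
})
print(manifest["components"]["cloud-foundry"])  # v245
```

Freezing the versions in one place is what prevents the "permutations and combinations" version sprawl mentioned earlier: every satellite gets the same tested bundle.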
So UrbanCode Deploy helps us pull that code from those different repositories, create a single consistent release for a Bluemix deployment, and then push it out to all these satellite Bluemix locations we have deployed across the world. That's one key piece. The other is the security piece, what we call internally the Curator, which manages these Bluemix deployments in terms of collecting logs and generating reports: who is accessing your environment, how much CPU is being used, what triggers are being sent into your environment, whether memory is reaching its limit. All that information is collected through the Curator console, and reports are generated every day for customers to see what is happening in their environment. Also, all the VMs running the Cloud Foundry components in that environment are connected back to an Active Directory server running in IBM, so that every login and logout to those VMs is tracked and recorded. And apart from the security, the patching and maintenance of the Cloud Foundry VMs themselves, in terms of operating system patches or any security vulnerabilities, is also handled remotely from the IBM data center.

This is a view of the Bluemix operations console. One thing is that it follows the iPhone model Jesse was talking about: you get updates, and as you can see on the top right there is an update pending, and customers can then select what dates or times, within a defined period, they would want this update to be applied. That's how we make sure we keep all the satellite, local, private Bluemix deployments in sync across the world: by sending these constant updates, which customers can then schedule on the dates they provide.
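The "customer picks a date within a defined period" scheduling just described boils down to a window check like this; the dates and window length are illustrative.

```python
# Sketch of the scheduled-update window: the operator proposes a window,
# and a customer may pick any date inside it. The seven-day window and
# the dates here are illustrative, not the actual Bluemix policy.
from datetime import date, timedelta

def in_window(requested: date, window_start: date, days: int) -> bool:
    """True if the requested update date falls inside the open window."""
    return window_start <= requested < window_start + timedelta(days=days)

start = date(2016, 11, 1)
print(in_window(date(2016, 11, 3), start, 7))   # True: inside the window
print(in_window(date(2016, 11, 20), start, 7))  # False: past the window
```

The window bounds are what keep deployments from drifting: customers get scheduling flexibility, but every site still takes the update within the same period.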
The console also gives a view of the usage of the environment: apart from the basic IaaS metrics like memory and CPU, it tracks how many users are accessing your Cloud Foundry or Bluemix, how many apps are deployed, and how many services are deployed. There are a lot of other capabilities within this ops console as well, for example integrating your Cloud Foundry deployment with a customer's LDAP, adding users from that LDAP, or doing catalog management. For example, given a set of services, what plans do you want to expose to which users? Should they be able to provision those services, or should only a subset of users be able to provision them? You can do all of that through this operations console.

So to wrap up, this is what we offer: Bluemix Private Cloud, platform and infrastructure as a service, in the data center. Right now, as you can see, there are two different relays operating and managing the IaaS and PaaS portions; there are definitely plans and discussions about making this a single connection in the future. And finally, we want to say that Cloud Foundry and OpenStack are a great fit: 100% open source, with a strong and vibrant community behind each, and they really complement each other. Bottom line, it's a match made in heaven. Thanks. Any questions?

So the question was, how closely should the Cloud Foundry and OpenStack operations teams work together? In our case, they're different teams. In our model, the entry point is anyone opening up a ticket on the Cloud Foundry side. The Cloud Foundry operator then determines at some point whether it's an issue with Cloud Foundry itself or with OpenStack, and then they go through the formal channel to the OpenStack team to say that this is an issue with OpenStack.
So they are different operations teams as of now, which is not to say that this is the model we want to follow in the future; we definitely want to bring them closer together. The other thing we have noticed concerns the credentials you create when deploying Cloud Foundry on OpenStack. Because you need to set quotas and set flavors, you end up giving cloud admin credentials, OpenStack admin credentials, to the Cloud Foundry deployer, which is not the right fit, right? But if you just give them tenant credentials, then the whole job of setting quotas, et cetera, falls on the OpenStack operator. So these are issues we keep going back and forth on. There is a way within OpenStack, though it is not available through the Horizon console, to create partial roles. Those roles allow you to create your own tenants, create your own flavors, and set the quotas for them, but you cannot affect any other tenant or any other workload running on OpenStack. For some reason it's not exposed through the OpenStack Horizon UI or CLI, but there is a configuration file you can prepare and then use for that particular role.

From my perspective, I'm a big believer in SLA contracts. If you have an agreement with the operations team running the OpenStack environment around what they're going to expose and how they're going to expose it, then as long as they commit to that contract, you as a Cloud Foundry operator should be able to trust that it works as expected. And I think it's actually preferable to have those teams split and focused on the specific technology at hand, versus trying to hold two worlds of technology in your head at one time.

The question was, have we ever outgrown the virtual network under a Cloud Foundry installation? You mean the number of different virtual networks, or the number of machines? You're talking subnet size. Yeah, subnet size.
Fortunately, no, not so far, because in the model we follow, the subnet we create for the Cloud Foundry management control plane is different from the subnet we create for the DEAs and the supporting services. Most of the growth has to come on the services side, and the services either go in their own tenant on OpenStack or, if they have to grow while residing in the same tenant as Bluemix, for example our logging services, which need to reside in the same tenant where Cloud Foundry is running, they use a different network altogether. So because our data plane is separate from the management plane, and the management plane is predetermined in our case, we still keep a lot of buffer. We use a /24 or a /23.

So the question was, does Cloud Foundry support IPv6 internally? We haven't used it so far, do you know? No, not that I'm aware of. Anything else? Great, well, thanks for joining us this afternoon, and have a great rest of your week. Thanks.