And these are my colleagues Jim Bush and Andrew Bordine. Today we are going to talk about how to deploy a production-grade PaaS platform like Bluemix on top of OpenStack. Give me a moment. Okay. So according to a recent survey done by the research firm Crisp Research, OpenStack and Cloud Foundry sit at the very top, which is not surprising, not just because they are very popular open source technologies, but also because of the way they complement each other: you don't get overlapping functions. So with that, let's talk about Bluemix and what it is. Bluemix is our platform-as-a-service offering. It's built on top of Cloud Foundry, and it allows developers to rapidly build, deploy and manage applications while tapping into a growing ecosystem of services, not only from IBM but from third-party service providers as well as from the open source community. And it's not only a platform to run your applications; it's also a platform to actually build your applications, and we provide an integrated DevOps capability with browser-based as well as Eclipse tooling. There are a large number of services offered with Bluemix: mobile services, API management, Internet of Things. I would strongly encourage all of you to go to bluemix.net and try it. It's a global platform. Our shared hosted offering is actually running out of the United States and Europe, and leveraging the global presence of SoftLayer, the public cloud infrastructure on which it is running, we can launch dedicated Bluemix across the globe. It's based on Cloud Foundry, as I mentioned. Now, Cloud Foundry also has a foundation, much like OpenStack, which was established in the summer of 2014, and in February of this year they got their executive director and other board members. So why Cloud Foundry? First of all, it's 100% open source. That means with the tools and the technologies and the apps which you are building,
you're not getting any vendor lock-in. Secondly, it meets developers' needs: if you're going to Cloud Foundry with your apps, it's smart enough to detect what runtimes you need, and it provisions them and binds to the services you requested. And last but not least, it has a very strong and vibrant community: more than 800,000 lines of code have been contributed, and more than 1,300 developers across different companies like Pivotal, IBM, HP and SAP are working on it and contributing. So, as I mentioned, Bluemix first and foremost is an app platform. It's a platform to run your apps, be it Java apps, Node.js apps, Ruby apps; all of these are supported from IBM. And then you can bring in runtimes from the open source community, for example PHP, Python, Go; there are buildpacks available in the community which you can bring into Bluemix. So what does the developer experience look like on top of Bluemix? You can go to Bluemix using either a CLI, your favorite IDE like Eclipse, or a browser. Under the covers it actually runs a command called cf push as you push your application, and once it's deployed you get a URL back, and that's it. Now, what's happening under the covers as
you are pushing your application? It hits what is called the Cloud Controller, which is the heart and brain of a Cloud Foundry deployment. The Cloud Controller is responsible not only for the deployment of your applications, but also for the lifecycle beyond the initial deployment. Once your application is deployed, the routes are registered in the router and you get the URL back. Now, in the back, your applications are actually getting deployed on a set of VM pools. These are not just any VMs; these are specialized VMs called DEAs, or Droplet Execution Agents. So this is the VM pool which is spawned, and your application is combined with the runtime and the framework and deployed in containers running on those VMs. Cloud Foundry uses a container technology called Warden, which is the basis, and which is where all your applications go. Now, once they are deployed, there is a Health Manager component in Cloud Foundry which is responsible for managing the health of the whole Cloud Foundry deployment, and all these different components interact with each other over the NATS messaging bus. So Bluemix is not only a runtime platform; it's also a services platform. As I mentioned, there are a large number of services available in the Bluemix catalog around mobile, security, big data, databases, Watson. So I would strongly encourage you to go on bluemix.net and try. Now, what does the service interaction look like in the background? When you,
either from your CLI, Eclipse, or a browser, tell Bluemix to create a service, under the covers the command which is executed is the create-service command. It actually hits the Cloud Controller. Now, for every service to exist, there is a service backend which a service provider has implemented, and there is a service broker. As you tell the Cloud Controller to create a service, it fetches the catalog of services from the service broker. Then, based on the service which you have selected, it provisions an instance of that service. Now, provisioning the service instance is just one side; the second side of it is that you need to bind these services to your app. So there is another command which is executed under the covers, which actually binds your service instances to your app. So that's about the internal architecture of Bluemix. Now, Bluemix is actually evolving to be a hybrid offering. What I mean by that is, when we started Bluemix, the focus was solely on platform as a service. Over the course of the last year we have added Docker containers as a service and OpenStack as a service: you get OpenStack-based virtual machines, or containers which are running in the background on OpenStack, through the Bluemix catalog as well. And in February, at the IBM InterConnect conference,
we also announced that there is going to be a new offering called Bluemix Local, which is Bluemix in your data center, something you can deploy in your own data center. So we now have three flavors: the shared hosted Bluemix; the dedicated Bluemix, which is a dedicated pod on our public cloud for you; or Bluemix in your data center. Now, a quick introduction to IBM Cloud Manager with OpenStack. This is our OpenStack-based distro, and it takes all the capabilities of OpenStack and adds some things on top of it. A few of those things: for example, instead of just managing the x86 platform, it also manages our Power Systems as well as System z, so you can deploy virtual machines there as well. These drivers have also been contributed to the open source community. In addition to that, there are things like approvals, billing, accounts, metering, reports, etcetera, which are very important for our enterprise customers, which have been added as well. Now, what's our goal? Our goal here is to deploy Bluemix on IBM Cloud Manager with OpenStack. So this is what we want to achieve, and how do we go about it?
So essentially, the first thing is to get OpenStack in place and get it configured. Then what we do is boot up a machine called the inception machine, which essentially has all the bill of materials we need to deploy Bluemix. It has all the Bluemix releases, the code, the client, and it also has a special agent called the UrbanCode Deploy Bluemix deployment client, and we'll talk about it as we go through. Now, once this machine comes up, it initiates an SSL VPN connection back to our remote orchestration and management server, the UrbanCode Deploy server, which is running on SoftLayer. Once that connection is established, the first thing our remote UrbanCode Deploy server running on SoftLayer does is check whether the bill of materials on that inception machine is current or not. If the releases which were baked in, or the stemcell files, or anything else are outdated, it updates them. Once everything is updated, the UrbanCode Deploy server remotely, from SoftLayer, orchestrates the deployment as well as the lifecycle management of this whole Bluemix platform on top of OpenStack. So now let me talk about BOSH, which is one of the tools at the heart and center of this whole Bluemix deployment. The core Cloud Foundry components are deployed using BOSH, but not only Cloud Foundry components; for a lot of the components which we are adding for Bluemix on the management side within IBM,
we are also creating BOSH releases and BOSH packages. So BOSH, as I mentioned, is the deployment as well as the lifecycle management tool for Bluemix and Cloud Foundry. Now, BOSH has an architecture of its own, where there is a BOSH Director, which in turn interacts with a database; it's essentially a client-server based architecture, where there are a lot of BOSH agents which go into all the Bluemix components that are deployed, and it then uses those agents to make sure that Bluemix is up and running properly, and if not, it can take corrective actions. The other thing, on the right side of this slide, is what we call a Cloud Provider Interface (CPI), which is how BOSH interacts with different infrastructure-as-a-service platforms, be it OpenStack, be it VMware, be it Amazon Web Services or Google App Engine. If we need to run Cloud Foundry on top of an IaaS, that CPI needs to be implemented. The OpenStack CPI has been implemented, and it uses Fog, which is an open source cloud library, under the covers to interact with OpenStack: to create VMs, to create networks, to create persistent volumes and attach them, etcetera. So how does BOSH work? BOSH takes a release, which is a collection of software packages, say a MySQL package or a Cloud Foundry package. It then takes a base OS image, which is called the stemcell image, and then there is a deployment manifest, which is the contract in terms of what needs to be deployed on top of OpenStack. That deployment manifest tells BOSH: this is where OpenStack is running, these are the credentials to get into it, use this base OS image,
these are the releases of Cloud Foundry and Bluemix. BOSH takes all of them together, spawns virtual machines on OpenStack, and then converts them into the different Cloud Foundry and Bluemix components. This is a sample manifest, as you can see here. Typically they can run into thousands of lines, depending on all the different things you are deploying, and you can specify things like how many instances of a particular component you want, etcetera. So, as I mentioned, BOSH is not only a deployment tool; it's also a lifecycle management tool. You can use BOSH to scale your Cloud Foundry and Bluemix instances. For example, some of the components of Cloud Foundry, like the Cloud Controller, the routers, etcetera, are defined in that manifest, and if we change the numbers, BOSH detects that there are changes in the manifest, and you can initiate a scale-up or scale-down of this environment on demand. Okay, with that let me pass on to Jim Bush, who is going to give a bit of detail about how we configured OpenStack, or ICM in our case, for this particular Bluemix deployment. Thank you. Hi. Can you hear me? Great. My part of the project was to get OpenStack up and running in this environment, so that Animesh and Andy could get Cloud Foundry, and ultimately Bluemix Local, up and running. And to get OpenStack in my environment,
I went with the IBM Cloud Manager, the 4.3 version. Let me get that. So, to get OpenStack running, the IBM Cloud Manager running, it's actually very simple. It's a single executable binary that you can download and install on your server. The first server you install it on is called a deployment server, and it takes about three minutes or so to unpack and install with that single binary. You end up with a running Chef server at the end of that, which you can then use to deploy, manage and update your cloud. Let me go back. The Cloud Manager comes with sample files; it has a lot of JSON files for advanced configuration, if you want to do an advanced deployment of your cloud, but IBM also provides YAML files, which are very simple configuration files. This is an example of ours; it's hard to read, but you get this as part of the ICM code. What you need to do, if you use the YAML file, is just modify your host names and your Ethernet interfaces if you have differences from the sample, and then you can just deploy and run your cloud. If you want, we have some examples here of some advanced configuration that we've done: we've modified some of our quotas, we've modified compute node numbers, and we're auto-installing some of our Glance images as part of the cloud deployment, which makes it easier, so in the following steps there's less configuration that you have to do from the start. So once you have ICM installed and running, you end up with a Kilo environment. This is the latest OpenStack that you can have; it has all the OpenStack APIs and all the CLI commands that you expect and are familiar with. And with our YAML file, if you deployed with it, you end up with all the compute nodes and controller nodes that you requested. We use Logical Volume Manager for our Cinder storage. ICM also supports, if you want higher availability,
You can use a SAN: NetApp, EMC, XIV, or Storwize storage. And then there's an option with the latest version of ICM to use the HA configurations, and we'll go through that in a moment. So, to get my environment up and running, I had Red Hat 7.1, and I needed a minimum of three machines. The one to start with is the deployment node, where the Chef server is running. Second, we need a controller node, and then you need enough compute nodes to manage and support all the Bluemix and Cloud Foundry VMs that are running. We ended up, in our environment, with our Bluemix Local needing about 250 virtual CPUs, about 500 gigabytes of memory, and then several terabytes of disk for Cinder and also for the compute nodes, so they're able to grow and expand over time. The other requirements: we need a DNS server, so that the Cloud Foundry components, the UCD deployment servers, and the DataPower can get out to the internet. We need a wildcard domain name for the Cloud Foundry deployment. And then we need yum repositories, so that any of the Red Hat prereqs that we need as part of the OpenStack install are there and available. If you were to use the HA deployment, there are some sample YAML files and JSON files that come with ICM, and if you use the sample you'd end up with a three-controller configuration for HA; you can go to as many as ten controllers if you want, but it starts with three. The nice thing about this is that it not only gives you HA very easily through those YAML files, but it also gives you some advanced scalability. You end up with multiple copies of all services over multiple controllers, all load-balanced using HAProxy and Pacemaker, with DB2 under the covers for the database; they use HADR replication across all the controllers for DB2. And then ICM of course has the Horizon dashboard that you know and love, but to help reduce the
complexity, the "PhD" that they were talking about in the keynote that a lot of users need, they've put in a self-service UI for end users, and you may notice that this looks fairly similar to the Bluemix UI that we have up in SoftLayer. They're trying to get a unified look and feel, a common feel, to using the IBM cloud. From this you can start instances, see your instances, destroy instances. Then a key component, once we had OpenStack up and running, was to get the Cloud Foundry software and the Bluemix software running in this environment. What we have is this Bluemix deployment inception image, and we actually share this with several other projects at IBM. There's a team that makes this inception image; it's an Ubuntu image in which they've embedded DataPower, the BOSH software, and the Cloud Foundry software. But they use it in a VMware environment, so we needed to convert it over to be able to use it in our environment. So, they had already created the image; we imported it and tried to use it, and the first thing we noticed is that on their VMware instance they put swap in the last partition of the drive. So when we tried to deploy this with OpenStack and tried to go with any larger flavors, it wouldn't resize automatically; it would get the larger size, but the file system was not auto-resized. So we suggested to them to either remove the swap or move the swap out of the last partition. Second, we needed to install cloud-init and the DHCP software so that the metadata service would be able to run. What this allows is that, once the image is deployed, it gets the network, it gets the IP address, the host name, and any of the advanced user data that Andy's going to talk about in a moment. The DataPower required this
metadata service. And then, to actually do the conversion from VMware to KVM, we used the qemu-img tool. It's a tool that comes with Red Hat, and you convert the image from VMDK; we went to raw, imported this image right up through the Horizon UI into the Glance repository, and then we were instantly able to use and reuse this inception image. With that, I'll hand it back over to Animesh. That was a great overview of how we configured ICM. Now, one of the configuration points in ICM, and in OpenStack, was how we configured the network so as to deploy Bluemix on top of it. So essentially, taking a look at it from a networking point of view, how did we actually do that? This is a view of our Bluemix environment on OpenStack, from an OpenStack perspective. As Jim talked about, essentially we have a controller node, we have a Chef node, then there is a Cinder volume node, and then there are the compute nodes on top of which your Bluemix deployment goes. This is a non-HA model, but the key thing here is the networking. We decided to go with three networks. One is our private OpenStack management network: the communication between the controller and the compute nodes goes over that private OpenStack management network. As we were deploying Bluemix, there are around 40 to 60 VMs you can get; all of them actually go on top of the Bluemix private tenant VM data network, which is created using GRE tunnels. Now, the advantage of using the GRE tunneling mechanism is twofold. One, we get a totally isolated environment for all the Bluemix virtual machines. The second is that we can leverage the virtual IPs which come from Neutron to assign to this set of 40 to 50 VMs which we are provisioning, and there is less load on the infrastructure. So this is a networking view in terms of how ICM was configured. All the compute nodes essentially need one Ethernet interface which is connected and
used for your private management network. You can create your VM data network using the same Ethernet interface, but our recommendation is to have one more Ethernet interface, because Bluemix, or Cloud Foundry, is a very chatty environment, so you want separate bandwidth for all the communication happening between the different Bluemix VMs. And then you need connections from outside to some of the Bluemix components, like DataPower, etcetera, which are going to act as a gateway for everything into Bluemix. For that, we actually connected our controller node to the external shared network as well, and Neutron uses that external shared network to assign floating IPs to any of the Bluemix VMs which need them. And this is essentially a view of how it looks from a VM's perspective when that network is configured. One of the things we had to do with respect to a lot of the Bluemix components was work with the model that all the VMs on which they are deployed have just one Ethernet interface. That's all we configured: from a VM's perspective, there is just one single Ethernet interface and a private virtual IP which comes from Neutron. If you do need publicly accessible IPs for these VMs, Neutron handles them using translation, and they are floating IPs, but the VM itself is not aware of it. And then other tenants can have their own specific networking schemes for services. With that, let's talk about DataPower, which is the gateway for everything into Bluemix. Essentially, for all the management traffic, as well as, once Bluemix is deployed, for any app which is there, DataPower is the gateway into the Bluemix environment.
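The three-network setup just described can be sketched with era-appropriate Neutron CLI calls. This is an illustrative sketch, not the team's actual automation: the network, subnet and router names and the CIDR are hypothetical placeholders, the external shared network is assumed to already exist, GRE tenant networking is assumed to be configured in ML2, and the commands need a live Kilo-era OpenStack with python-neutronclient and valid credentials.

```shell
# Private tenant network for the Bluemix VMs (isolated via GRE by the ML2 config):
neutron net-create bluemix-tenant-net
neutron subnet-create bluemix-tenant-net 10.10.10.0/24 --name bluemix-subnet

# Router from the tenant network to the pre-existing external shared network,
# so Neutron can NAT floating IPs for the components that need external access:
neutron router-create bluemix-router
neutron router-gateway-set bluemix-router ext-shared-net
neutron router-interface-add bluemix-router bluemix-subnet

# Allocate a floating IP for a gateway component such as DataPower:
neutron floatingip-create ext-shared-net
```

The Bluemix VMs themselves only ever see their single tenant-network interface; the floating IP translation happens entirely in Neutron, which matches the single-NIC model described above.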
So Andrew Bordine from our team is going to talk about DataPower for a bit. Hello. Awesome. So I'm one of the developers that have been working on automating our Bluemix deployments on OpenStack, our Bluemix Local, sorry. Just some real quick points about DataPower. It's what you've come to expect from an enterprise-grade gateway: all of your secured and unsecured incoming traffic is handled via the DataPower instance and sprayed to the internal components of our Bluemix platform. And some of the other properties you expect of an enterprise-grade gateway: it does URL rewriting and service-level monitoring, as well as acting as an enforcement point for the platform. Just getting into a little bit of the topology of our deployments: you can see all of our ingress traffic is being handled through DataPower. Cloud Foundry specific traffic is proxied through to the gorouters and into the individual components of Cloud Foundry, and for all of the other Bluemix specific services that are part of the platform, we're handling all the routing simply through the DataPower gateway. Moving to more of a networking view of this topology: in order to deploy the DataPower instance, we had the requirement that an individual instance needed a single interface with multiple IP addresses associated with it from the tenant's private network, and we were able to automate that simply by using the OpenStack APIs.
So first you just go ahead and ask Neutron to create a port with however many IP addresses from the tenant network you want associated with it. We get that back, and we go ahead and boot our DataPower image and associate that port with the image at boot time. Then, through the use of a user-data file, a cloud-init script, we add those IP addresses to that interface at boot time, and then modify the interface config so that that configuration will persist through reboots. This is necessary so that we can support the multiple domains that DataPower needs to be able to handle into the platform components, whether that be the app domains through Cloud Foundry or the Bluemix specific domains. So we were able to do this just by using the OpenStack APIs and automating that deployment, and from the VM's perspective it's just a single interface on the tenant network. So with that I'm going to hand it back to Animesh; he's going to talk a little bit more about how we're orchestrating these deployments. Thanks, Andy. So that was a great overview of DataPower. DataPower is actually our entry into Bluemix, so it acts as a device to terminate, or offload, your SSL traffic and route your app requests to the different domains. As I mentioned, we have different domains for administrators as well as for the apps which are running, and also the customers, or the environments where we deploy, will have their own customer domains.
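The multi-IP port flow Andy walked through can be sketched as below. This is a hedged illustration, not the team's actual scripts: the names, subnet IDs and addresses are hypothetical placeholders, a Debian/Ubuntu-style interfaces file is assumed for persistence, and the cloud commands are shown commented out because they need a live OpenStack endpoint; only the user-data file is actually generated here.

```shell
# Build the user-data script that adds the secondary addresses at boot
# and persists them across reboots; addresses and interface name are hypothetical.
cat > add_ips.sh <<'EOF'
#!/bin/bash
# Secondary IPs for the extra domains served by the gateway:
ip addr add 10.10.10.11/24 dev eth0
ip addr add 10.10.10.12/24 dev eth0
# Persist across reboots (Debian/Ubuntu-style interfaces file assumed):
cat >> /etc/network/interfaces <<'IFACES'
up ip addr add 10.10.10.11/24 dev eth0
up ip addr add 10.10.10.12/24 dev eth0
IFACES
EOF

# These calls need a live OpenStack, so they are shown commented out:
# neutron port-create bluemix-tenant-net \
#   --fixed-ip subnet_id=$SUBNET_ID --fixed-ip subnet_id=$SUBNET_ID \
#   --fixed-ip subnet_id=$SUBNET_ID          # one port, multiple fixed IPs
# nova boot datapower-gw --image datapower-img --flavor m1.large \
#   --nic port-id=$PORT_ID --user-data add_ips.sh
```

The key design point is that the extra addresses live on one Neutron port, so the VM keeps a single NIC while still serving several SSL domains, each with its own certificate.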
So for all these different domains, we need to provide different SSL certificates. That was one of the requirements for having these multiple IP addresses: so as to provision different SSL certificates for the different clients coming in on these different domains. So with that, we have covered our OpenStack configuration; we covered DataPower, which is the entry into the environment; and we covered the inception machine, which is used by our remote server running on SoftLayer to orchestrate the deployment and the lifecycle. Now let's talk about this remote server, the remote component which is running on top of IBM SoftLayer and is actually responsible for orchestrating the deployment as well as the lifecycle management of this whole platform. It's called UrbanCode Deploy; that is an acquisition which IBM made a couple of years ago. One of the key functionalities which UrbanCode Deploy provides is being able to assimilate code artifacts and binaries from a host of repositories. As we mentioned, Bluemix itself is an amalgamation of a large number of components, so there is automation code associated with all these components, including DataPower, for example, or the inception machine, and these reside in different SCM-based repositories like GitHub, RTC, etcetera. UrbanCode Deploy acts as the central point in terms of being able to pull code from all these different repositories, and it stores it in a local CodeStation. Now, once it has pulled all that code, it can actually push it out.
There are two ways to go. One is it can use a relay server. The relay server is something which can co-locate and co-reside with the agents where you need to push the deployment. If you have multiple agents you are pushing deployments to, or multiple environments, the recommended architecture is to have a relay server, so that it can cache the data closer to your environments. Secondly, it also helps if your network is very restricted: you don't want multiple firewall holes punched if you have multiple agent machines running in your private environment. So the relay server can be just one gateway, in terms of entry to that environment, and your multiple agents can be connected through the relay server. Now, UrbanCode Deploy allows you to define and design your process in terms of what you are going to deploy. As you can see on this screen, the basic Cloud Foundry components, like the BOSH CLI, MicroBOSH, Bluemix, everything, you can define as a component, and you can design this process flow in terms of going through step-by-step instructions, and you can have checkpoints. You can have decision
You can have decisions Markers in terms of if it is a VMware platform do this if it's an open stack platform take this route You can have that all that automation and define this design process Once you have defined this you can actually push a deployment Now as the deployment is going on it will go sequentially through all the steps which you have defined in that process flow and Stream live logs to you So you can see all the logs from that remote environment being streamed back to the central ucd server If something is going wrong It will actually tell you there at ectra and you can stop and the next time when you actually push a deployment You can start from the same point again It also has this component Concept of components and versions so it actually knows what versions of each component it has deployed So if you want to go and do later on Updates and upgrades you can select that Only if it is the latest version or a later version of what you already deployed then push that so very very smart from that perspective Now what are some of the things which is odd that it is automating under the covers? Essentially, we are using fog for example to do a lot of discovery from open stack So we are discovering a lot of information with respect to security credentials VM configuration sizes Network subnets, etc. So as to craft that manifest file, which I talked earlier Which is used to deploy a blow mix or cloud for me, right? Not only discovers the information if there are certain things missing in our open stack install for example Each of the blow mix components that require different flavor sizes or different VM configuration sizes It can actually go ahead under the covers and it creates that it will also create certain security keys, etc Which are needed and firewall rules if they don't exist which are required for our blow mix deployment to work Now Andy talked about data power, right? 
So DataPower was a unique case, where instead of directly going through Nova, we had to first go to Neutron, create a port, and from that port request four IP addresses, because we needed to support these custom SSL certificates, which are different for different domains. That we also automated using Fog. We first go to Neutron and get a port with multiple IP addresses; then we go to Nova and actually bind that port to the Nova compute instance. And last but not least, once that VM, the DataPower VM, is provisioned, the other IPs which we added are not bound to the interface, so that's something we do by leveraging the metadata. All this is also automated using Fog under the covers. And finally, we use BOSH and a Ruby templating mechanism under the covers to do a lot of the Cloud Foundry deployment automation, in terms of creating and uploading your releases and stemcells, deploying them, etcetera. And the great thing about BOSH, as I mentioned, is that it also allows you to manage the lifecycle in terms of updates and upgrades. So we can use the same central UrbanCode Deploy server, which is sitting remotely, to do a remote update and upgrade of your environment. So we have kind of covered all the core pieces which form our Bluemix deployment. Now, a couple of things I also want to mention: the work we are doing on the monitoring side, on the logging side, etcetera. These are key, because for any platform which is going to be remotely managed, remotely updated and upgraded, you need to have a view into the monitoring, you need to have a view into the logs, etcetera. For monitoring, we actually use an open source project called Graphite, and then there is another one, Grafana. So essentially it's a collector-dashboard, agent-based model, where there are multiple agents collecting monitoring data for you.
They stream it back to a collector, and then there is a Graphite or Grafana dashboard which actually displays it for you. The database it uses under the covers is InfluxDB, which is a time-series database, and it is very fast at processing, if your environment is very distributed, in terms of pulling in that monitoring data, analyzing it, and producing a monitoring view. So this is our architecture in terms of deploying the monitoring stack into Bluemix Local environments. The other thing we are using, for logging, is the ELK stack, which is very popular right now; a lot of you might be working with it in different forms. ELK stands for Elasticsearch, Logstash and Kibana. The Elasticsearch component is essentially needed for indexing and storing your data. Logstash plays multiple roles, in terms of parsing, archiving, etcetera. And then finally you have a Kibana dashboard, which gives you a view of the logging data. So again, this whole stack we will actually be deploying in a customer environment, in your own dedicated environment, and then finally we can collect the logging data. Now, there will also be Logstash servers which we will have on SoftLayer, so as to pull some of that data in case of failures, in case things go wrong and we need to debug. So there will be collectors running in our SoftLayer environment, able to pull some of that data. Now, that data will be filtered; we cannot pull everything, and definitely not any app-related data. The last point I want to touch on: as I was mentioning, Bluemix is not only a runtime platform; it is a platform for services, right? So, of the multiple services which we are going to offer when we take Bluemix into a data center, I cannot talk through all of them. One of the services which we are enabling is elastic caching, where we are actually creating a BOSH release of the elastic caching service, and that's why I keep saying that BOSH is not only for Cloud Foundry components; for a lot of
other services which we are deploying, we are creating BOSH-based packages so that we can deploy them through BOSH. That work is almost complete. The other work which we are right now undertaking, and in the process of defining, is how to launch a container service on our Bluemix platforms which are going to run in a customer data center. Now, if you have gone to Bluemix, you would have seen there is a container service available there, which works on a certain architecture. One of the tasks we needed to do was take the Bluemix architecture which we have defined, and the networking concept which we have defined, and fit the container service into that model. So this is essentially one view in terms of how we can deploy the container service. The container service we have right now uses Ubuntu as the host machines and spins up containers on top of that. Our ICM environments are Red Hat based; the Red Hat machines are the hosts. So one model which we are trying out is creating virtual machines, Ubuntu-based virtual machines, and then adding them back to OpenStack to treat them as hypervisors, and then using the nova-docker driver there to actually spin up containers. So that's the model. And in terms of how the tenant, or networking, model changes in that context: we are creating a specialized tenant called the container management tenant, which essentially is going to be the tenant where all the users, all the customers of Bluemix, register, and for every user who gets registered onto this particular container management service, we spawn a different tenant, and he goes into that particular tenant. So what you can see on the right-hand side is container tenant one, container tenant two. Part of the reason is also that the container management service we have was written relying heavily on OpenStack tenant-based quota and policy enforcement for doing billing, metering and chargeback. So this is one model; in
terms of aligning the container model with the tenant model, we can actually enforce the billing and the quota for each user, and that is already in place. Now, there could be a better way, or an alternative way, to do this, which is essentially to get all these different users registered in the same tenant. That's an exploration we are doing in the background: how can we enforce quotas and policies within one tenant for different customers who are getting registered? So this is an example of how we are getting different services within Bluemix, which you see in the hosted catalog. So with that, we covered what we wanted to cover, in terms of taking you through our journey of how we are taking Bluemix and running it on OpenStack. In parallel, there is also an effort to make sure that we can run Bluemix on VMware, and you will be hearing very soon from us about some solid steps going forward with respect to this. Any questions or comments? You can also reach out to us on our Twitter handles if you have any further questions. [Inaudible audience question.] Okay, thanks everyone.