Good afternoon, guys. Can you hear me? Did you guys enjoy your lunch? That's good. All right, because of the projector situation, I would recommend that people on this side of the room move to the other side, because I have a lot of demos and slides to show, and I'm not sure you can see them from over there. But it's up to you.

All right, let's get started. My name is Victor Fong. I am not Brian Roche. Brian Roche is actually sitting right there; somehow the Foundation mixed up our names. I work at the Dell EMC Dojo. We have two goals at the Dojo. The first is to contribute to open source. The second is to evangelize the practices we learned from Pivotal, such as pair programming, test-driven development, and CI/CD, and bring them into Dell EMC to try to transform Dell EMC as a whole. We have a blog, so if you like what you see here, please check out more articles from us at dojoblog.emc.com. And feel free to follow us on Twitter; our handle is Dell EMC Dojo.

With that, let's get started. The RackHD CPI is the first project that the Dell EMC Dojo worked on. The Dojo was started about a year ago, and the first project we took up was the RackHD CPI. At that time, we spent about six weeks in San Francisco learning from the Pivotal team, and we brought the methodology they practice back to Cambridge. So the RackHD CPI was created using pair programming, where two engineers sit together to solve the same problem at the same time, and test-driven development, which means we write the tests first and the implementation second. And the whole CPI is built by an automated continuous integration pipeline. You can see that on the bottom: we have a couple of levels of tests. The first level is integration tests, the second level is lifecycle tests, and the third level is the BOSH Acceptance Tests for both Ubuntu and CentOS stemcells.

So what is BOSH? I'm pretty sure almost all of you know what BOSH is. It's a tool designed to deploy Cloud Foundry into different types of infrastructure-as-a-service environments. It automates the deployment, health monitoring, upgrading, scaling, and cleanup of BOSH releases. It's primarily used to deploy Cloud Foundry and the data services used by Cloud Foundry. It's a CI/CD tool, and it communicates with the IaaS layer through a layer called the Cloud Provider Interface. At this point, there are many, many Cloud Provider Interfaces out there. When I first got started, it was just vCloud Air, vSphere, OpenStack, and AWS. But now I think we have the addition of Microsoft Azure and also Google Cloud Platform, and I think SoftLayer as well. And of course, RackHD.
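To make the Cloud Provider Interface concrete before we go on: BOSH drives an external CPI, including the RackHD CPI, by executing the CPI binary and passing it a single JSON request on standard input. Here is a minimal sketch of that contract; the method names come from the BOSH CPI API, but the argument values below are simplified placeholders rather than a real payload.

    # BOSH sends one JSON request per CPI call on stdin.
    # create_vm arguments (CPI v1): agent_id, stemcell_cid, cloud_properties,
    # networks, disk_cids, env. All values here are placeholders.
    echo '{
      "method": "create_vm",
      "arguments": ["agent-uuid", "stemcell-cid", {}, {"default": {"type": "dynamic"}}, [], {}],
      "context": {"director_uuid": "director-uuid"}
    }' | ./bin/cpi
    # The CPI answers on stdout, for example:
    # {"result": "<vm_cid>", "error": null, "log": ""}

For the RackHD CPI, a call like create_vm maps to reserving a discovered node and laying a stemcell down on it, which is exactly what we will watch happen in the demo.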
So what is the RackHD CPI? It enables BOSH to work on bare-metal machines. It makes full use of a bare-metal machine without virtualization: it can automate deployment, health monitoring, upgrading, scaling, and cleanup directly on top of bare metal, as if it were a VM. So it provides CI/CD capability for bare-metal machines that wasn't possible in the past.

What are some of the use cases? First, it allows you to run a BOSH release directly on top of bare metal, so it eliminates the virtualization tier. Just take that out; that is one less thing for you to buy, deploy, maintain, scale, and upgrade. And your BOSH release will work directly on top of real hardware, so that should give you additional performance.

The second use case is actually more interesting, because BOSH, in the traditional sense, has worked very well on top of the virtualization tier. But using the RackHD CPI, BOSH can also work on the underlying tier as well. So it actually enables you to create your own software-defined data center. All you have to do is use BOSH with RackHD to deploy SDN (software-defined networking), SDS (software-defined storage), even virtualization tiers such as vSphere or OpenStack. That makes everything very easy: it gives you an easy button to deploy, scale, upgrade, and health-monitor your data center.

So what is RackHD, actually? RackHD is an open-source technology created by EMC {code}. It lets you automate hardware management and orchestration, and you make use of it through its RESTful API. It's a client-server model: you set up a RackHD server in your data center, and it will be able to monitor and work with your bare-metal machines.

That's enough dry talk. Let's look at how this thing actually works. Can you guys see this? Let's say you have a RackHD server set up somewhere, connected over a network to three bare-metal machines. As the bare-metal machines power up, they send out PXE boot requests, and the RackHD server picks those up. At that point, the RackHD server stores those nodes and their metadata in its database. Once that happens, the RackHD server is ready to work with those new nodes. So now a user can come in and say, install Ubuntu on node one, and RackHD will be able to do that for the user.

And where does BOSH come in? It's the same scenario. Let's say the PXE requests are sent out, and the RackHD server has picked them up. A user can now run bosh-init; I'm pretty sure we're all familiar with bosh-init. That sets up a BOSH Director in your infrastructure. Once the BOSH Director is ready, the user can do a bosh upload stemcell, and the stemcell is uploaded to the BOSH Director. After that, the user can do a bosh upload release; in this case, we are using Redis as an example, so the Redis release is uploaded to the BOSH Director. At that point, we are ready for bosh deploy. BOSH is going to talk to RackHD, install the stemcell on one of the nodes, and then deploy whatever release you want onto that node.

And if the machine goes down for any reason, BOSH is going to detect that, because it keeps an agent inside the node that sends an active heartbeat. If the heartbeat goes away, BOSH assumes the node is down, and it provisions a new node for you. This is automatic: it installs the stemcell again and installs the release again. And if you need to upgrade to a second version of Redis, you can just do bosh upload release, and the new version is uploaded to the BOSH Director. Once you do bosh deploy, the new version is installed on the existing node. Awesome.
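Put end to end, the flow we just walked through looks roughly like this with the BOSH CLI. The RackHD endpoint, file names, and addresses below are placeholder assumptions, not the exact ones from my environment.

    # Inspect the nodes RackHD discovered via PXE (RackHD REST API;
    # the exact path varies by API version).
    curl http://rackhd.example.com:8080/api/2.0/nodes

    # Stand up a BOSH Director that talks to RackHD, then point the CLI at it.
    bosh-init deploy bosh-rackhd.yml
    bosh target https://10.0.0.10:25555

    # Upload a stemcell and the Redis release, then deploy.
    bosh upload stemcell bosh-stemcell-rackhd-ubuntu-trusty-go_agent.tgz
    bosh upload release redis-release-1.tgz
    bosh deployment redis.yml
    bosh deploy

    # Upgrading later is the same two steps with a newer release.
    bosh upload release redis-release-2.tgz
    bosh deploy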
So how do we make use of this for Cloud Foundry? There are a couple of options here. The first option, of course, is to run every component of Cloud Foundry on its own bare-metal machine. But that's not a very good idea, because you have many smaller components, such as the Loggregator, the UAA, and the Cloud Controller. If each of those occupies its own machine, they might not utilize the entire machine. So that's probably not a good idea. You could, of course, package all of those jobs onto just one big machine, but then you lose the resource aggregation that you get from a virtual environment. So the best option is to run a hybrid environment, where you run most of the components of Cloud Foundry in a virtual environment, but you run the runtime components on bare metal.

So that's what we have. Imagine the same scenario as before: you have the RackHD server. At this point, you can do a bosh-init to create a vSphere BOSH Director. Using the vSphere BOSH Director, you can then deploy Cloud Foundry, and using the same vSphere BOSH Director, you can deploy a RackHD BOSH Director. The RackHD BOSH Director will be able to make use of RackHD and deploy the runtime onto those bare-metal machines.

Now we should probably do a demo. Unfortunately, all my infrastructure is stuck behind a firewall, so I have to play a video. I apologize for that. Anyway, at the top here you can see that there are two nodes in my environment; both of them have their status as available. Now I'm targeting my vSphere BOSH Director, and I'm going to show you all the VMs running under that Director. This is the RackHD BOSH Director; remember how I said you can use the vSphere BOSH Director to deploy the RackHD BOSH Director? That was that. And then you have a very simple Cloud Foundry deployment and also the runtime.

Now let's continue and clear this up. We are going to target the RackHD BOSH Director now. Can you guys see this, by the way? OK, good. All right, so we are going to show the VMs for the RackHD environment. At this point, there are no VMs at all, because we haven't deployed anything. So now we are ready to deploy something to the bare-metal environment. We are changing the runner instance count from 0 to 1, and we're going to deploy that, just setting the deployment now. Once the deployment is set, we can just do a bosh deploy, and BOSH is smart enough to figure out that we have changed the number of instances from 0 to 1. So now we are deploying onto the bare-metal machine that I have.

If you pay attention to the top, you can see that it's running a reserve-node workflow, which tells RackHD that we are going to make use of this node so no one else can use it. Once that is finished, we're in a provision-node workflow, and you can see that the status has changed to reserved. BOSH gives the node a CID, and now the node is almost ready for BOSH to use. Now BOSH can just install the jobs onto that node, and the runner is fully running on the bare-metal node. Cloud Foundry can actually make use of it and create containers on top of that node.
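In CLI terms, the step we just watched is nothing more than an instance-count change plus a redeploy. Here is a hedged sketch; the manifest file name, Director address, and the exact instances field are assumptions based on what the demo shows.

    # Bump the runner job from 0 to 1 instance in the runtime manifest
    # (assumes the manifest contains a literal "instances: 0" for that job).
    sed -i 's/instances: 0/instances: 1/' cf-runtime.yml

    # Target the RackHD BOSH Director and converge. BOSH runs RackHD's
    # reserve-node and provision-node workflows, then starts the job.
    bosh target https://rackhd-director.example.com:25555
    bosh deployment cf-runtime.yml
    bosh deploy
    bosh vms   # the runner now appears with a CID, backed by real hardware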
And I think very soon we are going to see a cf push. All right, so now we are pushing a demo application into the CF environment, and it's going to be running on top of the bare-metal machine. This is just a very simple application that we wrote: the Dojo snake game. You can find it in our GitHub; it's just written in Ruby. So it's downloading all of these buildpacks, and now it's actually running, which means that we can go to the browser and access that node. Cool, so that is the game, and you can see that it's actually running in a bare-metal environment.

So what is next? In my talk in Santa Clara, I actually stopped here, but I've got something more exciting to show you. Since that talk, there are a lot of new capabilities in Cloud Foundry, so I am going to run a Minecraft container in the bare-metal environment. This Minecraft demo demonstrates a few things. The first is running a Docker container in the bare-metal environment. The second is that it requires TCP routing, because Minecraft is not HTTP; Minecraft uses TCP on a custom port to communicate with its clients. And it also requires persistence, because Minecraft is a stateful application that needs to store its game state in a third party. In this case, that's a network-attached file system, EMC Isilon.

So how is that going to work? Imagine we have the same environment, but with the addition of Isilon. A user can push Minecraft to Cloud Foundry, and Cloud Foundry is going to schedule it onto one of the bare-metal nodes, where it runs as a container. The game state is actually going to be stored in Isilon. That's how it's going to work, and we will see a demo of that.

Can you guys see the fonts? I apologize for the font size; toward the end of the video it will be bigger. Hold on. I don't know what that is. It's off, actually. Yeah, it's off. Got it. Awesome.

All right, so at this point, we're pushing the Minecraft container into CF with no route and no start. How this works is that we first have to push the Docker image to Docker Hub or a registry. At this point, if we look at the list of services, we have two: an Isilon service and a ScaleIO service. Then we can do a cf bind-service to bind Minecraft to the Isilon instance, I mean, the ScaleIO instance in this case. Sorry about that. Then we have to tell the TCP router how to route our application. Notice that it's using port 25565; that is the default port that the Minecraft server uses. At this point, we are ready to start the Minecraft container, so it's going to create a container very soon. So we have things running now. Running, running, running.

All right, now that it's running, I think we should be ready to use our game client. Oh, we are looking at the list of services. Now, if you look at the services, it says that the ScaleIO service is bound to the Minecraft application. And now we can connect with the game client and give it the CF route. That is going to talk directly to the Cloud Foundry deployment, which actually has the container running on a bare-metal machine. When we get into the game, we are able to access the game state that was saved previously in ScaleIO.

Now, I'm going to demonstrate deleting the application. Actually, we'll go into the container first. OK, so if we type ls, we can see that Minecraft is actually pointing to the storage location that is made available by the service broker. And if we go into that directory, we can actually see the game state of Minecraft, which is a bunch of gibberish, of course. But it's a data file.
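Before we tear anything down, here is the whole lifecycle from this half of the demo sketched as cf CLI calls, including the delete and rebind we're about to watch. The app, image, domain, and service instance names are assumptions; 25565 is Minecraft's default port, as mentioned above.

    # Push the Minecraft Docker image without a route and without starting it.
    cf push minecraft -o dojo/minecraft-server --no-route --no-start

    # Bind persistent storage and map a TCP route on Minecraft's port.
    cf services                                   # lists the Isilon and ScaleIO instances
    cf bind-service minecraft scaleio-instance
    cf map-route minecraft tcp.example.com --port 25565
    cf start minecraft
    cf ssh minecraft                              # the world files live on the mounted volume

    # Deleting the app does not delete the world; state lives in the service.
    cf delete minecraft -f
    cf push minecraft -o dojo/minecraft-server --no-route --no-start
    cf bind-service minecraft scaleio-instance    # same instance, same world
    cf map-route minecraft tcp.example.com --port 25565
    cf start minecraft

    # Swapping storage backends is just an unbind, bind, restage.
    cf unbind-service minecraft scaleio-instance
    cf bind-service minecraft isilon-instance
    cf restage minecraft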
So at this point, we should be ready to delete Minecraft from our deployment. So we do a cf delete. Yes, and now Minecraft is gone. Now, if I re-push the same container and attach it to the same ScaleIO instance, the same world state should exist, because the game state actually persists outside of the application. So now we're doing a cf bind-service again to the same instance, and we are re-mapping the TCP route. And now we should be able to start the Minecraft container.

All right, once that's started, we can join the server again, and hopefully the game state should be the same. Which it is! It even stores where I was, you know, the spot I was standing before I deleted the server. So now we can get into Minecraft again and just look at what is stored in the container. All right, if we go into that directory, we should be able to find the game state again. So we do an ls, and eventually we find that gibberish file, which stores the game state. All the game state information is stored as binary, of course.

And now we can actually unbind the service and bind to a different service. So now we're unbinding from ScaleIO, right, and then we can bind the same container to Isilon, which has a different game state and uses a different underlying file system to store the data. If we restage the application, then very soon we'll have a working Minecraft again. So it's running, running, running. All right, cool. So now the container is running again, but using Isilon this time. If we look at the list of services, you can see that the Minecraft application is bound to the Isilon instance. And we can go to the game client again and connect to the world. This time, we should see Isilon in the game state. So that's that.

So now, going back to this slide. Just to recap: the RackHD CPI is an open-source technology. It was a combined effort between the Cloud Foundry Foundation, Pivotal, and EMC. If you'd like to try it, you can find it in the cloudfoundry-incubator repo, under bosh-rackhd-cpi-release. The purpose of the RackHD CPI is to bridge between BOSH and bare metal by using RackHD. It currently supports Ubuntu and CentOS stemcells, and it enables CI/CD with bare-metal machines. And hopefully, one day, it will enable fully automated data centers. So thank you very much, and feel free to follow us on Twitter at Dell EMC Dojo.