Today we were talking about building homes, so I lost track of time. All right, well, welcome to Deploying OpenShift on Dell's OpenStack Cloud Reference Architecture. My name is Grant Shipley. I'm a manager of OpenShift at Red Hat. I'm going to start the presentation with a little bit of an overview of OpenShift and then turn it over to Judd, who works at Dell, and he'll walk you through how they're actually doing it. But first, why is OpenShift important? We have been living in a unique time over the last 10 to 20 years, where the world is completely changing. This is a picture from 2005. I believe it was the Pope making a visit to a group of people, and they all came out to view this event. And there's one little, I think that's probably a Motorola flip phone, in the crowd. So this was pretty much what we saw just 10 years ago. In 2013, the Pope made the same visit, and this is how the audience had changed. You guys have probably seen this. If you go to concerts, or even to your kids' soccer games, people live through the lens of their phones now. They record everything. So we're going through this digital disruption where traditional enterprises are being completely disrupted by these small players in this internet age we've been living in for the last 10 to 20 years. You've probably seen this on Twitter, but just look at how the world has changed recently. The largest taxi service company in the world doesn't actually own any taxis: that's Uber. The world's largest media company doesn't create or own any content: that's Facebook. The world's most valuable retailer, Alibaba, doesn't actually have any inventory; they just let people sell on their site. The largest accommodation provider, Airbnb, doesn't actually own any real estate.
And so these traditional, huge companies are being disrupted. I saw a presentation earlier today where taxi fares in New York City have dropped 60% just in the last couple of years. The price of a medallion in San Francisco has dropped too. I think that may have been in the keynote this morning, is that where we saw that? So the internet, and this fast speed to market from small companies, is disrupting everything. Anyone can compete in this age with cloud computing. But out of all of this disruption, clarity emerges. You have to be able to compete, even if you're a large enterprise, with the small companies. That's kind of odd, right? As a large enterprise, you have to be able to compete with 50 to 100 person companies. So how do you do that? When you're a slow-moving enterprise, you need to get some speed under your belt, stop the paper-pushing all day, and be able to deliver faster. That's what we're seeing with cloud computing. And the trend over the last six months is that containers are going to solve all of this, right? We've all heard about Docker, Docker, Docker. We've got to move everything to Docker. Well, Docker is just a single container format; there's also Rocket. But just having a container-based deployment is not enough, because you need to orchestrate where those containers are actually deployed and living. What we see at Red Hat is people developing their own container orchestration systems and deciding where to place these containers inside their data centers. And their data center infrastructure kind of looks like this at the end of the day: checkbox, I've deployed all my containers, but they're not really in a state where they can be managed by the operations team. As this continues, their infrastructure begins to tip over, because they can't maintain these containers anymore.
And so with OpenShift, we work closely with Google on the Kubernetes project so we can deliver these containers in a more streamlined fashion and orchestrate them across the nodes in your environment. At the end of the day, your data center will look more like this: a very, very sane environment, easily maintainable. So we've talked about containers and the need for some type of enterprise-class orchestration system that'll help you determine where those containers live, but that's still not enough. If you just have these containers out there, they're probably going to end up looking like this. You haven't achieved anything different than we did with virtualization 20 years ago. On the operations side, and you guys are probably more on the sysadmin side, you give these virtual machines out to developers, but you don't know if they're actually being used. They sit around forever and ever, and you have to try to reclaim them at some point. So sure, you can deploy containers out with Docker and Kubernetes, but what we want to do is make them useful. We want developers to be able to take these containers that are deployed to the nodes in your infrastructure and do something great with them. And here's my personal favorite, just because it's a bar made out of a container. So what do we do with OpenShift on top of just Docker and Kubernetes? It's the whole experience for the developers and sysadmins, the value-add we bring on top of that. What we're here today for is to talk about the deployment of OpenShift v3 on Dell's reference OpenStack architecture. And that was made possible through a project we have called OpenShift Commons. Has anyone heard of OpenShift Commons? A few people; probably the people raising their hands belong to it.
What OpenShift Commons is, is a true open source community where all people who use or operate OpenShift PaaS environments can get together and collaborate. We don't charge to join. The only ask, if you want to join OpenShift Commons, is that you have an interest in OpenShift, want to work and collaborate with other companies on the OpenShift platform, and are doing interesting things with deployments. We released OpenShift Commons, I think, about two months ago. Today we have just over 75 organizations that have joined, and we add roughly one new company a day; that has not slowed down since we released it. They're anywhere from startups to large companies: Accenture and Dell have both joined. And again, it's a true open community. We don't charge for it. There's no contributor license agreement or anything like that. There is an LOI, a letter of intent, that you have to sign as a company, which just means that you are interested in OpenShift and that you will work to collaborate with other people. As part of the OpenShift Commons project, we hold briefings every couple of weeks where operators of OpenShift can get together and discuss best practices. They're generally in a webinar-type format where you have someone presenting, and then people asking and answering questions. What's interesting with OpenShift Commons is that sometimes we have our own operations team give briefings. OpenShift Online, which is our publicly hosted OpenShift, has, what's the number of apps? Almost two and a half million apps have been created on it. We do about a billion requests per day on OpenShift Online. And that's the same code that is running in OpenShift Enterprise and OpenShift Origin, which is what we're going to be looking at today. All right, any questions about OpenShift Commons or about OpenShift? Okay.
So just to recap, what we're going to look at today is the next version of OpenShift, OpenShift v3. That's where we've re-platformed from our existing Linux containers, which were based on SELinux, Linux control groups, and pam_namespace, over to supporting Docker containers. We have also created a new utility called STI, or source-to-image. Because we believe, as developers, and that's what I am, that developers want to take advantage of Docker containers and get them deployed to OpenStack, but they don't necessarily want to know the ins and outs of how to create containers. They don't really want to know how to write Dockerfiles and specify things. They want to write software and deploy it. So what we allow with source-to-image is that you can take an existing Git repository, written in basically any language, and deploy it to OpenShift. We will take the source code, build a Docker image under the covers, and then orchestrate that out with Kubernetes. We will also allow the running and provisioning of any existing Docker image out there. So we'll be fully Docker API compatible, ready to go, but it is just a container API. The most important thing for us with OpenShift is that you're never locked in to any vendor. We don't have any proprietary hooks or anything like that. So if the industry starts moving more towards Rocket or something like that, our container API will allow us to support that as well. But we're going to come out of the gate with Docker support. So when is all this coming? It'll be GA'd next month. If you want to count down to it, you can go to OpenShift.com; we have a little counter on the bottom showing when it'll be GA'd. If you are in the Commons community, you have access to it early. It's also fully open source, so you can run this today without being a member of the community. The open source project is called Origin, and you can get to it at openshift.github.io.
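To make the source-to-image idea concrete, here is a small, hypothetical Python sketch of the flow Grant describes: detect the repo's language from a marker file, pick a builder image, and synthesize the container build recipe so the developer never writes a Dockerfile. This is not the real STI tool; the builder image names and script paths are invented for illustration.

```python
# Hypothetical sketch of the source-to-image idea. All image names and
# paths below are illustrative assumptions, not the real STI implementation.

BUILDER_IMAGES = {
    "requirements.txt": "openshift/python-builder",  # assumed image names
    "Gemfile": "openshift/ruby-builder",
    "pom.xml": "openshift/jboss-builder",
    "package.json": "openshift/nodejs-builder",
}

def detect_builder(repo_files):
    """Return the builder image for the first recognized marker file."""
    for marker, image in BUILDER_IMAGES.items():
        if marker in repo_files:
            return image
    raise ValueError("no recognized language marker in repo")

def synthesize_dockerfile(repo_url, repo_files):
    """Emit a Dockerfile-like recipe the developer never has to write."""
    builder = detect_builder(repo_files)
    return "\n".join([
        f"FROM {builder}",
        f"# source fetched from {repo_url} by the build controller",
        "COPY . /opt/app-root/src",
        "RUN /usr/libexec/assemble",  # builder-specific build step (assumed path)
        "CMD /usr/libexec/run",
    ])

print(synthesize_dockerfile("https://example.com/app.git", ["Gemfile", "app.rb"]))
```

The point of the sketch is the division of labor: the developer pushes a Git repo, and the platform owns the container-build details end to end.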
All right, so let's switch over to Judd. I promised him I would take 15 minutes and I've taken 10, so he's probably going to get angry with me, which means more time for Q&A. I do want to thank Judd for coming out and talking about OpenShift today on their reference architecture. Everybody cool, and can I see the next slide? How many folks have done an OpenStack deployment? Okay, half the room, a little more than half the room, cool. How many have done it and ordered all the right gear the first time? All the right switches, your top-of-rack switches, your, yeah. Well, we put in a lot of effort over the past year working with Red Hat to create a reference architecture that will meet most of the use cases we've seen in the field, in order to get you up and running really quickly with just enough automation and just enough flexibility, so you can really take OpenStack out of the gate and start doing interesting things with it. My team at Dell, fortunately, we're a hardware provider, so we've got lots of gear and we get to play around with a lot of it. I'll be getting into some of the specifics of the gear that we've chosen and been testing, the gear that I've been deploying OpenShift on top of, on top of OpenStack, the challenges we face, and some of the conceptual transformations that have to happen from the normal notion of tenant and project within OpenStack to this new level of multi-tenancy we get with containerization, and a full sort of DevOps mindset where developers are acting on or interacting with shared systems. So first, what's a PaaS? In OpenStack land, most of us are very comfortable with infrastructure as a service; our typical friends Nova, Cinder, Glance, and Neutron are all providing infrastructure services. What are these platform services?
Platform services are basically developer tools that hide away all the rest of the infrastructure and allow you to deploy code, update code, and create a continuous integration pipeline to build your apps; to bring in all the dependencies; to make sure the operating system, the networks, and the storage are all configured correctly to serve your application; and then to interact with the networking gear that can detect what level of utilization your entire infrastructure is running at and deploy more resources to match the load you're under. And even redeploy resources if the load on the load balancer has gone down. Especially if you have a sales spike; folks who run retail establishments will often have their noontime sales where you'll want a lot of capacity all at once, and you want to be able to scale that back. OpenShift on its own can do that on your bare metal. But we here at the OpenStack conference love the operational efficiencies of having virtual machines. It's so much easier to move workloads around, so much easier to divide your customers or your different departments into separate virtual machines and attach them to new storage without rolling a crash cart or getting too many people involved, if suddenly your image hosting service is blowing out of the water and you've got too many images or too many orders. We love the virtualized platform that infrastructure as a service gives us. And integrating infrastructure as a service with platform as a service is really the key next step to achieving the levels of automation and flexibility that take operators out of the line of fire and reduce the incidence of downtime or degraded service. On to our reference architecture; I love showing pictures of gear. We are recommending our R630 servers for the admin host, the controller nodes, and the compute nodes. These are just packed with CPU and tons and tons of RAM.
I don't have the specs right here, we can get into it, and two bonded interfaces; our entire reference architecture is fully HA. For storage, our reference architecture specifies Ceph for object and block storage, so that would be your beginning storage array. You could also choose, and we will show you how to use, Dell's EqualLogic storage arrays in addition, if you need larger and larger storage capacity. We also specify in great detail which networking gear we're going to be using. For top-of-rack switches, a pair of these 10 gigabit switches with 40 gigabit uplinks, covering the compute networks and the storage networks. And just one at the bottom, one Dell S55, for all of your control backplane, that is, all your IPMI traffic. That's definitely stuff you want to keep off of anybody's potential attack vectors; IPMI is notoriously vulnerable to those sorts of attacks. We also want to make sure those interfaces are available so we can bring services on and off line. So we spec one Dell S55 to attach to all the BMC IPMI interfaces, and with our tools you can start and stop gear, bring gear on and off line. If you are a very bursty service, one of the things I really like to do, and my buddies at the Rackin company are doing this, is to shut down servers in your infrastructure automatically based on load, to save wattage, to save money, to save heating and cooling, and be ready to burst when you need it. This is what the rack would typically look like. Our solution admin host is where everything else gets kicked off, where all of OpenStack gets deployed from, and that's one R630. It also houses, if you know Red Hat's OSP offering, Foreman, an essential part of deploying this, which drives the Puppet modules that deploy OpenStack. Three controller nodes, three compute nodes, and three storage nodes. Here we start to get into the details of the complexities of a typical OpenStack deployment.
After a few slides of this, we'll get into how laying OpenShift on top of this creates a little more complexity, but the wins are very large. Of these five networks at the bottom, plus the third and fourth up on top, the only one your customers would really care about is that middle green one, the internal network for tenants. The provisioning LAN, that ugly color green at the top of the stack, that's your S55 switch, allowing the solution admin host to switch gear on and off as necessary. Any questions so far? No, we've got bonds on a few of these, especially the storage networks. So you'll have bonded interfaces; we're doing active-backup bonds right now. You can switch them to full bonds. They will pick up a good deal of the traffic. I'm focusing here a little bit on the complexity of each of these. We specify which NICs you can get, and we will set them in the proper order, so you can really just turn on your gear and start letting the solution admin host install OpenStack for you. But then we put OpenShift on top of OpenStack. So you've deployed, let's say, a whole bunch of virtual machines. You have one tenant, one customer. We're going to be deploying, say, our friends in retail: departments with five or six applications on OpenShift. They've been writing on OpenShift, they're pretty comfortable with it, or it's in a testing environment. What are the parts of OpenShift? At the very top, the user experience, that's primarily Git: checking code in and out. When you're working with OpenShift, you're cloning your code repository into OpenShift. I would even imagine, and I've got a slide about this later on, that you have your own Git code repo in one virtual machine, perhaps on a separate network, and you're able to check out of that and actually do a Git push into OpenShift. And OpenShift triggers a build through your test network, potentially pulling down more containers from the Docker Hub and the OpenShift marketplace.
Kubernetes is a joint project with Google, an open source project that decides, similarly to the Nova scheduler, when and where the containers will be deployed. Docker provides those containers and the standard image interface for them. And the container host, which I'm not going to go into now, but you can use Fedora or RHEL. I haven't tried to deploy them on Debian, which I started with; I'm kind of converted now to RHEL and Fedora. Again, these are the names of the great products. I spend most of my life in Git, and I just love pushing code directly to Git. Here is an architectural diagram of how OpenShift works, as if it's all in one virtual machine. These two larger blue boxes can be split into separate virtual machines if the load requires it, or if the isolation required for your multi-tenancy demands it. On the top left are the OpenShift command line tools. Say you want to create a new application: creating a new application involves creating the Git repo, which will hold the code, and indicating which runtime you'll be using, whether it's Ruby, or even Perl, or PHP. And Red Hat is going to great lengths to make JBoss really a first-class citizen, and to make all the great JBoss middleware that Red Hat's been working on for the past 10 or 15 years available and deployable automatically in your enterprise. Alongside these command line tools there's also a web GUI; I wish I could show you, but my VPN's not working. The OpenShift API server, the build controller, and the deployment controller are the parts you would interact with the most as a developer, where you check in new code. It will automatically build; you could switch it to build on a cycle; you can bring in Jenkins hosts to do complete continuous integration. Then the deployment controller, once tests pass, can go ahead and manage deployments of code over to the right side.
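The "when and where" decision that both the Kubernetes scheduler and the Nova scheduler make can be sketched as a toy placement function: put the container on the minion with the most free capacity that can still fit it. This is purely illustrative; the real schedulers use much richer predicates and priorities.

```python
# Toy scheduler in the spirit of the Kubernetes (or Nova) scheduler.
# Capacity is modeled as abstract units; real schedulers consider CPU,
# memory, ports, affinity, and more.

def schedule(container_size, minions):
    """minions: dict of name -> free capacity units. Returns the chosen
    minion name, or None if nothing fits."""
    candidates = {n: free for n, free in minions.items()
                  if free >= container_size}
    if not candidates:
        # Nowhere to place it. In the integration Judd describes, this is
        # the point where you would ask Nova to boot another minion VM.
        return None
    # Pick the minion with the most headroom (spread-style placement).
    name = max(candidates, key=candidates.get)
    minions[name] -= container_size
    return name
```

For example, with `{"minion-1": 4, "minion-2": 8}`, a size-3 container lands on `minion-2`, and a size-100 container returns `None`, which is exactly the "out of predefined minions" case discussed later in the Q&A.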
Below the line on the left, in the Kubernetes master, are all the scheduling and Kubernetes parts that manage the system's knowledge of where your containers are. Over on the right would be, in most cases, a separate virtual machine, or very many virtual machines, running all of these containers. There's the kubelet, down on the lower left of the right side. The kubelet is the client for Kubernetes that takes commands and sends information back to Kubernetes, in order to make intelligent decisions about the scalability requirements and network configurations necessary to run all your Docker containers. Kubernetes organizes containers into, I've got to zoom in on this, into pods. One operating system or virtual machine running Kubernetes is called a minion. Kubernetes breaks containers down into pods, which lets you create associations, say, between a web server, an app server, and a database server. You could have those three containers all affiliated, associated with each other, in a pod, and replicate those pods across minions. You could also have different types of pods for different types of workloads. Folks familiar with etcd? It's a highly, highly reliable key-value store, which gives Kubernetes the overview of what's going on. All of these components are constantly checking their data into the local etcd, and that information is replicated throughout the cluster. This is my planned, and the Red Hat recommended, layout of virtual machines. Over on the left, we'd have a separate virtual machine or two, depending on your number of customers or tenants, to hold your code. Then you'd be checking that code into OpenShift, where we'd hit a Jenkins pipeline to ensure it passes all the tests and compiles correctly. Then, down on the lower left, an app execution test network and framework, so the QA folks can go at it if you need to, if you haven't automated all of that. All checking into the center one, the Kubernetes registry and etcd.
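The pod and replication ideas above can be sketched in a few lines of Python: a pod groups co-located containers (web + app + db), replication spreads N copies across minions, and the desired state is recorded in a flat key-value store in the spirit of etcd. This is a conceptual illustration, not real Kubernetes code; the names and round-robin placement are assumptions.

```python
# Illustrative sketch of pods, minions, and etcd-style bookkeeping.
# A pod is a co-located group of containers; replication places copies
# across minions round-robin and records them under /pods/ keys.

POD_TEMPLATE = ("web-server", "app-server", "db-server")

def replicate_pod(pod_name, replicas, minions, store):
    """Assign each replica to a minion round-robin; record placements in
    the (etcd-like) store. Returns {key: minion} for the new replicas."""
    placements = {}
    for i in range(replicas):
        minion = minions[i % len(minions)]
        key = f"/pods/{pod_name}-{i}"
        store[key] = {"minion": minion, "containers": POD_TEMPLATE}
        placements[key] = minion
    return placements

store = {}
where = replicate_pod("retail-frontend", 3, ["minion-a", "minion-b"], store)
```

With two minions and three replicas, `minion-a` gets two copies and `minion-b` gets one; the store is the single source of truth the rest of the cluster watches.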
Then when you're ready to go to production, when you're ready to deploy, it's just bringing those very same containers into production, perhaps doing a load balancer drain or something like that, to allow you to run your Java or Ruby applications off of databases that are also containerized. But the big question now is: what is a tenant? If OpenShift has its own user space, its own namespace of users, and OpenStack has its own namespace of users, where do we need to mesh them together? This is the software project that we at Dell are also doing in the open: to bring this fantastic PaaS onto an infrastructure as a service and allow, say, a tenant to request more virtual machines based on the activity within OpenShift. The complexity starts growing very quickly as soon as you start getting a little creative. The first point of integration is clearly Neutron. Configuring VLANs to keep the traffic separate between different containers is some heavy lifting. OpenShift is built to talk to any sort of SDN product. It comes with a basic SDN, which is really Open vSwitch, but the APIs they've set up will really allow calling out to any SDN product to reconfigure the network. So integrating the Neutron API into the OpenShift calls is what I'm working on right now; I have some details about that. In our typical OpenStack networks, we have a provisioning network, a management network, and a private OpenStack API network, so Nova can talk to Nova and Keystone and everybody. A typical storage network to get to your Ceph or your Swift or your Cinder volumes. And then, most importantly, highlighted there, your tenant networks. You'll have more than one tenant network, because you have many tenants, and maybe they want back-end, private networks between their virtual machines, and then they want public floating IP or flat network spaces to let their VMs get to the internet.
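The tenant-isolation problem described above boils down to giving every tenant network its own VLAN ID so one customer's traffic never rides over another's. Here is a toy allocator that captures the bookkeeping; a real deployment would drive this through the Neutron API (or use VXLAN VNIs, as discussed later), and the VLAN range here is an arbitrary assumption.

```python
# Toy per-tenant VLAN allocator. Illustrative only: real integration
# would call Neutron to create the networks; this just models the
# exhaustible ID pool and the tenant -> VLAN mapping.

class VlanPool:
    def __init__(self, first=100, last=199):
        self.free = list(range(first, last + 1))
        self.by_tenant = {}

    def network_for(self, tenant):
        """Return the tenant's VLAN ID, allocating one on first use."""
        if tenant not in self.by_tenant:
            if not self.free:
                raise RuntimeError("VLAN pool exhausted")
            self.by_tenant[tenant] = self.free.pop(0)
        return self.by_tenant[tenant]

    def release(self, tenant):
        """Return a departing tenant's VLAN to the pool."""
        self.free.append(self.by_tenant.pop(tenant))
```

The exhaustion case is the interesting one: 802.1Q VLAN IDs top out around 4094, which is one reason the talk points toward VXLAN for dense container multi-tenancy.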
But if you have multiple different tenants in OpenShift all working on the same VM, because you can draw more value out of each processor by having several separate tenants on the same virtual machine, we have a real namespace issue here and a networking issue here. OpenShift has a fairly simple network, just two basic networks: an orchestration network for Kubernetes to do its work, and the load balancing access network, since there's always a load balancer involved with OpenShift, and new client requests that hit the load balancer are then split off into a private network where all the requests are served. How those mesh together is really the challenge we're starting to face now. The auto-scaling requirements would really force us to create and remove VLANs, and to ensure our VLANs are being created appropriately so our customers' traffic doesn't ride over each other. I see this only as possible with VXLANs and more work in SELinux and iptables. I'm working on that right now, and I'd love for you to join me; you can find me on the OpenShift mailing list. Another aspect of OpenShift requiring integration, in order for this PaaS to really fully take off, is the identity problem. Within OpenShift you have a variety of different types of identities: the overall administrator, project administrators, project users. Which of those are allowed to request more services from the various OpenStack services? The Nova scheduler: when will we be launching virtual machines? At what rate, based on what CPU availability, on which nodes? While this is laid out, and there are good quotas in OpenShift and good quotas in OpenStack, how are we going to merge those two sets of values together? Fortunately, I'm very glad this talk is early in the conference, because I want to hear from you as I start planning all this out.
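One simple answer to the quota-merging question raised above: for each resource both systems know about, the effective limit is the stricter of the two. The sketch below is a hypothetical starting point, not a proposed design, and the resource names are invented.

```python
# Merging OpenShift quotas with OpenStack quotas, sketched as:
# take the minimum for resources both systems track, keep the rest.

def merge_quotas(openshift_quota, openstack_quota):
    """Return the effective quota: min() where keys overlap, union otherwise."""
    merged = dict(openshift_quota)
    for resource, limit in openstack_quota.items():
        if resource in merged:
            merged[resource] = min(merged[resource], limit)
        else:
            merged[resource] = limit
    return merged
```

So a tenant allowed 16 cores in OpenShift but only 8 in OpenStack is effectively capped at 8; anything less conservative risks OpenShift scheduling work the infrastructure will refuse.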
The third, well, the fourth point of integration after Neutron is Heat. When you know you have a customer coming in, if you're a large organization or a service provider and you know a new department is coming on board bringing a large application, you'll probably have a bunch of Heat templates to lay out all the networking, all the storage, and all the virtual machines, instead of having to do it by hand. The integration with that, with OpenShift, ensuring we can then get OpenShift installed on these virtual machines, is key. That's all I've got for now. I've done a whole bunch of work behind, unfortunately, the Dell firewall. And everybody's probably suffering with the Wi-Fi here now, trying to get to their own home networks; I'm completely kicked off of mine. So I'm really curious what questions or ideas folks have about how to begin integrating these. Thank you. There's a mic somewhere. There's a mic stand. So just for clarification, you are talking about containers on VMs, not containers on bare metal? Not bare metal. Okay, but you are then talking about multi-tenant VMs, potentially? Yes. Okay, I understand that's a challenge. I'm not going to say anything else at this point. And there are going to be a lot of talks this week about Docker on bare metal, OpenStack running Docker containers. But I think we lose a lot of the operational efficiencies and the gifts that virtualization brought us. So can you please turn back to page 17? It's a previous page. So why did you replace, on page 17, the normal scheduler with Kubernetes? No, no, no, I need the Kubernetes scheduler to be able to interact with the Nova scheduler in order to launch virtual machines on demand. Not replace them, but have them integrated with each other. So just as we look at SDN, software-defined networks, and on-demand network creation by integrating OpenShift and Kubernetes with Neutron,
we need to also integrate OpenShift and Kubernetes with the Nova scheduler in order to request more resources. Okay, so nova boot will still create VMs for Kubernetes minions, or whatever? I'm sorry? So the command nova boot will still create Kubernetes minions, right? I mean, the command nova boot will create a VM, and the VM will be a Kubernetes minion or a Kubernetes master. Depending on need; right now, for proof of concept, I would create not a Kubernetes master, but a Kubernetes minion, which would be for more application capacity. You're out of frame. I'll put you on the mic. All right, let me answer that as well. The difference between the two is that in OpenShift, when we talk about scaling up containers or these pods, say you have a pod with a web server, two app runtimes, and a database. When we scale that up, we are not aware of the underlying infrastructure. So when you run out of predefined Kubernetes minions, OpenShift by itself today cannot spin up additional minions, because we don't have access to the underlying infrastructure. What Judd's talking about is integrating these two together, so OpenShift can make a request, with the correct authorization, to spin up additional VMs inside of OpenStack, and then lay down more Kubernetes minions to deploy the apps out and scale them. Okay, thank you. So, last question. Did you use Heat to deploy the whole Kubernetes cluster? I'm sorry? Did you use Heat to deploy the whole Kubernetes cluster? Did I use what? Heat? No, no, no. Okay. Then why not use that for your scaling, instead of doing it manually? Right now I'm in proof of concept, and with Heat, sure, you could create a new Heat manifest and deploy that manifest again. I'd be concerned about overwriting existing virtual machines and getting networks wrong. But I'm not against using Heat at all. I just prefer a much tighter integration between the two, to get the faster response.
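The scale-up case discussed in this exchange can be reduced to one question: when scaling a pod would exceed the capacity of the existing minions, how many new minion VMs must Nova boot? The sketch below only computes that number; it is an illustration of the decision, not a real Nova call, and the unit-based capacity model is an assumption.

```python
# Sketch of the OpenShift -> Nova scale-up decision: given extra pods to
# place, compute how many new minion VMs would be needed. Capacities are
# abstract units for illustration.

def minions_needed(extra_pods, pod_size, minion_usage, minion_capacity):
    """minion_usage: list of units already used on each existing minion.
    Returns the number of new minion VMs Nova would have to boot."""
    free_slots = sum((minion_capacity - used) // pod_size
                     for used in minion_usage)
    if free_slots >= extra_pods:
        return 0  # existing minions can absorb the scale-up
    shortfall = extra_pods - free_slots
    per_new_minion = minion_capacity // pod_size
    # Ceiling division: each new minion holds per_new_minion pods.
    return -(-shortfall // per_new_minion)
```

A result of zero means Kubernetes can schedule onto what it already has; anything greater is the request that, with the correct authorization, would flow through to the Nova scheduler.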
The scheduler: I'm relying on the Nova scheduler to understand what's going on on the gear much better than Heat knows what's going on on the gear. Yeah, I don't want to start deploying 50 new virtual machines to a server that really only has a couple of cycles to spare. Any other questions? I have one more. Cool. At this point, it's Docker only. But what are the plans, and when can we expect, not to have a whole stack of Docker stuff that's not container specific? I mean, Docker has a lot of things associated with it, the marketplace, all that stuff that most people just don't want. They want containers. At what point will something like Rocket be integrated? I don't know; that's for the Red Hat guys, we're just Dell. Yeah, so I can't actually give you a truthful answer right now, simply because I don't know. Are we working towards it actively? Yeah, we realize that people are going to want more container formats than just Docker. But for this release, we've spent, seriously, the last year and a half just getting the Docker stuff brought up to where it needs to be to deploy in the enterprise. Once we iterate on that, then we'll look at other containers. I guess one of my concerns with Docker is the same concern Rocket has: it's scope creep and all this other stuff, where you want a container, but I'm seeing three VMs of marketplace, blah, blah, blah, all this infrastructure that you only need if you're going to basically drink the Docker Kool-Aid. If you just want containers, we've taken something like LXC and wrapped so much stuff around it that you can't use it by itself anymore; something that was becoming a beautiful system is now encumbered by this massive thing called Docker. Yeah, our plan is to fully support more things than Docker. But let me say, I'm not selling any of the marketplaces or the Docker Hub.
The registry is really important, even if you're only registering your own containers, because that way you reduce sprawl and confusion about container versioning. And without a container versioning system, well, you know. Yeah, I mean, most enterprises will not want to do a Docker pull from Docker Hub. They will not want to, they can't legally, everything else, right? There are a million reasons you don't put your stuff out in the outside world. By doing that, you're basically saying, here's my image, you manage it for me, and 90% of enterprises will say hell no. Yes, we're actually doing a lot of work around container certification. So if you're a Red Hat customer, we will provide these for you so you can trust them, and you call us for support. We're working on all of that; you can use Satellite as well. I especially see the real value in the Java middleware, where there's Java middleware everywhere, and it finally can be reined in through containers and a really good registry. Thanks a lot, everybody, and I'll be at, like, all the Docker talks. Thank you. Thank you.