So this morning we have with us Eric Jacobs, who's going to give us the second half of the OpenShift v3 training. The first one — someone's got me on echo there — the first one was a developer's workshop. This one is going to be from an operator's perspective. So I'm going to let Eric set up his machine, and you get to watch while he does that and let him do the talking today. So thanks for letting us take up some of your time, Eric.

Yeah, no problem. So for those who may not have run into me or run across me before, my name's Eric Jacobs. I'm the technical marketing manager for OpenShift Enterprise. What that means in simple terms is that I produce a lot of technical content, technical materials, that Red Hat's sales force consumes when they interact with customers and partners and system integrators, etc. One of the things we've done is we've had a high-touch beta program that many of you have participated in, and that high-touch beta program came with some quote-unquote training material that we wrote to help guide users through interacting with OpenShift 3 in its current state. So if you go to github.com/openshift/training, that's where some of this material lives, and the current beta drop is beta 4.

From an architectural perspective, the deployment we were doing with beta 4 was basically a single OpenShift master that is also a node, and then two additional nodes. We set up some stuff with regions and zones, and we're actually going to talk about that today and go through the basic administrative experience. The one asterisk I'll put on this whole thing today is that this is beta 4 code, which came out a couple of weeks ago. Some of the things that are in here will be different for GA; namely, as of beta 4 we do not have an interactive installer. There are a couple of other things that are going to be a little bit different, but this is the general idea of what's going on.

So what we're starting with today is a base Red Hat Enterprise Linux 7.1 system. You'll see there's a note here about Open vSwitch. We create a software-defined network for inter-Docker communication — so containers on different nodes can talk to one another, and they do it over this Open vSwitch-based software-defined network. We've chosen the minimal installation option: one, because that sort of minimizes potential conflicts; two, because it's easy and small. And then specifically, my environment is actually all running on my laptop. It's just a bunch of KVM virtual machines, and you can see that I've got three of those running right now. So I'm going to log in to all of these machines — and Diane, let me know if any of this is too small. I know that I'm moving a little faster than the screen is updating; that's because most of what I'm doing right now is not important.

A little bigger would be helpful. Okay, you got it.

So as far as installation goes, these are just notes that I've taken, pulled out of the training material. There's a routing infrastructure in OpenShift 3 where we still use HAProxy, but we run it inside of a Docker container. We talked about this a little bit during the developer experience session. But one of the things is that, to direct people outside of OpenShift to the routing tier, the easiest thing to do is to create a wildcard DNS entry. So I'm actually running dnsmasq on one of my systems to provide DNS resolution for those wildcards.
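(For reference, the dnsmasq side of that is just a one-line wildcard entry. This is a sketch — the domain is the one from my example environment, and the IP is a placeholder for wherever your router is going to live.)

    # point anything under *.cloudapps.example.com at the master's address,
    # since that's where the router is going to run (the IP is a placeholder)
    echo 'address=/cloudapps.example.com/192.168.133.2' >> /etc/dnsmasq.conf
    systemctl restart dnsmasq

    # any name under the wildcard should now resolve to the master
    dig foo.cloudapps.example.com +short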
So for example, if I dig foo.cloudapps.example.com, the response that we get is the .2 address. This is my master system. What we're going to do when we set up the OpenShift environment is create a special place for the infrastructure of OpenShift to run, so we ensure that the router will always live on this master. That way, anybody who comes in on that wildcard DNS entry is going to hit port 80 on my ose3-master, which is where the router is. I'm going to set up the other nodes so that they use that name server. And then I'm going to grab the current materials and the Ansible-based installer.

So some of you will perk up when you hear "Ansible-based installer." What does that mean? As of right now, we are essentially directly exposing Ansible to perform the installation of OpenShift 3. We will ultimately be wrapping Ansible in a text-based interactive installer, so you really won't know that you're using Ansible, and it won't matter in the long term. It will look very much like oo-install, if any of you are familiar with the v2 installer. But basically the purpose of the installer is to ask the user questions, or look at a config file, and then appropriately set up the OpenShift environment based on the answers to those questions.

So you'll see that I copied some files into the /etc/ansible folder. There's really only one, and it's this Ansible hosts file, so we'll take a look at that real quick. There's a bunch of stuff that's commented out that's totally not important, but a couple of things are interesting. The installer currently expects that it's going to use root, but as you see, there's an option to use sudo instead. The way Ansible works, it will actually SSH into these systems and perform all of the activities, so you need a user on each system that has some level of administrative access to install software, edit files in /etc, and other things like that. The other interesting components are the list of masters, the list of nodes, and then labels and authentication.

For demonstration purposes, we configure OpenShift to use htpasswd authentication. Basically, we're going to have OpenShift look at a specific htpasswd file, and when users try to authenticate to the API, it's going to check their passwords against htpasswd. If they provide the correct credentials, they'll be allowed to do whatever they're allowed to do. On the nodes, you'll see that we apply these node labels of region and zone.

Now, in OpenShift version 2, regions and zones were built into the product. What I mean by that is you had the concept of regions and you could configure them, and you had the concept of zones and you could configure those. But the cool thing about OpenShift 3 is that we don't actually care about your topology. We give you a scheduler that you can completely configure all by yourself. So if we look at the documentation for the scheduler, we see that there's a concept of predicates and a concept of priorities. And to really, really, really oversimplify this: predicates are essentially filters. When a user tries to schedule a workload, the first thing that happens is we filter out any nodes that are not eligible. After that, we try to figure out, of the nodes that are eligible, what's the best fit for this workload, and we do that with priorities. So there's some information about default options and so on.
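(To make that concrete, here's roughly what that /etc/ansible/hosts inventory looks like in my environment, including the region and zone labels we were just talking about. This is a simplified sketch modeled on the beta-4 training repo — the hostnames and label values are placeholders, and the exact variable names may differ between beta drops.)

    # simplified sketch of the inventory the Ansible-based installer reads
    cat > /etc/ansible/hosts <<'EOF'
    [OSEv3:children]
    masters
    nodes

    [OSEv3:vars]
    # the user Ansible SSHes in as; needs admin rights (or switch to sudo)
    ansible_ssh_user=root
    deployment_type=enterprise

    [masters]
    ose3-master.example.com

    [nodes]
    # each node carries a region label and a zone label
    ose3-master.example.com openshift_node_labels="{'region': 'infra',   'zone': 'default'}"
    ose3-node1.example.com  openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
    ose3-node2.example.com  openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
    EOF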
But what we have effectively implemented with the installer is regions and zones. You don't have to use that — you can configure your own customized topology or whatever — but basically the region is our filter, and then the zone is our priority. So, based on the way we configure our scheduler, we're going to do anti-affinity on the zone label; we're going to try to spread instances of a workload across the zones. Hopefully that makes sense to everybody so far. We'll come back to it in a little bit.

So again, Ansible is going to set up authentication for us — it's going to configure OpenShift to authenticate correctly — it's going to install all the software, and it's going to configure the nodes to be members of these regions. We have a primary region, which is where user workloads will live, and we have an infrastructure region, which is where OpenShift components will live. Cool.

All right. So we're going to run the Ansible installer, and that's going to be a lot like watching paint dry. And then what we will do is take some questions, if there are any questions so far. So this is going to run and do its thing in the background, and let me see if there are any questions.

No questions in chat. And I think you've got everybody muted, so just a minute.

So what Ansible is doing right now is ensuring that the correct firewall ports are open. It's installing software on all the systems that may not already have it — for example, installing the Docker daemon, configuring the Docker daemon to work correctly in the OpenShift environment, opening the right firewall ports to enable communication between the systems, et cetera, et cetera. And it's nice enough, or polite enough, to not clobber your existing firewall rules. As an example, I set up dnsmasq on one of my nodes, so I had to open port 53, and since that firewall rule already existed in the iptables configuration, it politely did not clobber it for me.

What it was just doing right there was actually registering the nodes. The installation process essentially involves telling the OpenShift master about the existence of the nodes, which generates a bunch of SSL certificates and other things. Then you put those files — the certificates, etc. — on the node and configure the node to run. And then you start the node software and it starts communicating with the master.

So there is one question now. Sebastian is asking how much in the way of resources is recommended for a test-workload, KVM-based OpenShift. There's an easy way for me to check what I gave these VMs — there's probably a virsh command for it. As far as requirements go: in the documentation, for sort of a small environment, we recommend at least four gigs of RAM and 20 gigs of disk space. The OpenShift software itself really doesn't consume a ton of resources when things are small. But if you think about a very, very large environment with lots of user workloads and lots of builds that have happened and all that other stuff, all of that data is being stored in etcd.

Oops, we seem to have lost your audio. What do I actually have dedicated here? I think I'm doing four gigs as well for my systems. Anywho, the installation process is almost finished. All right, we're done. So how long did that take — two and a half minutes? Three minutes to set up basically a three-node environment with a single master. That's pretty quick. Similar to oo-install.
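(Just to illustrate the filter-versus-best-fit idea, this is roughly what the scheduler policy we configure boils down to — a sketch modeled on the beta documentation; the predicate and priority names and the exact file contents may differ in your drop. The Region predicate is the filter, and the Zone anti-affinity priority does the spreading.)

    {
      "predicates": [
        {"name": "PodFitsResources"},
        {"name": "MatchNodeSelector"},
        {"name": "HostName"},
        {"name": "Region", "argument": {"serviceAffinity": {"labels": ["region"]}}}
      ],
      "priorities": [
        {"name": "LeastRequestedPriority", "weight": 1},
        {"name": "ServiceSpreadingPriority", "weight": 1},
        {"name": "Zone", "weight": 2, "argument": {"serviceAntiAffinity": {"label": "zone"}}}
      ]
    }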
The other thing is that adding nodes — extending the current OpenShift infrastructure — would be as simple as editing this hosts file and adding another entry for node3 or whatever. So it's pretty easy to just add to and extend the infrastructure.

If we look in the master folder, we see a bunch of certificates and keys and other files. Everything in OpenShift 3 is SSL protected. All the communication between all the different components requires SSL, and every component has a unique SSL certificate dedicated to itself. So the router has its own certificate, the Docker registry has its own certificate, the deployer — the thing that's going to actually launch our workloads — has a cert, et cetera, et cetera. All of the master configuration is in this single YAML file. Look at it: there's information about etcd and who's going to access it and all these other things. But the two things I wanted to show are the htpasswd auth and the scheduler. The installer has configured htpasswd authentication for us out of the box. OpenShift's default configuration is "allow any password," which I know sounds completely ridiculous, but if you're just setting up a dev environment, you don't really care about passwords — you just want to get in. So this configures it for us: it tells OpenShift which file to look at, and then it expects that we'll put users and passwords into that file. The other thing the installer did for us was tell OpenShift to use a particular scheduler configuration; it's not using the default built-in configuration. So if we actually look at that file, we see, just like in the documentation, we've defined a region and we've defined a zone. Cool.

So if we ask OpenShift, "hey, OpenShift, tell me about all the nodes that you know about," it's going to tell us. Notice that I didn't log into anything right there. What the installer does is, on the master system, it configures the client tools to use the cluster administrator account by default. So if I cat my OpenShift client config file, we get a bunch of junk, because that's all SSL certificate information, but we see that we're talking to the master and we are the system administrator. Cool beans.

All right, next steps. Well, the first two are boring. I'm going to add two system users to my master system here so that I can use those accounts to interact with OpenShift as if they were developers or users. So that's kind of boring. And then I put their entries into htpasswd, right — password "redhat", super secure. That's it for the boring stuff.

Now, the interesting stuff. I'm going to create an SSL certificate using the built-in tools for OpenShift — a server certificate — and you'll see that this server certificate is for a wildcard. Well, why are we doing that? The OpenShift router is HAProxy again; we basically have a Docker container that runs HAProxy, and it has a little bit of OpenShift software alongside it that talks to the OpenShift master. As routes are created or destroyed — and we'll get into routing — OpenShift learns about their existence and then updates the HAProxy configuration. But HAProxy can do a couple of different things when it comes to SSL termination. We can terminate SSL at the router and then go insecure behind the router, or we can terminate all the way at the back end and have HAProxy just pass everything through. But if we're going to do edge termination, where we terminate at HAProxy, HAProxy needs to provide that SSL certificate.
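(Here's a sketch of those steps — the boring users plus the wildcard certificate we're about to create. This is modeled on the beta-4 training repo; the htpasswd file location and the exact osadm flags and paths may differ in your drop.)

    # the boring part: two local users and their htpasswd entries
    useradd joe
    useradd alice
    touch /etc/openshift/openshift-passwd
    htpasswd -b /etc/openshift/openshift-passwd joe redhat
    htpasswd -b /etc/openshift/openshift-passwd alice redhat

    # the interesting part: a wildcard server certificate for the router,
    # signed by the CA that the installer generated on the master
    CA=/etc/openshift/master
    osadm create-server-cert --signer-cert=$CA/ca.crt --signer-key=$CA/ca.key \
        --signer-serial=$CA/ca.serial.txt --hostnames='*.cloudapps.example.com' \
        --cert=cloudapps.crt --key=cloudapps.key

    # combine the cert, key, and CA into the single PEM file the router wants
    cat cloudapps.crt cloudapps.key $CA/ca.crt > cloudapps.router.pem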
So we're going to create a self-signed SSL certificate for the router to use, so that it presents that to us when we talk to it. The first command generates the key and the cert, and then we combine them into a PEM file. And now what we're going to do is install the router in OpenShift. Currently the installer doesn't do this for us; the plan is for the installer to actually do all of this stuff for you.

But first, if I ask OpenShift what's going on in the — well, let's talk about projects. The default project is the place where OpenShift infrastructure components run. Regular users don't have access to it. If I ask for the status of this project, there's really nothing interesting in here right now. So when I create the router, I'm going to put it into this default project. We created the SSL certificate; now I'm going to install the router. This command, osadm router, will take that certificate we just created. The router needs a configuration for how it's going to talk to the OpenShift API; we're going to tell the router that it should live in this infrastructure region; and then we're going to tell the router the format of the image that it should get launched with. So basically, the Docker image for the router is going to come from registry.access.redhat.com, from the openshift3_beta repository — the ose router image — with a version tag, which is the current version of OpenShift. So I'm going to run this command, and then we'll do a watch osc get pod. What we're going to see is OpenShift fire up a deployer, which then launches the actual router, and eventually the deployer will disappear once its job is done, and we're left with a running instance of the HAProxy router.

And I think this will work — this is experimentation — let's see if we can do an osc exec into the pod. I need the container name... you know, that's too much work.

Can you explain a little bit about registry.access.redhat.com — where that is and what that is?

Yeah. So Docker, right — Docker images come from Docker registries. Red Hat actually maintains its own Docker registry at registry.access.redhat.com. That is the place where all of the Docker images for OpenShift are going to potentially come from. It's also the place — well, I'll extend that. When I say Docker images for OpenShift, certain components of OpenShift are Dockerized, like the router, like the deployer, like the core STI builder. The other thing that's Dockerized is the builder images themselves. So, for example, the Ruby builder: if I look at the Docker images on this system and grep for ruby, we see the registry.access.redhat.com Ruby-on-RHEL builder image, and the same sort of thing for the router image.

So — osc exec. What did I want to do? Right, yes. So now I'm inside the HAProxy router container, and inside this container we have the generated configuration for HAProxy. We don't really have any routes defined in OpenShift right now, so this is a pretty boring file, right?
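(Roughly, the two infrastructure components — the router here, and the registry we'll set up next — get created with commands along these lines. This is a sketch based on the beta-4 training repo: the credentials paths, flags, and image format string may differ in your drop.)

    # install the router into the default project, pinned to the infra region,
    # using the wildcard PEM we just built as its default certificate
    osadm router --create \
        --credentials=/etc/openshift/master/openshift-router.kubeconfig \
        --default-cert=cloudapps.router.pem \
        --selector='region=infra' \
        --images='registry.access.redhat.com/openshift3_beta/ose-${component}:${version}'

    # watch the deployer pod come up, do its job, and leave the router running
    watch osc get pod

    # the registry that comes next: same idea, plus a host directory mounted in
    # so the stored images survive a registry restart
    mkdir -p /mnt/registry
    osadm registry --create \
        --credentials=/etc/openshift/master/openshift-registry.kubeconfig \
        --selector='region=infra' \
        --images='registry.access.redhat.com/openshift3_beta/ose-${component}:${version}' \
        --mount-host=/mnt/registry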
The next step is to install the Docker registry. OpenShift will run its own Docker registry, which is where the images that people — developers — build will get deposited. So the first thing we do is make a folder to hold the file system for this registry. If you think about what the Docker registry does — remember, it's holding Docker images for the users, the images that they build — by default, a Docker container doesn't keep its changes: if you write stuff to its file system and then blow it away and recreate it, all your changes are gone. Well, if the Docker registry is running in a Docker container and it's holding all the images for your environment, you probably don't want to lose all those images just because you restarted the registry or because something happened. So what we're doing is creating a local file system mount point for it. OpenShift also supports things like NFS shares. I don't know if I'll have enough time to go through that today, but it is in the training materials for those who are in the beta, or who just want to look and see how file system volume stuff is done. So we're going to make that folder, we're going to create the registry, and we're going to tell it where to mount that folder. We run this command, we get this deployer, which is going to launch the Docker image that we actually want — our registry — the deployer succeeds, and then eventually the registry starts. All right, that's running. We talked a lot about services in the other session, but basically we can curl the registry to verify that it's actually alive and working, and we should see the registry version come back.

And there is one question now. Yes, sir. When you are installing the router and the registry, is it installing into the default project? Yes. By the way, the reason I got a 404 here is because in beta 4 we put this behind authentication, so this is actually the wrong string to validate that it's working — but trust me when I say that it's working. osc status, right? Again, in the default project we've got our router and we've got our Docker registry. I think the installer will just put them in the project that you're in by default. I'm not sure if you can put them somewhere else, but when it comes to infrastructure components you want them in default anyway, so hopefully it just puts them in default.

All right, so at this point we have a registry and we have a router. From an administrative perspective, most of the basics are kind of done, right? We have a functional OpenShift environment, and we have validated that OpenShift is working, because the registry and the router are both workloads that are running inside of OpenShift. So at this point it's very clear that OpenShift is capable of running Docker images for us. And if we do osc get pod again, we see that the Docker registry is running on — the line wrap is terrible here, but it's running on ose3-master, right? And again, if we look at the nodes, we see that the master is part of this infrastructure region. So when we told OpenShift, "hey dude, please put the router and the registry into the infrastructure region," OpenShift obliged and put them in the correct place for us, right?

A couple more administrative things, and then we'll see where we are on time. So we've done the installation, we've talked about the scheduler, we've talked about installing the router and the registry — projects. Users have projects, and that's where they put all of their stuff. All of their builds, all of their applications, all the things that a user interacts with live in a project. And multiple users can have access to a single project, with administrative privileges or not. We'll try to show a little bit of that real quick. So I'm going to create a project for that Joe guy we created earlier. Create a new session here; I'm logged in as joe, right? So I'm the user joe on the system, but Joe is not logged in to OpenShift. All right, so the first thing we're going to do is create a project for Joe. We've created a project called demo, and if I do osc get project, we see the demo project.
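(The project creation is done as the cluster admin. A sketch, based on the beta-4 training materials — the display name and description are placeholders, and the flags may differ in your drop.)

    # create a project for joe and make him its admin;
    # the optional node selector would pin everything in the project to the primary region
    osadm new-project demo --display-name="OpenShift 3 Demo" \
        --description="This is the first demo project with OpenShift v3" \
        --admin=joe --node-selector='region=primary'

    # and it shows up
    osc get project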
Cool, all right. I can log in at this point from the web UI as Joe, and his username and password should work because we configured them with htpasswd. J-O-E, super-secure password "redhat", hit log in. And look at that — the demo project. Cool.

But look at the settings tab for this project. We notice that there's no quota and there are no resource limits. Quota is the maximum amount of stuff that my project can consume. So Joe might be given a project quota of, say, two gigs of RAM; that means that whatever Joe provisions, he can't at any point use a total of more than two gigs of RAM across everything that he does. Resource limits are individual limits on components: you cannot provision an individual Docker container or image that uses more than 500 megs of RAM, or whatever.

So let's look at the administrative side of that — how do we apply those things? There are a number of different resources; here's an example for the quota. Right now this is done in JSON. There's going to be tooling for this so you don't have to write JSON by hand.

So is the quota by individual or by project? Quota is applied at the project level. So regardless of how many users are interacting with a project, the quota is on the project. A user may be attached to multiple projects, and each project may have its own quota. There's no concept right now — although it's on the roadmap — of an über-quota that is split across multiple projects. So as of right now, quota and resource limits are applied at the project level and apply to everybody and everything inside the project.

The first thing we're going to do is apply a quota to the demo project. Here's the JSON for this quota — I don't think there's a quota command yet? There's not. So we're going to apply this quota to the demo project. What we've said is: okay, you can use a total of 512 megs of memory and a total of 200 millicores of CPU; you can deploy three pods, three services, and three replication controllers. The resource quota count is sort of a funny thing: you're only allowed to have one quota on a project, so we quota the quotas to ensure that you don't accidentally apply more than one. I know that sounds ridiculous, but it actually is an underlying functionality requirement — not a security requirement, but a functionality one. If I refresh this page, we now see that Joe sees the quota. (Resource quotas — I feel like we should probably not show this, but anyway, we've reached our limit of quotas, which is silly, but whatever.) CPU 200, memory 512, and we haven't used any. Cool. All right, so we'll deploy some stuff and show you what it looks like.

The other thing was resource limits. Resource limits set the minimum and maximum for the workloads that we can deploy, right? So if we apply the resource limits to the project, we see that there is a limit applied, and if we refresh this page, you should see our limits. All right. And this default is really important. What this default means is that if you don't tell me how much CPU or how much memory you want something to use, I'm going to enforce these defaults — if you don't tell me anything, I'm just going to do 100.
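(Here's roughly what those two JSON files and the commands to apply them look like — a sketch built from the numbers in this walkthrough. The apiVersion and any values not mentioned above, like the per-container maximums, are illustrative placeholders.)

    # quota for the demo project: caps the project's total consumption
    cat > quota.json <<'EOF'
    {
      "kind": "ResourceQuota",
      "apiVersion": "v1beta3",
      "metadata": { "name": "test-quota" },
      "spec": {
        "hard": {
          "memory": "512Mi",
          "cpu": "200m",
          "pods": "3",
          "services": "3",
          "replicationcontrollers": "3",
          "resourcequotas": "1"
        }
      }
    }
    EOF
    osc create -f quota.json -n demo

    # limit range: per-container minimum, maximum, and default sizes
    cat > limits.json <<'EOF'
    {
      "kind": "LimitRange",
      "apiVersion": "v1beta3",
      "metadata": { "name": "limits" },
      "spec": {
        "limits": [
          {
            "type": "Container",
            "min":     { "cpu": "10m",  "memory": "5Mi" },
            "max":     { "cpu": "500m", "memory": "500Mi" },
            "default": { "cpu": "100m", "memory": "100Mi" }
          }
        ]
      }
    }
    EOF
    osc create -f limits.json -n demo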
All right, so let's actually try to deploy some stuff as Joe and see what happens. I have not actually tried this with limit ranges applied, so if this blows up, we're all in for a surprise. Joe needs access to some of the content — we went through the build stuff in the other session, and you can do a lot of it from the UI. For this particular example, I'm specifying a bunch of stuff by myself and I'm doing it in JSON, because there's not really a way to do this directly in the UI, and frankly, doing what we're doing doesn't really make sense from a UI perspective. So just trust me that in this case dealing with JSON is actually not a bad thing. I've got this complete example, which is a pod, a service, and a route. We explained all of these things in the other session, but basically we're going to deploy this arbitrary Docker image, and we're going to assign it 10 millicores of CPU and 16 megabytes of memory. If we look at our limit ranges, the minimum was 10 millicores of CPU, so we're okay there, and the minimum memory was 5 megs, so we're okay there — we're above the minimum and below the maximum. So OpenShift should not complain when I try to do this stuff. But Joe's still not logged in, so I need to do that.

And where is that registry that you're grabbing that hello-world Docker image from?

Yeah, so in this case, this registry is actually Docker Hub. Because we're not dealing directly with an image stream — again, covered in the other session — and because my systems are open to the internet, Docker Hub is reachable, so the user can reach out to Docker Hub to find any arbitrary image. This could just as easily have been, you know, joes-malware/steal-credit-card-numbers, right? So again, there's some administrative network stuff that you have to do to ensure that, say, your users can't go out to Docker Hub, or only have access to registry.access.redhat.com, or things like that. In this case, we're just deploying an arbitrary Docker image.

So I'm going to log Joe in on the command line. Joe only had one project, demo, so the command-line tool said, well, since you've only got one, I'm going to just put you in there, because that makes sense. Now if I do a status, I see there's nothing in my project. So I'm going to create that test, right — we should see a route, a service, and a pod. osc get pod — this is where I think it's going to explode. Oh — maybe we fixed it. Magic. All right, so my pod is running. If I refresh this page, I should see that I have used some quota. Look, I have used quota: I've used one pod, I've used 16-million-and-whatever bytes of memory — why that's reported in bytes, I don't know; I filed a bug on that a while ago — and 10 millicores. Right. Cool.

Okay. If I curl my workload, since it's running — and there's our answer. Magic, right? And if we look, it landed on node1. Pretty cool. Why did it land on node1? Because when I defined this workload, I told it to select the primary region. That's good so far. We might have enough time to scale and show how that works, but I don't know — we'll see.

So, quota enforcement, right. If we remember, we have a quota limit of three pods. So if I try to create more than three pods, it's going to complain at some point, like, hey man, you can't do that. So it does the first one, it does the second one, it does the third one, and then it says: sorry, you're limited to three. If we go back to the UI and refresh, we will see that we have reached our pod limit, but we're well under our limit of CPU and under our limit of memory.
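(For reference, here's roughly what the pod piece of that example looks like — just the pod, not the service or route — with the explicit resource limits and the region selection. A sketch: the apiVersion and the login syntax are from the beta and may differ.)

    # joe logs in on the command line first (beta syntax; prompts for a password)
    osc login -u joe

    # an arbitrary image from Docker Hub, sized between the project's minimums
    # and maximums, and targeted at the primary region
    cat > hello-pod.json <<'EOF'
    {
      "kind": "Pod",
      "apiVersion": "v1beta3",
      "metadata": {
        "name": "hello-openshift",
        "labels": { "name": "hello-openshift" }
      },
      "spec": {
        "containers": [
          {
            "name": "hello-openshift",
            "image": "openshift/hello-openshift",
            "ports": [ { "containerPort": 8080 } ],
            "resources": {
              "limits": { "cpu": "10m", "memory": "16Mi" }
            }
          }
        ],
        "nodeSelector": { "region": "primary" }
      }
    }
    EOF
    osc create -f hello-pod.json
    osc get pod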
Question — I have a quick one for you; I'm a little hung up on the registry topic today. If an operator wants to restrict people to using just one registry — say they wanted to host a registry themselves, and I know we've talked about that a bit in the past on OpenShift — and make everyone have to use images from that registry, how would they go about doing that?

So currently it's a blacklist as opposed to a whitelist — well, actually, I should say it is a whitelist as opposed to a blacklist. From a networking perspective, you would restrict access to any registries except the ones that you want to allow users to use.

Okay. Does that make sense? It does. Is that done here from osc, or is that, you know, basic network-type stuff?

Okay. So you would block these systems — for example, if you didn't want to allow people to get images from Docker Hub, you would make sure that the OpenShift systems, the nodes, have no access to Docker Hub. Right now I can ping hub.docker.com; if I wanted to prohibit access to Docker Hub, I would just ensure that the systems don't have access to it, right? And for most shops — I would say most security-conscious shops — you either have proxy configurations already set up or your systems aren't accessible to the internet at all anyway, so those types of things would be the norm. The other thing is that, if I look at the images that are already on this server, administrators can pre-populate quote-unquote known-acceptable, good images on the various servers in their environment using standard Docker tools, and effectively the system will try to run whatever's already there first before it reaches out somewhere else to find something. And if there was a quote-unquote Docker Hub image that people were supposed to be allowed to use, like hello-openshift, and I wanted to give people access to it, I would "install" it on all of the systems in the OpenShift environment and then tell people, "hey, here's a list of approved images that you can use that might come from somewhere else; they've already been installed across the OpenShift environment." And therefore, if you define a workload that references them, it will just run, because it's already there.

There are a couple more questions now in the chat room, one from Anand and one from Nicholas. How do you debug when build or deploy pods fail — what tools are available? Well, there's no one specific tool. Okay, so for example, all of this stuff is just Docker images that are running. Let's say that a deployment failed. I can do osc logs on a specific pod in my project, and this will barf out all of the logs from there. But for whatever reason, let's say the thing was already deleted, right? Well, when you stop a Docker container, information about it hangs around for a while. So this pod, or whatever — is that still running? Let's find it... here we go, the deployer. Okay, so this one is exited, right — long, long gone, doesn't exist in OpenShift anymore — but I can ask Docker for the logs of this guy and say, oh, what if this had failed, what can I find out about it? Then there's Docker itself: Docker is a systemd service, so I can look in the Docker log to see when things happened, whether they were good or bad or whatever. I know this is hard to read because of the line wraps, but here's an example where Docker was looking for information about something. So we can look in the Docker logs. And the OpenShift node, when it tries to deploy a workload, has its own log as well.
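(Concretely, the toolbox looks something like this — a sketch; the pod and container names are placeholders, and the systemd unit names are from the beta and may differ in your version.)

    # if the pod still exists, start with its logs
    osc logs hello-openshift

    # if it's gone, the exited container usually still is -- ask Docker
    docker ps -a | grep deploy
    docker logs <container-id>        # placeholder: use an ID from `docker ps -a`

    # the daemons are systemd services, so journald has their logs
    journalctl -u docker              # the Docker daemon's log
    journalctl -u openshift-node      # the node's log (unit name may differ)
    journalctl -u openshift-master    # on the master: scheduling failures, etc.

    # `describe` often explains a pod that's stuck in Pending
    osc describe pod hello-openshift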
So I would say the hierarchy of troubleshooting would be: if the pod is still around, look at the pod logs. If the pod is gone, try to find the Docker container it was running in and look at that container's log. If that doesn't make any sense, look at Docker's own log, and look at the OpenShift node's log to see whether, say, the node never got told to launch the thing, or something else weird happened. Or maybe look at the master: "oh, we tried to schedule this workload, but we couldn't, because you asked for more memory than was left in the environment." There's also describe, which will give you a little more information than the status. So if I describe the registry, it's going to show me a bunch of messages related to the last few things that happened. Sometimes when a pod shows as pending, if you describe it you might see, oh, it's waiting for a Docker image to get pulled, or whatever. So I wouldn't say there are specific tools other than the OpenShift command line; everything else is just basic Linux-type tooling — looking at logs and using grep and sed and things like that.

The other question was: are users able to create their own projects? Yes. The default configuration is that a user can create a project. Back at the top, as the user Joe, there's this create button. I can hit create and do just that — "my project," "like, I'm awesome" — and then I hit create, and now I'm in my project. A default quota for all projects is something that's coming for GA; it didn't make beta 4. So, as an example, Joe just created a project with no quota, and in theory he could deploy all the things and consume the entire OpenShift environment. Obviously that's a problem, and we know that has to get fixed for GA, right?

What else — other questions? So, administration functions. Did you have something? No, I was just going to say we are getting close to the end of the hour, so if you wanted to, show the GitHub page with all the training materials on it when you close.

Oh yeah. So again, github.com/openshift/training — the current incarnation is beta 4. I know a lot of you are in the high-touch beta. Most of this stuff should work against Origin directly; Origin is quite a bit newer than we are, so some of the JSON syntaxes and command-line syntaxes are still kind of in flux until we hit GA. So if you run into an issue, you might have to take it up with the Origin GitHub as opposed to our training repo, assuming you're using Origin and not the beta code.

We talked about projects... There are a couple more questions now popping in. Awesome. Judd is asking what plug-in capabilities there are for granular identity and permission systems — like creating the projects and quotas you just showed. So there are two things to talk about there: authn, which is just basic authentication, and then authz, which is authorization. I'm trying to find the example... okay, project administration. Oh, that's in here; it doesn't have its own section. Okay, project administration. So as an example, in the project administration section we take Joe's demo project and we make Alice an administrator, right?
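(A rough sketch of what that project-administration step looks like on the command line — the command shape is from the beta training materials and may differ in your drop; in some drops the same thing lives under osadm policy instead.)

    # as joe, the project's admin, grant alice the admin role on the demo project
    osc project demo
    osc policy add-role-to-user admin alice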
So let's go back and talk about authn and authz. Authn — authentication: we support a couple of providers out of the box, and you can write your own auth plugins. We support the same stuff that we supported in OpenShift 2, which was using Apache as an authentication proxy, which allows you to consume the whole world of Apache authentication modules — Active Directory, third-party identity-management-system plugins, et cetera, et cetera. And we provide documentation in the beta: there's an example of using LDAP auth with an Apache proxy in front of OpenShift. Now, authz — authorization: I am Joe, and this is my LDAP group; what am I allowed to do? Much like v2, the expectation is that you will define what it means to be a member of a group, and then we'll provide the tools for you to enforce that — configuring those users with the policies that you want. It's a little tricky because of the whole project thing: until there's a project for a user, how do you apply policy to the user that is relative to the project, and so on. But we'll basically provide tooling similar to v2 to allow you to configure authorization — to configure a user to have the right level of access and quota and so on, based on the information provided in your enterprise directory. Hopefully that answers that question.

The next question was about what Red Hat recommends for backing Docker image storage. The default device-mapper, file-backed storage for Docker has fairly awful performance, especially in crappy laptop-based POC environments. So we had a couple of people come to us and say, "oh, you know, I was using VirtualBox and I set up the OpenShift beta and it took forever to do my build." Well, yeah — because you're doing a file-backed Docker image store on top of a file system that's running in an emulated virtualization environment, and yada yada yada. So ultimately a thin pool device, which uses LVM under the covers, is the preferred, performant, supported mechanism for backing Docker images, and in the documentation we provide the instructions for how to set up the thin pool device, which basically just requires an empty LV for it to use.

Matt Mariani asks me to describe the differences between RC1 of Origin and the beta. I have no idea what RC1 of Origin is, so I'm not able to describe the differences between the two. I would assume that RC1 is closer to Origin's GA release, but I don't really know what that means. I would think they're pretty close-ish right now. We're probably about a week and a half behind whatever Origin is doing with Enterprise, and as we get to GA those differences will start to diverge — much like how we do JBoss, where the community sort of starts to speed away, with more frequent releases and less concern about breaking APIs, whereas OpenShift 3 is probably going to have the same two-to-three-year lifespan as OpenShift 2, where we have to guarantee API stability and all of that lovely enterprise stuff that you pay for. Yes. Cool.

I think that brings us to a little bit of closure. So again, if you can throw up the GitHub page again, we'll have that as the closing slide so people can find it really quickly. Yeah — and the other thing is that this is public, right? This GitHub is a public repo, available to anybody with internet access. We love pull requests from customers and partners and the community, etc. We've had a number of beta customers submit pull requests and file issues against things that didn't work, or typos, or things that weren't clear. So by all means — I'm not mean. I wrote most of this stuff with the help of the enterprise engineering team, and we're more than happy to fix our mistakes and to provide more information.
We're going to try to keep this up to date through GA, so once the product goes live we'll actually transition it from "beta" to just "setup." And we want to keep it relatively up to date to help people who are doing evals, people who are doing POCs, customers, our consultants, and so on. So this is going to be a living, breathing document; as we add features and finish finalizing features, we'll be updating it as well. One thing that's not in here yet is how to configure the routing tier to be HA using our keepalived implementation and the tools built into OpenShift — I'm going to be adding that at some point this week. So stay tuned here; this is definitely going to be an ongoing source of good information. And then of course Red Hat will additionally provide GLS training classes as GA gets closer. They're working on that right now; they'll start with the administration course, and then there will be a developer course, much like we did with v2.

All right. Sounds like there might be an HA briefing in our future, too. Yeah, absolutely — but that'll probably be post-Red Hat Summit, once we all decompress from that.

So thank you very much, Eric. If any of you are coming to the Red Hat Summit in two weeks' time, please come and find us at the OpenShift booth. We'll have a big area set up for the community, so we'd love to meet you all face to face. And again, thanks very much, Eric, for taking the time out today to make this happen. All right, guys and gals. Thanks. Bye-bye.