Yes, please. Yes? Hi, everyone. We now have Thomas Cameron presenting on OpenShift Operators. Hello, everybody. My name is Thomas Cameron. I am a Senior Principal Cloud Engineer at Red Hat, and I'm going to cover about four days' worth of training in 35 minutes. So buckle up, buttercup. My contact info is up there: I'm thomas@redhat.com. Yes, I've been there that long. And you can follow me on Twitter at @thomasdcameron. We're going to cover quite a bit of stuff. I'm going to talk about setting up the machines for OpenShift. I'm doing this in an RHV environment; I'm assuming you're in an enterprise environment that has something like RHV or VMware, using Satellite. But everything I talk about here you can do manually; I'm just showing you how to do it a little more easily in an enterprise environment. So we'll talk about Satellite and RHV: creating a template, the build, some add-on software, storing the template, using the template, installing packages, and using Ansible playbooks. And then we'll get into the configuration and installation. So, very briefly: when you're setting up the environment to install your OpenShift nodes, I set up content views inside of Satellite. And this kind of got me. Be aware that if you're using Satellite, when you're syncing the channels for the Ansible components that are required for this, they don't appear under the operating system branch. You actually have to go over to the other tab, scroll down, and enable Ansible Engine 2.4. And it needs to be 2.4, per the installation docs for OpenShift Container Platform 3.10. The way this looks is, you create the content view and make sure you've got all the components in the content view. I did one content view for the operating system, so all the bells and whistles for the OS.
And then I did another OpenShift content view where I added Ansible Engine, Fast Datapath, and OpenShift Container Platform, and then I created a composite content view with both of those, so I had all of that content available to the machines I'm installing. I created an activation key so that when I register the systems to the Satellite, everything just works. So I created the activation key and added in the repositories that were part of that, so it's super easy. Again, I don't expect you to pick all of this up because we're moving quickly, but I'm just talking about the software channels you need to make available in an enterprise environment to make this work. This is what it winds up looking like when I go through and set up my repository sets. Notice that I had to manually override almost all of them: by default, when you add those repositories to your activation key, they're not turned on. So I overrode that in the activation key, and that way, when you register the systems, they have access to all the software repositories you need. From an RHV perspective, I built the OS, set the optimization, set the name. Again, I'm going to move through this very quickly, but I created a template first because I'm using a virtualized environment. I don't want to kickstart a whole bunch of machines, so I just create a template, and this is what that looks like: I build it, I set the operating system, I set the optimization for server, give it a name, I set up the memory amounts. Actually, let me go back one. The other thing I did was set up two disks for storage. That was a requirement from a previous version of OpenShift, so you can ignore that second disk if you want. I did that for 3.7, and then when I was doing 3.10 I realized, oh, they take care of storage for us now, so I don't have to do that anymore. Yes!
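If you are not using Satellite activation keys, the same repository set can be enabled by hand with subscription-manager. This is a hedged sketch: the repo IDs below are the ones the OCP 3.10 install docs list, and the script only prints the commands you would run, since they require an entitled, registered RHEL 7 host.

```shell
# Sketch: print the subscription-manager commands that enable the repos
# this talk syncs through Satellite (repo IDs per the OCP 3.10 docs).
# Run the printed commands for real only on an entitled RHEL 7 system.
repos="
rhel-7-server-rpms
rhel-7-server-extras-rpms
rhel-7-server-ose-3.10-rpms
rhel-7-server-ansible-2.4-rpms
rhel-7-fast-datapath-rpms
"
for repo in $repos; do
    echo "subscription-manager repos --enable=${repo}"
done > enable-repos.sh
cat enable-repos.sh
```

Note the Ansible 2.4 repo in the list; that is the channel hiding under the "other" tab in Satellite that the talk calls out.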
Alright, so: set memory. I did eight gigs of memory, but you can change that once you're building from the template. I built the machine. I didn't partition that storage drive because, again, we take care of that now. Make sure that you install the katello-ca-consumer RPM, which you install off of the Satellite server itself, so the system can be registered with Satellite. Then I installed the OS and common packages needed: I registered the machine to the Satellite server, made sure it was registered correctly, and installed the necessary software to interact with the Satellite server so I could install all my packages. I also installed the packages for the RHV agents so the systems would show up under RHV; rhevm-guest-agent-common installs that. Install the packages that are recommended in the installation docs: wget, git, net-tools, and so on. You have to install all of those. Now, I'm lazy, I cheat, and I do a yum groupinstall base so I have ifconfig and all the old-school Unix stuff, because I'm old. So that stuff gets installed. I updated the machine and rebooted as per the instructions: yum -y update, then reboot. And then I installed openshift-ansible. This is a change from previous versions; it used to be the atomic installer. openshift-ansible drags in a bunch of other packages, and then you need to install Docker as well: just yum -y install docker. So you get those packages installed. Then what I do is remove all the unique host information from this VM: MAC address, network UUID, SSH server keys, and so on. I edited the ifcfg file and took out the UUID, because that's unique and this is going to be a template. I also deleted all the SSH host keys, unregistered the machine from the Satellite server, and shut it down: subscription-manager unregister, then init 0. Down it goes. Create a template from that.
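The scrub step can be sketched in a few lines. This demo works on a scratch copy of a typical RHEL 7 ifcfg file (the real one lives under /etc/sysconfig/network-scripts/), and the MAC and UUID values are made up for illustration:

```shell
# Demonstrate the template scrub on a scratch copy of an ifcfg file.
# Values are illustrative; on a real template, edit the file in place.
cfg=ifcfg-eth0.demo
cat > "$cfg" <<'EOF'
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
HWADDR=52:54:00:AA:BB:CC
UUID=f2b4a1d0-0000-0000-0000-000000000000
EOF

# Drop the per-host identifiers so clones generate their own.
sed -e '/^HWADDR=/d' -e '/^UUID=/d' "$cfg" > "${cfg}.scrubbed"
cat "${cfg}.scrubbed"

# On the real template, before shutdown, you would also run:
#   rm -f /etc/ssh/ssh_host_*
#   subscription-manager unregister
#   init 0
```

The commented tail mirrors the talk's final steps: delete the SSH host keys, unregister, power off.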
I'm not going to go into the details super deeply, because it's going to depend on your virtualization environment, but I just created the template. In this case I used qcow2 so it could be spun up quickly. In a production environment, use raw: qcow2 is copy-on-write, so your initial writes to the disks are going to be slow, and that's going to make your OpenShift environment slow. So I made the template. You can see that it's locked while I'm creating the template, and then it goes from locked to available, and we're in good shape. That's what the template looks like. You can then delete the original VM if you want; delete it from the Satellite server and also from Red Hat Virtualization, because we don't need that original, we just need the template. So I go in there, delete it, and it's good to go. Then I create the new VMs. Again, I'm going to go through this quickly. The one thing you do want to do: the master node needs 16 gigs, and remember, we created the template with 8. So I just go in and customize it a little: give it the name, give it the memory, change that from 8 gigs to 16 gigs. The machine is locked until it gets created, and then lather, rinse, repeat for the other nodes. So, boom: you create ose2 through ose5. I'm doing a five-node cluster. So now we've got the machines installed; now we're going to get them up and running. Now, here's kind of a gotcha. I like to use DHCP for all of my network stuff, so what I had to do was go get the MAC addresses from these machines and add them into my dhcpd.conf file. Then you have to create DNS, and I'll talk more about DNS in a little while, but if you've installed OpenShift you know you've got to have this zone that's specific to your OpenShift environment, this child zone. My zone in my home office is tc.redhat.com.
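The DHCP reservations described above look something like this. This is a hypothetical dhcpd.conf fragment; the MAC addresses and IPs are made up, and the hostnames follow this talk's naming:

```text
# Hypothetical dhcpd.conf host stanzas: one fixed-address reservation
# per cluster node, keyed on the MAC address pulled from RHV.
host ose1 {
    hardware ethernet 52:54:00:aa:bb:01;
    fixed-address 192.168.1.41;
    option host-name "ose1.openshift.tc.redhat.com";
}
host ose2 {
    hardware ethernet 52:54:00:aa:bb:02;
    fixed-address 192.168.1.42;
    option host-name "ose2.openshift.tc.redhat.com";
}
# ...and so on for ose3 through ose5.
```

The point of fixed reservations is that the nodes keep stable addresses, which the DNS records in the next step depend on.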
I created the OpenShift zone (not really a subdomain, but I gave it names), and I'll talk more about the cloudapps piece in a minute; that's where your applications are going to live. So reboot them. They come back up, and when they do, they've got the right IP addresses, so we're in good shape. Oh, and the hostnames resolve correctly now; before, they were just host one, host two, host three. So that stuff resolves. You have to set up passwordless SSH so the machines can log into each other. It's just ssh-keygen and then copy the keys to the other machines. Like I said, in DNS you need to set up the DNS wildcard for the subdomain. In this case I did cloudapps.openshift.tc.redhat.com; that's where all the apps are going to live. That's what that looks like: I've got the hosts that are going to be in the cluster, and the applications I spin up will live in the cloudapps.openshift.tc.redhat.com zone, and they all resolve back. This is a gotcha: when you do this, make sure that the host you're pointing at with that wildcard is one of the nodes that's actually going to be serving up content. The first time I did this, I didn't read the docs right, and I thought, oh, it needs to go to the management node. That is not correct. You want it to point at one of the worker nodes. You want to make sure that DNS is working in both forward and reverse. This will bite you. I promise, you want to make sure DNS is working forward and reverse. And you want to check the wildcard. So I did host foo and got that .48 address, then host bar and got the same thing. You want to make sure that wildcard is working: anything you look up in that zone should come back with that address. Firewalling: I'm going to skip over this, because this is also old information. The new installer for 3.10 just works; it gets all the firewall rules set up. So I'm going to blast past this. Let's see. Yeah. There are a few steps involved.
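The zone layout described above can be sketched as a BIND zone fragment. This is hypothetical: the IPs are invented, and the key detail from the talk is that the wildcard points at a node that will actually run the router, not at the master:

```text
; Hypothetical fragment of the openshift.tc.redhat.com zone.
; IPs are made up for illustration.
ose1    IN  A   192.168.1.41   ; master -- NOT the wildcard target
ose2    IN  A   192.168.1.42
ose3    IN  A   192.168.1.43
ose4    IN  A   192.168.1.44
ose5    IN  A   192.168.1.45

; Every app name under cloudapps resolves to a content-serving node.
*.cloudapps     IN  A   192.168.1.48
```

With this in place, any lookup like foo.cloudapps.openshift.tc.redhat.com returns the .48 address, which is the behavior the talk checks with the host command. Matching PTR records cover the "forward and reverse" check.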
Now, I was lazy because I was doing this in a lab environment, and I just opened up all the ports from 1 through 65535 for my local subnet. Don't do that. That was being lazy. Hey, what can I say? So then you're going to re-register these hosts to the Satellite. Again, use that activation key that you used earlier. Boom, boom, boom: a quick for loop over the hosts to register them, or use an Ansible playbook for it. They're all subscribed, they show up correctly, the hostname is correct; everything's good there. Install nfs-utils so that if you're using NFS-backed storage, you can actually access it. So I installed nfs-utils across all the machines. Here's where we get into the intricacy of the installation: the Ansible hosts file, the inventory. There's an example at access.redhat.com under the OpenShift documentation. This was, I will not lie, a little bit challenging to get set up correctly, because it's just a generic example, and there are some things in it that frankly just don't work right. So you literally copy and paste one of the examples. I used the one where there's a single master and single etcd, both running on the same machine, and multiple nodes. I copied that over to my local machine, into /etc/ansible/hosts; you can put it in another location, but that's the default. Verify that the deployment type is OpenShift Enterprise if you're doing Enterprise, or Origin if you're doing Origin. And then you've got to define the etcd server, the master, the workers, and the infrastructure nodes. The way that looks is, you define what your master node is, what the etcd node is, and then all of the worker nodes. Now, in older versions of OpenShift, the master node was not schedulable; we couldn't run jobs on it. Be aware that in newer versions, 3.9 and 3.10 (I think it's 3.9 and 3.10), it is schedulable.
So your master is no longer kind of a wasted node just doing management stuff; it can actually serve out content. Now, if you don't uncomment the identity provider line, the default behavior is that no one can log in on the web UI. We did that intentionally. We want you to make a conscious decision about what type of authentication you're going to use. So I just uncommented the line so that it uses htpasswd auth. You can integrate it with all kinds of authentication back ends; just be aware. The oreg_url line: comment that out, because it points at a bogus location. If you comment it out, it just uses the default, and it goes and grabs content from us. So comment that line out. That one killed me; I fought with it for like an hour, trying to figure out why my OpenShift node wasn't able to come up. So just be aware of that. Then you run the playbook. I actually typed ansible-playbook -i, but since I put the inventory in the default location, /etc/ansible/hosts, I didn't really have to. That -i syntax is important if you put your hosts file, the config file that defines how the hosts are going to be set up, somewhere else. You run the prerequisites.yml playbook first (again, this is all in the documentation); let that run, and it completes successfully. And then you run the actual deploy_cluster.yml playbook. These take a while, and it's going to take longer if you have multiple nodes in your cluster. Mine was a five-node cluster running on some big honkin' ProLiant machines with a ton of memory, very fast CPUs, and super-fast storage, and it still took like half an hour. So it takes a while. It'll run through, and hopefully, if the OpenShift gods are smiling, everything's green. It took me a few tries, because I had to figure out some of the syntax changes in that hosts file. But when it gets done, you should be able to set up authentication and log in to the console.
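The inventory pieces described above can be sketched as a minimal single-master file. This is a hedged example, not the full documented inventory: the hostnames are from this talk's environment, and the options shown are the ones the talk calls out (deployment type, identity provider, the node group names 3.10 uses). Consult the 3.10 example inventory for the complete set:

```ini
# /etc/ansible/hosts -- minimal single-master/single-etcd sketch.
# Hostnames from this talk; see the OCP 3.10 docs for all options.
[OSEv3:children]
masters
etcd
nodes

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise
# Uncomment an identity provider, or nobody can log in to the web UI:
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
# Leave oreg_url commented out so the default registry is used.

[masters]
ose1.openshift.tc.redhat.com

[etcd]
ose1.openshift.tc.redhat.com

[nodes]
ose1.openshift.tc.redhat.com openshift_node_group_name='node-config-master'
ose2.openshift.tc.redhat.com openshift_node_group_name='node-config-infra'
ose3.openshift.tc.redhat.com openshift_node_group_name='node-config-compute'
ose4.openshift.tc.redhat.com openshift_node_group_name='node-config-compute'
ose5.openshift.tc.redhat.com openshift_node_group_name='node-config-compute'
```

With this in place, the two playbook runs from the talk are, roughly, ansible-playbook on prerequisites.yml and then on deploy_cluster.yml from the openshift-ansible package's playbooks directory.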
So what you can do is verify, in the /etc/origin/master/master-config.yaml file, that the identity provider section is set up for htpasswd auth, and figure out where the file is that you're going to use for your authentication. /etc/origin/master/htpasswd is the default location based on that example file. So now you can create the user. You have to have the httpd-tools package installed to get the htpasswd command. You run htpasswd -c to create the htpasswd file with a new user; the username is tcameron. It'll prompt you for the password, and once you're done with that, you can take a look at the htpasswd file and you'll see the password in there in a hashed format. So now you can log in with the web UI and test that the system is up and running. You're going to connect to the machine at https:// followed by your URL, and you'll get the standard pop-up that says, hey, this connection is not private; be aware of that. You're just going to accept it, get logged in, and use that username and password you created earlier on the command line: tcameron and the password. And now you've got the web UI. You can start creating applications, so life is good there. The system came up the way we expected. You also want to test from the command line that you can run commands logged in on the console. You can do this if you have the oc commands installed on a workstation, like at your desk or something. You can use oc login, and it'll create your config file: oc login and then the URL, if you want to do it that way. In this case I was on the master node, so I just did oc login -u tcameron; it asked me for my password and said, you don't have any projects. It's a simple environment.
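The htpasswd command comes from the httpd-tools RPM. As a hedged aside, if it isn't handy, openssl can generate the same Apache MD5 (apr1) hashes that htpasswd produces by default. The username, password, salt, and file name below are illustrative; the real file in the talk is /etc/origin/master/htpasswd:

```shell
# Illustrative stand-in for: htpasswd -c /etc/origin/master/htpasswd tcameron
# openssl's -apr1 mode emits the same $apr1$ hash format htpasswd uses.
hash=$(openssl passwd -apr1 -salt Xr4z8qQ1 redhat123)
printf 'tcameron:%s\n' "$hash" > htpasswd.demo
cat htpasswd.demo
```

The resulting line has the user:hash shape you'll see when you look at the real htpasswd file, with the hash marked by the $apr1$ prefix.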
You can create a new project or a new application from the command line, but you can also look at the config file, and you'll see it's got all kinds of information in it: authentication information and stuff like that. You also want to log in as admin. This is kind of important, because until you log in from the command line as admin, the account doesn't get created on the back end, and I'll show you what that looks like in a little while. This only works from a node, so on the master I did oc login -u system:admin. I'm using a system role as administrator, and it creates my user account and gives me access. And you'll notice that once I'm logged in as administrator, I can now see all of the projects, which are all the services that are running to help manage the environment. So I can see all my Kubernetes services that are running, logging, the OpenShift service itself, the Ansible service broker infrastructure, node services, and so on. These are all actually containers running in the environment, managing all of the services handled by the OpenShift cluster. Now I can do something like oc status once I'm logged in as administrator, and this gives me a cluster-wide status. You'll notice it points over to the management node, ose1.openshift and so on, and it tells you about all of the services that are running. I mean, it's page after page after page; you get a lot of good information about what's running in the environment. I can do oc get nodes just to see what the nodes in the environment are doing, and this is actually really helpful: if you see a status like NotReady, you can start digging into logging and looking at oc status on the node to try to figure out what's going on there. And I actually didn't even catch that one of mine wasn't ready; I've got to go look at that.
I literally finished these slides like ten minutes ago. So, in fine Red Hat form, right? It's a new version of the software; I got up at like four o'clock this morning, running through the labs, trying to make sure this stuff is all correct. Then you can do oc get pods to see what pods are running in your environment and what they're doing. You've got the registry, the console, the router services; you can see what their status is, whether they're working, whether they've had to restart, anything like that. And then, for a lot more information than you probably ever wanted, you can do oc describe all and pipe that to less or something like that. I mean, it is page after page after page. As an operator, this is actually really handy, because you can dig down and look at all of the service descriptions; you can see if there's a status issue, if something isn't ready. You can page through this and see everything that's going on on pretty much every node in your environment. So now, now that it's all up and running and you've logged in and tested that your connectivity is there, you can actually create an application. Now, I'm an operator; I come from a long background. I was a Novell sysadmin back in the early '90s. OK, there's somebody out here as old as me. So I was a Novell sysadmin. I went to work for Microsoft back in '94, because that was kind of the new kid on the block, so I was an MCSE after that. I've been doing this for a long time. But my point here is, I went through this whole career of administration and operations; I am not a developer. So when I was submitting presentations, I was like, I want something for OpenShift that's targeted at me and people like me. So I'm going to talk about building the applications, but honestly, man, me building an application is silly, because you don't want me writing code. It's just not good. Not at all. But here's what the UI looks like once you
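The verification commands from the walkthrough above, collected in one place. They are printed to a file here rather than executed, since each one needs a live cluster and a logged-in user; the names are from this talk's environment:

```shell
# Cheat sheet of the cluster-verification commands from the talk.
# Written to a file rather than run, since they need a live cluster.
cat <<'EOF' > verify-cluster.txt
oc login -u system:admin
oc status
oc get nodes
oc get pods
oc describe all | less
EOF
cat verify-cluster.txt
```

Run them on a master node, top to bottom: the system:admin login first, since the admin view is what makes the management projects visible.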
get logged in. You do get a UI, and this is one of the things I love about OpenShift: from the factory, we've got a ton of options for platforms, for languages, for application services, and so on. So you can go in, drill down, and start building applications. In this case I decided, OK, I can do Apache, right? I can do a simple Apache server; that's not a problem. So I go in, I define what I want my Apache server to look like, I give it a name, you know, my first Apache project. I don't really need to fill a whole lot of the other stuff out, because depending on what you selected, we pre-populate where it's going to grab the content for the Apache container; the Git URL is in there. Now, here's something I have run into in the past: I will verify the application hostname. Whatever I put up above, tc-apache or whatever, make sure that you put the domain name for your environment. If you let it auto-populate, I've seen cases in the past where it auto-populates with some name that doesn't actually resolve in DNS. I don't know exactly why that is. So verify that your application hostname is resolvable, or at least that it comes back to that wildcard cloudapps record, remember, that I talked about earlier, that we set up in DNS. So: some name that shows up under cloudapps.openshift.tc.redhat.com, or whatever your domain name is. And so you click on Create, it takes a few minutes, and you can click on Continue to the project overview, and then you can go and watch the process where it's downloading the software from GitHub and so on and so forth. So here's what that looks like. You can watch the builds, and if you expand this out, you can actually see what the URL is that it created. It'll tell you that the build is pending, the Apache web server is pending, and as you watch it for a while and the screen refreshes, you should actually see
where it's going and grabbing stuff out of GitHub, so you can watch the progress there. It's pretty cool, and from an operations perspective that's actually really helpful, because if it can't connect or something like that, you should see that information as well. So now, once it's all completed: see how, over here, the pod is kind of grayed out? Once your application is up, once your container is up, that pod turns solid blue, and now you know your container is up and running. So you can look at the URL right there, open it up in a new web browser, and there's your application. At this point you would grab the content, clone it, make changes, push it out there, and put your application into production. Not me, though, because I'm a terrible developer. And at this point you're done: you've jumped through all the hoops, you've got all this stuff set up, and you can start allowing developers to access the containers. As I said, you know, because we've got 35 minutes (and oh man, I actually finished way early), I wanted to move really, really quickly, because these are short sessions. The key points that I really want to make are: when you're setting this stuff up in an enterprise environment, you really want to make your life as an operator as easy as possible, because you're usually going to be using enterprise virtualized environments, not, like, a KVM instance on your laptop. It is important, and you guys will get this slide deck later on, but it is really important that you pay attention to things like your templating, and things like your software distribution through Satellite or through RHN or whatever. Make sure that stuff is nailed down up front. As I've gone through all the different iterations of OpenShift,
we do change things from release to release. Like I said, I was up in the speakers' room like ten minutes before this, cursing because I couldn't get some stuff to work. Now, it turned out that it was DNS issues, and mostly on me, not the software. But the big thing I want you to take away from this is: get your fundamentals nailed down from an operations perspective. Make sure you've got a good template, make sure you've got good access, and make sure you update your systems so that when you're building your environments out, they're secure. Then, as an operator, your life is going to be so much easier. Just out of curiosity, because I did finish a lot earlier than I thought: how many folks work in enterprise environments and are dealing with OpenShift? OK. And then raise your hand if you're using virtualized environments, like VMware or RHV or something like that. OK, cool. And then raise your hand if you're using Enterprise Linux underneath OpenShift or something like that. OK, cool. All right, excellent. So really, that's it. I mean, that's a ton of information in a ridiculously short amount of time, but I think what I'll do now is just open it up to any questions. That was faster than I intended; sorry. Yes, sir? So, my question goes beyond what you presented, into the developer's use of OpenShift. I know with OpenShift it's sort of a PaaS experience; you push your code directly to OpenShift as a Git endpoint. The way we're used to building software is, we build it, we generate an artifact, and then we promote that artifact across environments. How do I create a similar workflow with OpenShift? That's a great question for a developer. In all seriousness: I don't know if you saw, when I was going through the screen that had all the pre-canned applications, we actually have the ability to set up an entire Jenkins infrastructure, so
you can do a workflow. You can point it at an existing Git repository and kick off automated build processes, or you can point it at upstream GitHub and start build processes that way. So the whole concept of having that workflow pipeline, the development pipeline: we've built that into OpenShift with the expectation that yes, you're going to spin up Jenkins or whatever your favorite CI/CD environment is. You can set that stuff up, and your developers are going to know better what your dev and QA and UAT cycle looks like, whatever that is. But yeah, you can absolutely do exactly what you described, either behind the firewall on a private Git repository, or out on GitHub or wherever. Does that answer your question? Whoo, I answered the developer question; are you proud of me? Actually, it's really more operations, but, you know. What else? Yes, ma'am? What exactly is the licensing on OpenShift? It's a little confusing; when I look at it, it says something about the Apache license. Yep. It is just like every other product that Red Hat releases: it is an open source license. We don't sell licenses for software; we sell subscriptions, and those subscriptions cover open source licenses, from the Apache Software Foundation license to various versions of the GPL, and so on. So when you purchase a subscription for OpenShift, you're not paying a license for the software. You're paying for the support, for access to the documentation, for the updates, the bug fixing, all the engineering that we do on the back end. And here's the other cool thing: you can absolutely go and download the upstream OpenShift and run it in your environment. It's pretty similar, almost identical to what we've got, depending on how far ahead the community is versus our commercial product. But you don't get support, you don't get consulting services. Basically, what we're doing is wrapping up support, hardening,
certification with third parties, et cetera, et cetera, and that's what you're paying for when you buy the subscription, not the software itself. So if you download it and put it on your own server, you would get the community updates if you wanted? Exactly. How often does Red Hat incorporate community updates? What are we at, about a six-month release cycle right now, Dan? About six months? Three months? Three months. Oh, gosh. OK. Yeah, which is why I can't keep up. So we will take from upstream, and we're behind upstream, because what we do is take the upstream release, code-freeze it at a certain point in time, apply a bunch of bug fixes to it, certify it with third parties, generate documentation in multiple languages, get our consultants trained, and so on; there's a ton of stuff around it. So yeah, the upstream is actually awesome, but it's kind of Wild West, and I will be the first one to admit it. I've been doing this since 1993; I'm not an idiot, I think, and man, there are times where I struggle, and it's silly stuff. You know, I messed up my DNS config and it blew up, and I'm pulling my hair out trying to figure stuff out. OpenShift is not a trivial product. There are a lot of concepts you need to understand. We talked about everything from storage back ends, to doing updates, to how to build your cluster, to what the various roles are, and so on. I mean, there's a lot involved in setting it up. So with upstream, man, it's a lot of fun, but it can be challenging. Any other questions? I know I'm the last person between you and lunch; everyone's like, no, shut up. All right. Was this helpful from an operations standpoint? Good, because everything else here seems to be developer-focused, and I feel like the last man standing. Hey, guys, thank you so much for coming. On behalf of Red Hat, we appreciate you being here. On behalf of DevConf, we appreciate you coming. You guys have a great day.
Before we head out, hang on: just a couple of announcements.