Hello and welcome, everybody, to yet another OpenShift Commons briefing. This time I'm really pleased to have this reference architecture being talked about. I've talked about it a lot: running OpenShift on OpenStack on Dell, with Judd Maltin, who's been a long-time OpenShift fan and a good friend of the OpenStack community. He's worked on lots of stuff with us, from Heat templates to unbelievably strange things, and made them all work for us, and Judd's been doing a lot of work making sure that OpenShift runs on OpenStack and on the Dell release. So I'm going to let Judd introduce himself. He's going to talk for 20 or 30 minutes and demo some stuff. You can ask questions in the chat and we will try to answer them via chat, but after he's done we're going to have a Q&A and take more chat questions, and I'll also open up the microphone so we can have a conversation as well. So without further ado, I'm going to let Judd introduce himself.

Hi everybody, I'm Judd Maltin, systems principal engineer at Dell. Dell's a very big place, 100,000 employees and merging with EMC, so things are about to get even stranger. We have an Enterprise Systems Group, and its head spoke at Red Hat Summit and touted this particular solution on top of Red Hat's OpenStack Platform (OSP). We've been collaborating with Red Hat on the OpenStack platform for three years now. What we do is build up a hardware reference architecture, deploy OSP on it in all of its beta releases, and communicate closely with Red Hat on a daily basis, sending bugs upstream or committing directly to OpenStack to get things fixed up so it runs great on our hardware. That makes it super easy for our customers to order up a small, medium, or large OpenStack cluster. I'm the lone resource doing PaaS and containers on our particular release of OSP. We are actually not distributing the software; the software all comes from Red Hat. We distribute the documentation and a little bit of open source code for some of the automation we've done to make it really easy to get the Dell gear up and running and OpenStack deployed. My job is to automate the deployment, testing, and delivery of OpenShift and CloudForms, to make OpenShift even more valuable on this OpenStack platform.

A little bit about me. I started a long time ago as a sysadmin and Perl developer and did a lot of work in web dev and administration at mid-size MSPs. I did video delivery, and I built the first online shopping cart in the Middle East when I was living in Israel doing graduate work as a sysadmin and Perl developer. Back in 2007 I did a really interesting virtual world deployment on Second Life and stood up and automated 200 EC2 instances for a television tie-in feature that lasted for a night. Then I did 10 years of identity management work with the likes of Netscape, the New York Stock Exchange, and some local and international consulting companies, and in the last 10 years I've been doing DevOps, before DevOps was a thing.
At the New York Stock Exchange, I saw the most resilient infrastructure and the most strict and reliable operations procedures you could ever imagine, and I said there has to be an automated way to do this. So back in 2003 I started with CFEngine and followed the whole world of infrastructure automation up until today, where I'm now focused on cloud, containers, and PaaS, and automating that infrastructure. I came to Dell to work on the Crowbar project, which has since spun out of Dell; the team based their own business on it and renamed it Digital Rebar.

So let's talk about OpenShift; no more talking about all this past. The things I want to cover in this presentation: first, why are we running OpenShift on OpenStack anyway? Why not run OpenShift on bare metal; why run it on any infrastructure as a service? The second section covers some details of our validated OpenStack platform, our reference architecture, and the deltas between what we constantly deploy and redeploy in testing at Dell and what you need to get OpenShift up and running and scaled. That leads me to the next section: creating the technical guides. We produced three technical guides for OpenShift on OpenStack: one focused only on OpenShift, the second a CloudForms deployment guide, and the third on integrating OpenShift and CloudForms together for a really robust experience. Finally, I'll talk briefly about how we're deploying and integrating CloudForms with OpenShift, and finish up with a little walkthrough of what I've deployed and documented for managing the whole stack, from the metal to the containers, in CloudForms.

So back to our initial question: why run OpenShift on OpenStack? First, most places have some sort of OpenStack investment. They've gone through the radical transformation from brick and mortar to cloud; whether it's an on-premise cloud or they're buying OpenStack from somebody as a service, they have made the intellectual and business leap into a cloud way of thinking about resources. The folks who've done it on-premise will enjoy commingling container- and VM-based applications. If they're doing network function virtualization, that technology is rather new but very useful, especially for telcos and network providers: people are putting firewalls, routers, and switches in virtual machines and doing packet inspection in VMs. That investment lives in virtual machines on OpenStack, and it's great that our containers would have close access to it if everything is on the same infrastructure as a service. If you put OpenShift on top of OpenStack, you get full-stack multi-tenancy, where your authentication services and resource isolation happen throughout the entire stack, and you can leverage whatever existing multi-tenancy you have in place. Also self-service: if you've already set up divisions of your company or your partners to deploy resources in your infrastructure through OpenStack, this just continues that paradigm and makes things easier for you. I see the biggest win in the second-to-last bullet point: taking advantage of existing operational procedures.
Operations is the heart of your organization; without it we're not in business anymore. For corporations, small businesses, and medium-sized businesses generally, when operations fails, the entire business goes down. The investment you've put into the operational procedures around your cloud, your infrastructure, and your virtual machines is nothing you want to pivot away from quickly, if you pivot away from it at all. Especially features like workload migration, snapshotting, and above all disaster recovery: those are probably well tried and tested in your organization, so don't give them up. Layer OpenShift on top of that and you'll have the same recovery you had before, or something very similar. Also, in interacting with vendors like Dell, you continue familiar purchasing patterns. When an architecture and a bill of materials are well defined, ordering becomes a lot easier and more predictable, so when gear arrives on your loading dock it's actually what you expect, and your operators and data center managers are putting together gear they're familiar with. You've already done all the work for OpenStack; there's almost no stretch at all to do it with OpenShift.

Now I want to talk about the details of our reference architecture, how they compare to the requirements of OpenShift, and what I had to do. We'll cover the sizing goals of Dell's OpenStack reference architecture and those deltas, how we scale the reference architecture and provide HA, and some of the flexible hardware details. I'll get into a few of the security issues and then review the three documents I produced to make this stuff happen.

First of all, our architecture goal, looking at virtual machines, was a minimum configuration that is scalable and flexible. Flexible meaning that if you wanted more storage than compute, or more compute than storage, you could easily size that and order it from us with a minimum of hassle. We have a bill of materials guide that gets you everything we've tested a million times, and we actually do the ordering for our little team within Dell through the same ordering system any of our customers use, not Dell's internal ordering system, so we know exactly what kind of customer experience you're going to have with Dell when you use our RA. We try to make it as easy as possible for you. We figured our base virtual machine would be two cores, four gigs of RAM, and 40 gigabytes of local ephemeral storage, and that's what we sized everything off of. Then we did a lot of work validating our storage solutions: the EqualLogic backend and the Dell Storage Center (which used to be called Compellent) backend. We wrote drivers for OpenStack and did a lot of work on Cinder, and all of that is upstream in OpenStack, and all of it is usable thanks to Kubernetes' implementation of a Cinder driver: being able to mount Cinder volumes is available within OpenShift, yay. We also looked very closely at the networking; there'll be even more network traffic with OpenShift, with containers talking to each other and Kubernetes keeping things alive. The last line at the bottom is our basic switch configuration, and here on the next slide I detail a little bit how that minimal storage plays out.
You're running one solution admin host, which has your OpenStack Director, your Ceph front end, and your Tempest testing virtual machine. We're giving you the capability to test and retest your OpenStack deployment when you deploy it; there's no second step you need to take to test what you've deployed when you use our reference architecture. We're deploying three controllers and three compute nodes. You have your choice of three different chassis, if you want to put more NICs in the box, and three major storage options. The R730XD is a nice 2U box just jammed with drives; you would normally run Ceph on top of that and have plenty of storage space. If you have the operations folks for it, or an investment in Compellent or EqualLogic, we totally support that.

Then this is how we scale up. From three compute nodes, we've tested up to eight compute nodes or seven storage nodes in the first rack. You can also mix and match, say four compute and three storage, if you just want to scale up that first rack, or you can go to three racks and end up with tons and tons of compute and RAM, mixing and matching the storage as you like. If you're going to have a lot of network throughput — the first line on the left under network — the Z9100 switch can handle very high data rates, depending on how you'll be using the cluster. If you're doing a lot of NFV, you'll probably want the Z9100 in your central rack.

So I looked at what our minimum configuration was offering and how that translates into OpenShift's terminology. Red Hat recommends each master VM have a minimum of two virtual CPUs plus one CPU per thousand additional pods, eight gigs of RAM plus another 2.5 gigs per thousand pods, and 30 gigs of storage. The node requirements are similar; I don't have to read it all out. What you end up with, if you purchase the minimum, is over here on the right: a physical solution of 72 cores and 348 gigs of RAM, roughly 5 gigs of RAM per core. In a non-oversubscribed situation, where you're giving every pod one-to-one virtual CPUs, you can deploy 14 instances of four vCPUs each. So of the 56 cores available, you could create 14 virtual machine instances with an OpenShift node running on each, and if you really want to maximize available performance for each node and each pod, you'd deploy only four or maybe five pods per node. That's if you're doing incredibly compute-intensive work. If you're not, the next slide compares non-oversubscription with oversubscription: if you oversubscribe our very base system with 72 cores, you could run up to around 200 pods. So those are the basics.
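To make that arithmetic concrete, here's a back-of-envelope sketch using only the figures quoted above; the variable names and the default Nova allocation ratio mentioned in the comments are illustrative assumptions, not a Dell or Red Hat sizing tool:

```bash
#!/bin/bash
# Back-of-envelope capacity math for the minimum validated
# configuration described above. Illustrative only.

TOTAL_CORES=72      # physical cores in the minimum configuration
COMPUTE_CORES=56    # cores left over for OpenShift node VMs
VCPUS_PER_NODE=4    # flavor size for each OpenShift node VM

# Non-oversubscribed: one vCPU maps to one physical core.
NODES=$(( COMPUTE_CORES / VCPUS_PER_NODE ))
echo "1:1 allocation: ${NODES} OpenShift node VMs"            # -> 14

# Compute-intensive workloads: roughly 4-5 pods per node.
echo "compute-heavy: $(( NODES * 4 ))-$(( NODES * 5 )) pods"  # -> 56-70

# Oversubscribed (Nova's default cpu_allocation_ratio is 16:1,
# though even a modest ratio reaches the figure from the talk):
echo "oversubscribed: up to ~200 pods on ${TOTAL_CORES} cores"
```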
Pivoting a little bit to a whole other subject: the left column is what we offer as tested and validated in our labs on Red Hat's OpenStack Platform — the OpenStack and OpenStack-associated features. We do the OSP Director deployment, everything automated, with Keystone authentication and the whole list of typical basic OpenStack features. The right column holds the features I had to enable for OpenShift, grabbing some packages off of OSP that don't come out of the box for us. I had to add the Neutron load balancer, which meant reconfiguring the controllers a little, and also enable the Heat CloudFormation-compatible API and the Heat metadata service. I'll get into a little of the complexity of how OpenShift on OpenStack works; this metadata service provides data into Ansible and then to OpenShift to get OpenShift set up properly.

Here's the eye-chart network diagram. It's a complex network: it has to support all of this storage, have an out-of-band management network, provide the OpenStack API, and provide floating IPs to OpenShift, or to more than one deployment of OpenShift. All the red stuff in the middle is the virtual machines that are running: there's CloudForms, there's the infrastructure node that's delivered by the open source project that deploys OpenShift, there's the Neutron load balancer, and the three on the right are the three master VMs. Then there's a whole other long horizontal red band near the bottom: the networks that are private to the OpenShift deployment. Down at the bottom are however many OpenShift nodes you're going to create, on their private network.

The security issues I found as I embarked on this: first, the division of concerns we had set up by network on the previous slide suddenly had some holes in it when we tried to do full-stack management with CloudForms. The second issue that jumped out at me was that there was no immediate Keystone integration; it wasn't something Red Hat was testing in the OpenShift-on-OpenStack project, it was very much a work in progress. To get full-stack, single-identity multi-tenancy, some work still needs to happen, and that's one of my to-dos for my next release.

What came out of this are three very detailed, follow-the-directions-and-you'll-get-a-solid-deployment documents: two technical guides and an integration guide. One is just for OpenShift on OpenStack, one is just for CloudForms on OpenStack, and the third integrates CloudForms and OpenShift together on OpenStack.

Here's what I really went through. I discovered the project, then very nascent, that builds OpenShift on OpenStack with Heat templates and bash scripts calling out to the Ansible work; I found it about eight months ago and saw there were a bunch of feature deltas that would have to be addressed. I want to describe a little of my life deploying unfinished infrastructure software as a practice, because I had to keep deploying this over and over again to address bugs and test it for all of y'all. OSP was actually not initially supported, and there were a few features I had to add so that the common customer outside of Red Hat could use their OSP licenses in addition to their OpenShift licenses. And if we have time, I can dig into how you go about debugging and coordinating the deep layers of the many technologies used to deploy all this stuff.

So what is this project? The project is very well coordinated: Heat templates, bash scripts, custom Ansible modules, and YAML configs all work together to do the steps in these two columns.
First, Heat creates the networks necessary, the security groups, etc., within OpenStack, creates the alarms in Ceilometer, and then deploys the infrastructure server. Now, the infrastructure server is not a part of OpenShift; it's something this team created to provide secondary services such as DNS and to act as the host that kicks off all the Ansible runs that deploy OpenShift and configure the cluster on all these nodes. Then Heat deploys the master VMs and runs the cloud-init scripts to set up the master boxes, deploys the nodes and gets them started, deploys the load balancer, and checks that all of that network connectivity is working. Then it runs the Ansible plays — the right column — off of that infrastructure VM: it deploys the OpenShift masters and nodes, and can deploy an OpenShift router and an OpenShift registry if you so choose.

As for the changes I had to make on my OpenStack: enabling the Heat CloudFormation API and the Heat metadata server was actually quite easy, just a small delta off of our published OpenStack deployment guide. Load Balancing as a Service was also pretty easy to configure, another delta off of our configuration guide. Where things got more interesting was that the DNS server that gets deployed also needed access to our corporate LAN, and I had to add a bunch of names and enable zone transfers so our DNS servers could pick up what was going on in the BIND server running on the infrastructure server, especially so I could use CloudForms to access the OpenShift services by name. And finally, the most complex part, which I won't dig into too much here: as I was doing these deployments over and over, I saw that the default SDN that comes with OpenShift, the Open vSwitch one, did indeed have the de-encapsulation/re-encapsulation performance bottleneck. So I switched our deployment over to Flannel, and it worked really well; the performance was really great. In a future release I'm planning on handing this over to our truly amazing performance testing team within Dell. Red Hat's been putting together a lot of tools for performance testing OpenShift and containers in general, and I'm going to act as the interface between those two groups so our performance team can really run OpenShift hard.

Here's a little tip if you're using this to test and you're developing along with us to make this a great product: meaningful stack names become really important. I put together a little script to create stacks with unique names that I could update over and over again; I was doing multiple simultaneous deployments of OpenShift, which were just fine as long as the networks didn't collide with each other. And since our reference architecture is sold to our customers in a far more waterfall method — Dell is far more traditional that way; we're not an agile software company yet — the PDFs and documentation that I release are really bleeding-edge stuff. So I put the GitHub SHAs in our PDF to ensure that any code my customers grab from GitHub is actually the code that I've tested: I have the SHA of the particular commit where I know things are good. A sketch of both habits follows below.
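Here's a minimal sketch of those two habits together: unique, sortable stack names for parallel deployments, plus pinning the deployment code to the documented commit. The repo path, environment file name, and SHA are placeholders, not values from the talk:

```bash
#!/bin/bash
# Unique stack names + pinned deployment code. Illustrative sketch.
set -euo pipefail

# Sortable, collision-free name for each parallel deployment.
STACK_NAME="openshift-$(whoami)-$(date +%Y%m%d-%H%M%S)"

# Pin to the exact commit documented in the PDF, so what you run
# is what was tested (the SHA here is a placeholder).
cd ~/openshift-on-openstack
git checkout 0123abc   # <- the tested SHA printed in the guide

heat stack-create \
    -f openshift.yaml \
    -e env_openshift.yaml \
    "${STACK_NAME}"       # end-user parameters file, then the name

echo "follow along with: heat event-list ${STACK_NAME}"
```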
So with my next release of the documentation and our next release of the OpenStack platform, I know all my stuff is going to work, because the SHAs are in there: as long as you copy and paste those SHAs, your system will be exactly like mine.

Early in my testing I noticed that in all this great open source code doing the deployment, there was no feature in the Heat templates to indicate the pool ID of the OpenStack subscription you bought from Red Hat. So I added one, and if we have time — it's a little hard for me to see how much time I have left — I could take a quick walkthrough of how to hack on this. You have plenty of time, so please do. Okay, let's have a look at it then. I will share my terminal.

So here we are on the director node, which is the result of the undercloud deployment and then leads the overcloud deployment of OpenStack. Normally one kicks off a deployment of the OpenShift-on-OpenStack product by running this heat stack-create, and there's our open source solution from Dell, and these are the parameter files one uses. That's the parameter file the end user modifies to kick off the deployment, so let's have a quick look at it. When I first joined the project, there was only this value, only the... so that was great, that was my OpenShift license, but I couldn't indicate my OpenStack license to get access to the repositories I needed. So I added this variable here.

What did I have to change to enable it? Well, let's look the really easy way, with our friend grep. The first file that's important is openshift.yaml. I went in, checked out openshift.yaml, and saw there was the RHN pool. I thought: I need another RHN pool. So I copied that structure and changed its type from a string to a comma-delimited list, to allow folks to add even more pools if they need to, and really just went through and added those parameters. We'll see the code in a minute, but there's actually a lot of duplication within Heat template files: they all have to know about each other's variables, there's a lot of testing for variables, a lot of defaults for variables, and you just have to make sure your variable gets expressed in each of the features implemented in Heat. Lately the project has been DRYing out — don't repeat yourself — and I'm excited to see how DRY we can actually get these files. But the shell scripts these variables end up in live in these fragments. When the master server boots, there's a shell script that runs as the master virtual machine boots: the cloud-init script, and these are functions added to the cloud-init script to set up repositories, unless they've recently been changed a bit, so bear with me for just a second. No worries. I was looking at the wrong one; in the master boot scripts there are some functions that add the repositories. It's been a little while since I've looked at this stuff. There we go: set_extra_repos is called. This has changed a lot since I last touched it, but it's really just hacking these shell scripts, which get coalesced into one shell script that becomes the cloud-init script: the first thing that runs the first time a virtual machine is brought up. It updates the operating system and pulls in some client utilities from OpenStack. And that's basically how you add a feature.
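For readers following along without the screen share, here's a rough reconstruction of the shape of that change. The parameter name and function body are illustrative, following what's described in the talk rather than the exact upstream openshift-on-openstack code:

```bash
#!/bin/bash
# Sketch of the extra-pool-ID feature. Names are illustrative.

# 1) In openshift.yaml, copy the existing RHN pool parameter and
#    give the new one type comma_delimited_list so several pools
#    can be attached. The YAML you'd add looks roughly like this:
cat <<'EOF'
parameters:
  extra_rhn_pools:
    type: comma_delimited_list
    description: Extra subscription pool IDs (e.g. your OSP pool)
    default: ''
EOF

# 2) In the bash fragment that gets coalesced into the master's
#    cloud-init script, attach each pool on first boot:
set_extra_repos() {
    local pools="$1"             # comma-delimited pool IDs from Heat
    for pool in ${pools//,/ }; do
        subscription-manager attach --pool="${pool}"
    done
}
```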
And here on screen, exactly, is my extra pool ID; this is the real code right here. It's so simple to add more pool IDs and attach to them. The issue I was seeing was that I didn't have access to all of the OpenStack tools that the folks inside Red Hat, or the folks who were using CentOS, had access to. So it was a very simple change; it took me an hour or two of work and testing, and I was able to get the feature added really quickly. The guys on the project, Jan and the others, took my patch really quickly — I think they might have cleaned it up a little bit — and I was able to start committing to the project, which was just wonderful. Really easy folks to work with. And that's really all there is to creating that additional feature. Awesome. Let me go back to my presentation.

Did you put up a URL in your PowerPoint somewhere for where those technical documents are available? They are not yet available on the Dell website; they are available through me, through our systems engineers and our sales engineers. We don't have a spot on our website where they're published just yet; we're reconfiguring our Tech Center to have more content from the cloud team. The overall reference architecture is available there. If I share my Firefox and you just go to Dell OpenStack — can you all see this? Yep, we can see it. — in our learning library, here's the reference architecture, here's the Red Hat reference architecture, and all the further documentation is coming real soon; it would be right up here. I actually have to do some updates to the docs. We don't have a spot yet to put the OpenShift docs; we consider them part of our OpenStack release. Cool. When the video of this comes out, we'll put it on blog.openshift.com, and hopefully we'll get some links there so people can get hold of that. And we'll put up your email address so you can get spammed. Oh, great. I love spam. No, seriously, let me switch back to my presentation.

So, like I said: step two, write your feature in the bash scripts. Step three, watch your feature fail in openshift-ansible. What happens is those bash scripts come together to create the cloud-init script that, as I said, runs at the first boot of your VM, appropriate to the type of VM: the infrastructure server, or the masters or nodes of OpenShift. openshift-ansible itself is another really big project out on GitHub that Red Hat is spearheading to automate the sophisticated deployment of OpenShift on a variety of infrastructures: bare metal, out on the AWS cloud or Google's cloud, or on top of OpenStack. It's a big project that's moving very quickly, and my feature initially failed on openshift-ansible, but they are also so incredibly responsive that I was able to get my feature fixed there within a day.

There are also a few habits I picked up to make debugging all of this easier. The way we set up the director, we already have SSH authentication — private and public keys — set up between the director node and all of the overcloud nodes. So what I did was take that public key and also put it on all the virtual machines that I had direct access to. If I switch back to my terminal: here I am on the director node, and if I look at what I'm running in Nova — let's look at the infrastructure server.
First, I have an infrastructure server. It's got a floating IP address, and it's got an IP address on the fixed network. A little sidebar here that's very important: the OpenShift-on-OpenStack deployment tool asks Neutron to create two networks for you. It creates one network for only the core components of OpenShift, and a second network that also includes the infrastructure server. You can see the difference right here, where this master-1 virtual machine has three networks: what we call the cluster network, the fixed network, and then a floating IP address from OpenStack that can NAT out to the world. The infrastructure server, which is not a core component of OpenShift, has the floating IP and the fixed network, so it can communicate with the OpenShift cluster, but it does not have an IP address on the cluster network. And the networking goes so far as to prohibit the nodes themselves, which are far behind the scenes, from having a floating IP address. So anybody on any of the OpenStack overcloud boxes — the compute nodes or the controller nodes — cannot SSH directly into their customers' workloads; they cannot SSH into the OpenShift nodes where the pods are running. They can only get into the masters.

So let's have a look at one of the masters, and at this practice of dropping keys in place: it got really frustrating for a while when the keys weren't there, so I just dropped the keys in place. Now I can SSH to this .20 address without the least concern. I dropped my key in place, and I can SSH into the floating IP of the master server, and there I'm at master-0. Then if I need to get to one of my nodes, I dropped the public key right there too, and I'm able to SSH into a node I wouldn't be able to reach otherwise, like this, which makes debugging much easier: I can just call journalctl from right there and see what my OpenShift is up to. And it's clearly up to a lot, because I have a few workloads running on it right now. So that's the practice: whenever you deploy an OpenShift cluster on top of OpenStack, it's going to be hard to get to the nodes, so drop your keys in place beforehand. That way you don't spend your life looking up key names and having a basically awful time.

Now, once you kick off the heat stack-create: I timed it, and I documented which log files you want to look at, and for how long, to ensure the process is running correctly. I may not want to go through all of it here and now; it's in the presentation, and it's in the documentation I released, which is a further fleshing-out of the documentation available on GitHub within the project. In short form: over 20,000 Ansible plays get executed to ensure this is all set up properly. For the first 10 minutes, you're watching the Heat event list, which is just at the OpenStack level, using the OpenStack API. For the next 15 minutes, you're looking at the infrastructure VM and its cloud-init as it prepares Ansible. For the next 10 minutes, you're watching the OpenShift nodes get created; you can log into each of them and watch their cloud-init scripts.
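Pulling those debugging habits together, here's a minimal sketch. The floating IPs, the cloud-user login, the log path, and the atomic-openshift-node service name are illustrative assumptions for an OSP-era deployment, not exact values from the talk:

```bash
#!/bin/bash
# Sketch of the key-dropping and log-watching debugging loop.

MASTER_FIP=203.0.113.10   # master's floating IP (yours will differ)
INFRA_FIP=203.0.113.11    # infrastructure VM's floating IP
NODE_IP=192.168.1.20      # node on the internal fixed network

# Drop your public key on every VM you can reach, so hops from
# master to node work without hunting for key files later.
ssh-copy-id "cloud-user@${MASTER_FIP}"

# Nodes have no floating IPs, so hop through a master to reach one
# and ask journald what OpenShift is up to (service name may vary
# by release):
ssh -A "cloud-user@${MASTER_FIP}" \
    "ssh cloud-user@${NODE_IP} 'sudo journalctl -u atomic-openshift-node -n 50'"

# During the Ansible phase, most debugging is just grepping the
# infrastructure VM's log files for failures:
ssh "cloud-user@${INFRA_FIP}" "grep -i failed /var/log/ansible*.log"
```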
Sometimes things will fail in those cloud-init scripts, and that's the only place you're going to find those errors: by logging into the virtual machine itself. Then for the last 30 minutes, if you want to watch what's going on, you're logging into that infrastructure VM again and watching the Ansible log files. That's generally where most of my debugging happens: watching those Ansible log files and grepping them for "failed", as in the sketch above. All of this is documented in excruciating detail in the two technical guides and the one integration guide.

I thought if I had a little time, I would show y'all the OpenShift front end, my running containers, my deployed applications, but also show you how CloudForms gives visibility, monitoring, and even reporting on all of these resources. How am I doing on time? Well, you've got about 10 or 12 minutes left, and Sylvain, Jan, and Mark are answering the questions in the chat. So I'd say, unless anyone objects, go on with the demo, and we'll run over a little if we need to for Q&A. Okay.

One important thing I'd like to call out here, which my boss keeps kvetching about, is the fact that CloudForms has access to everything. It looks really dangerous that CloudForms has access to the management networks in order to log into the director node, into the compute hosts, into the storage hosts, and then log into all the virtual machines. I try to assuage his concerns by saying: yes, but we have role-based access control in CloudForms, so everything's okay, and we've only opened up these specific ports, and only this one IP address has access to these core parts of your infrastructure. That should definitely be a different talk.

So let's have a peek. Oh, there are the windows; I have to access my stamp through a bastion host — I have to access all my hardware through a bastion host. Can you see this CloudForms Management Engine screen here? Yes, we can now. Great. Here we're looking at the overcloud nodes that have been deployed. Here you can see CloudForms 1, and these are all the virtual machines making up all of OpenShift, plus any others I happen to have launched, giving you quick at-a-glance status for how each machine is doing and what it's made up of. It's saying, you know, this is running RHEL, it's an OpenStack VM — I don't remember what zero is — and that it's running. You can also look at different parts of the infrastructure. The first thing we set up when you start CloudForms, other than CloudForms itself, is the infrastructure provider, which tells it that you'll be using OpenStack. I have nine nodes in my OpenStack deployment: three controllers, three storage nodes, and three compute nodes. And if we drill into this, we can see a summary of all of those nodes.
I've done what they call SmartState Analysis, which is a fantastic CloudForms feature that goes in and does deep interrogation, down to the processes on these boxes. So if I look at one of the compute nodes, we can see all of the services we care about, whether or not they're running on the box, and how they're configured. You can see it's running Nova and Keystone and Horizon, and whether it's running the servers or the clients of these particular OpenStack features. Then we can look at the virtual machines on this particular host through relationships: it's running three VMs, including my CloudForms VM and one of my OpenShift nodes. Here in my OpenShift node we can see all the details of this particular node, also through SmartState Analysis, which the developers call "fleecing". What happens is CloudForms takes a snapshot of the running virtual machine, brings that snapshot onto the CloudForms machine, mounts the image, and interrogates it for all the processes and all the software deployed on it: a very deep inspection, but out of band, without impacting the performance of that virtual machine.

Within this virtual machine we have OpenShift deployed, and I believe somewhere right here we can see where OpenShift is deployed. Actually, we can't. So: in the overcloud we could see my nine nodes, each of which has been interrogated, but if we go to Compute, then Containers, we can see an overview of what each of my OpenShift deployments is doing. Right now I have one deployment of OpenShift, one provider. It has five nodes — that would be the masters and the nodes, so two masters and three nodes — and each of those has a certain number of pods and services configured by OpenShift. I've got a bunch of applications running. I'm trying to get Hawkular going; there are some certificate issues, and getting the really deep performance information out of Hawkular is one of my to-dos. This is the very basics of what CloudForms offers you; it offers much more reporting and chargeback, a lot more multi-tenancy, and more custom dashboards than you could ever imagine, to control and report on what's going on.

So, Judd, we've only got five minutes left, so is this a good jumping-off point to see if there are any questions out in the land of chat? There was a little bit of chatter about Ansible Tower and whether or not you were using that, and the answer was no, not using Ansible Tower. Good, so that's how we answered it. Let's see if anyone else has questions. I'm going to unmute Sylvain and Jan if they want to add anything, because I know they were two of the key folks you worked with; they have themselves muted. Yeah, they were both incredibly, incredibly helpful. There you go: Sylvain Baubeau and Jan Provaznik, I think — you're both unmuted now if you want to add anything in there. And also, someone on the phone is asking: what kind of monitoring solution are you running,
and is that where your Hawkular stuff comes into play? The monitoring we're doing is only CloudForms. You can set up alerts for performance through CloudForms, but we haven't invested a lot of time and energy in monitoring right now; it's all been about deployment, and reliable deployment. Okay. We'll see if anyone else has questions. Mark Lamourine, I've also unmuted you as well, if you want to add anything in here. We've seen a couple of different reference architectures for OpenShift on OpenStack, and Mark was one of the folks who talked in an earlier podcast — I'll include the link — about doing some of this work. But you've definitely done a whole lot more work, and I'm waiting to see all that technical documentation come online, because I think it's going to be incredibly useful for the community to get started; we'll have to see how we can incorporate that down the line. The other thing that happened just the other day: there's a blog post — Jeremy Eder's been doing some work on performance tuning using the CNCF cluster. And I'm going to push Dell: if you guys haven't joined the Cloud Native Computing Foundation yet, you really ought to — I'm sure EMC is, at some level — because they have some hardware that they're making available as well. You have your own hardware, which is wonderful, but there's a great blog post about that work that just came out, and I dropped the URL in there.

And Judd has very nicely offered to chair an OpenShift on OpenStack special interest group for the OpenShift Commons — coercion is my game here. There are a lot of people in the OpenShift community that are also part of the OpenStack community, so I think it's a natural fit. We'll try to launch that SIG in the coming weeks, probably after Labor Day, after everybody comes back from holidays. And then on November 7th in Seattle — Judd has also been coerced into coming — we'll be hosting what we're calling the first OpenShift Commons Gathering. It's going to be co-located with KubeCon. You can register today: if you go to the OpenShift Commons site and add /gathering, you should be able to find all the details. All of the SIGs will be meeting there over lunch breaks, and it's really much more of a networking event, so you can meet your peers and have face-to-face time with them. We hope you all will join us there for that as well. And Judd, if you can throw your final slide up with your contact information, so people who need to get hold of you can. And we will try and... sorry — Jan, did you...? Yeah, just a minor note about the CNCF testing: our project was actually involved in this testing, and we had a part of these resources available for testing OpenShift on OpenStack. It was really quite successful scale-up testing; we were able to scale up to 100 nodes with DC templates, which was the primary goal for the project. So that's just a note to add to Jeremy's post. And also, thanks, Judd, for the awesome presentation. It was pretty cool. Thank you for the code. A lot of collaboration went on to make all of this happen.
So, we're really thrilled with the work that Judd's been doing over the past three years, keeping up with all of our revisions of OpenShift. You've got a slide up here about your future plans; walk us through that, Judd. Yeah: with OpenShift 3.3 coming out real soon — is it Kubernetes 1.3 or 1.4? 1.3 — we're expecting to be able to advertise support for a lot more storage backends, especially, and I want to use our labs to test storage performance. We've got a lot of storage here and a lot of folks who are really good at performance testing, and I want to collaborate more with Red Hat to show some real numbers there. I also want to do some .NET workload examples; we find a lot of our customers are very interested in being able to deploy .NET onto such a flexible and reliable infrastructure. We'll have to hook you up with the click-to-cloud guys, who have got all of that worked out; they'll definitely be interested in doing that with you. And we at Dell will be documenting in further detail how to expand our OpenStack cloud, because right now we just give you specifications but no documentation of actually how to go about it in a way that covers all three racks and all the networking. As they expand that part of the documentation, I'm going to expand my documentation to match the reference architecture. Then there are the nice-to-haves: the deeper OpenStack integration, especially with authentication, that my colleagues and product managers keep asking for — you know, one account across OpenStack and OpenShift. We're also hearing a lot from customers about network segmentation for tenants, and about billing and chargeback: they really want to do chargeback within their organization, or, if they're a managed services provider like the big telcos, they want to be able to offer bills. So I've already asked my questions. There's a little Dilbert for everybody, and this is how to reach me.

Awesome, and thank you so much. Well, thank you — it's been a lot of fun, and it's been a wonderful collaboration. So Jan and Sylvain, thank you very much for your efforts, and Mark Lamourine as well, who in the past has done a lot of the heavy lifting and contributed code to make this all happen; it's much appreciated. Again, we're going to be gathering together on November 7th for the OpenShift Commons Gathering, so please do join us for that day. I will make sure that Judd's there and available, and you can have your own OpenShift on OpenStack breakout room and go into even deeper dives in person. Looking forward to supplying the beer and making that happen in Seattle for you all. So again, thanks, Judd.