Let's get into the big topic of open source, something that we actually have in mind. This is so awesome. We are an open culture that is actually able to fix that process the way that developers, or let's say the communities of the system, really do. Welcome to this week's Ask an OpenShift Admin Office Hour. This is our second stream of 2022 and we have a very exciting topic, one that I've been looking forward to for the better part of three months. How long ago did we schedule this, Johnny? Oh yeah, well before the end of the year. So this is one that we're really, really excited to talk about today. I think I say that just about every week, and I really mean it every week, but especially this week. Well, it's always good, right? We've got plenty of nerd stuff to talk about, and when you're a nerd, you can finally get after it. It's normal for us, you know. Yeah, so I see Stephanie in our internal chat saying that there might have been a glitch. So just in case: hello everyone, welcome to this week's Red Hat OpenShift... man, I just completely botched that, didn't I? The Ask an OpenShift Admin Office Hour live stream. Apparently it's going to be one of those days. So hello, and thank you everybody for joining us today. Today's topic, if you did not see the title: we are joined by VMware's very own Dean Lewis and we will be talking about OpenShift on VMware in general, and we definitely want to have a conversation around the vSphere Kubernetes Drivers Operator as well. With that being said, I want to introduce Dean first and foremost. Dean, if you don't mind, as... what was it? Kindergarten Cop, you know, who are you and what do you do? I guess that was "who is your daddy," but close enough. Hello everyone. My name is Dean Lewis. I'm a cloud solutions architect at VMware here in the UK, if you can't tell from my lovely accent. I've been working with OpenShift technology sat on top of VMware for probably about 18 months or two years now, working with some of our internal folk and yourself, Andrew, and some of your team to really look at all the different integrations that we have, the different parts of the infrastructure that come together, and how best to take advantage of that. We presented together at VMworld talking about our joint innovation lab through your parent company IBM and some of the work that's going on there. And obviously, as you mentioned, one of the things we want to cover today, and a little bit about me, is the vSphere Kubernetes Drivers Operator, which is actually one of the outputs of that joint innovation lab with IBM as well. Yeah, and great minds think alike, Dean, because I was just getting ready to mention our joint VMworld session. So if you happen to have access, and I don't know if you had to have a pass or how that worked this year with VMworld, you can search for either my name or Dean's name and you'll find that session that we did together. Yeah, it's still a free registration for the on-demand video. So if you literally type OpenShift into the video search once you've logged into the VMworld.com website, you'll find our smiling pretty faces. For better or for worse, right? So this is one of the Red Hat live streaming office hours series of live streams.
And what that means is that office hours, much like a professor or a manager who held office hours, is intended for our audience, for you all who are watching, to be able to ask us anything that's on your mind. VMware is, of course, always a popular topic. I don't know whether it's on purpose or by coincidence, but we always seem to have somebody from the UK on when we talk about VMware. Whether it's you, Dean, or Rhys, a Red Hat employee who has been on a few times, also from the UK; he lives in Wales, I think. So I don't know, maybe there's something more authoritative about the English accent. Anyways, for our audience, please don't hesitate to ask any and all questions that come to mind about VMware, about OpenShift on VMware, all of that stuff. We'll do our best to answer them. If we don't know the answer here on the stream, or we can't find it, we'll be sure to follow up and get those into the blog post. Last week's blog post, I just heard back this morning, should be published either today or maybe tomorrow. So please keep an eye on cloud.redhat.com/blog for all of those. And we're churning through the backlog; I was told that they are backdating those old blog posts, so we'll share those on Twitter or something like that, and that way you'll be able to go back and find that information. Just be aware that they won't pop up brand new in your... gosh, nobody uses RSS anymore, I don't know, whatever replaced RSS. If you're watching the OpenShift blog, they will be backdated to whenever we did the stream. All right, Johnny, I see chat coming in, so I don't know if there are any questions we should take. Yep, there's one from a student. He's asking: hey, when I'm deploying OpenShift on VMware, should we use the CNI or use VMware NSX-T? Yeah, so I've just replied back to it. You see me come up as VHK because that's the name of my blog, so I branded the YouTube account like that. Hopefully that's one of the areas we'll get time to touch on today. I've already said to Andrew, I've got a lot of content that I can get through, but I'm also more than happy to answer everyone's questions, so if we don't quite get through it all today, then I'll come back for another session. But to give you a sneak preview of what I've got coming later: the answer first depends on whether you have VMware NSX-T in your environment, and whether you want your OpenShift environment to be part of the same overlay fabric as your virtual machines, consuming the same security sets. If the answer is yes, then you're going to be looking at the NSX-T functionality. If you're looking at using an open source, high-speed CNI, which also gives you advanced network policy control beyond what you get with Kubernetes out of the box, then you'll be looking at Antrea. And there is actually a way to consume Antrea with NSX-T today that we've just released as well. So again, as a customer, you've got the choice. And with something like Antrea, you can use it regardless of the cloud that you deploy into, whether it's your on-prem data center, AWS, Azure, whatever it may be. Yeah, you know, there are four-plus options just between the Red Hat and VMware integrations.
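For orientation, whichever direction you go, that choice ultimately lands in the networking stanza of the install-config.yaml. Here is a minimal sketch with the Red Hat defaults shown; the CIDRs are illustrative, and if you go the VMware route the networkType value comes from the NCP or Antrea operator documentation instead:

```yaml
# install-config.yaml (networking stanza only) - illustrative values
networking:
  networkType: OVNKubernetes     # or OpenShiftSDN; the NCP/Antrea operators document
                                 # their own value to put here instead
  clusterNetwork:
    - cidr: 10.128.0.0/14        # pod network
      hostPrefix: 23
  serviceNetwork:
    - 172.30.0.0/16
  machineNetwork:
    - cidr: 192.168.25.0/24      # the portgroup subnet the node VMs live on
```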
So whether you want to use OpenShift SDN or OVN-Kubernetes, those are well documented by VMware, or, excuse me, by Red Hat. So very much to Dean's point, it really comes down to: do you use a Red Hat solution or a VMware solution, and which feature set do you want to take advantage of. And I really like that NSX-T and Antrea can now work together. I did not know that, and hearing it makes me very happy, because that is a massively powerful combination. And that's one of the demos I've got prepared in the background today, to take you through a piece of that. Essentially what we're going to look at is using NSX-T to extend that policy management control from your typical virtual machine estate and then control Antrea with it as well. Yeah, and ourhope9, I see your comment, "pick the one your organization is more familiar with," and that's an interesting stance. The reason I say that is because a lot of times we encounter customers where the VMware team and the OpenShift team are decoupled. The OpenShift team is largely unaware of what happens on the VMware side, and so they'll default to something like OpenShift SDN or OVN-Kubernetes when, in fact, for other parts of the organization it may be more beneficial to use the NSX-T or Antrea integrations: the security team having control, the VMware and network teams having visibility into what's happening inside of there, all kinds of other stuff. So in general, yes, I very much agree with you, ourhope9; "whatever you're most comfortable with" is kind of my default answer in many cases. But in this case it may be worth communicating with your peers in other parts of the IT organization to find out what the total best solution is. And, Dean, I'm not sure if you caught it, but there's another question that came in about VMware and OpenShift and SR-IOV, from Podrick. He's asking if there are any specific integrations for VMware and OpenShift when it comes to the SR-IOV operator. So as it stands today, I believe that operator is produced by Red Hat; I think it's an OpenShift one today. So the answer is no, it's not something we've worked on to date. One of the things that you're probably going to learn is that we try, where possible, to make all of our infrastructure integrations through operators, and we've started to certify them with Red Hat as well going forward. So we're slowly bringing that together. As you can probably tell, in earlier editions of OpenShift there has been content produced out of the box from Red Hat and from VMware, and it's about choosing the right one that works for your business; we're going to start aligning more closely on that. Again, this also relies on the feedback from yourselves, the customers, the people who are using this. The more you come back to us and tell us these are the integrations that you want, the more we can focus time on them. So one of the things that we do have is the IBM Joint Innovation Lab. This is where we work together and fund co-engineering, and I've got one of the examples of the outputs of that today. What we're expecting to see through the rest of 2022 is more of that joint innovation together. Very much so.
I'm curious as to whether or not, and I don't know the answer to this despite sitting in on several PM meetings around VMware integration, we'll be able to see things like better integration between those operators and the cloud provider that does machine provisioning, so that we can request special hardware, special features, special capabilities that way. Today, I think you have to do it through machine sets: basically you have a machine set that references a template that already has an SR-IOV device or a vGPU device or whatever that happens to be. But one of the really cool things in my mind, and to be clear, as far as I know this is not on the roadmap and I'm already drafting the RFE in my head, would be: hey, it would be great if I could use the Special Resource Operator or something like that to request that device, and it could talk back to the infrastructure and figure out how to make that happen. Yeah, so I think one of the questions that we get quite a lot is around GPU usage, because for many years you've been able to take a dedicated graphics card from your chosen provider and add it to a virtual machine. Once that's inside the virtual machine, how do you actually consume it within Kubernetes? Obviously NVIDIA have been doing quite a lot of work around their AI piece, and with that they've put a big focus on Kubernetes with operators as well. I don't think NVIDIA have drafted up exclusive documentation around the OpenShift-on-vSphere piece yet, so don't quote me that it's definitely working, but I know it's one of the areas they're looking into, and there are strides being made where there will be other ways to consume this. Again, if we take a step back and look at VMware, one of the technologies that we have is something called Bitfusion. Bitfusion essentially allows you to create a GPU farm and then consume that farm through a software request front end. So you don't need to go through the same thinking of, oh, well, how do I design my virtual machine and add it all in? You've got a farm of resources in your environment, you call off to an API to consume it, and it gets attached dynamically inside of Kubernetes through those operators as well. And we're seeing customers now start to go beyond that whole "hey, I've got some dedicated resource, dedicated hardware, and I want to develop some capabilities" to "okay, how do I productionize this and what does that look like?" We're starting to see more adoption as customers move into production, whereas in the last couple of years, especially from my personal experience, it was more customers saying, hey, I'm okay, I've got some dedicated hardware with that one GPU added, because I don't care if it dies, I'm developing stuff, I'm not trying to productionize this yet. Yeah. And it's interesting you bring up Bitfusion. I was asked last week about whether or not we support Bitfusion, and I don't know the answer to that. So that'll be an interesting one.
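To make the machine set approach described above a bit more concrete, here is a trimmed sketch of a vSphere MachineSet that clones workers from a VM template which already carries the special device; the cluster ID, template name, and sizing are placeholders, and several required fields are omitted for brevity:

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: mycluster-abc12-gpu-worker          # convention: <infrastructure-id>-<role>
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: mycluster-abc12-gpu-worker
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: mycluster-abc12-gpu-worker
    spec:
      providerSpec:
        value:
          apiVersion: vsphereprovider.openshift.io/v1beta1
          kind: VSphereMachineProviderSpec
          template: rhcos-worker-vgpu-template   # VM template prepared with the vGPU/SR-IOV device
          numCPUs: 8
          memoryMiB: 32768
          diskGiB: 120
          network:
            devices:
              - networkName: "VM Network"
          workspace:
            server: vcenter.example.com
            datacenter: DC1
            datastore: DS1
          credentialsSecret:
            name: vsphere-cloud-credentials
          userDataSecret:
            name: worker-user-data
```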
So I'm going to pause on the questions for just a moment; I want to hit our top-of-mind topics real quick, just because most of them are follow-ups from last week, and I want to make sure we go ahead and address those. But don't let that keep you all from submitting questions; we'll keep track of them and answer as many as we can, and if we happen to miss any, don't be afraid to bring them up to us again. So, top-of-mind topics. Following up from last week, we talked about how to troubleshoot operator updates: what's happening if an update is stalled, or what happens if it's failing. Remember, everything with an update, and with an OpenShift deployment in general, is controlled via operators, so really what you want to focus on there is troubleshooting the operators themselves. And let me find the right links; I'm going to post two links into the chat here. One is the documentation, one is a KCS article, and they describe how to do some of that troubleshooting, where to look, and some things to investigate. I think I posted it in the right place, yeah. So just a continuation from last week, I did include those links in the blog post; again, that blog post should be going out either later today or tomorrow, first thing. The other thing that we were asked about, and I don't remember if this came up on the stream, but somebody sent me an email about it, is developer entitlements. So yes, if you look in the infamous Appendix 1 to the Red Hat license agreements, OpenShift is included with the developer entitlements that you get with a developer account. Back with the whole CentOS change, when that moved to CentOS Stream and we changed the way the entitlements work, you now get 16 RHEL entitlements with a developer account, and you also get 16 cores of OpenShift. The issue thus far has been that if you go into console.redhat.com and try to entitle a cluster with that, the entitlements don't show up. I've been told, when we first started investigating, that it was fixed; it wasn't fixed; I was told it was fixed again; it wasn't fixed. So we're going round and round, but we'll get that sorted. Just know that if you do want to entitle a cluster like that, you can continue past the 60-day trial. Console.redhat.com won't be pretty, if you will, but none of the functionality will stop working, so you can continue to do that. If for some reason somebody comes knocking at your door asking, hey, why haven't you entitled this OpenShift cluster, I think it would be safe to say, well, I'm using the developer entitlement, it's just not available, I can't click the button in there. I'm hoping that will get resolved in the not too distant future. And let's see, what were the other things that I wanted to bring up? Oh, the last one, actually, for Johnny and Dean who are looking here: I have another one about vCenter, but we can talk about that in a minute. So the last thing I wanted to talk about is something that, I guess, not too many folks were aware of, and that is the global pull secret. When you go and deploy an OpenShift cluster, you provide a pull secret, and that pull secret is configured as the global pull secret. It's available to all projects and all users; anytime a pull happens, it uses that secret to, for example, pull images from Quay or wherever they happen to be. If you're doing a disconnected install, that pull secret would have the credentials for your disconnected registry instance, for example. But you can extend that global pull secret to have credentials for other registries. So maybe, for example, you're using the docker.io registry, Docker Hub, and of course without authentication Docker throttles your pulls. Maybe you have an organization account or something like that that you want it to use, so you can remove that throttling, as opposed to doing it on a per-project or per-user basis where maybe you have higher thresholds but still get throttled on occasion. You just add those credentials in there.
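The flow for that follows the documented "update the global cluster-wide pull secret" procedure; a minimal sketch looks something like this, where the docker.io credentials and email are placeholders and the existing auths entries must be kept intact:

```yaml
# 1. Pull the current secret out of the cluster:
#      oc extract secret/pull-secret -n openshift-config --to=. --confirm
#    which writes a .dockerconfigjson file shaped like the snippet below.
# 2. Add an entry for the extra registry, then push it back:
#      oc set data secret/pull-secret -n openshift-config \
#        --from-file=.dockerconfigjson=.dockerconfigjson
# Nodes pick the change up as the machine config operator rolls it out.
{
  "auths": {
    "quay.io":   { "auth": "<existing token, leave as-is>" },
    "docker.io": { "auth": "<base64 of user:token>", "email": "you@example.com" }
  }
}
```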
Here, I'll post that documentation link; I'm posting these onto YouTube, and I don't know if they're also showing up elsewhere. I don't see them showing up on Twitch and I don't see them in Restream, so I don't know what's going on there. All right, there's the updating-the-global-pull-secret link on Twitch. And then here is the KCS, and here are the docs on operator troubleshooting, just to be doubly redundant. All right. Yeah, we can't see them. Thank you. Oh, you can't see them on YouTube? I see you're responding on YouTube. That's strange. Yeah, so I can see them coming into YouTube through the Restream bot now, so they should crop up there. Strange. Okay, well, I guess I'll use Twitch to do my posting. Good to know. Gotta love technology. Yeah, and apparently it's just one of those days with technology. Google has been a bit wonky for me today; Red Hat uses Google services, and Google Chat was being very slow to load threads and stuff like that. So I don't know, we won't dwell on it. Okay, so that was all the top-of-mind topics that I had. I definitely want to prioritize questions from the audience, but at the same time, Dean, I know there's a lot of stuff we want to talk about to lean folks towards some of the newer things that are coming out, some of the newer things that we're working on. So what I would like to do, Dean, is, if there's anything you'd like to talk about in particular, I think the Antrea integration is one that folks are interested in, and we can also talk about any high-level best practices, common things that we see, and we'll just answer questions as they come in. We'll try to answer them in the chat and then also bring them up verbally when we have a chance. So yeah. So I think there are two questions that are outstanding. The first one is: what about OpenShift Virtualization on VMware? For this one, I'll be honest with you, I don't think it makes sense, and the reason for that is very simple. OpenShift Virtualization is built for KVM, right? It's how to consume a KVM layer for VMs. So essentially what you're trying to do is build a virtual machine layer to put another virtual machine in it, on top of VMware. That doesn't make sense. Correct. If you want to consume your virtual machines side by side, then you've already got that with VMware through vCenter and through any other operational tooling that you've brought with it. There's another question about when nested networking will be supported on VMware. If you don't mind me pausing, I'll expand on the OpenShift Virtualization one quickly. To your point, you're 100% accurate: OpenShift Virtualization is itself a virtualization solution, so having nested virtualization doesn't really make sense. For one, it's basically impossible to support because there are two hypervisor layers there and one doesn't know what the other is doing, et cetera, right?
Even VMware, I don't think, officially supports nested vSphere on vSphere. It works, and I know a lot of us use it in our home labs, myself included, but if you're a corporate customer saying "I'm rolling this into production," people are going to say wait a minute. And that's true of Red Hat with KVM as well. Yes. So the use case, when we talk about OpenShift Virtualization, is often "I'm deploying applications in containers, but I have some virtualized component that I'm dependent on." Especially if you're coming from RHV or OpenStack or something like that, maybe it makes sense to keep those KVM-based virtual machines there. If you're doing that with VMware, then you have a choice: do I use something like the NSX-T integration so that my VMs and my pods can be on the same SDN? You achieve the same sort of thing; they can do that native communication, they can talk to each other without additional abstractions. So it's definitely a "what's your use case, what are you trying to do" question. Application modernization is a big one when it comes to that. I know VMware has been doing some pretty cool work with the supervisor cluster and the ability to do things like provision virtual machines using a Kubernetes API. Dean, help me out here with what that's called; it's the virtual machine API or virtual machine provider or something like that. Yes, well, we call it the VM Service, and essentially, through our own runtime, we've got the ability to speak to that API and spin up a virtual machine in the same place. When you go under the hood of what we're doing with that virtual machine, it's being created the same way as if you right-click, create virtual machine; it's just the interface through which you drive it. Whether that changes in the future, and whether OpenShift will be able to call out and create a virtual machine on its own as well, remains to be seen. Obviously OpenShift already goes part of the way there as part of the bootstrap: the installer creates the virtual machines and pushes the images and so forth, but not as, I don't know, maybe a day-three action. Yeah, that'll be an interesting one to see as well. I don't know if that's on the roadmap or not, because right now we use the standard vCenter API, actually abstracted through Terraform, to do that VM provisioning. So that'll be an interesting one to see. Sorry, I know I distracted us there, but ultimately, depending on your use case and what you're doing with your application, one, the other, or both may be the right answer. So definitely talk to us; you can always reach out to me, Andrew.Solomon at redhat.com, and I'm happy to help answer those questions, and of course your Red Hat and VMware account teams. So the other question: when will nested networking be supported on VMware, from Khalid? Yeah, I'm not sure I fully understand that question, because if we're talking about using a CNI, which is almost nested networking at the Kubernetes platform layer, on top of the underlying networking of VMware, we obviously support that today through various integrations. If you're going to run upstream Kubernetes with Flannel, that's supported; if you're going to use OVN-Kubernetes, that's supported; if you use OpenShift SDN, that's supported.
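Circling back to the VM Service Dean mentioned: requesting a VM through the Kubernetes API is done with the VM Operator's VirtualMachine resource. The sketch below is from memory of the vSphere with Tanzu VM Service documentation, so treat the API version, field names, and the class, image, and storage names as assumptions to verify rather than a definitive manifest:

```yaml
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: demo-vm
  namespace: dev-team-namespace      # a supervisor namespace with a VM class and content library assigned
spec:
  className: best-effort-small       # VM class published to the namespace
  imageName: centos-stream-8         # image from the associated content library
  powerState: poweredOn
  storageClass: vsan-default-storage-policy
  networkInterfaces:
    - networkType: nsx-t             # or vsphere-distributed, depending on the supervisor networking
```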
Obviously today I'm going to talk a little bit about the networking side of things, so I'm going to share my screen now. I'll give everyone a quick preview of what I have ready, and then I'll move the slides around so we can start jumping into some of those networking discussions. For anyone that wants to follow me, you can find me on Twitter; I'm saintdle. And you can always drop a message in the chat if you can't find me on Twitter. What I put together today looks, from an infrastructure-up point of view, at where we fit together. We've already started talking about best practices and issues. I'll talk a little bit about consuming the underlying vSphere storage with OpenShift; we do see a lot of customers bring separate storage arrays to their Kubernetes platform, but we're also starting to see, especially with customers who have vSAN in place, which is storage at the hypervisor level, that they want to consume that. Then infrastructure monitoring: where does the cloud management suite from VMware sit in there as well? And then there's a whole host of other pieces from VMware that I've left out for today. So I just want to start off with a top-level run-through to give everyone a baseline, and it might generate some more questions for us to answer. I'm going to do this through the viewpoint of what's called VMware Cloud Foundation. VMware Cloud Foundation is essentially the majority of our VMware products together in an easy-to-deploy package which is easy to manage, centralized, backed by infrastructure automation and so forth. I want to do it from that point of view because you get a lot of the products that integrate with OpenShift running on top of it as your container platform. So the first one, from a compute point of view: we always get asked straight away whether this is supported, and OpenShift has been running on vSphere for many years. That's from both the VMware side of the house and from the Red Hat side of the house, and you can see that in our joint documentation. From a VMware point of view, we have reference architecture in our VMware Validated Designs, which is just going through a slight rebranding at the moment to VMware Validated Solutions, and you can also see designs for VMware Cloud Foundation, as well as design guides, if we go into a little bit more of an example. Second piece: vMotion. I always get asked this, is vMotion supported? Yes, it is. It now actually appears in the OpenShift installation documentation, just to avoid confusion. There is also a feature called Storage vMotion. Thank you for highlighting that, Dean. That's where the underlying storage, the datastore or the LUN that it sits on, holds the installed disk and any auxiliary disks; Storage vMotion allows you to move that between datastores, between LUNs on your array. That is not supported for OpenShift by Red Hat, and actually we don't support it either for our runtime. I think I know why, and it's because in the early days we, Red Hat, didn't do ourselves any favors by not directly answering the question of "is vMotion supported?" The docs were kind of ambiguous, the KCS articles out there were ambiguous. I think at one point we were even saying, yes, it's supported, but we don't encourage it or we don't recommend it, and stuff like that. There is a lot of nuance behind that and a lot of other things that are going on.
I think even today, if you search the KCS, certainly for employees, because I don't know if the in-progress or non-certified ones show up for everybody, you'll find there's a KCS out there that says vMotion might cause a virtual machine, an OpenShift node, to show up in a non-responding state or something like that. The point of that KCS is basically: vMotion is fully supported, but occasionally you might find this error or this issue, and if you do, here's how to fix it. What the SPLAT folks have told me, which is our, what does SPLAT stand for now, special platform team or specific platform team, they're the folks who, Dean, you probably know Richard and Joe, focus on VMware. What they tell me is basically that if a vMotion fails, or if the vMotion takes a long time and the VM is stunned, especially a control plane VM stunned for a long time, then that type of scenario can happen. But if it's just day-to-day, normal, standard VMware operation, putting the node in maintenance mode or DRS doing its thing, there's basically no risk there, and the worst thing that happens is the node goes unresponsive; the KCS tells you how to kick kubelet and have it begin responding again. But yeah, vMotion is absolutely supported. Storage vMotion is the one that you've got to look out for, because it causes the link, the connection between the VMDK on the datastore and the PV object, to become disconnected. So it loses track of where the VMDK is, and then it can't mount it, of course. Yeah, and for those of you that are using older vSphere versions, I really recommend that you look at vSphere 7, especially in production estates, and the reason for that is we put a lot of technical work into, not quite a refactoring, but pushing vMotion to the next level, where we can do sub-millisecond migration stuns, we have better support for high-latency workloads and so forth, and the maximums that we support have increased massively. I haven't got it prepared for today, but we do have some information around the performance gains, where we've moved a massive SAP system virtual machine from one host to another on the older versions of vSphere versus now, and the difference is tremendous at those larger node sizes. It comes back down to running anything on top of virtualization: just be aware of the typical characteristics of vMotion. There is a stun operation when it flips that memory and CPU usage over from one host to the other; that's always going to be there because of how virtualization works. That's typically when people run into issues, which then manifest in your platform or application; as Andrew said, you may get a node not responding within the Kubernetes service, and that's because it's been stunned for a couple of seconds. So yeah, words are hard; another reason to use vSphere 7. And with the CSI provisioner there's a lot of really good information that gets surfaced up into vCenter about the PVCs as well, so if you're dual-hatted, if you're both the OpenShift and the VMware admin, or if you work closely with them, troubleshooting all of that, that information is really, really helpful. Yeah, so if we go back to the slide, I'll take everyone through the rest of the areas. We've got that cloud provider support: you have the ability to scale your OpenShift cluster and manage the resource usage of the virtual machines that have been deployed to become your OpenShift nodes, through machines. So you can edit those
machines, you can scale them up, scale them down, and you can create new machine sets, or machine groups I think OpenShift calls them, which is how you then deploy different types of nodes within your environment at different sizes; I will check that in the background if I can. So you do have control there. Obviously that's slightly different to the whole "if I'm interfacing with OpenShift, can I deploy a standalone virtual machine that I can put, I don't know, CentOS and Jenkins into, for example," something that isn't containerized. Moving on: from a networking point of view, we support integrating with NSX-T. We do that both with our Network Container Plugin, which is NSX native, so this is bringing OpenShift into the same overlay that is used by your host systems and the virtual machines that live on top of those hosts; or we can bring in Antrea. You can use Antrea as a standalone component just inside your cluster on its own, but you can also manage it as part of NSX-T, so your network team can get full visibility into that environment as well, while you still use that high-performance CNI with the policy management. And then from a monitoring point of view, and I think this one's really key and it always gets left out, right: we have full-stack monitoring in vRealize Operations through our Kubernetes Management Pack. That gives you the next step beyond the virtual machine, so straight away, if you deploy OpenShift on top of vSphere and monitor it with vRealize Operations, you can see that virtual machine come up in vRealize Operations, which is great. I speak to lots of customers where the Kubernetes guys ring up: "hey, this pod isn't working, I think it's your virtualization platform, I need it fixed." "Hey, what's a pod?" And they go, "well, it's a service that runs inside my set of virtual machines." "Okay, how many virtual machines have you got?" "Lots." "Where does the pod run?" It takes us into that next step, where we can go inside the environment, see the Kubernetes side, see the pods, the services, the API, and then also link that back to vSphere, so we can see which PVs are consuming which underlying VMDKs through vSphere. It's really good for troubleshooting, and it makes it all searchable as well, so it's the next step in bringing this together. And then the final piece is that we have our own logging product called vRealize Log Insight, and we can collect those logs via Fluentd. We've got some unique characteristics versus some of the other products that are out there in how we visualize data, so if you are using that within your environment, we can bring those logs across as well. And Dean, I should probably know this: with the vRealize Log Insight integration, that is not dependent on the OpenShift logging service being deployed, so you can deploy both, but I don't think you're required to deploy OpenShift logging? So you're not required to deploy both; however, I worked with some of the Red Hat team around this, and there's a Red Hat blog post that I helped co-write on it. It's easier if you use cluster logging and just deploy Fluentd through the cluster logging operator, and there's a very simple reason for that: it's because of the openshift-logging namespace that's created with the right permissions out of the box. Of course, to get audit logs and so forth, you need host-level access permissions inside of the cluster. If you go off and deploy Fluentd on your own, it's not going to have the right level of permissions through the security context to access the data, which means there's a lot more work for you to set all of that up. So again, it's a question of which way you want to go about it, but inside cluster logging you've got the ability to control how much of the product you deploy, because it comes with things like Kibana and Elasticsearch, and you don't need those components if you don't want to use them. Makes sense. So basically you can configure OpenShift logging to forward over to vRealize Log Insight instead of using the locally deployed Kibana, Elasticsearch, etc.? Yes.
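For reference, that forwarding arrangement is driven by a ClusterLogForwarder resource from the cluster logging operator. A minimal sketch, assuming a Fluentd relay that ships on to vRealize Log Insight roughly along the lines of the blog post Dean mentions; the hostname and port are placeholders:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    - name: vrli-relay
      type: fluentdForward
      url: 'tls://fluentd-relay.example.com:24224'   # relay that forwards on to Log Insight
  pipelines:
    - name: forward-to-vrli
      inputRefs:
        - application
        - infrastructure
        - audit
      outputRefs:
        - vrli-relay
```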
So, are you at a point where you want to address some questions? Yeah, I think Johnny's been keeping track of some of the questions for me, so thank you very much, and I know a couple of people in the community have been answering these questions as well, so thank you. One of the questions that came through was: what kind of performance hit may we face at the application level when running containers on top of OpenShift on VMware? In use, it's negligible. There is going to be a slight performance impact, because you're going through that hypervisor layer rather than consuming the resources bare metal, but there are a number of reports on this and the same things apply as to the question of virtual machines versus bare metal generally: it's very negligible, and you also get better resource management as a whole. The reason so many customers over the years virtualized their estates versus bare metal was that they had Exchange servers with 364 GB of RAM in them that weren't consuming it, SQL servers and so forth, and now you can scale according to your needs. From an availability point of view, you also get some additional benefits: if a host goes down, vSphere will manage that high availability in the event of the failure and power the virtual machine back on. In some instances, in some of the testing that I've been doing internally with a colleague, vSphere was powering on and bringing the virtual machine, the Kubernetes node, back online before Kubernetes had reconciled and realized the node was offline. Now, admittedly, this was when we were testing vanilla Kubernetes, because we were just playing about with a couple of Ubuntu machines, but what we were seeing there is that the vSphere layer was actually giving us availability that we were struggling to get out of the box inside Kubernetes itself. And the expected behavior there is essentially: the physical node fails, it takes down the OpenShift node, vSphere HA returns that OpenShift node to service, and so basically there is no point where OpenShift declares the node not ready. What will happen is it will notice all these pods are gone, "I thought they were there, they are no longer there," so then it will go through the rescheduling process.
It will immediately begin to reschedule all of that workload and spin it back up, as opposed to going through the normal Kubernetes wait, which I think is five minutes before it declares the node unreachable and then reschedules the workload. So yes, all the workload has to restart; it's not like it magically comes back in the exact same state it was in, but it happens through a much faster process. One of the other questions I saw here, and I think it was from Ashish: is it possible to have an OpenShift cluster installed on VMware on AWS cloud or Azure cloud, so VMC? The answer is yes, absolutely, definitely with AWS; VMC is in the documentation, because I helped write that one, so that's fully supported. I don't think we've added Azure to the documentation yet, but from an AVS point of view it's just another VMware environment, it's just managed by Microsoft; that's the only difference. So from that point of view the supportability should be the same. I do know there's work going on in the background to have a more official statement on that going forward, so you'll probably see that come out in the next couple of months, but essentially, from a technical point of view, how you consume it is pretty much the same, because it's just another VMware platform. Yeah, and I know that every OpenShift build is specifically tested with VMC, so VMware Cloud on AWS. While VMware on Azure is just another VMware environment, because it's not specifically tested we consider it a non-tested deployment. The KCS I just posted in the chat covers any support caveats there, which is, more or less, it's supported, but if we suspect an issue may be because you're on a non-tested platform, we might want you to recreate it on a supported platform. Yeah. So another question kind of links into the compute side of it, about availability zone concepts: what would a three-zone architecture look like? First and foremost, one of the things you have to understand about the virtualization platform is that the cloud provider will bring through a lot of detail into the environment, but some of the concepts don't quite match. If you're using the Distributed Resource Scheduler, which is where VMware moves virtual machines around to where it thinks they're best placed, always set up anti-affinity rules to make sure that not all of your control plane nodes sit on the same host, because if that host goes offline you've lost your control plane; the same again for your compute nodes. When we talk about storage, we deploy something called the cloud provider interface, from VMware, at the vSphere level, and essentially that gives us insight into what's going on at the vSphere level, which allows us, from a storage control point of view, to understand how to map those VMDKs to the virtual machines. So you can set up a topology from that point of view. But I will call out now: if you're using the integration for storage, we don't support stretched clusters at the moment, and that is because you have a bit of a chicken-and-egg scenario around where that data lives; if you're doing it over the stretch itself and something goes offline, it can cause problems that you don't really want in terms of attaching and managing persistent volumes inside your Kubernetes environment. Yeah, so a couple of things to be aware of there. You said affinity slash anti-affinity rules; unfortunately that's not something that
we configure in an automated fashion, so it's a day-2, outside-of-OpenShift thing that needs to be set up. I think there's an RFE for that, but I don't know its status; I would have to go and check. So just be aware: yes, I would definitely encourage everyone, especially for control plane nodes, to go and configure anti-affinity, soft anti-affinity, on those, so that you can prevent a single physical node failure from crippling your OpenShift cluster. With regard to multi-vCenter, we don't support that today, so you can't have a single UPI or IPI cluster, one with the cloud provider integration, deployed across multiple vCenters. And multi-DRS-cluster is a little, I'll say, strange. With UPI it's fine: you're manually deploying those nodes, and it basically works exactly as expected; the thing to be aware of is that whatever datastore is being used for PVCs needs to be accessible to all of the hosts that the OpenShift compute nodes will run on. If you're using IPI, there were some bugs found, I think in the upstream VMware provider, and I say "bug," although I think it was actually working the way it was intended; we considered it a bug because you have to put the credentials into the vsphere.conf, so they're basically plain-text credentials stored on the file system of the host. But I saw recently that there is some progress on that; one of our folks opened a pull request, so hopefully that will get addressed. So it's one of those things: it works, but there's an asterisk there to be aware of. Yeah, that was all I had to say there. No problem. I can see we're burning through the time with all the questions, which is fantastic. I was really worried, because I know people said all of the VMware streams get lots of questions, but I thought I might turn up today, they'd hear my English accent, and everyone would just go "nope, not interested." If everyone can give me a couple of minutes, maybe we can go through a couple more of the slides and talk a little more in depth about the networking piece, because there were a few questions on that. That's okay, please do. So I have one last animation on this slide, which is around automation. This comes up quite a lot as well, about how to use vRealize Automation, typically around the deployment side. Today we don't really have a native integration dedicated to OpenShift, but what we do have is integration with any conformant Kubernetes platform to provide self-service capabilities and continuous integration and delivery via infrastructure pipelines. And for those of you that want to go to my blog at veducate.co.uk, I've done a blog post on how to use vRA pipelines to deploy an OpenShift cluster using IPI, and essentially that's because I'm using the concept of tasks in there, which is the same as doing it from my laptop in the background; that was just me messing around. I'm going to build that out a little bit more, about how you use vRA to do that, because essentially it's fully extensible, so if you think about it, you can build it. It's just that we don't have a native integration to one-click say "hey, deploy me an OpenShift cluster" from there. Whether that comes onto the roadmap in the future, I'm unsure at the moment, but again, any questions or queries, I'm happy to take that a little further. So, networking with VMware. We're going to have a quick look at NSX-T and Antrea, and a little bit on NSX Advanced Load Balancer, which is also known as Avi
Networks, although I think we purchased them around two years ago now; the name just sticks. So the first slide is VMware's connectivity portfolio, and I don't think it's even the full picture, because we leave the SD-WAN piece out here. There are a number of areas that fit into the modern app space around containers and Kubernetes, and pretty much the majority of these are supported with your OpenShift environments as well. I'm going to focus mainly on these points, which is the virtual network infrastructure itself, and we'll touch a little bit on the ingress load balancing as well; but if people want to hear more about some of the other areas, I can come back again if you put requests in for me. So there are two main CNI offerings from VMware: there's NCP, which is the NSX Container Plugin, and there's Project Antrea, or just Antrea now. With NCP, as I mentioned already when answering some of the questions, all workloads are connected directly to the NSX-T data plane; that means they become part of the overlay with the rest of the virtualization estate. NCP is managed and enforced by NSX, so you've got things like IDS/IPS, distributed firewall, firewall rules, there we go, my words aren't working today. That allows granular control: inside of NSX we can set firewall rules that allow us, within a namespace, which is obviously a unit of isolation itself, to do pod-to-pod networking, and we can also limit that pod-to-pod networking. Then we've got Project Antrea. Your VMs themselves, if you've got NSX in place already, are just going to be part of the NSX overlay, and Antrea itself sits within the clusters themselves, and these can be multiple clusters, regardless of where you deploy them, as per the orange box there. You can bring Antrea into the management plane of NSX, so that you can use NSX to centrally set those policies as well, but you can also control policies at the application level: when you're deploying your YAML into your Kubernetes cluster, you can then start to set policies by saying, hey, this is what I want my firewall rules to look like for this environment. So you've got a number of ways to consume that, and I've got another slide that takes you through a little bit of the differentiation between the two plugins' key capabilities. NCP, as mentioned, covers your full networking stack: you get the ability to do routing and NAT per namespace, and security policies, which is where we start to overlap a little bit with the Antrea capabilities. Within NSX we've got load balancing as well, so you've got that included, and you've then got end-to-end network visibility through the NSX Manager itself, so you can do things like trace flows and troubleshooting; essentially the biggest benefit there is the single pane of glass across your full networking stack. We integrate, obviously, with our own environments, and we have operators available to make it easy to consume at cluster bootstrap and bring-up for OpenShift as well, so we make that really easy to consume from that point of view. From Antrea's point of view, this slide, was it supposed to look like that? I think I rebuilt this slide to look like this, so I apologise for that, I forgot to hide it. Antrea is an open source CNCF sandbox project. It provides connectivity and network policy enforcement, it uses Open vSwitch under the hood, and basically it's focused, at a project level, on simplification, usability, and diagnostics at the networking level; you can use it on any cloud platform where you run
your container environment, and then it obviously gives you that extra control and feature set around security policies, scaling, and performance. You can interact with us through the community on that one, whereas NCP is contained within our products; it's in-house IP that we built ourselves. In terms of architecture, overall we have an Antrea Agent, which manages the pod interfaces and builds the overlay tunnels, and we've got the Antrea Controller, which centralises all of that, looks after the custom resource definitions, and manages the reconciliation of them when they're created. As mentioned, we use Open vSwitch as the data plane across your cluster, and this is built from the Kubernetes side up as well. In terms of what that gives you: the focus for Antrea really is high performance and policy management. On the left here we have the limited capabilities of Kubernetes network policies out of the box; I think OpenShift gives you a little bit of extra control, because OpenShift has the ability to do its own ingress controls, for example, so a little bit extra there as well. With Antrea, essentially you get the ability to do cluster network policies at a global layer. It also introduces the concept of tiering when you set those network policy rules. What does that tiering model look like? We basically build it on a seven-tier model, Emergency, SecurityOps, NetworkOps and so on, as you can see in front of you, and then you've got the ability to put rules in place, and these are evaluated in order of precedence as well. This slide hopefully makes it a little more consumable, which is: how do I use these different tiers? Well, if you're a network admin or a VI admin or a security admin, you're going to be using some of these higher-precedence tiers, whereas there are particular tiers that are going to be used by your platform and application consumers themselves. That means when they deploy an application, they can set their own custom resource for the policy they want in place, but there is already a precedent in place for baseline security set by your teams as well. Then the last piece to bring it all together, if you're already using NSX within your environment, is that you can centralize that management between clusters. So you're not actually sharing the overlay between clusters, but you are sharing that management plane between clusters with NSX, and those same administrators can configure things at the NSX level rather than having to go down to a per-cluster level for those pieces. That's really cool, because it gives you that global view, even if you're deploying multiple clusters for your application, maybe it's dev, test, prod or whatever; it gives you that global view, that global control, across all of that. Yeah, so I think this was the last slide, which is just a little bit more of a high-level architecture diagram of how it works under the hood, but I think we've got a little bit of time to go into a demo, so I'm going to stop sharing this screen and find the right terminal window. Yeah, so while you multi-task for a moment, we'll ask a couple of questions. From Daniel: is only NSX-T supported, or NSX-V as well? We don't support NSX-V for these integrations, and the reason for that is very simple. NSX-V was our original VMware software-defined networking solution; when we brought NSX-T to market, it was something that we redesigned from the ground up, rethinking how we
wanted to do software-defined networking, and there was a date a while ago where we started to put all of our feature engineering capabilities into NSX-T going forward. If you're using NSX-V in any environment today, we do still support that for those customers, but we encourage you to move to NSX-T for all of these new features. And apologies in advance if I butcher anybody's name, Johnny knows I'm terrible with names: Patrick asks, is there support for DPDK Open vSwitch with Antrea? That is something I don't know, to be completely honest with you, but it's a really good question. If you go to the Antrea open source project on GitHub and drop that into either the discussions or open it up as an issue or a query, then someone from the team will get back to you and answer it. I don't think it's actually covered in the documentation, off the top of my head, so it's something I'd have to really dig into from a technical point of view. Right, and while you're finishing bringing that up, I'll quickly address Robert, who was commenting about stretched clusters across two sites. We talked about this during the DR stream: a single cluster across two sites really doesn't give you increased availability benefits, because effectively one of those sites is always going to be off balance, it's always going to have the majority of the control plane nodes. So the solution, from a Red Hat perspective, has always been to recommend effectively two clusters, one at each site, and then either an active-active or an active-passive type of application deployment; what that precisely looks like depends on the application and the infrastructure services that are available, stuff like that. Spanning a cluster across sites, while it's not unsupported, we generally discourage it. I'll also add that one of the SAs and I have been working together on a blog post, so hopefully we'll get a blog post about that out soon, and next week we'll be joined by Annette Clouet to talk about the DR capabilities with ACM and ODF. So if you haven't already, subscribe to our calendar, if not the YouTube or Twitch channel, and you'll get an alert for all of those different topics as they come up. All right, go ahead Dean, please; I'm excited to see this demo, I haven't seen it yet. No problem. Okay, so if the controller brings up the terminal window that I shared, perfect. First things first: I'm not going to go through the full ins and outs of the bootstrap, but we support this through both IPI and UPI for an installation, and we start off, of course, by configuring this inside the install-config file that we use. It's really simple: we set the networkType, and then we set the various cluster networks that we're interested in using. Now, of course, behind the scenes we need to do a little more work than that. When you download the operator itself, inside the deployment folder there are a number of manifests, so even if you're using the IPI installation you are going to have to do openshift-install create manifests, copy those files across, and then you can continue with the rest of the installation; the OpenShift installer will package all of that together and install it where it needs to be. And of course we do support things like doing this in an air-gapped environment: if you pull the container down and host it in your own image registry, you would just change that inside of these YAML files.
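A rough sketch of that flow, with placeholder paths and the networkType value as I recall it from the Antrea operator documentation, so verify against the current docs rather than treating this as authoritative:

```yaml
# install-config.yaml - point the cluster at the Antrea CNI (value per the Antrea operator docs)
networking:
  networkType: antrea
  clusterNetwork:
    - cidr: 10.128.0.0/14
      hostPrefix: 23
  serviceNetwork:
    - 172.30.0.0/16
# then, before running the install (paths are illustrative):
#   openshift-install create manifests --dir=mycluster
#   cp antrea-operator/deploy/openshift/*.yaml mycluster/manifests/
#   openshift-install create cluster --dir=mycluster
# For an air-gapped install, edit the operator deployment YAML so the image
# references point at your internal registry instead of the public one.
```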
In terms of integrating with NSX, we can also do that at bootstrap. You pull the files down for this, I believe, from the VMware website, so you would have to already have a license or a trial available for NSX for that component, because you're using the NSX enterprise product for this piece. There's a little bit of a difference there: we have a bootstrap process, but we don't have an operator today for the pods that are deployed during the bootstrap to bring that up. Unfortunately that does mean, and anyone that's tried the NSX integration previously has probably seen this, we have to do a manual pod deployment on each of the host nodes as they're brought up, so that we can get that bootstrapped at the same time. I've spoken to the team and we are going to improve that process going forward; it's just the way it is at the moment. I would assume that because it's the SDN, it happens before even things like OLM have been deployed, so having an operator is kind of precluded there. Yeah, exactly that. Essentially we need to get in as everything in the environment is being brought up, and if you're integrating with NSX there's potentially policy management that you want to take advantage of straight away as well. So again, we've got a simple script for this; there is a little bit of work there. Typically what I used to do in my older environments is bring up just my master nodes, and then the worker nodes would follow along afterwards anyway, so that was easier to do; it feels a bit more like a UPI install from that point of view, unfortunately. Again, we are working on bringing that into an operator as well in the future, potentially, so that we can make it easier for people to consume. So let's move on: we've bootstrapped everything up, so I'm just going to show you very quickly a test application that I've deployed. It's a typical web cart application; I was playing about earlier, unfortunately, so I've got a few things that are dying there as well. We've got a front end that we're going to connect to, and that's available through our load balancer, and then that front end speaks to a number of different services, including one of our payment services in the background. So I'm just going to re-share my screen now and go to the NSX environment in a second, and that has just timed me out of my login there. I've got a number of tabs open because I was so excited to show everything to you today. So the first thing I'm going to show you: we've bootstrapped and we've brought Antrea into NSX itself, and straight away, if I go to Inventory and I go to Containers, I can see all of my OpenShift environment as a Kubernetes type through the Antrea CNI. I can see the namespaces, I can see how many pods are running in there, I can see the internal networks, and I can see any labels that I've brought through from Kubernetes. That's really important, because those Kubernetes labels are brought through and translated into NSX labels, so that we can identify things based on labels rather than on the deployments themselves. If I just click on Clusters, there's a similar type of view, but we can see where it is and what the infrastructure is. Again, we support bringing in other clusters too; in particular, if you deploy an environment into AWS and use
The next thing that we want to start thinking about... so I'll show you my application quickly. It's very simple: we go to buy something, we add it to the cart, I can place an order, and it processes a dummy order in the background. If I go to the Security tab and into my distributed firewall, this is where I can set pod-to-pod networking inside a namespace. I made this slightly easier for myself earlier today because I set something up in the background, but essentially I've set a destination of the payment service. If I click into this, straight away I can see an effective members list, and because I've got two separate Kubernetes clusters involved here, I'm actually pulling them both through. There's a very simple reason for that: in the group definition, where I decide how to find the pods that I want to affect, I'm doing it with a very simple tag. I've made this very generic, so I'm pulling through the tag of the payment service, and that brings in pods from different Kubernetes clusters. If I wanted just my OpenShift cluster, I could add a tag in there to say, for example, that the cluster has to be my OpenShift cluster, to make that more granular. But as you can see, I can very quickly build that up based on criteria. Obviously I'm denying the service to anything, just to break it: straight away I'm going to set that to reject, I'm going to publish this, and we'll see it go off in the background and start to update the firewall. If I go through my Hipster Shop again and try to buy the typewriter this time, I add it to the cart, I place my order, and we can now see it start to hang. So while I wait for that to time out in the application, let's have a look at some of the troubleshooting features. I'll go to the Plan & Troubleshoot section and jump down to Traffic Analysis. One of the great things about NSX for many years, I personally think, from when I was a consumer of these products myself, was the trace flow; it was really cool. I can do all my trace flows of my usual NSX environment, but I've also got this Antrea Traceflow as well. So now I'm going to select my cluster, select my OpenShift environment, and say I want my pod, which was my front end, and my destination, which was obviously the payment service it connects back to in order to process that payment. So I'm going to go to that payment service and click Trace. We know there's a problem there, and it's a man-made problem: it's dropped by the cluster policy. That's what we expected, to be fair, but we've got that level of visibility within NSX straight away, thanks to Antrea. I'm going to go back to my security rules, undo that change, and allow the traffic again. If I publish that, and actually if I go to my Hipster Shop while that's going, you'll see I've got an internal server error, and you can see there that it can't connect to the backend pod that it needs to write to, unfortunately. If I go back to this interface and change that, it will say success. If I go back to Plan & Troubleshoot, back to here, and just click Retrace and let it proceed, we obviously expect this to fully go ahead, and it takes me through that full flow of data. Now, this map would get bigger and more interesting if we did this outside of the namespace, outside of that isolation zone, so between namespaces, or even between Kubernetes clusters, or from a virtual machine network into our Kubernetes environment.
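For comparison, the break-the-payment-service rule from the demo can also be expressed with nothing but the in-box Kubernetes NetworkPolicy API, which the CNI then enforces. This is a minimal sketch: the namespace and the app: paymentservice label are hypothetical stand-ins for whatever the demo application actually uses, and unlike the NSX rule there is no explicit Reject action here; matching traffic is simply dropped.

```yaml
# Deny all ingress to the payment service pods in one namespace.
# Namespace and label values are hypothetical; adjust to your deployment.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-paymentservice-ingress
  namespace: hipster-shop
spec:
  podSelector:
    matchLabels:
      app: paymentservice   # selects the pods the rule applies to
  policyTypes:
  - Ingress                 # declaring Ingress with no rules = drop all inbound traffic
```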
So you've got a lot of visibility and control from that point of view. And if you didn't want to bring in the NSX component to manage this, well, we've been talking over and over about the policy control side, so I'm just going to show you some of the documentation from the Antrea point of view. We've got this ClusterNetworkPolicy resource; it's a custom resource definition that you install, and then it's very easy: you just write your YAML files, and you've got a number of options there for how you match pods, based on label selectors and so forth, and then you set ingress and egress and decide how you want to deal with that traffic, whether pod-based, through IP blocks, through services, through matching labels, whatever it may be. And within our documentation we've got a number of examples as well: you can see things like pod isolation within a namespace, strict namespace isolation, zero-trust cluster security postures, lots of different areas covered from that point of view.
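As a rough illustration of that ClusterNetworkPolicy resource, here is a hedged sketch of what one of those YAML files might look like: a cluster-wide rule that allows the front end to reach a set of pods selected by label and drops everything else. The API version, tier name, and labels are assumptions based on the upstream Antrea CRD and may differ between Antrea releases; the project documentation Dean points to has the authoritative examples.

```yaml
# Sketch of an Antrea ClusterNetworkPolicy; field names follow the upstream
# v1alpha1 CRD and may vary by Antrea version. Labels are hypothetical.
apiVersion: crd.antrea.io/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: isolate-payment-pods
spec:
  priority: 5                  # lower number = evaluated earlier within the tier
  tier: securityops            # one of the built-in policy tiers
  appliedTo:
  - podSelector:
      matchLabels:
        app: paymentservice
  ingress:
  - action: Allow              # explicitly allow traffic from the front end...
    from:
    - podSelector:
        matchLabels:
          app: frontend
  - action: Drop               # ...and drop all other ingress to these pods
```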
I have to say, I really like this because it gives the right people the right privileges and access to help with that troubleshooting process. You know, ourhope9 made a comment earlier about the folks you want in the meetings when you're doing triage and stuff like that, which is certainly valid, especially, in my anecdotal experience, for medium to small organizations. But I used to work with, and Johnny, I think you were the same, government organizations where you've got hard silos, and this type of stuff is really, really helpful because it gets rid of a lot of the overhead. Somebody mentioned, you know, SDN on SDN: how many layers does the network team have to go through to figure out what's going on? Or do they have to teach the application admin, who has access to the pods, to look for these things, or teach the OpenShift admin, who has access to the nodes, to look for them? So to me this type of stuff is really, really helpful for being able to control that in the appropriate place. This is awesome. For Antrea, we also support Antrea on bare metal OpenShift nodes as well, so that's another way to bring that policy and networking control together across your full environment for that centralized team, or to make life a little bit easier for them. Does it make sense to have something like an Istio or a service mesh, or does Antrea really kind of replace that? So obviously with a service mesh, the whole idea is bringing that global level of control between clusters and availability of your application. Antrea doesn't deal with that in terms of service routing between clusters on a global level, even if that global level is just two separate clusters in your data center; this is really around the pod-to-pod communication and networking control. Obviously Red Hat has Red Hat Service Mesh available to cover those pieces, and from VMware we have our Tanzu Service Mesh as well, which is supported with OpenShift, to give you the ability to do that and scale out applications. And if you are looking to do a service mesh in your environment, it gives you a different level of control: you can look at the headers and the requests coming into your application from an end user through their web browser and control at that level, and you can also do service redirection or control based on service-level objectives at a business level. That's a different way to look at some of the networking connectivity and security control compared to looking at it from a CNI point of view, but it's definitely something that we should talk about in the future. Right, absolutely. Service mesh and the distributed tracing, the Jaeger functionality, is really cool from an application standpoint of seeing... Dean, you highlighted it, distributed tracing, that was the word I was missing; you can see the headers, you can see precisely how long an API call took, and stuff like that. So Robert asks: Antrea's visibility into NSX-T and the policy options make it so much more attractive than NCP; are there any compelling reasons left to use NCP? Yeah, so obviously we've been building NCP for a lot longer than Antrea, so there are some features that don't quite cross over today. If you are interested in that overlay networking, bringing everything onto the same overlay, especially if you are looking at distributing environments across your data centers, NCP is probably the right way to go. There are a couple of different supportability pieces as well: if you want bare metal nodes brought into the same CNI, that's supported by Antrea and it's not supported by NCP today, and then there are a few things around encryption and how we handle bringing load balancer services from NSX into the cluster, whereas Antrea doesn't have a load balancing capability, so you would have to bring a load balancer into the system. So again, we have our own offerings, but we see customers using different solutions out there for that; it's about flexibility and what you want to take advantage of today. We're making it simple: we've got an enterprise model for you to consume, fully backed enterprise software, and we've got the open source software, which you can consume and which comes with a different level of service behind it, of course, as with all things when running an enterprise environment. Yep, the age-old question of where you get support from. So, I know we're about 12 minutes over at this point and I want to be respectful of your time, Dean. For our audience, if you have any questions, we'll take about another three minutes, so please don't hesitate to submit them. If we don't have time to address them, I will make sure we take all of them, address them in the blog post, and follow up with Dean as well. And Dean, I'm sure you would be very, very welcome to come back on the stream at any point. I think we might have to do a week of this, like an hour session every day for a week, because I think I covered like one subject in the end. Hopefully, you know... I really wanted to focus on that slide at the beginning about some of the integrations that are available, because for a lot of customers they're typically available today and the different silos within those customers don't realize it. I say, hey, integrate with your VMware storage, you're not paying extra for it, it's there to consume; if you're using NSX, get the NCP installed if you want some of the features that are available there; if you've got vROps and you're running on top of vSphere, install the management pack and start getting that end-to-end visibility. It really makes sense. Is the assisted installer supported on VMware, or is the assisted installer only for bare metal? No, I think I'll leave that one for you, Andrew. Yeah.
Yeah, so at this moment the assisted installer itself is tech preview, so it's not officially supported with anything. That being said, if you are using it and you do have issues, please do reach out; I know that team, and they will unofficially support it as they work towards GA, which is coming up. At this moment I know it works with bare metal, and I think there was an internal preview where you could tick the box for the full infrastructure integration and it worked with VMware. I don't know if they've made that public yet, so if you haven't looked at it in a little while, if you haven't tried the assisted installer recently, check and see if that box is there. But it is something they're working on for more than just bare metal. Was the assisted installer the one where you boot from an ISO image? Yeah. So I remember speaking to someone inside Red Hat when this was first kind of released to the wild as a preview, and they said, oh, we've not really tested it, but we have tested it by putting it inside a virtual machine, and as long as the virtual machine can boot the ISO, right, it's the same as a bare metal machine; obviously there are a lot of considerations there. So again, I guess that's the great thing about the community, right, and everyone listening to us today: get involved, give us your feedback, because this is where you start to see the differences come down the line in the products. Yeah, and as I'm sitting here thinking about this, I'm remembering that the team asked me to take a look at that internal preview, and I did, after like a two or three week delay, and I don't think I ever sent on my feedback. So sorry, folks, if you're listening to our livestream, I'll get that to you as soon as I can. So yeah, the assisted installer is really cool; we've covered it a couple of times, and I'll drop the links to those into the blog post when it goes out. But yeah, Dean, to your point, it's the one where you go to console.redhat.com, you click to create a new cluster, and it gives you an ISO. You boot the nodes to that ISO, they pop into the interface, and you can say this one is a control plane node, this one is a compute node, and go. So lots of cool stuff there, and I know there are some Red Hat folks who have figured out how to deploy the assisted installer locally, disconnected, as well, so that way you can use it even if you don't have internet access. So with that being said, I'm sure there are many more questions out there; please don't hesitate, you can reach out to me, Andrew.Sullivan at redhat.com. And I will follow up; I saw your comment in there, so please, if you'd like, send me your support case number, and I'll take a look and see if we can figure out any strangeness that was happening there. But anybody with questions, don't hesitate to reach out to me, Andrew.Sullivan at redhat.com, or Practical Andrew on Twitter, just like you've seen in the Twitch chat. Johnny as well, and I'll throw him under the bus: it's Jonny, with no H. I knew Johnny for like three years before I figured that out, and he was very polite about it. But you should see people spell my last name; it's always Richard. So yeah. And Dean, of course, Dean on Twitter, you are saintdle; I won't ask you to put your email out there. Thank you, everybody. We'll also include, and thank you Stephanie, I see you're on the ball, you included a link to Dean's blog in there. Dean has some really awesome blog posts about OpenShift on VMware, including a number of things on vRealize integration.
Using vRealize Automation to deploy OpenShift clusters programmatically is something we get asked about a lot. So yeah, by all means, if you have any questions, if you have any stuff that you would like to send to us, please don't hesitate. As I mentioned earlier, if you haven't subscribed, if you aren't watching our calendar, please do so, because we've got a number of streams coming up. Next week is OADP and disaster recovery... or, not quite; OADP is part of that. Sorry, somebody asked about OADP, but it's ACM, ODF, and OADP for disaster recovery. I think the week after that we're going to put Johnny on the spot and talk about validated platforms... patterns... yeah, validated patterns, I'll get that right one day. And what's the one after that? Oh, the Performance Addon Operator, coming up in mid-February. So we've got a lot of really exciting stuff coming up, and that's just as far out as we've planned. So, Dean, any last words? Thank you very much for inviting me to talk today, it's been a pleasure. Thank you for being so kind with all the questions and giving me quite a lot to chat about. I would love to come and present to you all again in the future, and please don't hesitate to reach out to us with questions and queries. With our open source stuff, please do communicate with us through the various GitHub discussions and so forth as well, and please do bring your questions and queries to both VMware and Red Hat. We're here, we do work together, we support each other's solutions, and we work out better together when it comes to virtualization; that's why we're here talking about it today, to show you all of these cool things that we've been working on together, and throughout the rest of this year you're going to see more of that as well. Yeah, couldn't agree more. Thank you, Dean. And Johnny, last words? No, Dean, thank you; outstanding today, this was awesome, you know, the feedback from the crowd and everything. So thank you for coming out, we'd love to have you back. You're welcome. Have a great week and stay safe out there.