Good morning, good afternoon, good evening. Wherever you're hailing from, welcome to another edition of Ask an OpenShift Admin. I am Chris Short, executive producer of OpenShift TV, and I am joined by the one and only Andrew Sullivan. Andrew, we have a special guest on today. Please introduce yourself and our guest. We do! So, thank you, Chris, and I'm looking forward to today's show. Fortunately our guest, Catherine du Bay, who is a product manager with OpenShift, was very gracious and very accommodating, because I asked her very, very last minute yesterday afternoon if she was able to join. So thank you. DM'd me at like six o'clock last night. Yeah, something like that. So thank you in advance, before I even introduce you, Catherine, thank you in advance. So this is the Ask an OpenShift Admin office hour. It is one of the office hour series of live streams here on OpenShift TV. What that means is that we are here to answer any of the things that are on your minds, right? Whatever it is that is bothering you, whatever questions come to mind, whatever questions you happen to have, you've got three experts here. Catherine is somebody that I rely on pretty heavily to help answer questions, to reassure me that I am either right or wrong, as the case may be. So if we don't know the answer, we are more than happy to go and find the answer, whether that means reaching back into the rest of the product management team or reaching into engineering to find the relevant people to get those answers for you. That being said, in the absence of questions from you all, we are very happy to have today's guest, Catherine du Bay, and today's topic, which is revisiting the installer and the install process. If you recall, in some of the first episodes that we did here of the Ask an OpenShift Admin live stream last year, back when it was called the OpenShift Admin office hours, we talked about installers.
We talked about the install process. We did a bit of a deep dive down into that and all the different things that are going on inside of there. But of course things change. There have been some new things, and some old things have changed, if you will. So we wanted to revisit that, because it's important to keep up with what's going on and make sure you're aware of all of the options that are available out there. And quite literally, I could not think of a better subject matter expert or a better person to come on and talk about this than Catherine. So Catherine, if you will, please introduce yourself. Hi everyone, I'm Catherine du Bay. I am part of the OpenShift product management team. My focus is on installation and updating of OpenShift. Yay! So we'll get to that in just a moment. We'll talk about the install process, we'll talk about a day in the life of Catherine, which I'm sure is even more chaotic than Chris's and mine. But first, in long-standing tradition, the things that are top of mind, things that have come up recently that I want to highlight and make sure that you all are aware of. This was a relatively quiet week; Red Hat had a holiday over the weekend, so it was just quiet. I'm going to treat that as a good thing, right? As administrators, if it's quiet, that means everything is running smoothly. I think it was Futurama, right, with the line that you know you've done it right if nobody thinks you've done anything at all. That's a good administrator: if everybody goes, "what's that guy doing here?", that means nothing's broken.
That's a good thing. So, a couple of things. I got asked, and this comes up periodically, about the in-tree VMware storage provisioner. When we deploy a cluster that is vSphere integrated, whether vSphere IPI or vSphere UPI, we configure the in-tree storage provisioner. That provisioner, if you look in the upstream docs, supports some things, or is capable of doing some things, that OpenShift doesn't support or is not capable of doing: things like multiple DRS clusters, multiple vCenters, and importantly, things like Storage DRS clusters. I saw at least one example this week where the customer, I think they did a UPI install, was able to successfully deploy using a datastore cluster. However, what happened is Storage DRS did what Storage DRS does: it moved one of the disks for a virtual machine, and that caused a ripple of things to happen, not the least of which was a bunch of PVs that wouldn't connect, because the provisioner lost track of where they were. So please be aware, please be conscious: if you have a Storage DRS cluster, you don't want to use that for OpenShift deployments. You want to use just a standard datastore. Cool. The second thing that I had: somebody sent me an email. One of our IBM compatriots sent me an email asking about examples for deploying the registry using non-default storage, or other storage that is configured. We were chatting about this before the show started, and Catherine very kindly corrected me: after the cluster is deployed, the image registry operator detects what infrastructure it's on and then configures the storage appropriate for that. And Catherine, help keep me honest here.
So basically, if it says, hey, I'm on Azure, it does things like talking to the Cloud Credential Operator and saying, hey, please give me some Azure object storage credentials, or whatever storage credentials, so I can request my PVC or connect to that object storage, and then it goes through and configures all of that. But what if I want to do something different? How can I do that? So I'm going to work on that, and I've got some background things going on. Hopefully next week I'll be able to show some of those examples. I'm also going to work on a blog post for openshift.com to talk about that. Nice. Yeah, so the way it works, really quickly, I'm not spending too much time on it: any of the OpenShift components that require cloud API access will issue a credentials request, and that essentially is what gives them enough credentials to go create the resources they need. In the case of Azure, since you brought that up, that would be its own storage account. Once it has the permissions, it can then request what it needs for the registry, and that gets created as part of the operator itself. Hopefully that provides the right level of detail you're looking for. That's great, thank you. So the last thing that I have is something that just came out this morning. It was shared by Chris's and my teammate Eric Jacobs, who also has a live stream on here. Eric has been working in conjunction with Kirsten Newcomer and a handful of other folks on updating what we call the Red Hat OpenShift Container Platform architecture design guide for PCI DSS, which is a huge mouthful. Effectively, if you are a customer who is interested in or has PCI DSS requirements, this is meant to help you with that architecture process, right? How do I design, deploy, and manage my OpenShift deployment within those constraints?
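As an aside, the credentials flow Catherine described a moment ago can be sketched roughly like this. This is my illustration, not from the stream, and every name and the role below are placeholders: a component such as the image registry declares a CredentialsRequest, and the Cloud Credential Operator mints scoped cloud credentials into the referenced secret.

```yaml
# Hedged sketch of a CredentialsRequest; names and role are illustrative.
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: openshift-image-registry
  namespace: openshift-cloud-credential-operator
spec:
  # The Cloud Credential Operator writes the minted credentials into this
  # secret, which the requesting component (here, the registry) consumes.
  secretRef:
    name: installer-cloud-credentials
    namespace: openshift-image-registry
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AzureProviderSpec
    roleBindings:
    - role: Contributor
```

On Azure, credentials like these are what ultimately back the registry's own storage account that Catherine mentions.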
So I posted the link to that into the chat there; be sure to check it out. If you have any questions, feel free to reach out to me or Chris: andrew.sullivan@redhat.com, chris.short@redhat.com. Not just cshort. Oh, cshort? Yeah, that's right, sorry. It's Chris Short on Twitter, but cshort on email. I mean, if I went about and created a consistent handle across all platforms, people wouldn't know where I am anymore. This is what, episode 25? We're only 25 episodes in; maybe in another 25 I'll get it. Yeah, eventually. All right, so I see there are a couple of questions already, so I think we should go ahead and address those, and then we can go from there. And again, please, at any time, feel free to ask any questions that you have, whether or not they're related to today's topic. The installer is today's topic, but don't let that inhibit you. So the first one, from, let's see, Rahul: I have to migrate from OpenShift 3 to 4, could you please share any reference links? So Chris, I see you shared the migration topic link there; I think that's a good one. Yeah, we do have a number of migration tools, not the least of which is the Migration Toolkit for Containers, which is what I was thinking of but couldn't remember the name of. Basically, you stand up the new cluster, the toolkit gets deployed to the new cluster, and then you point it at the old cluster and say, hey, migrate all this stuff for me. We just did a stream on that covering storage, so you can do this with storage, you can do this with just about anything. I'll drop the link in chat there. Oh, that's the wrong link.
Yeah. I spent a little bit of time yesterday planning out shows and episodes for the rest of this quarter, so I think I'm going to tack that migration topic onto the end. Hopefully some time in June we'll be able to cover that here on this show. Let's see, another question we had, from Dean: if I use the assisted installer with bare metal, how can I add a new node post-installation? So Catherine, I'll kick this one over to you, even though I know the assisted installer is kind of parallel to what you're doing, and if you want to elaborate on that, please do. So let me maybe just restate the question so I understand it: how do you add a node to an assisted-installed cluster? Yeah, I believe so. And it actually can be done a couple of ways. With the assisted installer itself, and this isn't something I'm involved with, but I'll fumble through it as best as I can, I believe you can still use the bootable assets, or whatever they call it, the boot media, the ISO that they give you. So you can still stand up nodes and add them in that way, but you actually really don't have to. It's an OpenShift cluster at the end of the day; all you really need is booting of the operating system and knowing where to go get the ignition, and the ignition is always hosted on the master nodes. There's an ignition boot service that you can leverage. So it's no different whether you do it on day one or you do it on day 27; it doesn't make a difference, right? That's the typical method, and it works across any of the installation types, not just the assisted installer: UPI, IPI, it could be done any of those ways. So you can manually add nodes in an IPI cluster, you can use machine sets, but essentially all it really is, is you're passing a user data field to that node, which is the location where to go get its ignition config.
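That user-data pointer can be sketched roughly like this (my illustration, not from the stream; the cluster domain and CA value are placeholders). It is just a stub Ignition config that merges the role config served from the cluster's control plane:

```json
{
  "ignition": {
    "version": "3.1.0",
    "config": {
      "merge": [
        { "source": "https://api-int.mycluster.example.com:22623/config/worker" }
      ]
    },
    "security": {
      "tls": {
        "certificateAuthorities": [
          { "source": "data:text/plain;charset=utf-8;base64,<root-ca-base64>" }
        ]
      }
    }
  }
}
```

Once a node boots with something like this, its pending certificate requests show up via `oc get csr` and can be approved with `oc adm certificate approve <name>`.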
So depending on the node's role, whether it's a worker or a master, say in a case where you had to replace a control plane node, you would boot with that ignition; that in turn would then work on joining the cluster, and then you'll need to approve the CSRs for it, which essentially are its certificates, the client and server certificates, to be able to securely communicate. So just to kind of rewind that back: it sounds almost as easy as using the assisted installer, minus the fact you have to go manually approve CSRs. Yeah, you still have to do that, especially in the bare metal cluster case, because you don't really know that the node itself belongs in the cluster. We get a little smarter with some of the cloud providers, where we can say, well, hey, we kind of guess you belong, because we know you're on this cloud infrastructure, and the networks that you're running on are the networks you should be on. So there are things it looks at to do the auto-approval, whereas bare metal is a little more challenging; there's not quite as much information there. Yeah, Dean says using the assisted installer he doesn't have to deal with ignition files, which, right, they're still there, they're just invisible to you. That's one of the nice things about it. Yeah, definitely. I think the original name of the assisted installer, maybe to paint a different picture of it, was "UPI plus plus," and then they called it the assisted installer. The way I describe UPI to people is: user-provisioned infrastructure is essentially like no installation, right? There's no installer. You're just getting all the necessary artifacts that you need to bring up a cluster, but from there you're responsible for provisioning everything. With IPI, obviously, that's full-stack automation.
That means it's provisioning everything for you; it automatically just comes up and you have a running cluster. So the assisted installer is really kind of the middle of the road, right? You have the notion of boot media, or ISOs; those nodes boot up, they essentially establish identity by connecting back to the service, and then you form a cluster based on where you're running those images. So it makes it a lot easier, as opposed to manually having to go to each node and provision it, right? Anyway. Yeah, so I noticed that you answered, probably out of habit at this point, what I can only imagine is your most frequently asked and answered question, which is the difference between IPI and UPI. So I have two questions related to that. One: from your perspective, are there advantages or disadvantages to one install method versus the other? And then the second part, or the follow-on to that: is there a difference in the cluster that gets provisioned at the end of that? Yeah, that's a great question. There's definitely a large perception that UPI is inferior to IPI. I would argue the other direction: IPI is probably more inferior to UPI, in that it's a bit more prescriptive. I think the best way to describe it is the 80% use case: what 80% of customers intend to use for options and settings on their deployment, we cover as part of that automated installation process. Whereas UPI really opens the door to customizing of the infrastructure, and this is sometimes where people get confused: they equate it to customizing the install of OpenShift.
It's actually customizing of the infrastructure that OpenShift runs on, so the resulting cluster itself doesn't necessarily have to be different. The docs sort of explain it at maybe the far extreme, where you manually provision everything, including your nodes, your control plane nodes, and everything else. It actually doesn't have to be that way: you can still use some of the IPI functionality in a UPI cluster, and you can actually do it on day one. For example, say you wanted to do machine set creation. We actually generate those as part of the installation, or OpenShift, manifests, even on a UPI cluster. It's up to the admin whether to apply those and use them. So they can actually be applied in advance of creating a physical node: instead of what you would do in a normal UPI process, you could apply the machine sets, and then the cluster would go off and provision, in cases where there is a machine API provider available for the platform you're deploying onto, which would be VMware or AWS or GCP or Azure, et cetera. So you can make those clusters look nearly identical. I would say there are probably really tiny differences, but not enough from an operational perspective; these clusters will perform identically. You can have elastic and dynamic compute capacity the same way you can with IPI, and you get the advantage of being able to customize the infrastructure in cases where you wouldn't be able to do so with IPI. There are a lot of cases where that may be
There's a lot of cases where that may be Um more beneficial to do it in upi and sort of script it for your organization as opposed to trying to make ipi fit A model that it's really not intended for Yeah, I know and one of the things I want to follow up with you on in a couple of minutes here is You know, you and I we tend to answer a lot of questions You know with the field and with customers around the installation process and a lot of times that bleeds over into architecture So I definitely want to talk about that a little bit So dean dean has a question here, which you you kind of partially answered so Why do we consider upi to be superior when ipi allows automatically scaling nodes using effectively an oc command You know and whereas with upi, you know, he's saying you have to do your own automation which I see christian and and chris are chatting down below saying that that's that's not true And you said the same thing Um and then kind of the second one, which is tangentially related which uh acm requires ipi clusters ipi deploy clusters Yeah, so the the the reason for that Has to do with the provisioning technology under the covers So ecm doesn't require ipi It requires ipi to be able to provision a cluster, but it doesn't require ipi to adopt a cluster so you could still Do a upi deployment adopt that deployment for management through acm It's only when you want acm to provision The underlying cluster that you would need ipi and the reason for that Has to do with the integration of How it's provisioned so it's actually it integrates through a service called um open shift hive Which is a api for provisioning clusters Now as part of that For that service to work it needs to not have provisioned the underlying infrastructure And this is where I you know, sometimes maybe people struggle a little bit as why one is better than the other In cases where you need that additional customization of the infrastructure Hive or anything else isn't going to know how to do 
it, right? You're going to be your own admin, your own controller of how you want that to look: what shape, what size, whatever. And that's where you could still look at automating the provisioning. A good example: say you're on AWS. We provide CloudFormation templates that you pretty much follow one by one, six stacks, and you would get a running, functional cluster. But do you really need to have six CloudFormation stacks? No, actually not. You could tie it all into one script with a couple of questions, and it would look just like IPI at the end of the day; you'd have an identical cluster. You can absolutely do that. So that's the case where you really need that customization, and that's why it won't work with ACM: because ACM doesn't know about that special infrastructure customization. It only works with, kind of, here's the 80% use case with IPI, and if you can adhere to this, then we can go provision it, because we know how to do that as part of the regular installation. So I'm going to take some questions a little out of order here. Ricky, I see your question; I'll get to that in just a moment. And then there were multiple answers there. And then JP, I see you chatting about some things as well, so we'll address those. So, just to kind of round out or complete the thought process here with Dean: first, I'm going to expose my lack of knowledge around ACM. So ACM, Advanced Cluster Management, I think is the name of it. Yes. Yep.
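For reference, here is a hedged sketch (mine, not from the stream) of the Hive ClusterDeployment object that drives this kind of provisioning; every name and reference below is a placeholder:

```yaml
# Hedged sketch of a Hive ClusterDeployment; all names are illustrative.
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: demo-cluster
  namespace: demo-clusters
spec:
  baseDomain: example.com
  clusterName: demo-cluster
  platform:
    aws:
      region: us-east-1
      credentialsSecretRef:
        name: aws-creds
  provisioning:
    # Hive feeds this install-config to the regular IPI installer,
    # which is why UPI-style custom infrastructure isn't expressible here.
    installConfigSecretRef:
      name: demo-cluster-install-config
    imageSetRef:
      name: openshift-v4.7
  pullSecretRef:
    name: pull-secret
```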
So Advanced Cluster Management is Red Hat's multi-cluster management tool. Effectively, it's a management plane where you either create and deploy clusters, as Catherine was saying, or join existing OpenShift clusters into that management plane, and then you can do things like apply security policies, apply RBAC, et cetera, across all of those member clusters. So, kind of walking through with that set: Dean here is saying things like cluster pooling are coming out that will require IPI. I'm not familiar with ACM's roadmap. I don't think ACM requires IPI. It requires that you install the ACM node, what they call the hub; that has to be installed in an IPI fashion, but after that you can add additional clusters that are not IPI. Well, I'm assuming what you're saying here is that ACM has a feature called cluster pooling that will use IPI, and what Dean's building here is a case, I guess, against Catherine's statement that UPI is the superior method. Which, I will say that I agree, but it's like a 51 percent to 49 percent degree of agreement. I agree because of the flexibility and the scalability, or I should say the simplicity, around UPI, which I find to be better, particularly the load balancer aspect. Yes, there's an integrated load balancer that comes with IPI, but it is fairly limited, both in scale as well as in configuration, whereas with UPI you can configure that however you like, in whatever manner you want, and still add machine sets on day two to be able to get the cluster autoscaling. So I'll take a step back now. Catherine, do you have any thoughts? Do you have anything to add there?
Yeah, you started hitting the nail on the head with that one. The reason why I still stand by that comment is that IPI is good for a certain prescriptive use case, and I would say we're trying to get better and expand that out. I think from the metrics we've seen on who's using what, UPI versus IPI, we're starting to see a definite uptick in IPI. Don't get me wrong, we want people to use IPI; it's a lot easier process, right? So I'm not trying to argue there. But if you look in terms of what you can do with one versus the other, UPI is actually the more capable one. You can pretty much do anything you want. A good example: not just the built-in internal load balancer and DNS functionality that we have with IPI, for sure, for all the on-premise ones, but even in the cloud you run into the same scenarios. Say, for instance, you have a case where a customer wants to extend an on-premise networking service, like DNS. The way we do IPI today on AWS, it's Route 53 or nothing, right? So you don't have that option, whereas on AWS with UPI you'd be able to do that. So I think that's where a lot of the differences are. With one, there's a term for it: we give you the bullets, we give you the gun, and we let you use it any way you want. Whereas with IPI, we put one in the chamber and we point the gun, so we make sure you don't get hurt with it. That's a big difference in how you can do it. One is pretty much the flexibility to do anything you want, with the risk that it could be a lot more complex; I guess, you know, customization adds complexity, right?
Just as a sort of fundamental statement. Whereas we look at IPI as sort of the most robust, most reliable, almost perfect way to get a cluster a hundred percent of the time, but we limit what you can do as part of that initial bring-up, and everything else that doesn't fit gets pushed out. This is probably another majorly important distinction: installation in OpenShift 4 has really gotten a lot simpler. We purposely keep a lot of options out of the installer on day one and push those to day two, and the only real guidelines we have for day one are: this feature is prolific, like everyone wants it; or it's unsafe to change on day two; or it's required for installation on day one. We're very strict on that. But as such, you don't have that infrastructure customization flexibility that you get with UPI. Hopefully that was not too much. So I want to poke on that just a little bit. You know, we've discussed here on the Ask an Admin live stream before that, with the change from OpenShift 3 to OpenShift 4, we went from Ansible playbooks that had, I think somebody told me, 1200-plus options that you could set in the values.yaml, or preferences, or whatever it was, to openshift-install in version 4, where the install-config you can customize to some degree, right? openshift-install explain is your friend. I love that command, because you can go through and see all of the different options and all the things that you can configure, but it might be a hundred options total across all of the infrastructures, all of the different things that you can do. And I think you really highlighted that: the result of openshift-install is the cluster being up and running and ready to do all of those other things, which were probably 1100 of those 1200 options in version 3, things like configuring all of these little cluster add-ons and minutiae and all that
other stuff. And with 3 it was: unless all of it succeeds, none of it succeeds. Whereas with 4, if the cluster install succeeds, great, you've got a running cluster, and now you can go through that, hopefully not, but potentially iterative process of deploying all those add-on services. And where I'm going with all of that is: we sometimes get asked, and Catherine, you and I have had these conversations before, partners will ask us, and customers as well, hey, I want to add, maybe, configuring my external load balancer as part of the installer, right? I want openshift-install, I want there to be a stanza for my F5, so it'll go out and configure the F5. Can we add that? Why can't we do that type of stuff? So I'd appreciate your thoughts, your perspective on that. Yeah, that's a great way to put it. I don't know that it was 1200, but if it is, that's awesome. I know it's many, many hundreds, put it that way. So, you know, that was always the thing. I sort of inherited the installer midway through the 3.x days, so I can't necessarily blame anyone, and I don't want to shift the blame to anyone, but it was always that one more option. It was just one more option and I'd have the cluster the way we wanted it for this customer. And what we did is a lot of snowflakey things, where it was unique, special, really wasn't generally applicable, but that one customer had that option. It was all written in Ansible; everyone could change it, they could submit PRs. So it ended up getting to this point where the reliability suffered a lot, because of the permutation of different options, and which ones could be put together; different combinations would cause different results. So that was a pretty tricky problem, and as we got to 4, we've really been strict. One of the things that we don't allow is any flags whatsoever.
I think there are probably, like, two flags in the whole installer. I know there's --dir to set the directory, and I want to say there are one or two others, like log level and things like that, but there's very little. And we've gotten the asks that, oh, we could just parameterize it, just put in a bunch of flags, and you'd be able to just run it without changing anything. And we're like, but that's not the API, right? So I think we've been very strict on this, and it has shown, from the reliability perspective, from the CI that we've gotten in place. When you run this command and you basically fill out all the options, you're getting a cluster, right? Unless something broke, or permissions, and we're trying to check more and more validation on the environment. So I won't say we're a hundred percent perfect, but it's significantly more reliable. So we've sort of shifted the problem now from day one to understanding how you configure Kubernetes on day two, and that's always been a struggle, right? Because we sort of looked at it as: a knob, bell, and whistle makes it easier on day two, but we sort of broke the installer by making it more complex. So that's been a little bit of a struggle: getting people to think more Kubernetes-like, more config maps, more manifests. You can still do a lot of that.
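That manifest-based approach can be sketched like this (my example, not from the stream; names and provider fields are placeholders): run `openshift-install create manifests`, then drop an extra object, such as an additional MachineSet, into the generated directory before running `create cluster`.

```yaml
# Hedged sketch: an extra compute MachineSet added as a day-one manifest.
# Cluster name, labels, AMI-implied defaults, and placement are placeholders.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: mycluster-infra-us-east-1a
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/cluster-api-cluster: mycluster
spec:
  replicas: 2
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: mycluster-infra-us-east-1a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: mycluster
        machine.openshift.io/cluster-api-machineset: mycluster-infra-us-east-1a
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: ""   # e.g. infra nodes on day one
      providerSpec:
        value:
          apiVersion: awsproviderconfig.openshift.io/v1beta1
          kind: AWSMachineProviderConfig
          instanceType: m5.large
          placement:
            availabilityZone: us-east-1a
            region: us-east-1
```

Anything you could otherwise oc apply on day two can ride along this way on day one.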
You can still add those in on day one, you know, with create manifests, so we do have that option. Anything you can oc apply, I think, we can make happen. If you wanted more than one machine set, if you wanted infra nodes on day one, everyone argues it can't be done, and I'm like, yeah, actually it can. You just have to create the manifests. So we've kept the approach, but we've moved it from a flag or a field to a Kubernetes way of doing it, and I think that aligns a lot more with how customers are doing config management, with ACM or GitOps types of situations for policy enforcement. So these are the things where I think we just need to shift our mindset from having a bell and whistle or a knob or a flag to really thinking in Kubernetes: how do we do that? And that's been, I think, the biggest challenge. I think people are starting to come around to it, and as we see more and more environments roll out, we see the GitOps types of deployments, where they want to be able to just check some PR in and, boom, be able to push out a config and make their cluster declaratively conform to what they've defined it to be. Yeah, and I think the popularity of Christian's live stream, the GitOps happy hour, which is happening tomorrow, by the way, shows that we, as a Kubernetes-using industry, are maturing, right, adapting and adopting many of these new philosophies. Like any new technology, it takes a little while. Okay, so I'm going to go back and revisit some of those questions that we were talking about before. I'm so lost now, so apologies to anybody who has chatted anything in the last five minutes, because I've been holding the chat on my screen right where Ricky asked his question, which is: I would like to host multiple nodes for students to access remotely. Is there a high-level roadmap on how I could accomplish this with OpenShift?
And I think "roadmap" here is not like roadmap futures, but, how can I do this type of thing? So, Ricky, first, a couple of things. When you say nodes, are you referring to OpenShift clusters? In which case, the simplest thing to do would be to use the IPI installer against, like, Azure or AWS or Google or one of the public clouds and just spin up clusters. At the end of that install process, it'll spit out the connection endpoints and credentials, and you can just hand those over to your students and let them do what they do. Alternatively, if it's a shared environment: essentially, spin up a cluster somewhere that's publicly accessible, again AWS, Azure, et cetera, connect in, and then use something simple like htpasswd authentication and give each one of those users a set of credentials that they can access the cluster with. Entitle them, or give them permissions to whatever it is that they need permissions to, and kind of go from there. The other scenario that might be possible, or that I might be thinking of here, is maybe you want to give them something that's lighter weight than a full five or six node cluster, in which case CodeReady Workspaces, or excuse me, CodeReady Containers, would be the answer for that. So either helping them to deploy that locally to whatever resources they have, or potentially deploying that onto something that they can publicly access. So, I don't know, maybe Packet, which is no longer Packet, it's now Equinix Metal.
Yeah, so renting a server from Equinix, deploying a number of instances on it, and then handing over credentials for those is another way to potentially do that. Catherine, Chris, anything to add there?

Yeah, the other option is, if it's just a student case, you could use Hive. You could leverage a hub-and-spoke model: deploy Hive on your hub, and then provision additional clusters through ClusterDeployments as they're needed. Again, I'm not sure if it's clusters or nodes in the question, but that is another option, and it's fairly minimal to set up in terms of using Hive.

Good to know. I need to learn more about Hive.

So, JP asks: can we do UPI for vSphere with Windows worker nodes and OVN-Kubernetes networking? I don't think this is an installer limitation. It's the Windows, what is it, the Windows Machine Config Operator. (I always think "Windows Media" for some reason, which is not the same thing.) So I don't think it's an installer limitation; I think it's a Windows Machine Config Operator limitation. And I thought they were pretty close on that. I know the BYOH, bring-your-own-host, Windows support isn't quite there yet; I think that's coming soon, though. But I thought the VMware support, sort of a machine-set type of deployment where you're spinning up your own Windows nodes, was just about available, or was already available. IPI works; that's GA. I think it's GA, yeah. The BYOH piece is still a release away, if I recall. Christian says it's in version 2.0.next, so thank you for spending some time in chat during your workout, Christian. Let's see, scrolling down here. I know there were some others.
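To sketch the Hive hub-and-spoke option Catherine mentions: the hub cluster runs the Hive operator and stamps out spoke clusters from `ClusterDeployment` resources. Roughly like this, with every name, domain, region, and secret reference a placeholder:

```yaml
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: student-cluster-1
  namespace: student-clusters
spec:
  baseDomain: example.com
  clusterName: student-cluster-1
  platform:
    aws:
      region: us-east-1
      credentialsSecretRef:
        name: aws-creds
  provisioning:
    imageSetRef:
      name: openshift-v4-release      # a ClusterImageSet defined on the hub
    installConfigSecretRef:
      name: student-cluster-1-install-config
  pullSecretRef:
    name: pull-secret
```

Hive reads the referenced install-config and pull secret, runs the installer on your behalf, and stores the resulting kubeconfig and admin credentials as secrets you can hand to a student.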
So I'm going to take just a second to read through the chat. Rapscallion Reeves asks: does the installer allow for mixed IPI and UPI deployments? If not, is there an easy-ish way to add manual UPI nodes to an IPI cluster? So I'm just trying to figure out which way we're asking this, whether we're trying to automate a UPI install or go the other way around, doing an IPI and then adding...

I think what he's asking is: can I deploy with IPI and then add manually provisioned nodes as day two, right? You can. The biggest caveat is that they must be on the same platform you've deployed the cluster on. One exception; actually, let me take that one step further. If you're on AWS and you want a bare-metal node to manually join the cluster, that won't work. But if you did a platform-agnostic install, where you didn't pick a cloud provider (think `platform: none` in the install-config), that would essentially allow you to mix any which way you want. The downside is you wouldn't enable any of the platform integration: no autoscaling, no dynamic storage, no provisioning of the underlying infrastructure, all of that. Pretty much anything IPI does wouldn't work in that situation.
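For reference, the platform-agnostic install is selected with `platform: none` in the install-config. A trimmed sketch (domain, cluster name, and the elided credentials are placeholders):

```yaml
apiVersion: v1
baseDomain: example.com
metadata:
  name: mycluster
compute:
- name: worker
  replicas: 0          # workers are provisioned and booted manually
controlPlane:
  name: master
  replicas: 3
platform:
  none: {}             # no cloud/infrastructure provider integration
pullSecret: '...'
sshKey: '...'
```

With `none`, you bring your own DNS, load balancing, and machines, which is exactly why the platform integrations like autoscaling and dynamic storage aren't available.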
But you could do it if it's within the same platform; you could do a manual machine addition on the same platform. What you would end up doing: there is a URL on the masters for the Ignition config for a worker. I'd have to look up exactly what it is, but I think it's even covered in the docs somewhere. You would pass it in through user data, depending on the platform, whether it's hosted on a web server somewhere or, again, served off the cluster, which is probably the easiest way to do it. You would then boot that node, and once it joins, it would be a single worker node on its own.

So this is an interesting one, and not one that, off the top of my head, I would have thought to say yes to. But effectively it's the same as doing a UPI install and then adding machine sets on day two, just in reverse. You're adding nodes that aren't members of a machine set but are part of a machine config pool, and you're following the exact same process.

Yep. So, good to know. I'll follow on with a question that might be asked later on; I just haven't gotten there yet. Can you convert between IPI and UPI? Can I deploy IPI and then change that to a UPI cluster, or vice versa?

I think that's one of those trick questions, because there's really not any notion of IPI or UPI within the environment. Think of UPI as user-provisioned infrastructure: if I, the user, provision the infrastructure, I can obviously do that, setting up all the resources needed to perform a successful deployment of OpenShift. Vice versa, if you use IPI to have the installer provision that on your behalf, you can do that as well. The trick is: what are you trying to manage on day two? What are you trying to move from installer-provisioned to user-provisioned? Because the cluster on day two is nearly identical.
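The URL being reached for here is the Machine Config Server on the control plane, which serves worker Ignition configs on port 22623. A minimal pointer config passed as user data might look roughly like this; the cluster domain is a placeholder, the Ignition spec version depends on your RHCOS release, and a real config embeds the cluster's root CA where the stand-in below appears:

```json
{
  "ignition": {
    "version": "3.2.0",
    "config": {
      "merge": [
        { "source": "https://api-int.mycluster.example.com:22623/config/worker" }
      ]
    },
    "security": {
      "tls": {
        "certificateAuthorities": [
          { "source": "data:text/plain;charset=utf-8;base64,<cluster-root-CA>" }
        ]
      }
    }
  }
}
```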
I want to say 99-and-44-hundredths percent, Ivory Soap, identical. So you're still going to use a lot of the same operators that essentially manage resources. A good example is the ingress operator on AWS: it would still be managing the network load balancer for *.apps ingress. On day two you could disable that; as part of that operation, in the operator, you can turn it off. Likewise, you could go to the internal registry and say, I want to use different storage. Or, I'm trying to think of the other one, you could stop using the Machine API and machine sets and manage nodes manually. So I think it just depends on what you're trying to change. I don't think there's really a notion of "I'm going to convert a UPI to an IPI" or an IPI to a UPI. It just depends on what secondary-level services you want to enable or disable.

Got it. And I see here, Usami (apologies for butchering anybody's name) asks: is it possible to use IPI with an external load balancer at the same time? And I think the inverse of what I was thinking you were saying is also true; we see people ask, can I use the integrated load balancer of IPI with a UPI deployment on premises? And I think what you just said was more or less along the lines of: yeah, deploy IPI and then just don't use the machine sets to scale nodes.

It's a little trickier in their question. I think what they're probably asking, and maybe I'm mistaken or reading between the lines here, my guess is that, for instance, they did a VMware install, where we would have Keepalived and HAProxy doing load balancing for the cluster, and I'm sort of wondering if they're saying: no, I'd like to use an F5 after I deploy the cluster.

That's definitely what they're asking. I asked, or rather added on to, that question. So yes, you are answering it. Yeah, so that's the trick.
I want to say you're probably the most experienced on this, but I'm going to sort of flubber through it. There's no good way that I'm aware of to manage that with the internal Keepalived and HAProxy setup. I believe it's just you going through the MCO to basically tell it not to run anymore, but I don't recall what other problems that digs up as you do it. I think it's technically possible, and I'm sure someone's probably done it. You probably have some better insight, and I do want to hear this.

So, to answer your question directly: yes, but no. What Catherine is saying is true. Technically you could go in and, using the MCO, remove or disable the Keepalived functionality associated with the ingress endpoint, and then move that virtual IP address, or the DNS name associated with it, to an F5 or a Citrix or whatever external load balancer. But that goes back to the whole: now you're breaking, or deliberately modifying, that opinionated IPI installation process, and should you really be using UPI in that instance? And if you want to continue to do things like automatically scale machine sets, well, great, you can add that on day two. So the real answer here, the way that doesn't deliberately break core IPI functionality, is to simply add a second domain and then point that domain at the external load balancer. The default ingress is *.apps, so maybe you have a *.prod or a *.something, and that's hosted on, I'm going to pick on F5, an F5 load balancer. When you create your routes, you simply say that they are managed by that second set of route instances, excuse me, ingress instances. And then, particularly with our partners who have certified Operators (words are hard at the moment), right?
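A sketch of that second-domain approach: an additional IngressController that only admits labeled routes, with its wildcard DNS record pointed at the external load balancer. The domain, shard name, and label are placeholders, and `HostNetwork` is just one publishing choice for on-prem:

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: external-lb-shard
  namespace: openshift-ingress-operator
spec:
  domain: apps-prod.example.com   # second wildcard, resolved to the F5/Citrix VIP
  endpointPublishingStrategy:
    type: HostNetwork
  routeSelector:
    matchLabels:
      shard: prod                 # only routes labeled shard=prod are admitted here
```

Routes then opt in with something like `oc label route myapp shard=prod`, and the default *.apps ingress continues to work untouched.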
They'll do things like automatically update that external load balancer configuration for additional worker nodes as they get provisioned, all of the things that they normally need to do. So it is possible; it's just a little bit different than you might be expecting. Hopefully.

Yeah, maybe to take that one step deeper, because I think sometimes there's a misunderstanding of why we even use this in the first place. The reason we have these services as part of the IPI deployment, and not the UPI deployment, is that with UPI we're assuming you have control of your own infrastructure: you're probably going to bring your own DNS, you're probably going to bring your own load balancer, and chances are you have an F5 or something equivalent, since that's what we keep talking about here. But in the case of IPI, we still need a service to automate the bring-up of the cluster. The reason for this has to do with the whole inception problem of how you bring Kubernetes under management if you don't have Kubernetes running yet. So we have this notion of a bootstrap node, and what that is, is our temporary control plane. During the pivot from the temporary control plane to the permanent control plane, which is the three control plane nodes you end up with in a running cluster, we need to be able to perform tasks against the API server. And at that time you really don't know where the API server is running. It's running somewhere; it's got an api.<cluster name>.<domain name> address, but you don't really know where it is. So what we use is a load balancer, and it can be done a couple of different ways.
You can have a load balancer with health checks and a bunch of DNS names below it, or you can just use round-robin DNS; it really doesn't matter, per se, for the bootstrapping operation. What you need is that when you resolve api, or actually api-int is the right one here, you get to the right running control plane. The way we do that is we have the three masters and the bootstrap node in there, and depending on where it is in the cycle, someone's going to be responding. Without an on-cluster service, we have no way to provision an external service to do that as part of the bring-up. So that's sort of our workaround for bringing Kubernetes under the management of Kubernetes; it's the whole bootstrap process, and that's why we're leveraging on-cluster services for internal communication.

Yeah. So, jumping back to some questions, and I'm falling way behind on chat here, my apologies. Dean has another question that I think is an important one: other than not having access to DHCP, what are some of the main objections we see customers have against using IPI? And I see some of our other audience members chiming in here with reasons. Welly says not knowing the names of the nodes, for compliance and DNS approvals. Let's see: Keepalived-plus-HAProxy floating IP failover time. We have somebody asking in chat about having a pool of MAC addresses, where that's the only pool of MAC addresses they're allowed to use. That's a reason, right? So, Catherine, being the product manager, what other things do you see or hear about in that respect?

Yeah, a lot of it. So definitely DHCP is one of them, and this is an argument we get into all the time.
The trick with this is that it would require a significant retooling of the platform, even if you handed me a bunch of MAC addresses and said, just go fill them out. And everyone's definition of it isn't even just "here's a bunch of MAC addresses"; some people say, well, here's a bunch of IP addresses, which is even more specific. But it's really more of the cattle-versus-pets mentality. And I think in some environments, where you're very strict and you can't have DHCP, it's definitely going to be a lot more restricted how you assign things and what you let onto the network. So it sort of breaks this paradigm of dynamically pulling on capacity, this whole thing that we have with IPI. So that's one of them, and that's a big one. I would say the other one is credentialing, which can also be challenging: teams that don't want to automate a lot of things, that want to make sure credentials are locked down as hard as they can be, and then give out just enough to get a cluster up and running. And the way we do it today, good or bad, I'm not going to defend it, is that we require admin credentials for provisioning a lot of things. So sometimes that's a bit of a rub for folks, and I think it's understandable; we're not trying to say otherwise. But we are trying to improve on that, which will help a number of customers who are using UPI move to IPI. So I think there's definitely some work there. And the other one I've also seen is that sometimes the architectures customers need to work in are very restrictive. A good example, just throwing GCP out there since I don't feel it's gotten enough face time today: customers typically use something called cross-project networking.
It sounds great on paper: you create a shared VPC with all your networks in it, you share it out to all the other projects, and OpenShift lives in a different project. The issue is that you lock the account that's provisioning OpenShift down to doing nothing beyond reading the networks in the shared VPC, so it can't make any changes. You can't update any firewall rules, IAM, whatever; you pretty much can't create anything. And IPI operates under the assumption of "I'm going to create everything you need to be successful in having a perfectly running cluster out of the box." Well, if you can't create things like firewall rules, or you can't configure IAM, you've pretty much broken the model. So that's another reason: sometimes the restrictive nature of the architecture, by locking things down, prevents automation. And there's really no good way around it; you just don't have the right permissions as the account installing OpenShift.

So, we've only got about eight minutes left, and I think we have a hard stop today, Chris.
We have a very hard stop, yes.

Okay, so I want to do a bit of rapid fire with the questions here. For any questions that we don't answer on the stream, or that we answer incompletely, I'll make sure to put them into the blog post. So Friday morning on openshift.com/blog, just look for the post that summarizes this particular episode, and we'll have all of those in there. Let's see. Any option to use IPI plus an external load balancer, other than router sharding? Unfortunately, no, although I think the assisted installer folks are working on the ability to do something like that. Basically, today when they deploy, they use that integrated Keepalived load balancer functionality, but I've heard they're working on adding the ability to specify an external load balancer as part of that. And Catherine, please feel free to jump in and add anything if needed.

So: if I have nodes on oVirt/RHV, I can't add bare-metal nodes? I've kind of answered this in chat. And this is one, Catherine, that I see you answer all the time; I answer it probably just as frequently. And you've already said this: we can't mix infrastructure types. Let me be more specific: if there is a cloud provider or infrastructure provider integration configured, then you can't mix infrastructure types. So if you deployed, say, vSphere IPI or UPI, or RHV IPI or UPI, then you can't add a physical server to that, because it would not have the same cloud provider integrations available to it, and therefore Kubernetes (not OpenShift, Kubernetes) won't allow it to join the cluster. Yeah, so maybe just to take that one step further:
Technically, you may be able to get away with it on RHV, because I don't think they implement a Kubernetes cloud provider yet. This is the one I hate, because I hate saying you can, and then if they ever do down the road and it breaks things, everyone's going to be mad and come after me. But the reality of why it doesn't work has to do with the Kubernetes cloud provider: anything that implements a node lifecycle controller will essentially decide those nodes aren't supposed to be part of the cluster. It'll basically say, hey, you've got this foreign node here, I don't know what this is, and it actually removes it. It doesn't know how to deal with the integrations on that node; it knows that it's different and unique and special, and it thinks it should never be part of the cluster. So until Kubernetes itself, as you mentioned, has the notion of ignoring node types that are external to the provider you've deployed on (it doesn't have to be a cloud provider, just the provider you deployed upon), it will never allow those nodes into the cluster. It's a hard limitation of Kubernetes. And we get that question all the time: "I just want to grow with a bare-metal node on my VMware cluster," you know.

So anyway, Rapscallion Reeves, I see your comments in here about RHV going away in favor of OpenShift Virtualization and all of that. Please reach out to me about that, and we'll have a conversation about what the future holds and how we can help address it: andrew.sullivan@redhat.com, or DM me on Twitter, @practicalAndrew, and we'll set that up. I think I saw the PM for that in the chat here, so I'm sure he's aware, and we'll loop him in as well. Let's see.
I'm scrolling through chat quickly, just to make sure I answered the question from Ricky about his students' needs. Okay. Hosea notes that OpenShift delivers a ready cluster, and that day-two configuration is another level of automation where the client needs to map in their business logic, which is a kind of difference; I think Hosea and Christian have been having a conversation around GitOps here. We're trying to keep up with chat.

Let's see, a good use case. Yousef asks: for an IPI installation on vSphere, is there no option to distribute the control plane virtual machines across different ESXi hosts? That's a question that's come up quite a bit recently. And Catherine, again, please keep me honest here and feel free to add on. So Red Hat created the Machine API provider for vSphere; if you look in the machine-api-operator repo (I'll dig up a link if I can before the end of the show and post it), basically all the functionality is there. Today it has no awareness of the underlying vSphere cluster topology or settings or features or anything like that. Basically it asks: what cluster do you want me in, what's the path in the vSphere infrastructure you want me to deploy these VMs to? And that's what it does. It doesn't reach in and ask, hey, what's the DRS configuration for this cluster, or anything like that. I don't know whether that's on the roadmap; honestly, I'd be a little surprised if it were. I think there's an RFE for it, but it's pretty complex, and different people have different preferences or desires there, and we don't want to presume what that should be. But I'll let Catherine speak to that if she wants.

Sure, sure, take the good ones. Yeah, so there actually is an RFE on this.
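For context on what the vSphere Machine API provider does know about, here is a trimmed, illustrative MachineSet placement section (infra ID, vCenter paths, and network names are placeholders, and fields not relevant to placement are omitted). Notice there's no field for ESXi host affinity or DRS rules, which is exactly the gap being discussed:

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: mycluster-abc12-worker
  namespace: openshift-machine-api
spec:
  replicas: 2
  template:
    spec:
      providerSpec:
        value:
          kind: VSphereMachineProviderSpec
          template: mycluster-abc12-rhcos     # VM template to clone
          network:
            devices:
            - networkName: "VM Network"
          workspace:                          # where the VMs land
            server: vcenter.example.com
            datacenter: dc1
            datastore: datastore1
            folder: /dc1/vm/mycluster-abc12
            resourcePool: /dc1/host/cluster1/Resources
```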
I will tell you, there are some technical limitations that need to be fixed right now in the cloud provider. I don't want to blame anyone, but I know there's a bug there. To be able to use something like, say, multi-cluster, which is what a lot of folks would like to do, and I think it's a good one (the good reason to pursue it is scalability), that needs to be addressed. There's also the idea of multi-vCenter; let's just call that a fantasy right now, I think that one's a little too far out. But the notion of multi-cluster is definitely one we want to look at. We did do a sort of PoC to figure out if we could do it with UPI, but even UPI isn't going to work, because you have to hard-code the username and password of the vCenter in the cloud provider, and who in their right mind would do that? So that's a bug that needs to be fixed upstream before we could even bless it as a PoC. So I hear the question, I hear the ask; it's come up with a number of customers, and we want to try to do something. But right now we need to fix some fundamental implementation issues before we can even look at offering that as a deployment method.

Okay, and I'll see if I can dig up the RFE and include it in the summary blog post as well.

Yeah, I want to say there are like three RFEs on it, and they all have different desires, so they're not even common among people in what they want to do.

All right. Well, as Christian just reminded us in chat, we've now got less than a minute left. So thank you so much, Catherine, for coming on today. I really appreciate you accommodating us on short notice. This has been a fantastic episode, having you here and answering all of these questions. To our audience: thank you so much for all of your questions. I know that we missed a few in there.
Again, I'll go through and pull out all of those questions, and we'll make sure to address them in the blog post, so keep an eye on openshift.com/blog. Alex has been getting those published at like 6 a.m. Eastern time or something, so if you're an early riser, you'll be good to go.

Awesome. I'll make a quick plug: I will be guest hosting for Chris on tomorrow's In the Clouds with Marco Bill-Peter (yes, thank you), who is SVP of CX&O, customer experience and operations. So if you have time tomorrow, please feel free to join; we'd appreciate you being there. Otherwise, thank you so much, everybody. We will see you next week at the same time. Bye, and thank you, Catherine.

Thank you, everyone.