Yes, we have the "starting soon" screen. I don't know if, in the registration, our voices are also there. I don't think so, because the Zoom window is not there. Okay. I have the chat on Twitch here, and I have the YouTube as well. YouTube is not started yet. Okay. They say online, online, online. Let me check our Restream chat. There is Didier on Twitch. Okay. So I see it's 10 o'clock here. What do you see on your clock? Yes, 10 o'clock. So, starting soon. I'm going to show the bumper and then we go into the main Zoom session. That means we will start and check that everything is fine. Let me do that. Usually the mic is always open; if you hear any noise nearby, please switch off your mic, and then you can open it again. Okay, I'm going to switch to the bumper. Welcome. Welcome everyone to our first OpenShift Coffee Break show here on OpenShift.TV. My name is Natale Vinto, product marketing manager for OpenShift, and I'm pleased to present everyone in today's session. So let me introduce the other host. Jafar, you want to start? So hi everyone, welcome to our first edition of the OpenShift Coffee Break. Greetings to everyone connected, and thank you very much, Natale, for hosting this. This will be a bi-weekly event, meaning every two weeks, and I will be alternating with Natale to host this show and have esteemed guests talk about their experiences with OpenShift. So again, thanks a lot. I also work as a product marketing manager for OpenShift, and I hope you enjoy what you see in our show. Thanks. So thanks, Jafar. Before introducing our fantastic guests today on this very interesting topic, which is disconnected on public cloud with OpenShift, let me introduce the show a little bit. The idea around the show was to have a kind of coffee break, like we had in the office, where we talked about many things while relaxing over a cup of coffee.
So I would like to have some coffee together with you, everyone around the world, talking about cool topics and technology. Feel free to ping us in the chat if you have any questions during the show. As Jafar was saying, this is a bi-weekly show, every other Wednesday morning at 10 a.m. CET. If you are in the UK, it's 9; if you are in Eastern Europe, it's 11. A one-hour show. And I'm pleased to introduce our guests today. So if you want to start, Raf. Good morning everyone. Well, my name is Rafael Cardona. I'm one of the solution architects specializing in OpenShift for the EMEA region at Red Hat. I have reasonably good experience with Kubernetes: about three years deploying and playing with Kubernetes all around Europe. Today I came to talk a little bit about my experience deploying OpenShift in a highly secure environment. There are many, many interesting features that allow us to unleash the enterprise capabilities of Kubernetes in this fashion, and well, I'm very happy to jump into those contents in a few minutes. Nice to meet you. Thank you, Raf. So we have two other guests today, because we want to talk about those use cases, and I'm pleased to start introducing Hamid and Brian. You want to start, folks? Yeah, sure. My name is Hamid Hussain. I work for a company called Fiserv. I'm an architect, so my main job at Fiserv is evaluating new tools and frameworks and then onboarding them onto our development teams. My experience with OpenShift is very new; this is the first time I've worked on OpenShift, with Red Hat and IBM. My background is enterprise application development, mainly Java applications. I come from an investment banking IT background, having worked there for almost 20 years, and I moved to Fiserv a couple of years back, into an architect role. Yeah, that's about me. Thank you. Natale, sorry to interrupt. It seems the event is not streaming yet on YouTube. Can you please check? Yes, yes. It's fine.
So yeah, while I check this — I see it has started — we can continue with the presentation. So if you want to continue presenting yourselves. Okay, so it's on now. Okay, cool. So thanks, Hamid, for your presentation. Brian, you want to go ahead? Yeah. Hello, good morning. My name is Brian Olmeda. I am a customer success manager architect at IBM. I usually help clients with the adoption of OpenShift and Cloud Paks. In this case, we were working with Hamid and Raph quite closely to install OpenShift disconnected and, on top of that, to do a disconnected installation of the Cloud Pak for Integration, especially one of its products, DataPower. It was a pleasure. Thank you for inviting me, and I'm looking forward to it. Thank you. Thank you. Thank you, Brian. Thank you, Hamid. Thank you, Raph. So, today's topic: we invited you because I think you can share with us a very cool example of how to do a production grade OpenShift cluster on public cloud. What does production grade mean? We understand and we know that OpenShift is very easy to install, very easy to use, but how do you make it production ready? So the title has the word disconnected. What does disconnected mean, Raph, in a public cloud? Okay. When we talk about security, we already understand that this is probably the biggest issue and paramount in all enterprise environments. Customers have different needs, and with their needs we have to try to adapt our technology to address those concerns. So customers want to be able to host their application infrastructure in an environment that is completely isolated from the outside world, but at the same time they want to enjoy the enterprise capabilities, and that brings us to a dual world: if we want automation, we, or the customer, need to be able to give up some security constraints.
So what we've been working on heavily over the last months in this integration is trying to keep using the amazing level of automation that we now enjoy with enterprise Kubernetes, with OpenShift, while at the same time addressing those concerns. The two approaches for that are a disconnected installation and a private installation in a private VPC. They might sound the same, but actually they are not. When we talk about an installation in a private VPC, it really means installing OpenShift with all the capabilities and automation of the installer, but doing it through a proxy. The cluster is still isolated from the outside world, but access to the internet is needed, and that's where the challenge comes: we want to install this platform, but we cannot use the internet. So it's not literally a disconnected installation. The other approach is disconnected, but disconnected reduces a bit the ability of the installer to do all the magic, so you have to create the infrastructure and configure the environment yourself. So what we did was use the best of both, using the installer in a completely isolated world: we did a mirrored installation in a completely private VPC. I want to share something with you that I prepared especially for this call. One second, I want to share my screen now. The right window? Yeah, this is the right window. Okay guys, this is what I prepared, actually last night. I hope you like it. This is a small procedure — small because it condenses a lot of the tasks we are supposed to roll out in order to have this type of deployment. Here I explain a little bit what we want to do. I just mentioned that we want to do a private installation in a mirrored environment. It's very important to mention that, because in doing that we are actually telling the installer to behave in a way it was not meant to, while at the same time using all the capability and all the intelligence behind the installer.
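For context, a private installation into an existing VPC like the one being described is driven by an install-config.yaml roughly along these lines. This is a hedged sketch: the base domain, subnet IDs, registry name and credentials are placeholders, not values from the session.

```yaml
apiVersion: v1
baseDomain: example.internal          # private Route 53 zone
metadata:
  name: disco
platform:
  aws:
    region: eu-west-1
    subnets:                          # pre-existing private subnets, one per AZ
    - subnet-0aaaaaaaaaaaaaaaa
    - subnet-0bbbbbbbbbbbbbbbb
    - subnet-0cccccccccccccccc
publish: Internal                     # no public endpoints at all
pullSecret: '{"auths":{"registry.example.internal:5000":{"auth":"..."}}}'
additionalTrustBundle: |              # CA of the internal mirror registry
  -----BEGIN CERTIFICATE-----
  ...
  -----END CERTIFICATE-----
imageContentSources:                  # redirect release images to the mirror
- mirrors:
  - registry.example.internal:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - registry.example.internal:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
```

The `publish: Internal` and `imageContentSources` stanzas are what distinguish this private, mirrored flavor from a stock IPI install.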
The installer is a very powerful binary, because it reduces the complexity of preparing all the infrastructure around Kubernetes; at the same time, it abstracts that complexity away, providing you with a ready-installed cluster. What does the architecture of today's deployment look like? We can start by defining the minimum necessary components for an OpenShift installation. We need, of course, the control plane, which is composed by default of three master nodes, plus two compute nodes for the compute area. Those are the core of the cluster, but around the cluster we need other components. My background is with bare metal and private cloud, but in general it is more or less equivalent across all types of environments. We always need a load balancer. The newest versions, 4.6 and 4.7, are a little more flexible about the requirements, but we still need load balancing: one endpoint provides traffic distribution to the internal API of the API server, another is the public, user-facing endpoint of the API server, and the other endpoint is the router, which manages the traffic directed to the ingress router for the applications. Another very important component is S3, which will be used not only to store application workloads in case that is needed, but in this case the installation will save the bootstrapping images there, and it will support all the back and forth of data during the installation. And Route 53. That was also a very interesting issue, because before, we needed to deploy our own DNS — we weren't able to use Route 53 because there were some constraints — but in the latest installations we can do that. We can use Route 53 in a disconnected environment in the way we desire. Moreover, what will our VPC look like? This is the idea: we've got three private subnets, in three different availability zones. You might ask yourself: why do you see an Internet Gateway here?
Let's take this question: why do we have an Internet Gateway there if it is disconnected? That's a very good question — a question that all our customers ask: okay, we want it completely disconnected, so why do we have that? Well, at the end we need to download the images. We need to mirror the installation images, because the installation takes place by deploying those images and spinning up the containers. One way we avoided this in several use cases was to create the mirror registry in a less protected environment. We mirrored all the images, did all the testing, and then created an image out of that server. Then we moved that to another EC2 instance in the privileged environment. We did an air gap. That's one of the modes. The other mode would be to use VPC peering, where one VPC is completely isolated from the outside world but connected to another VPC that serves as a gateway to download those images. That second VPC is less protected; the connection to the internet is not completely enabled by the VPC itself, but it contains the bastion and all the mirror images. The use case I want to bring up today is none of those. The one I want to bring up today is the one that uses Direct Connect. Basically, the VPC is linked to the customer premises: it's completely isolated, but instead of being connected to the outside world directly, it's connected directly to the customer's premises. There is a certain level of access to the outside world, but that's something the customer can self-determine. Of course, I don't have Direct Connect — if I did that, I would have to pay a very big bill, and I'm sure my manager would not be happy about that. What I did was mimic that case, standing in for the Direct Connect here with this Internet Gateway. I would like to show you something else. Here goes the template. I told you that we're going to be using an existing VPC, and that VPC has to meet certain requirements.
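The air-gap mode Raf describes — mirror in a less protected environment, then physically carry the content across — maps to the documented disk-based mirroring flow. A sketch, with hypothetical paths and registry name:

```shell
# On the connected host: pull the release content to removable media.
oc adm release mirror -a pull-secret.json \
  --from=quay.io/openshift-release-dev/ocp-release:4.6.4-x86_64 \
  --to-dir=/mnt/removable/mirror

# Carry the media across the air gap; then, on the isolated bastion,
# push the content into the internal mirror registry.
oc image mirror -a pull-secret.json \
  --from-dir=/mnt/removable/mirror \
  'file://openshift/release:4.6.4*' \
  registry.example.internal:5000/ocp4/openshift4
```

In the session the team snapshotted the whole registry server as an image instead; the effect is the same, moving the bits without any network path between the two environments.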
Those VPC requirements are not very special, but anyway, you should be aware of them, especially the security group that will govern the internal mirror registry. If anyone is interested in doing this deployment in a Direct Connect environment, the only thing they need to do is come here, swap the Internet Gateway for the Direct Connect gateway, and change it here. That little step will allow you to do this deployment in a Direct Connect environment. Raph, a question: this is a CloudFormation template, right? This is the CloudFormation template for AWS. How do you link this to the OpenShift installation, or how does the OpenShift cluster setup use it? Very good. As I mentioned before, what we wanted with this type of deployment is to use the best of both worlds: the high level of automation of the installer, but also our own premises, ready to rock, let's say. But the VPC needs to have specific settings to be able to hold the installation, because the installer will take over the installation. It will say: okay, I'll do the installation. But when it reaches the VPC, it expects certain settings to be done. Those settings are well explained in the documentation, but if you go to the template itself, you can see exactly what is needed. In a completely disconnected environment, you might replace the installer in the area of provisioning; you can do the provisioning through a CloudFormation template alone. But this is not the case here. We've got our VPC already up and running. I can show you. This is our VPC. It's a brand new VPC; I haven't done much on it. The settings, as I mentioned, are to be compared with this template, and in my experience, what customers have in place never differs much from what we have here. The CloudFormation template for this particular case helps by giving you a reference model to follow, but it's not part of the procedure. It's just to check: okay, my VPC is able to handle the installation. I've got another CloudFormation template.
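As a rough illustration of the kind of template being shown (not the actual file from the repo), the piece you would swap for a Direct Connect setup is the gateway resource:

```yaml
# Minimal CloudFormation sketch: a VPC with one private subnet and the
# Internet Gateway that stands in for Direct Connect in this demo.
Resources:
  DemoVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsSupport: true          # needed for Route 53 private zones
      EnableDnsHostnames: true
  PrivateSubnetA:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref DemoVPC
      CidrBlock: 10.0.1.0/24
      AvailabilityZone: !Select [0, !GetAZs '']
  DemoIGW:                            # replace with a Direct Connect / VPN
    Type: AWS::EC2::InternetGateway   # gateway for a truly isolated setup
  AttachIGW:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref DemoVPC
      InternetGatewayId: !Ref DemoIGW
```

CIDRs and resource names are placeholders; a production template would repeat the subnet block across three availability zones as described above.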
As for how it relates to the security groups of the installation: the CloudFormation template I put here is quite complete, because it doesn't only cover the security group of the bastion host, but also the security groups of the whole installation. For this installation, we don't need that, because we're going to let the installer do the rest of the magic. But it's important that you can see how it's really deployed, in a similar way to the installer. The installer doesn't use the CloudFormation template, but the interaction and the flow of how those components are deployed are exactly the same. Does that answer your question, Natale? Yeah, thank you. Let me ask you another question, just to summarize what you mentioned about the Internet Gateway and the need to have all of the required images and components readily available somewhere. What you mentioned is that there are different options — I'm just doing a quick recap for the people who are not so familiar with OpenShift. You said there are different ways of getting the Red Hat content to make it available offline. In this case, we are using a direct connection, and we are using the oc command line, which allows you to mirror the content for a specific release. Basically, you say: I want to install OCP 4.6, please get all the images, all the content, download that for me, and make it available as container images somewhere. If we wanted, we could have that downloaded separately on a different host and then copy it onto the bastion, for instance, and make it available on an internal registry, for the private VPCs to be completely isolated. Is that roughly it? Yeah, definitely. Thank you for completing that; I appreciate it very much. Yes, indeed, we've got the oc client and openshift-install, which are going to help us not only to mirror the registry but also with the deployment. I see the time is running.
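The mirroring step just summarized is the documented `oc adm release mirror` flow. A sketch, with a placeholder registry host and version:

```shell
# Mirror an entire OpenShift release into the internal registry.
LOCAL_REGISTRY='registry.example.internal:5000'   # hypothetical mirror host
LOCAL_REPO='ocp4/openshift4'
OCP_RELEASE='4.6.4-x86_64'

oc adm release mirror -a pull-secret.json \
  --from="quay.io/openshift-release-dev/ocp-release:${OCP_RELEASE}" \
  --to="${LOCAL_REGISTRY}/${LOCAL_REPO}" \
  --to-release-image="${LOCAL_REGISTRY}/${LOCAL_REPO}:${OCP_RELEASE}"
```

The command finishes by printing the `imageContentSources` stanza to paste into install-config.yaml, so the installer knows to pull the release from the mirror instead of quay.io.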
Guys, I should start with the deployment straight away. If you can increase the font a little bit, that would be better. Oh, excuse me. Now we can see that topology pretty well. Thanks, Didier, in the chat, for having shared the link to this repo. You can access this repo — it's publicly available — so you can use it for your POCs, for your use cases. And then I would also like to hear from Hamid and Brian what they think about this kind of topology, and whether this kind of use case addressed the requirements they had for production grade. So Hamid, you said this was an example for Fiserv. How did it solve your use cases? Yeah, so we had a couple of requirements, I would say challenges, when we started this setup. The first one was that, due to Fiserv policies, we were not supposed to use all of the AWS services — services like Route 53, the load balancer and S3. We had restrictions on those; we were not supposed to use them in our installation and setup. The other challenge we had was that we were forced to use the AWS CLI rather than the AWS console, which made things a little bit difficult, because the CLI is more manual and there's a lot of syntax checking and all that stuff. But the scripts that Rafa built for this whole thing in GitHub helped a lot, because they had all the commands we needed to run quickly. So that helped us make this whole thing quicker. Yeah, so overall, the whole disconnected mode that we wanted was already there, designed by Rafa, so we just had to use it, make small changes, and apply it to our specific installation. Cool, thanks. Thanks for your explanation. So as I understand it, Raph, this is a prod-like topology. This is something that people who look at your repo can say: hey, I want to try this in my prod, and I can set up my OpenShift disconnected on AWS with this strategy. Yeah, definitely. It's a simplified version, as I mentioned before, because I didn't include the Direct Connect.
But if someone is familiar with CloudFormation on AWS, that's exactly what they need to change, and they can map this completely onto a Direct Connect environment. Actually, Hamid mentioned something that came to my mind about the installation there. Yeah, indeed, we had to use the CLI. When a product is mature enough and enterprise enough, it has to be resilient to these types of constraints. It was teamwork — a few people working together — and we managed to translate the Terraform and the CloudFormation templates into CLI commands. It's a comprehensive way to do it: it's long, but it gives you complete control over every single component deployed in the environment. Yeah, that was probably the biggest constraint. But as Hamid mentioned, we were able to do this, and we are able to provision the cluster in those different ways: Terraform, our installer, CloudFormation templates, and also the CLI. And the result is the same. Yeah, thanks. There was a question in the Twitch chat asking what it means to update a disconnected cluster. If you have this topology, which looks great for production grade using disconnected on AWS, and I need to update to OpenShift 4.7, what do I have to do? Well, actually, it's a very nice question. While the time is running, I think I will do the mirroring, because you can see how easy that is, and it's connected to that question. We mirror to any version, any level — 4.6.4 or 4.6.15, for instance — and our set of operators at a specific release level. But when a new release comes out, as a privately held cluster you are not able to acquire those updates from the outside world. So you need to mirror that content, and you execute the upgrade in the same fashion: you just mirror the exact version of the operators that you want to upgrade to.
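The upgrade flow being described — mirror the new release, then point the cluster at it — can be sketched as follows (registry name and digest are placeholders):

```shell
# 1. Mirror the target release into the same internal registry.
oc adm release mirror -a pull-secret.json \
  --from=quay.io/openshift-release-dev/ocp-release:4.7.0-x86_64 \
  --to=registry.example.internal:5000/ocp4/openshift4 \
  --to-release-image=registry.example.internal:5000/ocp4/openshift4:4.7.0-x86_64

# 2. Trigger the upgrade explicitly, by digest, from the mirrored image,
#    since a disconnected cluster cannot consult the public update graph.
oc adm upgrade --allow-explicit-upgrade \
  --to-image=registry.example.internal:5000/ocp4/openshift4@sha256:<release-digest>
```

Because each cluster only sees the releases you choose to mirror, this is also what enables the per-cluster version control mentioned next.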
This is quite handy: I saw a case where the customer wanted to upgrade only a section of their clusters. There were many clusters installed, and they were able to have different mirrors with different versions, and then manage the updates so that only a specific cluster had a specific version of OpenShift at that moment. You want to say something? Yeah, Rafa, I was going to offer to explain that if you wanted to start doing things; otherwise you can carry on. So basically, what we all need to understand is that the OpenShift 4 installation and the whole operation of the platform heavily rely on operators. And those operators use container images, with versions, that are hosted on the Red Hat registries. Basically, upgrading means deploying the new set of operators and running the containers from the new versions. To do that, as Rafa explained, you have to get those images from the Red Hat registry and extract them — we provide the tooling for that — host them on a private registry, and then trigger the upgrade, meaning that you replace the operators that are in place with the new images. Exactly as when you publish a new image for your application and your new containers get started from that new image, that's what happens with the operators. There is a global operator that is responsible for the overall release: it gets its image, and it knows what other images it needs to deploy as part of the release upgrade. So that's, at a very high level, a summary of how it works, and that's why you need to have those versioned images on your internal registry. That's pretty cool. So Rafa, if you are showing something in the terminal, can you please increase the font so everyone can see? I would also like to hear from Brian what he thinks about this disconnected public cloud experience, because, you know, when you think about public cloud, you think: oh, it's all on the internet, no? It's all connected.
So Brian, what do you think about this specific use case? Do you see it as the normality for production grade clusters on public cloud? Yeah, I think — thank you — I think nowadays customers are turning to this disconnected mode, and I can see OpenShift has improved a lot; we are pointing in this direction. And I will say the most common cases, especially in private industries, in banking, and I will say governments — they are all going for this disconnected installation. And as Rafa and Hamid are highlighting here, of course you can find a lot of challenges, but the good thing, or the maturity, of OpenShift is that you can find workarounds for everything. That is probably the key here. And yeah, as I said, on top of that, once it is installed there is no difference in OpenShift: you can run the same sort of workloads in disconnected mode or in public mode, and that is helping a lot in our case for running the software that we install on top of OpenShift. Yeah, I agree on that. And to link to this discussion, there was a question in the chat saying: hey, do we have to enable telemetry in the disconnected cluster to have support? So the telemetry service is only needed for having insights from Red Hat — like, hey, your cluster is running under pressure, hey, you should fix this — but it's not strictly required. And I imagine that for a disconnected cluster, telemetry is not active unless that specific endpoint is enabled. So this is another interesting topic: in a disconnected cluster, is telemetry active or disabled? I don't know, Raf, what did you do in this case? One second. Yeah. That's an excellent, excellent question. Out of the box, you get a monitoring setup. You can avoid that if you don't want it to be there all the time, and in case you want to extend the telemetry, you always can; but in the disconnected case it doesn't make much sense to go as far as the monitoring that we do in exposed systems.
The basic out-of-the-box setup gives you the minimum required to have a good overview of how your resources are being consumed by the cluster. You are able to reduce even that, in case you don't want to see anything, but the out-of-the-box stack is reasonable: it doesn't abuse the amount of resources it consumes, and it keeps a good overview of how your resources are being utilized by the cluster. Thanks, Raf. Hamid, I was wondering, from your side, if you know what the telemetry service is, and whether you think it was good for your use case. What do you think? No, we didn't explore much on that aspect. Okay, so the internal monitoring was enough for you, or do you also use other monitoring tools to track your cluster? No, we didn't use any other tools, so for now it's just what we have. Okay. Yeah, but we might be adding something in the future, once we have this whole setup stable and running. Cool, thanks. Yes, and Natale, maybe also a quick reminder of why the telemetry service is in OpenShift. It's basically a way for Red Hat to improve our support relationship with our customers, but in a proactive manner. What we ask of customers is to agree to provide some non-sensitive data about their clusters, about their health. We collect information from those clusters — it's mostly anonymous information about the components of the cluster itself, not anything related to the workloads that run on those clusters — and it allows us to see the overall health of the clusters. If there are some upgrades that are failing for some reason, if there are some components that have problems, we are able to see it ahead of time.
And when a customer opens a ticket to say, hey, I have a problem with this, we know whether it's something that happens only with this customer, or something we have identified across many other customer installations. That's something we can proactively start working on, and we can dispatch the information to our customers ahead of time that something like a new fix has been released. So it's really there to help us be proactive in the way we handle the support relationship. Yeah. Cool. Thanks, Jafar, for this explanation, for giving some context around the telemetry. That was a good question — I think it was UNIX365 who asked it. So if you have any questions, please use the chat. We want to use this coffee break to have a coffee break with all of you, sharing all the experiences and the use cases. And as it is a bi-weekly show, if you have any ideas, please reach out to us at OpenShift.TV — we will also share our Twitter. So please reach out to us to propose any topic you would like to discuss around OpenShift use cases. This is, let's say, a virtual extended coffee break in the EMEA morning, but we may also have people from APAC. I don't think the US, because it's very early there, but yeah, it's an extended virtual coffee with all of you. It's a pleasure. So Raph, what are you going to show us? Yeah, I will show you how easy and straightforward it is to mirror a registry for the installation. Here we have a brand new AWS EC2 instance with a RHEL image. Can you please share this link in the chat so I can reshare it on all our screens? Sure. Thank you. I'm trying to look for the... Yeah, you can use the Zoom chat and I will share it to all our channels. Okay, I cannot find the Zoom chat here — I've got more than 30 windows open. I'm going to share it with you. Okay, no problem. Go ahead. You got it. Okay, first I will increase the size. I hope you can see this. Yeah, okay. Here we've got our client, already ready to rock. Let's describe the stacks.
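The stack check run here is presumably the AWS CLI call below (no stack name is given in the session, so it lists everything):

```shell
# Verify the CloudFormation stacks for the bastion/VPC finished cleanly.
aws cloudformation describe-stacks \
  --query 'Stacks[].{Name:StackName,Status:StackStatus}' \
  --output table
```

A healthy result shows each stack in `CREATE_COMPLETE` or `UPDATE_COMPLETE`.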
Okay, everything looks good. We've got the bastion we created, the security group and the VPC. Okay, that's fine. And now we're going to do the most important step of the whole installation, which is to properly mirror our internal registry. This is the script that Hamid was mentioning before. Everything is new on this server, so I need to install almost everything. Yeah, Vim is a good package to install — I think you've heard of it already. Yeah, perfect. First we need to find out what our hostname is. The hostname is there, of course, but it's not what I want. Why? The following: I configured a more realistic hostname, let's say, to mimic a real environment. And folks, this is real live coding, live hacking — no jokes, no preparation, no recording. Let's hope that everything is going to be fine. Yeah, we got this created. Very good. But I was expecting another hostname for this; it's playing tricks on me. Yeah, we're going to use that one as well, because I wanted to use the test lab that I set up. So can you give a little bit more context on what you're doing now? I'm going to create a mirror registry, and for that I'm going to use this magic script here. It looks a little bit long, but actually it's not that interesting. Sorry, sorry. What it does first is install the packages I need to do the whole magic on the operating system, but it also installs the binaries Jafar mentioned before — the oc command and openshift-install — for later in the installation, and other tools like Podman, Skopeo and HAProxy, in case we use a proxy. This is a very complete script, because it brings you from hero to zero with respect to the mirror registry. You mean the opposite, from zero to hero? Zero to hero, sorry. That's a very important point. Thank you, man. However, when you mirror the registry, you need a fully qualified domain name.
Yeah, to be very clear, guys: don't put IPs, because then it will not create the proper certificate. Use the fully qualified domain name for your environment. Is that the internal hostname of the AWS machine, or is it the reverse? It's reverse-resolving to this, and I actually don't know why, because it shouldn't be reversing. This is the internal hostname of the AWS virtual machine. Yeah, but I created a record in a private zone, and that's what I was expecting. But anyway, we can also use this one; it's no problem. Okay, as a best practice, it's better to create a record on Route 53 in the private zone, so your machine has this hostname, which is better than an IP and gives more mnemonic names. So you did that, actually. Yeah, I did. However, I believe I'm missing the endpoint that is necessary to be able to query back DNS entries. I'm going to check that later, but in any case we can use this one — it's no problem, because this is what the bootstrap host will be looking at anyway, okay? Come here. That's what I'm talking about: you test everything, and then at the last moment something is missing. It never changes with demos, with live demos. It's really crazy. I mean, the important thing is that it worked on your cluster, right? If it doesn't run in the live demo, we don't care. It worked for us, fortunately. Okay. I don't know if it's very relevant in this short time to go through every single part of the script, but guys, you can go through it, and if you have any questions about how to use it, I'm more than happy to go a little deeper into that. Okay. Rafa, keep in mind that we are here in case you need us to do any explanations while you are doing things. Oh, that's very good. Don't hesitate to use us for that. Thank you very much. First step: we're going to install the dependencies I mentioned before. Wow, that looks good. It's quite automated. I think Hamid can remember that it saved us a lot of headaches at a given moment.
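What a mirror-registry bootstrap script like this typically does can be sketched as follows. This is a generic sketch under assumed paths, hostname and credentials, not Raf's actual script:

```shell
# Stand up a TLS-protected registry:2 container with basic auth via Podman.
REG_FQDN='bastion.example.internal'   # must be an FQDN, not an IP
mkdir -p /opt/registry/{auth,certs,data}

# Self-signed cert whose SAN matches the FQDN — this is why IPs break the
# certificate setup mentioned above.
openssl req -newkey rsa:4096 -nodes -sha256 -x509 -days 365 \
  -subj "/CN=${REG_FQDN}" -addext "subjectAltName=DNS:${REG_FQDN}" \
  -keyout /opt/registry/certs/registry.key \
  -out   /opt/registry/certs/registry.crt

htpasswd -bBc /opt/registry/auth/htpasswd admin admin   # demo credentials

podman run -d --name mirror-registry -p 5000:5000 \
  -v /opt/registry/data:/var/lib/registry:z \
  -v /opt/registry/auth:/auth:z \
  -v /opt/registry/certs:/certs:z \
  -e REGISTRY_AUTH=htpasswd \
  -e REGISTRY_AUTH_HTPASSWD_REALM=Registry \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/registry.key \
  docker.io/library/registry:2
```

The registry's CA certificate is the one that later goes into `additionalTrustBundle` in install-config.yaml so the cluster trusts the mirror.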
You just launch the script and you can see what you need. Actually, it's very good that this happened, because it demonstrates this part: it requires that in the root directory you have this file. I'm going to put that in place off camera, because it's a secret file. Yeah, so this gives us the opportunity to discuss again a little bit. There are about 12 minutes left. I would like to know from Hamid and Brian what you think about the experience you had with this disconnected setup on AWS, and what you think the next steps could be. If you need to scale up your cluster, if you need to create multiple clusters, if you are planning any hybrid strategy, or if you want to use full public cloud with multiple clusters — what are the next steps? For us, this was the first OpenShift install within Fiserv, and it was very important for us to get this right, because there are many other projects which will be needing a similar kind of setup. So yeah, now we've got the AWS setup done, we might be looking at hybrid cloud with Azure as well. We would need to work on that in the future. But yeah, getting this disconnected setup right was important for us. Okay, thanks. So a multi-cloud strategy, always disconnected, as far as I understand? Yes, yeah. Disconnected is very crucial for us because of the concerns on security and all that stuff. Cool, cool. Correct me if I'm wrong, but to complete this hybrid cloud, an on-prem installation is going to happen too. Then you will have the complete hybrid picture. Thank you. And on top of that, again, I will say this disconnected installation is quite standard, and we are taking advantage of the same procedure to install the software on top of OpenShift — in this case it was the IBM Cloud Paks — and basically they follow the same steps, duplicating this registry and continuing. That is an industry standard, I would say. Cool. Yeah, it's going to be an industry standard.
That's why I wanted to understand a little bit more about this use case. So this disconnected install on AWS is kind of a template. Let's say: hey, this works, let's continue to do this, and let's also have a hybrid cloud strategy afterwards, so we have resiliency, you know, multiple places where we can run our workloads. That is cool. And how about this disconnected cluster? Is it running many apps, or just some proofs of concept? Is it in production? Yeah, I mean, it's not been productionised yet, because we are still in the process of evaluating some of the applications that are being onboarded. IBM and Brian have been involved, so we are trying to onboard some of the IBM products onto this cluster. We are still in the process of testing our applications. Once this is done, we will look at moving on to the next stages. Cool. Thanks. Thanks, all. Hey guys, I would like to show you something before we run out of time. Yeah, actually we are looking at you, Raph. Oh, sorry. You are seeing the screen. So we were talking and waiting for your message. Okay. Actually, I found out that it was resolving two different hostnames, but this is what I was looking for. Can you increase the font a little bit? I think it's better for visibility. So you are using dig to resolve this hostname, which is the bastion host. The bastion host is the machine usually used for the installation; it contains the mirror registry we were talking about before. So it's kind of a helper node. We also call it a helper or jump node. Many names for this node. I like helper node, because it helps. And you are now trying to resolve the hostname for this node, right? I actually executed this command before, and it had never happened before that it was doing a reverse lookup to this internal name, because I had actually deactivated that. So I don't know why it didn't want to help me this time, but this is the most important part.
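The forward versus reverse resolution Raf is debugging can be sketched like this. The hostname and IP are placeholders; the `ptr_name` helper just shows how a reverse (PTR) lookup name is built from an IPv4 address, which is why the reverse zone needs its own records.

```shell
#!/bin/sh
# Forward lookup: name -> IP (commented out; needs your DNS to be reachable):
# dig +short bastion.ocp4.example.internal
# Reverse lookup: IP -> name, via the in-addr.arpa pseudo-domain:
# dig +short -x 10.0.1.10

# A PTR query for an IPv4 address is a query for the octets reversed under
# in-addr.arpa. This helper builds that name:
ptr_name() {
  echo "$1" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa"}'
}

echo "PTR record name for 10.0.1.10: $(ptr_name 10.0.1.10)"
# -> 10.1.0.10.in-addr.arpa
```

This is why an unexpected internal hostname can come back: the reverse zone may hold a different PTR record than the forward A record you created.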
This is my private zone, and as you mentioned before, Natalia, I like to do it this way because you have exact control of the DNS resolution, and anyway, it's resolving properly from this server. So now what we want to do is keep checking what our steps are for the mirroring. We did the dependencies, we did the artifacts, I don't remember. You can run it again, no problem. Yeah, we only did the dependencies. Thanks. Is the next step an Ansible playbook, right? Some Ansible roles? Actually, that is the plan, but this mirror script is still fresh, as it was only finished last night, so I still need to translate it into a proper playbook. Okay, we carry on with the artifacts. Then, interestingly enough, we're going to do all the preparation for our internal registry. I'm going to use admin/admin. We let the oc tools that Jafar mentioned do the magic. Then we come to the registry. It normally takes about seven minutes. I see success. Hey, Hamid, working with Raph is great. It looks like everything is easy. I feel I could install disconnected blindfolded now. Yeah, that's true. The script that Raph is running now, I've used on my own multiple times to create the registry. It works wonderfully every time. Cool. So you can try the script, for the people watching the streams. I've also shared in the chat the OpenShift Try link. If you want to try OpenShift on AWS or another public cloud, or any other installation, please go to that link and start trying OpenShift. And you can use this script, this repo, if you plan to try a disconnected installation. It's a helper that can make your life easier in this disconnected scenario. One more thing I would like to add to that. Guys, we did the mirror now, but that's the minimum necessary for the vanilla installation. There is another flag here, prepare operators catalogs, which actually does the mirroring for all the Red Hat and community operators.
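The registry preparation and release mirroring Raf's script automates look roughly like the following. This is a sketch, not the repo's actual script: the registry hostname, port, repository path, and release version are placeholders, and the credentials (admin/admin) are the throwaway ones used in the demo.

```shell
#!/bin/sh
# Sketch of mirroring an OpenShift release into an internal registry.
# All names below are placeholders; the demo's script automates these steps.
REGISTRY_HOST="bastion.ocp4.example.internal"
REGISTRY_PORT=5000
LOCAL_REPO="ocp4/openshift4"
OCP_RELEASE="4.6.1"   # assumption: pick the release you are installing

# Build the mirror command (real `oc adm release mirror` flags):
MIRROR_CMD="oc adm release mirror \
  --from=quay.io/openshift-release-dev/ocp-release:${OCP_RELEASE}-x86_64 \
  --to=${REGISTRY_HOST}:${REGISTRY_PORT}/${LOCAL_REPO} \
  --to-release-image=${REGISTRY_HOST}:${REGISTRY_PORT}/${LOCAL_REPO}:${OCP_RELEASE}-x86_64"

echo "${MIRROR_CMD}"
# Run it once the registry container is up, e.g. a TLS-enabled registry:2
# with htpasswd auth (volumes and cert generation omitted here):
# podman run -d --name mirror-registry -p ${REGISTRY_PORT}:5000 \
#   -v /opt/registry/data:/var/lib/registry:z \
#   -e REGISTRY_AUTH=htpasswd docker.io/library/registry:2
```

On completion, `oc adm release mirror` prints the `imageContentSources` snippet to paste into the install config, which is what the script surfaces at the end.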
We don't want to do that now, because it's not necessary for the installation, and also because it's quite big: we're talking about more than 120 gigabytes of beautiful operators, produced not only by Red Hat but also by the community. But I just want to let you know that in case you also want to mirror the community operators, you can do it here; it's part of the same script. Yeah. A word of caution on the operators: I think it's more than 120 gigabytes. Really? Yeah. Do you remember, Hamid? Yeah, so we started it and left it running, but the EC2 instance didn't have enough space. I think it was up to 200. I don't even remember. I remember now. I was really sorry. Sorry. You know, the team at FightServe was really amazing. Those guys are very professional, and IBM is also doing their part with the integration of the Cloud Pak. It really was a super interesting project, because we could see the power of a Cloud Pak serving the customer, running on OpenShift, doing what they needed. It was really, really good. It's a great team. Thanks. And the collaboration is still growing; it's just the start. Okay. Yeah. That's wonderful. Collaboration is key. So thank you for sharing this use case with us today on OpenShift TV, and for having this collaboration. As a final note, because we are coming to the end of the show, I would like to do the closing. Raf, if you want to show something, I don't know if we have the time. I was hoping it would finish so we could query the registry. But if we don't have time, you know what I'm going to do? I'm going to record the run and upload the video, so you can see the whole thing in this repo later. Actually, that's a good idea. That's what I'm going to do: I'm going to record the whole procedure in a short video and put it in the repo. So if you want to try it at home, you are safe to do it. Thank you. Thank you very much.
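Since the operator catalogs ran the demo's EC2 instance out of disk, a pre-flight space check is a sensible guard before mirroring them. This is a sketch: the 200 GB figure is a conservative guess based on the conversation, the directory is a placeholder, and the commented catalog-mirror command uses OCP 4.6-era syntax.

```shell
#!/bin/sh
# Check free space on the mirror volume before pulling the operator catalogs,
# which can run well past 120 GB.
need_gb() {  # need_gb <path> <required_gb> -> exit 0 if enough free space
  avail_kb=$(df -Pk "$1" | awk 'NR==2 {print $4}')
  [ "${avail_kb}" -ge $(( $2 * 1024 * 1024 )) ]
}

# Default to the current directory so the sketch runs anywhere; in the demo
# this would be the registry's data volume, e.g. /opt/registry.
REGISTRY_DIR="${REGISTRY_DIR:-.}"

if need_gb "${REGISTRY_DIR}" 200; then
  echo "enough space, safe to mirror the operator catalogs"
  # The mirroring itself (placeholder registry name, OCP 4.6-era syntax):
  # oc adm catalog mirror \
  #   registry.redhat.io/redhat/redhat-operator-index:v4.6 \
  #   bastion.ocp4.example.internal:5000
else
  echo "not enough free space under ${REGISTRY_DIR}" >&2
fi
```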
So the topology you shared, which is in the repo, is very interesting, very cool. And also this collaboration you were talking about with Hamid and Brian is fantastic. This is really the spirit of how to build great things. Thanks for sharing this use case today; it was very interesting. If you want to reach out to Raf, Hamid, or Brian, you can ping us. Also, folks, if you want, share your Twitter handle in the chat, and we are always available on Twitter at OpenShiftTV. If you want to hear more about this, please reach out to us and we will put you in touch with Hamid, Brian, and Raf. In the meanwhile, thank you very much for this session, Raf. You can stop sharing the screen now. So we end this session. I would just like to thank you all, and I would like to thank Chris Short specifically for his big, big help in setting up this first EMEA time slot on OpenShift TV. Thank you, Chris, for having set it all up for us. Thank you very much. Appreciate it. Talking about collaboration: Natalia, do we have one more minute to show how the mirror finished, or not? We have a couple of seconds. While you show it, I have to close, but if it's finished and you just want to show that, it's fine. Okay, I'm going to show that right now. Yeah, so just before we close: this is a show that we are hosting for the EMEA time zone, but I linked the calendar for the main OpenShift TV shows, which also have a lot of great content. It's just maybe a bit later in the day for us. But feel free to check it. And the good thing is that it's also recorded, so you can also catch up... Guys, look at this. Here you can see the little snippet to be added to the installation definition file. And here you see the query, querying our internal mirror registry. So, happy work: we got a very nice internal registry to do our magic. That's it. Thank you. That's what we wanted to show, okay? That worked well. Thank you. You have it live; this was totally live coding.
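For readers who could not see the screen, the snippet Raf showed for the installation definition file probably looks something like this. It is a sketch: the registry hostname, port, and credentials are placeholders, and a real install config would also need the registry's CA in `additionalTrustBundle` plus the merged pull secret.

```shell
#!/bin/sh
# The mirror mapping to append to install-config.yaml, as printed by
# `oc adm release mirror` (hostnames here are placeholders):
SNIPPET=$(cat <<'EOF'
imageContentSources:
- mirrors:
  - bastion.ocp4.example.internal:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - bastion.ocp4.example.internal:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
EOF
)
echo "${SNIPPET}"

# And a quick way to query the mirror registry's contents afterwards, like
# Raf did on screen (needs the registry up and its cert trusted):
# curl -u admin:admin https://bastion.ocp4.example.internal:5000/v2/_catalog
```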
Live hacking. Thank you. Thank you very much. So I want to thank Hamid, Brian, and Raph for the session today. Thanks for having joined this first episode of our OpenShift Coffee Break show. Just a quick reminder, as Jafar was saying: on OpenShift TV, the next show will be The Level Up Hour, later today. And we will come back in two weeks, on the 17th, and the topic will be the inner and outer loop for Java developers on OpenShift. If you would like to watch all the shows, look at the calendar; I think Jafar shared it in the chat. In the meanwhile, folks, thank you for attending. Have another coffee. It was a great coffee break. Talk to you soon. Natalia, you see, Italian coffee is not the same as in the rest of the world. Of course. You have to go for espresso. We have a margin for improvement on this side, my friend. Guys, thank you. Bye-bye. Bye-bye. Bye. We are offline already, right? Not yet.