All right, it's pretty amazing that we have so many Red Hatters here, so take advantage, right? This is my favorite part of OpenShift Commons, the AMA. We're going to do a quick round of intros, and then we'll get your questions answered. I'm Karena Angell, one of the OpenShift product managers. My focus area is Cloud Paks (if you've heard of our partner IBM, we have a collection of software called Cloud Paks), and I focus on that as well as upstream projects.

My name is Daniel Oh. I'm a developer advocate covering Java and OpenShift, and a lot of serverless and service mesh things.

Daniel, you didn't even say the word Quarkus!

Oh, come on. Yeah, the Kubernetes Java stack, Quarkus, is one of my main responsibilities. I'm also a CNCF ambassador, so just ask me anything about CNCF projects.

I'm Daniel Messer. I'm a product manager at OpenShift for Red Hat Quay, Quay.io, and the Operator Lifecycle Manager.

Ramón Acedo Rodríguez, product manager for OpenShift. My focus area is OpenShift on bare metal.

I'm Jan Safranek. I'm not a product manager; I'm a software engineer for storage.

Kirsten Newcomer, OpenShift product management, focused on Kube and container security.

Ramón here, yet another product manager, for the Migration Toolkit for Applications, and very involved in the Konveyor community.

Connor Gorman. I'm an engineer on the ACS and StackRox team.

Perfect. All right, so I know this is the hard part; it's the end of the day. Does anyone have a question, or are we going to be making up softball questions for these folks? There's a question here, and you've got the other mic.

So how do OpenShift and OKD really relate? Who's upstream, who's downstream, or is it just a mesh?

Christian, where are you?
I think Christian and Vadim were sent off to go talk to one of the other upstream projects. However, I have been working in the OKD world, and I think I have a microphone; I'm the one person who's still mic'd up. I like to call OKD a sibling stream. It is built by the same release process as OpenShift, by the OpenShift CI/CD build process, so Kubernetes is really the upstream for OpenShift and OKD and all of the other things that we build with OpenShift. So I would call it a sibling stream. We have a really great relationship with the Fedora CoreOS team, and we do most of the testing and deployment within the OKD working group, on the different variations of where people like to deploy it. If you go to Google Groups, there is an OKD working group; if you go to the Kubernetes Slack, in the #openshift-users channel, you can find the OKD working group and get involved. So, good question. He gets the t-shirt.

All right, any other questions? There's one over here. I'm making Michael the tallest man in the room.

Hello, thanks for all the presentations, very interesting today. I have some questions about the migration toolkit, Konveyor, and also about security with sigstore. Are all these products supported in disconnected mode, and are there any requirements for that?

You mean the Migration Toolkit for Applications and the whole Konveyor suite? Yeah, both applications and containers.

Yeah, I can answer for migration. In the Konveyor project, it depends on where your origin is, what your source is for migrating applications into Kubernetes. Are you interested in applications coming from, let's say, legacy classic runtime platforms, or what kind of migration are we talking about?
Yeah, more about legacy applications.

Okay, so for applications it doesn't matter that much which deployment model you have for Kubernetes. We focus on assessing the application portfolio, which is a questionnaire-driven assessment, and then in the analysis we work mostly with source code and binaries, so we wouldn't be that interested in the platform itself. Then, for example, with Move2Kube, based on the application type that you are migrating (let's say you have a Java application running on Tomcat), Move2Kube is able to understand what kind of application you have based on the dependencies and generate the deployment manifests. So it's not so much about the kind of deployment model you might have. I don't know if that answers your question.

Yes, but it was also about when we have an OpenShift cluster in disconnected, air-gapped mode. I think there is an analysis tool with AI, and some IBM modules in the migration toolkit; do they need to access the internet or external services?

That shouldn't be a problem, because the knowledge base is built into the process and the project itself, so we wouldn't need a direct connection to the internet; everything is pretty self-contained, or can work in self-contained mode right now. I'm not sure about the future design. I know the current design for the AI component powering the IBM Research tooling is self-contained, but I do know they were thinking about having some central knowledge base or something like that. I don't have the details about how that will evolve in the future, but for the moment, as it is right now, it's self-contained.

Thank you. You're welcome.

So, quite the same question for sigstore.

Yeah, so you're asking specifically what you can use with sigstore in a disconnected mode, right?
As Luke said earlier today, sigstore has multiple components. Using an OpenShift Pipeline and connecting the ACS scanner, connecting with Quay or whatever other registry you have, all of that can be done in a disconnected environment. Tekton Chains can run in a disconnected environment, and the ACS admission controller that validates the image signature, all of that can be disconnected. The piece of the puzzle that we have not yet shipped is an instance of Rekor, which is the signature transparency log. So if you need to validate that signature, if you need to check, to audit, the transparency log, right now that's not something that would be available on premises. It is in our plans to ship something that you could then deploy yourself, to ship a Rekor operator so you could deploy an instance of Rekor yourself. But right now, for Rekor, I think there's an instance available from the Linux Foundation, but that requires online access. So, eventually.

Okay, thank you. Sure thing.

All right, there's another question.

Hi. We've begun our cloud journey and we've built some ARO clusters; we've got our dev and our prod, and we want to start creating our DR instance, which we've already deployed. But how do we begin that journey of actually creating the replication between, let's say, our prod cluster and what we want to set up as our DR cluster, in case a disaster actually hits and we need to start over? I'm pretty sure there's some documentation on it, but if you could just give some general advice on where to begin that journey, that would be helpful.

Who's the disaster recovery person?
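[Editor's note: the signing and verification flow described in the sigstore answer above can be sketched with the upstream cosign CLI. This is a minimal illustration, not the exact ACS or Tekton Chains integration; the image name and key file paths are placeholders.]

```shell
# Generate a signing key pair (writes cosign.key and cosign.pub).
cosign generate-key-pair

# Sign a container image; the signature is pushed to the registry
# alongside the image. By default an entry is also recorded in the
# public Rekor transparency log, which needs internet access; in a
# disconnected setup you would point --rekor-url at a self-hosted
# Rekor instance once one is available.
cosign sign --key cosign.key quay.io/example/app:1.0

# Verify the signature, e.g. in a CI step or admission check.
cosign verify --key cosign.pub quay.io/example/app:1.0
```

The verify step is the part that an admission controller runs in-cluster, which is why it works disconnected as long as the registry and public key are reachable.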
The question is about disaster recovery, and strategies for disaster recovery with OpenShift.

Okay, so this is a difficult question, because OpenShift is designed for applications that you can distribute. Most of the time, what you do is put the disaster recovery layer above OpenShift, at the application layer, and then you take care of your application being properly distributed. Having said that, obviously every OpenShift cluster has mechanisms so that if one node goes down, there's an evacuation and another node can take over the workloads. That happens by default; you get it out of the box. I'm not sure if your question is more about...

It's more like, if your clusters are in different regions altogether, you want to make sure that in case...

That's cloud-specific, but not too specific. Now, it depends: hot-hot, hot-cold, and that's where you start from. With the cluster itself, let's start with the simplest one, hot-cold. You have a definition for your cluster; for instance, in the cloud you can have an ARM template to deploy your Azure Red Hat OpenShift cluster, or Terraform, or an Ansible playbook, whatever. So that's where you start: you have that playbook stored in a different repository in a different region that you can access in case of disaster. That's the coldest possible thing you can do. You can also have the cluster already deployed in the other region; that's a bit hotter. Then we move to the applications. On the application end, the question is whether you have stateful or stateless workloads. Stateless is the easiest possible case: my pipeline can ship the applications to two clusters.
So I have my front-end application deployed in region one, and at the same time we can deploy it in region two. Then I need a global load balancer that can point to the two clusters, assuming I need an active-active setup, for instance. Now, when it comes to stateful workloads, this is where things get interesting. Don't run stateful workloads inside the cluster unless you need to. In the cloud, what we advocate customers do is go use a managed service, because that managed service has backup, disaster recovery, and replication built into it. So you just point to a MySQL server, or Azure's MySQL service, that does logical replication to a different region, and then your data will be there; and your front-end applications that live in an OpenShift cluster, you just deploy them on the other end. Now, if you have stateful workloads in the cluster, for instance MySQL running in the cluster: the folks at Cockroach figured it out, they have logical replication at the application level, but that's not the case for the rest, like MySQL or Postgres or MongoDB and so on. There you're going to need a backup and restore solution. One of the most popular ones in the community is Velero: you can snapshot the underlying data, restore it in some other region, keep restoring it there, and point the other cluster to that database. So that's how to think about it. It's a matter of cost and complexity: you can go active-active, and that's going to be the most expensive; you can go active-cold, and that's less expensive. It depends; at that point it's an RTO and RPO discussion. Hopefully that answered the question.

All right, thank you.

Thanks. I think this was a great answer, and I think you heard a theme, right?
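[Editor's note: a minimal sketch of the Velero backup-and-restore flow mentioned above. The namespace and backup names are placeholders, and this assumes Velero is already installed on both clusters with a shared object-storage backup location and a volume-snapshot provider configured.]

```shell
# On the primary cluster: back up the application namespace,
# including snapshots of its persistent volumes.
velero backup create mysql-backup \
  --include-namespaces mysql \
  --snapshot-volumes

# Velero writes the backup to object storage (S3, etc.). With the
# DR cluster's Velero pointed at the same backup location, restore:
velero restore create --from-backup mysql-backup
```

Scheduling this regularly (Velero also has a `schedule` concept) is what turns it into an actual DR strategy, with the backup interval setting your recovery point objective.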
So it depends on what you need to restore. I would start by applying GitOps from the very first day; that maintains your desired state and configuration all in Git, which you can just redeploy in your DR cluster. Then there is the question about data restoration and data movement. On the roadmap (you may have already heard this this morning), Red Hat ACM will have an orchestrated failover process this year that relies on asynchronous data replication in the background and movement of the application definitions themselves to a DR cluster. So this is going to be a first-class concept in ACM, which will rely on ACM's native capability to apply and manage application manifests across clusters (it is the cross-cluster management stack, after all), and it also integrates with the VolSync project, which is how we asynchronously replicate data in the background between persistent volumes of different clusters. This is what you would set up, and it runs continuously in the background. There is obviously a recovery point and a recovery time objective, because it's asynchronous; there are costs to these things. A DR case being so rare, it's probably acceptable that there is some loss of data. But eventually it will be a feature in ACM to be able to carry over the workload if the cluster has failed and restart it in the DR cluster in a very orchestrated fashion, as a first-class ACM concept.

We'll take one question down in front.

Hi, I have a question about Konveyor, you know, converting applications, modernizing applications. Do you have any plan to support Cloud Foundry applications? Because HCL is offering that as a commercial service, but for the Konveyor project, does it work?
Yes. Their tool is based on Move2Kube, so most of their tooling is already based on Konveyor projects. What they're doing, and what we're trying to do with Konveyor, is to address all the different scenarios out there. So, first of all, answering your question: yes, we do support that migration path. Move2Kube supports it, and HCL is using Move2Kube to power their tooling. So that's the whole thing.

I'll add one comment. Under the Konveyor project there is the Move2Kube tool, so we can use that tool to migrate from Cloud Foundry applications to Kubernetes. Once you run the move2kube command, you get a bunch of YAML files describing how to deploy the manifests in Kubernetes, like a Deployment and a Service, something like that. You can also generate a bunch of YAML files for an OpenShift cluster as well. So it's still CLI tooling, but we are working on making it fancier. All right, thanks so much.

There's always another answer. To add on top of that: we are currently working on integrating the Move2Kube experience on top of Tackle, so from your application portfolio you will be able to generate the deployment manifests that Move2Kube generates and have them fully integrated within your Git repository. The whole idea is for a developer to push a button and have everything in the repository for them to start making changes, migrating the application, and deploying it on the target platform from day one. Okay, thanks so much.

Do we have another question? Right behind you, Mike; you ought to be able to reach right there.

Hello. Another question on the Konveyor project: do you support other languages than Java?

Yes and no. Java is the usual one for the moment.
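[Editor's note: the Move2Kube CLI flow described above, as a minimal sketch. The source path and output directory layout are illustrative placeholders.]

```shell
# Analyze the source (e.g. a Cloud Foundry app or Java/Tomcat code base)
# and produce a plan describing how it can be containerized.
move2kube plan -s ./src

# Answer the interactive questions and generate the artifacts:
# Dockerfiles plus Kubernetes/OpenShift Deployment, Service, etc. YAMLs.
move2kube transform

# The generated manifests can then be applied to a cluster, e.g.:
# oc apply -f myproject/deploy/yamls/
```

This is the "bunch of YAML files" workflow the panelist describes; the Tackle integration mentioned next aims to drive the same generation from the application portfolio instead of the CLI.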
For the analysis bit of it, no. The assessment is language-agnostic, but for the analysis, for the moment, we're focused on Java. We would like to bring in .NET and other languages as well, but we're still waiting for contributors who can provide that. What we did with Tackle 2, for example, was to create an add-on-oriented architecture that allows us to easily expand what we're building, and what we would like to have now are other add-ons that can analyze different languages, rather than the main Tackle analysis having to do it all.

So you don't have it on the roadmap yet?

Sadly, no. We would love to have other vendors involved in that; we have some involvement, so it might happen. I did want to point out that if you go to konveyor.io, you can contribute. We highly encourage you to join the community, and we will be happy to get your contributions.

And a second question, about multi-architecture clusters. Is it possible, or when will it be possible, to have, let's say, an x86 cluster with some worker nodes running on, say, IBM Power?

At the moment, as you probably heard this morning, we have x86 and Arm. This is on the roadmap; I don't know exactly when it's coming, we can check that out, but the answer is that we will allow multiple architectures in a single cluster, and those two architectures won't be the only ones. How far down the line this is, I don't know, but it will happen.

And how does it relate to HyperShift?

HyperShift is a different concept. With HyperShift, what you have is one cluster that hosts multiple control planes, and then what you really want is different worker nodes associated with each control plane, working against that cluster. You might (I'm thinking about this now for the first time) be able to combine the two. Why not? We're talking about two OpenShift features, right?
So OpenShift will support HyperShift on the one hand, and multiple architectures in one single cluster on the other (we already support multiple architectures). So you will have both tools. How related they are, I don't know; maybe you will combine them, because you're going to have many more ways to architect your topology and design your cluster, but they are two separate topics at the moment.

Okay, thank you. All right, one more question. I think this is going to be our last question, because we're hitting 5 p.m.

Thank you. I'm representing IBM Client Engineering, but as you might see, I'm a former Red Hatter. As far as I remember, OpenShift 4.1 was released about three years ago, so my question is very simple: any plans for OpenShift 5?

I think we have a lot of work with OpenShift 4 at the moment.

Thank you.