Good morning, good afternoon, good evening, and welcome to a special developer experience office hour here on Red Hat Live Streaming. I'm joined by some of my favorite Red Hatters, especially Serena from the future, who is here to talk about all things in the future. I'll remind everybody: I'm Chris Short, a host of Red Hat Live Streaming, and this show talks about things from the future. So if you try to do this on your cluster now and you call support for help, that could be a problem. Keep that in mind as we discuss things; this is all about the future. So Serena, would you please introduce yourself and let us know what we're talking about? Happy Tuesday, all that fun stuff.

Sure. Hey, everybody, hope everybody's doing well. My name is Serena Nichols, coming to you from the future, and we're going to be talking about one of the cool new features that will be available in 4.9, which is Export Application. I'll go into that a little more after a couple of our guests introduce themselves. I'll pass over to Jay first.

Hi, hello, everyone. Glad to be here. Today we'll be talking about a lot of the cool things we've been working on for 4.9. About me: I work on the OpenShift Dev Console as a UI engineer, so it's really nice to be here. Thank you.

It's great to have you. I'm Ryan Cook from the Office of the CTO. As you can see, I have camera issues today, so bear with me on this one, folks, but I'm very happy to be here, as you can tell; I found a camera just so I could be here. I'm going to be showing how to export applications. This was something that was really interesting to me, because I love the whole GitOps approach and I wanted to get more people involved with it, and that's why you'll see what we're showing today.

Yeah, awesome. Thanks. So today, like I mentioned, we're going to talk about one of the dev preview features that will be in 4.9.
So again, it will be in the future that you'll get access to it. The specific use case we're talking about is: I have an unmanaged application that I want to export, so I can later add that code to Git or share it somewhere, either to share with other people or to replicate it onto another cluster. We started this, like Ryan mentioned, through collaboration within Red Hat, which is pretty cool. We started talking about needing this type of feature, found out through some sources that somebody in the CTO office was working on it, got connected with Ryan, and then got Ryan connected with the developer console group as well as one of our other internal teams at Red Hat, and we've been able to produce this really cool feature.

I also wanted to talk about what future phases of this might lead to. So, in 4.9, export means just that: export it and have it on your local machine. But post-4.9, think of use cases like: I have an unmanaged app which is not yet in Git, and with a single click I'm able to export it and push it to my Git repo, which would be super cool; the caveat there is that we don't want any additional manual steps needed for that. Another use case after that would be: I have a managed app that's running, and I want to easily disable the sync, modify that application, then re-export it, push it to Git, and re-enable the sync. So how do we interact with GitOps in the future, and how do we fulfill those use cases? Those are the places we think this feature will lead us, in the future future. And with that, I'll pass it over to Ryan. Ryan's going to discuss the technical details first, and then Jay will follow up with a cool demo using the UI.

Nice. All right. So like Serena said, Jay will be showing the really cool stuff. I'm just going to talk through a couple of slides and how we got here.
So, as the audience probably knows by now after seeing a lot of these videos, getting started with Kubernetes and OpenShift is easier today than it's ever been, between the developer sandbox and, even at your own company, people setting up clusters for you, and boom, you have a namespace and you're deploying. So that excitement sets in, you're deploying things left and right, and you have no idea how you got there. Life-cycling your applications, or even having to redeploy on a new cluster, gets kind of difficult, because sometimes you don't really know what you did to get your application running. You might have made a service account here and there; you're just having fun and your stuff is working.

So with the solution Serena was talking about, what we're going to show today, we extract objects from the cluster. What that means is we pull down all of the Kubernetes objects that you have access to, that you're permitted to use. Your user account pulls all of these down, stores them in a zip file, and from there you can pull it down to your local system, commit it to Git, and then deploy with the OpenShift GitOps tooling. The workflow is that you issue what we call an export to the system. That's a YAML file, although what we're going to show today is within the UI, so ignore the YAML file; it's easier when you just click a button in the UI and the export takes off. What that's going to do is create a service account, cluster role, cluster role binding, job, and PVC. You're thinking to yourself, that's a lot of objects just to start the export process, but we're doing this with the concept of keeping least privilege. When you issue the export, it's going to read what your username is at the time, so that it only pulls the objects that you're allowed to see.
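For the curious, the export YAML Ryan mentions is a custom resource from the upstream GitOps Primer project. As an illustrative sketch only, hedged because the exact apiVersion and field names may differ in the released operator, it looks roughly like this:

```yaml
# Illustrative sketch of a GitOps Primer Export resource; the exact
# apiVersion and spec fields may differ in the released operator.
apiVersion: primer.gitops.io/v1alpha1
kind: Export
metadata:
  name: example-export
  namespace: test
spec:
  # "download" serves the resulting zip behind an authenticated route,
  # which is the flow shown in this demo.
  method: download
```

Creating this resource (or clicking the button in the UI, which does it for you) is what kicks off the job described next.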
So it's going to create a cluster role based on your user account and assign it to that service account. The job then uses that service account and is only allowed to pull things that you have access to. That gives us the least privilege required to get what you need done, and it means you're not able to pull secrets or anything else that you're not actually supposed to have. From there, you'll see that the cluster role, cluster role binding, and job go away: when the job reaches success, you no longer need that privileged cluster role and role binding. The service account transfers over to a deployment, that PVC is used again, and a service and route are created, which is really cool because it allows you to one-click download the objects that we extracted from the environment.

As for the tooling itself, as you see here, the upstream repository is GitOps Primer. Within the Kubernetes job that runs (well, the OpenShift job), we're using Crane to extract the objects, and we're using the Crane lib because once those objects are exported, they're kind of ugly. They have fields that we don't want: a UID, a date, various things that we don't really want to carry out and keep. The Crane lib allows us to manipulate that data into a usable format that we can take to another cluster or store in Git. And further down the line, when you use the GitOps tooling, those aren't fields the tool is going to obsess over; since we've fixed and removed certain fields, the tooling doesn't see anything that needs to be resolved. So with that, let's go to this awesome demo. I'm going to stop sharing my screen right now.

Thank you. Thanks, Ryan, for the detailed introduction. I'll be sharing my screen and we'll take a look at the demo now. I believe you can see my screen. Yes. Cool.
So we'll try to demonstrate what Ryan just described. We're currently in a namespace, and I've logged in as the user developer. As you can see, in this namespace we have one deployment, which I want to export. So you can see an option, Export Application. This button is shown only if the Primer operator is installed, which will be available in OperatorHub in 4.9. Once you have installed the Primer operator, you'll see the Export Application option if the user has the required access in this namespace. Once you click on it, you'll see a toast notification saying that an export of the resources in test has been started, and this is the job Ryan described. So we have a job running; forgive the image, because the operator isn't out yet and we still have to find a cool icon for it. If you click on it, in the side panel you can see the details and the resources. At any point in time, you can go to the job, go to the pod, and take a look at the log to see what it's trying to do, what's happening behind the scenes. Once the job's work is completed, it will create a deployment, and the job will be deleted.

So let me go back to topology and let's try to visualize the whole process. As you can see, the job is up and running; it will take a while to complete. And if you try to click... oh, it's done. That was super quick for this particular application. As you can see, the job is gone now, and we have one deployment, which is up. If I click on the download link, I'm taken to a URL where I need to authenticate with OAuth. I'll use developer; that was the user I was logged in with. If you provide the user ID and password, you should be able to download the whole export. So with one click, we're able to have all of our resources downloaded. If you try to unzip this... just a sec.
Yeah, if you try to unzip this, you should see the test folder, and you have all of the required resources here: ImageStream, deployment, routes, service, et cetera, whatever was in this particular namespace. With this, we can use it in different phases of the whole GitOps flow. That was the main piece we had to demonstrate as part of this flow, and there are a few more things I'd like to highlight. This process can be initiated only from topology. Now, topology can be seen in the Developer perspective and also in the Admin perspective: if you go to a namespace under Workloads, you should see the same Export Application option there, which works the same way. Another thing: if you're in a namespace that doesn't have any workloads, the Export Application option is shown but disabled; it gets enabled once the namespace has some workloads. Let me go back to test again, and you can re-trigger it. If you re-trigger it, the process runs again: it creates a job again and exports whatever state the namespace is in. And there's always a way to cancel the export or restart it if you're not happy with the export's progress.

Jay, I'm just curious, why don't we talk about why a user would want to restart it, say if they pushed the button before their application was complete? Talking about those use cases a little bit more.

There could be different scenarios where you'd want to restart. One could be if something went wrong, say the job got stuck or something like that; you can do a re-trigger. That's a use case that may or may not happen.
The second is when different people are working on the same project in the same namespace. Say you started the export and someone just added or created a new application that you want to export too; you can do a restart. And cancel is for when you triggered it by mistake, or you just don't want the download to happen at this point in time. Those are the scenarios where cancel and restart can be useful. And as you can see, the export has happened again and my download link is up. One important thing: you don't need to stay in topology. You can go wherever you want and this toast will still be shown. Even if you close your browser and come back half an hour later, after a coffee break, you'll still find the toast there.

Okay. And I think your initial export went very, very quick. Ryan, I know you both have done some testing in other instances as far as how long that process might take. I just want to make sure users know that not every export will finish within 30 seconds. I'm assuming it won't always be that quick, correct?

Right. We got lucky for the demonstration that we only had one small application to show, because what we're actually doing, when you get really down into it, is asking every single API resource: hey, do you have anything available for me? And the second part of that is: do I have access to ask if you have anything for me? The nice part is that it's two-sided, which makes it fairly fast: if I don't have access to something, it immediately kicks me down to the next step. So it's really dependent on how much you have in there, as well as whether I actually have access to this whole barrage of API resources.

One thing that I want to touch on... oh, go ahead, Serena. I'm sorry. Go ahead.
One thing that I wanted to touch on as well is that the OAuth we showed just a moment ago is linked to your namespace only. So if you are a member of another namespace and you don't actually have access to this test project, you're not going to get in; you're not going to have the ability to get in. That's how we're stopping people from accessing it. We also implemented a network policy so that somebody couldn't just bypass OAuth and ask the deployment for the zip file itself. So we've tried to cover the bases, to make sure that somebody couldn't steal your stuff if they were really trying to.

So are the artifacts stored anywhere long-term, out of curiosity? Or is it just a snapshot in time that disappears after a certain amount of time?

The artifacts actually stay within the deployment, within the PVC, for as long as the PVC and the deployment are available.

Got it. Okay, makes sense.

And the thing I was trying to say earlier: what happens once we have our export? Currently we have downloaded it, and there will be different phases where this whole lifecycle gets connected. The next thing that comes is the import flow; this download could go directly to Git at some point in the future, once we have the whole cycle in place to handle all of those things. Currently, a few things you have to do manually.

Jay, how good are you feeling with this? For example, would you be willing to take hello-openshift and maybe change the icon, right? Change the label so we have some other icon, and then do an export. And then open up a new project and see if we can import those individual files by hand, and see how that works. I think that's worth it.

Yeah, we can try that. We'll see how much it likes me after this. So let me try to do that. We can do it; I'll quickly change the runtime icon.
I mean, that's one easy thing to do with hello-openshift. Maybe I'll pick a different icon here. So as you can see, the icon is updated. Now let me quickly go ahead and do an export of it. In the exported zip file, we should basically see the two deployments there, and the YAML should have whatever specifications we provided. While that's happening, one question for Ryan. Ryan, once we have that zip file downloaded, in order to import it, is it just oc apply on the YAML, or is there some other way we can do it all at once?

So we have a couple of options here. There is one thing that I want to validate before we do this: I want to make sure that we don't have the namespace set. If we do have the namespace set, then we'll either need to remove that field or just delete the deployment, because we have the same name. So what we can do is either delete the hello-openshift deployment that we have in this namespace now, or just remove that field, roll the dice, and see what we end up with.

Let me download this first and then we'll see.

The nice thing is that in the event you need to go back, we actually tag these zip files, as you can kind of see down in the bottom left. They're tagged by month, day, year, and then hour and minute. So if you remember to click the export button frequently while you're testing, you always have a point in time to make your way back to.

That's a good point. Yeah. So let me open the deployment: hello-openshift, yes. I'll just save this code. As you can see, you have all the necessary things required here, like the runtime, the scale of three, and the image powering it. And as I was trying to highlight, we have the namespace set to test. So why don't we, for simplicity's sake, just go into the OpenShift console and delete the deployment? Yes.
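That manual cleanup step, removing the hard-coded namespace before re-importing, can be sketched in a few lines. This is just an illustration (the `strip_namespace` helper is hypothetical, not part of the Primer tooling); `metadata.namespace` itself is the standard Kubernetes field being discussed:

```python
import json

def strip_namespace(manifest: dict) -> dict:
    """Hypothetical helper: drop metadata.namespace from an exported
    manifest so it applies into whatever project the current context
    points at, instead of being pinned to the original namespace."""
    cleaned = json.loads(json.dumps(manifest))  # cheap deep copy
    cleaned.get("metadata", {}).pop("namespace", None)
    return cleaned

deploy = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "hello-openshift", "namespace": "test"},
}
print(strip_namespace(deploy)["metadata"])  # → {'name': 'hello-openshift'}
```

With the namespace gone, the manifest can be applied into a fresh project without colliding with the original.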
The only reason why I'm saying that is because the image backing it is saved in the registry under the namespace test, I believe. So if we were to try to bring this in, we would have to rebuild and all of that. Good. So I have deleted the deployment, and I'll grab the oc login command. Or we can even do a direct import; I think we can do that quickly. Yeah, that's a little better. Yeah, there is some cool stuff in the console. Yeah, definitely. Let me copy and just paste it and let's see.

You can also drag and drop that, either way. Oh, nice. Yeah, from your Finder window. That's cool. Sorry. You're fine, tabs are hard. Yeah, I should know, I have like 500 tabs open right now. For this presentation, there is a dark side of tabs that I'm not even showing, on another screen; you can't close them. Let me do copy and paste; we can do drag and drop later.

So we can take a look at it again, and we've got the final Create. Looks good. So we have our deployment details, and apparently it's pretty quick. As you can see, again we have the scale of three. Awesome. Wow.

And so right now, Ryan, can we go over what kinds of things we do support? For example, if somebody created a service binding connector between two objects, would that be retained in the export?

So right now, in the current state of how it operates, everything that a user has access to within a namespace is pulled down. The only thing that might not be there that a user may want: there may be some, I guess I'll call it cruft, fields that they might not want to store, like I said earlier. If a UID is not scrubbed out of a field by the Crane library, that would be the only thing. But anything that is within a namespace is there. Jay, if you could go back really quick.
Oh, actually, we don't have it. Never mind. Whenever a Primer export job runs, the job will show you that it's asking all of the API resources for their objects. So if you think about it, if it exists in the namespace and there's an API resource for it, we're going to attempt to pull it down. It may not look pretty, I will say that, if there are no scrubbers on it, but we will pull it down.

Okay, great. So hypothetically... Right. So yeah, once this log loads up here, as you see, it's adding all of these resources. It goes by fast, but see where it says it cannot access something, and then where it says it's adding objects. The nice thing is that it's going to add things regardless. And as you see there, we just missed it a little bit, but it's creating the whiteouts. What the whiteouts are is, when something needs to not be saved, they make sure it doesn't land in the zip file. For example, we don't really want to record the pods themselves, because they're part of deployments, so we just go ahead and wipe those out. That way, when you pull down the download, they're not actually saved. But Serena, yes, to your original question: anything that is in the namespace is pulled down.

Okay, awesome. So that means if we have a serverless app, like a serverless deployment, we're good. And if somebody has their own cluster and has custom resources, those hypothetically would also get exported as well. Exactly. Awesome, cool.

So I think this was the crux of what we wanted to share today. And as Jay mentioned, which I did not earlier, thanks for mentioning that: this will be an operator. You'll have to have the operator installed in order for this export app feature to be available.
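The whiteout and field-scrubbing behavior described above can be sketched roughly as two passes over each exported object. This is a simplified illustration under stated assumptions, not the actual crane-lib plugin code: the scrubbed field names are common server-populated Kubernetes metadata, and the whiteout kinds follow what's said here (pods and the Export objects themselves):

```python
from typing import Optional

# Assumed lists for illustration; the shipped plugins may differ.
SCRUB_FIELDS = ("uid", "resourceVersion", "creationTimestamp", "managedFields")
WHITEOUT_KINDS = {"Pod", "Export"}  # transient or tool-owned objects

def scrub(manifest: dict) -> Optional[dict]:
    """Return a Git-friendly copy of a manifest, or None (a
    'whiteout') if the object should not land in the zip at all."""
    if manifest.get("kind") in WHITEOUT_KINDS:
        return None
    # Drop the server-maintained status block entirely.
    cleaned = {k: v for k, v in manifest.items() if k != "status"}
    # Drop metadata fields the API server fills in on every object.
    cleaned["metadata"] = {
        k: v for k, v in cleaned.get("metadata", {}).items()
        if k not in SCRUB_FIELDS
    }
    return cleaned
```

Run over every object in an export, a pass like this drops pods entirely and leaves deployments with only the fields worth committing to Git, which is why the GitOps tooling later sees nothing that needs resolving.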
And if you're somehow not familiar with operators, I just dropped a link in chat to the ebook, what I'll call the operator Bible; Josh Wood and Jason Dobies did a good job on that. So, okay, Jay, for example: I hit export and then all of a sudden I accidentally click that X. How do I get it back? How do I get that download?

So you can't get the toast back once it's dismissed, but you have this route over here, which will do the same thing for you, because the deployment and PVC still exist, so it should be accessible. Another way to check, if you love YAML and other resources: basically we're creating an Export resource which handles all of those things, so you can search for it, see the status, and get the route from there. Cool. And as Ryan highlighted, even if it's shared with someone else, colleagues who don't have access to it won't be able to download it.

Beautiful. That's very handy, and simple to get to. I thought that was going to be harder, to be honest with you; that's why I asked. So what use cases are we enabling here, right? How do you see folks using this feature the most? This is for Ryan or Jay, or Serena for that matter.

You got this, Jay. Sorry. So yeah, how I see it as a developer myself: in what scenario could it be useful for me? When I have worked on something or created a lot of workloads or use cases involving service binding or managed services or serverless, for instance, it'll be a quick export for me to take it to another cluster, or maybe to share with some of my colleagues who have access to the same namespace or want to recreate it. As a developer, I could even find this useful when we're working on issues. Say a customer finds an issue in some particular namespace; they can just share the export, without secrets.
I mean, everything needed to recreate the issue. That's another use case which I think will be very useful, but it's not the important one. The important one is being able to manage your unmanaged state, which Serena talked about at the beginning. You can add to that, Ryan and Serena.

Yeah, the only thing I'll add is, say, for example, you're developing locally on CRC, CodeReady Containers, and you give this awesome demo to your colleagues. You can then press the button, export your stuff, and bring it onto a real cluster. So I think the portability, as well as giving the developer space to work and then knowing that they're going to extract everything they need and have a successfully running application after the fact, is definitely something that's useful for them.

Yeah, that's the kind of thing that I think of, right? Like, oh, it works in my cluster; hey, check this out. Hand it over to somebody else so they can test it in their cluster and add on to it, then export it, save it upstream in their repos, and deploy it however they see fit. That could be very handy. I don't know, what do you call that? Outer loop, Serena, right? Inner, outer, both? More in the middle? Yeah, kind of in the middle, maybe.

What other cool things can we show off here? Anything? I think that's what we had focused on today, around the export piece, and next time we come we're going to bring additional people to talk about those advanced pipeline use cases: pipelines as code and the Tekton Hub integration. Oh, yeah. Because I think we had talked about those last time, but we weren't able to demo them at that point.

So this export function is very cool. I also like the fact that it covers every resource type and everything. You mentioned secrets were not being saved, correct? Or were they being saved? I forget.

Secrets are actually saved.
The only things that are scrubbed are the export objects themselves, and we also scrub out pods, just because those are usually linked to deployments or deployment configs. Very minimal things are being scrubbed, and you can actually see those within the repository under plugins. And the nice thing about this is that we want to get to the point in the future where, between the Konveyor Crane group and us, we're sharing plugins back and forth, to provide a good user experience for upstream as well as for OpenShift.

Nice. So folks, if you have questions, feel free to drop them in chat; I forgot to mention that earlier. We are here to answer your questions if you have any. Anything else, Serena? I have no problem wrapping up after 30 minutes.

Yeah, I think we're good on our side. Thanks for having us.

No problem, this is really cool. I'm glad this feature is coming and I can't wait to make use of it.

Yeah, keep your eyes open, because as continued releases come, you'll see us enhancing this further. Like I said, in the future future.

In the future future, yeah. One thing I foresee happening someday is we're doing a demo here on this channel and all of a sudden that person's cluster goes wonky on them, and then they'll be like, okay, hit export and hand it to me and I'll run it on my cluster, and everything's hunky-dory. I could totally see that happening on the stream sometime in the near future. Yeah, that would be really nice. You can break your cluster here and there.

All right, well, folks, thank you very much for tuning in. We appreciate you coming. Tune in in two weeks, or four weeks, I forget. Two weeks from now, I think, is the advocate team, and then we'll be back on in four weeks. Cool, so yeah, four weeks from today, so the fifth of
September. We'll be demoing those new things that Serena from the future future wants to talk about. Until next time, stay safe out there, folks.