Let's get into the big topic, open source, something that we actually have. This is so awesome. Welcome to this week's Ask an OpenShift Administrator office hours live stream here on Red Hat Live Streaming. I am Andrew Sullivan, your host and I am joined as always by Johnny Ricard. Hello Johnny, how are you today? I'm good to go man, how are you? Do you know how special today is? I do. Yeah, yeah, there's three reasons today is a special day and I'm not even counting Marco and Eric joining us. There's three separate reasons. So one, it's May the 4th, right? So happy Star Wars Day to any Star Wars fans that are out there or a lamented, lamentatious, I don't know I'm making up words now, Star Wars Day for any Star Trek fans I guess. So two, this is the two year anniversary of OpenShift TV. Oh dang. Yeah, so two years ago was when we had the very first stream back when it was, I'd say mostly the same crew, but that's not really true. We've added quite a few streams, we've done a whole bunch of other stuff on here. Stephanie, of course, and you, right? All new. Not that this show was the first one to stream, but yeah, two years ago was when we had our first stream here on OpenShift or it's now Red Hat live streaming, but at the time it was OpenShift TV. Nice, awesome. So and then third and certainly not least of all, this is also, from what I remember, I should probably double check this, this is the week that Kubernetes was released. So I believe that this is Kubernetes' 7th anniversary. So, its 7th birthday. I think it was 2015 was when Kubernetes 1.0 was released. Nice. So yeah, it's a busy week. Yeah, that's awesome. We also had the big release yesterday of the latest version of Kubernetes, so it's right on time. Yeah, I know you've got that in our top of mind topics to talk about. So yeah, it's crazy, right?
24 releases in seven years, and I'm constantly flabbergasted that we have, you know, and by we, I don't mean we, you know, you and I, I don't mean we Red Hat, I mean, like we as an industry, have been able to adapt to and adopt this model. You know, at one point Kubernetes was doing quarterly releases, and there's a not-inconsequential number of folks who were keeping up with and pushing that release schedule. And, you know, I've said, I don't know how many times, right? I'm an old storage admin, right? Storage admins, like maybe every two years we'll do an update, you know, because you got to schedule downtime for like everything, like the whole data center you have to have downtime for. So, you know, it's having this platform that's hosting all of these applications. And, you know, in many cases the core of the business is being hosted on platforms like OpenShift. And, you know, yeah, we push out z-streams every week, every two weeks, and folks are, you know, folks are adopting, you know, all of that other stuff. I won't say everybody, which is part of the reason why Marco and Eric have joined us today. Because, yeah, hello. And so not everybody is moving that fast, right? And I think you two are very familiar and very knowledgeable around, you know, folks who are, they're not laggards, because I think there's very good reasons that some folks are still on OpenShift 3. But it's very important that we start considering and start, you know, executing on those upgrades and those migrations today. And arguably a year ago, right? So, that is one of the reasons why... And maybe it did start a year ago, right? Like, we've been doing this for two years already. We have migrated like thousands and thousands of applications. So, yeah. But some move fast, some move slower, for all kinds of different reasons. Yeah. Exactly.
So, yeah, so today's topic, if you haven't seen any of our social media or any of that other stuff that has gone out, is around the migration toolkits. And in particular, what I wanted to focus on is the migration toolkit for containers. So, we'll talk a little bit more about the other migration toolkits once we get a little further in. But, yeah, how do we move those applications? Whether it's OpenShift 3 to OpenShift 4, whether it's within OpenShift versions, I think, I don't want to get ahead of myself here, but I think we can even migrate from just vanilla Kubernetes into OpenShift as well. Am I getting that right? That's a new use case for us that we're trying to solve with the brand new tool that we're working on. Yeah, like, yeah, it's actually functional, but it's not something that we have a full GA release of yet, like it's a tech preview product that we're working on right now. Very cool. So, yeah, constantly growing, constantly expanding and, you know, ultimately helping customers stay ahead. It's always amazing to me that OpenShift 3.11, which is based on Kubernetes 1.11, which is now 13 full versions old. You know, if you think back to all of those releases, all of those things, you know, we're talking at least three years. Actually, I think it's more like four years. If I'm counting correctly, because, you know, 13 versions, probably three and a half years, you know, three and a half years since that release. And it's still supported by Red Hat for now. So, which is part of the reason why we're talking. All right, I'll quit rambling. Let's move on to our, actually, I almost completely skipped over because I've already been chatting with you, Marco and Eric. So Marco, if you don't mind introducing yourself. Yeah, my name is Marco Berube. I'm based in Canada. I'm a product manager working on migration toolkits at Red Hat.
I've been doing that for probably almost five years now, but I've been at Red Hat for 10. And before that, I've been on the sales side in Canada. So in my first couple of years at Red Hat, I've been, like, helping our customers in Canada migrate to cloud. And now I'm pleased to work with our super sharp engineering team to actually build migration tools, help customers stay on the latest and greatest of our container platform. Yeah. So super important with all the things that we've got going on. We'll be talking about the etcd inconsistency issue, for example, again, so staying on the current versions is always important. And then Eric, if you don't mind introducing yourself. Hey, sure. Thanks for having me on. My name is Eric Nelson. I'm an engineering manager today. I joined Red Hat in 2015, so around the time Kube, I guess, hit its first version. I've been, in general, working with OpenShift since then. I started life at Red Hat as an engineer, and I kind of continued up through the independent, or individual, contributor route. And now I'm working as a manager in engineering. So I still get to play with a lot of fun and interesting stuff. And I work with Marco, solving customer problems around OpenShift 3 to 4. And that mission has kind of expanded, like quite a bit, to include application portability and workload portability, migration of state, just a lot of kind of interesting problems in that space. So when you talk about shutting down data centers in order to do storage migrations, we've definitely seen what that looks like. And we're trying to make that less painful. And that's basically all I've been thinking about for the past three years. So thank you. Yeah, some folks know Johnny and I in previous lives, I should say, because apparently I'm the new guy here on the show. I've only been at Red Hat for three and a half years. But a long time ago, we actually worked together.
We were peers as administrators. And I know Johnny and I had many late nights together, planning outages and shutting things down and coordinating all of that. So it's working with the application teams who are like, what do you mean? I don't use that storage. Well, it's on the virtualization platform, so yeah, you do. Anyways, let's get to this week's top of mind topics. So I see some comments here in chat. Mike Murphy, yes, may the fourth be with you as well. Happy Star Wars Day. Any documents for OpenShift 4.10 and disconnected? So yes, you can deploy, and it's in the documentation. Johnny, are you looking that up? I am looking it up right now. So there's a specific page for disconnected OpenShift. Basically, once you mirror the images across, you use the image content source policy. You apply that in your install config. And then it doesn't matter if you're doing UPI, IPI, bare metal, whatever, it just works all disconnected. And that ICSP, the image content source policy, is what redirects all of those image pulls to your offline mirror. And we also have a stream. We did a disconnected deep dive stream. So we'll dig that link up as well. Can you share free resources? So I'm going to go ahead and share my screen here because I'll also need it for our top of mind. So I always recommend to folks kind of three places. So one, you can go to learn.openshift.com. This will redirect you over to developers.redhat and inside of here, you have just a huge number of resources available to you. For example, this OpenShift 4.9 playground. There's a bunch of other stuff in here. You can see there's 10 pages of these samples inside of here. And they're not all developer related. They imported all of the ones that were available before. So there's a bunch of stuff available from there, all completely free.
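For reference, a minimal sketch of the ImageContentSourcePolicy Johnny described for disconnected installs. The mirror registry address is a placeholder; the sources shown are the release image repositories a mirror typically covers:

```yaml
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: mirror-registry
spec:
  repositoryDigestMirrors:
  # Redirect release payload pulls to the offline mirror (address is an example)
  - mirrors:
    - mirror.example.com/ocp4/openshift4
    source: quay.io/openshift-release-dev/ocp-release
  - mirrors:
    - mirror.example.com/ocp4/openshift4
    source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
```

In practice `oc adm release mirror` prints a ready-made snippet like this for you to drop into your install-config or apply to the cluster.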
And it's literally like, you know, you click the start here and it might have to provision an environment in the background, in which case if it does, it's like a 15 minute wait. But otherwise you can see I can click the launch here and jump right in. The second one that I would suggest to people is try.openshift.com. So this one you'll see also redirects over to cloud.redhat.com. Oh, no, we broke it. Try.open. Oh, yeah. So our web properties are undergoing a bit of a migration right now. So note to self, harass the marketing team about why our try page is now broken. And by the way, that's one of the new use cases we're working on from a migration point of view: if you start there and you're building your first application on OpenShift there, then in the future, we'll have a tool to help you migrate the application to maybe, eventually, your fully enterprise-supported OpenShift cluster, right? We've been asked by some people: when I started an application on try.openshift.com, how do I eventually move it onto my enterprise-supported cluster? So that's one of the use cases we're looking at right now. Very cool. So normally when this page actually works, it has links to a bunch of resources, including how to register for a developer account. And with a Red Hat developer account, you are entitled to 16 cores of OpenShift that you can use for single-user non-production. So you can deploy OpenShift. You can do all the things with it. You can learn it, whether you're an administrator or a developer. And last but not least, I always suggest to people that they look up the DO080 training course. So this one, which we can see is here on Red Hat Learning. It is a free course. You can go, you can take this. It kind of introduces you and gives you some basics around Kubernetes and OpenShift. It's a great start down that path. I know it says based on 4.1. I'm positive that that is not true now. I'm sure it's something much newer than that.
But it's a good place to get started. And again, it's completely free. So I always suggest those to folks. Most of the audience is on YouTube, actually. So for anybody who doesn't know, on the back end, we actually stream to three different places, sometimes four. So we normally stream to Twitch and then we stream to the OpenShift YouTube channel and the Red Hat YouTube channel. And then we also stream to Red Hat TV. I think that that program was temporarily paused. Stephanie will probably send me a message in a moment. But yeah, I always tend to tell folks to check out YouTube because YouTube actually keeps a history of, right, they don't delete the streams after they're gone. So Twitch, they're only there for I think 30 days. And you can't leave comments and a lot of other stuff. So I always suggest folks use YouTube. So top of mind topics, I'm going to stop getting distracted, I promise. So two things real quick, two and a half things real quick. So one, next week is Red Hat Summit. I believe that it is a no cost virtual event. There will be some in-person folks. But I think the majority of us, myself included, will be virtual. So you can go to redhat.com slash summit. I'll just show that because I happen to have my browser shown. So you see you can sign in here. Tons of information that'll be coming out there. Definitely check it out if you have the chance. I think that we are going to be having a blog post be put up. I don't know. I'll have to confirm around anything OpenShift related that will be at Red Hat Summit. So keep an eye on the OpenShift blog, which is cloud.redhat.com slash blog. So see the hybrid cloud here, hybrid cloud blog here. And this is where we have all that information out. So the second one, which is semi related here, if we scroll down a little bit, we have this blog post right here. We have a blog post that has all of the stuff that is happening at KubeCon EU. So happening over in Valencia, Spain the week after next.
So there is, and I was taken by surprise to be honest, there is a stunning amount of Red Hat related things and projects and all the other stuff that is happening at KubeCon. So whether you'll be there in person or virtually, there is a ton of information that will be coming out there. In addition to which, there is the OpenShift Commons that will be happening, as well as, I think, a GitOps event. I think there are several other events that are happening. Let's see if they are on this page. Here is the Commons Gathering agenda. So I definitely always, always, always encourage folks to check out what we have got going on at KubeCon and the Commons Gathering itself. I don't know if you can register separately for it or not. You can register to attend virtually. So it looks like it might be no cost to attend that. I don't know. But these are always great. Diane and the Commons team do a phenomenal job of putting those shows on. So I'll close out those two things by saying that as a result of Summit next week, KubeCon the week after, we will not be streaming the next two weeks. So we will return on May the 25th. So the next two weeks, please direct your attention and your views over to all the stuff that we've got going on there. So I'm looking forward to seeing Summit again. I think, Andrew's personal opinion is, I think that this first event will have a lot of great information. But the deeply technical information will be in the second phase of Summit events. So register for this Summit, attend all the stuff that's really cool, and you'll automatically be registered for and get all the emails and all the other stuff about the follow-on events, which will have a lot of that technical information. Let's see, Johnny, Johnny, you had a couple of things in here. Yep, so I'm sorry, let me just get over to the link really quick. Okay. So yeah, the big one is that 1.24 of Kubernetes released yesterday. It was the first release of 2022.
It's called Stargazer. They had, you know, rightfully so, it fits with everything going on right now. But there were 14 enhancements that went to stable. I think there's 15 that are going to the next stage, they're going to beta or something like that. And then there's a couple that have been deprecated. But the big ones that I think are coming out as stable are a lot of the things that Andrew's been talking about before, with the in-tree storage providers moving over to CSI. So if you look at the release notes, there's stuff like on Azure, I think there's a couple of ones that are migrated from in-tree to CSI. And then another cool feature that's coming out is storage capacity tracking, which, if you think about it, it's essentially where you have a pod that's going to be scheduled on a node that has a PV requirement. Well, right now the way it works is it'll go to that node and then try and schedule the PV or build out the PV. And if there's no storage available for the exact capacity, then it'll just kind of die, where now it's going to go out and it's going to do a check and make sure that that storage is available before it tries to actually provision the PV. So it's a pretty cool feature. And then there's some API deprecations that are coming, not this release, but really the next release. And the big one is going to be the batch API. So we'll see that in the OpenShift 4.11, 4.12 time frame. So that'll be Kubernetes, most likely, like 1.25, I think, something like that, or 1.26. And the last thing out of this one is the beta APIs are all turned off by default. So basically, like, if you need to turn them on, you'll have to manually turn them on. And I know I said the last one, but really the last one is the Dockershim was actually fully removed from the kubelet in this release as well.
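The storage capacity tracking feature Johnny describes works by having CSI drivers publish capacity objects that the scheduler consults before placing a pod with an unbound PVC. A sketch of one such object, with all names, the storage class, and the topology label values as illustrative placeholders:

```yaml
# Published by a CSI driver, one object per storage class / topology segment;
# the scheduler reads these to avoid placing pods where provisioning would fail.
apiVersion: storage.k8s.io/v1
kind: CSIStorageCapacity
metadata:
  name: example-capacity
  namespace: default
storageClassName: fast-ssd
capacity: 100Gi
nodeTopology:
  matchLabels:
    topology.kubernetes.io/zone: us-east-1a
```

The `storage.k8s.io/v1` version is the one that graduated to stable in 1.24; drivers opt in to publishing these, so not every storage class will have them.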
And while you've been talking here, so, Rob Szumski actually did a blog post on planning for and reviewing API deprecations. I don't know if my screen is still shared. So there's a blog post that Rob did about API deprecations and finding those and all that other stuff. So he walks through it. Again, we also did a stream on this, so you can find all of those deprecated APIs that are being used in this one. And this will actually help you find those differences too if you're migrating workloads from say 3.11 to a modern 4 cluster. And that's going to become more likely as 4 continues to progress. So that's something we've been paying attention to ourselves. Oh, that's awesome, because I think it was, you know, 4.8 to 4.9 was the first major deprecation, or removal, I should say, not deprecation. So now if you're going into something later than 4.9, or in the future later than 4.12, you know, you might not even have any cluster to spin it up in to get those API warnings through the CLI or through the GUI to be able to fix them. So that's great news. And hello, Tiger, I see your hello. Thank you, Stephanie, for highlighting it because I missed it in the live chat. Yeah, so definitely keep an eye on this. One thing I like to point out to folks, and I think I did this on Twitter, is if you're looking in your cluster and you see those, let me see if Rob included an example in here. Yeah, so if you do this "get apirequestcounts", you'll see maybe an API that is removed is being used, and then you can dig in, and you can see what precisely is using it. Generally speaking, if it's something at the system level, or if it's an OpenShift service, Red Hat will take care of those, right? The ones that you really want to care about are the ones that are from your applications, right?
Any locally developed, you know, or partner-developed operators, although we do try very, very hard to work with our partners to make sure that they're aware of those and they're addressing all of that stuff before it becomes a critical issue. But yeah, generally speaking, Red Hat will take care of all the Kubernetes, well, Kubernetes takes care of the Kubernetes ones, Red Hat will take care of all the OpenShift ones, including all of the Red Hat operators. So it's really the stuff that your teams, your application folks, are creating to be aware of. All right, I think I had one more in here. Yeah, because we talked about API deprecations. Oh, I wanted to very quickly talk about the etcd consistency issue. So we've been talking about this for a few weeks. Remember, this was the reason that updates between OpenShift 4.8 and 4.9 were temporarily disabled. So the issue is in etcd version 3.5.0 through 3.5.2, there was the potential for an inconsistency to happen. Basically, in very specific scenarios where etcd was shut down uncleanly, like it was terminated due to an out-of-memory condition or the node failing, something like that, it wouldn't have committed all of the data to disk due to, basically, a bug. So with 4.9.28 and 4.10.9, we reopened the upgrade path from 4.8 to 4.9. And we have been encouraging everybody to upgrade to at least those versions. So if you're on 4.9, please get to at least 4.9.28. If you're on 4.10, please get to at least 4.10.9. But I want to point out that the bug isn't fixed in those two releases. We've simply implemented a mitigation to help detect it and prevent that corruption, or that issue, from happening. And what could happen is, essentially, if that scenario occurs, it will kick the etcd member out of the cluster. So you'll end up with a control plane node that is in a not-ready status, where, you know, etcd is not fully consistent.
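The apirequestcounts check described above is easy to script once you have the listing. This is illustrative only: the listing below is a captured sample in the shape of `oc get apirequestcounts` output, not live cluster data, and the exact APIs and counts are made up:

```shell
# Save a sample listing in the shape of `oc get apirequestcounts` output.
cat <<'EOF' > /tmp/apirequestcounts-sample.txt
NAME                                  REMOVEDINRELEASE   REQUESTSINCURRENTHOUR
cronjobs.v1beta1.batch                1.25               23
ingresses.v1beta1.networking.k8s.io   1.22               4
deployments.v1.apps                                      1200
EOF

# Rows with a REMOVEDINRELEASE value (3 fields instead of 2) are the APIs
# to chase down before upgrading; stable APIs leave that column blank.
awk 'NR > 1 && NF == 3 { print $1, "goes away in", $2 }' /tmp/apirequestcounts-sample.txt
```

Against a real cluster you would pipe `oc get apirequestcounts` into the same awk filter, then drill into an individual object to see which users and user agents are still calling the doomed API.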
So if that happens, basically, you just have to do a recovery of that etcd node to bring it back into the cluster. So rather than having that, you know, silent corruption issue, well, probably not silent, but rather than having that risk of it introducing a corruption into the cluster, it just prevents it from happening right at the go. You just do that recovery and get there. So not fixed, but definitely much safer once you get to one of those two versions or later. And they are working on the eventual mitigation, or excuse me, the eventual fix. I don't know precisely what version of OpenShift that will be in, however. All right. So I see, Stephanie, thank you for including links around the migration toolkit learning path. Yep. I'm just reviewing chat real quick to make sure that there's nothing here before we move on to today's topic. So as we said at the beginning, this is something that is really, really important as we approach the end, the last final, the end of the end when it comes to OpenShift 3 and specifically OpenShift 3.11. Because, you know, for better or for worse, it was not an in-place upgrade, it's a deploy-a-new-cluster-and-migrate-your-applications situation. But I think application migration extends and is applicable well beyond just an OpenShift 3 to OpenShift 4 migration. Marco and Eric, you know this far better than I ever could. Whether it's, you know, the stereotypical examples: hey, we acquired this cluster, this other team was doing it and now it's coming under our management and we want to consolidate. We're seeing a lot of interest in migrating to ARO and ROSA these days. So there's a lot of questions about migrating from on-premise to the cloud or migrating from on-premise to a managed service in the cloud. So yeah, that comes up a lot. So that's another reason. And it's not like it's one tool fits all either, right?
Like, when we're talking about migrations with customers, there's a wide spectrum of use cases, and even of need for a migration tool in the first place. Some customers can reprovision everything from pipeline, they don't have state, like they don't need a migration tool at all. And some others will have no automation at all. And you have all the other scenarios in between, right? So the most common scenario is: I have a percentage of my applications that are stateless and can be reprovisioned from pipeline. I have a percentage of my applications that can be reprovisioned from pipeline but has state, so I need to migrate the PVs, right? Even if I can reprovision from pipeline, the data is important to me and that needs to be migrated. And I have another percentage of my applications that have no automation at all, because it's not as critical for me, but still important enough that I want to move it to my new cluster. And MTC can deal with all those things, depending on your use cases, but it's not one scenario fits all, like it's about planning a migration that makes sense for you and your applications and the kind of automation you already have. Yeah, and I mean, I can think of dozens of scenarios, ranging from we had, you know, one or two or a small number of very large clusters and now we're moving to a model of many smaller clusters, you know, or the inverse. There's all kinds of scenarios that come to mind. So, yeah, migration toolkit for containers. So the upstream project here is Konveyor. So, yeah, let's talk about Konveyor for a couple of minutes. So Konveyor is actually an agglomeration of multiple tools, not just migration tools; like, what we're trying to have in the Konveyor community is migration and modernization tools.
So there's other projects, right, and they are all upstream, and as with pretty much everything we do at Red Hat, we work upstream first. So this is where we build our migration toolkit for containers upstream. The name of it is Crane. So if you hear that, it's actually the upstream version of MTC and our future migration tool that we can talk about later. But there's also other tools, like Tackle, that will help you do a full analysis of your current applications. Like, it's more about migrating traditional legacy applications to a container-based system, so to Kubernetes. So if you have legacy apps that you want to containerize, Tackle can help you assess all that and then help you migrate those applications to containers. We also have Forklift, which is the upstream of MTV. So Forklift upstream is the tool that can help you migrate virtual machines from traditional hypervisors, like VMware or RHV, to OpenShift Virtualization. So all those projects help you, like, it's this 6R approach, right, like you probably heard in the past, re-platform, re-factor, I always forget one or two, but anyway, it's all about looking at the overall applications and using the right tool to help you modernize your apps to a Kubernetes platform like OpenShift, right? And that's what we do upstream in Konveyor, and that's why we have all these projects. And we even have Pelorus now, which is a tool for, once you have modernized, measuring your success. So Pelorus will help you have metrics, right, the DORA metrics, that help you understand, like, are you actually modernizing the way you produce code and push code all the way to production. So yeah, it's all about modernization in the first place. That's interesting. I wonder, at least for me, I initially thought that migration toolkit for containers was essentially a wrapper around Velero, and I don't know whether or not Velero is used, well, I know that it is, but it's more than that too, right?
It is, yeah. You can think of it as kind of a, I mean, I can get deeper into the architecture if that's interesting, but MTC relies heavily on Velero. Of course, Velero is like your Kube-native backup-and-recovery, disaster-recovery-type solution. A lot of people use it; it's got a lot of usage in the wild outside of us, and so it did a lot of things that we knew that we needed to do when we were first designing MTC. But backup and recovery and DR are different from migrations, and that boils down to kind of the experience around them. MTC is supposed to be an out-of-the-box migration tool for mass migration, so typically you'll see it under cluster-admin usage, under the scenario of mass migration, like cluster evacuations. So the initial mandate was to provide a solution at the workload layer, so it doesn't handle the control plane, to help people get off of 3 and into OpenShift 4. It quickly became apparent that it was useful even outside of just that use case for migrating workloads around from cluster to cluster, so it'll do 4 to 4. But really the way that it works is by orchestrating backups and recoveries across two different clusters that both have Velero and that share some intermediate repository in order to share those objects. As MTC has evolved, I've been on OpenShift TV a couple times now over the years, it's kind of been interesting to see the evolution of our product and what we've been talking about over time. Some of you may even be familiar with the project when it was called CAM, which was, like, a Container... I don't even know what it stood for, to be honest, I'm forgetting. But yeah, I went back and I kind of watched a little bit of that and it was interesting to see what we were talking about at the time. At the time, it really only supported indirect state migrations using Velero and Restic, which is what Velero uses under the covers for its PVC data.
Since then we've developed direct solutions ourselves that use rsync, so you can rsync data between PVCs directly between two different clusters. So we have indirect and direct. And then there's kind of three buckets when you think about a migration. There's your Kubernetes resources, which are relatively easy compared to state. So you're talking about basically serializing and deserializing your Kube objects into and out of intermediary storage, object storage. You've got your state, of course, so if you have a stateful application, your data that lives inside of the PVCs. And it's also possible to include things like config maps or secrets under that umbrella as application state. Maybe your application is speaking directly with the Kube API and storing state there. And then third, you've got your images. So we'll also handle images. And so the product's definitely evolved over time. And the reason I mention this as we're talking about Velero is that we've gradually added different solutions to the product that are not necessarily using Velero, but fundamentally MTC uses Velero under the covers to perform migrations through an orchestration of backups and recoveries. One thing I would add to what you just said, Eric, is, if you think about it, Velero, the use case was to do backup and restore. In our case, we learned the hard way at the beginning, when you have customers trying to migrate hundreds or thousands of applications over a weekend: if you have to back up everything including the PVs and then restore everything, you're copying the data twice. So that slows you down significantly, right? That's why the team has been working on ways to do the migration directly on the PV side, so that we don't have to do a full backup and a full restore, to support those that have a time limit on their migrations and need it to just happen as fast as possible, right?
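A MigPlan tying those pieces together looks roughly like this. All names are placeholders, and the `indirectVolumeMigration: false` toggle reflects the rsync-based direct path Eric described (your MTC version's exact fields may differ, so treat this as a sketch):

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: demo-migplan
  namespace: openshift-migration
spec:
  # Source and destination clusters, registered as MigCluster objects
  srcMigClusterRef:
    name: source-cluster
    namespace: openshift-migration
  destMigClusterRef:
    name: host
    namespace: openshift-migration
  # Intermediate object storage shared by both clusters' Velero instances
  migStorageRef:
    name: replication-repo
    namespace: openshift-migration
  # Workload namespaces to migrate
  namespaces:
  - my-app
  # Copy PV data directly between clusters with rsync instead of
  # staging it through the replication repository
  indirectVolumeMigration: false
```

A MigMigration object referencing this plan then kicks off the stage or cutover run.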
So maybe, I don't know if this is an appropriate question or not, but is there like a relationship or some kind of interaction between what you're doing with Velero in the migration toolkit and what the OADP, the OpenShift API for Data Protection, folks are doing? 100%. So we work really closely with them. I've worked with Dylan and his team since before MTC was a thing and CAM was a thing. OADP, in a lot of ways, you can think of it as productized Velero. It does more than that, right? A lot of that boils down to Velero plugins that handle OpenShift-specific logic, and a lot of that stuff actually started as part of MTC. So for a while, there was no real supported manner for running Velero. And so we saw obviously a need for backup and recovery and DR, and so OpenShift API, sorry, OADP, provides that, in addition to providing APIs for third parties to develop against. And the newest version of MTC, and this has changed since the last time I was on here, actually uses, so we basically spun off that layer and now that's what OADP is. And so we actually just talk to OADP. So that's been really nice because it's under active development over there. And so we kind of are getting those fixes for free. There is a little bit of a caveat to that, in that, I don't want to get too far into the weeds with this, but it relates to deprecated APIs. Velero has moved on to v1 CRDs, whereas we still have to care about clusters that didn't have support for v1 CRDs. So we still have this idea of a legacy agent that runs a Velero that uses the v1beta1 APIs. So OADP is not in use on your legacy platforms that don't support it. That comes up often and it has become more of a thorn in our side as, like I said, 4 continues to move forward. But yeah, so I thought I would just mention that. But yeah, on modern platforms where OADP is supported, MTC just talks to the OADP APIs in order to drive migrations. Yeah, that makes sense, right?
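Under the covers, the orchestration Eric describes amounts to MTC driving Velero Backup and Restore objects through OADP. A stripped-down Velero Backup custom resource looks something like this; the namespace and names are assumptions, and MTC generates these for you rather than you writing them by hand:

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: my-app-backup
  namespace: openshift-adp
spec:
  # Which workload namespaces to serialize into object storage
  includedNamespaces:
  - my-app
  # The BackupStorageLocation (the shared intermediate repository)
  storageLocation: default
```

A matching Restore object on the destination cluster points at this backup, which is the backup-then-restore pair MTC orchestrates across the two clusters.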
Especially the, like you said, you get updates, you get development for free. Why not take advantage of that instead of doing it twice? Sorry, yeah, I was about to jump on that one, that comment, right, about Kasten. And I'm less familiar with PX Backup, I don't know that one. But we have a great relationship, right, even with the Kasten team. And they are in the business of doing backup. And I think they have one of the issues we used to have in the past when it comes to migration, though, like talking to their team... You're breaking up a little bit, Marco. ...migration, it's actually a full backup and a full restore. So I think... you're coming back. Okay, sorry about that, you know, if I keep breaking up. But yeah, each solution has pros and cons, right? And MTC is a free tool available in OpenShift as an operator, there's no additional charge to it. And it has pros and cons compared to Kasten. So it's not like one tool or the other. There are pros and cons to each of those tools, and it's good to understand what they're good at and what they're less good at, to pick the right tool for the right job, right? But definitely, if you have a need for migration or backup and restore, have a look at those solutions. Kasten is also a great solution that sometimes makes more sense than MTC for some use cases. Yeah. And Noodle Jutsu clarified PX Backup is the Portworx solution, and I know there are other storage vendors, NetApp's Astra comes to mind, that also have similar backup and recovery type solutions that you could use as a migration tool if you wanted to. I think the difference is the additional help and metadata and, like, that overall... I'm trying to think of how to phrase this off the top of my head. It's an experience that's targeted at migration instead of an experience that's targeted at disaster recovery, right? I need to back up here, I need to restore. That's a great way to describe it. Yeah.
And to get back to that question again: as I said, MTC is part of the OpenShift subscription. So if you own OpenShift, yes, you can use MTC free of charge, it's included in your OpenShift subscription. And upstream, we also have the Crane tool. It's an upstream tool as well that can help you migrate if you are on other platforms. And Marco, is that Crane with a C or Crane with a K, because it's Kubernetes? C, Crane with a C. And we had quite the debate around that. But let's not talk about naming things, right? Oh my God. I have nightmares of naming discussions. Which should be capitalized and which shouldn't be capitalized. Yeah. Should it be a C or a K, like, oh my God. Too many hours. Let's see. So to clarify, you're saying by using rsync and Migration Toolkit for Containers, you can do live migrations rather than snapshots as with Kasten and PX Backup? No. So rsync doesn't enable you to take a live copy of a file system that's inside of a PVC, live meaning your application is still up and running on the source side, right? The way that we approach that problem is that we use a pattern called stage and cutover. So imagine you've got terabytes of data, right? And you're also sensitive to your downtime. So you probably want to schedule your downtime at some point when your application load is low. Maybe it's late on a Saturday, as admins are familiar with. That's when they're doing their migrations, and they're awake and drinking lots of coffee. Yeah, exactly. So in order to ensure data consistency, if you've got an application that's up on your source side and it's accepting a lot of traffic, we're able to do a stage migration. What that's going to do is an rsync copy to gather the bulk of your data. So it'll capture, let's hope, 99% of that terabyte of data, so that there's only a small amount of it that's actually changing. So it's sort of like a snapshot. It is not a snapshot, so I don't like to use that word.
And then during your downtime, you'll do your final cutover migration. So during that time, we'll actually quiesce your source application. It's deliberately vague, in that quiesce can mean different things depending on the application workload. So in our simplest case, it's literally scaling the application to zero, although quiesce can also mean maybe you put your database into a read-only mode. At that point, we're going to take a final rsync copy, but the beauty of using rsync is that it'll do the incremental file transfer, so that it will only take as much time as it takes to get the changes that happened between your last rsync copy and when you've just quiesced your application. And then finally, during the migration, it'll take all that final state, it'll bring over all your application objects, and then your application will come up on the target side. So there are a lot of different options now in MTC depending on your use case, and some of them will make more sense than others. So that's copy. There's also a move option, which is effectively unplug the disk in one place and plug it into the target location. There are a lot of people who use NFS. So if it's some kind of a shared storage solution, you can just simply pick up the definition and plug it in on your target side, and you're using the same data. Is there any vendor integration there, or opportunity for vendor integration? And I'm thinking in particular of things like, I know VMware, NetApp, and I think also ODF and maybe Portworx, you can basically import an existing PV underneath its management plane. So like with NetApp, if the storage system is the same on the back end, it's just two separate OpenShift clusters. Can I remove the... That's actually the third flavor, kind of what you're describing.
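The stage-and-cutover pattern just described can be modeled in a few lines. This is a toy illustration, not MTC's code: "rsync" is simulated here as copy-only-what-changed over an in-memory dict, which is the property that makes the final cutover window short, and `quiesce` is passed in because, as Eric says, it can mean different things per workload.

```python
def sync(source, target):
    """Simulated rsync: copy only new or changed files from source
    to target; return how many files actually moved."""
    moved = 0
    for path, data in source.items():
        if target.get(path) != data:
            target[path] = data
            moved += 1
    return moved

def migrate(source, quiesce):
    """Stage-and-cutover: bulk copy while the app is live, then
    quiesce (e.g. scale to zero) and take only the incremental delta."""
    target = {}
    sync(source, target)           # stage: bulk copy, app still serving
    quiesce()                      # cutover window begins: stop writes
    delta = sync(source, target)   # incremental: only what changed
    return target, delta
```

The downtime is proportional to `delta`, not to the total data size, which is why staging hundreds of applications on a Friday and cutting over during the weekend works at scale.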
If I understand you correctly, so I'm envisioning, we have a slide at some point that we've shown, but basically you've got this kind of pyramid of options that are available to you when you're tackling your state problem. One of them is file system level copy. One of them is snapshot copy, which actually integrates with your vendor. So it'll take a native snapshot, and then it can restore from that snapshot on the target side, assuming that they're compatible. And then there's move, which is that unplug and re-plug into the target side. So in the copy case, you're really cloning the data, which has its advantages and its disadvantages. It's going to take the most amount of time, but it's also safer than actually using the same data in a shared storage scenario. Although if you've got a ton of data, being able to just pick it up and plug it into the target is advantageous as well. So again, like we described, there's a vast number of different permutations a migration may look like. And a lot of it comes down to understanding the solutions that are available to you, and then making a decision, based on what you care about, on how you approach it. But we see customers, for example, staging, going back to this, on the Friday staging hundreds of applications and then over the weekend doing the cutover, right, and that significantly reduces the amount of downtime, and it scales, right? So obviously there's still some downtime, but rsync helps you with the staging process, helps you reduce that downtime significantly, assuming that you don't have a lot of data that has changed between the staging and the final cutover. Yeah, the normal snapshot and snapshot. Yeah, and you can run stage as many times as you want and so on, but yeah. Yeah, so I see a Rockhound, which makes me think of the movie Armageddon. Is there a DR session on Twitch planned with some stateful workload?
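The pyramid of options Eric describes, move, vendor snapshot, or filesystem copy, amounts to a small decision rule. The selection logic below is a plausible simplification for illustration only, not MTC's actual behavior; the function name and inputs are assumptions.

```python
def pick_pv_method(shared_storage, snapshots_compatible):
    """Hypothetical sketch of choosing a PV handling flavor.
    shared_storage: both clusters can reach the same volume (e.g. NFS).
    snapshots_compatible: source and target support the same
    vendor-native snapshot mechanism."""
    if shared_storage:
        return "move"       # unplug the PV definition, re-plug on target
    if snapshots_compatible:
        return "snapshot"   # native snapshot + restore on the target
    return "copy"           # filesystem-level clone: slowest, but safest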
So we did do a couple of sessions around HA and DR. I will say that, generally speaking, we don't talk about the stateful side of things from OpenShift proper's perspective, because it's abstracted into CSI or whatever your storage vendor happens to be. The ODF folks, OpenShift Data Foundation folks, will talk about it. We had Annette Clewett on to talk about using Ramen, the Ramen project, alongside of ODF to do replication and data protection for disaster recovery. And then after that it's kind of up to your storage provider, right? If you're using NetApp, I know Trident can do things like coordinate SnapMirror relationships. If you're using Portworx, Portworx has its own replication mechanism inside of there, same thing with Pure, right? I know Pure owns Portworx as well as Pure Storage. Yeah, so ODF is Ceph-based. So if you're using upstream Ceph, or Ceph outside of ODF, I don't know precisely what's there or not. I would assume that that would come from Rook. So I don't know, Jonathan, or Johnny rather, if you know anything. Yeah, so you basically nailed it. The object storage really is the NooBaa side, like AWS S3 would be your NooBaa backend, and then I think Rook is the controller for the block, but I could be wrong, I don't know for sure, it's been a while. Let's see, Komaradu, apologies for butchering any names, asks how to configure and manage a Red Hat Ceph cluster. So I will say that we are not the best set of folks to ask that. There is, or there was, maybe Stephanie on the back end can provide us a link, there was an OpenShift Data Foundation or a Red Hat storage live stream that happened. But if you'd like, you can send me an email, andrew.sullivan@redhat.com, and we'll point you in the right direction to some resources. So, you know, we can make sure to take care of that for you. Yeah, except when somebody shoves a database into... yeah, NFS is love or hate, right, depending on what you're trying to do with it.
Yeah, NFS is one of those things where it's great for low-level, kick-the-tires kind of stuff, but if you try to put an actual workload on it, especially if you're not using enterprise-type storage, it's, oh boy, no. So, Eric, I'm going to change directions on you a little bit. It's been a minute since I've seen the migration toolkit actually in action, and I know when we were talking you said that you had a demo, so I would love to see a demo, and I'm going to ask a lot of dumb questions, because that's what I specialize in. Okay. But yeah, like I said, it's been probably a year since I've seen it in action, and it was impressive then, so I can only imagine today. Sure, so let me get, I think I'm gonna set up, let me share my screen. While you're doing that, Eric, quickly: so right now we are at MTC 1.7, that's the latest release, and what we added in MTC 1.7 was in-cluster storage migration. So before, our specialty was to migrate apps between clusters, but we also got requests that some people wanted to stay on the same cluster but change the underlying storage for some PVs. So assuming you are migrating from one storage vendor to another, or there are some apps you want to migrate for some reason from one storage vendor to another. So I think Eric can demonstrate both, like a migration of apps, but also migration of storage, which is the latest thing that we added in. Oh, I didn't know that the same-cluster PVC migration was only added in 1.7, I thought it was 1.6.
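The in-cluster storage migration Marco mentions has to do three things, as the demo later walks through: provision a new PVC in the target storage class, copy the data into it, and repoint the workload's volume reference at the new claim. A hedged sketch of those mechanics, using dicts shaped like Kubernetes objects; names and suffixing conventions here are illustrative assumptions, not MTC's implementation.

```python
import copy

def convert_pvc(pvc, target_storage_class):
    """Build a new PVC definition in the target storage class
    (hypothetical naming: suffix the old name with the new class)."""
    new_pvc = copy.deepcopy(pvc)
    new_pvc["metadata"]["name"] = (
        pvc["metadata"]["name"] + "-" + target_storage_class)
    new_pvc["spec"]["storageClassName"] = target_storage_class
    return new_pvc

def repoint_deployment(deployment, old_claim, new_claim):
    """After the data copy, update every volume in the Deployment's
    pod template that referenced the old claim."""
    dep = copy.deepcopy(deployment)
    for vol in dep["spec"]["template"]["spec"]["volumes"]:
        pvc_ref = vol.get("persistentVolumeClaim")
        if pvc_ref and pvc_ref["claimName"] == old_claim:
            pvc_ref["claimName"] = new_claim
    return dep
```

The reference-rewriting step is the piece Eric says was left to the user in 1.6 and handled by the new 1.7 experience.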
Yeah, that was... we could do it before, and in 1.6 we kind of had it, like it was kind of working, but the user experience wasn't great. So that's why we didn't actually build a lot of marketing around it, and if somebody wanted to do it we could show them how, but it was a little bit ugly, right? So we just made it a lot better in 1.7 from a user experience point of view, and that's why we just started to talk more about how to do that, with the brand new experience. What's that phrase? You could dig a hole with a spoon, but why not use a shovel? Yeah, we're not just trying to implement new things, right? Until it's really well done, you just use it when you really, really need it. Yeah, accurate. Yeah, so I'll show a little bit. In 1.6 we actually did introduce the foundation, kind of the lower-level ability to do it, but it required a little bit of glue. So with 1.7 we've got a proper UX experience on top of that. So I'll try to quickly go through this, although we are going to be limited by how long it actually takes to move the bits, since I am doing this for real. So I've got a host cluster, that's my OpenShift 4 cluster, I think it's a 4.11 cluster, and then I've got my OCP 3 cluster registered. I've gone through the registration process previously. So these are my two clusters that I'm migrating between, from OCP 3 to OCP 4.11. I've got a replication repository that's registered. What this actually means is I've got some intermediary backup storage that's been configured with Velero. So this is actually using and exercising the OADP APIs, and you'll actually be able to see that it's running the OADP operator. So now I'm all set to craft a plan. So this is how MTC generally works, none of this is different, although one thing that we've added since then is now we've got several different types of migrations that you can do. So you've got full migrations, which is the classic stage and migrate, bring a workload over
from one cluster to another. You've got state-only migrations, so the intent behind this is, imagine you've got an application that's fully managed by GitOps, so you're able to actually point it at a destination cluster and reprovision on that cluster, but you need a solution for your state. This will help you get your PVCs only, and potentially some other Kubernetes resources, onto the target namespace, so that you can redeploy your GitOps-managed application and it'll find its data when it's brought up on the target. So that's really what that's intended for. Well, you've used the GitOps word twice now, so I think if we do it again, Christian will appear. Okay, yeah, I'll be careful, I'll keep the lights on. And then finally, we've got storage class conversion. That's what Marco was talking about. This helps people do in-place storage migrations, and it'll handle it soup to nuts, so you're able to do... it'll create the application, it'll update your PV references, which is something that we need to be able to do. 1.6 did not do that, 1.7 does now, and I'll demonstrate that as well. So I'll just do a quick full migration. I'm going to go from OCP 3 to my 4.11 cluster, and I'll pick my S3 repository. So this is that persistent volume discovery, so it just found the... hang on, I'm sorry, this is namespace discovery. So these are the available namespaces. The unit of migration in MTC is the namespace, so I can select more than one namespace and it'll grab the things that are inside of that namespace. This is the discovery component. So what this is doing is finding the PVs that are either in use or just lying dormant inside of your namespace. So I'll go ahead... You mentioned S3 a moment ago, that's where it will store those Kubernetes YAML objects and all that other stuff? We support several different flavors, but generally you'll see people using either AWS S3, if it's an option, or you can use it self-hosted
as S3. So I heard NooBaa works, so does MinIO, right? So MCG, what you can use is MCG, that you can install. Again, you can use that as long as you have an OpenShift subscription. Even if you don't have an ODF subscription, you're allowed to use MCG as the S3 endpoint for migration purposes. So we allow that even if you don't have an ODF subscription. And MCG is the NooBaa endpoint, it's the NooBaa... yeah, it's funny, people know the NooBaa name more than MCG, but the downstream name is MCG for NooBaa, same thing. And just to clarify, Rockhound was asking about doing the third option there, the PVC-to-PVC migration. Just out of my own curiosity, basically, does it create a new PVC in a new storage class and then copy the data over? Exactly, yeah, and it uses that file system copy. I see another question: are PVs cluster-scoped? That is correct. So you're correct in that it's actually the PVC, but you can follow the breadcrumbs to the PV. The point is you're getting the data. So here you'll see the PV migration type. This is where we see the volume snapshot, file system copy, or move. So I'm just gonna do a file system copy. I can actually map the storage class, so as part of this I can change my storage class. I see people doing this when they're going from Portworx to ODF, something like that. And then this is another new section of the wizard. So you'll see we have direct image migration and direct PV migration. This refers to that new state implementation that we've got going on behind the scenes that's not using Restic to do its state transfer. Hooks... Very, very quickly, Eric, because that's a question I get a lot: on the previous page, right, you said direct image migration was unavailable? The reason for that is, when you set up your two clusters and you install MTC, you need to open a route on your registry, so that one
cluster can actually push the image to the other one. So if that route is not available, that's why it shows as unavailable there, because we cannot copy the image directly to your registry, right? So as long as you have that route available and we can see it, then this will become available and we can stream the image. And same thing for direct PV migration, you need network connectivity for that to be available. If not, we'll copy everything using Restic to the S3 bucket and then restore on the other side, but that's much slower. So this is why, if you have network connectivity, those two things should be available. If you follow the instructions properly in our docs, you'll have that, and it will test that the network works, and then you will have both image and PV direct migration available. Sorry, Eric, it's just such a common question, I had to jump in. No, please, and if you see anything, I'm just trying to go through watching the clock, but interrupt me if you want to add anything. Hooks are another, more advanced feature, but they're really powerful. They effectively delegate, at certain steps of the life cycle of your migration, out to a container. So you can do really cool things like, post-migration, update your load balancers. It'll run a playbook out of the box, so I can upload an Ansible playbook here, or, really, what it's doing is executing an entry point in the container, so it could do really whatever you want it to do. And you'll get a little bit of information about your migration, so you know where you're going to and from, and that sort of thing. So I'm not going to mess with hooks right now. The volume I happen to be using is close to capacity in terms of its usage. So this is another cool feature of MTC. We frequently saw people with very full volumes, and in some cases they had even made their volumes larger out of band, so Kubernetes wasn't even aware of it. So what would end up happening is you'd go to do your migration, and it would provision a smaller
volume than the amount of data that was actually living on the volume. So we'd go to fill it up and we'd end up with these errors. So we've built in some intelligence around that, so that we actually look to see how much of the volume you're using, and we'll provision a volume, if you have the feature switched on, I think it's defaulted on, I need to go back and check that, but it'll actually provision a volume of the size that we've figured out, by going and looking at the volume, that you actually need, depending on a certain threshold of usage. That's an awesome feature, because that's something that I remember, at a previous employer, a storage vendor, we got asked that question all the time, hey, can I... you know, before volume expansion was a thing. Which, by the way, Johnny, did you notice in 1.24 volume expansion goes GA? I had to look at that, I'm like, but it's been there for like 10 releases, it hasn't been GA this whole time? Anyways. Yeah, I was looking at that last night too. Yeah, so that's a great check to have in place, because I can imagine that it comes from experience. Yeah, after seeing it several times, we realized this is important. So I'm going to skip doing a stage just in the interest of time, but this is where you would actually go execute your stage, and you can do stages as many times as you want. So you can have a rolling stage, over and over, and you don't need to quiesce your application. So I'm just going to run through a cutover migration. We've got a little bit more information about exactly what that's doing here, and then lastly you've got a checkbox here that'll tell it whether or not to quiesce your application. So if you know that you don't have to quiesce it, you can uncheck this box. So as the product has evolved, we've gotten much better about observability, and kind of understanding what is actually, exactly going on under the covers. So you'll see kind of a pipeline view here. Right
now it's preparing for the cutover. So it goes through, you can think of it as a state machine, so it's moving through several states as it's performing the migration. So that did a quick backup, that's the Velero backup step. It's doing volume transfer, so the way that we actually do this is we'll spin up an rsync client and server, and it wraps that in stunnel. So that's what it's doing here, it's waiting for all the dependencies to get launched on either end of the tunnel, and it's doing a direct transfer using rsync. So right now it looks like it's waiting for the pods to come up, so it sits there and it spins while it waits for that. The last piece I wanted to show off is this debug view, at least that's what we're calling it. This was buried in previous versions of MTC, but it became so useful that we really felt it was important to add. It looks like it's hitting some errors trying to find some debug resources. I mean, that's a great example of, when you're doing these migrations, it's pretty frequent that things go sideways, and so the observability and the ergonomics around being able to understand what's going on are absolutely paramount. I joke about MTC being a really good health check tool, because when you run a migration, if something's wrong with your cluster and you're doing a mass cluster evacuation, you're probably going to find it. And most of the problems that we see now with MTC are environmental issues that are really unrelated to MTC, but they manifest as MTC problems because we can't finish doing what we're trying to do, such as volumes being unable to mount in our staging pod, or something like that. So we worked really hard over the last year reducing that feedback loop, so as soon as you see an error happen, we want you to be able to get to that root cause as fast as possible. So that was something we were deeply interested in. Eric, quickly, somebody's asking, and I think it's a
good question on chat, about how to deal with failures. So first, to add to that, because it's another common question: when you are migrating, actually, one of the key benefits of the way MTC migrates is we're not affecting your source side. So if anything goes wrong, you just start the source side again, shut down the destination, kill it, and the application will come back up, right? So it's pretty easy to roll back this way, and you can try as many times as you want. And as Eric said, usually the errors or the issues will happen at the beginning. People install MTC, their first couple of migrations might get rocky from all kinds of environmental problems they might have and don't know about, but after a few, typically things get a lot better, and then after that you can migrate at scale, and typically at scale you will not have any issues anymore. It's at the beginning that you'll find all kinds of little quirks that, you know, you didn't know about, and you deal with this and that. And the easy way is just to test it on non-critical apps, and when it fails, kill the destination, restart the source side. Or you can even migrate without shutting down the source side at the beginning, for the first time. You might have issues with data, because some data might be locked during the migration process, but for many apps it will work, and it allows you to test at the beginning and make sure you're confident, and then you can migrate at scale before you actually do the real thing. I'm looking at the video monitor, and I think it's cut off, but there is actually a section under here for rollback as well, so you can do a rollback migration. My face is in the way. I saw Rockhound ask about ACM integration, and I would assume the use case there is, you know, I've got multiple clusters in ACM and I want to, from that interface, have it move an application from cluster one to cluster seven or whatever. So that's
that's the next gen of what we are working on. And even ACM is getting kind of integrated as well, some pieces of ACM are getting integrated with OpenShift, there's a unified console initiative, without giving away too much on what's coming next. But yes, eventually in your OpenShift console you'll be able to switch from one console to another, and we want to attach migration in there. Because one of the key limitations of MTC today is that you need cluster-admin privileges, like you need to be a cluster admin. And the next step for us is to provide a way for developers or app owners to migrate their own applications, and that will be done directly: instead of having another tool like MTC for mass migration, we'll provide a migration button inside the OpenShift console. So a developer can go there, in his namespace, see his applications, and then he can migrate one to another cluster, or pull an application from a different cluster to the one he's working on right now. So that's the next step, and that will get integrated with this multi-cluster initiative and so on. Yeah, I was going to ask something and now I've forgotten, so thank you for the questions. Yeah, please do. So, what are the default timeouts or other exit conditions? That's actually a very good point. As we were trying to make this tool a little bit more ergonomic, it became very clear that the timeouts are absolutely critical, right? We can't just sit and spin our wheels when something's actually failed. We really take fail fast to heart: if we know something is bad, we really want to fail fast and loudly about that. We understand that firsthand, we've sat on support calls, or been using it ourselves, and if something's gone sideways and it's just spinning for hours, you have to know about that. So that's something we've worked to include. Specifically, when you talk about default timeouts, most of this is asynchronous, right? So when you're talking about
asynchronous work, you have to have timeouts in case something just goes off and dies and never reports back its status, and a lot of Kubernetes development looks that way. So we have a lot of different timeouts. If I pulled up the configuration values, a lot of those have been tweaked as we've taken experience into account, a lot of those values have changed, some of them have gone up and then come back down. But we do have an Ansible operator that you can go take a look at, and it's literally enumerated every configuration value that's possible in the operator, and a lot of those values are timeouts. So I can provide a link to that if you're interested. I'm just catching up on chat here. Noodle Jutsu: speaking of ACM and ACS, when can we expect improved partner training? So I don't know the answer to that, but I know where to get that answer. If you can send me an email, andrew.sullivan@redhat.com, we'll dig up that answer and get some information for you; that is the portfolio field enablement team. It looks like we're a little bit over. I was going to do a storage class conversion. We can also field some questions, I think, and then we can also tease some of the next generation stuff we're working on, whatever you guys are interested in doing. So I think, say, 10 more minutes, does that seem about right? Sure. Okay, so for our audience, we'll give it till about a quarter after, so 10 more minutes. If you have any questions, anything that you'd like to see or learn from Marco or Eric, please let us know in chat. But other than that, yeah, I'd love to see the PVC conversion, because that's one that came up, when was it, last week or the week before? When did we have the OpenShift Virtualization folks? Last week, yeah. Because, you know, people were asking, hey, how do I move my VM from one storage class to another? So
I think that'll be an interesting one, and it has broad applicability. And then, Marco, I think you answered, it looks like you answered in chat, but somebody was asking about the same thing we talked about at the very beginning, which is, does it work with non-OpenShift deployments? Yeah, this is the next gen stuff we're working on. We're still early on, so we have tooling right now that works, we can even do demos of Kubernetes to OpenShift or Kubernetes to Kubernetes migration. It's just that we're looking for early adopters to help us make this tool solid, because it's one thing to make this work in a lab, it's another thing to make it work in real life, right, and be ready to release it as a GA, fully supported product. So if anybody's interested in testing this out or helping us out upstream with that, definitely, you can reach out to the Konveyor community or to myself, I can introduce you. We're looking for people to test early bits of a new product that, you know, might not be 100 percent perfect, but I think it's getting there, because it's actually based on everything we learned over the last two years. So it's not that far off, but there are still, I'm sure, a lot of little things we didn't think about before we can release it. So, on my OpenShift 4 side, I created a new migration plan, actually, I'm in the midst of that. So I've got a simple nginx application with a PVC that's bound to it, where I'm storing my logs. Notice it's using GP2. So in OpenShift 4, what I'm doing is I'm going to change the target storage class to GP3, and all I did here was, it's actually just kind of a simpler variation of a full migration, all I did was select the storage class conversion type. So what we're going to do under the covers is actually provision a new PVC, we're going to do our file system copy into that PVC, and then we're actually going to go into the
deployment, quiesce things, and then update the application reference. And that was the piece that was missing previously in 1.6 that got added. Pranav on our team, Pranav Gaikwad, has been working super hard on that, and he's doing a lot of work in the VolSync community upstream. So actually a lot of our logic around the state transfer, and a lot of the hard problems that we found there, is getting contributed to a shared library that's being consumed by VolSync, called pvc-transfer. It's in the backube organization on GitHub, in case anybody is interested in the work that's going on there. And so once we contributed that, we actually brought it back into MTC. I'm not sure if we're using that library specifically today, but we have another library; basically the intent is we want to contribute all this state transfer work, put it in one place so everybody can benefit from it, and then it's consumed by VolSync, and Crane CLI, which is our next gen product, and MTC. And it's all battle-tested logic that we've done production migrations on, so that's pretty cool. It's always nice, you know, I've been at Red Hat for three and a half years, I've been a Red Hat administrator and user for two decades, and it's always nice to remember that open source is amazing for that very reason, right? Because it's not just us doing it, it's the whole community, and we can reuse a lot of that stuff across different projects. 100%, and that's our motivation, right? We want to be good open source citizens, and not just contribute for Red Hat's sake, but also just to contribute to the upstream community. God, that was nice. And I know that scales, right, with the amount of data that you have in the PVC and all that, but it's really nice. To me, that's... you kind of alluded to it, or maybe hand-waved over it a little bit, the reconfiguring the
application part, right? Yeah, that's not an easy part. That's actually what we're focused on with 1.7. There are a lot of different ways we could approach it, and again, in the open source spirit, we're designing this stuff out in the open. We've got this great repo called konveyor/enhancements, which takes inspiration from the Kubernetes and OpenShift enhancements processes, so we're doing all of our design in the open, and it's open for comment. The different approaches that were on the table when we implemented this are documented there, along with why we decided to do it the way we did. You'll see we made some design decisions, and the reasons are documented right in that enhancement. That process has been really helpful for us, even just as a reference for the future: when we try to remember what we were even thinking when we made a decision, we've got it all documented. And I think it produces better solutions, for sure, because we do our due diligence and it's done in the open.

Yeah, that's really cool. Let's see. RockHound asks, "Can you test it with IBM Cloud Paks?" I don't know enough about Cloud Paks to help there, though I wonder how relevant it would be for Cloud Pak for Data. We do have several folks on the OpenShift product management team who talk directly with IBM about the Cloud Pak stuff, so there's quite a bit of communication that happens there. That'd be interesting.

"If it's rsync, would you test IBM ROKS?" Right, I think so. IBM ROKS has some quirks, but yes, we do test with ROKS. ROKS is the name for the OpenShift on IBM Cloud managed service, right? Yeah, I think it's Red Hat OpenShift Kubernetes Service. Oh, I didn't know that. You know, we have to, like,
we speak in acronyms; I have to remember. Yeah, ROKS.

So, "if it's rsync, which parameters are used?" Okay, so RockHound is asking: since it's rsync on the back end, is there something in the operator or a CRD you can use to modify the rsync parameters for the data transfer? Checksum verification is a top-level, first-class feature that's necessary in MTC, so it's supported with both Restic, for indirect state transfer, and direct state transfer, which uses rsync under the covers and leverages rsync's own functionality there. In addition, there are several other configuration parameters you can set, some of which plaster over parts of rsync and abstract them away. And there's also an escape hatch: we'll accept arbitrary rsync options. Sometimes that's necessary, and we try our best to validate what you put in there, because we've also seen people put junk into that string, and that can break things too.

Semicolon, rm -rf. Yeah, exactly, so we check for that and we don't allow it. What's the xkcd? Little Bobby Tables. Yeah, data validation.

So, we've only got two minutes left. Anybody who has questions, feel free to send those in. I think we'll probably run out of time today, but that doesn't mean we won't collect those questions; we'll do our best to respond to them. If you'd like, send me an email at andrew.sullivan@redhat.com, or reach out on social media: "practical andrew" on Twitter, and you can find me by name on LinkedIn. All the places, all the things, except Facebook; I don't have Facebook. You know, it's a bit of an "old man yells at cloud" thing. So if we don't get to your question today, or you feel we haven't answered it adequately, don't hesitate to reach out. But again, just a reminder, we won't be back
for two weeks; we'll be back May 25th. I don't know yet what our topic will be; we've got a couple of things up in the air. I know we're trying to get the ACM team on to talk about ACM 2.5 when it's released, so there's lots of exciting stuff coming up. Please don't forget to subscribe on whatever platform you're on, so you'll get alerts when we go live and can tune in and join us. I also always recommend going to red.ht/livestream; it'll take you to our live streaming landing page, where you can see the whole streaming calendar and everything we've got going on, so definitely check that out.

With that said: Marco, Eric, thank you so much for joining today. This has been a really great session, and I've learned a lot. This topic is near and dear to my heart as somebody who did multiple data center migrations and all kinds of other stuff, so I love seeing it be easy. That's what we want. Well, easy and safe, I should say; just fire-and-forget is not what you want.

Thanks to you guys, that was great. Yeah, thank you for having us.

So, to our audience, thank you as always for joining us. I really love the chat; thank you for all of your questions and comments. Johnny, I always appreciate you being here. Stephanie, thank you for your help on the back end. And I'll hand it over to Johnny for last words.

Yep, hey, thanks again, guys, really appreciate it. One thing before we leave: the health check idea. That's an excellent idea, using the toolkit for that, because it gives you a visual representation of everything that's going on. I didn't even think of that until you said it; that's a pretty good idea. For everybody else watching on YouTube, if you could like or subscribe to the channel, that way we get the feedback and know that everybody's still
wanting to see what we're putting out there. So, appreciate it again, thanks again, and we'll see you in a couple weeks. May the 4th be with you. May the 4th be with you.
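As an aside on the rsync escape hatch discussed earlier: the idea of validating a user-supplied options string so that junk like `; rm -rf /` never gets through can be sketched in a few lines. This is a hypothetical illustration only, not MTC's actual code; the function name and the specific rules are invented for the example.

```python
import re

# Hypothetical sketch of validating arbitrary extra rsync options.
# Rules (invented for illustration): reject shell metacharacters that
# could chain commands, and require every token to look like a flag.
SHELL_METACHARS = re.compile(r"[;&|`$<>()\n]")

def validate_rsync_opts(raw: str) -> list[str]:
    """Split a space-separated option string and reject unsafe tokens."""
    opts = raw.split()
    for opt in opts:
        if SHELL_METACHARS.search(opt):
            raise ValueError(f"unsafe characters in rsync option: {opt!r}")
        if not opt.startswith("-"):
            raise ValueError(f"not an rsync flag: {opt!r}")
    return opts

# Ordinary rsync flags pass through untouched.
print(validate_rsync_opts("--partial --bwlimit=1024 -z"))

# The classic "; rm -rf /" injection attempt is rejected.
try:
    validate_rsync_opts("--partial ; rm -rf /")
except ValueError as e:
    print("rejected:", e)
```

A blocklist like this is the simplest version of the "we check for that and we don't allow it" idea; a stricter design would allowlist only known-safe rsync flags instead.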