So, yeah, our next topic is about democratizing developer productivity with cloud services. We have Jaya, who is a technical product marketing manager with the cloud services group, and Karan, and both will be giving a demo now. As an individual, you can order pizza and get food delivered to your home when you're hungry. You can get cabs whenever you want. You can get groceries delivered, like magic at your fingertips, right? There are certain things that have become a very close part of our lives. To understand that, let's talk about cars. Cars are a good example to segue into what we want to show you today. So let me ask a broader question: which car do you want? A car with the basic, important features, like an engine, wheels, brakes, steering? Or, as a second option, a fancy car with all the basic features, because those are absolutely essential, otherwise your car won't go a single mile, plus some extras: safety features, an advanced dashboard, an internet-enabled car, or, say, an extended warranty and roadside assistance. So that's the question: which car do you want? Let's start with that. But of course, it comes at a cost; nothing is free. The red car, the car with the advanced features, will have a price tag attached to it, okay? So a clever person will say, okay, you know what, I'm a hacker, I build stuff in my garage. I'll go and buy the basic car, go to the shop and get the gadgets I need for the features, and bolt them on over a weekend, or maybe multiple weekends. And my basic car will eventually start functioning like the jazzy car you see here, and it will cost me less. That could be a story, right? It could be done; nothing difficult about it. But for those in the business of building cars, think about this: building a car and maintaining it is complex, and adding more features is complicated. You want security cameras? You need to go and get keyless-entry devices to bolt into the car. Advanced dashboard? You can always tear off your car's existing rooftop and install a sunroof of your desired color and flavor, right? And on top of that, getting registrations, warranties, certificates from the authorities, and regular checkups of your car's health is also super complex and time-consuming. It just takes a lot of time to do all these things together by yourself. I'm not saying it can't be done. It can be done. But you need a lot of time, a lot of spare parts, and a lot of expertise to convert a basic car into the car you want, with all the features, right? And it does not stop there. That was just day one. On day two, you need to renew your insurance, do regular health checkups on your car, clean your car, which I don't like to do, right? And then finally, you have to actually use your car: get in the driving seat and drive it. Is there a better way? So instead of the first question, which car do you want, ask: which car do you need? The difference between wants and needs, right?
So what about a car where all your requirements and all the features you want are already taken care of? The classic sunroof, the insurance, the cleaning services, all built into the car, and you just drive it and use it. Now let's segue from cars to software. I think software is similar to cars: it's complex, it's not simple, right? And there are a lot of options available in the upstream open source community. There are millions of repositories out there, and you have to figure out which tool to use for your specific requirements. For example, let's say you want to build streaming services. You could use a lot of things: NATS, Kafka, message queues. So a lot of it comes down to you and your expertise. So can we get something like that fully loaded car, a managed kind of service that helps you navigate all the pain of deploying it, managing it, and selecting the right configuration, and that is instantly available? As soon as you click the button, "hey, launch this," it has to be instantly available to you. This is what Red Hat Cloud Services is trying to do. Red Hat Cloud Services provides managed OpenShift as the base, with managed application services and managed data services on top of OpenShift. We work with all the cloud providers: Amazon, Google, Microsoft, IBM. It uses OpenShift as the base layer because OpenShift is like the multi-cloud, hybrid cloud operating system; if it works on OpenShift, it can work literally anywhere. So you consume our managed OpenShift offerings: Red Hat OpenShift Service on AWS, OpenShift on IBM Cloud, and OpenShift Dedicated. On top of those, we provide the features you really care about in your app. You don't need to worry about which repository to clone, build, integrate, manage, and keep secure; rather, you just go in and pick the right tool for your application. Start with an API-first approach using the Red Hat API Designer, which my colleague Jaya will show you live: design the API contract you'd like to see even before writing any code for your application. Then use a service registry, and API management with 3scale. And if you want to integrate streaming capabilities into your app, you can use OpenShift Streams for Apache Kafka, which is a managed service. All of this can be done in an easy drag-and-drop fashion powered by Red Hat OpenShift Connectors, which help you connect service X to service Y with a simple drag and drop. And then, moving on, you definitely want databases. You might have databases running on Amazon or, say, MongoDB's cloud. How can you bring up database instances, or connect seamlessly to database instances running in a different territory? OpenShift Database Access provides the glue to connect to those remote databases directly from within your OpenShift environment. My colleague Veda is going to talk about database access in greater depth. Once you have databases, the next question is: what about AI/ML? I want to introduce new features in my app, bringing in AI/ML and some data analytics. How can we do it?
You can leverage OpenShift Data Science, our managed service, which provides ready-to-use environments for AI/ML: building your models, training them, doing inference on them, distributing them, and using tools like Apache Spark for big data analytics. So in a nutshell, these are services that enable you to move faster, to get to market faster. Rather than spending your own time figuring out how to deploy these things, you can leverage the managed service offerings from Red Hat and get to market sooner. So with that, let us quickly see how the Red Hat Cloud Services work in tandem, how they work together as a whole. Jaya, please help us with the demo. Thank you.

Thank you so much for making the time today, and it's lovely to see all of you. I'm Jaya, part of the cloud services team. My role, technical marketing manager, is a mouthful, but I do a lot of content and demo creation, and a lot of good stuff with the cloud services that we will see today. And while you're at it, please pray that the demo gods are kind to me today, because everything is going to be completely live, and hopefully I will also be alive at the end of this session. Okay, let us see Red Hat Cloud Services in action. Let us see how a customer called Globex, a fictitious retail customer, is using Red Hat Cloud Services to do a lot of fun things. Globex started on an application modernization journey, and that led them to this particular screen you see here. They moved from monoliths to microservices; Burr spoke about microservices being standard now, it's nothing special, right? And then all the good stuff of OpenShift, because all the services are running on OpenShift. We spoke about GitOps at length, and they have adopted GitOps, so they are in a really good place to run their microservices and applications, to scale as quickly as they want, and to go to market very quickly. But why do we do all of this? So that we can realize more and more business value from what we build. In the end, all the code is about business value; the code on our local laptops is of no use unless it is deployed in production and being used, right? So now Globex says: all right, I have spent so much money, I'm running on OpenShift, and I want to do new stuff. I want to add new features. I want to pretty up my ride. I want to add more functionality, and I want to add new channels. Right now I have a basic website; I have products, people are coming and buying stuff, and that's all good. But I want to build a new mobile app. And then I want to track how people are engaging with my website. What kind of products do they like? What products are they viewing, searching for, adding to the cart? And based on that tracked user activity, I want to add intelligence to my system, so that I can pick the top 10 products and showcase them: these are the most liked products, these are my featured products, based on what customers like. That information will also help them build a more personalized experience going forward. But there are challenges, just like figuring out how to drive that new car. Adding new channels is not easy. The Globex team does not have mobile developers on their own development team.
So they would like to outsource that to mobile development experts. So you have the mobile team, you have the new UI that needs to be built, you have the backend systems that need to be refurbished, or new backend systems that need to be built, and all of this needs to come together. And what's the most complex thing in software development? Any comments? Coding? Integration? Naming products, that's true. It's people: bringing all of the people under one single umbrella, making them talk to each other to ensure information is not dropped when I speak with you, that I speak a language you understand. We have people, processes, and technology, and the most difficult of those is the people, because we are the wild card in the entire process, right? So bringing different teams together so they work well with each other is going to be a challenge. And how do you ensure that works? There need to be certain levels of governance in place to ensure all teams are able to speak with each other in the right fashion. And then there's the adoption of new technologies. For example, when we talk about tracking all of those user activity events, as soon as we say events, we think of streaming platforms, and adopting those technologies and setting up infrastructure like streaming platforms is not easy. It's definitely a challenge in terms of patching, and even in terms of finding the right people to help you set up the platform and maintain it, and setting it up is just one step of the whole thing, right? So technical skills, people who know the technology, are going to be a challenge as well. So they come up with a blueprint. They build a wireframe of an application with all those hearts that represent your like button; I think all of us are very familiar with hearts and like buttons here. When a user clicks on one of them, picking the products they like, all of that information is tracked through a bunch of microservices and streamed into Kafka topics on a streaming platform. So events of user likes keep flowing from the web UI, through the services, and into your streaming platform; there's a sketch of one such event right after this. After that, the data is analyzed, and some level of intelligence is derived from it to pick the top 10 or top 5 most liked products, which are then displayed as featured products. So this is what the business discussed and agreed upon, and they built a blueprint around it. Now, when the mobile application comes into the picture, the mobile application should also use the same services, right? We don't want to duplicate our efforts. But it should access those services in a secure fashion. We don't want public access for everybody; we need to make sure that access is secured. So the teams arrived at a new development approach, the first step being the adoption of an API-first approach. Has anybody heard of APIs? Great. An API-first approach is all about identifying the contract first, identifying what the specification is going to look like. Even if you take Java, or any software for that matter, the specification is extremely critical; the Javadoc or the documentation is going to be very critical, right?
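To make the blueprint concrete before we dive into the API-first piece: here is a minimal, hypothetical sketch of what one of those "user liked a product" events could look like on its way into Kafka, written with the plain Apache Kafka Java client. The topic name, bootstrap address, and payload fields are all assumptions for illustration; the real Globex demo defines its own.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class LikeEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Hypothetical broker address; a managed instance would also need auth settings here.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-kafka-bootstrap:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Hypothetical payload shape: the demo tracks the product, the local time,
        // and whether the visitor had been seen before (via cookies).
        String event = """
                {"productId": "quarkus-tshirt",
                 "likedAt": "2023-05-24T10:15:00Z",
                 "returningVisitor": true}
                """;

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keying by product id keeps all likes for one product in the same partition.
            producer.send(new ProducerRecord<>("globex.tracking", "quarkus-tshirt", event));
        }
    }
}
```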
So adopting an API-first approach means defining how an API is going to be structured. Once that has been structured, mocks are created from it. Mocks basically let you simulate the services as if they were already developed, even before the backend teams finish building them, so that the UX team, the UI team, and the mobile team can continue development while the backend team is still developing. So we are able to achieve parallel development streams. Then we need a way to manage and secure those APIs, and for that they want to introduce an API management platform. Then Kafka is used, I think that is the most popular streaming platform, to ingest and process user activity events. And they plan to use cloud services, and you will see why: with the click of a button, in a few minutes, you'll be able to set up most of these platforms. I wouldn't say all of them, but most of them are very, very quick to set up. All right, I've spoken so much; a demo is worth, well, you can keep counting the zeros, many, many zeros, a hundred million words. So let's go look at the demos. Like I said, we start with the API Designer. As you can see right at the top, this is the Hybrid Cloud Console. The Hybrid Cloud Console is the entry point for all of our managed services. Through the Hybrid Cloud Console you can set up a lot of services; I'm just going to quickly show you the application and data services that we have. You can set up database access, data science, a service registry; we'll be talking about almost all of those services here, okay? To access it yourself, just go to console.redhat.com, create a user ID, or if you already have a Red Hat user ID, log in and check it out. The first step is to create the API design. In the API Designer, you click on create design. I'm going to showcase an existing Pet Store sample: you click create, and this is the sample, right? You can define the various paths for an API. This is basically defining what the specification is going to look like, so that later the backend team isn't doing their own thing while the UI team expects something else; we don't want that to happen. So I have created a Product Catalog API, one of the APIs that are part of the Globex application, right? As you can see, there are multiple paths; these are at the URL level, like xyz.com/something: get me this, get me that. And then you can also define what we call examples, okay? These are, I wouldn't say real, but close-to-real examples of how the response will look. This makes it really easy: you agree on all of this up front. So as a UI developer or mobile developer, I know exactly what kind of content I'm going to get, and as a backend developer, I know exactly what content I need to produce. There is no ambiguity later, okay? Once all of this is done, we export it into something called a registry, a service registry. I'm sure the term registry is not new to us, right? A registry is basically a listing, a home to a lot of content; it could hold container images; in the OpenShift world we have all heard of registries.
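To illustrate what "agreeing on the contract up front" buys you, here is a hypothetical Java rendering of such a product catalog contract. The interface and the `Product` record are my own invention, not the demo's actual code; the point is that paths and response shapes come straight from the OpenAPI document, so the backend and UI teams cannot drift apart.

```java
import java.util.List;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

// Hypothetical Java view of the agreed contract: each method mirrors a path
// in the specification designed in the API Designer.
@Path("/products")
public interface ProductCatalogApi {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    List<Product> listProducts();

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    Product getProduct(@PathParam("id") String id);
}

// Matches the close-to-real example responses agreed in the design,
// e.g. {"id": "...", "name": "...", "price": 9.99} (fields assumed here).
record Product(String id, String name, double price) {}
```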
In this case, we have something called a schema registry. No, I'm sorry, it's called a service registry, okay? Now what is a service registry? A service registry is a home to all of the specifications we create, so that it becomes a single source of truth for the entire organization working on a particular project. So once a schema has been created using the API Designer, you can export that design to a service registry. I'm just going to click on export. And this is my service registry page; I accessed it through the same console.redhat.com, under service registry instances. I'm sorry, the internet is a bit slow. Okay, I will wait for the service registry to come up. Once it comes up, you will be able to provide certain content rules. Give me a second, I'm just going to refresh this page once again. Okay, we'll come back to it when the service registry is up. Once the service registry is ready, multiple developers, backend developers, UI developers, are able to access those APIs, I mean the specifications, the specifications we just saw being created and published. The next step is to mock them, so that a developer has a tangible endpoint to test and develop against for a particular API. There are multiple mocking tools out there: we have Microcks, we have Postman. I picked Postman here because I think most of us are familiar with Postman, but do try Microcks too; it's a very easy tool to set up. The nice thing is that you just go ahead and import your OpenAPI specification and it creates the documentation for you. Once that is done, it appears in the list of collections, and then you can create a mock server. Most of the mocking tools I mentioned, Microcks and Postman, create what is called a mock server. And a mock server is very simple: you create a new mock server and say that you want to use an existing collection, the one I had just imported, okay? I have already done all of this. So this is the mock server I have, and now I can use it. It gives you a mock URL, and this URL is public; not just within this environment, anybody can use that URL elsewhere. So as a developer, I can just use it as my endpoint until the real backend system's endpoint is available to me, and I can test against it to get this response. So what does a developer do? They just copy this URL and use this endpoint in their Java code, or their JavaScript, Angular, or React.js code, and so on and so forth. Okay, something went wrong; third time lucky, perhaps, it's the third refresh. So the product catalog API got imported, and you'll be able to see the actual content made available here, hopefully. From the documentation, a developer in the organization would be able to download the catalog and view the content here, and you can also define certain content rules for when a new version is uploaded; you can see here, right, you have versions.
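Here is what "copy this URL and use this endpoint in their Java code" could look like in practice: a small, self-contained sketch that calls the mock server with the JDK's built-in HTTP client. The mock URL below is a made-up example of the general shape Postman generates; yours will differ.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MockClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical mock URL; Postman issues one per mock server on *.mock.pstmn.io.
        String mockBase = "https://example-1234.mock.pstmn.io";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(mockBase + "/products"))
                .header("Accept", "application/json")
                .GET()
                .build();

        // The mock answers with the canned example responses from the specification,
        // so the UI team can develop and test before the real backend exists.
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```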
So you can just replace an existing version, even with a breaking change; every time you upload a new version of your specification, it's a new version, okay? So you can define content rules and compatibility rules to ensure a new version doesn't break or change existing consumers. The next step is to secure these APIs for external users. If access stays within the same cluster, it's fine; you can use your internal service URLs instead of external-facing routes. But what if you are exposing them to external people? In this case we're talking about a mobile application, or in a future case a partner may want to start using your products and displaying them on their site. For example, some Amazon-like partner might want to pull your products and display them on their website. So there could be a partner team that wants to access your data, or a mobile team that wants to access your data. You don't want to share the internal services, right? Nobody wants to share internal services, because access needs to be secured. So you start using API management. I have already set this up. With API management, you define something called a user key, which protects a particular URL, okay? Now, I'm sorry, even if I do Ctrl-plus and all of that, I'm unable to make the screen any larger. But each endpoint will need something called a user key. And how does a developer get a user key? A good API management platform will have something called a developer portal. A developer portal is a place where people can go and sign up for an API. In this case, I have John, who's part of the mobile team, signing up for an API, and once he's signed up, he gets a key over here, okay? So right now, as you can see, if I just try to access this URL directly, it tells me authentication failed, because there is no user key. But when I provide the user key I obtained from the developer portal, then I'm able to start using it. Now, what have we done? Created the API. Held it in a source of truth, the service registry. Mocked it, so we have parallel development teams. And secured it from a production perspective as well. Now, that's all backend stuff, right? All that is good; you're basically laying the groundwork for all of this. But what is it that we really want to do? So the developers have built, and you can see, a Node.js application running here. Let's see how that looks, okay? You have the developer team with a bunch of products running. Burr, you must be able to see your name over there. We have multiple products, and all that is good. Now we have to start streaming those user likes, the clicking of those like buttons. Here's where you're going to help me, not right now, just give me a few more seconds. To connect to a Kafka instance, I'm going to show you something very quickly: how easy it is to create a Kafka instance, okay? Like I said, it's all in console.redhat.com. I'm going to show that again just so you are aware. Under console.redhat.com, use your own login, click on Kafka instances, and when you click create Kafka instance, let us say Globex wants to create a Kafka instance.
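Before we move on to Kafka, here is what that user key looks like from the calling side: a hedged sketch in which the gateway hostname and the key-in-query-parameter pattern mirror what the demo showed on screen (3scale's API-key mode commonly uses a `user_key` parameter), but the exact names are assumptions.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SecuredApiClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical managed endpoint; the key itself comes from signing up
        // on the developer portal, as John from the mobile team did.
        String apiBase = "https://globex-catalog.example-gateway.io";
        String userKey = System.getenv("GLOBEX_USER_KEY");

        HttpClient client = HttpClient.newHttpClient();

        // Without the key, the gateway rejects the call ("authentication failed");
        // with it, the request is passed through to the backend.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(apiBase + "/products?user_key=" + userKey))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```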
So you pick a cloud provider; at this point in time, because it's a trial environment, we provide AWS, but we also have offerings with most cloud providers. Then you pick an availability zone; in a production environment you would have multi-availability-zone as well, but because it's a trial, it's a single availability zone. We have multiple streaming units, and you click on create instance, and that's it. Of course, we've got to wait for a couple of minutes, and I don't want to keep you all waiting, so I already created a Kafka instance. Once you create a Kafka instance, you can very easily set up connectivity to it from your cluster using an operator; I think we already spoke about operators, and the operator looks something like this. I have set up connectivity from my Kafka instance to where my Java application is running. Now, all of this will look very similar, because we use the same UI framework, the design system called PatternFly, okay? So all of this will look very similar, and you may be wondering whether I'm doing the same stuff in the same box, you know? But this is the hybrid cloud console, which holds all of our managed services. What we see over here are my custom Quarkus applications, the services behind the UI running over here. So I have connected my custom Quarkus applications to my Kafka instance, and this is how we do it: once you click on connect, it picks up the topics I have already created; you click on next, and it helps you set it up very easily on screen. Once you do this... so, I have created a Kafka instance for you. Now, there's one thing I want to reiterate: all of our managed services are secured for access. What do we mean by that? Does anybody know service accounts? Okay, so service accounts are a way for you to protect your resources at a very fine-grained level, right? For example, if you want to provide access to the Kafka topics, I have created multiple topics here, I'll talk about them. You pick an account, any account, you click on it, and you can start assigning permissions. So you can see over here: you can "produce to a topic". What does produce to a topic mean? You can send information to a topic; you're a producer, a generator of messages, and they are pushed into a particular topic. And even within a particular topic, you can assign multiple permissions. [Audience question] Yes, yeah. The topics are not created automatically, because each individual has custom requirements, right? The cluster is set up for you, a high-availability cluster is made available to you with all of the replication and everything in place, and then you just create a topic. Creating a topic is as simple as: provide a name, then it asks you what partitions you want, the message retention in terms of retention time and retention size, is that visible? okay, then how many replicas, click finish, and that's it, your topic is made available to you. And your access controls can apply to the entire Kafka instance or to an individual topic as well. There's something I want to reiterate again, because security is of prime concern for all of us, right?
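The same choices the topic wizard asks for (name, partitions, retention time, retention size, replicas) can also be made programmatically. Here is a minimal sketch with Kafka's Java Admin client; the topic name, broker address, and concrete values are assumptions for illustration.

```java
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Hypothetical bootstrap address; a managed instance would also need
        // the SASL/TLS settings for your service account here.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "my-kafka-bootstrap:443");

        try (Admin admin = Admin.create(props)) {
            // 3 partitions, replication factor 3, with retention limits,
            // mirroring the fields the console wizard presents.
            NewTopic topic = new NewTopic("globex.tracking", 3, (short) 3)
                    .configs(Map.of(
                            "retention.ms", "604800000",     // keep messages for 7 days
                            "retention.bytes", "1073741824"  // cap the topic at ~1 GiB
                    ));
            admin.createTopics(Set.of(topic)).all().get();
        }
    }
}
```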
So you will be able to provide the right level of managed access to the Kafka topics, okay? Now, I have set that up; this is called a service definition. If you remember, on this page I had created the connectivity, right? So I can just drag and drop each of my services to bind them; it's called service binding, okay? I'm able to bind each of my services to the Globex environment. You might be wondering, how will I do this in each of my environments? As a developer, I want to very quickly bind my service to a Kafka instance rather than worrying about GitOps, or setting up all of the other automation that's necessary in real life. So as a developer, you can very easily drag and drop using service definitions and service binding. But in real life, all of this can be done using GitOps as well; we have the right APIs for you to automate it. In this case, I just wanted to showcase the other side of things, how it might look from a developer's standpoint. Okay, now all my services are running; they're all connected to Kafka, and my products are all up and running, okay? And there is one quick thing I wanted to show you on this page. Once the events are getting picked up from each user's likes and clicks and all of that, we want to build the top 10 products. How would you do that? The API I've used here is the Kafka Streams API; it's a Java API that helps you aggregate the data and produce the top 10 products, okay? That is this part of it, the analyze part, excuse me; there's a small sketch of that aggregation after this section. Okay, so if I go back over here, I do have a simulator that can simulate thousands of messages. But I wanted you all to open up your mobile phones and help me generate the messages. I have to apologize up front that it is not very mobile-friendly at this point in time, but I'll make sure that changes, okay? So access it and go ahead; it'll show you a list of products. Like I said, it is not mobile-friendly, but you can zoom in and click on those like buttons, yeah? Are you able to click on the like buttons? Oh, yeah, sure. Able to access? Is it coming up? Nice, yeah? Okay, super. So go ahead and click on the like buttons, okay? People like Patagonia, some people like the Quarkus t-shirt, the water bottle, the Kubernetes t-shirt, the webcam, awesome. So if you saw, the first page had no featured products at first. Now, rather than me clicking on a simulator, all of you helped me generate all of these messages. All of the messages you sent went through the Kafka streaming platform, got consumed by a Kafka Streams application, and it crunched all of the data and created a list for me. I will show you how that looks, okay? We have a few more minutes. Let me go back to the Kafka Streams application, okay? And this is my messaging page. Okay, how many messages have I had? Wow, how much have you all been clicking? Do I have like 400 messages? Okay, more messages still coming in. Okay, any further clicks? Let us see. So you can see here, right?
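Here is the promised sketch of the analyze step with the Kafka Streams Java API. It is a simplified stand-in, not the demo's actual application: it only counts likes per product and publishes the running counts (which is what backs the kind of aggregate changelog topic seen in the demo); the real app goes further and selects the top 10. Topic names and the application id are assumptions.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class LikesAggregator {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "globex-likes-aggregator"); // hypothetical
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "my-kafka-bootstrap:9092"); // hypothetical
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.StringSerde.class);
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.StringSerde.class);

        StreamsBuilder builder = new StreamsBuilder();

        // Read like events keyed by product id and keep a running count per product.
        KTable<String, Long> likesPerProduct = builder
                .<String, String>stream("globex.tracking")
                .groupByKey()
                .count();

        // Publish the counts; a downstream step (or the UI) can pick the top 10 from these.
        likesPerProduct.toStream()
                .to("globex.product-like-counts", Produced.with(Serdes.String(), Serdes.Long()));

        new KafkaStreams(builder.build(), props).start();
    }
}
```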
Each of the messages you clicked was a payload, which went over here, saying this is the product you liked, what the local time was, and whether you had previously visited, using cookies and stuff like that, okay? So thank you for engaging with us today, cool. Now, you have more topics over here, and these topics are the ones actually used by Kafka Streams to crunch all the data and pick the top products. The aggregated changelog is the one; you can see, in the last product, it showcases the top products, okay? Now, that's the end of the demo. What I want to quickly... you want to say something? Okay. What I want to quickly reiterate is: we saw a lot of moving parts, so let me reiterate where everything is running. All of the custom applications are running on OpenShift; it could be on-prem, on cloud, hosted, or on bare metal, right? Then you have the UI being accessed by multiple users; in this case, it was all of you accessing the application. Then we have OpenShift API Management, the API management platform, which runs on OpenShift Dedicated. Then we have all of those services, the Kafka, the designer, the service registry, all running as cloud services. And I also had Quay.io; where did Quay.io come into the picture? The images of all of the services you saw are right over there. So all of them come together well to build a good use case for a developer. Now, "a great platform should make it easy to do things." A wise man once said that; I didn't say it, I copied it from somewhere, but I quite like it, you know? A great platform should make it easy for us to build all of this. But don't take my word for it; you can test-drive this, okay? You can take a picture of this slide. You can try Kafka using that URL, and you can try API management this way; we have sandboxes available for all of them. There's console.redhat.com; just use your own email address or your Red Hat account to access it. And this particular demo that I showed is right over there; that is the URL for it. You can try it out yourself. Don't think it's too hard to have all of these moving parts; you can very easily set it up yourself with all of the tools available, and there's nothing else you will need to build it. There are further learning paths available at the end of that; developer.redhat.com/learn provides you so much more. Thank you so much for your engagement; I hope this was useful. So, yeah, we have a little time left. Any questions? Anyone? [Audience question about whether these technologies are open source.] See, in a sense, we are all open source, right? [Audience: If I have the same setup hosted in Azure itself and I want to move it to OpenShift and take advantage of these tools?] Yes, absolutely, we have migration toolkits. Yes, we have what we call migration toolkits; you can look for Red Hat Migration Toolkits. We have multiple toolkits available, right from applications being migrated, to VMs, even moving from a VM to a container world, and workloads getting migrated. It depends very much on the workload, right? So we do have good toolkits; just look for "migration toolkits". You may want to note that down, though it's not a difficult name to remember. And just to add, please continue, just to add one more thing on this one.
So, it depends a lot on the architecture as well. You could obviously move live from one environment to another, depending on the architecture. For example, if you have data to move, you can use something like Debezium, which can be the main player in moving the data from one place to another. So migration comes down to designing the architecture the way your business supports: if you can take a downtime, that's okay, but if you want to move live, that can also be done. Since the bits are open, a lot of things can be done. Moving live from AWS to OpenShift is maybe more straightforward from an application perspective than from a data point of view. Yeah, he was primarily asking about microservices and applications, I think. And he brought up a good point that we didn't have time to cover: you can look up Debezium, okay? I'm just going to show another screen; you can take a picture of that, the one over there; it's about Change Data Capture. There's a technology called Debezium which helps you with change data capture, which is really, really helpful. So we have content and hands-on labs available; please try it out and leave us a comment.
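To give a rough idea of what change data capture looks like in code, here is a hedged sketch using Debezium's embedded engine in Java. Everything concrete here (the source database, hostnames, credentials, and what we do with each event) is an assumption for illustration, and exact configuration keys vary by Debezium version and connector; the point is just that each committed row change in the source database arrives as an event you can forward to the new environment, which is what enables a live, low-downtime migration.

```java
import java.util.Properties;
import java.util.concurrent.Executors;
import io.debezium.engine.ChangeEvent;
import io.debezium.engine.DebeziumEngine;
import io.debezium.engine.format.Json;

public class CdcSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("name", "globex-cdc");                                   // hypothetical
        props.setProperty("connector.class", "io.debezium.connector.postgresql.PostgresConnector");
        props.setProperty("offset.storage", "org.apache.kafka.connect.storage.FileOffsetBackingStore");
        props.setProperty("offset.storage.file.filename", "/tmp/offsets.dat");
        props.setProperty("database.hostname", "old-env-db.example.com");          // hypothetical source
        props.setProperty("database.port", "5432");
        props.setProperty("database.user", "replicator");
        props.setProperty("database.password", "secret");
        props.setProperty("database.dbname", "globex");
        props.setProperty("topic.prefix", "globex");

        // Each row-level change captured from the source arrives here; in a real
        // migration you would forward it to the target environment (e.g. via Kafka).
        DebeziumEngine<ChangeEvent<String, String>> engine = DebeziumEngine.create(Json.class)
                .using(props)
                .notifying(event -> System.out.println(event.value()))
                .build();

        Executors.newSingleThreadExecutor().execute(engine);
    }
}
```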