So, you know who this guy is, right? This is Steve Ballmer, and he was one of the biggest advocates for developers. He believed developers should code, and that coding is their primary job, and he was referring specifically to Microsoft products such as Azure and Windows. So developers are basically creators, innovators, right? You write the code that brings things to life, that makes things available. In the morning session you might have heard how much time we actually spend on development: roughly 52 to 55 minutes per day out of an eight-hour window. And what are the other things you do, other than writing actual code? Anyone? You spend a lot of time in meetings. You spend a lot of time on the technology stack: a lot of research, designing a solution, understanding it. Then there is the ticketing system, where you have to write a lot of tickets and resolve issues, JIRA and what not. And another thing is context switching: you have to switch between different tech stacks, frameworks, and runtimes to achieve your goal. Now, for meetings and reporting, I think you have to fight your own battles, right? You have to talk to your manager and sit in those meetings. But for the rest of these four, today we will take a quick look at how RHEL, Red Hat Enterprise Linux, can help you reduce this cognitive load if you adopt it as a development platform. In today's agenda we will see how this cognitive load can be reduced by the native runtimes and frameworks that RHEL provides, how to containerize your application with RHEL's native container support, and how Linux as a foundation helps the development cycle move smoothly.
And last, if time permits, we will see a quick demo of how you can put all these technologies together, build an application that runs locally on bare metal, and then containerize that application using Podman. So, when we say RHEL is a developer's powerhouse, why is that? If you look briefly, we support a lot of your favorite language runtimes, such as Python, Ruby, Perl, and PHP, along with development tools like GCC for C and C++, and backends such as PostgreSQL, MySQL, and MariaDB. These languages, tools, and backend technologies give you a solid foundation for a wide range of application development. And what does RHEL guarantee? As you might have heard as well, Red Hat commits to a 10-year life cycle. What does that mean? If you have adopted specific tools, technologies, runtimes, or libraries that are part of a RHEL release, you are assured of 10 years of guaranteed support. If you're using a specific version, you get 10 years of support, which makes sure your development cycle is not disrupted and your project timeline is predictable. You know exactly when a release reaches end of maintenance and end of life, and you can plan accordingly. This frees your mind from worrying about when and how you will have to upgrade; you can focus purely on development. So what are the runtimes and frameworks available at your disposal? Any Microsoft developers here, anyone on .NET? Okay, great. So, Microsoft .NET is now open source, as you are aware, and it is used for everything from traditional application development all the way to microservices-based applications.
And what's more, RHEL supports .NET natively. You can keep your development environment on Windows and simply lift and shift that .NET application to run natively on RHEL. In this way you have great flexibility to switch between infrastructures and environments without having to worry about it. And with the native containerization support that RHEL provides, you can create, build, deploy, and containerize any .NET application directly on RHEL. Well, we have heard a lot about Quarkus today, right? Quarkus this and Quarkus that. Just to give some context again: Quarkus is a Kubernetes-native Java stack for serverless and cloud-native development. And why does everybody say it's good for development? First, it gives developers a lot of flexibility with fast startup and quick setup time, and moreover, as per an IDC report, it reduces resource consumption by 64%. So that's a great reason to have Quarkus run your code. Another important aspect is that it bridges the gap between imperative and reactive coding. What does that mean? Imperative means you do things in a certain sequence, following a defined flow, while reactive means your code responds as and when events occur. With Quarkus, half of your code can be written in the imperative style and half in the reactive style, in the same application. Quarkus takes care of that context switching, and you can just develop the way you want; if another developer joins who works the other way, Quarkus is ready for that too. That definitely helps reduce the load, and you can focus on the logic you want to implement in your favorite programming language.
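As a quick sketch of how little setup that means in practice, scaffolding a Quarkus project is a couple of commands. The group and artifact IDs below are made up for illustration, and plugin coordinates can shift between Quarkus releases, so treat this as a sketch rather than a definitive recipe:

```shell
# Scaffold a new Quarkus project with the Maven plugin
mvn io.quarkus.platform:quarkus-maven-plugin:create \
    -DprojectGroupId=com.example \
    -DprojectArtifactId=hello-quarkus

cd hello-quarkus

# Dev mode: live coding with hot reload, no app server to manage
./mvnw quarkus:dev
```

Dev mode watches your sources and reloads them on change, which is a big part of the quick-setup story mentioned above.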
Well, there are certain scenarios you come across very frequently: I'm writing my application against a specific language version, say Python 3.6, but the new project I'm working on requires Python 3.8 or 3.9, while my underlying platform stays the same. I don't have time to upgrade the entire ecosystem just for the one project that needs Python 3.9 when everything else remains as it is. What do you do in that case? Get stuck, or find another way? RHEL has a beautiful capability for this called Application Streams. An application stream is a group of packages corresponding to a specific programming language version, tool set, or set of libraries. So in this example, if you want Python 3.9 for your project, you just install the Python 3.9 application stream into the environment already running on your infrastructure, and you don't have to worry about whether it will work. If it's a Red Hat certified application stream you are deploying, you can rest assured. It gives you a very flexible way to build applications with different languages, whether Python 3.9, Perl, or PHP: you get the application stream from the Red Hat package repositories and you are good to go. And the most important thing is taken care of. Everybody thinks about the security aspects, right? Every application stream, in fact every package, that comes from the Red Hat repositories is security-certified and hardened as per industry best practices. That's what Red Hat is known for. This way your security is taken care of, your cognitive load of managing various technology stacks is taken care of, and you can spend most of your time on coding, which is what you are here for. Now, containerization, right?
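Before we get to containers, the application stream workflow just described boils down to a few commands on a RHEL 8-style system. The `python39` module name matches what that release publishes, but check `dnf module list` on your own host, since available streams vary by release:

```shell
# See which Python 3.9 stream the AppStream repository offers
dnf module list python39

# Install the Python 3.9 stream alongside the platform's default Python
sudo dnf module install python39

# Both interpreters now coexist; your project can target 3.9
python3.9 --version
```

The platform's own Python is untouched, so system tooling keeps working while your project uses the newer stream.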
We talk a lot about containers: containers, containers, containers. Let's look at another aspect of containers. How many of you think a container can run anywhere? Is there any possibility of issues? And as your applications grow and mature, they move out of your comfort zone, and these problems are going to increase, correct? So here I'd like to introduce a very good feature that RHEL has. But first, let's level set. You know the kernel space and the user space, right? Think of it as traveling internationally with an electronic device and a travel adapter that you have to fit into the electrical socket. What happens? You are able to fit the plug into the socket, but will it work? Not guaranteed, because the electrical specifications on the other side may be different, so the probability of failure is higher. Now apply this analogy to kernel space and user space. The kernel space interacts directly with the underlying hardware, your I/O and your file system, and provides a layer through which the user space gets all its services. And what's the user space? It is where your applications and libraries run, and whenever they need a service from the hardware, they have to go through the kernel space. Not every kernel in the world is the same; every kernel has a different version. So a user space, say a container, that was built against a specific kernel version will not necessarily work flawlessly in every deployment. As you said, it might work, but the probability of failure is higher. In that case, we have to consider these two spaces, these two areas of operation, as a whole.
So, if you want to run your overall application in a containerized mode, there has to be some solution. As we said, the probability of hitting this problem is low today, but when you do hit it, the time and energy you burn fixing it is high. And as workloads grow, with AI coming into the picture, IoT needing specific hardware, and GPU-intensive workloads increasing, that's the application side. What about the underlying kernel and OS? That's also going to evolve: more refined, more mature, more optimized. If we stay attached to this scenario, the problem is bound to increase in the future. What's the solution? Here comes UBI, the Universal Base Image, Red Hat's Universal Base Image, which is built from the standard RHEL packages. A UBI is essentially the RHEL user space, your packages, libraries, tools, and backend technologies, packaged as a container image that is built and tested to match the RHEL kernel interfaces underneath. When the image is created, it is a fully functional miniature of the RHEL user space that you can deploy anywhere. Now, the speaker's claim is that the vast majority of enterprises already run RHEL. So on that infrastructure, if you take the UBI, then no matter where your RHEL is running, bare metal, on-premises, public cloud, anywhere, that UBI image will fit perfectly, because it is designed for the underlying kernel interfaces as well. We will take the question, yeah. So what happens in this case? You have the freedom to deploy your image anywhere you want. Now, there are different types, because not every developer wants a fully functional base image. I want to create the image I like. I'm just writing a Python application.
Why should I take everything else? In that case, you have the option of pre-built language images. You can build images for, say, a Python application, a Perl application, or a PHP application, pulling only those libraries and packages from the main RHEL repositories, and you are good to go: the image encompasses your application and all its dependencies, and you can deploy it anywhere you want. Then there are variants, such as the standard image, the minimal image, and the init image, specifically defined to suit different needs. For example: I just want to run a web server, only Apache and my own application, nothing else. Go for the init image, which can run services. If I want a standard image that is a replica of the RHEL user space, with DNF package management and the other utility packages included, go for the standard one. You have full flexibility to create your image from the Red Hat registry, based on RHEL, so you are assured of a fully functional containerized image that can be deployed anywhere, from bare metal to the public cloud, without worrying about the underlying security and support. And again we emphasize the Red Hat 10-year support guarantee: even for this image, you can be sure you can run it for 10 years without compromising any functionality or support. The other thing is, if tomorrow you say, okay, I'm running Python but now I want PHP as well, you can just pull those packages from the RHEL repository, install them into your UBI, and you are good to go; your underlying infrastructure remains the same. That gives you peace of mind to keep focusing on development. Now, remember the inner loop and outer loop from the morning sessions?
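To make the UBI idea concrete, a minimal Containerfile on top of the UBI minimal variant might look like the sketch below. The application file name is made up, and the package set is illustrative:

```dockerfile
# Start from Red Hat's UBI 9 minimal image
FROM registry.access.redhat.com/ubi9/ubi-minimal

# UBI minimal ships microdnf instead of the full DNF stack
RUN microdnf install -y python3 && microdnf clean all

# Add just our application on top of the RHEL user space
COPY app.py /opt/app/app.py
CMD ["python3", "/opt/app/app.py"]
```

Swapping `ubi-minimal` for the standard `ubi` image changes only the base footprint and the package manager available; the build flow stays the same.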
In the same way, with RHEL as the underlying system and the tools and technologies it comes with, you can create, build, and run all your containerized applications using Podman, Buildah, and Skopeo, then push them out and deploy them onto something like OpenShift. That's the containerized ecosystem RHEL provides, a full end-to-end solution. Now you might say: I don't need a fully functional container platform, a small container deployment is sufficient for me. Here comes Podman, which I think you also saw in the morning. Podman is a daemonless engine that can run containers rootless, which makes sure your security is not compromised: your containers run without root privileges, so that's an added layer of security you get by using Podman. Now, to summarize what we've seen so far. We saw where developers' time goes, and the cognitive load that comes from switching technology stacks and environments, containerization, security issues, and package upgrades. Red Hat, as an overall ecosystem, provides a set of runtimes and frameworks you can run anywhere, so you can switch easily between different contexts. It works in the open with open source projects like Quarkus, and with partners like Microsoft. We also provide containerization with UBI, so you can containerize your application, and Podman, if you want a minimal footprint. With all this, we'll see a quick demo of how to set up a Quarkus app that consumes Microsoft SQL Server, running on Red Hat Enterprise Linux. This is the demo link; the demo is also available on the Red Hat website, and to save time we have a pre-recorded run of it. So, here is the demo.
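The containerization half of the demo boils down to a Podman flow like the following, sketched with an assumed image name and port:

```shell
# Build the application image from a Containerfile, as an ordinary
# non-root user; Podman needs neither a daemon nor root privileges
podman build -t quarkus-demo .

# Run it rootless, publishing the app port (8080 here is an assumption)
podman run -d --name quarkus-demo -p 8080:8080 quarkus-demo

# Verify the container is up
podman ps
```

The same commands work unchanged whether the host is a laptop, a bare-metal server, or a cloud instance, which is the portability point being made here.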
So, what we have done, for the sake of time, is set things up in advance: on a virtual test instance we have installed Microsoft SQL Server and created a sample database. From here onwards you can see how to download the sample code from Git and install it. Once the repository is downloaded, we run the Quarkus application in the background. That is what this demo shows. I can share the QR code as well; you can try all of these labs, containerization and everything, on the Red Hat Developer portal, which gives you a wide variety of labs to test and experience in a virtual environment. So, currently this is installing Quarkus. That's done, and now we are calling the APIs from a different terminal to add records to the database we created. We saw the create operation; now we'll do an update and a delete, with a double verification by logging into the database. So you can see it is very easy to understand: you download the code, build the application, and it is just as easy to containerize. Here is the record that was created. Now that you've seen how the application's Quarkus APIs work, in the next part we'll containerize it with the help of Podman. We build from the Containerfile, then run the container. The container is running, and now we execute all the same operations again through the APIs, this time against the containerized application. So this is how, using the functionality, features, and tools RHEL provides, you can easily reduce your cognitive load and build a better application. Now we will see how all of this can be extended to the edge. Handing over to you.
Thanks, Nikhil. That was a very informative quick demo, and what's important for all of us here is to realize how to extend all the goodness we've just learned, right? Fast boot time, low memory footprint, all of it important from a cloud-native perspective. UBI, the Universal Base Image: the key takeaway there is purpose-built. We've got four typical formats available for UBI, all subsets of the RHEL bits, which means all the goodies of RHEL packaged into base images you can build your containers upon. And the good part is it's all purpose-built. That's why you see the different options of UBI, including minimal and micro, not just the full UBI package: it means an image as small as about 15 MB, probably a use case for an edge gateway. You also saw the goodness of the container build and management tools like Podman, Skopeo, and Buildah, and the best part is they're OCI-compliant. That's again a very key takeaway. Rootless, meaning enforcement of running as a non-root user, and all the other goodness we've learned so far. So thanks again, Nikhil, for all that good learning. Now, how do I extend all this to an edge footprint? Edge is everywhere; edge is a reality. Take 5G as an example: it is really pushing ultra-reliable low-latency communication. So it's a reality. How ready are we? Tomorrow we're talking about architectures like V2X, vehicle-to-everything, in a smart city, where a connected vehicle can talk to a smart pole. It can not only tell you the fuel is running out and where the nearest gas station is, but the next time you drive there it can also settle the financial transaction. If you've heard about architectures like "wallet on wheels," edge is a reality, and 5G is pushing that envelope and making it even more real.
So how do we go back to that idea of extending all this goodness to the edge? That's the big question all of us need to understand, and we'll see how we expand everything Nikhil spoke about. Now, before I get started, a quick and simple question; there's no hard definition for it. What, according to you, is an edge footprint? Anybody want to give it a try? There you go, please. "Faster and scalable." Fantastic, excellent answer. We have an overall portfolio, and I'll try to draw parallels so you understand how we take care of the entire landscape here. Look at the landscape starting from the data center facility: your power systems, your HVAC, the whole real-estate side. Then comes the IT facility: your compute, your network, your storage, and of course HVAC again, because you have to maintain an optimum temperature, otherwise the wear and tear on machines that dissipate a lot of heat is going to be high. That's your data center facility. Now, anything that sits close to the data center facility but outside of it is what we call the near edge. Anything further beyond, which could be an entity like the connected car I used as an example, is the far edge. We at Red Hat ensure we have a portfolio that addresses this entire landscape. Now let me draw the parallels. The first one you see here is what we call the Red Hat In-Vehicle Operating System, the automotive-grade offering we're working towards. To make it easy to understand what it brings, let me give you an example. Picture yourself driving a car.
It's got all those beautiful cameras, radars, lidars, and ultrasonic sensors, and this amazing ADAS, the advanced driver-assistance system. Say we're driving this beautiful car in autopilot mode with ADAS fully engaged, and there's an object right in front of you. Thanks to all those sensors, the car detects that there's an object right ahead of us. Now imagine if you had to take that data, move it to the cloud or to a central aggregation point, do all the processing, and come back with the next best action, which is to apply the brakes. The latency of that round trip means you have probably already hit the object. Therefore, as the gentleman with the beautiful answer implied, the edge is where the data itself is produced: you bring the data consumer to the same place as the data producer. That defines edge. We had another lovely answer too, scalability and speed; absolutely, and I'll come to that next. So you've understood the in-vehicle operating system, which is really the entity at the far edge, with all the goodness of RHEL in it. Now let's take that to the next segment we have: typical edge devices. Take cameras, for example. You have a bunch of cameras, and an NVR or DVR consuming the camera feeds over a real-time streaming protocol, aggregating, processing, and inferring on them, all those beautiful things you saw in the OpenShift AI demo. How do I cater to that? That is where we bring in the next segment, which has two things to it; remember, two things, and MicroShift is one of them. The first is what we call RHEL for Edge, which stands for Red Hat Enterprise Linux for Edge. It is a minimal version that caters to the latency part. The gentleman said speed, and that translates into exactly this.
Now, here's the big question all of us have: do edge devices and edge gateways really have to run a lightweight Kubernetes just to host a stateless application with a few containers? Not necessarily. If you look at where the industry is heading, everyone points to lightweight Kubernetes as the answer, but that's not really required unless you have a stateless application and you actually need the goodness Kubernetes brings to the table. Red Hat is the only organization that gives you this flexibility: all the goodness of container build and management tools like Podman, driven by systemd on RHEL, with the low latency you need, so you can still achieve what you want. Now add MicroShift to that. What is MicroShift? A minimal version of OpenShift. We've done away with most of the cluster operators and kept the bare minimum: ingress for your routes, the storage CSI and the CNI networking aspects, and a bit of Helm so you can bring Helm charts for your installation automation and things like that. So you have the option of bringing both together, based on the type of application. I want to re-emphasize that Red Hat gives you a flexibility nobody else does. That answers the far edge. In between, take the same example of cameras feeding data to NVRs and DVRs: depending on your architecture, you may have an edge gateway where you'd prefer to have the control plane and the worker nodes together. That's also possible. And beyond that, the full-fledged OpenShift is available if you want to bring the best of all worlds together. This is how we bring the entire portfolio together so it's easy for you to build for the edge and manage at the edge.
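To illustrate the "Podman driven by systemd" pattern, here's what a Quadlet unit might look like: a container definition that systemd manages like any other service. The image name, port, and unit name are made up for this sketch, and Quadlet requires a reasonably recent Podman:

```ini
# /etc/containers/systemd/camera-infer.container
[Unit]
Description=Edge inference service managed by systemd

[Container]
Image=registry.example.com/edge/camera-infer:latest
PublishPort=8080:8080

[Service]
# systemd supervises the workload and restarts it if it crashes
Restart=always

[Install]
WantedBy=multi-user.target
```

After a `systemctl daemon-reload`, Podman's systemd generator turns this file into a `camera-infer.service` unit, so you get supervised containers with no Kubernetes control plane at all.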
Now, having said that, what is very important for each of us here is to understand the challenges you have to consider when you start building anything new, or repurposing your applications, for an edge footprint. A gentleman said management, and if you look at the analyst data, 74% cited in a survey that manageability is the biggest criterion for adopting edge. What we do at Red Hat is exactly that: make manageability easy. So let me bring up the four important considerations each of you should keep in mind when you design and build for edge. What are they? Everything that falls under the manageability umbrella: zero-touch provisioning, visibility into your workloads in terms of health parameters, and the security aspects. I stress security because as you extend workloads beyond your data center to the near edge or the far edge, you expose a much larger threat surface, so it has to stay at the back of our minds when we design. Now, how do we make all of that a reality? First and foremost, very simple and easy onboarding. Can I deploy my images over the network, because we're talking about hundreds of devices, the speed and scalability the gentleman mentioned? And at the same time, can I also set a device up from physical media? Yes. From there, if I have my own golden images, how do I download them, how do I deploy them, and how do I update them, backed by whatever OS upgrades, base-image upgrades, runtime and middleware updates you have, in a transparent and confident way, staged into the workload? That sums up to the very important thing again: edge manageability. These four things together sum up the importance of RHEL for Edge.
We call it Red Hat Enterprise Linux for Edge. Now, how do we do this? What have we extended and made available as part of RHEL for Edge for you to use and reduce your cognitive load? I have a small example here. If you look at the evolution of cloud systems, they have always depended on what we call first-boot automation, and thanks to cloud-init, Ignition, and things like that, it became a reality. We've extended that support to RHEL for Edge. We now support Ignition, which means a very simple, declarative way of defining all the configurations to apply during provisioning. I have a simple example here, where I declare the variant, RHEL for Edge, and the version, and merge it with my Ignition file. What am I doing? I have a golden image, a customized golden image that has gone through all of your image-management workflows, been tested and certified by your CISO, and is ready to be deployed to production. When that is all ready, I then need to extend the deployment with certain user definitions. How do I define a user? How do I take care of which groups he belongs to? How do I make sure he has access to certain files and directories and maintain the overall permissions? All of this is made available as part of RHEL for Edge. Note that this support is limited to what we call the edge simplified installer and the raw image. Having said that, what it means in practice is simple: as long as your Ignition file is encoded and served from your web server, the deployment happens. That's the simple example here. But even before we get here, okay, this is fantastic, I have the Ignition support, but how do I even get there? All of you as developers will have a big question.
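A declarative first-boot config along those lines could be written in Butane, which compiles to Ignition JSON; Butane has an `r4e` variant for RHEL for Edge. The user, group, key, and URL below are illustrative:

```yaml
# edge.bu -- compile with: butane edge.bu > edge.ign
variant: r4e          # RHEL for Edge variant of the Butane spec
version: 1.0.0
ignition:
  config:
    merge:
      # Pull a shared base config from a web server at first boot
      - source: http://config.example.com/base.ign
passwd:
  users:
    - name: edge-operator
      groups: [wheel]            # group membership and sudo rights
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...example-key
```

The compiled `edge.ign` is what you would serve over HTTP for the simplified installer to fetch during provisioning.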
So how do you get to the customized images in the first place? That's where what we call a blueprint comes in: you embed your customizations into a blueprint. I have a simple example here with an httpd service to start and its configuration to manage. Say you have user definitions, you've set the right permissions and groups, and you've provided access to the files and directories. Can I, on top of all of this, configure a simple httpd service to start? What that means is my image is what we call ready on boot. With all of these customizations baked into your application image, you can either run this independently or run it in conjunction with the Ignition support. Now, if you have specialized hardware that supports zero-touch provisioning, with FIDO Device Onboarding, FDO, even that is supported. So if tomorrow you're building a specialized edge appliance, a specialized hardware-plus-software kind of device, that's possible too. And what I've done here is simply extend it during my image build process and call ubi9; or, instead of ubi9, I can call ubi9-minimal. You remember all the UBI variants Nikhil went through. This is how we extend things and make them easy for your specific edge footprint. Now, having said that, you heard Ashutosh, Amita, and most of us talk about the Developer Sandbox and so on and so forth. But how do you actually get started? What we have, especially for you as individual developers, is what we call D4I, the Developer Subscription for Individuals. It's a no-cost subscription that gives you access to Red Hat products like RHEL, Red Hat Enterprise Linux, OpenShift, and Ansible, including the Red Hat build of OpenJDK.
So the best part is you can download the cheat sheets, the ebooks, and you can run up to 16 nodes per subscription for the individual developer use only, for non-production in other words. So please scan the QR code and be happy to subscribe and get started on it. Now on that, I'll hand it back to DK. Yeah, well, all this is good. But if you guys are on a sentos, which is a hot topic these days, how to get here? Right? So this is an announcement that was there related to sentos. And now let's say you want to get use of all these features, functionalities, edge, and everything for REL. How do you come to REL? Whether you are on Oracle Linux or on a sentos Linux, there is a convert to REL tool that is available on just the developer website. This tool will readily identify what is your operating system, whether it's a sentos Linux, what version it is, then it will also identify what are the packages and rows in the sentos. And equivalently, it will map those packages in the new REL version that going to migrate. Everything is automated. You just have to run the tool, that will identify everything, does all the migration, and just one reboot of your system and your latest REL version is up and running. So those who are not on the REL and still want to consume it, there is a D4S subscription as we mentioned. It's one year free. You can utilize this convert to REL utility to get on to REL and start using all these feature functionalities. That will give peace of mind to you, your projects and you can focus more on the actual coding. Well, that's about it. Thank you very much. Any questions? Any questions, please. The first question that I wanted to ask earlier was when you say user space and kernel space combined together, if I be the size of the whole pod or the container would be relatively bigger, so what do you have to say about that? So basically as you mentioned, the minimal image is hardly 15 MB. 
So if you create a container using UBI, let's say you want a standard image, that is a little bigger, around 400 MB, but it is still less than downloading the entire OS. It's still compact; the size is minimal, and it doesn't have a lot of strings attached. And as you move to a more focused image, the size comes down a lot. It's very easy to move around as well. Everything is there; you can just create it and deploy it. Hardly 10 to 15 MB, in some cases less than 10 MB, for the size of the container. Any other questions? I have one. As we are talking about containers, for the end result we will be monitoring and observing the application performance. So what would be the best way to ease the entire container monitoring? Container monitoring? Observability? Okay. So there is a feature in the web console. Along with the system parameters that are there, you can also monitor the containers that are running. There is a web console feature in RHEL: once you install RHEL, it gives you an option to enable the web console, and using that you can monitor all the containers and images running there. Can I pull or create dashboards using that data? A separate dashboard? Yeah. No, you cannot get a separate dashboard there. But there is a tool called Red Hat Insights that you can integrate with your RHEL. Once you install RHEL, you can integrate it with Red Hat Insights, which is also part of the subscription; it's freely available. And there you can create your dashboards: which nodes to monitor, which containers to monitor, all of that can be monitored there. But on RHEL specifically, it's not there. Can I onboard third parties? Yes, yes, like AppDynamics or Dynatrace and things like that. If you look at it from a larger footprint, if you look at OpenShift, you have support for Prometheus and Grafana, and you've got LokiStack there.
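The points above about image sizes and container monitoring can be tried out directly with Podman. The commands below are illustrative and are wrapped in a function so the snippet stays inert on a machine without Podman; the registry paths are the public UBI 9 locations, and the web console piece assumes the cockpit packages are available on the host.

```shell
# Illustrative Podman commands for the size comparison and the
# monitoring discussion; call inspect_ubi_sizes yourself to run them.
inspect_ubi_sizes() {
    # Pull the minimal and standard UBI 9 bases to compare sizes
    # (roughly tens of MB vs a few hundred MB, as discussed).
    podman pull registry.access.redhat.com/ubi9/ubi-minimal
    podman pull registry.access.redhat.com/ubi9/ubi
    podman images --format "{{.Repository}} {{.Size}}"

    # One-shot CPU/memory snapshot of running containers; the RHEL
    # web console surfaces the same data graphically once enabled
    # with: systemctl enable --now cockpit.socket
    podman stats --no-stream
}
```

The `--no-stream` flag prints a single snapshot instead of continuously refreshing, which is handy in scripts.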
So you can do all the log aggregation, the logging and monitoring, including the distributed tracing aspects. Because when you start building true microservices, all of them are asynchronous, right? They need some way to talk to each of the services, and that's where the service mesh concept comes into the picture, like Istio and all of that. So it depends on what form factor you are building. If it's something edge-specific, then it's about RHEL and MicroShift. And if it's for a larger footprint, where TPS and the performance aspects are important, then you look at OpenShift as an enterprise Kubernetes, and that's where all of this is available. You can build your dashboard; we've now made the dashboard part of the OpenShift console. The best part is you can also integrate it with Dynatrace, AppDynamics, New Relic, and so on and so forth, which are all certified to work on OpenShift. Does that answer your question? Thank you. Thank you very much. And also, on seamless upgrade and rollback: normally it is possible with a stateless application, but with stateful applications it is quite challenging, right? In theory it is okay, but in real time we face many issues. On that front, what is the benefit of what you presented? Yeah, good question. So when you're stateful, you have to manage a lot of state aspects, the data, the application data, and so on and so forth, right? So it becomes important for you to have those strategies in place as well, either on the RHEL footprint or for the larger workloads from an OpenShift perspective: you back up the etcd data, you handle all the backup, storage, and resiliency aspects of it. That's one side of the world. Now, if you look at it from the Podman capabilities, from the RHEL for Edge capabilities, what we've done with the auto-rollback option I mentioned is simply make systemd a little intelligent.
The moment there's a new release, a new code release or a new version upgrade, and the service does not come up, systemd can simply roll back to the previous version that was working absolutely fine. So that's the capability we're giving for the edge footprint. Whether you run it as a container or as a stateful application is up to you. But more or less, edge workloads are not something where you would end up with a high frequency of changes, so it serves the purpose you would want from an edge footprint. How about patching? How do you patch your edge systems? So patching, again, is from an OS upgrade perspective, right? All you have to do is use the RHEL OSTree capabilities: take the delta and only pass that on. And if you want to automate that, let's say at scale for hundreds of devices, you can look at Ansible. So both in conjunction, depending on your use case, your type of application, and the complexity of the application, there are various architectures you can plug in. Thank you very much for your patience and the good questions. So, if there are any queries
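The "make systemd a little intelligent" rollback described above is, on RHEL for Edge, typically delivered by greenboot health checks: if a required check fails after an OSTree upgrade, the boot is marked unhealthy and the system rolls back to the previous deployment. The script below is a hypothetical check, not from the talk; the service name and the `/etc/greenboot/check/required.d/` placement are illustrative, and it guards itself so it is harmless on systems without systemd.

```shell
# Hypothetical greenboot "required" health check; on a RHEL for Edge
# system it would live in /etc/greenboot/check/required.d/ and a
# non-zero exit after an upgrade triggers rollback to the previous
# OSTree deployment.
SERVICE="httpd"

check_service() {
    # Report whether the watched service came up after boot.
    systemctl is-active --quiet "$1"
}

if command -v systemctl >/dev/null 2>&1; then
    if check_service "$SERVICE"; then
        echo "$SERVICE is running; boot is healthy"
    else
        echo "$SERVICE is not running; greenboot would roll back" >&2
    fi
else
    # Not a systemd host (e.g. a container); skip gracefully.
    echo "systemctl not available; skipping check"
fi
```

In a real deployment the script would exit non-zero on failure so greenboot sees the unhealthy state; here the failure path only prints, to keep the sketch side-effect free.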