So, whatever the reason, cognitive overload, skill bottlenecks, multicloud complexity, lack of good tools, that pressure is increasing. But at the same time, the expectation to deliver meaningful business outcomes faster, better, and cheaper is not going away; in fact, that's increasing too. In this session we'll take a bit of a step back and look at the open source technology landscape and how it fits into modern cloud-native app development. We'll look at how Red Hat brings those technologies to you at enterprise grade, and then we'll look at the Developer Sandbox in depth: how you can have this entire modern cloud-native app-dev landscape at your fingertips. One more thing: along with the technology landscape, we'll also place the same set of technologies into the inner loop and outer loop workflow that Amita and Ramke were talking about. Okay, let's continue. The way we'll slice this problem is: first, look at the development challenges around cloud-native apps; then bucket those challenges and the technology areas that solve them; and then look at some core open source technologies in each area. Before I move on, in case some folks are not familiar with what we mean by cloud-native apps: we're talking about apps architected with the microservices architecture. You break a monolithic app into separate microservices, and each microservice is independently scalable in response to variable load. They also have built-in fault tolerance to infrastructure failures, so an infrastructure failure doesn't mean somebody sees an error page or a crash.
Another important aspect of cloud-native apps is the whole CI/CD developer workflow. Because the app is broken into parts, you can update those parts independently and quickly, and that's why you see so much rapid innovation brought about by these apps. CI/CD and that agile way of updating apps is a major characteristic of cloud-native apps. Okay, let's continue. First things first: you need to write the business logic. You need to take care of basic application concerns: performance, security, multi-threading. When you migrate from legacy to modern apps, you need to refactor, rehost, and so on. There is a host of language runtimes and frameworks available for you in the open source world; I'll just mention a few. Maybe you're using Quarkus and Node.js for microservice development, maybe JavaScript for the front end. You might be using Rust for low-resource IoT environments. Whatever type of application you want to build, there's an open source language runtime and framework available for it. Okay, you've built the business logic, but now you need to expose it and make the app ready for consumption. You'll need things like authentication and single sign-on, plus messaging between the different parts of the app and between apps. You might be writing an API; you need to put a gateway in front of that API to secure it and to implement performance caching, things like that. Let's call this bucket integration technologies, and there are a lot of them available in the open source world: Keycloak for authentication and single sign-on, 3scale for API management and API gateways, Debezium for change data capture, AMQ for messaging. Lots of technologies out there.
Okay, next: this aspect of your application development landscape has probably always been important, but it's going to be even more important going forward. Sometimes the technology hype is real; the innovations in large language models and the new AI world really do mean something. You're going to need data engineering capabilities and data prep, but most importantly you're going to need a full machine learning operations platform that lets you do model training, model development, and model tuning. Increasingly, you might take an LLM from one of the large vendors and tune it on your data. You might take a model from somewhere like Hugging Face (there are some 300,000 open source models available there) and tune it on your enterprise data. Then you need to make that model available as an endpoint that applications can consume, so the application itself becomes intelligent. And you've got to monitor the model in production: is it really doing its job? For this bucket there are quite a few open source options: PyTorch and TensorFlow for model development, and Open Data Hub, an MLOps platform for engineers and data scientists. Like we mentioned, in order to exercise all these technologies you need productivity tools. We talked about a few in the keynote, and there are more. You need your IDEs, of course. You need containerization tools; Podman Desktop was mentioned. You need onboarding tools; Developer Hub was mentioned. You need build pipelines and deployment tools. There's plenty of open source innovation here too: Tekton for build pipelines, Podman Desktop, VS Code, Eclipse; you name it, there are a lot of IDE options available for you. Cool. At this point, at least we've sliced the development side of the house. But what about deployment?
And this is where you need a hybrid-cloud-consistent deployment platform. You might end up in the hybrid cloud for a variety of reasons: security, compliance, geopolitical issues, whatever it may be. You're probably going to end up with an on-premise footprint and a multi-cloud footprint. Hybrid cloud is a reality. I'll share a simple story from my past, prior to Red Hat. We had a really enterprise-grade location technology and mapping platform, pretty unique, with a lot of great features, and we cleared the RFP. The platform was built on one of the popular cloud providers. At the last moment the customer came in and said: we like the product and would love to use it, but it runs on a cloud provider who is our competitor, and we're not going to pay money to enrich our competitor. It took us 18 months to refactor that whole application: remove some of the dependencies and tight integrations we had built, retrain our engineers, and create a version that could be deployed on-premise and on other cloud providers. My point is, there could be any number of reasons why you might end up having to have a multi-cloud footprint. That's where technologies like Kubernetes come in, now almost a de facto standard for container orchestration, along with Helm for application packaging, Argo CD for deployment, Knative for serverless, and so on. Now, the bottom-most row here, I can tell you, is a very small subset of the technology landscape; I really had to pick a few given the real estate on the slide. This is what it really looks like: this is the CNCF landscape of open source technologies. How many of you have seen this? This is what innovation looks like.
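To make that concrete, part of what makes Kubernetes the consistent layer is that the same declarative manifest applies unchanged on any conformant cluster, on-premise or in any cloud. A minimal sketch of a Deployment follows; the application name and image are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: storefront                 # hypothetical microservice name
spec:
  replicas: 3                      # each microservice scales independently
  selector:
    matchLabels:
      app: storefront
  template:
    metadata:
      labels:
        app: storefront
    spec:
      containers:
        - name: storefront
          image: registry.example.com/storefront:1.0   # hypothetical image
          ports:
            - containerPort: 8080
```

Applying this manifest with `kubectl apply -f` gives the same three fault-tolerant replicas whether the cluster runs on AWS, Azure, GCP, or your own data center, which is exactly the portability the 18-month refactoring story was missing.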
Open source is always at the cutting edge of innovation, because somebody like you sees a problem, goes out there, creates a project, and others come in and build a community, because these are real problems people are solving in the real world. That's why open source is so powerful. Yet at the same time, this becomes a challenge for an enterprise. Which technology should you pick, and at what point? How do you align them with the skills inside your organization, and with your future needs? How do you make sure you don't take a dependency on a certain technology, then lose the employees who know it and end up with a critical gap? And one more thing I'll say: this is open source, meaning you're going to have to build these products yourself. You're going to have to maintain them; if there are security vulnerabilities, you're going to have to update and patch them. And on the previous slide you saw a whole bunch of technologies, but all of them need to work together. If you have what I'll call open source sprawl, uncontrolled landscape growth, you're going to end up with very brittle infrastructure and a brittle application platform that causes exactly the cognitive overload we've been talking about. Can it be done? Can you take these technologies, build the stack, do it yourself? Absolutely. But my question would be: why? That would be like saying I'm going to do the plumbing of my house, the doors and the windows, the painting. Probably not a good use of time. A better use is to focus on coding the business logic, the application you're building for your company, rather than dealing with these underlying problems. And that's where Red Hat has a unique approach. We work with a lot of these upstream communities; we are ourselves a big contributor to many of these projects. And we look at what our customers are telling us.
They say, hey, we need an internal developer portal. Then we look out there and say, okay, Backstage is a very good option for that. We take that, work with our customers, build functionality and enhancements, and all of that goes back into the open source. The other thing is that we take all these technologies and harden them, bringing them to enterprise-grade security. And obviously we support them (we'll talk a little more about this in the deep dive) over long time frames, in multiple combinations from the previous slide, so you're not constantly running from this version to that version, upgrading and worrying about forward and backward compatibility. So let's now look at the same landscape in terms of how Red Hat brings these technologies to enterprises. We have Red Hat runtimes and frameworks: an extensive portfolio of Java frameworks, plus support for several popular languages on our platforms. We have migration toolkits: as you refactor or rehost your legacy applications, you can use them to understand the delta and the best approach to migrate from traditional to modern. We have an extensive integration portfolio: API management, API gateways, messaging, change data capture, authentication, single sign-on. All the common problems an enterprise application would face are covered by our technologies. On the AI front, based on the Open Data Hub upstream project, we have a product called OpenShift AI. It's one consistent platform for engineers, developers, and data scientists: model tuning, model development, model deployment, and model monitoring are all covered. Most importantly, this machine learning platform runs on OpenShift, which is where the applications are running, so the applications can conveniently consume these model endpoints.
I won't go too much into these details; I think we covered part of this portfolio already in the keynote. I just want to mention the DevSecOps trusted supply chain tools. This is a new set of products that is going to make DevSecOps genuinely doable. For the longest time it was great to talk about but hard to do, and you'll learn more about this in subsequent deep dives. Okay, but what about that consistent platform across different cloud footprints? That's where we have OpenShift. Just as RHEL is Linux for enterprises, OpenShift is Kubernetes for enterprises. You get the power and the innovation of open source Kubernetes, but at enterprise grade, with certain useful enhancements specifically meant for enterprises. It's built on RHEL and can be managed with Ansible. When OpenShift comes to an enterprise, it comes with an underlying version of RHEL, so there are no inconsistencies between the Kubernetes platform and the underlying operating system. OpenShift is also available in both self-managed and managed versions. Managed, if you don't want to keep a staff of people who are experts in running Kubernetes: Red Hat manages it for you. And the other important thing is that OpenShift is available on different cloud providers (AWS, GCP, Azure), so you get a consistent underlying application platform across any cloud provider you may need today or tomorrow. Because we offer this consistency across different clouds and on-premises, there is a large ecosystem of ISVs and partners who build their own software on this platform. Even for them, it's easier to support one platform that is consistent across different footprints than somebody's do-it-yourself implementation where you don't know where the problems are.
Is it in the software, or in the underlying infra? Okay, hopefully this landscape was useful for you. Now let's put those technologies into the inner loop and outer loop. The inner loop is coding and debugging locally, then pushing and merging the code. After that, the outer loop takes over: it's all about deploying. And life doesn't end after you deploy; life actually begins when the code is in production. Okay. But before the inner loop and outer loop even begin, there is the problem of onboarding, experimentation, and learning. That open source stack is going to keep evolving; it's not going to be static. In my opinion, no skills program can keep up with training you on all the open source technologies. You're going to have to do it yourself, at your own time, at your own pace, in response to something you need to do in your day-to-day job. And that is where the Developer Sandbox comes into the picture. I won't talk about it more; we'll demonstrate it in a little bit. Let me start placing the technologies from the previous slide onto the inner loop and outer loop. You use runtimes, integrations, and IDE plugins to actually get your code in place. Then, in the outer loop, you have OpenShift Pipelines, which is built on Tekton; Ramke was just showing how the container ends up in the registry. Advanced Cluster Management and Advanced Cluster Security help you put security guardrails on your entire cluster, rather than worrying about individual applications breaking some policy: you set those policies and security requirements at the cluster level. GitOps helps you deploy across different environments. And we have a whole set of observability operators.
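As a rough illustration of what an OpenShift Pipelines definition looks like, here is a minimal two-task Tekton pipeline that clones a repository and builds an image. All names here are hypothetical, and it assumes `git-clone` and `buildah` tasks are already installed in the cluster, as they commonly are with OpenShift Pipelines:

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: build-and-push            # hypothetical pipeline name
spec:
  params:
    - name: git-url
      type: string                # repository to build, supplied per run
  workspaces:
    - name: shared-workspace      # carries the sources between tasks
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone           # assumed to be installed in the cluster
      workspaces:
        - name: output
          workspace: shared-workspace
      params:
        - name: url
          value: $(params.git-url)
    - name: build-image
      runAfter: [fetch-source]    # ordering: build only after the clone
      taskRef:
        name: buildah             # assumed to be installed in the cluster
      workspaces:
        - name: source
          workspace: shared-workspace
      params:
        - name: IMAGE
          value: image-registry.openshift-image-registry.svc:5000/myproject/myapp  # hypothetical target
```

Each PipelineRun then supplies the `git-url` parameter and a volume for the workspace, which is how the same pipeline definition serves many microservices.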
So, in case you don't know, OpenShift works with the concept of operators: you can bring in external services and manage them as if they were part of OpenShift. We have a lot of observability operators that help you monitor your application in production: look at logs, performance monitoring, and so on. Okay. We've been talking about DevSecOps, and this is where we are bringing the trusted software supply chain suite of tools, based on Sigstore, Rekor, and Cosign. Just as you don't want a car with parts from who knows where, where the engine may not even be from the manufacturer, you don't want to build software where you don't know where everything is coming from. These tools let you, first, establish where your software is coming from: this version, with these security patches, and so on. Then they make sure that during the manufacture of the software you did not deviate from what you brought in. That's where you have tools for signing the code, images, and binaries that end up constituting your application. Now, what are the barriers to experimenting? First is the complexity of the stack; we should acknowledge that. On top of that, setting it up and configuring it is going to be a challenge, let's face it. You don't want to have to ask for licenses just to keep yourself up to date or to evaluate a potential technology. And you need to stay up to date: you might acquire a skill and then need to brush it up, things like that. So, alongside continuous integration and continuous deployment, you need continuous learning. We have one solution for you: the Red Hat Developer Subscription. It gives you trial access to the Red Hat portfolio, with self-guided learning paths and labs.
There are highly valuable eBooks and cheat sheets written by Red Hat subject matter experts, and articles and blogs written by the people who are developing those products, in their own voice, telling you what they want you to take away. And all of this is at no cost: free. I won't talk any more about the Developer Sandbox; I'll just hand it over to Yashwant. A picture is worth a thousand words. The OpenShift Developer Sandbox is a free-to-use version of Red Hat OpenShift. It helps developers easily deploy any kind of application and also extend it with additional capabilities: if you need authentication, a database, or API management, you can do all of this within the Developer Sandbox. Like Ashutosh mentioned, it really is a very complex world out there for developers. Before we jump into the sandbox and get lost in its features, I'd like to ask you this: how many of you are engineers who just want to open your laptop, write code, commit, push, and be done? And how many of you are more interested in the platform side of things, building toolchains, doing DevOps? Okay, great. Today we have something for both groups. I'm definitely from the first category: I like building the features, shutting the laptop, and getting out. The Developer Sandbox comes with a bunch of features. Obviously it's a free version, so not everything from the world of Kubernetes and cloud is available, but there is definitely enough for most teams to run experiments, get started, and then iterate rapidly by partnering with your teams. And if you think your application is production-ready, there are options to export the application and deploy it onto your production clusters.
Some of the features the Developer Sandbox has: it supports Camel K, and it supports OpenShift Pipelines, which lets you create a basic CI/CD setup and also go all the way to configuring very complex workflows, like adding code quality checks and making sure your integration tests pass. We also have support for OpenShift Serverless, which lets you build functions as a service, and for AMQ and Kafka, to build event-driven applications and systems. You also have access to Red Hat OpenShift Data Science, which lets you add AI capabilities to your application. So let's take a closer look at what the sandbox is. To begin, you go to developers.redhat.com, which will be the starting point for pretty much everything we're talking about here today. Once you click on Start Your Sandbox for Free, you are redirected to a page within console.redhat.com, which gives you access to three major products. One is Red Hat OpenShift, which is the Red Hat Developer Sandbox. The second is Red Hat Dev Spaces, a cloud-based development environment for your teams to get started without having to configure anything on a local machine: basically an IDE-plus-terminal combination that lets you build applications directly in the cloud. And then there is Red Hat Data Science, which lets you add AI capabilities to your application. Now, let's explore the Developer Sandbox part. The moment you log in, you're greeted with a UI that looks something like this. Like I said, the free-to-use version of Red Hat OpenShift, the Developer Sandbox, lets you do two things predominantly.
One is to build and deploy your applications; the second is to extend applications that are already running in your cloud environment. For deploying applications, a wide variety of options is available. For example, you have an option to import from Git, which lets you directly import your source code; OpenShift automatically takes care of identifying the builder image needed and deploying the resulting image onto the cluster. There is also an option to deploy using container images: if you or your team already has a fully packaged application image, you can deploy that directly onto the sandbox. You also have an option to import from YAML: if your team has already built a YAML file specifying your application and its deployment configuration, you can use that. There's also an option to deploy a JAR file directly if you have a Java-based application. And then there are options to deploy serverless applications, and options for event-driven applications, powered by Red Hat AMQ and Kafka-based tech. So, for the sake of this demo, I have a very simple JavaScript application. Let me show you a quick overview of what it looks like. There's a server.js, which has one endpoint and responds to you with one message; that's it. To keep the demo simple, I've kept it as vanilla as possible. Now, let's try deploying this application. All I have to do is grab the URL of this repository. Oh, another thing I want you to notice: if you look at the repository, there's no Containerfile or Dockerfile, or any sort of YAML.
It is basically an application the developer has built very quickly to run on their machine, and now the developer is thinking: okay, let me try making this application available to my team or a smaller group. So let's jump back into the sandbox and use the option to import from Git. I'm just pasting the URL, and as soon as I pasted it, OpenShift automatically detected that it's a Node.js-based application and that it needs to use the Node.js builder image. Look at this from a developer perspective: I, as a developer, have only written the application logic; I did not have to worry about any of the config or anything else needed to make the application available to a wider audience. There are multiple resource options; for the sake of this demo, I'm choosing the Deployment option. All I have to do is go ahead and create the application and give it a few seconds, and it should be ready. The moment you click Create, you're redirected to the topology view, which gives you a visual sense of what is happening within your cluster. Typically, in a Kubernetes environment, you have a lot of applications running, because you're not the only person using the space: multiple development teams are building multiple microservices, everything happening asynchronously. I think the application is building now. If you're interested, we can jump into the logs and try to understand what's happening. It has already pulled the source, and I think it is now building it; let's give it a few seconds to complete. Right now, we're working with the UI of the OpenShift Developer Sandbox.
Now, if you're somebody who prefers a CLI-based approach, the hardcore platform engineer, you can always use the web-based terminal, or just copy the login command from here, paste it into the terminal on your machine, and you have the OpenShift Sandbox accessible from your terminal. Let's close this, jump back into the topology view, and see if the application is ready. You see this icon here? It gives you access to the application endpoint. There you go: my application is up and running now. As you can see, within a few minutes I could deploy my application onto the OpenShift Sandbox without having to create or provision any infrastructure, and I did not have to configure anything. All I needed was a Git URL. That's it. Now, deploying an application is only the beginning of the challenges for engineers. The moment you deploy, you need a CI/CD workflow, you need automated tests, and you need to make sure your application is running correctly and not overusing resources, et cetera. With the limited amount of time, I can only showcase some of the features here, so let me show you how to add a CI/CD pipeline. To do that, let's jump into the Builds section and select our application. If you scroll down, there is a section that talks about webhooks. Using a webhook, we will now connect our Git repository with the application running on OpenShift. So let's jump into the repository settings, go to Webhooks, add a webhook, copy it in, and that's it. The application running on the Developer Sandbox is now connected to the Git repository. What that means is that if I make a change now, we should see the application automatically redeployed on the sandbox. So let's change this message, commit our changes, and push them.
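Under the hood, the webhook being configured corresponds to a GitHub trigger on the BuildConfig that OpenShift generated from the Git import. A hedged sketch of what that object can look like follows; the names, repository, and secret are hypothetical:

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-node-app                     # hypothetical application name
spec:
  source:
    git:
      uri: https://github.com/example/my-node-app   # hypothetical repo
  strategy:
    sourceStrategy:                     # source-to-image build, no Dockerfile needed
      from:
        kind: ImageStreamTag
        name: nodejs:latest             # the builder image OpenShift detected
  triggers:
    - type: GitHub                      # fires a new build on each push event
      github:
        secretReference:
          name: github-webhook-secret   # hypothetical secret validating the payload
```

The URL pasted into GitHub's webhook settings is the one OpenShift derives from this trigger, which is why a plain `git push` is enough to kick off the rebuild and redeploy shown next.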
And as soon as we push our changes, we should see that the Developer Sandbox has received them: there is a change to the code, and it is deploying. It is, again, going to take about a minute to complete. While that is happening, let me showcase the different options available within the Developer Sandbox. If you look here, there is a section called the developer catalog. OpenShift comes with a bunch of services out of the box, and you can always extend it using operators and Helm charts; that's in a managed OpenShift instance, but for the Developer Sandbox, let's jump into what's available and try to understand what the catalog has. We have CI/CD tools, pretty much around Jenkins. We have a wide range of databases, both SQL and NoSQL. We have access to a bunch of languages and frameworks, with support for pretty much every programming language there is, and some of the middleware, if you come from a Java and EAP background. And we do have a lot of other ISV software, like Ashutosh mentioned: software made available for OpenShift by many software providers. So let's jump back into the topology view and check whether our application is deployed. It looks like it is. Now, if I refresh and everything went correctly, we should see Hello, Bangalore. There you go. So I could get started, add a CI/CD pipeline, and do all of this at no cost, with no setup time or anything else needed. That's the developer side of things. Now, let's talk ops: the ops folks and admins who are managing the cluster. As somebody who is managing the cluster, you would like to add some resource limits, meaning you don't want a demo or an experiment to consume all the resources on the cluster.
So you can always set some level of control here. You also have Observe, which helps you understand the activity going on within the cluster: which services are consuming which resources, et cetera. And if you need to add AI to your applications, you can always go to the menu here, jump into OpenShift Data Science, and make your model available over an endpoint; once it's available, it should reflect in the topology view, and you can consume it in your application. That's pretty much a summary of how you can get started with the sandbox. (Answering an audience question:) Data Science is the old name of OpenShift AI, so the two are the same. Yes, please scan this QR code, and you are good to go. We didn't show the sign-up process from developers.redhat.com: we don't ask you for your credit card and then send you a big bill because you were adventurous. Red Hat doesn't do that. It is free, meaning it is really free, with no obligations. So please, yeah, that's all we have. Thank you. There are also guides that let you try different features within OpenShift matched to your context, based on some of the things you would actually be going through: exporting your application, building a Quarkus application, et cetera. All of those guides are available on the developer sandbox activities page, and you can get access to all of this by scanning this QR code. Thank you.