Now, my name is Cedric Clyburn. I'm a developer advocate at Red Hat. I love helping developers and our customers work through different issues, whether it's RHEL, containers, or Kubernetes. I'm joined here by my colleague Max. Thanks. Yeah, I'm Max Kossitz. I'm an account solution architect, which basically means I'm the technical contact, the technical trusted advisor, for some of our largest customers at Red Hat. And I spend my time looking at their challenges and their goals, especially in the developer field, to really drive reducing cognitive load on their developers. I myself was a developer before, so in this talk I'm just going to refer to myself as a developer, just like you guys. And I want to start off by stating that I believe developers are creators and innovators. And they should spend most of their time creating and innovating, or in other words, coding. Hands up, who agrees with me here? Yeah, great. OK, so how come, then, that developers code less than one hour per day on average? This is not a new discovery, yet I still find it surprising every time I hear a new report stating more or less the same finding. And not all of these non-coding tasks are unproductive; especially with the growing recognition of DevOps as a powerful practice, it should be clear that coding isn't the only responsibility of a developer. However, we shouldn't forget that all additional responsibilities are only meant to improve software development. We're not giving developers any additional tasks just for fun. And this is why it's crucial, while we introduce new processes, new practices, new responsibilities, and new technologies, that we focus on maximizing the time spent on coding by reducing the cognitive load caused by meetings, reporting, working with complex technology stacks, researching and solutioning, troubleshooting incidents, as well as context switching between these different tasks.
And well, we can't really help you with the meetings and the reporting side of it; you're going to have to tackle that with your managers and team leads. But we do want to discuss today how we, as Red Hat, with Red Hat Enterprise Linux, can help you reduce cognitive load for the other points mentioned. And we'll do so today by focusing on runtimes and frameworks, containerizing applications, and Linux with open source at the foundation. Exactly. And let's be honest, as developers we're juggling a lot, from different versions of runtimes and frameworks to different tooling. It can be very overwhelming. But RHEL is here to simplify that. When we use a solid platform for all of our development, across different environments and across various frameworks and runtimes, we can take advantage, from the beginning, of all the benefits that Linux provides as an operating system, giving us more time to code and do what we like to do. Show of hands, does anyone know this GIF, where this comes from? By chance? All right, perfect, perfect. Now, this is Steve Ballmer in the early 2000s. I don't know what I was doing back then, but he's talking about developers, developers, developers. And we think he might have been onto something. Now, for Microsoft developers, with .NET being open source, it's now used for a variety of applications, from traditional monoliths to microservices with ASP.NET. And RHEL natively supports .NET 6 and 7, allowing you to develop applications, to build them, and to run them on RHEL natively. So it essentially allows you to go from a Windows environment, say you're running Windows Server, and lift and shift that application directly onto Linux and take advantage of all the different capabilities and strengths that Linux provides, from security to performance and beyond.
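What that lift and shift looks like end to end can be sketched as a single command sequence. The repository URL here is a placeholder, and the package name assumes the .NET 7 SDK on a current RHEL release — a sketch, not the exact demo:

```shell
# Hypothetical lift-and-shift sketch: the same project runs on Windows
# and on RHEL; only the SDK install step differs. Repo URL is a placeholder.
git clone https://github.com/example/quote-of-the-day.git
cd quote-of-the-day

# On RHEL, the .NET 7 SDK installs straight from the default repositories:
sudo dnf install -y dotnet-sdk-7.0

# Build and run exactly as you would on Windows:
dotnet run

# From a second terminal, verify the microservice responds:
curl http://localhost:5000/quotes/random
```

The point of the sketch: the project files never change; only the one-line SDK install differs between the two operating systems.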
So RHEL allows for these applications, such as .NET — and we'll talk about Quarkus and other runtimes later — to be containerized, with all the benefits that running with, for example, Podman on RHEL provides, such as rootless containers. This cross-platform flexibility is huge in terms of context switching, going from a Windows machine over to a Linux machine and the cognitive load that comes with that. So what if we had an example we could show, taking a Windows application that we have here, running on Windows Server, and actually lifting and shifting it over to RHEL? As you can see here, I've got a Windows Server 2022 VM running right now in the cloud on AWS. Probably should hide that information, but it's okay. And we're going to take an application that we already have — essentially just a microservice that serves different quotes — run it here, and then lift and shift this application directly onto RHEL. So I'll go ahead and go into my first PowerShell console. I'm not really a .NET developer, so I'll do my best here. I'll head out of this directory. So I've got an empty directory here, and I can go ahead and paste in a GitHub repo for this example .NET application I have. We'll cd into that. And as you can see, if I do a quick ls here, I've got all the different files and resources that we normally have in a .NET project. If I do a cat on the .csproj file, we can see we're running with .NET 7 here on our Windows Server. So traditionally, if I were a Windows developer, all I would do is use the .NET CLI and do a dotnet run in order to build this application and run it on Windows. Soon enough we'll have an exposed port, and we can curl it to make sure the application is running in this other terminal that we have here. So I'll just do a curl. I might already have this here.
Curl localhost on port 5000, and we can see that the microservice is running. We could curl different endpoints for this application; we've got different quotes that we can receive — so, a random quote. The application is running fine on Windows, but now what if we could just take that directly over to this beautiful RHEL machine that I have here and simply run it using this native capability that we have on Linux? So I'll go ahead and copy in this same directory. I'll do a clear and clone this in. I'll cd in there, and from here it's the same exact project, same exact resources and files. And all I have to do is install the .NET SDK for .NET 7 to be able to run this natively on RHEL. So I'll go ahead and do a dnf install and type in the .NET SDK for 7. Now, I should already have it here on my Linux machine, but it's going to double-check for me, make sure that we have it. And with that, the only thing we have to do is a dotnet run in order to run the application exactly how we would on a Windows machine, over here on a Linux machine — the same way, same capabilities. But now we're getting all the enhancements and security benefits that we have here on Linux, with the same application that we've been working with. So if I go down to my other pane, I can do a curl to localhost and call in the port. The same thing's working. If I call the quotes endpoint, I get a list of quotes. So the same application is running, lifted and shifted. But it's not just .NET. It's also for, say, Java developers. Do we have any Java developers in the house? Nice. Who here has heard of Quarkus by chance? Who here is using Quarkus? Nice, you guys are awesome. Well, Quarkus, if you haven't used it before or heard of it, is a Kubernetes-native Java stack that allows for native compilation using GraalVM. So it's essentially supercharged Java for the container era.
Now, it also supports a lot of familiar APIs that we're used to working with, from Hibernate to Kafka to RESTEasy, whatever it might be. Additionally, you also get to use live coding. So say you're working on an endpoint or some logic and you're changing it — you're seeing the results in real time. Quarkus is also container-first, so you've got fast startup times with or without the JVM, if you want to build a container like that. And you can switch between traditional imperative programming and reactive programming, lessening the cognitive load of having to switch between those. Now, it's not just this; it's the entire toolkit. It's the runtimes, the development tools, the technologies, and it's the support that you're getting — being able to develop a Python 3.9 application today and, in 10 years, still have support for that Python 3.9 application that far ahead. It's the stability of knowing that what you write today will still run 10 years from today. And in addition, we also have Application Streams, allowing us to lessen the cognitive load. If you haven't used it before, it essentially allows us to modularly install and remove different versions of tools, libraries, and frameworks, so that if, say, I'm working with a Python 3.6 application and I need to upgrade, I don't want to have to play the game of: is this upgrade going to break my whole system, or is everything going to be all right? It allows us, at a low level, to work with these packages, install them modularly, and essentially just make our lives a lot easier. And from there, let's dive into containerization. While it's revolutionized application development, containerization also brings its own complexities and concerns, and with that, cognitive load. In this section, we'll talk about how RHEL simplifies containerization as the next step, reducing the cognitive load for developers.
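Before moving on to containers: the Application Streams workflow just described looks roughly like this with dnf. The module and stream names here are illustrative, assuming a RHEL 8 host:

```shell
# Illustrative Application Streams workflow on a RHEL 8 host (names assumed):
dnf module list python38 python39   # show available streams and their state

# Install the 3.9 stream without touching the system default Python:
sudo dnf module install -y python39

python3.9 --version                 # the new stream runs side by side
```

Because each stream is installed and removed as a unit, switching a runtime version stays scoped to that application rather than rippling through the whole system.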
Now, when I started learning about and using containers — that was a while before I joined Red Hat — I believed that containers can run anywhere without a hitch. Hands up, who thinks containers can run anywhere? Let's assume on Linux, without a hitch. Not so sure? Well, the aha moment I had when I started working at Red Hat was when I learned that a lot of the time they do work anywhere — until they don't. And that's the problem, because don't forget, containers are just processes, right? Processes that run on the Linux kernel, and a container engine just facilitates the execution of these processes. Now, what I like is the coffee machine metaphor that I always use when I talk with customers about this. Imagine you're a coffee lover, right? You have a little coffee machine that you want to use at home. You want to take it to the office. Then you want to take it on vacation, because you always want your specially brewed coffee. Now, this coffee machine is portable, okay? And it would usually work. But now — I've lived in the UK before and then went to Germany — say you move from the UK to Germany, or go on vacation, and take your coffee machine with you and want to plug it in. It won't work, because the plug is not compatible with the socket. And this is how I like to describe that there's a key difference between portability and compatibility. The kernel space in a Linux system directly interacts with the hardware, controlling processes, managing memory, devices, and file systems, right? This kernel space provides an interface, sometimes referred to as the syscall layer, for applications to interact with it. Now, think of this as the electric socket in the wall, right? On the other hand, you have the user space, where your applications and libraries run. It doesn't have direct access to the kernel space.
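As a quick aside you can try on any Linux machine: the kernel release that user space sees through the uname(2) syscall is the single host kernel — the same one every container on that host will share:

```shell
# uname(2) and /proc both report the one kernel every process shares,
# containerized or not -- a container brings no kernel of its own.
uname -r
cat /proc/sys/kernel/osrelease   # same value, read via the proc filesystem
# Inside a container the answer is identical, e.g. (assumes podman):
#   podman run --rm alpine uname -r
```

Both commands print the same release string, which is exactly why a container's user space is only as portable as its compatibility with the kernel underneath.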
Instead, it uses this syscall interface to interact and call the system calls that it requires to execute what it should. Now, think of this as the electric plug that needs to go into the socket, right? What does that all mean? Just like the coffee machine has a plug that matches the socket of the country it was built for, the user space is coupled to the kernel space that it is shipped with. Now, a container encapsulates an application and its dependencies by shipping a user space. In other words, when this container ships its own user space, that user space relies on the host OS kernel to execute, right? Containerized applications, in their user space, are therefore coupled to this kernel space. And not all Linux distributions have the same kernel version or configuration. As a result, a container's user space may be incompatible with its host's kernel space. Now, okay, let's level set. Is this really a problem? Well, like a lot of things in IT, it's a question of probability, right? You might say, I run an Alpine container on CentOS or Fedora all the time, and my friend runs it as well on Ubuntu, and it just works. Okay, I believe you, but it's still a risk. If you haven't faced the incompatibility now, you will later on, very likely, in your containerization journey. That's why I want to very briefly share one story that the Red Hat support team shared with me when I was looking into this topic for a customer. There was a customer wanting to migrate Fedora 13 applications to Fedora 19. They containerized these Fedora 13 applications because they couldn't be modernized and just needed to be lifted and shifted to Fedora 19. But it didn't work. useradd didn't work. There was a bug. So the Fedora 13 container on Fedora 13 worked, but not on Fedora 19. Now, if you run into such a problem, where do you start the troubleshooting? Is it the container image? The container engine, right?
Is it the container host? The C library, possibly? The Linux kernel? Now ask yourself, how much time and cognitive load would have gone to waste if your dev team had faced such an issue? That's why I think it's important to consider. One more thing I want to add is that, together, we've only just begun the container journey, right? Containers have existed for a while, but their wide adoption is quite recent. Over time, workloads will become more complex, OSs will evolve, and the permutations will sprawl. Meaning that over time, how do you make sure that the user space in your containers is compatible with the kernel space of the container host you're running them on? Now, we at Red Hat recognized this problem early on and came up with a solution. Quite straightforward: we provide the Red Hat Universal Base Image. The purpose here is to be the highest quality and most flexible base container image available. Now, if you remember my little customer story — the Fedora 13 container on Fedora 19 bug — that would map to a RHEL 6 container running on a RHEL 7 host, and we fixed that bug. Because we make sure that there is cross-compatibility between, in this case, for example, the RHEL 6 user space and the RHEL 7 kernel space. All right, so the solution is: ship the user space with the kernel space, such that you have a complete portable, compatible, and supported stack. Now, just very briefly, I want to mention — you can talk with us at our booth around the corner if you want to know more — that there are three different UBI types: standard; minimal, as you might know from other base images; and multi-service, for running multiple applications in one container, because systemd is enabled.
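All three variants can be pulled without a subscription from Red Hat's public registry; the UBI 9 paths below are one common set, shown here as an illustration:

```shell
# Pull the three UBI variants (no subscription required):
podman pull registry.access.redhat.com/ubi9/ubi          # standard
podman pull registry.access.redhat.com/ubi9/ubi-minimal  # minimal
podman pull registry.access.redhat.com/ubi9/ubi-init     # multi-service: systemd enabled
```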
And we also provide pre-built images, and something I do want to mention is that UBIs have a subset of RHEL packages, but you can, of course, install more and continue relying on the secure and trusted Red Hat software supply chain that we provide. And this is where we can get into containerizing the application that we were working with before — the one we were running locally, not containerized yet. And it doesn't matter if you're just starting out with containerizing or you're already deep in, working with a bunch of different microservices. With the UBI, as we were just mentioning, you can use Red Hat Enterprise Linux as the platform to develop, build, and run your containerized workloads, with all the added benefits, of course, of running containers on RHEL, such as rootless containers, all within the user space, UIDs, whatever it might be, with a bunch of different tools that we support, namely Podman, Buildah, and Skopeo, all part of the container-tools package. So RHEL essentially ensures that these containers are first-class citizens within the Linux landscape. For example, maybe you're developing and working on the .NET app that we just had, or taking that, containerizing it, and deploying it to an OpenShift instance, say. You're working within that compatibility stack where the kernel space is aligned with the user space. And I think we can go ahead and get into a demo to show that here. So before we hop into that demo, has anyone heard of Podman? Is anyone using it right now? Nice, love to see it. I hope you were able to see the Podman Desktop session that we also had earlier — phenomenal work that they're doing over there. But Podman is essentially the modular way to manage and run container images, share container images, and create pods that you can then take and deploy on Kubernetes. It's also daemonless, meaning that it's super lightweight.
You don't have to worry about a central background daemon working in the background that maybe requires root privileges. It just stands out in the way it focuses on security — you don't have to worry about a container going rogue and having access. So, in a nutshell, it takes away the headache. Let's go see it in action by going back to our .NET application that we have here and containerizing it. As you can see, we still have it running here at the top. I'll go ahead and shut this off right here. Within this directory, we've got a Dockerfile that we'll take a look at. And in order to show it, I'm going to exit out of that pane and hop back in here. Let's take a look at the Dockerfile. As we were mentioning before, there's a variety of these redistributable base images that I could use — I could pull them from Docker Hub, I could pull them from quay.io — and then I could use one as the base image to run this application. So what's going on here is: we're taking the base image that includes the .NET runtime for UBI 8, we're copying in the application, we're running that, and then we're also exposing the ports to be able to access it. So I'll go ahead and use Podman to build this application. All I have to do is a podman build, and I'll give it a tag — a name. Since quote-of-the-day is the application, that's the name. And then I'll use the local Dockerfile that's in here. So it's following all the instructions, it's building this container layer after layer, and now we have an image tagged in our local registry that we can now use — we could take it to OpenShift or somewhere else and run it there. But for our instance, we're going to just run it locally. So we'll do a podman run, we're going to run it in detached mode, and actually I'll go ahead and get another window up here so we can see that.
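Pulling the demo together, here's a sketch of a Containerfile along the lines described, plus the build-and-run commands. The base image tag, publish path, and assembly name are assumptions, not the exact demo files:

```shell
# 1) A minimal UBI-based Dockerfile for the .NET 7 app (hypothetical paths):
cat > Dockerfile <<'EOF'
FROM registry.access.redhat.com/ubi8/dotnet-70
COPY bin/Release/net7.0/publish/ /opt/app-root/app
WORKDIR /opt/app-root/app
EXPOSE 5000
ENV ASPNETCORE_URLS=http://0.0.0.0:5000
ENTRYPOINT ["dotnet", "QuoteOfTheDay.dll"]
EOF

# 2) Build the image from the local Dockerfile and tag it:
podman build -t quote-of-the-day .

# 3) Run it detached, publishing container port 5000 on the host:
podman run -d -p 5000:5000 quote-of-the-day

# 4) Verify the container and the endpoint:
podman ps
curl http://localhost:5000/quotes/random
```

Because the base image carries the UBI user space, the image built here stays within the RHEL compatibility stack the talk describes.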
I'll go back up here and do a podman run in detached mode, give it a port assignment — so remember, there was port 5000 that was open — and call in the name of the image. And just like that, the container is running on our RHEL host. We can do a podman ps to see that, and that we've got the port assignment. So now, if I just double-check that the container is running with a curl to localhost on port 5000, we can see the microservice is running. We can check out the quotes that it offers, exactly how we were doing it in our local development environment. But now we're containerizing our applications and scaling them, with the compatibility that we have using the UBI. We could also go into creating a Kube YAML, but let's head back to kind of wrap up, talking about Linux at the foundation. Thank you, that was awesome. Yes, I do want to take the last five minutes to talk a bit more about Linux and open source as such, from the perspective of reducing cognitive load. I want to address what I feel is a beautiful interplay between open source innovation and enterprise commercialization. As some of you may know, Red Hat is the largest open source company in the world. And when we talk about Red Hat Enterprise Linux as a technology, we're not talking about a product. It's an open source technology that Red Hat provides support and expertise for, as a product. We're not selling a technology license; we're selling a support and expertise subscription. And that is why, for Red Hat, the open source part is incredibly important. That's why, already in the early stage, we participate in small to large open source projects, where we facilitate as well as just contribute to them. Then, in the second stage, we integrate these upstream projects, fostering open community platforms. And as a third and last stage, we commercialize these platforms into a supported, security-tested, QA-tested product — as in, support and expertise.
Now, open source communities are very, very good at rapid innovation, collaboration, and addressing common problems with the power of the community as a whole. One of the many ways that Red Hat contributes to open source, besides generally writing code as such, which we do a lot, is that we focus on reducing cognitive load. We've shown some examples today of how certain RHEL features and ways of working reduce cognitive load, but overall, across many open source projects and platforms, Red Hat focuses on integrating projects into platforms to reduce complexity and increase compatibility, which reduces cognitive load. We focus on user interfaces and user experience to improve usability, which reduces cognitive load. Thank you. And we focus on QA and security to reduce incidents and troubleshooting needs, which reduces cognitive load. Thank you very much. And yes, so this is how, for Red Hat, reducing cognitive load is a central part of what we do when we contribute to open source and then support these technologies for our enterprise customers. Absolutely. Thank you, Max. And I hope that we have given you a great overview in the past 30 minutes — it had to be fast — of the different ways that you can start from the ground up, from the foundation: working with runtimes, lifting and shifting applications to run on RHEL, containerizing those applications, and then sharing those containers, and of the entire ecosystem that RHEL provides to make the lives of developers like us easier. So thank you very much. My name is Cedric Clyburn. And I'm Max. And thank you so much for being here. Enjoy your day at WeAreDevelopers. Thank you. Take care. And we will be at the booth around the corner, so if there are any questions — there's unfortunately not a lot of time left now — feel free to come visit us. Thanks. Happy to talk. Take care.