All right. Hello, everybody. How are you? Final day of DevConf US. It's actually my second in-person conference so far, but I think it's certainly my first of the DevConfs, so it's been lovely to see you all in person. Today we're going to show off some of the work students did this summer (let me remember to pick up the clicker), sometimes supported by Red Hat, sometimes not. This year we're pretty much talking about stuff supported by Red Hat, but we do different things in different years. We like to give students a chance to show off their work to all of you. So without further ado, I'm going to invite Rohan to the stage.

Good morning, all. My name is Rohan, and this summer I got a chance to work as a software engineer at Red Hat. What I worked on was Elastic Secure Infrastructure. Elastic Secure Infrastructure, or ESI, is a Massachusetts Open Cloud Alliance project which allows hardware owners to lease bare metal servers to interested tenants. Lessees can attach the nodes to their networks and consume them until the lease ends and the nodes return to the owner. I'll give you an example so you can picture it better. Let's say there are various departments in a university, and the CS department has a hackathon or some trivia contest happening on a particular day, so they need a lot of servers to scale up for that day. On the same day, the Health Sciences department doesn't need some of its servers. So the Health Sciences department can lend those servers to the CS department for a particular period of time, so they can carry on with their trivia contest or hackathon. That's one of the use cases for ESI. It's a pretty cool project to have worked on. Yeah, next slide.
So some of the stuff I worked on was productization of ESI. Initially, ESI had to be installed from the GitHub source, but I built packages which allow ESI to be installed from pip; we all know how Python packages can be installed using pip. So now ESI can be installed with a single pip install command. The other thing was removing the restrictions on deleting contracts that existed in ESI before, which involved significant changes in the code. Apart from those two, I also worked on bug fixes and refactoring the code.

This is some of the future work for ESI; I'll give you a moment to go through it. I'm particularly interested in the first two, since they're the immediate goals: collaborating with outside teams, getting their feedback about how ESI works for them, and making improvements; and achieving high availability for ESI.

I forgot to introduce myself; it's been a long time. I'm a graduate student at Northeastern University pursuing a master's in computer science. Some fun facts about me: I like to play badminton (I used to represent my college in undergrad), and I also like to travel.

Of course, I heard this when we were talking yesterday, and I immediately asked: so have you been playing pickleball? And what was your response?

I haven't gotten a chance to play pickleball yet, but I've heard about it. It seems interesting; it's a combination of three sports, so it would definitely be fun. I'll give it a shot.

Well, thank you so much, and we really appreciate your work. ESI is actually a project I'm particularly interested in, so thanks again. And now I'd like to invite Fuzius to the stage, who is (you're going to introduce yourself) the kind of leader for the next part of this.

Good morning, everyone. Thanks for having me.
It's a pleasure to be here today, finally in person. My name is Fuzius Amir. I am the coordinator of undergraduate programs in the Kennedy College of Sciences at UMass Lowell. Four years ago, Associate Dean Fred Martin and I, with support from Red Hat, created a program called SoarCS. SoarCS is a summer bridge program for incoming computer science students at UMass Lowell. This year, the program was a four-week hybrid program: some students attended fully online, and some came once a week for in-person activities on campus. The students were introduced to three programming environments, they attended several panels (including one with Red Hat employees and interns), and they were exposed to various resources available to them on and off campus. We also launched a Discord server three weeks prior to the program, so everyone started socializing even before the program began; when they finally met each other, they already kind of knew each other. That was one of the goals of SoarCS. Our goal is for students to feel welcomed and supported, and to be successful as full-time students when they start in the fall. We want them to know that they have faculty, staff, alumni, and current students who care about them and their success. We want them to meet each other and support one another throughout their years at UMass Lowell. This year we accepted 70 students, a record number compared to only 27 students in 2019. This program has a special place in my heart, and I'm so thrilled that it is growing and has been successful. Red Hat's partnership and sponsorship have been integral to this program; without them, we wouldn't have a program. I want to especially thank Heidi for her leadership and her involvement in the program, which is very valuable to us.
And it's very inspiring, and it encourages us to continue this program every year. As in past years, we had a mixture of students with a range of experience. Some were very advanced and have been coding for years; some were fairly new to programming; and some were in between. But nevertheless, they all challenged themselves to do work they had never done before, and for that I'm so proud of them. Today we have five groups of students, ten students in total, who volunteered to be part of this event. I'm so proud of all of you, and I believe this is a great professional development opportunity for you. I hope that SoarCS sets you off really well for your career at UMass Lowell and beyond. Thank you so much.

Of course, thank you. All right, so first up, we'd like to invite Christopher and Kevin Wahome to the stage.

Hello, everybody. I'm Christopher Coco, an incoming freshman at UMass Lowell. A fun fact about me is I've been coding since 2019; I first started out with Python.

Hello, my name is Kevin Wahome. I'm an incoming freshman at UMass Lowell studying computer science, and a fun fact about me is that I first got interested in programming in the sixth grade.

For our project, we did a MYR conversion project where we used two different technologies. We used MYR, a VR sandbox developed by fellow UMass Lowell computer science students and written in JavaScript, and we used a Python library called Pillow, which handles all the image processing for our project. Our program's purpose is to take 2D images and convert them to be viewable in the 3D space provided by MYR. This is done by a Python script that takes each row of pixels in the image and converts it into loops, which are then output to a text file that can be copied and pasted into MYR.
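As a rough illustration of that row-by-row idea, here is a sketch (not the students' actual script): each row of RGBA pixels becomes one JavaScript-style loop that places a colored box per pixel. The MYR calls `setColor`, `setPosition`, and `box` are assumptions; in the real project, Pillow would supply the pixel grid via `Image.open(path).convert("RGBA")`.

```python
def row_to_loop(row, y):
    """Emit one JavaScript-style loop for a single row of RGBA pixels.
    Fully transparent pixels become null and are skipped in the loop."""
    colors = ["null" if a == 0 else f"'rgb({r},{g},{b})'" for (r, g, b, a) in row]
    js_list = "[" + ", ".join(colors) + "]"
    return (
        f"var row{y} = {js_list};\n"
        f"for (var x = 0; x < row{y}.length; x++) {{\n"
        f"  if (row{y}[x]) {{ setColor(row{y}[x]); setPosition(x, {y}, 0); box(); }}\n"
        f"}}"
    )

def image_to_loops(pixels):
    """Convert a grid of RGBA tuples into text pasteable into MYR."""
    return "\n".join(row_to_loop(row, y) for y, row in enumerate(pixels))

# Tiny demo row: one red pixel and one fully transparent pixel.
demo = [[(255, 0, 0, 255), (0, 0, 0, 0)]]
print(image_to_loops(demo))
```

The transparency handling here mirrors what the students describe next: clear pixels are carried through as nulls rather than drawn.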
We also included a resize script, because sometimes when images are too big, MYR becomes inoperable and crashes or gets very laggy. So instead of having to go find a resizer online, we just decided to include one. It also supports image transparency.

Nice. So we talked about programming a little bit. Christopher, what got you into it?

I think a lot of it comes from when I was little. It probably sounds a little funny, but I would watch a lot of YouTube videos about glitches in technology and stuff, and I'd think: it's kind of crazy how technology that you use every day, and see work flawlessly, has these little glitches. I was curious how stuff like that happens on the inside. So once I started programming, I just fell in love with the loop of code, debug, and then code again.

So now you can make your own glitches.

Yeah, basically. I kind of fell in love with that little loop of fixing and debugging.

Nice. So Kevin, we talked yesterday, and you said that you got into it in sixth grade. What was it that brought you into programming?

Well, in sixth grade we had a new computer science program at my middle school, and they introduced us to an Hour of Code, which I think was developed by Code.org. After doing that and a couple of simple projects, I got really interested in programming.

Cool, that's awesome. Well, thank you so much for being on stage. It looks like good work, I hope you had a good summer, and good luck in your career at school. Next, I'd like to invite Miriam, Shreya, and Wendy.

All right, so we only have two mics, so we'll be passing them back and forth. If you want to introduce yourselves, then we can talk a little bit about your project.

Hi, my name is Wendy Carvalo. I'm an incoming freshman for computer science, and a fun fact about me is that I want to minor in graphic design.

Hello, everyone.
I'm Shreya Mishra. I am an incoming student at UMass Lowell majoring in computer science. A quick fun fact about me: I am a twin, I love Indian classical dance, and I'm interested in data and data science projects. Another important thing: I want to use technology to simplify our everyday activities, the way Alexa does.

Good morning, hello, everybody. My name is Mary Melcudi. I'm also an incoming freshman this fall, and I'll be studying computer science. A quick fun fact about me is that I first got introduced to computer science and programming back in middle school, when I joined my school's FIRST LEGO League team, which is a robotics team.

So let's get into our project. For our project, we developed a website for future SoarCS students. The website allows students to easily submit their projects, vote on their favorite project, and view past recordings in case they missed a class. We also have some information about our organizers and our peer leaders, and a direct link to SoarCS. To actually host the website, we used Google Colab, which is a collaborative notebook online, and we used Python with Flask and ngrok to host a server, reading in the HTML files we imported together using Google Drive. Right now our project is a prototype. Initially we wrote the HTML on a website called Syncfiddle, where we collaborated as a group. Then we used CSS to bring the rough sketch we made earlier in our lessons to life. Finally, we used JavaScript to add our logic and develop our code further. Thank you.

So Shreya, you mentioned dance. Are you a big fan of Bollywood?

Yeah, I'm starting to appreciate Bollywood like my other friends do.

Nice. I still remember going to a friend's house where they had a whole house dinner party and just had Bollywood movies on. It was an all-afternoon kind of thing. It was really a lot of fun.
Yes, when I was younger I used to go to other people's houses, and sometimes they would allow dancing.

Nice, nice, that's cool. So Wendy, you mentioned that you're into graphic design. Do you do any other kinds of art? I'm curious.

Yeah, I like drawing and painting, digitally and physically.

Nice. And then Miriam, we were talking about programming. What brings you there? What's the appeal?

Oh yeah, so I first started coding and programming back in middle school when I joined my school's FIRST LEGO League team, which was a robotics team, and we were able to win regionals and states. But after that, I took a break from computer science, because I thought I was going to be a doctor. Not anymore, I guess.

You thought better of it? Yeah, I'm with you. Well, thank you so much. We really appreciate you coming on stage, and we appreciate the project and the good work. We look forward to you building the coolest things in the world.

Perfect. Thank you so much.

All right, so next up, I'd like to invite Gabriel and Mohamed. Do the mic swap thing.

Hello. So I'm Gabriel Lima, and I'm an incoming UMass Lowell student in the computer science program. A fun fact about me is that I'm from Brazil, and I've always been interested in robotics and, more recently, in computer science and programming.

Good morning, everybody. I hope everyone's doing well. I am Mohamed Bilal, and I'm an incoming freshman at UMass Lowell, majoring in computer science. A fun fact about me is that back in Canada, my dad and I used to spend our family Friday nights making games on a website called Scratch. We made a lot of games, such as our own version of Crossy Road. And for today, our program is a number classifier. Now, I know everyone's thinking: it's only 9 AM, why are we dealing with numbers? But don't worry.
This is going to help us, because it makes things easier by classifying what a number is when we send it an image. Basically, when you submit a 28 by 28 pixel image of a number, it tells us what the number is, making our lives easier.

Yeah, so here's a little video of us demonstrating it. We can skip forward a little bit, past the part that's just importing weights. Basically, we used activation functions and weights; we didn't use biases in this neural network. You pass a 28 by 28 pixel image into it, it does the math, and it spits out what number it thinks it's seen. Each of the input nodes represents one of the pixels. So here we load in the weights and import some numbers. This one here, a one, looks like a two to the neural network. But if we go a little bit further forward, the network gets the three. So yeah, that's our project. Thank you.

Cool, nice job. So I'll ask about the robotics. What have you done so far with robotics?

I've played around with a Raspberry Pi a little bit, and this summer I actually took on the challenge of building a remote-controlled airplane.

Nice, that's cool. And we were talking about you working with your father. So your father was a programmer? Is that what led you to it?

Yeah, he was basically a programmer for a health care company, making things such as EMRs. As a kid I found it pretty confusing, but as I got older I thought: it's kind of interesting, and I think I can do this.

Nice, nice. Well, thank you so much. I hope you had a good project for the summer, and good luck in your future career. All right, next I'd like to invite Quinn and Nick. So my clicker doesn't work. All right, sorry, the video was angry with my clicker.
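The forward pass the students describe (weights and activation functions, no biases, a flattened 28 by 28 input, ten output scores) might look roughly like this sketch in pure Python. The layer sizes are assumptions, and random weights stand in for the trained weights their demo loads from a file, so the prediction here is meaningless; only the structure is illustrated.

```python
import math
import random

def relu(v):
    """Rectified linear activation, applied element-wise."""
    return [max(0.0, x) for x in v]

def matvec(weights, x):
    """Multiply a weight matrix (list of rows) by a vector."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def softmax(v):
    """Turn raw scores into probabilities that sum to 1."""
    m = max(v)
    exps = [math.exp(x - m) for x in v]
    total = sum(exps)
    return [e / total for e in exps]

def classify(image, w_hidden, w_out):
    """Forward pass with weights only (no biases): 784 -> hidden -> 10."""
    return softmax(matvec(w_out, relu(matvec(w_hidden, image))))

# Random stand-in weights: 16 hidden units, 10 output digits (sizes assumed).
random.seed(0)
w_hidden = [[random.uniform(-0.1, 0.1) for _ in range(784)] for _ in range(16)]
w_out = [[random.uniform(-0.1, 0.1) for _ in range(16)] for _ in range(10)]

image = [0.0] * 784  # a blank, flattened 28x28 image
scores = classify(image, w_hidden, w_out)
print("predicted digit:", scores.index(max(scores)))
```

Each of the 784 input values corresponds to one pixel, matching the "each input node represents one pixel" description above.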
So please introduce yourselves.

Hi, everyone. I'm Quinn Dolan, and I'm a third-year college student, but this is my first year at UMass Lowell; I transferred in for computer science. A fun fact about me is that I've become really interested in game development after this.

Hi, everyone. I'm Nicholas Blank. I'm an incoming freshman at UMass Lowell, and a fun fact about me is that I was actually never really interested in computer science. I took one class in high school junior year on a whim, because I thought it would be fun, and now I'm here studying computer science.

That'll do it. So do you want to talk about your project?

For our project, we built a simulation in MYR, much like our friends CJ and Kevin. We did a medical triage game where you have to select the most severe scenario to treat first. We ran into the issue that MYR doesn't allow loops and only does one run; you can't have a continuous function. So what we had to do was use AutoHotkey to scan the screen for an image of the correct answer. If it detects the correct answer, it resets the game for you so you can keep playing, and if it detects the wrong image, it ends the game for you so you know you got it wrong.

Nice. You want the next slide?

So I took this one step further and actually modeled a hospital. This is in Unity. You walk into the hospital and you are given three charts. You have to look through those three charts and determine who has the most severe case, and you can actually walk into the rooms. I'm working on being able to put in an IV and perform CPR on a patient. So this is a really useful tool for medical students: they can learn how to actually triage patients in a low-risk way, without having to deal with cadavers or real patients. So yeah.
When we were talking before, you said what particularly interests you about game design is using it as a teaching mechanism. Can you elaborate on that?

Yeah, so I've really become passionate about this project. I'm really fascinated by how much you can use games and how useful they are in real life: taking something that's typically fun and using it as an educational tool, especially here, where, I mean, you won't kill anybody.

Cool. So you said you were originally thinking about engineering but came to the dark side. What were you considering in engineering?

So I had a couple of majors that I liked. Sophomore year I built my own computer, so I was debating computer engineering. My dad's a fire protection engineer, and I grew up around that, so I was thinking about that. And I also really loved doing things like tech ed in middle school and building stuff, so I was also thinking about mechanical engineering. It was a while ago, so I was still on the fence about a couple, yeah.

You don't actually have to decide anytime particularly soon. You can also do what I did: a whole career in software, and then decide to become a professor. So, you know, you can always change. Well, thank you so much. Like I said, I hope you had a good project, I really hope for your future success, and thanks so much for coming on stage.

Thank you. Thank you.

All right, next up I'd like to welcome Geo and, sorry, Shruti. Can you introduce yourselves?

Hi, I'm Shruti. I'm an incoming freshman at UMass Lowell studying computer science.

And I'm Geo, and I'm an incoming freshman at UMass Lowell, also studying computer science. A fun fact about me is that I chose to get into computer science so that I could try out game design and game development.

And a fun fact about me is that I like painting.

Nice. So can you tell us a little bit about your project?
Yeah, so with every project you need an idea, and we both like playing games. So we discussed it and realized that we wanted to make a game for our final project. We were also very comfortable with Python and JavaScript, so we used both of those languages with MYR and Google Colab, and made a guess-the-TV-show game. We created two scenes in MYR and put them into the game as well. How the guess-the-TV-show game works is that a scene pops up on the player's screen with a little drop box below, and they put their guess in. If they get it correct, the game congratulates them and moves on. If they get it wrong, they get to try again, and if they still have no luck, it reveals the answer and moves on. We have six total rounds: two of them are from the MYR scenes we showed on the slide, and the other four are images we found on the internet. It was really fun to make this game. It was the first time we created a game instead of just playing one, so it was a new experience and a new look into computer science.

That's awesome. So you mentioned painting. What's your most recent painting?

I did a piece that I couldn't see, but I felt. I listened to music and kind of just painted what I felt.

Nice, that's really cool. And are you proud of the result?

Yeah.

Awesome. And so Shruti, you said you got into coding through a coding class?

Yeah, I took a few coding classes in high school, and that was my first experience with it. I really liked it. I really liked the freedom I got, and the vast number of things you can do with just one language. I enjoyed that.

That's cool. That's awesome. Well, thank you again. I liked your project, I thought it was cool. Good luck in your career, and thanks so much for being on stage.

Thank you.
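The round logic described above (congratulate on a correct guess, allow one retry, then reveal the answer and move on) could be sketched like this; the function name and the event-list representation are invented for illustration.

```python
def play_round(correct_answer, guesses):
    """One round of the guess-the-TV-show game: at most two attempts,
    then reveal the answer. Returns the list of events in the round."""
    events = []
    for guess in guesses[:2]:  # first try plus one retry
        if guess.strip().lower() == correct_answer.lower():
            events.append("correct")
            return events      # congratulate and move on
        events.append("wrong")
    events.append(f"reveal: {correct_answer}")  # still no luck: reveal
    return events

print(play_round("The Office", ["Friends", "the office"]))  # ['wrong', 'correct']
print(play_round("Friends", ["Seinfeld", "Cheers"]))        # ['wrong', 'wrong', 'reveal: Friends']
```

A full game would run six such rounds, two backed by the MYR scenes and four by images.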
And I think that wraps up our student showcase; up next we'll have Dan Walsh. Oh, I'm supposed to be making an announcement that I forgot: if you're a speaker or a track captain, please make sure you come and collect your speaker swag from 315 before you leave. That's the upside of being in person: we don't have to do shipping, so it makes it a lot easier for us. But I'd like to introduce Dan Walsh, who is a senior distinguished engineer at Red Hat. He does something with containers here and there, and we all know and love him for his work on SELinux. Come on stage.

Okay, everybody hear me? Okay: I was an incoming freshman in 1978 at the College of the Holy Cross. A fun fact about me is that when I watched the men land on the moon for the first time, I thought the guys in Mission Control were the really cool ones, with the white shirts and the thin ties.

Nice, nice. Here's your pointer. It almost works. Give me the mouse; where the hell is the mouse? I just want to make this full screen. That doesn't help. Yeah, it's all right. That'll be enough. All right, here's Mr. Dan Walsh.

Okay, well, thanks. So as Langdon pointed out, I've been working on containers for many, many years now; some would say about 20 years. I've been leading the container team for the last 10 to 12 years, and I recently shifted my role a little bit to working on edge devices. My goal is to put technology like containers on the edge. So I saw this diagram: would you like to live in that house? I don't think so. And of course, my worst nightmare right here: this container on the edge. So when we talk about edge, those that came the other day saw the first keynote talk about it. Does this work? Okay, I was going to use the laser pointer, but it didn't work. When we talk about edge computing inside of Red Hat, we usually look at different tiers.
This is like the onion the speaker at the keynote talked about. You start with the data center: the cloud, or your traditional data center, where you have computing. Then you slowly move out to the edge. Sometimes when we talk about edge, we talk about retail stores, your Home Depots or your CVSs, which might have a suite of computers in them. As you get a little further out, you get to the far edge, and that's really what I'm talking about: things like windmills. Imagine you put one computer system on a windmill, on thousands and thousands of windmills, to monitor them. Or train tracks: each mile of track might have a monitor or some kind of tool watching it. Oil rigs, ocean-based oil rigs, might have a computer system just monitoring and handling communications back and forth. And then finally, self-driving cars.

At Red Hat (I got these slides a little bit out of order) we have a saying that we will not be putting computers on sprinkler heads. That's traditional IoT devices, like the light switches in your house. We're talking about fairly beefy computers for monitoring large infrastructure-type projects. So part of my effort is actually looking at automobiles. Red Hat has started to work with the large automobile makers of the world, as well as their infrastructure, and we formed a group inside of Red Hat called AutoCow, where the "Cow" stands for Containers On Wheels. That's a play on the joke that as you drive by a cow, you say moo. So as we move containerized applications out closer to the edge, we need to rethink how computers work.
All right, so most computer systems right now are managed by human beings, and many of the tools we build count on having high connectivity to the computers, so we can constantly monitor and constantly update them, and human beings can go out and actually fix things when something goes wrong, say if the connectivity fails. As we move to the edge, we start to lose the ability to do that, so we have to start thinking differently about containers. When we have hundreds of thousands of nodes, how do we manage that? You can't have a human being driving out to every windmill to do an update, sticking a USB key into the machine. So in my opinion, we have to start thinking in terms of what I call toasters. We have thousands and thousands of toasters, or appliances, and everything has to be the same. We can't have the system on every single windmill being different. They have to be cookie-cutter; I like to call them toasters: the exact same system everywhere. We need a limited set of packages on these computer systems, so the base operating system has to be very limited. It's not going to be the full set of packages available in Fedora, or the full set available in RHEL. Frankly, I think it needs to be (and this will show my bias) the kernel, systemd, and Podman, a container engine. Anything else on there is superfluous. Then we run applications on top of that, and the applications come in the form of containers. That way the operating system is the same on every single one of these N nodes.

One of the interesting things: a few years ago I worked on a thing called Project Atomic, and Project Atomic introduced this idea of having a minimal base operating system and having applications always running inside of containers.
But a fundamental problem with it was that every time we went to somebody to talk about it, they always said: that sounds great, that's exactly what we want, but I've got to have my couple of packages on there. So we couldn't just say: you get the container engine and the base part of the operating system. They always had to have something else on there, usually security agents, virus scanners, or other monitoring tools, which they were forced to run on the base operating system, not in containers. So Colin Walters, who sadly is not here today but was here earlier this week (he's the guy who really worked on Project Atomic and works on CoreOS nowadays), came up with the idea of building operating system images the way we build containers. Anybody that's built containers over the last few years using Docker or Podman has used the concept of a Dockerfile, which I now like to call a Containerfile, and which has a fairly simple syntax: you start FROM the base image, in this case the base operating system, and then add packages to it. With the standard Containerfile workflow, you can do a RUN command to add a package, you can add your corporate secrets, you can inject whatever content you want into your image. Then you just do a podman build to build the image up, and a podman push to push it out to an OCI registry, your container registry.
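A Containerfile along these lines illustrates the workflow Dan describes. The base image name, the package, and the paths are all made up for illustration; only the FROM / RUN / COPY structure is the point.

```dockerfile
# Illustrative only: image names, package, and paths are assumptions.
FROM registry.example.com/os-images/rhel-base:9.1

# The one extra package the customer insists on in the base OS,
# e.g. a security agent or virus scanner.
RUN dnf -y install corporate-security-agent && dnf clean all

# Inject site-specific configuration ("your corporate secrets").
COPY agent.conf /etc/corporate-agent/agent.conf
```

The image is then built and published with the ordinary commands, something like `podman build -t registry.example.com/edge/os:9.1 .` followed by `podman push registry.example.com/edge/os:9.1`.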
Your quay.io or your docker.io or your Artifactory, whatever container registry you have inside your environment, your corporation, whatever. And now you can take those hundreds of thousands of nodes and update them directly from the container registry. So you can customize your environment, and then on a monthly basis, as new releases of the base operating system come out (you see here the base operating system, in this case RHEL 9.1), you just follow the standard container workflow: rebuild your image, push it back to the container registry, and then your hundreds of thousands of nodes wake up and update.

Now, this model I call the CoreOS model. CoreOS is a containerized operating system that allows you to have two different images at the same time: the currently running image and a side image. In a CoreOS-type environment, each one of these nodes runs the operating system and is able to download the update to the operating system on the side. With these new features, we're able to do that from a container registry, so you don't have to set up any separate new architecture or new services in your environment to update using this method.

So what happens if the update fails? If an update fails, you're in big trouble. This is a problem that OpenShift has dealt with (I jumped the gun a bit) by using CoreOS as its operating system. CoreOS has the ability to download an update in the background; then you reboot the machine and boot into the new operating system, at which point a health check can come up and look at the operating system. If the operating system doesn't come up healthy, CoreOS has the ability to reboot itself back into the previous operating system.
So you basically have the ability to switch operating systems on the fly, and this model has allowed OpenShift to scale out to hundreds of thousands of nodes. All of OpenShift, which is Red Hat's version of Kubernetes, uses CoreOS to do it. So I'm proposing that we use CoreOS technology at the edge. By the way, Red Hat has not approved me saying this, but it's what I would like to do. I want a RHEL CoreOS for the edge with a really limited set of packages, managed like CoreOS, the way OpenShift uses it. Images and updates would be based on OSTree, pulled from the Containerfiles I talked about. You run the health check, and if something goes wrong, you roll back. So now we have a lifecycle: our remote hundreds of thousands of nodes can be updated, and if something goes wrong in the update, we don't lose a windmill and don't have to send someone out with a USB key to 100,000 windmills.

So that's how the operating system needs to be managed. Now we need to look at how the containers, the images, are going to be used. For containerized applications on the edge, the first thing we need to look at is which container engine you want to use, and I have a feeling everybody in the room knows which one I'm going to say. But first, let's start with Kubernetes. Kubernetes is great, and a lot of our customers come to us and say: at the edge we want Kubernetes, it's all about managing containers, right? The problem with Kubernetes, first of all, is that it's very heavyweight. When people put Kubernetes onto a single node, they often lose 20 to 25 percent of the node just for managing Kubernetes; it's very memory intensive. The other thing is that Kubernetes is designed for moving apps from one node to another. The idea is: if I have a web service running, I might have it running on three nodes.
One of the nodes goes away; maybe I pop up another node somewhere in the cloud and move that web service. Well, that doesn't work with toasters, right? If you're running thousands and thousands of toasters and your toaster goes down, you can't suddenly get your toast toasted in your neighbor's toaster. I don't know if that analogy works very well, but the basic idea is: if one node on my windmill goes down, I can't monitor that windmill from another windmill that's 200 yards away. It just doesn't work that way, okay? So fundamentally, Kubernetes doesn't match this model. It uses up too much memory and too much CPU, and it doesn't really do what we want in this case. Docker. One of the things about Docker is that when you run Docker in your environment for running containers, you have to run multiple services, and these services are always running on the system. So you're going to use up resources running the Docker daemon; the Docker daemon launches the containerd daemon; if you have any authorization daemons, they're using up a lot of memory, and they're just sitting there running. Well, usually when you're running your application on an edge node, you just want the application running, right? All you want is, periodically, maybe to launch a container engine to do some management on the system. But for the most part, you just want the orchestration tooling to disappear and run your application. So that's why I recommend we use Podman on edge devices: the way Podman works is it comes up, starts your container, and then the container continues to run. Periodically, you might have systemd launching Podman to check whether there are updates available, or to shut down services, things like that. But Podman doesn't have to run permanently, right?
It's basically because of the fork-exec model that Podman was designed around: it just comes up, does its thing, and gets out of the way. So it only needs resources while it's running. So how should I orchestrate the applications? One of the interesting things is, when people write applications — and obviously I'm saying we should write applications as containers, as OCI containers — usually in a service you're going to have multiple containers running, multiple different services, and you need these services talking to each other even on a single node. So we could look at how to orchestrate that, and a very popular tool for this is Docker Compose. Docker Compose is a YAML file that basically describes how to run multiple containers and how they communicate. You could also do it with just the standard command line, right? You could write lots and lots of command lines. Or you could use Kubernetes YAML. Now, I just told you we don't want to run Kubernetes on the nodes, but Kubernetes YAML is a way to describe multiple applications running within a pod and how they communicate together. Not only that, but you could then take the Kubernetes YAML of the application you're going to run on an individual node and push it out to your Kubernetes cluster for testing, for development, right? So imagine you're writing an application for an automobile to test the speedometer. You write a speedometer application, you put it out to your cloud, and you have your whole CI/CD system, your testing, going on: what happens when I hit the brake? How does that affect the speedometer? What happens if the car speeds up very quickly when it's going downhill? You have all these simulations running, and they can all run inside of Kubernetes. So you would define your application in terms of Kubernetes YAML.
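A minimal sketch of what such a Kubernetes YAML file might look like — the pod name, container names, and image references are all illustrative:

```yaml
# Hypothetical pod: two cooperating containers on a single node.
apiVersion: v1
kind: Pod
metadata:
  name: speedometer
spec:
  containers:
    - name: sensor-reader
      image: registry.example.com/auto/sensor-reader:1.0
    - name: speedometer-ui
      image: registry.example.com/auto/speedometer-ui:1.0
      ports:
        - containerPort: 8080
          hostPort: 8080
```

Containers in the same pod share a network namespace, so they can talk to each other over localhost — and the same file works under Kubernetes in CI and under Podman on the edge node.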
You could use Kubernetes for all the testing, but when you take it to your edge device, if you use Docker Compose, you have to translate from Kubernetes YAML to Docker Compose, or you have to translate it into standard Podman or Docker-type commands on the system. But if you use Kubernetes YAML all the way through, you can do your testing in Kubernetes, in OpenShift, then bring it to your individual nodes and run the exact same application, because Podman has full support for running and working with Kubernetes YAML. Podman has a command called podman generate kube, which will translate containers and pods on a node into Kubernetes YAML. So you can just run a container on your system, then run podman generate kube against that container, and it will pump out the Kubernetes YAML file. We also have the ability to run a Kubernetes YAML file through the command podman play kube: we can take any Kubernetes YAML file and translate it into running local containers on the system. So the last question about orchestration we have to talk about is: who's the orchestrator? What tool is going to be launching the containers? In this case, in my opinion, the best tool for an edge device is systemd. Systemd is the boot-up service for your environment, and systemd can work closely with Podman for launching containers. So you put your Podman commands into systemd unit files, and they launch at boot time. We've written several articles on how to do this, and podman generate systemd is a really cool tool to take your locally running containers and generate unit files for them. But something new has just been introduced to Podman.
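The round trip described above looks roughly like this; the container name, image, and file path are placeholders:

```shell
# Run a container locally, then capture it as Kubernetes YAML
podman run -d --name speedometer registry.example.com/auto/speedometer-ui:1.0
podman generate kube speedometer > speedometer.yaml

# Later (or on another node), recreate the same workload from the YAML
podman play kube speedometer.yaml

# Tear it down again from the same file
podman play kube --down speedometer.yaml
```

Because the YAML is ordinary Kubernetes YAML, the same file can also be applied to a test cluster with kubectl.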
So what Podman now has is a simple one-line command — or in this case two lines, because we have to escape the file path — where systemd can run a Kubernetes YAML file directly as a service through systemctl, using a special unit template called podman-kube@. This tells systemd to go and execute podman kube play — something delivered by Podman — which will take your Kubernetes YAML file and run it through Podman on the system. That's all you have to type now to get Kubernetes YAML files running as Podman services on a Linux system. So it's that simple going forward, and that's the type of tight integration we want with Kubernetes YAML files running locally on your nodes. So how do I update? Now I've written my application, I have my Kubernetes YAML file, it pulled a couple of images down from container registries to the system, and everything's up and running. Two months later, I find a bug in my application. How do I update it? One of the things I would like to be able to do is push an update to a container registry. So here we have an admin: he has a fix for the application running on the node, his windmill application, and he pushes it out to a container registry. Each one of the nodes is basically running a tool that monitors the registry — periodically, systemd launches Podman to check whether any updates are available on the container registry — at which point it pulls the container image down to the node. So Podman pulls down the application update. When the image has been downloaded to the server, Podman restarts the service — restarts the container, basically recreating the container from the new image. So say you had a bug in your windmill monitoring tool: we pull down an update from the container registry — and this works with any OCI registry — and it restarts the application.
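The two lines in question look roughly like this — the YAML path is a placeholder, and systemd-escape is the escaping step mentioned above:

```shell
# Run a Kubernetes YAML file as a systemd service via the
# podman-kube@ unit template shipped with Podman
escaped=$(systemd-escape /etc/windmill/speedometer.yaml)
systemctl enable --now podman-kube@"$escaped".service
```

systemd-escape turns the file path into a valid unit-instance name, and the template unit invokes podman kube play on that path at boot.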
At that point, a Podman health check goes and verifies that the application is fully up and running on your system and working properly. If it is not running properly, Podman has the capability to roll it back. So again, we want to allow the machines to update the operating system, and through the use of Podman, if the application update fails for some reason, we'd be able to roll it back at that point. Again, no humans involved in this at all, other than pushing to known infrastructure in your environment. Bad slide. So now the computers are able to update, and if you have an application installed on the node, we're able to manage the life cycle of that application by updating the images, and the application will self-update. But what I haven't covered yet is operational management. What happens if I want to add a new application to the node? Or what happens if I want to disable an application that's running on the node? How do I do that? That's what's called operational management. So how do I update these nodes in the field — not just update the software with fixes or newer versions, but actually add a new application to all of these hundreds of thousands of nodes? This is an unanswered question; I don't have a great answer for it at this point. How do we add and remove services on the edge nodes? One thing we could do is send an admin out to every one of the nodes to plug in. Or, if you imagine computers in cars, you might bring your car into a dealership and the dealership somehow updates the software on it. That's probably how cars will work when software-defined cars first roll out. But you're not going to be bringing the windmills in, right? And you're not going to be bringing in the oil rigs. So how do you do it?
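The pull-restart-rollback flow described above maps onto Podman's auto-update feature. A sketch, assuming a container is managed as a systemd service; the image, unit, and health-check command are placeholders:

```shell
# Label a container so `podman auto-update` will watch its registry,
# and give it a health check so a bad update can be detected
podman create --name windmill-monitor \
    --label io.containers.autoupdate=registry \
    --health-cmd '/usr/bin/monitor-healthcheck' \
    registry.example.com/acme/windmill-monitor:latest

# Generate a systemd unit so systemd manages the container as a service
podman generate systemd --new --files --name windmill-monitor

# A systemd timer (podman-auto-update.timer) then periodically runs:
podman auto-update
# If the freshly pulled image fails to come up, Podman can roll the
# service back to the previous image.
```

The node needs nothing beyond Podman and systemd — no agent and no always-on daemon — which is the point of the model.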
Sending an admin is obviously the worst-case scenario — not very cost effective. You could have an SSH daemon, some kind of remote connection to the services running all over the environment. From a security point of view, that scares the heck out of me, right? This is the problem with IoT devices right now: they can be hacked and turned into spam bots. But imagine someone being able to SSH into your automobile and actually affect its functional safety. Potentially, when we first roll this out, we might have to ship some remote services that can be connected to. But the number one thing we need is phone-home, right? The edge devices have to be able to call back to a central server somewhere — some group of central servers — and ask: what should I do now? How should I update? That's really what we're doing with the CoreOS updates and the individual application updates, but now we're asking something more fundamental: how should I operate? What do I need to do to operate? So I've come up with three solutions to this, just from talking over the last couple of months about what people are looking at. One of them is Kubernetes. Kubernetes doesn't necessarily have to run on each one of the nodes. Kubernetes has a database out in the environment called etcd, and etcd can store fairly huge amounts of data, and the services could connect to the etcd daemon in the environment. So theoretically, through OpenShift, you could publish information in your Kubernetes environment saying the windmill applications should be running this new piece of software, and the nodes could reach back for it. I've been informed by people in the Kubernetes world that this might be stretching its capabilities, because you might want each one of the nodes to be separate. Another method for doing this is part of Ansible.
It's called ansible-pull. Ansible-pull is a potential solution: usually Ansible comes in through SSH and drops workloads down onto individual nodes, and the nodes then activate the applications. But Ansible does have a feature where the nodes can reach back to a central server and pull the Ansible workloads down themselves. So that's a potential use case. And the last one was a presentation given yesterday called FetchIt. This is Sally O'Malley's project — she's somewhere in the room, I believe. An interesting thing: FetchIt doesn't have a logo right now, so I had to put her picture up. And if you Google Sally O'Malley, you'll find a very funny skit — not about Sally, but on Saturday Night Live there used to be a character called Sally O'Malley, which I'm sure Sally doesn't want people to know about — "I'm 50!", right? The Sally O'Malley character always talks about being 50 years old. Anyways, FetchIt is interesting in that it uses a Git workflow. If you had FetchIt running on the nodes — it can run out of a container on each one of the nodes — it would reach back to a central Git service, whether that's GitHub, GitLab, or your own locally running version. You push information to Git, and then each one of the nodes pulls from Git and figures out the different things it has to run. FetchIt has support for systemd units, Podman, and lots of other features. So these are three potential ways we could do it, but I don't think it's by any means settled, and I don't know if any one of these is necessarily always going to be the right one. But all three of them are sort of the same thing: you basically have the individual nodes reaching out to a management tool and asking the management tool what should run at that time.
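The ansible-pull pattern can be sketched in a single command per node — the repository URL and playbook name are hypothetical:

```shell
# Each node periodically (e.g. from a systemd timer or cron job) pulls
# its own configuration from a central Git repository and applies the
# playbook locally, instead of a controller pushing over SSH.
ansible-pull \
    --url https://git.example.com/edge/node-config.git \
    local.yml
```

This inverts Ansible's usual push model: the node initiates the connection, so no inbound SSH access to the edge device is needed.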
So that gives you my current vision for the way I think edge devices should work. This is a picture of my grandson, so. But basically, what's next? When we look at these individual computers sitting out in these fairly vital pieces of infrastructure — automobiles — what is the real fear as we proliferate these computers? And this one picture, to me, brings it home. The United States military is putting computers all over their vehicles, all over their weaponry and things, and here we have a downed drone in the Middle East somewhere. If I'm a hacker, or the opposition in an armed conflict, I would love to get my hands on a computer that the opposition owns. Or if I was driving a car, I would love to get hold of that car. I've heard stories about Teslas. Tesla computers inside of cars actually have a feature — I don't know if this is true or not — where you can have more performance, but you have to pay an extra $5,000 to get the higher-performance version of the Tesla. That's controlled via software, and people have broken into the computer systems and hacked them so that they can get the extra performance out of their Tesla. So the number one thing I always say about computers is: if I can touch it, most likely I can become root on it. If you start to put computers everywhere, all of a sudden hackers can physically touch them. They can start to add keyboards to them, or find some way to get into the chip and read it or break into it. And imagine what would happen if you had a computer system in the military world that told this drone where all the U.S. soldiers were — to make sure the drone doesn't attack U.S. soldiers accidentally. Wouldn't it be good for an enemy of the U.S. to get hold of that computer system and find out where all the U.S. soldiers are, right?
So you really need to protect the applications from root inside of the operating system, okay? Imagine if you could run applications that even the administrator of the operating system could not look at. That's the next generation of computing, and it's called confidential computing. Intel and AMD are both building new ways of running computer systems where they can run an operating system, but also run workloads inside of that operating system in such a way that the processes in the operating system cannot examine what's going on inside of these confidential environments, okay? Really, what they are is virtual machines — basically KVM-separated containers. What we want to get to is a point where root, the administrator on the machine, cannot examine what's going on inside of a container. That's the end goal of confidential computing. There are lots of things that have to happen to make this all work. Number one, we have to be able to trust the operating system that's connecting out to get the data about where the U.S. soldiers are. So we need some mechanism for registering the computer with the environment, and for the environment to verify that the machine has not been hacked. There's development on a thing called FDO — FIDO Device Onboarding — which is basically about how a machine, when it boots up, identifies itself to the host server to say: I'm up, I'm ready to go. That first registration — device onboarding, it's called — is being worked on. The next thing is you have to have some kind of trusted boot, measured boot, to make sure the host operating system hasn't been hacked and that it has the proper hardware in the system. Basically, you need measurements from the time the machine boots up, until you can produce an identifier — a checksum, a key — and send it off to an attestation server.
An attestation server is a server in the environment that looks at a machine and says: okay, that machine has not been hacked, so now I can hand down data and workloads; or, that machine is configured in such a way that when I hand down the workload, it will run inside one of these confidential environments. These are all things that have to be developed over the next few years, but most of this stuff is actually available now — it's just not available in widely proliferated hardware; it's available on certain specialized hardware. And the reason I say that is, just this week, on the 18th, one of the guys I work closely with, Sergio Lopez, announced that he had confidential containers working. He had the ability to take any general container from an OCI registry — from Docker.io or from Quay.io or any registry — pull it down to a system, and run it in confidential mode, which means that, again, the administrator wasn't able to examine it. And he's using standard Podman to run these containers on the system. (On the internet, I'm rhatdan, by the way.) So he was telling me about it. He got it all up and running, but this only works on special hardware: there are very few pieces of hardware available in the world right now that can do this, okay? I think over the next five years, the major vendors — Intel, AMD, and all the ARM chips — are going to make this proliferate. And so within, say, three to five years, we will have fairly inexpensive hardware that can be used on edge devices to basically encrypt workloads so that even the administrator on the machine cannot affect them. I mean, you could shut down the processes, but you wouldn't be able to go in and examine the memory of the process; you would not be able to look at any of the content. Everything would be fully encrypted: encrypted on the remote side, encrypted in transit, and encrypted while running.
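Sergio's demo builds on running ordinary containers inside lightweight VMs by swapping the OCI runtime under Podman. A rough sketch of the shape of the command — the runtime path and image are assumptions, since the confidential-computing support described here is brand new and hardware-dependent:

```shell
# Run an ordinary OCI container isolated inside a lightweight VM by
# pointing Podman at the krun runtime (crun built against libkrun).
# On SEV-capable hardware, the same pattern extends to
# memory-encrypted (confidential) guests.
podman run --rm --runtime /usr/bin/krun quay.io/example/app:latest
```

The key property is that the Podman user experience is unchanged: the confidential isolation comes from the runtime underneath, not from a new container workflow.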
So that'd be the final mile, and that gives you the protection. As I end my presentation, I do have a shameless plug: I have a book coming out in the fall called Podman in Action that talks about a lot of this functionality. It's available — you can read it online right now — but they haven't printed the physical book yet. I guess, do I have time to... No, I'm done, good. Thanks for having me. Every morning talk has been pushed ahead 15 minutes, so if your talk was starting at 10:30, you're now starting at 10:45. So take five minutes, get some coffee, and then get to the first talk, and we'll make up that time at lunch. Anything else? If you have questions for Dan, he'll be around, so feel free to catch him. All right, thank you all. Yeah, take your questions to Dan out at lunch.