Good morning, good afternoon, good evening. Wherever you're hailing from, welcome to another episode of the Developer Experience Office Hours. I'm Chris Short, executive producer of OpenShift TV and technical marketing manager here at Red Hat. I am joined by the one and only Ryan Jarvinen. Hey, how's it going? Good. How are you doing today, Ryan? Oh man, hanging in there. This new year is a wild ride. I don't know, I'm half expecting it to suddenly get better, but I'd better keep my expectations in line based on current trends. Nice. Yeah, right. Awesome. We're also joined by a couple of folks. Andrea, how are you doing today, buddy? I'm good, thanks. Still in Italy, nice weather, so enjoying winter. Ah yes, winter in Italy. I bet that's beautiful. Natale, you have made it. You have arrived. Thank you for joining us today. Hey, how's it going? Great. Great, man. Yeah, two Italians today. I know. All right, it's 50-50, the Americans and the Italians, right? It's like the Olympics. Yeah. All right, so awesome. We've got a little outline going here. Ryan, you want to kick it off talking about our developer sandbox environments? Yeah. First thing I wanted to talk about is we have a new Developer Sandbox offering available. I will post a link to... oh, you got it in chat. Excellent. Well, you did too. Perfect. Yeah, all right. If you are following along in chat, give it a look. It's available at developers.redhat.com/developer-sandbox. So I'm going to click on that link. I wonder if I can share my desktop. Let's see if this... oh boy. Let me share, if I can figure out how to do this. You want me to do it? Maybe. Well, I'm going to give it a shot first. OK. Let's see. Where's the share screen button? And how about this one? Oh, they've all got warnings on them. Sure. It's just showing me a bunch of error messages. OK.
So you want me to pop this open here? Let's see. Yeah. I haven't run through it with this laptop yet. OK. It's got me opening System Preferences and doing all kinds of... probably restart Zoom. That's fine. Let me do the screen share real quick. OK. But please ignore all of my tabs, people. Don't freak out. I'm sorry if I'm giving you tab anxiety. I thought this was my desktop at first, those tabs. So if you go to the Developer Sandbox page, right, you get this little red box here to launch your Developer Sandbox for Red Hat OpenShift. I'm going to click that here. The main thing you need is a developer account, a Red Hat developer account. They're free to sign up for. If you already have a Red Hat login, you are good to go. I think I'm already logged in here. Hang on. There we go. And so this gives you free access to an OpenShift development cluster, which has somewhat limited resources available, but it spins up really quickly and lasts quite a bit longer than what you'd get from Katacoda. The Katacoda environments have a one-hour limit, and then they self-destruct. This gives you quite a bit more time. So you want to log in with the DevSandbox option. Cool. Oh, do you want to go through this or skip it? No, well, this is the main thing I wanted to show. Basically, you can skip the tour. Yeah, skip that tour, and you're right into adding new things. So off you go, right? You want to add something from OperatorHub? You can plug it in right there. You want to pull in something from your Git repo or a container image, or just drop the whole YAML from your application in? Off you go, right? There are limitations. Yeah, there are limitations on this environment. Since you're logging in with a developer credential, I don't know if you have full access to adding new operators that are not already installed.
So there are some limitations, but if you're really just looking to do language-based development and not lose your entire progress every hour, this is definitely something worth looking into. I probably shouldn't have tabbed out of that, because now I can't find it, sorry. Oh, here we go. Apologies. Too many tabs. Too many tabs, yeah. So as far as operator-backed services go, there's... That's weird. Yeah, there we go. So there's plenty of stuff you can tinker with in here, right? Yeah, definitely plenty of solutions. But there are resource limitations and a couple of other stipulations on what you get out of this totally free service. I think it lasts for... I'm not exactly sure how long. I think they give you some heads-up when you create the cluster, but I think you get about a week or so. There's more information on the landing page that we linked. Yeah, I think the beta program started with two weeks, and then it should be up to 30 days when it goes to GA. 30 days you can work with this environment. The cool thing is you have three projects. So you start already with this mindset of promoting your container image across environments: you have the dev environment, the stage environment, and then there is the code environment, because with this sandbox environment we would also like to see people coding from the platform. I know it can feel weird, but it's possible to code here too, because CodeReady Workspaces is already available for your application. So when you start an application, you can just start coding in it and testing the thing. Let's say coding locally, local to this sandbox environment, and then promoting your container image across all the projects. That's awesome. So if I can have this thing for two weeks, I can do a lot of tinkering, figuring things out. And if that gets extended to 30 days, what are we doing exactly to prevent people from running all their operations on these things?
I mean, where are those limitations, right? Like, what are we not gonna see? I feel like it's a good question to ask. I'm gonna jump into the administrator view and see if I can... yeah, okay, there we go. Yeah, it's kind of locked down from the administrator side currently. You get some ability to use operators that have already been pre-installed, but you might not be able to add an operator that hasn't already been made available for you. There are a couple of other limitations, memory and CPU. I'm looking at those right now here on screen, right? Like one CPU, 512 megs of memory, right? It's a good way to get started and figure out how the OpenShift interface works. It is not something that you're gonna run your enterprise workloads on, just from the resource constraints alone, right? You could run development workloads, especially if you're dealing with a small development-sized database just for testing purposes, or if you're linking out to a shared database hosted somewhere else. Right, like you could totally set up rules for this environment to get to that environment and everything be happy. Exactly, yeah. I think potentially you drop in a Helm chart that already has some knowledge of how to bind to staging or something, and you're in a real good spot with this type of offering. So it's new, we're still kind of testing the waters with this solution, but we would love to have you give it a look, give us some feedback, let us know what you think. And like we said, it's gonna be expanding over time. I don't know if we'll get to having paid offerings or something where you can keep your environment for more than 30 days. I think that's more out in the future. We'll see where it goes. We got a comment in the chat about programming on the platform from GNU Pasta.
And yeah, that's totally one of the things we recommend on this show: doing development that involves a lot of collaboration with the platform environment, and taking advantage of as much of that platform environment and that feedback from production-grade platform input/output as you can, all during your dev loop. So you spend less time making guesses about how production is going to perform when you're developing. So yeah, if you just create a simple Node.js example here, off you go, right? Like it's waiting for the build. Yep, and you should end up with a public URL that you can share with anyone in any of your environments. Yeah, as you said, one of the cool things is that your inner loop also runs on Kubernetes. This is kind of new and powerful, no? Your inner loop runs on Kubernetes, and then also the outer loop, of course, but the inner loop itself can run on Kubernetes. And once you start your application, you can edit the code directly from the IDE inside the sandbox, already installed. It is, of course, Eclipse Che; the Red Hat product around this is CodeReady Workspaces, based on Eclipse Che 7. But yeah, this is awesome, right? Like CodeReady Workspaces right here. And that's dope. And it's based on Eclipse Che, and it'll start initializing as, you know, you gave it the credentials it needed. Well, not credentials, but just, you know, username, email, some kind of UID creation process there, I'm assuming. Yeah, this is the factory. And the factory is a process around CodeReady Workspaces reading the devfile. We talked about devfiles with Ryan in a previous office hours. The devfiles are the same ones used by odo, the OpenShift Do CLI. So we are converging on a new version of those devfiles, which is devfile version two. But the devfile is really the Dockerfile of your workspace, of your developer sandbox, your developer environment.
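For anyone who'd rather follow along from a terminal than the web console, a similar Node.js sample can be created with the oc CLI. This is a sketch, assuming you've already logged in with the token from the console's "Copy login command" option; the app name is just an example, and the repo is Red Hat's public sclorg sample:

```shell
# Build and deploy a Node.js sample from a public Git repo using
# source-to-image; the build runs on the cluster, not your laptop.
oc new-app nodejs~https://github.com/sclorg/nodejs-ex --name=nodejs-sample

# Expose a public route so anyone can hit the app's URL.
oc expose service/nodejs-sample

# Follow the build logs while you wait, then grab the route URL.
oc logs -f buildconfig/nodejs-sample
oc get route nodejs-sample
```

A couple of commands later and off you go, same as the couple of clicks in the console.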
This is another thing that unifies all these tools that we can use on the platform. That's awesome. Is this "loading soon" graphic any cooler? I mean, my son would freak out over this, right? Like he's all about cranes and everything like this. This would make him happy. The one last warning I have about this site: if your URLs or your workloads are not being accessed externally or directly via the UI, I think we can potentially idle your pods and then bring them back later when you do start getting requests and interactions. So if you're expecting 100% uptime, this is more developer quality, not production quality, as far as uptime and availability, but it's a great way to test things out. And when you do start interacting with it, your services should come back relatively quickly. Let me try. Let's see what happens here. Not as fast as serverless though, which will be our topic later in the show today. So that failed, interestingly enough. But you are in a workspace. You made it to CodeReady, with a small exception, which might just be a timeout or something else. Yeah, the pod timed out was the error. So maybe it's gonna come online. Let's see. And this could be a stumbling block if the pod has not loaded yet. Yeah, one cool, one smart thing to do if you plan to use this heavily is to preload the base images for the workspaces. So if you plan to have lots of, for instance, Go, Python, .NET, Java, you preload those with a puller, let's say a DaemonSet that can preload all the needed images, so developers can start straight away, almost immediately. Otherwise, those images need to be pulled from the Red Hat registry, or wherever the image you are using is hosted, and then you have to wait for the initialization between the agent inside the pod and the CodeReady Workspaces server. You can optimize this time by pre-pulling those images.
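The pre-pull trick Natale describes can be sketched as a DaemonSet that runs each workspace base image with a no-op command on every node, so the image lands in each node's local cache before a developer asks for it. The image name and DaemonSet name below are illustrative, not the actual CodeReady Workspaces images:

```shell
# Write out a minimal image-puller DaemonSet; each node runs a tiny
# sleeping container from the base image, forcing a local image pull.
cat > prepull-daemonset.yaml <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: workspace-image-prepuller
spec:
  selector:
    matchLabels:
      app: workspace-image-prepuller
  template:
    metadata:
      labels:
        app: workspace-image-prepuller
    spec:
      containers:
      - name: prepull-java
        image: registry.example.com/workspaces/java-base:latest  # illustrative image
        command: ["sleep", "infinity"]
        resources:
          requests:
            cpu: 10m
            memory: 16Mi
EOF
```

You'd then apply it with `oc apply -f prepull-daemonset.yaml` on a cluster where you have the needed permissions; one container per language stack you want warmed up.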
Right, and just to highlight another fact: this is pulling in Eclipse Che, which is the in-browser, looks-a-lot-like-VS-Code kind of deal, but it is an Eclipse project, I guess, is the right way to phrase it, and it's open source. So we make and contribute quite a bit of work in that regard. Yeah, I don't think this is gonna load. An alternative way of using CodeReady Workspaces that we've covered on the show in the past: if you have a local environment, am I saying this right? I think with a local CodeReady Containers environment, you can spin up an embedded cluster running locally on your machine, if you have enough system resources on your laptop. So this is the IDE hosted in a browser, but you have the potential to also run it locally if that's a better fit for you. Another option that we have covered on the show in the past is to use your own IDE, or use vi, or whatever you have where you are currently maximizing your productivity, and then link out to a hosted cluster. Right, like you could totally use the VS Code OpenShift Connector with these clusters and do your work through that. You don't have to necessarily muckety-muck with this interface at all, right? All you need is the URL to connect to and off you go. Hopefully. Yeah, starting with VS Code would be another option. Yeah. Okay, so I'm gonna back out of this. Yeah, go ahead and bail on this. I think some of this is going to be... This is working. Yeah, this is up in... Yeah, there you go. Welcome to Node.js on OpenShift. Yep. Got your default... Everybody can hit that, you know. If you're on the streams right now, feel free, just go ahead and click that link and see exactly what I see. Yeah, I mean, that's pretty cool, right? Like a couple of clicks later and off you go. Yeah, cool. You can also add a database with ease from the same catalog.
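The "use your own editor, link out to the hosted cluster" flow can be sketched with oc and odo. This assumes odo v2-era commands and a component name that's purely an example; the token and API URL come from the console's "Copy login command" option:

```shell
# Point your local tooling at the hosted sandbox cluster.
oc login --token=<your-token> --server=<your-sandbox-api-url>

# From a local source directory, create a Node.js component
# (the name "my-component" is just an example).
odo create nodejs my-component

# Build and deploy your local code onto the cluster...
odo push

# ...and re-push automatically every time you save a file locally.
odo watch
```

That way the inner loop runs on the cluster, but you stay in vi, VS Code, or whatever editor you're most productive in.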
So it's very easy to get started from a developer perspective from the dev console. In this sandbox environment, you basically have all you need to start with the platform, right? MongoDB, Postgres, MySQL, MariaDB, or you can also use another database from Docker Hub, for instance. Right. Or a Helm chart. So you can plug and play. The cool thing to see: you can plug and play with all the Kubernetes components that you need for your inner loop, outer loop, your developer journey. Yeah, like if I wanted to hack around with some serverless bits, right? Like maybe figure out the right way to get Knative to adjust some stuff in my house, for example. I have Hue lights, I have smart devices and stuff like that. So I could totally hack together something like that here and bring it into a CRC instance running here in the house. Or, you know, I do have a cluster here in the house, like a real actual server cluster, a six-node, seven-node deal. So yeah, you can do the hard work here in this interface and then put it where you want it after the fact. And you get a little bit more tinkerability if limits have been set on things in your own cluster where you're working. You could probably pre-bake something here and then bring it home and see how it works in your environment. So I haven't been watching chat. I'm assuming everybody else is. Is there anything else you want me to show in the UI here? Let's see, last call in chat, and I'm going to skim through it real quick. It looks like folks were enjoying the discussion on trade-offs, on whether you do development on the cluster or not. I think there are plenty of valid ways to do that. It really depends on how you're going to achieve your best productivity and best use of your time. So it's up to you to map that out, but we want to give you all the tools to line it up in whatever way makes sense for you.
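Adding one of those catalog databases can also be done from the CLI. A sketch, assuming the classic ephemeral PostgreSQL template is present on the cluster (template availability varies) and using an illustrative Helm repo as the alternative path:

```shell
# Option 1: deploy a small dev-grade PostgreSQL from the built-in template.
# (List what's actually available with: oc get templates -n openshift)
oc new-app postgresql-ephemeral \
  -p POSTGRESQL_USER=dev \
  -p POSTGRESQL_PASSWORD=devpass \
  -p POSTGRESQL_DATABASE=sampledb

# Option 2: install a database via a Helm chart instead
# (the Bitnami repo is a common public example, not a Red Hat requirement).
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-db bitnami/postgresql
```

Either way you end up with a throwaway database your app can bind to during the dev loop, which fits the sandbox's resource limits better than anything production-sized.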
So thanks for the feedback in the chat. And yeah, feel free to pile in more questions if you have any as we move along. Yeah, let me go ahead and stop sharing my screen though. Yeah, yeah. Awesome. Thanks, Chris. Hey, no problem. That's what I'm here for. The next topic we had, and you kind of were helping lead us towards it a little bit, setting up, if you did have IoT devices on your local network... our next topic is to introduce the QIoT project. And Andrea and Natale, I think, are going to lead that bit if you all are up for that. Yeah, sure. I was pleased to invite Andrea to talk about it. We did a Twitch session last time with Chris, where we talked about the Hackfest and the results with the winners. So we would like today, with Andrea, to wrap up and see what's following on after the Hackfest. What is going on with the project, Andrea? And can you recap for people what the Quarkus IoT project is? Thanks, Natale. So the Quarkus for IoT Hackfest was a marketing and enablement event we ran from September last year to October. And the goal of this long-term enablement event was to give our EMEA partners the skills to implement workloads on IoT devices and at the edge using our Red Hat products, and specifically with the Quarkus framework. And this event was built on top of something new we implemented: the capability of running Quarkus natively on ARMv8 devices, and natively means 64-bit and compiled natively using GraalVM. So that's a big change from the edge computing and IoT perspective, because it dramatically reduces the amount of resources required by the devices and low-level edge servers to run the workload. And that was amazing.
From the business perspective, the enablement was quite successful, more than expected honestly, because usually when we run enablement events they are partially marketing, but mostly enablement, and we don't expect an immediate return on the investment we made. So after one week of enablement through webinars and three weeks of the hack challenge we brought to the partners, we already have the three winners running projects using the methodology, the technology, and the skills we gave them during the event. Wait, did you say they're actually running them, like, in prod? No, well, they are building the prod environment. They started some projects, so they had enough material and skills and knowledge and understanding of the solution, not just the Quarkus framework, right? An IoT and edge computing solution to propose to their end customers. So they are actually running the projects, and I guess they will make it sooner or later. This year. Yeah, I mean, it is one of those years still, right? Lucky us, edge computing projects are slightly different from digital transformation projects. When you want to implement something at the edge, or an IoT device implementation is involved, it's something new you design from scratch, right? Digital transformation, or replacing the existing infrastructure with OpenShift and cloud-native infrastructure and workloads, is much more complicated and expensive in terms of resources and time. And that's the business perspective. From the technical perspective, thanks to OpenShift TV, thanks to other talks we delivered in the last months, we attracted enough people to form a community. So this is not just the QIoT Hackfest, now it's the QIoT project. All the code that Natale, myself, and a couple of other guys who supported me wrote during the implementation of the first version of this project, we donated everything to the community.
Now we've got a project on GitHub, and the source code is available to everyone, with several people bringing new ideas to make the project and the environment we implemented initially more IoT-compliant: we are adding some strong and dynamic security, we are adding some components, mainly on the data center side of this project, to manage data differently, and we're including more protocols that help IoT devices communicate easily with the Quarkus-based services running in the data center. So it's getting bigger and bigger, and we're still looking for nice and easy ways to have fun together. We haven't got a hard roadmap, because this is a community project. We are not meant to create any product for Red Hat. This is a project to bring people in and let everyone learn new things or contribute to the source code and the knowledge we can share. So we've got people who are always happy to play with small single-board computers, who create stuff out of an Arduino board, or just print their own boards and flash stuff on top of them. So it's amazing. That is pretty awesome. Yeah, I'm happy to share the link, guys, if you want, or I don't know if you shared it already. Yeah, we shared the link to the GitHub project in the chat. If you have any other links, yeah, prefer to share them in the chat. Well, the community is actually forming, and we are working on the web pages and some more introductory documents and papers in order to give this community a kind of goal, and then we will work on the roadmap, because of course we want to improve the existing work, but we also want to cover several additional areas. So just as a reminder, the basic technical project out of the QIoT Hackfest was implementing a weather station, or, as we call it, a measurement station, which, thanks to some sensors directly connected to a Raspberry Pi, could collect telemetry around pollution and gas and send it to the data centers, to the central servers, for processing.
So we have dashboards and stuff like this. We are keen and looking forward to covering more use cases in the manufacturing area, in the energy and utilities area, toward the same goal of attracting more people with different skills and expertise. So that's kind of important. You know what would be? The edge and IoT space is very, very young. Yeah, it's very young, and this is a great opportunity for anyone watching out there to get in on the ground floor or something, right? Like I can foresee a future when there is a sensor developed to detect COVID-19 in an area, right? And we'd have those sensors spread out widely to some extent, right? Like I can see a future like that where it's like, okay, this room seems to have somebody in it that has COVID-19, right? Just from the air detection system. That would be an amazing IoT usage, right? To do something like that. You just put this thing in the lobby and you're good to go. That would be awesome. And you could see outbreaks happening in real time at that point and respond effectively, right? This is me just really riffing off of what is possible. But yeah, that's possible here, you know? Once we get the right sensor to plug in, right? That would be the greatest invention of humanity these years. You heard it here on OpenShift TV, folks. Yeah, you have the copyright. You have the copyright. That is the greatest invention ever. Like, I mean, we have so many things, and I've worked on some of these projects, right? That sense, you know, chemical, biological, whatever, some kind of leak of some sort, right? And radon, for example. Like there are radon tests where you can just put a piece of paper in the room and you get a result back once you send it to the lab.
Well, if you're just detecting the presence of a certain droplet, essentially, I feel like that is entirely possible. Yeah, the droplet is definitely possible, for the droplet itself. I think with a sensor for humidity, temperature, and the quantity of humidity, then, let's say, even the smallest droplet can be detected that finely. So you can assume the droplet could contain that too. And you can also study air movement indoors. There's a science there. Yeah, oh yeah. Yeah, that's gonna be cool. But also what the Hackfest did, you know, measuring pollution during COVID: all the restrictions on traffic have been removed. So no restrictions, all the most polluting cars are circulating. So the expectation is that the world is more polluted because no limitation has been applied. And this is bad for health too. COVID is not the only big issue; if we start having more polluted cities, that's another issue. So this research by the Hackfest is interesting too. And I would like to measure in Milan, my city, how the pollution is going. I think it's going very badly, but I can contribute such data to the dashboard. And like, you know, me here in Michigan, right? Not a lot of travel happening here, but people are still using the major thoroughfares to go to the factories where they're making cars and trucks, you know? I mean, those operations still continue. Those are considered essential services. So yeah, having that ability to just put a sensor outside, right? And just have it technically breathe, right? And get an idea of what's in the air, amazing. And being able to contribute back into a database like that, that's awesome. Yeah. I tell you, Chris, this is a small project, right? So we are trying to emulate some very big networks of sensors around the world. There are two or three already-existing big networks. One of them is built around the World Health Organization.
So it's something we are doing in a very small portion, but still we demonstrated that thanks to Quarkus technology and Red Hat technology, the Java programming language is not a remote option. Now we can easily challenge Go, Python, and the other competitors in terms of microservices and cloud-native frameworks. So that was the big, big news. And this happens natively on ARM devices, right? If you think of what's going on in the hardware world, with NVIDIA acquiring ARM and with Apple working on its own ARM-based CPU, that's gonna change things. And we are demonstrating that Quarkus can easily run on top of those CPU architectures. That's awesome. That's fantastic, right? Do you have any more requirements around the kind of hardware runtime? I think Java and Quarkus can potentially be the platform, but I heard you mention Arduino. Is there currently anything like Fedora IoT, or any OS image that you've standardized on for these embedded platforms? Excellent question, Ryan. So to run Quarkus natively on an ARM device, the ARM device must be 64-bit native, it must support 64-bit. That's why the minimum requirement when you use a Raspberry Pi, for example, is the version 3B+, because the CPU is 64-bit native, 3B+ and onward. So Arduino actually can run Quarkus applications, but not natively, because Arduino, I guess, is 32-bit as far as I recall. So 64-bit gives you excellent performance, 32-bit gives you very good performance, but running natively is quite different. So I'm working on other use cases I'd love to implement in the project. For example, I'm working on some object and motion recognition based on Java, using the latest NVIDIA Jetson. Still, you need an excellent, stable, and mature operating system. So yes, we used Fedora IoT for the standard implementation. And don't forget, Fedora IoT is going to feed into RHEL for Edge.
So there's gonna be an enterprise version of it soon. It's been announced, and we want to give a stable, official, enterprise version to what we have done. So yeah, Fedora IoT includes container technology natively, because Quarkus native means you run your Quarkus native application within a container. So we are using Podman as the container technology. Nice. Keeping it nice and light, there you go, yeah. Completely stable, quite mature, and this comes from the community on its way to becoming an enterprise, stable, and fully supported version from the business perspective. Nice. Andrea, I have a question, just to tie into today's topic. If I am excited about this idea and I want to get started, so I have the hardware, can I start with the sandbox? Do you think I can start with a sandbox to host my backend, the dashboard, all the Quarkus IoT code, and start trying it with the Developer Sandbox? I guess so, yes. So when I started implementing the project, I made it possible to play with the Raspberry Pi without sensors, so you've got an emulator if you are not keen to spend 50-plus euros to purchase the sensor board to connect to the Raspberry Pi. Everything runs within a container. So you can easily use a sandbox, because the total amount of memory for the services running on the server side is a couple of gigs of RAM, not more, including the integration layer. So the tool you need to expose the MQTT endpoint so that the stations can send telemetry to the data center, the internal data flow through Kafka, the database, implemented in the first version using MongoDB, and all the other services that make the system scalable and perfectly integrated, they are all based on Quarkus. So it's quite cheap to run. Cool, probably even less than two gigs. Everything is open source, everything can be found in the community. We're implementing the guides as well. Hopefully developers are keen to join, or just to try without contributing to the project.
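To get a feel for the telemetry flow Andrea describes, you can fake a station reading and publish it to an MQTT endpoint with the Mosquitto client tools. This is a sketch: the topic name, payload fields, and broker host are all made up for illustration; the actual QIoT project topics and schema may differ:

```shell
# Build a fake telemetry payload, roughly like a measurement station might send.
PAYLOAD=$(printf '{"station":"demo-1","gas_ppm":%d,"ts":"%s"}' \
  42 "$(date -u +%Y-%m-%dT%H:%M:%SZ)")
echo "$PAYLOAD"

# Publish it to the broker's telemetry topic (host and topic are examples);
# on the server side this would flow through the integration layer into Kafka.
mosquitto_pub -h broker.example.com -t qiot/telemetry/demo-1 -m "$PAYLOAD"
```

With the on-device emulator mentioned above, you could skip the sensor board entirely and still exercise the whole MQTT-to-Kafka-to-dashboard pipeline.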
So more info and more news are coming. Let me say, guys, the first outcome of the community, and I'm very proud of it, is the capability, and that's verified now, to compile Quarkus native applications for ARM directly on a standard Intel CPU. Meaning, just imagine you've got your enterprise environment with IoT devices or stations connected to it. You can easily make OpenShift compile the application that's supposed to run on the ARM devices, make the images available, push them into your enterprise repository, and then communicate to the Raspberry Pis that new images are available, and the Raspberry Pis will automatically download the images, without needing an ARM server dedicated to the compilation process for the images to be deployed on the IoT devices. So that's amazing. And we did it using standard container images, the UBI images from Red Hat, and standard GraalVM. So this is something the community created to help everybody, including enterprise companies, automate the generation of the official images for the devices. That's simply fantastic. And it reduces the cost of automating and provisioning the images for the IoT devices. That's quite cool. And that's officially open source as well. That's amazing. Very cool, very, very cool indeed. I put a link in the chat to a provisioning service that I've used in the past in conjunction with Fedora IoT. So if you're interested in trying Fedora IoT and setting up devices on your home network, there's a service that'll help load your SSH keys and an Ignition script onto that Raspberry Pi. I've used it in the past. It's still, I think, in development, but that Ignition script could load up Podman and start your workload, and potentially be used to bootstrap those hardware devices. So another thing to look into. I added a link in chat for you all. That's awesome, thank you for that.
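The container-based native build Andrea mentions looks roughly like this with the standard Quarkus Maven tooling. The build flags here are the documented Quarkus ones; the registry and image names are examples, and any ARM-specific builder-image override the QIoT project uses would be on top of this:

```shell
# Build a Linux native executable inside a builder container, so you
# don't need GraalVM installed locally; this is the standard Quarkus
# container-build mechanism.
./mvnw package -Dquarkus.package.type=native \
  -Dquarkus.native.container-build=true

# Package the native binary into a runnable image using the
# Dockerfile.native that Quarkus projects generate, then push it
# (registry and repo names are illustrative).
podman build -f src/main/docker/Dockerfile.native \
  -t quay.io/example/qiot-service:latest .
podman push quay.io/example/qiot-service:latest
```

From there, a pipeline on OpenShift can run the same build and push, and the devices just pull the new image from the enterprise registry, which is the flow Andrea describes.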
Actually, Ryan, that's very important, because unless you have a Linux-based host machine to flash Fedora IoT directly onto the SD card using the tools provided by Fedora, you otherwise have to use online tools to manage SSH keys and stuff like this. Cool, awesome, all right. So what's next on the agenda here, Ryan? You know, I had a big section for today on reviewing serverless, and I've just crossed it all out in our planning doc. Yeah, no, this is great though. I appreciate how much time we put into this, right? Like the sandbox, important; the QIoT project, important. These are things that are gonna help people in the long run, right? So I'm very happy that we talked as long as we did about both. And I'm really excited about projects that people can do on their own in their house, since I'm stuck in my house, like, what can I do to try to hack my environment? Hopefully other people are also inspired by similar tasks. So we'll try to have more on this topic in future sessions of the office hours. Next up, we wanted to mention a couple of upcoming topics. So there's, let's see, this was actually a past topic. There was, last week I believe, an OpenShift Commons briefing on Knative. If you are not already up to speed on Knative and serverless technologies, I highly recommend giving this link a review, especially before our upcoming... let's see when we have it, if we have a date here. It looks like we will be having a Knative deep dive with Paul Morie from Red Hat on February 2nd. So we'll give you another reminder, but yeah, definitely take a look at the kind of introduction and ask-me-anything round on OpenShift Commons before we jump into the expert-level presentation with Paul. Yeah, Paul's the Knative lead here at Red Hat and also a good friend of mine. So yeah, it's gonna be awesome to have him on the show talking about Knative and serverless. I have a lot of questions to ask. That's right.
I should throw some oddball questions at him. He's actually working on some code to mess with the lights in the house, that kind of deal. So that should be interesting. Like, you know, hopefully we'll both be able to run it if it all works. So I might be over-promising here; it might just be terrible. We'll see. Anyway, exciting topic, and a guest speaker who's very active in the upstream community, so keep that in mind. We'll have it on the calendar of events, so take a look at the calendar at OpenShift TV. You can add it to your own set of calendars if you are inspired to do so. Yes, definitely click that plus on the bottom right there and add it to your calendar. And I'm sorry, the Restream bot is being a little spammy today. Sorry about that. Another topic we have coming up on Thursday this week: there's a deeper dive than what we did on the developer sandbox. So if you're interested in much more detail around that topic, definitely come back on Thursday, 10 a.m. Eastern time, for much more information about the dev sandbox and how that project is likely to progress. Yeah, and I've already put that as a scheduled event on YouTube. So if you want to watch it on YouTube, or just tune in here on Twitch, Facebook, wherever you're at right now, same place kind of deal. Go ahead, Ryan. Oh, I was gonna say the only other thing I have left on the lineup for today was to go through a form; I wanted to get feedback from our viewers. Let's see, how many viewers have we got today in chat? 42 total is what it looks like, across all services. Well, for the folks in chat, and I'll probably post this online later, we're very interested in hearing more about what you would like to see in these office hours sessions. In the past, we've had a variety of topics. One of them: Serena is here all the time showing us upcoming features, what's coming down the engineering pipeline for us, what's coming up soon. So that's a recurring topic that we have been focused on quite a bit. I have that in the list.
If you're enjoying that, give it a high score on the list of your preferred topics. Other things I listed in there: kind of just getting started with specific languages. I didn't really give you much choice to pick a specific language; there's just a category for getting started with languages. If there's a particular language that you really wanna see, put it in the fill-in-the-blank section at the bottom and let us know which language, in addition to marking that category as one that you're interested in. Let's see, other topics I had on there. The console customization contest was one of our most watched shows last year, so let us know if that was a crowd favorite from your perspective. We could definitely do more kinds of contests where folks are encouraged to come on and share what they've built. I don't know, I always enjoy lots of crowd interaction, so I don't feel like I'm just stuck in my house. Thanks again to folks in chat for keeping things active for us. And let's see, I'm gonna pop open that form and take a look and see if there's anything else I've left out. Deep dives into specific technology stacks, so next week, Paul Maury with Knative and serverless. Another option we have is interactive workshops. We've done, in the past, "first hundred people all get seats," virtual seats: here's your user account, let's all march through an introduction to Kubernetes. Since we have fewer than a hundred people in chat today, everyone could get a free seat, right? So that's something we could also consider doing. Definitely give us feedback in the form; we'd love to hear what you're interested in and what you've been enjoying so far. So, there's a question in the chat: is there a serverless scenario for backups? For example, a function that downloads something to storage, and another function to compress it in the storage array or wherever it's being stored, without relying on the host for the processing.
Does anybody here on the call know of anything like that out there already? No, go, go, go. Oh, I was trying to listen to what you said, but I should just read the chat. You were saying download backups, and this is specific to serverless? Yeah, they wanna know if there's a serverless way of doing backups, which is essentially two functions, right? Like, move data from A to B, and then compress the data if need be. That would be interesting. My initial thought was: don't use serverless, use batch jobs, use the Jobs abstraction. Yeah, you could totally do it with Kubernetes Jobs, yeah. And I think you should use a Kubernetes Job, because you're violating the paradigm if you do this with serverless, because serverless is best-effort by definition. If you rely on backups, you want them consistent, and those functions are best-effort, let's say. In the paradigm, there's no guarantee that they will run and that they will finish. You have to take care of it yourself when you start the function and when you gather the results; if anything goes wrong, you have to restart the function. So I don't think this is a consistent way to do backups. It's much better to rely on Kubernetes Jobs, because the Kubernetes scheduler will make sure the job is executed to the end. And you'll get logs, and you'll get alerts on failures that tell you exactly what went wrong, whereas a function will just say "here's the error, sorry," and you have to go hunt for it, or set up alerting for it and everything else. Exactly. And when you think about serverless, or functions as a service, under big memory pressure those functions can be killed, because the paradigm says that no individual function is important; what's important is that you can scale them up massively, asynchronously, when needed. So they can be killed by the scheduler if there's some memory pressure. Right.
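The Jobs-based approach recommended above could be sketched like this: a CronJob that downloads the data and compresses it onto persistent storage in one scheduled run, with the retry and logging guarantees the panel describes. All names, the image, the URL, and the PVC are assumptions for illustration:

```shell
# Illustrative nightly backup as a Kubernetes CronJob.
# Names, image, source URL, and PVC claim are hypothetical.
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"        # every night at 02:00
  jobTemplate:
    spec:
      backoffLimit: 3          # failed runs are retried, unlike best-effort functions
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: registry.access.redhat.com/ubi9/ubi-minimal
            command: ["/bin/sh", "-c"]
            args:
            - curl -sSf https://data.example.com/export > /backup/export.json
              && gzip -f /backup/export.json
            volumeMounts:
            - name: backup-store
              mountPath: /backup
          volumes:
          - name: backup-store
            persistentVolumeClaim:
              claimName: backup-pvc
EOF
```

The point about observability holds here: `kubectl get jobs` shows which runs completed, and `kubectl logs job/<name>` shows exactly what a failed run did, instead of having to chase down a vanished function invocation.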
So once somebody stands up a job that sucks up too much memory, those serverless functions are like, "well, we're just gonna hold off for now," or they get de-prioritized completely and just die. So yeah, if your environment is very lightweight, maybe you could get away with it, but I wouldn't recommend it, right? Yeah, yeah. Cool. The trade-off of a highly available system is that it's designed for fault tolerance, right? But you need to be able to accommodate the faults in a consistent way. And so yeah, batch, or Jobs, is probably the right abstraction for this. Yeah, cool. And there's always OpenShift Container Storage, which has a lot of that functionality kind of built right in; you just tell it where to go dump things, and that's all there for you. Yeah, so if there are no more questions, I'd like to wrap just a little early today so I can hop over to the OpenShift Commons briefing, which is gonna be talking about, I think, Windows Containers today. Double-checking... yes, Windows Containers, and that's a hot topic on the channel too. So yeah. Excellent. Anybody got anything else for the call or for the audience before we jump off here? Thank you to everyone that joined the call and experienced this live stream with us. Really appreciate the audience, as always, so thank you very much. Special thanks to Andrea for joining on kind of short notice. Great to have you on today. Yeah, it was awesome to talk with you. Okay, Andrea, thank you. Thank you, guys. All right, thanks, everybody. We will see you next week, or in a few minutes here with OpenShift Commons. Thanks, all. See ya. Bye.