So hello, everyone. Welcome to the second day of DevConf.CZ 2022. It's my pleasure to announce the next meetup, actually the first meetup of today, which is Innovation in Complex Projects Empowered by Communities. If you have any questions, use the Q&A tab, or just enter the meetup by clicking share audio and video and I will bring you into the room as well. And I hand it over to Natale.

Hey, good morning, everyone. How are you today? I'm very excited to be here. In this meetup room today we're going to talk about how innovation in complex projects is empowered by communities. This is a very good example of how a community can build up around an open source project and then develop it, and you will hear from the people in this community how cool it is to start a community, join a community and contribute actively to it. So I'm really excited to be here at DevConf 2022, and I'm looking forward to being there again in 2023 on site. But for this year, let's have this meetup virtual, because we have lots of cool stuff to talk about. So if we go to the next slide, we can kick off this conversation.

We have an agenda for today. We would like to introduce you to today's speakers, you see some folks here in this window, and then we go into the technical details with an architectural overview of how this project is built. We're going to share more info. Yesterday we had a session with Andrea, Ben and Mattia talking about how to do cross-platform builds for the edge, so today we're going to talk about how to run those workloads at the edge. Then we're going to do another walkthrough of the technology, on how containers work in this environment and how we can build a secure environment. And finally we connect the work to open source; we have some appointments for the community that we would like to share. So if we go to the next slide, please, we can start.

Now we would like to introduce the team. So please come into the chat and say hi to the team. Let's go to the first one. The folks you see here are the Quarkus for IoT community, let's say the leaders, the people that are actively contributing to the community. Everyone is really welcome to join this community; we're going to share a link in the chat soon on how to join. You see some of these folks presenting today: we have Andrea, Andreas, Mattia, Mario, Markus, Jeff and Ben, and I would like them to introduce themselves today. So if we go to the next slide: Andrea, I'm handing over to you. How are you?

Thanks, thanks Natale, and hi everyone. I hope you're still enjoying DevConf. It's remote, and again we are looking forward to being with you next year on site. So yeah, you've probably heard about me yesterday: Andrea Battaglia, from Italy. I live down in the south, so I'm still enjoying the warm, shiny and sunny land of southern Italy. I'm here today with my peers to tell you how we can work together on your project, on some innovation at the technology or the community level. I will also introduce some key architectural points and explain why all these people are here, ready to have an open conversation. Please feel free to stop us anytime to share your voice and to ask questions; we will make sure this is an interactive session. We are not here just to present what we have done and what we are about to do.
As you can see from the picture, Andrea is an experienced traveler, used to joining open source conferences, so you will see him for sure at the next DevConf on site. So yeah, let's go to the other team members. Hey Ben, how are you?

Hey, I'm good, thanks. So yeah, I'm Ben Salyard. My day job: I'm a solution architect at Red Hat. But as part of the community I've been focusing a lot on some of the container technologies and how we actually bring those closer to our IoT and edge devices. And yeah, I'm based in the Netherlands.

Cool, cool Ben. Yesterday Ben also did a great demo; I really recommend watching the recording, which will be available soon. And let's go to the next team member. Hey, Mattia.

Hi, I'm Mattia, a principal at Red Hat, based in the Alps region, Switzerland and Austria, kind of a specialist in a lot of stuff. But yeah, I like to dig around cool stuff. That's why I joined this community: it's always working on the edge, on cutting-edge technology. And I like doing sport, recently climbing, as you see in the picture, trying to escape from customer meetings and do fancy stuff.

Cool, thanks Mattia. Yeah, this sport looks really adventurous, and I like the analogy you made, that you try to escape. Sorry, this is a meetup, so I would like to ask a question to the audience here: do you really think that's Mattia? Because that doesn't really look like Mattia. Probably that's a picture from the internet. That's me! Come on, look at my arm. Look at my arm. Yeah, maybe he just rotated the image; he was not really doing this. Okay, let's see. We have 20 people connected. I would like to give a big shout-out to all the people attending today. I know it's Saturday and it's virtual, but I promise you this is a really cool topic, and you will see why in a few seconds. Let's finish presenting the team and go to the next one. Hey, Andreas, how are you?

Hi there. I'm Andreas. I also work for Red Hat. I'm in the far south of Germany; if you go south in Germany and you can't get any further, you'll probably end up somewhere near me. I'm the only ops guy in this team, in the shark tank of developers speaking Java. I don't speak Java at all. I'm taking care of the rollout, PXE booting, the edge device setup, the OSTree image and that kind of stuff. That's what I do in the project.

Fantastic. And I like this picture, Andreas, this is where you live, basically. It's fantastic. It's taken over there somewhere. Wow. So, Andreas, as you heard, is the ops person. If you want to hear about RHEL for Edge, or talk to a down-to-the-kernel system administrator, this is the person to talk to. And if you have any questions, please send them in the chat. And of course, Ansible automation. And of course, Ansible automation, yeah, of course.

Oh, as we say in Italy, dulcis in fundo, the best at the end, just joking. My name is Natale Vinto. As Andreas would say, we are all developers here, all kinds of Kubernetes developers, so here's some YAML to present me. I'm a developer advocate here at Red Hat for OpenShift. You can follow me on Twitter, Natale Vinto. I love football. Actually, I didn't push this container image yet, but I will soon. You can see on the top right this book that I wrote with another colleague, Markus Eisele, about how to modernize enterprise Java. And in the book, with Markus, we also mentioned this project, because it's very, very cool. And I agree with Andreas.
Yes, this bunch of folks here are basically developers and more developers. So thanks. This is me, and I think we can go next. That's it for the YAML person. Let's have now an overview, an architectural overview, as we discussed. So if we go to the next slide, I think we can hand it over to Andrea again, because he can present it. Andrea, what is this project about? What are we talking about?

Yeah, thanks Natale. So let me start here, because Andreas is right: he's really the only ops guy here, even though only a few of us develop in Java, actually just Mattia and myself. One of the other main contributors, Jack Newsom, does too, but the others, so Ben, Andreas, also Mario, are amazing technicians. So, we started this community because back in the days, two years ago, I was implementing on my own, supported by Natale, another colleague of mine, a small POC to play with Quarkus at the edge. That grew so much that we got the interest of IBM, Intel and others. So instead of having just partners learning how to do this kind of stuff at the edge, we wanted to build something that could be useful to others, right? Useful to customers, to people from outside the community, because several people contribute to the community. As I said yesterday as well, there are some great specialists in IoT; I can mention Domenico Briganti, for example. They are specialists in IoT, and by IoT we mean small sensors, small devices. Not all of them can be covered by our project, but definitely all of them are pluggable into it. And this is what we are trying to do: we present at the enterprise level what we keep building in the community. The only boundary, the hard boundary, in our community is: let's try using enterprise products. It's still open source, it's still completely open and available and consumable and forkable, whatever. But still, we want to base our discussion, our interaction with other people, with technical people and also technical business people, on top of something that is reproducible, that could be used as a POC or a demo, like we demonstrated yesterday, something tangible, something the market could use. And so we use OpenShift; Andreas, for example, mentioned RHEL for Edge. So it's not just Fedora IoT, the upstream version of that, but the real enterprise version of the operating system at the edge. And this is what we try to do. Based on complex architectures and setups, we try to build something that is reproducible. That means, in turn, we don't do only Ansible, we don't do only Quarkus, we don't do only RHEL for Edge. We do everything together, making sure that all the products, the third-party products compared to, or in relationship with, the Red Hat portfolio and platform, work together to do something that makes sense. And we covered several use cases: we covered generic edge computing use cases, we covered manufacturing, and we will probably cover something around energy and utilities, something that we know the market is trying to understand and work on. So the more expertise we have, also around third-party products, the better, the larger and the more reusable the POC we build at the end is going to be. And we have development cycles. As contributors, we are not fully allocated to this; we try to allocate some spare time to the community. And this is what also makes sense: let's build something that is requested, that is needed, and something that makes us have fun with that technology.
And okay, this is not very readable, but that's just to give you an idea of how we think of complex projects. And this is something we built entirely. So we are able to cover SQL and NoSQL storage, we are able to cover RHEL for Edge, and Andreas takes care of some automation with Ansible, for example. Andreas also mentioned PXE boot, which is an incredible technology that of course takes time to be implemented and tested, and also applied to specific use cases or projects. And on the data center side we, of course, based everything on the OpenShift platform. But again, to roll out applications, to roll out container images and workloads, or to set up pipelines, we use technologies that maybe are not worth mentioning most of the time, but that are very important to customers, to partners, to everyone here who is approaching the cloud-native world. So we used Tekton pipelines, we used Helm charts, things that the people you can see here already had knowledge about, or that they wanted to learn. And this is the beauty of this community. So thanks a lot. I guess I can hand back over to you, Natale.

Yes, thanks, Andrea. That was really interesting. If you can go back one slide. This was very cool, as you can see, for two reasons. One is the architectural overview, so all the people can see how the project is made. The second: I will use this image for an upcoming tweet, so I would like people to keep coming in while we're speaking, because there is lots of cool stuff we're going to talk about. Andrea, correct me if I'm wrong: the innovation today, for this year's edition, is RHEL for Edge. I know we started with Fedora IoT; now we're using RHEL for Edge. And then we integrated all the components for the edge and also for the OpenShift part. As you can see here, there's the edge part and the data center part, where there's lots of innovation around messaging, the queueing system. In fact, Andrea, if we go to the next slide, I think we can go to the next step, and if we go one more, we continue the flow. Yes, now we talk about running it at the edge. So let's explore this part, and let's hand over again to Andreas, because he was in charge of building practically this part. And I'm really curious about how you made it, Andreas.

So thanks. The idea that we're having here is, you know, this whole setup is controlling a manufacturing plant, and the edge IoT device is the one attached to the machine that actually does the manufacturing, controlling the manufacturing process. So the idea is you will not have full data center capabilities here. You will not set up a Satellite to manage and patch things. You want to have it quickly rolled out, you want to have it easy, you want to have it secure, and you want to be flexible. What happens if the IoT device attached to your factory machine dies? So what we did here is we took the Fitlet2 device, which is an industry-standard Intel-based device. You could take other devices, and of course, in the long run, you could also take ARM-based devices for that. And to create the operating system, we use Image Builder, which is a part of RHEL, but is also, of course, an upstream open source project, based on Lorax and Welder, that you can use to build Fedora or CentOS Stream images or whatever. So we went with the OSTree imaging, because that has a couple of advantages. First of all, you have a simple, quickly rolled out image, which is immutable.
It also mounts the root filesystem read-only, which is a small but important fact. Because remember, if you have a machine that the IoT device is attached to, the manufacturing machine has this big red button. So if somebody working on the machine gets into an emergency situation, they don't care and say, well, I'll save you later, let me cleanly shut down that machine first. That's not going to happen. They just hit the red button and turn it off. And you know what happens to read-write filesystems when you just turn the power off without a shutdown. That's why the OSTree image built with a read-only root is quite stable in that regard. So we deploy the image to the edge using PXE boot with UEFI. We could do BIOS, but as mentioned in an earlier talk today, it's not secure, it's slower, it uses TFTP and that kind of stuff. So we do PXE with UEFI. We could also do UEFI HTTPS boot; we just haven't implemented that yet. But the PXE UEFI boot that we do could also, since we use shim as the boot loader, be built into a UEFI Secure Boot chain, once you configure your own signing chain for that. And all you need in the data center is a simple web server with dnsmasq for the TFTP part, where you actually get the image and push it to the edge. On the edge itself, on the device, we just run containers with Podman. And actually, depending on the use case that we're thinking about, you can start these containers through Ansible from a central management system, like an Ansible controller, or, what you also can do, which is maybe more like what we're using now and which is more comfortable, you just push the Podman configuration as a systemd file. So you create your pods as systemd units, so they auto-start, you can have dependencies, and so on and so forth. Why are we using Podman and not something like MicroShift? We see this edge device more with a local configuration, maybe local drivers, maybe you have GPIOs that actually tie into the machine that you control. So you want to have the possibility to run containers and applications closer to the hardware; you want to run local drivers and that kind of stuff. And therefore MicroShift in this case might not be the optimal setup, but it could be in a different one. So we documented and used it with RHEL, RHEL for Edge, but it will also work with Enterprise Linux clones and Fedora. As I mentioned, the root filesystem is mounted read-only. And there are some links in this presentation pointing to other presentations at this conference about edge and IoT devices that were very, very interesting, and we'll see some more on the next slide, please.
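To make the Podman-plus-systemd approach Andreas describes a bit more concrete, here is a minimal, purely illustrative Ansible sketch. The image, container name and paths are invented, and it assumes the containers.podman collection is installed on the controller:

```yaml
# Hypothetical playbook: image, names and paths are placeholders,
# not the community's actual configuration.
- name: Deploy an edge workload as a Podman container managed by systemd
  hosts: edge_devices
  become: true
  tasks:
    - name: Run the container and generate a systemd unit for it
      containers.podman.podman_container:
        name: edge-telemetry
        image: quay.io/example/edge-telemetry:1.0.0
        state: started
        ports:
          - "8080:8080"
        generate_systemd:
          path: /etc/systemd/system
          restart_policy: always
          new: true

    - name: Enable the generated unit so the workload survives a power cycle
      ansible.builtin.systemd:
        name: container-edge-telemetry.service
        enabled: true
        daemon_reload: true
```

With new: true, Podman typically names the generated unit container-<name>.service, so after the red-button scenario above the container simply comes back on the next boot.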
Where I would like to go from here: we had a lot of stuff about security at this conference, and the last talk was about full disk encryption on edge devices, what happens if the edge device is stolen and these things. I actually want to try to push in a different direction, because remember, you have an edge device with an unstable SD card, and you know how often SD cards break, especially when they're doing read-writes for a long time or run in, let's say, hostile environments like a manufacturing plant. So I'm thinking about a diskless IoT device: using PXE boot not to actually kick off the install process, but to run the device diskless, because it has two network cards, it can do failover, and it can boot over NFS with root over NFS, or root over iSCSI even. So if somebody steals the device and rips it off the machine and takes it home, there's nothing on it, because it doesn't even have a disk. So that's one of the ideas I want to follow up on in the future of the project. And also the possibility, as I mentioned, of running MicroShift in addition to Podman containers, or, depending on the workload, as an alternative to Podman; it could be both of those. So that's going to be interesting. And also, after that very interesting talk yesterday about using NVIDIA devices with MicroShift to do AI stuff, you know, like intelligent video surveillance where you put the AI, the on-camera face recognition, directly on that edge device, like what you can do with the Jetson Nano or Jetson Xavier or something like that.

And sorry for interrupting you here, Andreas. This is something we are in discussion about; we have an ongoing discussion with NVIDIA, and we want to do something interesting on that.

And I want to use basically the same edge build process that works for the Intel-based Fitlet also for devices like that, and also use the same distribution, so that you could have a mixed environment and have your edge IoT device network with something attached to a machine controlling it, but also have kind of a surveillance device that does the face recognition directly. So that's basically it for the ops part.

Yeah, for the audience I also wanted to mention that back in the days, when we were using a Raspberry Pi for the first use case, we actually used a direct connection over GPIO for the sensors connected to the serial port, using the open source drivers from the technology vendor, the sensor vendor, and that worked; they were Python based. So we had several containers running at the edge, and one directly connecting to the sensors, exposing REST APIs to make the business logic at the edge capable of reading that data, of course decorating it, and then sending the telemetry to the data center. So there's plenty of stuff we can do, actually. Exactly. And that's it from my side; I don't want to take all your time.

That was very interesting, Andreas. So we've seen the RHEL for Edge part, the operating system. We mentioned MicroShift as a possible next step, as an innovation in this part. Now let's go to the other part. So let's talk a little bit about how container technology, open container technology, enabled and sped up the innovation. For this part, if we go to the next slide, please, we have Ben talking about it.

Thanks, Natale. So yes, I'm going to just quickly tell you about how we've used some of the container technology to help speed up our development at the edge and for IoT. I'm not going to go into all the technical details; if you want to see that, you're welcome to look at the recording of the session we had last night. But in essence, by using containers on our IoT and edge devices, we give a lot of flexibility to developers, right? So think about having to manage all your dependencies when you want to deploy your applications. Using containers simplifies that process quite a lot. Like Andrea also mentioned, it's also possible to deploy our applications as a set of multiple different containers for a specific need, right? So you get that isolation when you want to deploy something. So in essence, that allows us to ship our applications quicker to our edge devices. Next slide, please, Andrea.
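Picking up Andrea's Raspberry Pi example for a second before the multi-arch part: the multi-container pattern he describes, a vendor driver container next to a business-logic container, could look roughly like the pod below. The image names and device path are invented; on the device, such a definition can be started with podman play kube.

```yaml
# Illustrative pod only: images and device paths are made up.
apiVersion: v1
kind: Pod
metadata:
  name: sensor-gateway
spec:
  containers:
    - name: sensor-driver
      # Hypothetical containerized vendor driver (Python based in the original setup)
      image: quay.io/example/sensor-driver:1.0.0
      securityContext:
        privileged: true          # direct access to GPIO / serial from the container
      volumeMounts:
        - name: serial
          mountPath: /dev/ttyUSB0
    - name: business-logic
      # Hypothetical Quarkus service: reads the driver's REST API, decorates the
      # data and publishes the telemetry towards the data center
      image: quay.io/example/edge-logic:1.0.0
      ports:
        - containerPort: 8080
          hostPort: 8080
  volumes:
    - name: serial
      hostPath:
        path: /dev/ttyUSB0
        type: CharDevice
```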
So as part of the community, we also ran into some challenges when dealing with different architectures. Like Andrea also mentioned, in the previous run we dealt with ARM devices, on our Raspberry Pi for example, and in the last hackfest run we dealt with Intel architectures, right? So to help us actually develop and compile our Quarkus applications, the community developed a set of multi-arch containers, essentially, that allow you to run containers for different architectures in an emulated way, with QEMU embedded inside the container. So we are maintaining a set of these native builder images that allow you to get rid of your virtual machines and such to actually compile Quarkus natively. Next slide, please, Andrea. Ben, sorry.

If I may, what you are talking about is quite interesting, and this is something we are trying to achieve now in the community. If you think of all the things that these gentlemen are talking about, and think of sensors somewhere, maybe not attached to the devices but in the network, right? Thanks to this work Ben has done, we can decouple the real sensors, and even the physical CPU architecture of the devices, from the workload, which lives in a completely isolated space. So if Andreas works on the PXE boot, right, and Mattia thinks of the security around the data coming through the network from the sensors to the data center and its decoration, then the business logic working on the data and transferring it, or doing some AI analysis, can be completely decoupled and could run on MicroShift, could run on RHEL for Edge, could run on Fedora IoT, on whatever kind of platform or operating system that provides, correct me gentlemen if I'm wrong, a container technology to run the workload, right? So that brings cloud-native stuff everywhere, actually.

Yeah, exactly. And I mean, if we go to the next slide, why this is important is that essentially your developers working on building these solutions can keep a consistent workflow, right? So if they are familiar with container technologies and use Kubernetes for stuff like your data center components or your edge location components, they can keep that consistent workflow, no matter what the actual architecture of their edge devices, or their IoT devices, is. So that's also part of the extensions that we did, to essentially allow teams to incorporate these builder images that we create into their workflow, opening up opportunities for them to test their applications on Kubernetes, even if it needs to run on a Raspberry Pi, right? If we go to the next slide: some of the challenges that we see in the future, though, are, well, how do we manage devices at scale? Since now we've sort of tackled some of our developer workflows, and we've tackled stuff like our packaging formats with containers. But if we now need to scale this up and start managing hundreds of devices, then suddenly the management of those also becomes a bit of a challenge, right? And for that, I'm also quite excited to talk about MicroShift, which essentially adds a layer on top of Podman to make the device appear like a Kubernetes cluster, right? So it's this lightweight layer with a low footprint that allows you to actually manage your edge devices as any other Kubernetes cluster.
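To illustrate how builder images like the ones Ben mentions can slot into a Kubernetes workflow, here is a purely hypothetical Job that compiles a Quarkus application natively for arm64. The builder image, repository URL and resource figures are invented and may not match how the community's images are actually invoked:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: quarkus-native-arm64-build
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      volumes:
        - name: workspace
          emptyDir: {}
      initContainers:
        - name: fetch-sources
          image: alpine/git:latest   # small helper image, just to clone the sources
          args: ["clone", "https://github.com/example/my-quarkus-iot-app.git", "/workspace/src"]
          volumeMounts:
            - name: workspace
              mountPath: /workspace
      containers:
        - name: native-build
          # Placeholder for a builder image bundling GraalVM/Mandrel for the target
          # architecture plus QEMU user-mode emulation
          image: quay.io/example/quarkus-native-builder:arm64
          workingDir: /workspace/src
          command: ["./mvnw", "package", "-Dnative", "-DskipTests"]
          resources:
            requests:
              memory: "4Gi"          # native-image builds are memory hungry
          volumeMounts:
            - name: workspace
              mountPath: /workspace
```

The point is simply that the cross-architecture build becomes just another workload on the cluster, so developers stay inside the Kubernetes workflow Ben describes.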
Now, what makes MicroShift important or useful is that if you couple it to your cluster management tools, like RHACM, for example, you can actually start managing your edge devices as just another Kubernetes cluster and deal with stuff like updating the OS on your devices, making sure that they're all on the correct version, right? So it simplifies the management of your devices. So I'm quite eager for our community to actually explore how this can help simplify the management of these devices. For more information, I think Natale has also added some links in the chat if you want to explore those projects yourself, right?

Yeah, thanks, Ben. I put some links in the chat. This is really, really cool and interesting. We're talking here basically about the future of managing multiple, ubiquitous Kubernetes endpoints, clusters. And if we look at the edge, now we have RHEL for Edge and, on top, the MicroShift container. So it's kind of a connector that can talk to a Kubernetes controller, and so you can control Kubernetes also at the edge, but with a minimal, container-based footprint. So that's the evolution around the edge. And let me also, since you mentioned it, let me also mention that Open Cluster Management is the Red Hat effort on open sourcing our Advanced Cluster Management for Kubernetes, which is a tool to control multiple clusters, multiple Kubernetes, on multiple clouds. This is fully open sourced now, and it can be used to control several workloads, in the data center and, moreover, at the edge. So there's lots of excitement around edge development. And since there are many, many pieces, many, many computing units, I think, Mattia, it probably makes sense to start talking about security. What do you think?

Before Mattia goes, I have an interesting story, Mattia, because this is a meetup, right? So I want to push all of you a bit; it's an endless discussion, actually. So, we had the chance to use Fitlet devices, so enterprise-grade stuff. We had the chance to use Jetson, because we have some at home. And we had the chance to use the Raspberry Pi, the Raspberry Pi 3B Plus. We wanted to have some physical hardware restrictions to be able to really prove that Quarkus native, as we mentioned yesterday, can run with a very small footprint on small devices. But now the question is: is it really worth it to use MicroShift, and where should I put it? We connected earlier to test the setup here, and with Andreas we had this discussion, and I also encourage Mattia to discuss this during his piece. It's interesting to see: why should I put MicroShift on an edge device if I'm short on resources? I mean, on a small device, it could be an Arduino, a Raspberry Pi, whatever, I cannot put MicroShift there, or it's not worth it. But if you think, based on the initial architectural overview, I have some powerful servers that do AI or ML, whatever, and then I have the devices, in the same local area or not in the same local area. I could put MicroShift somewhere in between, on an edge device, but shipping more physical resources. Gentlemen, what do you think? Otherwise, MicroShift is...

It depends. For me, it depends on the architecture and on the kind of application that you have. Think about it, because MicroShift also works with the oc CLI, you know, the OpenShift command line.
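A small concrete aside on that point: the same plain Kubernetes manifest, with invented names and image, can be applied unchanged with oc whether the kubeconfig points to an OpenShift cluster in the data center or to a MicroShift instance on the edge device.

```yaml
# Minimal sketch; names and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: telemetry-gateway
  labels:
    app: telemetry-gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: telemetry-gateway
  template:
    metadata:
      labels:
        app: telemetry-gateway
    spec:
      containers:
        - name: gateway
          image: quay.io/example/telemetry-gateway:1.0.0   # hypothetical Quarkus native service
          ports:
            - containerPort: 8080
          resources:
            limits:
              memory: "128Mi"    # Quarkus native keeps the footprint small
```

In both cases the command is just oc apply -f telemetry-gateway.yaml; only the target kubeconfig changes.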
And if you have Advanced Cluster Management, or Open Cluster Management, which is the upstream of that, you can also manage your MicroShift with your Advanced Cluster Management. So if the applications that you will use on the edge are quite similar to what you run in the data center, and you have central management for all your Kubernetes, you can also include the edge in that management and then push your applications directly from your dev cluster or from your data center cluster straight to the edge. But that assumes that you have more, you know, generic compute and less hardware-level control inside your application. If you're in an industrial application that says, I need to have an application which is closer to the hardware, which actually uses GPIO or this kind of stuff and needs special drivers, then you're better off, I think, with pure Podman. The overhead on the hardware, I don't think, is that big a point, because MicroShift actually uses one CPU core and a maximum of two gigs of memory, which is nothing compared to what you have. Think about it, you know, even the Raspberry Pi comes with four gigs today, and eight gigs is the next step, even for edge devices. That's not a lot, and it's absolute commodity. So I think it's more a point of view of how you manage your environments. If the edge is completely disconnected, a local, isolated network, then you'd rather go with, you know, deploying Podman containers from a local registry. And if you include it in the big management, you might be better off with MicroShift.

So the connectivity decides whether it's better to do what Ben was discussing rather than having a central cluster management, right? So where is the boundary between Ansible management and Advanced Cluster Management?

Well, Advanced Cluster Management can use Ansible under the hood. It's a superset; it can also run Ansible workloads. And yeah, to come back to your question, Andrea, it depends, also as Andreas said, on what you want to do. Do you want to control those edge endpoints, or do you consider them as a far edge that is eventually or always disconnected and then reports via batches or something like that? But if you want to control them, I think it makes sense to consider them as Kubernetes clusters, or Kubernetes endpoints, so you can have consistency across all your deployments. If you want consistency and standardization across one deployment model, I think you can consider using MicroShift in this case; otherwise, for other, more disconnected, less interactive endpoints, you can probably rely on just Podman and those kinds of workloads.

It's an open debate, very cool. And if there's somebody in the chat that would like to contribute to this, what are your thoughts? Please send them over in the chat. It's an open discussion, it's a meetup. I know it's virtual, but it's still open and we would like to hear from you. Yeah, so in the meanwhile, since we're talking about multiple clusters, multiple workloads: hey, Mattia, what about security?

Yes, it's like always the last piece of the puzzle. Also, just to conclude this MicroShift topic, I would say that having a standard solution across different platforms is ideal also for developers, right? And as we're talking about security, I see this as the next step: to create a unified service mesh, federated also out to the edge, which is really nice. So you don't need kind of a custom solution to satisfy the security, but you can have a standard way to secure the edge as well.
So the challenge, you see, is how you can scale the security of microservices in general on edge devices. Think about the renewal process, the revocation process, for a thousand devices that are connected to your platform. So the idea is to have a kind of standard solution, right? And working with Kubernetes and microservices, the de facto standard provided by the community is cert-manager. Because cert-manager provides an easy tool to manage certificates, a standard API to interact with multiple certificate authorities, like Venafi and HashiCorp Vault, which we used for our POC in the community, and it really gives developers the confidence to work on the security part. It helps you speed up development, because you can create a self-signed certificate, and when you're ready, when you run in production, you use the real certificate authority. And as well, in case you work with an API which is not standard or not common, you can implement a custom integration, because it provides a standard API to export the certificates from your custom certificate authority. Next, please.

And of course, when you work cloud native, you want to use Quarkus. And Quarkus loves Kubernetes, because it provides a native extension based on the Fabric8 Kubernetes client, and so it makes it easy to work with the Kubernetes environment. And why do we want to use Quarkus? Let's talk about how to dynamically provision certificates in a Kubernetes environment. Of course, when you work with microservices, you want to recognize not just the server but also the client, and this is called mutual TLS. Mutual TLS is an additional security layer on top of standard TLS, because you want to also recognize the client who is calling, not just the server. So in this architecture, we see the integration that we have done for this POC between the PKI provider and the cert-manager component. You need to click. Yes.

So now, considering this is a meetup, you, the audience, should know that Mattia is expecting me to know exactly when to click to the next slide. I do like this: twenty slides, with one more line on every slide. It's crazy. Okay, may I show all of them? Yeah, okay. You're so kind. How many of them, seven? Eight. Eight, I guess. Tell me the numbers.

So in cert-manager there is a component called an Issuer. The Issuer is your interface to your PKI provider. So when you configure the Issuer, you configure it to say: please call your PKI and give me certificates with this specification, based on this domain, this value, whatever. What happens is that once you create an Issuer, you are able to start creating Certificate resources. And then cert-manager is going to watch the Certificates, and based on the information that you give it, like the common name, the subject alternative names, the type of the key, it is going to contact the PKI provider, get the certificate, and create a Secret for you. And then, when you have a Secret, your application, client or server, can mount the Secret in the classic way, and they can start doing the client-server connection with mutual TLS capability. And then the user can access the information from the client. But how can you... next, please. Oh, thank you.
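To give a concrete feel for the resources Mattia is describing, here is a minimal sketch. The names, namespace and DNS names are invented, and a self-signed Issuer stands in for the real PKI provider:

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: dev-selfsigned
  namespace: factory-demo
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: telemetry-client
  namespace: factory-demo
spec:
  secretName: telemetry-client-tls       # cert-manager creates and maintains this Secret
  commonName: telemetry-client.factory-demo.svc
  dnsNames:
    - telemetry-client.factory-demo.svc
  privateKey:
    algorithm: RSA
    size: 4096
  usages:
    - client auth                        # a client certificate, for mutual TLS
  issuerRef:
    name: dev-selfsigned
    kind: Issuer
```

Swapping the self-signed Issuer for a Vault or Venafi issuer, both supported by cert-manager, changes nothing for the workload: it still just mounts the generated Secret.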
Now I just wanted to say, Mattia, if I may, a bit of history and also open discussion: when I started developing the first use case, it was completely unsecured. I was just creating a self-signed certificate to make the edge, the workload at the edge in a container, connect to the MQTT protocol exposed by ActiveMQ, AMQ on OpenShift. And once Mattia started implementing this piece, I found it quite easy, because the only thing I was supposed to do was download the certificates and use them for the connection. So we introduced something very important, at a very high level actually. Mattia is giving you lots of information, okay, it's mutual TLS, but we have to make a distinction between two types of certificates: the bootstrap certificate and the runtime certificate. On one side, the certificates you use to connect to the MQTT endpoint to send some telemetry. And the other one is the certificate that each and every edge device owns; we've been talking about stolen devices, or blocking devices, or dropping devices because they've been hacked somehow. So those bootstrap certificates are the certificates that the device owns, because they are tightly coupled to that hardware ID or something like that, which again is completely different from the runtime certificates, the certificates the device receives through the mutual TLS with some other endpoints, some servers in the network. Yeah, let me go to the next slide. Yeah, if you click up to four. Up to four, okay. Thank you, Mattia. You're welcome.

So what happens here is that in our latest POC, with the manufacturing factory, we want to kind of scale out the provisioning of the factory itself, but also of all the services around it, the services running in the data center and the services running on the edge. And because we want to scale this out, we implemented a registration service, which provides an API to register a factory, or as well a machine, or normal services running in the platform. So what happens here: you have a bootstrap part, which Andrea explained before, which allows you to connect to the registration service and kind of say, look, I'm factory one, I want to register my factory to start working with the data center. So the device sends the request to the registration API, and because it has a proper certificate it is allowed to send that request. And the registration service is going to get the information and create the Certificate resources in the data center namespace. And then cert-manager takes care of contacting the PKI provider and extracting the certificate to create a Secret. And then the registration service API is going to watch the Secret until it's ready, because it's an asynchronous communication, so you kind of need to implement this capability, but Quarkus, with its reactive capabilities, is able to work really nicely with this asynchronous mechanism. And then, when the certificate is ready, it is going to send those certificates to the factory. And because we want to scale out, think about it, the factory can go offline or the connection to the data center can drop, when the factory registers it is going to request a kind of intermediate certificate. Why is that? Because we want to give the job of spinning up all those certificates for the factory to the factory itself, instead of relying on the data center. So the data center is going to give it an intermediate certificate authority, so the factory can spin up all the certificates for the devices and the services and can work by itself. But everything will still be trusted by the same central PKI certificate authority.
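A rough sketch of that intermediate-CA idea in cert-manager terms, with all names invented and the delivery of the CA secret to the factory left to the registration service described above:

```yaml
# 1. In the data center: ask the central PKI for a CA certificate for the factory.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: factory-one-ca
  namespace: datacenter
spec:
  isCA: true
  commonName: factory-one-ca
  secretName: factory-one-ca-secret
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: central-pki          # hypothetical issuer backed by the central PKI, e.g. Vault
    kind: ClusterIssuer
---
# 2. At the factory: a local CA Issuer signs leaf certificates for devices and
#    services without calling back to the data center.
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: factory-one-local
  namespace: factory-one
spec:
  ca:
    secretName: factory-one-ca-secret
```

Leaf certificates issued by the factory-local Issuer still chain back to the central PKI, which is the property Mattia mentions: everything remains trusted by the same certificate authority.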
And this is part of the job we are doing to make sure that we decouple the data center part from the factories, right? So in an enterprise-grade architecture, we want to make sure that each and every layer can live also without the VPN connection, which actually could drop.

That's what I see in the future. Hopefully this year, with MicroShift and the service mesh federation, I would like to drop all these things: still leverage cert-manager as an entry point to all your PKI providers, but have all that traffic handled by the service mesh itself, and then you really have a standard, cross-platform solution.

Yeah, super cool. We had this extensive overview of how the community built security, and Mattia was specifically in charge of this part, of how to add the security bits to the overall architecture. And as Mattia and everyone knows, security should be first; you have to think about security at the beginning. But you know, you start from scratch and then you eventually adapt. I think this is a really good solution. With cert-manager, it's all open source around Kubernetes; the ecosystem enabled this. And in this case, there are integrations with vault systems, also external vault systems, as mentioned here; there's also HashiCorp. There are many. But the cool thing is that it's a pluggable security infrastructure.

Yeah. So folks, we're getting to the end of the session, but before closing, I have some quick reminders, some quick appointments to give you. So, how do we connect the work to this community? Mattia also shared in the chat the link to open feature requests and pull requests. So the first thing you need to do, if you would like to join this community, is to go to the Quarkus for IoT website and join the GitHub organization; we're going to share the link again in the chat. And the second thing: we would like to tell you how we celebrate the winners of this Hackfest. This is the third year. For the previous two years of the Hackfest, we celebrated on OpenShift TV, which is our channel where we do shows talking about OpenShift and all the cloud-native architecture here at Red Hat. So we have this show available, and we actually celebrated the winners in one of the main shows, inviting those people to talk about their experience. In the slides you will also have the recordings available; you see there are links to those recordings if you would like to hear about the previous editions. If we go to the next slide, we would like to invite you to this year's edition. So we have two appointments. One is February the 9th: we will celebrate the Red Hat Hackfest winners on OpenShift TV. If you go to OpenShift TV, you will see the full schedule of this series of shows we do around OpenShift. On February the 9th, the OpenShift Coffee Break show will be really pleased to have the community and the Hackfest winners to talk about their experience and to celebrate with them. On the Hackfest landing page, you have the list of the three winners, the first, the second and the third place, and I will not spoil it for the people that don't know yet. We're going to celebrate with the winners, we're going to talk about the Hackfest. So I think it's really cool if you can join us on February the 9th at 10 a.m. Central Europe time. The show will be published on both YouTube and Twitch, so if you are a Twitch user, you can also watch us on our Twitch channel.
And the second appointment is March 9th. Again, the OpenShift Coffee Break is a weekly show happening on Wednesdays at 10 a.m. Central Europe time. In March there will be another episode where we're going to invite the community. So, next episode, February the 9th, we're going to talk about and celebrate the winners, and on March 9th we're going to invite and talk about the community, like we did today, with more people from the community. So those are our appointments. We will try to give updates on what we are doing. A lot of the folks here have mentioned MicroShift; we started investigating MicroShift last week because we want to make sure it makes sense for the business, but we will give updates on the several additional things we are doing. And let me also mention one of the things that Mattia put in the chat: if you have any idea, any need, any curiosity, if you want to join, if you want to challenge us to do something specific, we will try to do it, at our best convenience, of course. Please join us. Please open a pull request on one of our repos to propose use cases and challenges; that's what we are looking for. Yeah, that's the spirit of the community, right? Please join us. Andrea also showed the link to the Quarkus for IoT organization on GitHub; please join the organization, please join us in the community, please join us at our next appointment, February the 9th. And I think that's all, folks, right? It was a very cool discussion, very deep architecturally. I'm really excited about this discussion. And now I'm handing it over to Lucia. Thank you, everyone. Thanks, Radek, for your comment in the chat. Thanks to everyone that joined us today. It was really, really a pleasure to be here at DevConf 2022. Thank you, and happy DevConf. Thank you.

Thank you for a lovely meetup. It was awesome, guys, like really awesome, energetic, and we covered so much information, but still kept it lightweight. And I'm curious how many attendees will actually meet you on February 9th or March 9th. Yeah, we're looking forward to it. All right. Thank you. Thank you so much, both attendees and speakers. The next session, the next meetup, will happen at 3 p.m. Central European time. So see you soon, after lunch. Bye. Bye-bye.