Good morning. Good morning, everyone. Welcome to another episode of the OpenShift coffee break. Here with me today, I have the Quarkus for IoT community. We will come to them in a few seconds. Just a reminder, what is this OpenShift coffee break? It's a morning show here in the EMEA time zone where we talk all about OpenShift, OpenShift architecture and more. My name is Natale Vinto. I'm a product marketing manager for OpenShift, and today I'm really pleased to have the Quarkus for IoT community. I'll let all our guests present themselves. You want to start, Andrea, or anyone else? Yes, Natale, thanks a lot. My name is Andrea Battaglia. I'm the community lead of this Quarkus meets IoT group. I work for Red Hat as a principal solution architect. I work with partners, supporting them and training them when it's about complex solutions, complex projects like edge computing or digital transformation. I've been seven years at Red Hat already. Okay. Should we do a quick round? Yeah, let's do a quick round. Very easy format. My name is Mattia Maggia. I'm a principal consultant in the Red Hat Alps region. I focus more on OpenShift integration and middleware products. I enjoy IoT projects due to the challenges and the new technology stacks to try out. Hi, I'm Jeff Neusen. I work for a company called Intive, formerly Amian. I'm a principal consultant there. My day-to-day work is on the OpenShift infrastructure side. A few years ago I was programming Java, and that background, when the Hackfest came up, made me want to get involved. Thanks. Hello, everyone. I'm Mario Parra, a cloud consultant from Spain, and today I completed my first week as a Red Hatter. But I have been actively participating in the community since the last event a few months ago. Oh, folks, don't be shy. So are we all there? Yeah, we represented everyone in the chat. Am I missing someone? Gunther? Yes, good morning. I have a problem with my camera at the moment. I'm Andrea's manager.
I'm managing the technical enablement, the partner technical enablement team in EMEA, and yeah, happy to join in and to visit the community. Awesome, awesome. So thanks everyone for this quick introduction that helped us get more context about the people involved in this project. But my first question to you is this. What is this Quarkus for IoT Hackfest, for people that don't know this initiative? And we will share links and information in the chat as well. If you have any question, please send your question in the chat and we'll bring it to the attendees today. But the first thing I want you to clarify: what is the Quarkus for IoT Hackfest? Yeah, thanks, Natale. There is quite a strong difference between the community and the Hackfest event. While we started implementing something for Red Hat business partners and running enablement events through the Red Hat Hackfest, specifically for EMEA partners and focusing on edge computing solutions, the community on the other hand is a place where we want subject matter experts to join and share best practices, recommended practices, or just experiences like potential issues they have faced during edge computing project implementations, or special requests they could have around cloud native application development. Because we try to cover, from the technical perspective, everything that falls into complex solutions, and then I will be more than happy to let Mattia, Mario and Jeff cover their areas, like security or OpenShift platform management, and they are fantastic at that. The event is more about trying to share all the knowledge we've got with the partners and trying to make them collaborate together in the Red Hat way.
Because complex solutions like an edge computing project or a digital transformation project usually need more partners involved, more OEMs involved, so hardware suppliers, different software vendors like Red Hat, or they could be AWS, Azure, or a specific software vendor in the AI/ML space. So that's why we run this event, which is a mid-term enablement and marketing event that brings participants, distributors and technology vendors together to discuss, collaborate and try to make business opportunities concrete and projects at the customer side successful. So that's basically the difference. If I may, Natale, I'm happy to share my screen and show the landing page of the event and the landing page of the community. So let me. Oh, so yeah, please start sharing and I will start sharing the link in the chat as well. We have an excellent link. Yeah. Could you please let me know if you can see my screen? Yeah, that looks clear. So this is qiot-project.github.io. It's our community website, or blog. In this space here, we tend to collect everything that is purely technology related. So we have several blog posts around specific themes like integration or securing microservices in a container-based environment. And we also have an open area where interested people, who could be subject matter experts, partners, Red Hatters themselves or customers, can propose a new use case implementation and challenge the technical community to implement that. The outcome of the community is certified and standardized and becomes the theme of the Red Hat Hackfest. So this is the business event run by Red Hat for its EMEA Advanced and Premier Partners. And during the event, we have several sponsors. We have partners joining, and the outcome of the event is of course a new set of skills for partners, a new understanding of what collaboration in the Red Hat way means.
And last but not least, the opportunity to work with our main sponsors, which this time are going to be Intel and IBM. That's amazing. I love this landing page. So just to recap, Andrea, this is an event open to Red Hat Partners, right? Red Hat Partners, Advanced and Premier Partners. We make some exceptions if Ready partners, so the basic partners, are in the process of becoming Advanced, so as to improve their skills based on Red Hat enablement at the global level. And we are happy to bring them into the event and challenge them with our solution. Just to let you know, let me highlight that the next iteration of the enterprise business event will open officially on November the 2nd, so the registration phase at the moment is open. We called it Hackfest and not Hackathon simply because it's a mid-term event: we have a week dedicated to the enablement, where we deliver webinars and the speakers are usually subject matter experts from our sponsors, from Red Hat itself or from partners. And then we have four weeks of the so-called implementation phase, where partners are challenged to implement part of the solution we created in the community. And that's why, if you go to the community and have a look at the source code and all the repositories we share, some of them will be closed source, simply because those are the pieces of the solution that we challenge the partners to implement. The registration phase will close mid-October, roughly October the 14th. Last but not least, this event is a Hackfest because it's for teams, not for individuals, meaning we require team leaders to register first; they will receive a registration code for the team members to register and connect to the team previously created. Teams are made up of three to five people. Interesting.
Yeah, sorry, because I forgot to mention one thing: from the community blog, or landing page, you have access to our Google group, to our Twitter account, to the GitHub repo and to the community Slack channel, where everybody is welcome to join, ask questions, participate and collaborate. And that's all from my side. Well, that's a great introduction, it gives lots of context on what the Hackfest is, and it's also nice to understand the difference between a Hackathon and a Hackfest. I didn't know about it, so thanks for this explanation. I don't know if you were all aware of this difference. It's interesting. Yeah, so this is the project. I'm looking forward to hearing from this community. So we have some representatives of the community today, if I understood correctly, right? The people in this call, in this show, are part of the community and are also organizers of the Hackfest. So folks, what are your thoughts about this Hackfest? How did your involvement start? I would like to listen to stories, if someone wants to share their story. Jeff, for example, introduced why he joined the Hackfest. So that gives context, but could you please share what you liked the most? What did you enjoy doing in this Hackfest? Yeah, so Natale, thanks for that. I actually took part in the Hackfest as a participant the first time around, with the partner company. And for me at the time, it was a chance to kind of get to the other side of OpenShift. Day to day, I was kind of doing the infra side, doing the deployment, but actually getting the chance to do a bit of coding and deploy some code on OpenShift, and do that side of the house almost, was a really interesting opportunity, separate and different from the day job. So that's why I wanted to get involved.
And once I was involved, the technology was interesting: running containers on Raspberry Pi and actually getting the communication between the Pi and OpenShift going. That was really, really interesting and something that I wanted to take forward, but within the company there was no opportunity. We weren't doing any work in that particular area. So at the end of the Hackfest, I think Andrea mentioned that there was a community, and it was possible to get involved that way. So I wanted to take that chance and get involved. That's amazing. It's an amazing story, because from being a participant, you joined the community and now you are one of the organizers of this Hackfest, an active party. So that's an amazing story. And it's also amazing to learn how to build a community. I don't think it's trivial to do that. I would like to come back to that, Andrea, but first I would like to listen also to Mattia and the other people joining today, and their impressions about the Hackfest. Any volunteer? Yeah, I can go. So, well, I'm joining the QIoT project because I'm always looking for new opportunities, a new way of working, and what I like is new technology, experimenting with something new that I can see can be useful for the customer in the future. Because trying new technology on the edge, cutting-edge technology, it's nice to challenge yourself first, try new things, and then you're ready when these come to the customer. Because usually it takes years for this technology to be used, so it can get you prepared, thinking in advance, and, first of all, it is fun. And I'm also really happy that now it's becoming official, because we started as a community. It was kind of a Red Hat partner event, yes, but now it's an official Red Hat Hackfest. So I'm kind of proud of this, because it's something that has become bigger and bigger. So it's gaining traction.
And so it means there is interest also from end users, from partners, maybe in the future also directly from customers. So let's see how it goes. And I look forward to the next Hackfest with a new use case, and hopefully with new challenges. We propose a solution, a way of working, but what I'm also looking for from developers or partners is a new way of approaching the same solution. Because we provide a kind of blueprint, but this doesn't mean it's the only way to do it. They can also propose a better solution, which is also interesting, which starts to create a discussion around that. And maybe for the next Hackfest, another partner is going to join the community to showcase what we could do better. So that's why I like these kinds of things. Awesome. And yeah, it's also cool that you mentioned the blueprint, which is open. We will come back to the architecture, because I would like to see how the software stack is composed at the edge and then on the server side on OpenShift. But before coming back to that, I would like to listen to the others. Any other feedback on this Hackfest from other people? Those were two awesome pieces of feedback; you should be really proud of this community. So let's listen to the others if you have any feedback to share. Yes, Natale. Well, I started in the community a few months ago, as a partner who participated in the previous Hackfest. And well, for me, it was a big challenge to present a solution back then, because I'm not a developer, I'm from infrastructure teams. So it was a very big challenge. Our community considered the solution, and they provided me the opportunity to participate, and now, as a Red Hatter, to promote this Hackfest. Thanks. Oh, this is another very good piece of feedback, right? So from a participant who joined the Hackfest, now again in the organization and, you know, contributing actively.
So maybe, Andrea, this is the spirit of this community: joining, contributing, and then getting hands-on on stuff and contributing to this Hackfest for new editions and new things. Actually, this is the open source way, right? This is the open source spirit, and what we want to pursue every day. The concept of moving from open source and upstream to open source and enterprise is quite difficult from time to time. I mean, we could have technical interest, we could have technical passion, but not all the community members are interested in what comes or what happens next to whatever we produce, right? So of course, the event focuses on the enterprise technologies like OpenShift, and we'll have the chance to talk about it shortly. So we have single node OpenShift, RHEL for Edge, the OpenShift platform, Intel hardware technology for the edge computing solution. We have IBM Cloud. But before that, you want to implement a potential POC for a use case based on what? A programming language, a cloud native framework, and a container technology that helps with spinning up a quick demo on your laptop, right? So you don't install OpenShift on your laptop. Maybe some people do, but, you know, it's unusual actually. So that's what we do. First of all, we put together all the technologies and use them to implement the POC. We run it on Podman or Docker, you know? The container engine of choice is Podman for us, but Docker is more than welcome as well, because it's something upstream communities use. And then afterwards, we migrate and move everything to the enterprise world, so on top of OpenShift and the enterprise Red Hat software. So listening to Jeff, listening to Mattia, listening to Mario: for example, Jeff and Mario, they are amazing and incredible, as you are, because let me share with the audience that Natale is one of the esteemed members of this community as well.
And so these gentlemen here are incredible infrastructure professionals, but Mario developed cloud native software on Node.js. He's not a cloud native developer, but he uses Node.js to prove what he does on the platform. Jeff has been a longtime Java developer, but he left because he loved OpenShift more, unfortunately. So if you prefer the platform to Java, fine. But then we went together through some pieces of the business logic that he was keen to implement, to understand more around Quarkus, the cloud native Java framework from the community, made enterprise by Red Hat under the name of the Red Hat build of Quarkus, and he made it. So he sent some pull requests to the repo, and that was fantastic. Mattia is our amazing security expert and Quarkus developer. So all the security and certificate distribution and all the stuff the customers are actually most interested in comes from Mattia, because whenever we speak about security and certificates, Mattia is the guru here and everything is a piece of cake for him: he has amazing skills and a fantastic ability to understand what's needed before we start asking him for specific implementations. And they are just a few of the incredible community members we have. Unfortunately, Owen, Bania and Ben cannot be here for different reasons, but I want to mention them at least once because they are amazing as well. They can work on pipelines, Helm charts, they can work on containers and the platform as well. They work on integration, because we have integration pieces in the community as well. That's a phenomenal reference, and today we're celebrating this community. So we are here to celebrate this community. Gunther in the chat says hello and says he looks forward to running the Hackfest with our partners and extending the Quarkus for IoT community. And this introduces the next question I would like to ask you. So how difficult or how easy is building a community? What's your experience?
Today we're celebrating this awesome community with lots of experience, from application development to platform security. There's also, as we will probably see later in the architectural review, a messaging system part, so there's data streaming, data handling. So Andrea and all the people in the community, how difficult or easy is building a community? Just let me quickly start and I will hand over to the other gentlemen here. Let me say I'm not a community manager. I had no community experience. Actually, if there is any community specialist or manager who would like to give us a structure, they are more than welcome to contribute. We are open. So I started on my own a year and a half ago, just implementing a few microservices on top of the Quarkus framework. We delivered the Hackfest pilot back in the day; as Mattia mentioned, it was called QIoT, so Quarkus for IoT Hackfest. Now it's an official Red Hat Hackfest for the company. And then I posted some announcements on LinkedIn, on Twitter, on the internal Red Hat mailing lists. So people joined: Mattia joined, Jeff from the pilot joined, and other people from different regions, so geographically distributed areas. And the community grew organically. I was sure that was about to happen for two reasons: because I truly believe in the open source way, in the spirit, as you said, Natale. And I believe in the continuous and growing interest of technical people in something that is looking at the future, meaning the edge computing world. You can say edge computing, you can say distributed workloads, distributed security or enterprise security. Everything related to those technical topics, also from the business perspective or pre-sales perspective, is interesting. It's interesting because people learn, people can reuse, as Mattia correctly stated, their knowledge for their job and customers, partners, whatever. People can talk about it now.
So building a community is not easy, because you have to be an expert, you have to be a professional community manager. But working with passionate people is a piece of cake. So that part is kind of easy from my side, but I would love to hear from Mattia, Jeff and Mario. So I hand over to them. Yeah, I agree that building a community is difficult, but what I think is the power of open source, or communities in general, is that you always try to give some space to experiment, to do things, to try out, because you have a common goal, you have a problem to solve, but you don't dictate how to do that. So that's why with open source you share different ideas. You can give traction to that, to experiment, to give a solution. So the final idea, the final implementation, will be the effort of not just one person. Like, Andrea says I'm the security guy. Yeah, I know some stuff, but I try to understand all the points and also your feedback, because first of all, I think it's important to listen first. I don't want to dictate that this is the way to do that. You know, you need to listen to feedback and understand what's the best way to do it, and also guide people as well. Maybe during a discussion in the community you think we have this one option to do that, but then it comes out that we have four other options to solve the same problem. So if you give people space to experiment, they bring their passion to solve something, and that creates a community. That's pretty interesting, because we have a community around OpenShift. There's an OpenShift community called OpenShift Commons, gathering all the experience from customers, OpenShift users, the community, so sharing experiences and contributing all together. I will send the link in the chat. But that community is built globally, as a structure. It started some years ago, right? But I was curious to know how starting a community goes; not everyone is a community manager from day zero. How was building this community?
I think this is a great example, and probably this community can go to the next step and reach a global level. Now the Hackfest, as I understood, is global, or the intention is for it to be global. The community can be global, no? And I put in the chat the link to the OpenShift Commons community, where we share OpenShift best practice architectures. This example can be one of those architectures, one of the examples the community could share with OpenShift Commons. But that was very interesting to learn about how to build this community, since we're celebrating the community today. And at this point, I would like to go a little bit into the technical side for the people that are listening and attending. I'm just curious to know what this Hackfest is doing. Can you show us an architectural diagram, or just an explanation, to help understand what people will do in this Hackfest? Absolutely, Natale. Thanks for asking. Let me share my screen again. And please let me know if the diagram is visible. I'm not going to zoom in now, because I want to give our audience an overview of what we have planned. So the community implements one use case every year, simply because, as I told you, we are not structured properly, we are lacking this kind of community management skill, and because we haven't got committed contributors; people contribute out of passion. And I'm actually the only one contributing constantly to the community, because I'm assigned to that, I created that, and I have the green light from Red Hat to work on this project, right? So Mattia, Jeff, and Mario, and the others, like Ben, Owen, Bania, and so on and so forth, contribute to the community occasionally, when they have got time, still driven by a strong passion, let me remark that. So the first topic we built was around COVID-19, in terms of measuring the air quality during the closures and the reopening.
So that was simply an ARMv8 device with sensors attached, sending data to a time series database living in a data center based on the OpenShift platform. For the second use case, covering the third and fourth Hackfest, we decided to go and listen to the market, go and listen to the enterprise world, and see what was the most challenging kind of project they were about to approach, and what their technical needs were. And we got the feedback that the edge manufacturing vertical was the most interesting, the most invested in, and actually the most challenging. Because edge computing could be everything: it's IoT, it's security, it's workload distribution, it's several layers across, let's say, geographically distributed areas. So that's challenging, right? Our POC covers all of this, but in a simplified version. So at the moment we have implemented a data center site based on the OpenShift platform, and that lives on top of IBM Cloud. Then we built a... Sorry, this IBM Cloud, is it ROKS, their managed OpenShift, or is it another thing? No, it's not managed OpenShift. We want to have control of the OpenShift platform on the cloud, because we've got the strong skills at the moment. So we wanted to have the freedom and the opportunity to showcase different stuff. That's why we didn't go for the managed OpenShift; we went for plain IBM Cloud and installed OpenShift on top of that. Sorry, Eugen in the chat asks, please open the diagram in full screen. I don't know if you can go full screen there. Yeah, we're going to zoom in quite soon. Okay, correct. Because it's landscape, it's quite extended, and then we will focus on each and every area. Oh, okay, cool, Eugen. Yes. Yeah. So we have an edge server layer, and I can zoom in a bit more here, which is based on single node OpenShift, and the hardware behind that is an Intel NUC.
And last but not least, the edge device, so the far edge layer, which is based on RHEL for Edge, running on a fantastic enterprise-grade piece of hardware called Fitlet2 by Compulab, powered by Intel technology, of course. So the overall use case in the manufacturing space simulates the production of t-shirts, something simple that could still be challenging from the workload implementation side as well, because you have to simulate several layers and you have to simulate machineries producing t-shirts in phases, in steps. So we identify steps such as weaving the t-shirt, coloring the t-shirt, printing something on top of that, for example, and last but not least, the packaging. So for each and every stage, the edge device simulates the production and sends the data to the factory for validation. So we simulate factories, aka facilities, where several machineries take care of the t-shirt production through the phases I just mentioned. And of course, everything is controlled: each and every facility, or factory, which is this exact area here I'm actually showing on the screen, is controlled by the data center. So one data center controls multiple facilities, and in each and every facility you have one factory controller, or facility controller, and several machineries. So this is already in place. The basic business logic, without any special additions, aka security, aka workload management or workload updates through Podman, is already in place, and our audience can definitely go to our GitHub space and download the container Compose file to spin up everything. Of course, the community provides a JFrog repository, so we store there all the Helm charts and the common Java Maven artifacts that we use and deliver. And of course, we have a quay.io account that is helpful when you want to test the whole environment directly on your machine, without compiling everything beforehand. Interesting.
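To make the "download the Compose file and spin up everything" step more concrete, here is a hedged sketch of what such a Compose file could look like. All the service names, image paths and environment variables below are invented for illustration; the real file, images and variables live in the community's GitHub space and quay.io account.

```yaml
# Hypothetical sketch only -- image names and variables are illustrative,
# not the actual QIoT artifacts.
version: "3"
services:
  factory-controller:
    image: quay.io/example/factory-controller:latest
    ports:
      - "8080:8080"          # REST endpoint the machinery simulators call
  machinery-weaving:
    image: quay.io/example/machinery-simulator:latest
    environment:
      MACHINERY_STAGE: weaving                    # one of the production stages
      CONTROLLER_URL: http://factory-controller:8080
    depends_on:
      - factory-controller
```

With a file like this, `podman-compose up` (or `docker compose up`) would bring the simulated facility up on a laptop, matching the Podman-or-Docker workflow Andrea describes.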
So there's an Artifactory for the artifacts, the JARs made on purpose, and also a Helm chart repository. Interesting. And you are using quay.io to store the container images inside the public registry, with its security scanning. And everything is available on our blog posts, on our website and our GitHub workspace. People are more than welcome to join, or to have a look when they have time, at their best convenience. If you have a specific question, Natale, please interrupt me; otherwise... I would like to remind the attendees, if you have any question, please send it in the chat: we will bring it to the show and answer live if we can. You know what? I'm very interested in this architecture, because there's a single node OpenShift in between. So if you come back, there's the RHEL for Edge, so let's say the far edge part, then there's the single node OpenShift, which would be this one. And then we have a kind of control plane. If I understood correctly, a central OpenShift that has the business logic, right? Yeah, exactly. So the data come from the data center: the product line and all the data to produce the goods, in this case the t-shirt, are provided by the data center, which spreads them through our streaming service. On the other hand, after each and every validation of each and every stage for each and every t-shirt, the validation service here sends data back to the time series database through the streaming service, which is of course AMQ Streams, aka Kafka in the upstream version. So that's something standard we wanted to use. Of course, on the OpenShift platform the control plane, the plant manager, is equipped with lots more resources, because we need to index the data in the time series database to make sure there is no latency in receiving the telemetry, or in sending out all the information that could be related to certificates for the machineries and for the data centers.
This is something I will hand over to Mattia shortly. And we wanted to make sure that all the values and the information sent and received from the data center perspective were validated. And I will definitely hand over to Jeff, who was working on this part, because as we said, Jeff is a platform expert, but he started developing. So data quality and data structure are as important as security, right? Because we already mentioned and used the term blueprint many times. The blueprint is not just a POC or a standard, potentially standard, way of implementing a project or a use case. Blueprint means giving standards to pieces. And considering we are passionate about the divide-and-conquer approach, right? You have a big problem, you create a blueprint for that. But then, as soon as you go deeper into the details, you want to elaborate or apply blueprints, hopefully based on standards like data validation, also for the objects belonging to specific pieces of the business logic. So just to give you an overview, we don't just use AMQ Streams. We don't just use OpenShift or single node OpenShift or RHEL for Edge. We use lots of different technologies that are of course certified on the Red Hat platform and that come from our technology partners. So we use cert-manager. We use InfluxDB for the time series storage. We use Grafana for the dashboards. And on the other side here we have PostgreSQL. We have AMQ Broker from Red Hat. And also, again, the hardware underneath, from the edge device to the platform, comes from our OEM partners, our technology partners, who actually do quite a lot of work to make everything certified. The Fitlet2 hardware is certified on RHEL for Edge. We are making sure that the NUC we are using will be certified, in a POC fashion, on single node OpenShift. This is something I'm happy to hand over to Mario, because he already did lots of work in this area. And of course, IBM Cloud is certified for OpenShift.
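Since the data validation work comes up here, a tiny hedged sketch in plain Java of what stage-level validation of telemetry might look like. The class, field and stage names are invented for illustration; the real Quarkus services define their own types and validation rules.

```java
import java.util.Set;

// Hypothetical sketch: names and rules are illustrative, not the real QIoT types.
public class StationTelemetry {
    // The four simulated production stages mentioned in the use case.
    static final Set<String> STAGES = Set.of("weaving", "coloring", "printing", "packaging");

    final String stationId;
    final String stage;
    final double value;

    public StationTelemetry(String stationId, String stage, double value) {
        this.stationId = stationId;
        this.stage = stage;
        this.value = value;
    }

    // Reject malformed readings before they are forwarded to the time series database.
    public boolean isValid() {
        return stationId != null && !stationId.isBlank()
                && stage != null && STAGES.contains(stage)
                && !Double.isNaN(value) && value >= 0.0;
    }
}
```

The point of a check like this is the "blueprint" idea Andrea describes: each piece of business logic agrees on a data contract, so a facility controller can discard bad readings at the edge instead of polluting the central database.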
So, Natale, if you have more questions, feel free to ask them to the community members. Yeah, I would like to ask Jeff and Mario, since Jeff was the developer going into the platform, and Mario the same way: what did you like about the OpenShift experience as a developer, as a, let's say, DevOps or platform user? Because I'm really interested in the developer experience on the platform. So we have seen the architecture overview. My question is how, and if, OpenShift made it easier for you to start coding, start deploying applications, start implementing this architecture. This is a question I would like to ask you, because you told me you had the developer experience. Sure, Natale, I'll go first on that, I guess. So from the first Hackfest, just creating the Quarkus app and deploying it out there felt fairly manual, but that was, I guess, my experience, and lack of experience, at the time. Through getting involved with the community and getting a bit more experience, I actually started working with Tekton and the pipelines, and deploying the code that way. So we actually put together a pipeline that would take the code, build the code, build the containers and deploy them for each piece of the architecture here, or a similar architecture for the original COVID project. And having those Tekton pipelines in place, along with some GitHub Actions, meant that, you know, we just submitted the code, and in very much an automated CI/CD type pipeline there, the code was built. And in the next step, we could run a command and deploy from code straight onto the OpenShift box. Cool. So it looks like you enjoyed the automation part, you know, getting the code built, up and running. That is cool, because, yeah, as you mentioned, OpenShift Pipelines, based on Tekton, is one of the coolest things to implement the CI/CD part. Now, we know that there's also the GitOps part.
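To picture the clone-build-deploy flow Jeff describes, here is a hedged, minimal sketch of a Tekton pipeline. The task names follow the common `git-clone` and `buildah` tasks available with OpenShift Pipelines, but all parameter values are placeholders, not the community's actual pipeline definition.

```yaml
# Hypothetical sketch: parameter values are placeholders.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  params:
    - name: git-url
      type: string
    - name: image
      type: string          # e.g. an image reference on quay.io
  workspaces:
    - name: source
  tasks:
    - name: clone
      taskRef:
        name: git-clone     # standard task shipped with OpenShift Pipelines
      params:
        - name: url
          value: $(params.git-url)
      workspaces:
        - name: output
          workspace: source
    - name: build-image
      runAfter: ["clone"]
      taskRef:
        name: buildah       # builds and pushes the container image
      params:
        - name: IMAGE
          value: $(params.image)
      workspaces:
        - name: source
          workspace: source
```

A deploy step (for example an `oc` task rolling out the new image) would follow `build-image`, which is the "run a command and deploy straight onto OpenShift" part of Jeff's story.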
I know Mattia is very enthusiastic about all this extended automation. But before going to Mattia, I would also like to ask Mario: what was your developer experience on OpenShift with this architecture? Well, for me, it's also the automation and GitOps part, because we can create the templates and distribute them to all the environments where we want to deploy the same solution. So, for example, on Single Node OpenShift, the provisioning of the OpenShift machine is very simple, but you can automate the tasks to deploy all the needed components in the same pipeline. So it's very useful and easy to deploy. Got it, yeah. So basically, the same feedback about the automation part, right? So this was one of the benefits. Well, cool. This is interesting also to collect feedback, you know, for the product, for the usage. And I would also like to hear from Mattia. Mattia, you know, is a kind of hacker, an OpenShift hacker. As Andrea mentioned, he was in charge of the security part, the cert-manager. Can you explain to us a little bit what this certificate management part of the architecture is, what the complexity was, and how you solved it in this topology? So, well, you know, the challenge when you work with IoT devices is that you need to think there will be a lot of devices to be integrated. So you need to find a way to automate the provisioning of certificates, because you want to make sure the communication between the data center and the device is always secured under TLS. And of course with mutual TLS, because you want to recognize which device is calling the server. And because we are leveraging OpenShift, slash Kubernetes, there is a framework called cert-manager which allows you to provision certificates on a Kubernetes platform. So the idea was to leverage this capability, because cert-manager makes it really easy to manage certificates.
It provides a standard API and is also integrated with multiple certificate authorities, like Let's Encrypt and Venafi. And for our use case, we used HashiCorp Vault, because it's quite easy to deploy HashiCorp Vault with OpenShift, and it's also a standard from a security perspective. And as well, you can extend it to support a custom CA, in case you have a company-specific certificate authority. So what we do is leverage cert-manager to provision a certificate for a specific device. So when a device wants to join the platform, it's going to call our registration service. And then the registration service acts as a kind of facade for cert-manager to provision the certificate for the device. And in this way, it allows you to scale from one device to 1,000 devices: the device just starts up, calls the registration service, retrieves the certificate, and starts working and sending messages to the platform. That's really cool. So issuing and revoking the certificates on demand, let's say, when the device attaches and detaches. Yes, exactly. Also because cert-manager, or more specifically your certificate authority, can provide a revocation list. So if a device gets hacked, you can right away put the device's certificate on the revocation list, and then it will stop working, because from the broker's perspective you always verify whether the device is trusted or not. Awesome. Yeah, that was interesting also because of the security aspect. I think, Andrea and all, it's something to take into consideration from the beginning, right? Moreover, in IoT or edge, where security is not really considered as a first step, putting that in place from the first moment is something very useful and wise. But Andrea, we've seen this beautiful architecture, but where should people start, going from the architecture to the implementation? Is there any GitHub organization or repos they can start from?
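The certificate flow Mattia describes can be sketched with two cert-manager resources: a Vault-backed Issuer, and a per-device Certificate that the registration service would create on the device's behalf. The names, namespace, and Vault path below are illustrative assumptions, not the project's actual manifests:

```yaml
# Sketch of the cert-manager resources behind the device registration flow.
# The Vault server URL, PKI path, and device names are assumptions.
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: vault-issuer
spec:
  vault:
    server: https://vault.example.com:8200
    path: pki/sign/qiot-devices          # assumed PKI signing role in Vault
    auth:
      kubernetes:
        role: cert-manager
        mountPath: /v1/auth/kubernetes
        secretRef:
          name: cert-manager-vault-token
          key: token
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: device-fitlet2-001               # one Certificate per registered device
spec:
  secretName: device-fitlet2-001-tls     # Secret holding the signed key pair
  commonName: fitlet2-001.devices.example.com
  duration: 720h                         # short-lived; renewed on re-registration
  issuerRef:
    name: vault-issuer
    kind: Issuer
```

The registration service would create a Certificate like this when a device calls in, then hand the resulting key pair back to the device so it can open the mutual-TLS connection to the broker.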
How can people get this in place if they want to start hacking on this architecture too? Absolutely, thanks Natale for asking. As mentioned, we have the link to our GitHub organization here. So this is the QIoT project workspace on GitHub. Yeah, I shared it also in the chat, if you like to follow the chat. We have a naming convention for repositories. Every repository starts with QIoT, but then the second part of the repo name, sorry, let me zoom in, thanks, the second part of the repo name is the name of the use case. So manufacturing is actually the use case we are working on at the moment. If you search and filter for the manufacturing keyword, you get all the repositories we have created already. But as we are still working on the implementation, which will be finalized roughly by mid-October, some pieces are still missing, because we are adding the Helm charts and all the security stuff and everything else we mentioned already. So you won't yet find everything we mentioned around automation and security. What you can absolutely do is go and have a look at the previous use case. So you search for, or filter by, COVID-19, and you get the installer in terms of Helm charts. You get whatever Jeff created around pipelines. You get operators for specific pieces of the solution, like AMQ Broker or, for example, AMQ Streams. And then you have the registration service, the Helm charts for the services, and the image builder, because, don't forget, we were working with Raspberry Pi devices, so ARMv8 64-bit devices. So we created several tools for compiling, not cross-compiling, but compiling the Quarkus application natively in a container for a different architecture. And this is thanks to Ben Taliart. He's not here with us today. He embedded QEMU within a container, starting from an upstream project called multiarch.
And then on top of that, all the stuff that the Quarkus engineers created to make the native build of a Quarkus application run smoothly and directly from a container image. Wow, big shout-out to Ben. He's not on the call, but he's one of the most active in the community. And so this is multiarch compilation on OpenShift. So from OpenShift, you can produce the image for another architecture; it can be ARM or Intel in this case. All on OpenShift as well. Yes. Cool. So people can join here if they want, as Jeff did, to try themselves at Java development or operator development. They can just ask us, and we will give them tasks. And we, of course, provide support in the pure community way. So we provide support, we have weekly calls, of course, one hour, just like in a mentoring way. We want people to be independent and to create. As Mattia stated, we've got some boundaries, so we have our architecture in mind. The architecture will be published on the blog in the next couple of days, not more; we are finalizing some stuff because, of course, we are not fully allocated to this, indeed. And so people will have the chance to work on add-ons, for example. So let me go quickly back to the architecture and show you. So each and every service could send events, so telemetry, to the time series database through AMQ Streams. But we just implemented one: the telemetry coming from the validator, so the stage validation, successful or not successful. And we implemented just one event collector. Of course, we've got the abstract blueprint for that service as well. So if someone else wants to challenge himself in implementing events sent from the registration service, or the plant manager, or the product line service to the time series database, in a more log-fashioned way, they can. So, of course, you could definitely save logs within OpenShift using Fluentd and Prometheus. But we want to do something different.
We don't want to reinvent the wheel. We don't want to override or redo whatever is already available out there on the Internet through standard demos. We want to do something else, and we want to use cloud-native frameworks for it. So people who are not familiar, for example, with time series databases can play with Quarkus, a time series database, Kafka, and eventually also Camel native integration on top of the Quarkus framework, and try and appreciate, for example, the amazing performance improvement Mandrel brings to the Quarkus framework. So that's something we are always keen to share with our contributors and with people interested in the technology approaching the community. Cool. There are lots of things that are very nice. So there's Kafka streaming data into a time series database, InfluxDB, if I recall correctly. And then you're using Quarkus, Quarkus native, Mandrel, lots of the cool features of Quarkus, which is a Java framework for optimizing these cloud-native applications, and it works also on the edge. So even the application on the edge is made with Quarkus, with a minimal footprint, if I recall correctly. Can you give us a little more? Thanks, Natale, for this interesting question. So Quarkus native means reducing the memory footprint and the bootstrap time to the minimum, right? So, in a few words: Mandrel is the downstream version of Oracle's GraalVM, and it is specifically designed to provide additional performance improvements to the Quarkus framework. What it actually does at build time is pre-instantiate all the objects the application would instantiate at bootstrap time, and get rid of everything else. So that's why you get a huge performance improvement at bootstrap time, and you have a very lightweight workload. At the same time, Quarkus is a very, very performant framework.
We could spend years talking about the performance improvements provided by the Quarkus engineers, but the idea is not just to do something cloud-native on a cloud-native platform, for example, Single Node OpenShift or the OpenShift platform. The idea is to have something very performant on the IoT and edge devices. And to give you an idea: on the Fitlet2 device, with RHEL for Edge and the Podman engine on top of that, a Quarkus native application spins up in a few milliseconds. Thinking of the complexity of the edge machinery simulator, because you need schedulers, you need REST APIs, you need contexts and dependency injection, and several other components, because this is a proper enterprise application, the bootstrap time is not more than 20 milliseconds. Compare that with the previous use case, where we were compiling natively and then running the native version of the Quarkus application on a different CPU architecture, ARMv8 64-bit compatible. That application had a similar kind of module set, and it was taking not more than 50 milliseconds to start up. And at runtime, on the standard x86 Fitlet2 device powered by Intel, the application was consuming 25 megs. At runtime on the ARM device, instead, it was using more or less 70 to 80 megs of memory. Of course, this is not pure IoT with tiny pieces of hardware. We are talking about enterprise IoT in the manufacturing space, so the application should be a bit more performant, but it also needs to do a bit more work than a single lamp in an IoT-fashion way, or the small temperature sensor you install in your refrigerator. That's why we have these kinds of numbers, measures, and boundaries here, but it's definitely something amazing.
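The native executables discussed above are typically produced with the standard Quarkus container-build flags, so no local GraalVM or Mandrel installation is needed. A hedged sketch of the commands (the image name is illustrative; the project's actual builds, including the QEMU-based multiarch part, are wrapped in the QIoT image-builder repositories):

```shell
# Build a Quarkus native executable inside a build container using Podman.
# These are standard Quarkus Maven flags; -Dnative activates the native
# profile, and container-build avoids needing Mandrel installed locally.
./mvnw package -Dnative \
    -Dquarkus.native.container-build=true \
    -Dquarkus.native.container-runtime=podman

# Package the resulting binary into a minimal runtime image and run it
# with Podman, as on the RHEL for Edge device. Image name is hypothetical.
podman build -f src/main/docker/Dockerfile.native -t qiot/machinery-simulator .
podman run --rm -p 8080:8080 qiot/machinery-simulator
```

The build-time pre-initialization Andrea describes is exactly what this native compilation step performs, which is where the millisecond-level startup times come from.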
And considering the enterprise-grade pieces of hardware usually provisioned for these kinds of use cases: they are expensive because they are enterprise-grade and, because of this, on the other hand, they are often equipped with fewer resources than a Raspberry Pi could actually be equipped with. We try to make sure the performance is valid and available also at the far-edge or edge-device level, not just in a nice cloud-native application running on an OpenShift-based platform. Thank you for this explanation. Of course, the edge part is a really important piece of the whole architecture, and it's cool that this can be implemented also with this Java cloud-native technology, which is Quarkus, with some optimization for native compilation. So folks, we're getting to the end of this show. I would like to talk with you for hours, because this is very, very interesting. But hey, you know what? We can invite you for next time. Probably we can have an episode after the HackFest starts in November, so we can discuss how it's going. And then, of course, at the end, we will celebrate the winners, as usual, on OpenShift TV, like in the past edition. I would like to thank you all, Jeff, Mario, Mattia, and of course Andrea, for joining us today and talking about the Quarkus for IoT community. Just one last thing. Where should people go when they want to start? Is there a mailing list or an email they can use to reach out to you? There's the GitHub project, but on the blog, is there any email they can use to reach out to you? Everything is available on the blog. Perfect. From the email, from the mailing list, to the Slack channel, and to the GitHub repo. Fantastic. And remember to register for the HackFest as well. Of course, of course. So please remember to register for the HackFest, and I will share the HackFest link again in the chat, so you have it there. When you open it, there's the landing page that Andrea showed. It starts November 2nd and runs until November 26th.
Please use that link to register for the HackFest, and please go to the Quarkus for IoT project blog to get the mailing list and email and start collaborating in this awesome community. Before we stop, just a quick reminder of what we have today in the calendar for OpenShift TV. We have our Level Up show and then Ask an OpenShift Admin. But today is also DevNation Day. Let me share the link in the chat. DevNation Day is a day for developers, talking about Kubernetes, OpenShift, and all things cloud native. We have three tracks: the Java track, the Python track, and the JavaScript track. Lots of international guests, and there will be live demos. We will show Kafka, our managed Kafka offering. We will show Quarkus and Spring Boot. There's a lot of technology involved, including the technology we mentioned today with the Quarkus for IoT community. Thank you, everyone, for having joined us today. Our next appointment with this OpenShift Coffee Break: we will come back in two weeks. Let's see when it is. It's October the 6th. We will come back with the OpenShift Coffee Break show. In the meanwhile, enjoy the schedule on OpenShift TV. Enjoy DevNation Day. Thank you, Andrea, Jeff, Mattia, and Mario. Yes, feel free to write in the chat if you want to ask more, and talk to you soon. Thank you, folks. Bye-bye. Thank you. Bye-bye. Goodbye.