Good morning, good afternoon, good evening, and welcome to a very special edition of OpenShift TV. This is the Quarkus for IoT HackFest 2020-21 winners show. And apologies for the early technical difficulties. Our streaming service of choice appears to be having its own technical difficulties this morning. So a little bit of a late start, but away we go. Natale, Andrea, how are y'all? Y'all are the usual suspects here, I feel like. How's it going today? Hey, hey, hey. Good morning, everyone. It's going very well. I'm happy to be here with Andrea, the winner of the Quarkus for IoT HackFest. Wow. Congrats. Hi there. Thanks for the invitation. I'm happy to be here, and looking forward to telling you more about the QIoT HackFest and what we have done. So if you don't mind, I would love to share a bit of info and a few slides about the HackFest and the outcome, and then we will hand over to our partners. Awesome. Yeah, we need to do intros of everybody after that for sure. Okay. Can you see my screen, the presentation? I see a very wide screen, yes. I see two screens. Is it two screens? It looks like one wide one to me. Could be wrong. No. Am I doing something wrong? No, it looks like you're sharing the whole desktop. That's all. Yeah, I think he has an ultra-mega-wide screen. Yeah, he has the monitor that I want, but won't get because I do this. Going on TV is always, you know. Yeah, I get it. I get it. Yeah, just a quick introduction, to give you a bit of an overview of what these amazing people have done. This has been the second run of the HackFest, and these are the marvelous and fantastic technical people from Red Hat who contributed to the event. These folks have contributed out of passion, first of all. This is not just community work, this is teamwork, and people have shared lots of expertise, skills, and ideas. So thank you all. We have here Natale and Mattia who, of course, will have their chance to share their feedback. We started one year ago, one year and a half ago. 
With Quarkus running natively on the Raspberry Pi, we used Fedora IoT, we used Podman. So all the latest and greatest from the community and the cloud native frameworks. What we have done is put together a solution that is flexible and reliable, for our EMEA partners and for the community, that includes everything that falls and fits perfectly into an IoT solution. So that's what we wanted to bring to our friends, to our contributors, to the subject matter experts actually. And we form a community. So once again, it would be great to have more experts, more people who are just keen to learn and have fun with us. Please join us. This is a continuous improvement of our skills, and a continuous expansion of the opportunities from the business perspective as well. What we are working on at the moment is an extended version of the use case we worked on already. We would like to get more partnerships from other big technology players in the enterprise market, and adopt more of the latest and greatest emerging technologies from the Red Hat portfolio. So we want to focus on the manufacturing use case that we are currently implementing and brainstorming on, because there are lots of things to port over from the previous experiences and lots of new things to add. And we expect to be able to reuse these to teach others, to give others the opportunity to learn, and to actually extend and give back to other communities working on Red Hat products and other upstream projects. Last but not least, and then I will shut up, let me congratulate our winners: Peerless, who took first place in the ranking, OrientLogic, and Jetronis. They are amazing partners, and the teams who participated in the event gave us a fantastic chance to learn from them as well. So it's been a great collaboration and fantastic mutual support on this path. Thanks a lot again. Awesome. So do you want to go around the room here and kind of introduce everybody? Yeah, let's do it, let's start. 
Yeah, my name is Natale Vinto, I'm a developer advocate for OpenShift. I joined the HackFest last year with Andrea and all the participants and organizers, and this year again, with Andrea, Mattia, and all the awesome Quarkus for IoT community. And that's me. So Mattia, if you want to introduce yourself. Yeah, thanks. I'm Mattia Magia, a Principal Consultant in the Red Hat Alps region. I also joined last year with Andrea on the Quarkus IoT team. And it was very nice working together because, again, we were driven by passion. So it was very nice to deliver such a HackFest together, also with the partners and teams. Wonderful. I guess the partner can speak up. The winner. We want to listen to you. The winner. Sorry, the winner. This is my turn. So my name is Simon. I was part of the Peerless team for the HackFest, and I work as a Java backend developer. It was very interesting to start working with Quarkus, see what's possible, and start to use it more and more in projects. This was a great starting point for us. Hello, everyone. I'm one of the heads of software development at OrientLogic. OrientLogic is one of the biggest companies in Georgia, providing professional IT services for more than 25 years. It was a very nice project for us, a new experience, and it was real fun being part of it. Awesome. Hello. I'm Cesar. I'm a full stack developer focused on backend Java, and I've worked at Jetronis for three years. Jetronis is an IT service company with employees across Europe. Awesome. Thank you all for joining today. We greatly appreciate you coming on. So what was everybody hacking on? That's the question. Yeah, do you want to talk about what all the people were hacking on this time? Last year we had this same show with all the winners, so we described a little bit what they'd done. I think this year the architecture was more complete, probably thanks to all the people that joined it. 
And also Mattia did great work on the Helm charts and the topology and the architecture. Chris, there's lots of stuff in this architecture, as Andrea showed a little bit before. There's Kafka, there's AMQ, there's Quarkus on the server side and on the client side. There's even serverless at some point. So I think it's a really good example of a modern IoT edge use case. But I would like Andrea to introduce more technical details. Yeah, absolutely. So last year we started with something simple because we wanted to pilot the initiative, right? It's still important to discuss and showcase the Red Hat portfolio; that's perfectly fine. But now we are focusing, from the architectural perspective, on an edge computing use case, which means in turn being able to pay attention to lots of collaterals which are not really covered by the Red Hat portfolio but must be integrated. We talk, for example, about distributed security, which Mattia took care of. We talk about how to stream data in a proper way, because when you talk about edge computing, no matter what vertical you are touching, you still have to be able to handle millions of messages per minute. And that's very, very important. Another thing we wanted to cover was the integration aspects. Edge computing relies on one specific protocol for event streaming, which is the MQTT protocol. And we actually made sure all the enterprise products from the Red Hat portfolio could handle those capabilities in conjunction with the security. So what we did was present a fully compliant POC, actually, that could be suitable for a proper demo based on an edge computing use case. Of course, this is simple. If I may share my screen again, the demo is quite simple. We don't pretend to cover everything. Let's have a look at this diagram. 
Besides the work on the edge device, we covered the security involving the registration service and AMQ, our broker, which is probably the most critical part when you must handle devices remotely. And I'm happy to hand over to Mattia as soon as he wants to share something. We tried to integrate third-party products like cert-manager. cert-manager will be integrated into OpenShift starting from version 4.9, but at the moment there is no official integration. So we actually tried to give back to the community some information about integrating third-party products on OpenShift. We covered the internal data flow. Security there is a bit weaker, because when you have an OpenShift platform implementing an all-in-one solution, you don't really want to step into each and every connection between services unless it's a compulsory prerequisite. So we tried to avoid extra security steps here, storing the data within InfluxDB, which is another great Red Hat technology partner. And that could help us showcase how fast, reliable, and scalable it is to use OpenShift, even for such a complex use case. Last but not least, all the services are based on Quarkus. Nice. Please go ahead. I just said it was nice that all of them are based on Quarkus. Yeah, we tried, Chris, to emphasize everything that comes from the Quarkus universe. I know there are plenty of initiatives out there in the Quarkus community and in the enterprise version of it. The Red Hat build of Quarkus is amazing, definitely. We contributed to that as well, adding some minor improvements to the security and the automation of such processes. But still, the improvement the Quarkus framework gave to this POC was simply outstanding. Also, we tried to add some automation to that. 
So as Natale was correctly mentioning, and this is something you can definitely find and reuse in our community, all the services were managed by customization of a single Tekton pipeline template, meaning in turn you deploy it once and you run it several times. By dedicating a pipeline to each and every service, we were able to upgrade the workloads and perform in general all the day-2 operations, like backups or software upgrades, in an isolated fashion. So it's not just the workload running with a specific and very well-defined amount of resources in terms of memory, persistent volume claims, and CPU millicores (actually, we didn't need more than a few millicores per service); it was also easy to manage and actually work on pipelines and day-2 operations. Please don't expect we did day-2 operations and management on each and every component of the architecture, but still, the Quarkus services are performing quite well. The community, again, is open for feedback. So if anyone is keen to reuse the POC, if anyone is keen to know more, we have a blog as well here, which I'm keen to show, where we try to save and report all the technical improvements we added for the community. The most important part, of course, the one I'm keen to share, is the use cases section, where we discuss each and every use case in detail. So everything we produce, and we usually start with the implementation of a use case every year and use it for two HackFests, is here. And usually in the second half of the year, before we get to the second run of the event, we try to add more collaterals rather than additional products. And that works. That works because the use cases usually get the attention of subject matter and other experts. 
The community is contributed to not only by Red Hatters, not only by the partner ecosystem; an important contribution comes from the wider community, meaning plenty of people out there working for IoT companies who are definitely operating under the radar. This is an excellent opportunity to bring them into the spotlight and learn from them as well. So we manage, on one hand, to share the value of the Red Hat portfolio, and on the other hand to acquire as much information as possible and contribute to the growth of the Red Hat upstream projects. Contribution to the enterprise products is not something we do at the moment; the innovation piece is definitely the one we focus on the most. Mattia, would you like to spend a few words on the security side, because this is something I guess... Yeah, can you share the architecture again? It will be easier to explain. Yes, please. Or if you want to go to the blog as well, it doesn't matter. Yeah, the architecture this time. So what we added since the previous HackFest was the introduction of the security, so we made it a bit tougher for the partners to integrate with our platform. And our choice was, because we are running OpenShift, which is a certified Kubernetes platform, to choose cert-manager. And cert-manager is becoming the de facto standard for certificates as a service. So we leveraged cert-manager to spin up the certificates for the edge devices. And because cert-manager is quite flexible and able to integrate with several PKIs, we chose HashiCorp Vault just to simulate an industrial PKI. And based on this, we were able to spin up the certificates for the devices. We extended the registration service, thanks to the Kubernetes client from Quarkus, to be able to issue the certificate based on the request coming from the edge device. 
And based on this request, we define the common name and the SAN, the subject alternative name. And then we created the certificate in PEM format, as well as in a Java keystore and truststore, and were able to send these to the edge device as a response. When the edge device has received the certificate, it is able to start the connection with the AMQ broker. And then the AMQ broker does the mutual TLS authentication, so it recognizes that this device is allowed to send data. And after the client certificate is verified, the device can start to send data to our infrastructure. More or less like that. Cool, cool. That looks like a very, very interesting architecture. From last year, I think it improved a lot. Last year was, you know, a POC. It was similar to this, but now it's more complete, I think. But I would like to listen to the winners. What do they think about this architecture, or about the interaction they had with the architecture? I know you worked on the client side, but I would also like to hear your feedback about the whole structure, the architecture, and the project. Don't be shy. I can say for our team it was very pleasant to work with the fact that we had the full environment ready already. So it really felt like you're not just playing with the software, you're actually building something with the software that has a use case to it, which was very interesting. The only problem we had was with the certificate itself, but it was good that you had those drop-in clinics each week; they really helped us forward and gave us a hint towards the solution that we eventually came up with. It was a very fun way to work, very true to how it is in a real development workflow as well. Thanks for your feedback. That reminds me, Chris: all this topology, all this architecture is on top of OpenShift, and we provided access to one single, let's say, big OpenShift cluster to all the participants in a multi-tenant way. Nice. 
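The device-side half of the flow Mattia describes, loading the keystore and truststore issued by the registration service and building a mutual-TLS context for the broker connection, can be sketched roughly like this in plain Java. This is a minimal illustration with hypothetical names; the HackFest's actual registration service and client code may differ.

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class EdgeTls {

    // Load a PKCS12 store from disk, as it might be received from the
    // registration service (the file path and password are up to the caller).
    public static KeyStore loadStore(String path, char[] password) throws Exception {
        KeyStore ks = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream(path)) {
            ks.load(in, password);
        }
        return ks;
    }

    // Build an SSLContext for mutual TLS: the keystore holds the device
    // certificate issued via cert-manager, the truststore holds the CA so
    // the device can verify the broker as well. Passing a null truststore
    // falls back to the JVM's default cacerts.
    public static SSLContext buildSslContext(KeyStore keyStore, char[] keyPassword,
                                             KeyStore trustStore) throws Exception {
        KeyManagerFactory kmf =
                KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, keyPassword);

        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        SSLContext ctx = SSLContext.getInstance("TLSv1.2");
        ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return ctx;
    }
}
```

The resulting context's socket factory would then be handed to whatever MQTT client the device uses to connect to the AMQ broker, which is what lets the broker verify the client certificate before accepting data.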
They worked in their own project, in their own kind of sandbox, on a quota-based system, right? So we kind of did what the Developer Sandbox is doing, no? Dividing the quota and the resources per project, and they all worked in this one OpenShift cluster. So that was a nice structure that we delivered, and also a good example if you want to do multi-tenancy, even a cascade of multi-tenancy. In this case each partner was a tenant, and you could even go further into a sub-tenant system. It's always the same mechanisms: quotas, namespaces, and role bindings, that kind of Kubernetes-based permission system. Cool. Any other feedback? Was it easy to work in this multi-tenant environment? Just out of curiosity, how was that experience? If you let me say it, it was very easy, because the instructions were very clear and the drop-in clinics were really helpful in showing us how to integrate with the platform. Also, definitely, the client side on this big diagram seems pretty easy to implement from the partner side, but believe us, it was not that easy, because the edge device part required some manual intervention. Also, dealing with the constraints of the edge device itself: you have to compile everything to a binary and run it. Implementing the auto-update service on the edge device was also very interesting, and will be very helpful to use in our future cases. And for us, as OrientLogic, we traditionally focus mainly on the Java enterprise stack, and it was a really good experience and we are really glad to have had this opportunity to learn about Quarkus, because we plan to use it in our future projects. And it was a really easy transition for us because the technology stack is mainly the same. So overall, the implementation was not that easy, but we divided the parts between the members of the team and each member was doing their part. 
Somebody implemented the Python service, somebody the Quarkus service, and somebody the edge device part, and we managed to assemble everything and run it. And the real pleasure was to see the results on the Grafana dashboard, the sensor data coming in and accumulating; it was a real accomplishment for us. Beautiful. That's awesome to hear. Thanks for your feedback. Any other feedback? So we've gone around the whole room; we are missing Cesar here. Yes, Cesar. Cesar is the most shy of our partners today. We learned the benefits of Quarkus in different workshops: reduced memory footprint, smaller application size, combining both reactive and imperative development in the same application, and finally deploying a container image with Podman and nothing else. Nice, nice. So the memory footprint consideration with Quarkus was interesting for you, and also working with Podman, which is an alternative client for managing containers. I was wondering, I know you worked on the client side, on the board. Did you also enjoy working with the Fedora IoT part? I know you were using Podman and also Quarkus on the client side. I was wondering what was the most interesting part on the client side, or about knowing this server side architecture. Yeah, that service. Okay, nice, nice. That's interesting, Chris. It also goes in the direction of the new RHEL for Edge offering that starts from the Fedora IoT experience. Cool. It's good to hear. Yeah, and this slide explains it a little bit better. RHEL for Edge devices. We used Fedora IoT, Quarkus, and GraalVM. So Andrea, that was a native compilation flow, correct? Right. Yeah, exactly. So one of the main outcomes of the community was the opportunity to actually deliver workloads directly at the far edge. So let's start from the beginning. Edge computing architectures in general can be quite complex, to say the least. 
Yeah, but still we have to admit that the OpenShift cluster and the entire platform itself save you quite a lot of time in setting up, configuring, and managing the clusters. We have automation tools, installation tools, and management tools. What really changes, and forces you to look at the architecture and the implementation from a different perspective, because that's not a joke at all, is the far edge and IoT side. Now, when you think of IoT devices, you think of all those small antennas and small sensors that you usually connect to something bigger, like a Raspberry Pi or whatever kind of nice IoT device. And that's easy to manage, because in that case you could have a REST API exposed by the small sensor, or you could just connect the sensor to a serial port of a Raspberry Pi. And still, this is doable somehow. The biggest issues come into play when you have to deliver the workload to your edge stations, or the far edge, or whatever. This leads to a series of issues that start from the different CPU architecture. Of course, just think of the standard Intel x86 architecture versus the ARM one, aka AArch64, which is just a different name for the same CPU architecture. In that case, you could definitely compile the workload manually. And we are talking about compiling it natively; otherwise, with a standard Java application, it's easy. Compiling the workload natively takes lots of resources and lots of time, unless you are on the same architecture. So this means, from the production perspective, from the enterprise perspective, we are talking about provisioning an AArch64, aka ARMv8, server. That could help, but it doesn't scale. And if you think of all those companies that base their own data center on a public cloud provider, that of course doesn't work. What we achieved, actually, was to create a tool, which we also contributed back to the Quarkus community, that could help compile Quarkus applications natively. 
I can't say in a cross-compiler fashion, but in a multi-arch fashion. And let me explain why. At the very beginning, when it was just myself, and considering I'm a long-time developer, definitely not a platform or infrastructure expert, I just set up a virtual machine on my Fedora desktop. A virtual machine using QEMU and KVM, piece of cake, easy. And I was emulating the Raspberry Pi CPU. That was quite helpful because I was just installing GraalVM on the target VM and compiling the applications. That of course wasn't scaling, and it's not the target of a cloud computing-based POC. But it helped, because I was able to generate the final image for my worker, for my Raspberry Pi, and then push it to Quay.io. So far, so good. When you think of, again, an enterprise environment, you have to imagine a series or a group of automated processes that produce the workload and could either notify the edge devices that the new workload is ready, as happens, for example, with your router or your mobile phone, where you get a notification and eventually decide on demand to upgrade your software; or, the other way, you expect that your devices automatically update the workload as soon as the new version is available. And this is, for example, a fantastic feature provided by Podman, which is embedded in Fedora IoT. So what we did was to make sure that OpenShift could run this CPU emulation, so we could compile the workload for a different CPU architecture on OpenShift, running on AWS in our case. This was not easy. In order to emulate the CPU, you still need QEMU. So our big piece of work was to make sure that QEMU was capable of running within a container. We started from yet another upstream project, available on Docker Hub, I guess, and GitHub, which is called multiarch. So we emulated this different CPU architecture using QEMU running within a container, and made sure we emulated what the Quarkus community provides. 
So a container that builds, that creates a container after the native build phase of the Quarkus application. In order to make this run on OpenShift, we had to apply some tweaks. Actually, Mattia, if you want to get into the details, I'm more than happy to hand over to you. So we had to make sure that each and every worker node on OpenShift was capable of emulating a different CPU, based on QEMU. In short, we created a DaemonSet running an image on each node, running as privileged, because you need to change some underlying settings to activate the multiarch support. Yeah, and having each and every worker node capable of emulating a different CPU architecture made it scale from the compilation perspective. Because even if using a container image to build your workload image takes probably 30% of the time of using a virtual machine, it's still expensive. I mean, it still takes 8 to 9 gigs of memory for each compilation process and probably 8 to 9 minutes. So we could scale the compilation processes and then update the images on the Raspberry Pi automatically through the Podman feature which is called auto-update. And this is something I encourage everyone in our audience to try, because it helps quite a lot when you have to manage the workload on an IoT or edge computing installation. That's interesting. I think Ben was the one developing that pipeline, Ben Tagliar, a colleague of ours in the Netherlands, and he developed this OpenShift pipeline. So we basically used Tekton here in the OpenShift cluster, with OpenShift Pipelines, to build such pipelines. And Mattia created this DaemonSet, which was able to add this feature to the nodes. So in RHEL CoreOS, or such operating systems which are container-based with a minimal surface, when you need to change something, you have to do it the Kubernetes way, right? So he basically created a machine config pool, a new setting that has been propagated across the whole set of nodes in the cluster. 
So this is interesting for ops people, cluster admins. How do I add features like this QEMU feature, or any other node-specific feature? Well, you can do it in this way, and you are always following the way to do things in OpenShift, which is the Kubernetes way. So we're managing even the operating system with the Kubernetes Machine API and machine sets. So this is how it works. And I think it's really cool, together with Tekton and OpenShift Pipelines, bringing such multi-arch device architecture emulation and then producing the binary to be executed on the board. So that's why I was saying, Andrea, that this year the architecture was more complete. I think there are lots of improvements. And at this point, I'm looking forward to the next development, because it looks like the project is growing, the community is growing. So I was also asking the winners today: what do you think about this project community? Did you have interaction with other participants? Did you have interaction with the organizers? How was this community interaction, in your opinion? Interaction. Anybody can talk. Feel free. I have to say the interaction was very good. It was clear from the beginning that the organizers were always available for questions, which was apparent especially on the Slack channel, where back-and-forth questions got quick replies. So you didn't have to think a day in advance if you had a question; replies came very quickly. And that was a very fun way to work, because you don't feel like you're working on your own island. You feel like you're part of something. Very cool. Yeah, I also agree with that, and definitely thanks to everybody who replied on time, but special thanks to Andrea, because he was really, really helpful, you know, in every situation. He was very helpful for us; we definitely did not feel alone or in any way desperate. 
We had one case when we could not do the native compilation part, with the emulation, and Andrea was really helpful in that moment to get through it. Thanks again. Thank you. I was about to reply because you always asked the right questions, so I could reply easily. But that was very nice of you, thank you. Cool. Thanks for your feedback. Yeah, Andrea, I was wondering at this point if we want to introduce what the next steps are for the community, for the project. Yeah, that's an interesting question. Thank you for this opportunity, Natale. We want to do more. We want to make sure we can cover each and every aspect of an extended edge computing, or anyway distributed, architecture. So we actually put out a call for proposals, and the call is still open. Just in case you have any idea, or you need any support, or you would like support, not enterprise support, but support in tackling any challenges you have, or just advice, we are happy to help and share. We would like to make it a bit more complex in terms of interaction between layers, because honestly, just an OpenShift platform as a central place to store events coming from Raspberry Pis is a very simple POC. Okay, the implementation is complex. No, I could actually say it takes time, but complex, not at all. It's elegant, not complex. Thank you. What Mattia did, what Ben did, is complex, creating cool stuff that is stable and scalable, but the implementation of a Quarkus service running natively is not that complex. What is complex is having to distribute resources and responsibilities, in terms of which service does what, or is responsible for what, in this kind of architecture I'm just showing here on my slide. Now, of course, the community is called QIoT, which stands for Quarkus and IoT, or Quarkus meets IoT actually. But that's not the key goal anymore. 
And we understood, thanks to the experience coming from the HackFest and what our partners shared with us, that the customer challenges and the partner challenges lie more in the aspects we mentioned already: distributed security, distributed workloads, and mainly, let me say, distributed integration. So that's why this diagram is simple, but it's self-explanatory. The edge devices: we are thinking of a manufacturing use case, t-shirt production, and this is one of my favorite t-shirts, right, it's Super Mario here. So let me just give you a quick, quick background. Think, in a simple fashion, of the way a t-shirt gets produced. First of all, you weave the t-shirt. Then you have to color it. Then you have to print the drawing. And last but not least, the packaging. So this is a kind of state diagram that should be managed by the machinery, which could be identified here as an edge device. It's not about the mechanical part; it's about the way a Quarkus-based microservice running on the machinery should handle the real t-shirt production. So this is not complex in terms of handling the mechanical part. It's more about making sure that each and every stage completes once the machinery gets the green light from a factory controller. And in this case it is represented, so the name here is wrong, by the factory controller; probably next time it's better to say by the single node OpenShift running as the factory controller. And here comes the first tough task, because you have to handle all the messages and the events going back and forth between the several edge devices, or the machineries producing the t-shirts, and the central controller. This is easy if you think of the capability of single node OpenShift to scale horizontally. Okay, so you can have two or three single node OpenShift instances to make sure high availability and scalability are guaranteed as well. 
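The stage flow just described, weave, color, print, package, with each stage waiting for a green light from the factory controller, is essentially a small state machine. A minimal sketch of the idea in plain Java; the class, method, and stage names here are our own illustration, not the community's actual implementation:

```java
public class TShirtMachine {

    // Production stages in declaration order; each one must be acknowledged
    // by the factory controller before the machinery moves to the next.
    public enum Stage { WEAVE, COLOR, PRINT, PACKAGE, DONE }

    private Stage current = Stage.WEAVE;

    public Stage current() {
        return current;
    }

    // Called when the factory controller sends the green light for the stage
    // the machinery just completed; returns the next stage to work on.
    public Stage advance(Stage acknowledged) {
        if (current == Stage.DONE) {
            throw new IllegalStateException("production already finished");
        }
        if (acknowledged != current) {
            throw new IllegalStateException("controller acknowledged " + acknowledged
                    + " but machine is at stage " + current);
        }
        current = Stage.values()[current.ordinal() + 1];
        return current;
    }
}
```

In the use case described above, the `advance` call would be triggered by a message from the single node OpenShift controller rather than a direct method call, but the transition logic stays the same.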
Once the t-shirt finally gets produced, the final production event has to be shared with the central controller, the data center plant. The data center plant is responsible as well for the product line distribution, which means deciding that a new t-shirt with a new drawing, a new color, a different size, male or female, a different gender, should be produced. And that's quite important, because in that case you have to make sure the factory controller maintains the information and secures the communication between the devices and the factory controller itself, while the devices are not allowed to access the services running on top of the data center plant. This can definitely be simulated using Podman virtual networks or Docker virtual networks. Our goal is to make sure we create trusted and secure virtual private networks between components installed on proper hardware. And that's the goal of the community. Of course, if you think of just the Quarkus or ActiveMQ or Kafka based services, this is easily replicable in a Docker environment, piece of cake. If you want to do more, like integrating this with distributed security, then you have something more to do. Also because the distributed security should be something that is managed and owned by the data center plant. And in this case the factory controller would behave as a master, just managing the existing devices and making sure the new devices, or the new machineries that spin up in the factory, receive the proper information for the production. And that's it. This use case is quite interesting. We are implementing the most complex part, which is probably, again, the edge device side, because the production is interesting even from the pure development perspective: implementing state machines and making sure the device, the messages, and all the telemetry get delivered properly. So there is more than one protocol; it's not just MQTT anymore. 
It's also about making sure transactional messages are delivered correctly to the factory controller. Correct. But then, on the other hand, we are looking for contributors who are more into OpenShift Data Science or Process Automation Manager, everything related to decision management and business process management, which is something we are missing as a skill. We'd love to get people with passion, but also with great skills, able to solve challenges or find the solution to a specific task quickly. And also people who are happy to learn from the skills we can actually share at the moment. This is backed, of course, by our integration layer. So the complexity here is securing the integration of all the messages flowing between the data center and the factory controller, going through the VPN over a secured connection through an AMQ Streams, AKA Kafka, cluster. But yeah, probably you were mentioning this during one of our community calls; if you want to give more details about that, about the integration between the data management part and the data center plant. Yeah, the idea was to leverage the AMQ Streams platform, so to have a more event-driven architecture. So the data center sends a request, sends an event, to the production side. And then we have the consumer on the production side. They receive the request for printing the t-shirt or whatever we would like to print, and then this consumer sends the right instruction, the right input, to all the IoT devices on the production side. And when the production is ready, on the other side we have a producer that is going to send back an event that the production side completed the order and they are ready for delivering that t-shirt. And that's it. So rather than using lots of microservices validating the information and just storing it, from Kafka to InfluxDB,
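The round trip just described (the data center publishes an order event, the production side consumes it, drives the devices, and publishes a completion event back) can be modeled in memory. This is a minimal sketch using `BlockingQueue`s as stand-ins for the two AMQ Streams (Kafka) topics; the topic and event names are illustrative, not the actual HackFest ones:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class EventFlow {
    // Stand-ins for the "orders" and "completions" Kafka topics
    final BlockingQueue<String> orders = new ArrayBlockingQueue<>(10);
    final BlockingQueue<String> completions = new ArrayBlockingQueue<>(10);

    // Data-center side: publish a production order event
    public void submitOrder(String order) throws InterruptedException {
        orders.put(order);
    }

    // Production side: consume the order, drive the device, report back
    public void productionStep() throws InterruptedException {
        String order = orders.take();          // consumer on the production side
        // ...the real system would drive the edge device through its stages...
        completions.put("completed:" + order); // producer reporting completion
    }

    // Data-center side: receive the completion event
    public String awaitCompletion() throws InterruptedException {
        return completions.take();
    }
}
```

The design point is that neither side calls the other directly; both only produce and consume events, which is what lets the transport be a secured AMQ Streams cluster between the two networks.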
this time we're going to properly try the integration between Kafka, so AMQ Streams, and other third-party services, which is something we are missing at the moment. So from our perspective, this is something we are keen to try more. Awesome. Yeah. So, you know, we're coming up on time here, and we're free to go over if we need to. What are some of the biggest takeaways people have learned on this journey during this HackFest? Is that for me or for our partners? For everybody, right? You're welcome to say, "Hey, Andrea, this is what I learned," you know. Simon, if you want to reply. Well, for me personally, it was mostly working with the Raspberry Pi, because up till now it was one of those things I never had much personal experience with. Right. But at the same time it was also a professional, not quite an eye-opener, because we were already eyeing Quarkus to use in production for certain things. But to see how far along it is, and using it with the documentation that's provided, which is also very nice, will push us forward to start using Quarkus more, when applicable, at customers and such. Nice. Well, for me, and I definitely would also agree with that, before this project IoT seemed like something that was kind of unknown to us. And now we know what an edge device can mean and how to implement a solution running on it, configuring it and making good use of it. Also, we learned that Quarkus has many aspects of usage; we mainly thought it was good just for microservices running on some server or cloud environment or Kubernetes, and it turned out that it runs fine on a Raspberry Pi, a small device, and does its job.
So I think, for the future, we definitely believe that edge-device use cases will be interesting for our customers, and we are looking forward to partnering with Red Hat on potential use cases we can pursue, based on the experience from this project, when approaching new opportunities. Nice. Awesome. At our company, we used an IoT project for automation in a small building on OpenShift, and we used a few jBPM rules, and we will try these new technologies, Quarkus and Podman, for the new project. Nice. I think it's awesome. I don't know if you want to chime in here at all, or... I think he's gone. Okay, fine. Yeah, yes. Off to the next webinar. Easy enough. Missed that message. Sorry, folks. So thank you very much to everyone joining today and participating in the HackFest, and congratulations on winning. I'm glad that everybody was able to get something out of it and learn how to do things a little differently in a solid, you know, kind of practice environment. So, Andrea, do you want to say anything before we sign off? You know, just thank you, Chris, again. This is a great opportunity, and I'm looking forward to having more people joining us. And I have to thank our partners, because it was an excellent opportunity to learn, and I had a lot of fun. Thanks to Mattia, thanks to the whole community and the people who supported me in running this complex and tough event. Thanks, everyone. Thanks, Andrea, for giving us this opportunity. Thanks, everyone. Thank you all for coming on and showing off your stuff this morning. And without further ado, I wish you all well. Everybody out there watching: I'm sorry if we missed your questions; like I mentioned at the beginning of the stream, we're having some technical difficulties this morning with our streaming service, so feel free to email me, short at redhat.com, and I will get you routed to the right person if you have questions on how to participate or anything like that.
And that's all for this show, but have no fear, there's another show coming up here in a little bit: we're going to be doing the Developer Experience Office Hour and talking about simplifying and standardizing developer environments. So that should be interesting. Thank you all, and we will see everybody soon. Stay safe out there, folks. Bye.