So, welcome everyone to DevConf 2022. My name is Lucy and I'm the moderator for this session. It's an honor for me to present Ben, Andrea and Mattia and their talk on Quarkus native running on ARMv8. So I hand over to you guys. Thank you, Lucy. I forgot to mention for the audience: if you have any questions, please use the Q&A tab. I'm sure our dear speakers will have time to answer them. Over to you. Thanks a lot. So welcome everyone to this last session of DevConf 2022 for today. We're going to talk about Quarkus native running on ARMv8, and about more than just the technical stuff, because we are part of an extended team that tries to build solutions around edge computing and cloud native development; I will talk about that shortly. I hope you are enjoying DevConf. This year it has again been virtual, but it's still always exciting. Let us thank the whole DevConf crew and staff, because they are fantastic, amazing, and they supported us quite extensively. Okay, so let's go through the presentation. First we will quickly introduce ourselves. Then I will give you an architectural overview, where I will try to explain why we spent such an effort to implement automation around ARMv8 and cloud native development. Then Mattia and Ben will discuss Quarkus and why we picked that specific framework, based on the Java programming language. And last but not least, portability and a walkthrough on OpenShift: we will show you and demonstrate how it works and how you can use the whole environment we implemented. So quickly, let me first of all introduce myself. My name is Andrea Battaglia, from Italy. Since the pandemic started and we've been locked down, I am spending my time in my hometown, in the very far south of Italy. I work in the Red Hat partner ecosystem.
So I look after EMEA partners when it's about complex solutions like edge computing and digital transformation. I am also the Red Hat Hackfest lead; the Hackfest is the enterprise event behind the community, which is the community where all these technical champions join to create solutions that are reproducible and usable. Well, thanks. Thanks, Andrea. Good evening, DevConf. I'm a solution architect for Red Hat, based in the land of windmills and tulips, the Netherlands. I've been involved with the Quarkus IoT community for the last two years or so, initially as a participant and now as a contributor as well. So I'm looking forward to telling you a little bit more about what we did in terms of containers. Thanks, Ben. And Mattia? Hi everyone, Mattia here. I'm a solution architect as well, working on OpenShift and everything around cloud, middleware and system integration. I've also made some Quarkus contributions. What I like to do for sport recently is padel training, which you see in this picture, where I tried to escape from a customer meeting. I joined the Hackfest and the QIoT community almost three years ago now, I guess; time flies in a pandemic, let's say. And I really enjoy trying new things at the edge, so here we are, trying to showcase what we have done so far during this year. Thanks, Mattia and Ben. So let's go quickly through an architectural overview: why would you need to run Quarkus native on ARMv8 devices at the edge? The QIoT community itself will be discussed tomorrow, actually, and we are looking forward to having you all in that next session, around how to build solutions empowered by communities. So we created this community because, back in the day, we were trying to understand customer challenges and to address them beforehand.
So we create something that we call, at the enterprise level, the 80% solution: something that could be used as a pitch, as a demo, as a PoC, but that could also be the foundation of a real customer scenario, right? In many cases you have the chance to make several layers of your distributed architecture talk to each other, but you definitely want some centralized automation and business logic at the data center layer, which means in turn that you want all the tools, processes and pipelines to produce, roll out and distribute your business logic, right? So, the containerized version of your source code. And that's crucial because the business logic, of course, runs on the data center and runs in the production facility, in the specific case, for example, of an edge manufacturing use case. And the facility, of course, is made up of two completely different components: you have an edge server and an edge device, plus other sensors that could be available in your landscape. So we wanted to make sure that whatever CPU architecture was available at the edge, on both device and server, we could produce containerized workloads and then, of course, roll them out and deploy them on the several edge layers. And this is what we have implemented. We got some support from Intel in this case for the edge layer, and I will never stop thanking them for the big support and collaboration. The idea is one single platform covering the entire landscape. So we have OpenShift Container Platform at the data center layer, we have single node OpenShift for the edge servers, and RHEL for Edge; or, if you want to think about it in a more upstream fashion, you can think of Fedora IoT as the upstream version of RHEL for Edge. But anyway, these platforms have one single thing in common that is of very high interest to us, which is the container technology, native in the operating system.
All of them, of course, are based on RHEL. So the data center takes care of creating and managing the container images, and of course the edge servers and edge devices are, let's say, the most important elements of a distributed, geolocalized edge computing architecture. So this is basically the overview and why we decided to go for that. And with this, I guess I hand over to Mattia. Mattia, do you want to share your screen, or shall I keep going through the slides? Mattia, you're muted. Are you sharing? Yeah, yeah. Thanks, Andrea, for the architecture. I guess you can see the presenter view, hopefully. So let's talk about supersonic, subatomic Java. What is it? The name is Quarkus, and the name is composed of two parts: "quark", the elementary particle, which is why it's subatomic, and "us", as in us, the people in computer science, who are really stubborn about how we do stuff. And that's why Quarkus was born. There are several principles, but we can summarize the key points of the Quarkus framework. First of all, it's a framework of frameworks with a container-first approach, which means that if you want to work with containers you need a fast boot time as well as a low memory footprint. And of course, when we talk about scaling, how to scale in and out, it has to have a really small footprint and therefore fit several use cases, like functions and so on. Unified configuration: what does it mean? It means it allows you to work with the classic imperative programming model, but also with the reactive one. So you have the ability to combine those ways of working, and it makes it quite easy, because with several utilities and predefined standard classes it allows you to really easily combine those paradigms.
And of course, as a framework of frameworks it's based on community standards and on a really large set of extensions which are de facto standards, like Hibernate, RESTEasy, Vert.x, all those frameworks that you are already really familiar with; you just use them within Quarkus. And last but not least, it's really about developer joy. Why is that? Because everything is automated out of the box, with zero-configuration capabilities, and there are also nice things like live reload: you no longer need to wait for Maven to build everything. If you change something, your application is automatically reloaded in the blink of an eye and you can see the change. Recently they also introduced the Dev UI, where you can see all your frameworks, with links to the guides in case you are still learning some extension, and you can review your properties, including the defaults. Also, if you implement your unit and integration tests in the right way, you can run the tests continuously under the hood while you're implementing your application. So in the end, Quarkus really speeds up the inner development loop: the faster the feedback loop, the faster you can refactor and test again. And one of the important things about Quarkus is that it's a really Kubernetes-native framework which allows you to generate native executable code. It gives you the capability to implement the native compilation in an easy way, such that roughly 80% of the work is done at build time and 20% of the work at runtime. So let's have a look at the numbers. The numbers in the top left corner are based on the time from start to the first request. On the left side we have just a simple REST application, and on the top right we have a REST application with CRUD capability.
And you see the different memory footprints, because when you add functionality to the application, you of course increase the memory consumed by the application a little bit. But the thing we want to show here is that, compared with other modern frameworks which use lazy initialization, Quarkus allows you to do all that initialization at build time, because time to first response is what matters for the end user, right? You want your application to start up as quickly as possible, and in this way you are also able to scale up easily when the number of requests increases. So, quickly analyzing the diagram below: you see that for a simple REST application, a Quarkus application running on a normal HotSpot JVM fits in around 73 megabytes of RAM, which is the total memory including everything used by the JVM: code cache, heap size and so on. And believe me, that's quite impressive for a JVM application. But if you compile the application to native, it's able to start with a much smaller footprint, around 12 megabytes of memory. What that means is that we can deploy roughly ten times more native Quarkus applications than traditional cloud Java applications; in terms of memory it's really comparable with Go, with the footprint of a Go binary. Okay, so pretty awesome. And that's one of the reasons we chose it for edge devices: at the edge you require a really small footprint to run on really small devices. And just a quick overview of imperative versus reactive: you have two definitions of REST services. One is the classic imperative REST service based on RESTEasy, and the one below shows how you can implement the same endpoint with reactive capabilities. And Quarkus allows you to do both: it provides great performance for blocking and non-blocking endpoints, and it also provides additional functionality on top of the JAX-RS standard.
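To make the two endpoint styles just mentioned concrete, here is a plain-Java sketch of the difference between returning a value directly (imperative, caller blocks) and returning an asynchronous container (reactive). In a real Quarkus resource the methods would carry JAX-RS annotations like @GET and @Path, and the reactive variant would return Mutiny's Uni<String>; here we stand in for Uni with the JDK's CompletableFuture so the sketch runs on its own, which is an analogy rather than the actual Quarkus API.

```java
import java.util.concurrent.CompletableFuture;

public class GreetingStyles {

    // Imperative style: the caller blocks until the value is ready.
    public static String helloBlocking() {
        return "hello";
    }

    // Reactive style: we immediately return a container that will
    // later emit either an item or a failure, much like Mutiny's Uni.
    public static CompletableFuture<String> helloAsync() {
        return CompletableFuture.supplyAsync(() -> "hello");
    }

    public static void main(String[] args) {
        System.out.println(helloBlocking());        // prints "hello"
        System.out.println(helloAsync().join());    // prints "hello"
    }
}
```

The visible result is the same string; the difference is that the reactive variant frees the calling thread while the value is produced, which is what lets a small edge device serve many concurrent requests.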
And to accomplish asynchronous communication, with reactive REST you create a method with the Uni return type, and you need to think of it like a stream that can only emit either an item, so your object, or a failure event. And of course Quarkus allows you to create those types of instances easily; you don't need to understand how everything works underneath, because it's based on the Mutiny API. And in this way, with reactive REST services, you're able to consume and serve more requests; and when you're talking about edge devices, you should expect quite a lot of requests. Yeah, that's it for the intro. So for today's demo, we implemented a simple application with just a REST endpoint. With just one source tree you're able to compile for different architectures, and we're going to show that: when you run it on your normal developer machine it will report x86, but when we launch it in an emulated environment or on a real edge device like a Raspberry Pi, you will see the ARM64 architecture. And that's it; I'll leave the talk to Ben. Thanks. Thanks, Mattia. Yeah, so like Mattia mentioned, what makes Quarkus interesting, especially for your edge devices or IoT devices, is the small footprint and quick startup times. Now, what gives us interesting flexibility is when we start actually looking at containers, right? So when you think about containers and deploying your application using containers, it's a fairly simple concept: I want to build something, my application, I want to ship it, and I want to run it in my environment. So containers give you the benefit that you can redeploy some of your dependencies without reinstalling; think of deploying a patch to your device. I can do delta updates, so I don't necessarily need to pull my full application all the time.
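The demo application's architecture-reporting endpoint boils down to a one-liner. Here is a plain-Java sketch of that logic; in the actual Quarkus app it would be a JAX-RS resource (annotated with @Path/@GET), and the class name here is illustrative, not taken from the talk's repo.

```java
public class ArchResource {

    // Reports the CPU architecture the JVM (or native image) is running
    // on: typically "amd64"/"x86_64" on a developer laptop, "aarch64"
    // on a Raspberry Pi or inside an emulated ARMv8 container.
    public static String arch() {
        return System.getProperty("os.arch");
    }

    public static void main(String[] args) {
        System.out.println("Running on: " + arch());
    }
}
```

Because the same source compiles unchanged for both targets, curling this endpoint is what lets the demo prove which architecture a given container is actually executing on.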
But it gives me that low overhead and flexibility in rolling out my application to our devices, right? So in essence this gives us portability, or the promise of portability. However, you might ask: does it really give us portability, especially when we start dealing with different types of architectures? So in our scenario, we wanted to deploy our containers on a Raspberry Pi, for example. So to do that, we need to look at the different options for how we can build a container for, in this case, the ARMv8 architecture. There are multiple approaches you could take to get to that point, right? So one option could be to use dedicated build VMs. And this is still probably one of the recommendations from GraalVM, right? You create a virtual machine, run your build in there and create your actual container. However, from a developer perspective that can be a bit cumbersome. I mean, personally, I don't like running virtual machines on my laptop if I can avoid it. Also, especially if you think about automating the process, a virtual machine becomes a bit heavy, in my opinion. A different approach could be to have architecture-specific Kubernetes clusters. So if I've got an ARM Kubernetes cluster, yes, I can build containers for ARM there. However, that means I might need a specific type of cluster for each type of architecture that I need to create images for, right? Another approach could be cross-compilation. Now, this looks like a very interesting option, since I can essentially create a binary for different architectures from an x86 machine. But this also depends a lot on the type of framework that you are using and the type of language that you need to compile.
So for example, if you do something in C, that might be more straightforward compared to doing a GraalVM native compilation for Quarkus, right? And finally, there's also the option to build on your device itself; think of a farm of Raspberry Pis, for example. Now, in concept, and in geek factor, that sounds cool. However, a Raspberry Pi just does not have the horsepower to do a proper build, or a proper native build, right? You might sit there for days. However, there's another option, and this is the one we've explored in the Quarkus IoT community. So about a year and a half ago, I found a project on GitHub called multiarch that uses QEMU components to help you run containers built for different architectures. Now, this makes use of a component in the kernel, binfmt_misc, that basically allows you to run miscellaneous binary formats. And this is what I'm trying to say with this whole reference to Star Trek and the universal translator, right? Typically when I need to identify a new language, I need some sort of snippet of that language and then interpret what kind of language it is. Now, this kernel component allows you to execute an arbitrary binary format, for different architectures for example. So in essence, the way it works is that there's some sort of magic identifier that this component picks up, and you can specify a specific interpreter to use when that identifier is found. So in our case, where this comes in useful is that if the kernel picks up that the binary I'm trying to run is for ARMv8, I can tell it to use QEMU to execute that binary. And what that does is emulate only the CPU. So it's a lot less overhead than full virtualization, where I need the whole stack from the hardware layer all the way up; I'm just translating those instructions. And this is done with a user-space QEMU component.
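For reference, registering such an interpreter is done by writing a one-line rule into the binfmt_misc procfs interface. The fragment below sketches the widely used rule for AArch64 ELF binaries (the magic bytes are the ELF header with machine type 0xb7; the interpreter path may differ per distribution, and the real setup in the talk's repo may wrap this differently):

```sh
# Rule format:  :name:type:offset:magic:mask:interpreter:flags
# The F flag makes the kernel load the interpreter once, so it also
# works for binaries inside containers.
echo ':qemu-aarch64:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xb7\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-aarch64-static:F' \
  > /proc/sys/fs/binfmt_misc/register
```

After this, any aarch64 ELF binary executed on the x86 host is transparently handed to qemu-aarch64-static, which is exactly the "universal translator" behavior described above.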
So based on this, we actually created a set of standardized containers that embed these QEMU static binaries inside the container. And we built a set of components that allow us to create a native builder, using GraalVM, and a Maven flavor as well; and together with that, a UBI-based runtime image that allows us to embed our application binaries. All right, so the way that works is that when I'm executing an ARMv8 container on an x86 machine, the CPU is emulated to make sure that I can actually execute that container. So that emulation happens inside the container itself, which gives us some nice flexibility. So, thinking about how this could be used: I always like thinking of the developer workflow, the developer experience, of how these building blocks can actually be used in a real-world environment. And especially if we think about developing applications using containers, it makes sense to add this capability inside your container environment itself, right? Think about it: I want to use the existing workflow that developers are familiar with to build for different architectures, run my test suites against that, and then eventually deploy to the device. So, to show how these blocks fit together: essentially what we need to tell OpenShift, or the nodes in OpenShift, is that when we see that magic identifier for a specific binary, we need to initiate the emulator within the container, right? So there's a component that basically just runs on each node and says: this is the interpreter to use when you see this signature, essentially.
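A builder image of the kind just described can be sketched as a short Dockerfile. The image names here are purely illustrative, not the actual images from the project; the point is just the shape: an aarch64 base with the static QEMU binary copied in.

```dockerfile
# Hypothetical sketch of a multi-arch native builder: an aarch64
# GraalVM/Maven base image with the statically linked QEMU user-space
# emulator embedded, so the container can run on an x86 host.
FROM quay.io/example/graalvm-maven:aarch64

# With binfmt_misc registered on the host (F flag), the kernel uses
# this interpreter for every aarch64 binary started in the container.
COPY qemu-aarch64-static /usr/bin/qemu-aarch64-static

# From here on, everything in the image (shell, javac, native-image)
# is aarch64 code, executed transparently under emulation on x86 nodes.
```

On a real ARMv8 host the same image runs without QEMU ever being invoked, which is why one image serves both the CI cluster and the device.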
Then, stepping through our process, we can follow a normal CI-style process where we use our multi-arch images to build as part of our pipeline: we assemble our container, or build our application, either native or using Maven, for example, and then inject that into a runtime image that we can then execute in our environment. But we need to see some proof. So I will walk you through what I've created on OpenShift, just to give you an idea of what this means, right, and how it all fits together. And after that, Mattia will show us that this actually runs on a Raspberry Pi as well, right? Exactly. So to kick it off, first we need to prep our environment, our OpenShift nodes, to register this miscellaneous binary format interpreter. Now, to do that, as part of our GitHub repos we create a DaemonSet which basically makes sure that a certain command runs on each of the nodes in the cluster. So it's a very simple pod that runs; I can show you in some of the logs. All it does is basically specify which interpreter to use for a specific architecture type. So that sets the scene, call it that: it preps your environment to enable it to run these multi-arch containers. Then we've also created a set of tasks for Tekton pipelines, essentially. There we've created two tasks, basically: one that does a normal Maven build, but for a different architecture, in this case ARMv8, and also a task that does a native build. So the only thing different in this task compared to a standard Maven task is that we actually base it on our multi-arch Maven image. And in this task we also cache some of our Maven repos, so that we can speed up our builds a little bit and don't have to download from Maven all the time. For the native task, we also have a sort of two-step build process.
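The node-prep DaemonSet just described can be sketched roughly as follows. All names and the image are hypothetical; the real manifest in the repo may differ, but the mechanism is the same: a privileged init container writes the binfmt_misc rule on each host, then the pod idles.

```yaml
# Illustrative DaemonSet registering the QEMU binfmt_misc interpreter
# on every node in the cluster.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: qemu-binfmt-register
spec:
  selector:
    matchLabels: {app: qemu-binfmt-register}
  template:
    metadata:
      labels: {app: qemu-binfmt-register}
    spec:
      initContainers:
      - name: register
        image: quay.io/example/qemu-binfmt-setup   # writes the rule to /proc
        securityContext:
          privileged: true   # required to touch the host's binfmt_misc
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9           # keep the pod alive
```

Because a DaemonSet schedules one pod per node, newly added nodes get the interpreter registered automatically, with no manual host setup.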
So in our first step we make sure that we download all our Maven dependencies locally, so that we can use them during the native build process, using our multi-arch builder image, essentially. And then we run the native build. So it should be very familiar to those that know Quarkus: similar parameters that we pass. Now, putting this together in a simple pipeline, I created two of these. I'm not going to run it during this session, because it does take a little bit of time, but essentially what it does is just a simple git clone, then a build, and it creates the image for us. All right, so it's fairly simple. Now, just to give you an indication of how long these things run, in all transparency: for the Maven build, once we've cached our dependencies, the build for the simple application takes about three minutes. And if we look at the native one, it's a bit slower, about 25 minutes in this case. I mean, this is without optimization, and this is running on my Intel NUC sitting in my attic, right? So there could be some performance improvements there. Now, this pipeline also created the simple application, so I'm just showing you some of the workloads, the parts that we've created. It's a fairly simple deploy of a version of the normal build and a version of the native build. So if we look at some of the memory usage: in this case it's been running for probably a week or two, sitting at about 500 megs for the Java flavor. However, looking at the native one, that one is a lot leaner on resource utilization, right? Also, if we look at the startup times for these, it's still quite quick for our native container, right? Now, maybe just to prove that we actually can call our service: this is our native endpoint. If I actually put the endpoint in, let's zoom in a little bit.
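The two-step native task described above can be sketched as a pair of Tekton steps. This is illustrative rather than the exact task from the repo: the image name is hypothetical, and the Maven goals shown are the standard Quarkus ones.

```yaml
# Sketch of the native-build Tekton task steps: both run inside the
# aarch64 multi-arch builder image, so on an x86 node they execute
# under QEMU emulation and the output is an ARMv8 binary.
steps:
- name: go-offline
  image: quay.io/example/multiarch-graalvm-maven:aarch64
  script: mvn -B dependency:go-offline      # step 1: cache dependencies
- name: native-build
  image: quay.io/example/multiarch-graalvm-maven:aarch64
  script: mvn -B package -Pnative -DskipTests   # step 2: native executable
```

Splitting dependency download from compilation is what makes the cached Maven repo pay off: only the second, expensive step is repeated on every code change.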
So now, based on the test application that Mattia showed, I can actually show that I'm running on aarch64, right? Which is ARMv8. However, it might also be interesting to show which processes are running. So I've got a little script, and Mattia will probably do something similar in a moment, just to prove a point, right? So if we run this, we can now see that inside this container we've got the QEMU components running. That's emulating our CPU for us, to make sure that we can actually execute this on our x86 cluster. And with that, I think, over to you, Mattia. Thanks, Ben, that was really nice. Let me show now if it works as expected on the Pi. Can you see the screen, the terminals? Yes, a little bit small. Small, okay, let's increase it. Better? Yeah. Okay. We have two tabs here. In one we have the JVM container, which is pushed on Quay.io, and on the right side we have the native one. So we can launch... oh, of course we are on the Pi. First of all, the proof that we are on the Pi: aarch64, okay. And so then I'll launch the JVM one, and I'll launch the native one. You see, the native one is quite fast; you see, the time on my Pi is even faster than on Ben's NUC. And then we see here that the Java one took five seconds. Look at the difference, really impressive. We see we have two containers, the native one and the JVM one, and in the process list we see a really low memory footprint: around 70 megabytes here, and around 30 megabytes for the native one. Let's check; because I have two containers running on the Pi, I exposed different ports. The native one is on 8081. Hello... oh, hello Quarkus, sorry. You see, and then we can check the JVM one as well. You see, it's a little bit slower, but the native one is really fast. So, to prove that we are not in an emulated environment, let me get to the command that Ben prepared for me and look into one of the containers. Okay, launch the command. You see here, we are really using the aarch64 JDK.
It's pretty awesome, right? So that's the proof that all the multi-arch stuff works as expected, and the native way gives really impressive performance on our edge device; in this case, we are on a Raspberry Pi 3. Yeah, maybe just to reiterate: when it actually runs on the device, it's not emulated at all. It's only emulated when we're running on a different-architecture host, call it that, which is fine. Exactly; we are using QEMU for both the compiler image and the workload image, the one that we deploy on the production side, right? So that's the big difference: QEMU jumps in only if the CPU architecture we are trying to use is different from the real physical CPU architecture underneath. That's where it comes into play. Thanks, guys. I just wanted to summarize again. We started with Quarkus native on an ARMv8 device simply because, technically speaking, it's awesome, right? The work we are trying to do and the goals we are trying to achieve here, as Ben correctly said, is creating pipelines that automate the generation and distribution of workloads across a distributed environment. And that fits brilliantly into whatever kind of edge computing use case, and it helps, of course, any time you have to deal with heterogeneous CPU architectures. Another thing worth mentioning is that we are not reinventing the wheel, right? We started two years ago using a virtual machine, and I was compiling the Quarkus native images in a virtual machine on my host at home, right? Not on a laptop; it was a really powerful enough computer. And then Ben came up with this idea and implemented this stuff. And what we are seeing now is that we are able to build a PoC for each and every use case, because whatever kind of hardware you put underneath, whatever kind of CPU architecture you have to deal with, we always have the opportunity to put some pipelines in place, native pipelines, and deal with the tasks we have to complete.
And not reinventing the wheel also because, so far, and I may be mistaken, but as far as I know the cross-compiling feature is still missing in GraalVM. So that's why we are kind of supporting, as a counterpart, the brilliant and fantastic job the guys from the Quarkus engineering team are doing with the Quarkus images, right? We are not integrated; we are on a parallel work stream, trying to achieve tasks that are not covered by the standard tools in the Quarkus universe. So now, we will stay a bit longer in case people want to have a technical conversation or ask specific questions after the session is over. And please feel free to put your questions in the Q&A section or to write something in the chat; we are more than happy to interact with you all. It's almost time to close the session, but before we do that, let us invite you to join the QIoT project. QIoT could sound weird; we definitely do much more than Quarkus for IoT. In that community we try to build pieces that could be integrated into other projects, that could be useful for customer projects or partner projects, whatever, or just to have fun. We play with Raspberry Pis, we play with technology powered by Intel, so enterprise edge devices. We have played with several sensors and sensor boards, and we are going to play with more and more devices, as far as the CPU shortage allows us to buy small devices; 64-bit native, for a fair price, of course. So we have a landing page; you can have a look at our landing page and our blog, hosted on GitHub Pages, to look at the use cases and PoCs we have already implemented, and the technologies we used. And last but not least, we have a quite extensive set of projects and small components, because our PoCs are distributed across several layers, so you have several Quarkus native applications running on each and every layer. We use enterprise technologies.
We are not trying to use only upstream technologies, because it's not worth it, as we propose this as a source of reusable components. And last but not least, we make extensive use of several technologies that run natively on container platforms like OpenShift. We have, I guess, a question here from Jan. I'm going to read it. Lucy, do you want to jump in here? It's up to you, basically. My pleasure. Be our guest. All right. Or even, Jan, if you would like to ask the question live, just click share audio and video and I can bring you in; that's also an option. Okay, I don't see anything right now, so I'm going to read it: "I want to handle a bunch of microcontrollers, ESP32s, over the network, with a device like a Raspberry Pi. Can I run a Quarkus native, simple REST client / REST API application on it?" Okay, would you like to reply? That one is yours. Okay, yeah; so the demo that I showed before was just a simple REST service, and a client could be added just as easily; it was running natively on an ARM device, on a Raspberry Pi. So you could do that. And as you could see from Mattia's demo, it actually takes just eight milliseconds. I can share it again if you want. Yeah, please, Mattia. So, eight milliseconds to start up, and only 30 megs of memory. Of course, the amount of resources used by the application in Ben's demo, on single node OpenShift on a 10th-gen Intel NUC, was completely different, because there the application is surrounded by the container platform, which of course makes use of additional resources. Please, Mattia. Yeah, so here we have the native one, then we have the JVM one. Can you maybe zoom in just a bit more?
Yeah, maybe just to add a little bit to that: especially when you're building native applications, it also depends on the type of libraries that you require in your application, because not all of them necessarily support native compilation. There might be some workarounds or ways around that, but that's just something to take into account when you start building, or trying to control microcontrollers, for example. Yeah, so beware; as Ben said, just try not to use libraries from outside the Quarkus universe, because that universe has a myriad of libraries and you can definitely find the one that suits your needs. Then there was another question, I guess, about the minimum, the oldest Raspberry Pi that can be supported. So, to run natively, Quarkus must run on a 64-bit capable CPU. The minimum is the Raspberry Pi 3B+, because that supports ARM64, aarch64, natively; but beware, you have to switch that on through some configuration in the boot section of your operating system. We definitely recommend using Fedora IoT for that now. And of course we have some Fedora IoT images ready to be flashed onto the SD card for your Raspberry Pi, and we have the kickstart file ready to be used if you want to compile the operating system image yourself, so you can experience the amazing, annoying part of building the OS image for the SD card on your own; because that's going to be for ARM64, and then you have to use some kind of emulation underneath to build that image. So it could be annoying; use our image, and you'll get there easily. Yeah, and try the multi-arch image builder with Quarkus, as we did for this demo, because otherwise that would take much longer, even with the basic Quarkus quickstarts.
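As a concrete example of that boot-section switch: on Raspberry Pi OS, the 64-bit kernel is requested with a single line in the firmware config file (this is distribution-specific; Fedora IoT ships a 64-bit aarch64 image, so it does not need this step):

```ini
# /boot/config.txt on Raspberry Pi OS -- request the 64-bit kernel
# (supported from the Raspberry Pi 3 family onwards)
arm_64bit=1
```

Without this, a Pi 3B+ boots a 32-bit kernel and `uname -m` reports armv7l, so aarch64 containers and native binaries will refuse to run.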
We actually used the Hello World quickstart. Go to Ben's repo, or have a look at our repos, where you'll find the word "edge" in the repo name, because we use a special naming convention to define whether an application should run on the data center, factory or edge layer. So you go to the edge ones and you can see that there are several Dockerfiles that use the standard builders for standard Java, the Quarkus image builders for Quarkus native on x86, and our multi-arch builder for ARMv8. Of course, as Ben said, you can use standard Java and also compile your Java for ARMv8 directly on a Raspberry Pi; plain Java takes almost no resources. The point is that the native compilation takes quite a huge amount of resources, okay? You have to think, also when emulating, of a minimum requirement of five to six gigabytes of memory, up to whatever, depending on how many modules from the Quarkus universe you are putting into your application. More than welcome. I guess we can mention tomorrow, Andrea, that we have a... Yeah, let me advertise it again. Tomorrow we are presenting another session; that one is about the community we are driving. Unfortunately we couldn't invite more speakers, but our colleagues will join anyway, not as speakers, so you will have the chance to ask questions of, and interact with, some of the most esteemed members of this community, which again is contributed to not just by Red Hatters, but also people from the partner ecosystem and people who participated in the event, people from outside, the whole Red Hat partner or customer ecosystem, people that have a huge knowledge, understanding and expertise in the IoT world, and who, for example, pushed Mattia and myself to talk about security and distributed certificates when it's about connecting devices with the edge server. So it's kind of interesting to see, and there are also interesting and funny people to talk to. With this, I guess we are done, Lucy.
Yeah, and I thank you so much for the presentation and for the demo. It was really interesting, and a great segue, actually, to tomorrow. Your session will be around noon, but we definitely start in the morning, again at 9 a.m. DevConf is not over; we still have one more day to go. And I'm grateful that you actually closed today's sessions at five with this amazing presentation. So I thank you all. Thanks a lot. Thanks, everyone. Thanks, Lucy. And happy DevConf. Yeah, happy DevConf, exactly. Bye-bye. Happy DevConf. Bye-bye.