Welcome back to OpenShift Coffee Break, and have a coffee with us today. Today we have the Hackfest edition, with our co-host Andrea, and Gunther and Mattia joining us. Andrea, take it away.

Thank you, Andrea. Good morning, afternoon and evening everyone, and welcome to this third session of the Hackfest office hours hosted by OpenShift TV Coffee Break. I want to thank again Andrea Natale and the whole OpenShift TV EMEA team; you are awesome guys, thanks a lot for this opportunity. So, as Andrea said, take your seat and enjoy the show. The agenda for today is quite interesting and exciting, and I hope you enjoy it as well. We will have a quick business overview, which I will give myself; then Gunther will talk about the business plan for the Red Hat partner ecosystem, and we will close with a technical pitch from Mattia on the multi-arch container builds implemented by the Hackfest technical community. Business is interesting to discuss, even though we always try to keep it technical, right? But today I'm keen to share some super news with all of you: the partnership between Red Hat and ABB, announced early last week. This global partnership is about delivering automation and industrial solutions at scale using Red Hat open source technology, which is awesome. ABB is a very important Red Hat partner, delivering automation and industrial software solutions at the intersection of IT and OT, information technology and operational technology respectively, which is crucial in the edge computing space: the two coexist, and both are compulsory for the proper design and implementation of an edge computing solution, blueprint or architecture.
Red Hat OpenShift, supported of course by Red Hat Enterprise Linux, will now act as the underpinning open hybrid backbone for ABB's industrial platform and analytics suite, enabling greater operational efficiency and real-time decision making at the edge. So we're not just talking about architecture, workload distribution or security distribution; we're also talking about telemetry, analytics and management of the components at the edge, at both the server and the device layer, and that's quite important. Please stay tuned: there is already plenty of news out there, and we will be keen to share updates as soon as we have something relevant.

Another important piece I would like to announce is the kickoff of the collaboration between Red Hat's technical team and the validated partners' technical teams. This collaboration is very important to give more business value to the blueprints we create in the Red Hat Hackfest program. The Hackfest technical team is always looking for blueprints that target verticals or business cases while adopting the latest and greatest from the enterprise portfolio, meaning stable, supported and certified, and not just from Red Hat but also from our technology vendor partners, which are technology vendors to our customers as well. Through that adoption, and through the collaboration between those two technical teams, whatever we co-create with our partner ecosystem and technology vendors will be added to the validated partners community and progressively certified and validated, giving it more and more business relevance. That's the update side; I guess I spent less than five minutes, which is fine.

Now I would like to introduce Gunther Herro, manager of the EMEA technical partner development team, which belongs to the partner development department in Red Hat EMEA, and also my former manager, so I'm proud to have him here and I'm looking forward to hearing from him. Gunther, I would love to hear from you what this plan is about and why it is so vital for our partner ecosystem to co-create business solutions with us. Gunther, the stage is yours.

Okay, thank you very much, Andrea, for inviting me. It's a pleasure to be here and to give you a little insight into how we conduct planning, what the methodologies are and where the complexity comes from. I'm a big fan of looking behind the curtain, removing the complexity and thinking a little bit about the basics, because in the basics we often realize that it's not that complicated and that there are a lot of patterns we can repeat over and over. Andrea already shared that I'm the manager of the partner enablement team he heavily contributed to in the past. The team consists of five technicians who provide technical training in a manner that Red Hat partners can articulate their specific knowledge and interests, and my team helps to combine that knowledge with Red Hat technology. I'm sure you can see my screen. I just want to mention the target group we focus on: it's always sales engineers, architects and developers, and it is heavily focused on architects. When you compare the sales roles and the sales engineer roles, it's all pre-sales: knowing the competition, having good arguments for how to position a solution, and finding the use cases that are relevant for you as a customer, or for you as a partner of course, but from a technical perspective, always paired with the "how to demo" piece. Because at the end we need to show something; we are oftentimes very much in love with our technology, but in the end I need to show something that is of value for my customer and for my business case. So there are different things that we can
deliver, and different paths that make sense on a journey. Andrea mentioned the Hackfest, and the Hackfest, as part of a process, is where you come together, get used to the technology, get to know others at the entry point and become part of a community. That leads to some kind of end-to-end solution, maybe alone, maybe together with other partners and stakeholders, and to being able to provide an end-to-end solution for customers, which might for example be driven through a partner-led roadshow. On the other hand we have classical enablement, when it's all about products and solutions created by ourselves. In any case I need proper planning, and this is why Andrea invited me to talk about Champ. I just wanted to show the context, because it leads to a programmatic approach and to programs that help to scale. Why? Because we have different market segments. What you see here is from the book Crossing the Chasm, written in 1991: we have different stages of customers. We have customers in a phase of "I want to be first, I want an advantage, and I can live with a product that is a 0.9 edition"; it's okay if there are some flaws, it's more important to be number one in my market. And then we have the long tail, the big mass of customers interested in solutions that are not only market-ready but, at a later stage, also very competitive, or, further along the long tail, so reliable and unavoidable that they say "okay, yes, now I need it too". Depending on which cycle you are selling into, you need a different approach, and therefore it is vital to sit down first and think about what it is you really need: am I addressing these emerging needs, is it something I want to do to grow my business, or am I serving an established market? In all of those cases I need a different approach, different enablement and different support. We can do this very opportunistically, of course, but we can also sit down, have a discussion and put it in a plan, and this session is meant to inform you very openly about how we do that and how serious this is for Red Hat, because we know we have pretty complicated pieces of technology. So it is important to have this conversation down to the point where we say "here is the business case for the customer", one that makes sense for everyone who enters the room, not only for those who can also read the command line. It's all about predictability; it's about strengths and commitments. We want to sit down and agree that this needs to be a win-win situation at the end; we want to focus on clear activities in those market segments; and of course we want to connect our technology with a business perspective to achieve a valuable outcome for everyone. And please bear with me: there is a proper description of the process, which is not available to everyone, only to Red Hat partners, where you can see all the nitty-gritty details of the workshop and how it is designed. You might think there is something super special in there, but in the end it is a planning cycle, something where it is important to have workshops with good communication and good collaboration, where you constantly keep track of things. And this is the secret sauce; the sauce is, as always, the people who run the process.

Sorry, may I ask you a question? Absolutely, please. I know we have several levels of partnership within our ecosystem, and also different types of partners. Is this Champ program available to all of them? Could you give us some more details? The Champ program itself is for Advanced and Premier partners, but of course, no matter whether it's called Champ or whether you have another way to plan out joint activities, in any case when we try
to create solutions, when we try to team up in communities, there is always a place where we, from our supporting end, but of course also our partner account management and the teams within Red Hat, ask: where is the written documentation, where is the place where we keep track of what we want to achieve? So Champ is very specific to Advanced and Premier, but there are lighter-weight variations of it, for example for Ready partners. Thank you; I guess flexibility is the key, isn't it? Absolutely it is, and purely opportunistic working is not really an option, so this is where it comes from. What we try to achieve is one process that is really consistent throughout the company, so everyone knows where to connect with the information, and so we can standardize and deliver workshops in consistent quality. We also want everyone to use the shortest path to the right level of knowledge. There is plenty of training out there, and if you ask to become a RHEL administrator, for example, I can easily point you to a course that lasts 86 hours. The question is: are you willing to spend 86 hours? You might answer, "well, I have some Linux experience", so let's find the gap, let's find what is needed to catch up, and let's squeeze the time, because time is the thing we all have the least of at the moment. So we are always trying to help find the shortest path for you to really be able to understand. Of course it also helps ourselves and our partners to be compliant and to have good documentation, and with a good plan, you guessed it, you get good marketing support. So this is a process where we prepare some things properly up front, and then there are things where we sit down together, and Andrea mentioned it: there is a

place where we think of use cases that make sense for the market. These are then the use cases we also bring into a planning session, saying: look, this is what the market is potentially looking for; how can you contribute, and in which variation can you make a solution or an offering for your customers out of it that is unique, that makes customers call especially you for that? That's one piece. The other piece we found is that at the end someone needs to advertise it, someone needs to talk about it, and this is why we created a format called a solution brief. This is more or less a standard; it sits somewhere between a product description and a reference case, and it tells you what is on the shelf. I'm very passionate about solution briefs, because for me it's always a very interesting exercise to not only fall in love with the technology but then articulate in just one sentence: what is this good for, and what problem does this solution solve? So it is really an end-to-end cycle that we take care of, that we want to address, and where we want to help with all our experience and knowledge to make it easy to consume for everyone, especially in the edge area, where it's very unlikely that a single party can deliver an end-to-end solution with all the nitty-gritty details. In many cases it's a community, a micro-community, a setting of partner companies, third-party vendors and stakeholders who contribute to make it one solution that solves a specific issue. So this was a quick overview to demonstrate how much we care and why it's so important for us to have this central piece of information where we gather all these things, to be able to support in a proper way and to give our partners, from the beginning, the good feeling that their contribution and their commitment will definitely lead to a good
outcome, for the benefit of our customers.

Thank you so much, Gunther, that was quite interesting, and I also believe it gives our partners an idea of how much we can do together. As I can see there are no questions for you in the chat yet, so I would like to ask you a quick one, if I may. Of course. The Hackfest brings together several types of partners: they could be small, super technical partners, but also technology vendors; the latest events have been sponsored by Intel, for example, and in Asia Pacific we have AWS for the hyperscaler piece. Is the Champ plan, in your opinion, the natural continuation or next step for those super technical partners bringing a solution to the blueprints we create? Or how would you validate those ideas, or those components we adopt in a distributed architecture? Let me phrase it this way: I think it's important to understand who contributes what, and who creates value at which point of a process. The Champ plan makes sense if we have a joint business case, if we are addressing a customer segment or a vertical together, and if we want predictability: we want to know how much support and how much resource we want to spend based on a projection of an outcome. Now, with very specific technical contributions we might be in a situation where someone is doing something super special, a make-or-break thing, but the value happens somewhere else. There is a value chain of contributions: that partner influences it, they are crucial, but the plan is not set up for them, although they are of course part of the plan. This is a very interesting question, because this is where it gets tricky: the Champ plan itself is focused on a business partner and the business that we jointly

do, but of course there need to be cross-connections to say who is a critical element of the plan, and to make sure these ecosystems are followed up and that someone takes care of these collaborations. Fantastic, thank you, that's awesome; it fits brilliantly into the Hackfest value proposition and, as you mentioned already, into building those micro-communities that help customers and partners solve customer and business challenges. Thank you so much, Gunther.

So I guess it's time to talk technical here. Let me formally introduce Mattia; you probably know him very well, because it's not the first time he presents, and we have presented together. Mattia is a Red Hatter, a principal consultant in the Alps region; he is also an esteemed member of the Red Hat Hackfest technical community and a great cloud-native and application security expert. And as a principal consultant he's supposed to know everything, right? I guess we can finish like that; that was perfect. That's not for free, right? Hi everyone, by the way. Let me also introduce the topic, because it's something we really care about, and we've been working on it for more than one year: Mattia, myself, Bentel Yard and other esteemed members of the Hackfest community. The project Mattia is keen to talk to you about is called the multi-arch builder. To give you an example, think of the need to build workloads for devices, servers or components of your distributed architecture that are based on different CPU architectures, different platforms. How would you do that from a single point? How would you use your OpenShift platform as a generator of container images? Mattia, the stage is yours. Thanks a lot, Andrea, for the introduction. So, yeah, that's what it's
about. And why Quarkus? What makes it interesting for edge devices? We started this community work around Quarkus because it provides a small footprint and a really quick startup, which is a hard requirement for an edge device. When you start looking at containers, the first thing that comes to mind is portability: I build my application with specific dependencies, then I ship the container around my environment without worrying about redeploying patches or updates. It gives me the flexibility to roll out my application at any time and without overhead, so in a sense I can deploy it anywhere. But once we start thinking about different architectures, like a Raspberry Pi, which is an ARM device, a whole new world opens up. If you really want to explore how to build for different architectures, there are four types of approaches. The first one that comes to mind is a dedicated VM, and that's actually how we started in the community: Andrea began by spinning up a dedicated VM to try to build a Quarkus application for the ARM architecture. That approach is not really flexible, automating it is not a nice experience, and it's better to avoid running VMs where you can, because to run a good VM you also need a good infrastructure under the hood. Another approach is a dedicated Kubernetes cluster whose worker nodes are all based on the ARM architecture; but that means, and this is not just about ARM devices, you need a specific cluster for each architecture type, which becomes quite expensive just to create an image. Cross-compilation is something nice and very attractive, of course, but the catch with cross-compilation is that it depends on the language you use and on the framework: every framework has its own way to do cross-compilation, and the same goes for the language. Last but not least, you can build on the device itself, and now we are talking about the Raspberry Pi; that's my Raspberry Pi, which we can show you in the demo. It seems like a good idea, because you build right on the device, so you make sure your application will work as expected; but the power of an edge device is not good enough for a proper compilation, let alone a native compilation. So there is another option, which the Quarkus community, the Red Hat Quarkus community, explored: QEMU, which does user-space emulation. QEMU allows you to run containers for a different architecture by utilizing a kernel feature (binfmt_misc) that lets you run various binary format types. Do you know how it works? Neither do I; no, kidding. Practically, based on an identifier the kernel recognizes the type of a binary, and based on that type it dispatches the right interpreter. For instance, when I want to build an ARM container image, I tell QEMU I want the ARM architecture; QEMU then emulates the CPU, which means my build process runs under the emulated CPU, using that architecture under the hood. With this approach you no longer need the full stack a virtual machine gives you, from the hardware up; the emulation simulates only the actual CPU. And the nice thing here is that the emulation takes place inside the container, which gives developers the flexibility to reuse the existing flows they are already familiar with. It's not a disruptive approach: you just use the right base image and let it do the job for you. This works fine in a local environment, on your local workspace.
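To make that local flow concrete, here is a minimal sketch of a QEMU-emulated build with Podman on an x86_64 host. This is a hedged illustration, not the exact Hackfest tooling: the `multiarch/qemu-user-static` image is one common way to register the binfmt_misc handlers, and the image names and tags are placeholders.

```shell
# One-off, privileged step: register the QEMU static interpreters with the
# kernel's binfmt_misc handler, so foreign-architecture binaries run under
# emulation transparently.
podman run --rm --privileged docker.io/multiarch/qemu-user-static --reset -p yes

# Build an arm64 image on the x86_64 host; RUN steps inside the build
# execute aarch64 binaries through the registered qemu interpreter.
podman build --arch=arm64 -t quay.io/example/quarkus-hello:arm64 .

# Verify the target architecture recorded in the image metadata.
podman inspect --format '{{.Architecture}}' quay.io/example/quarkus-hello:arm64
```

The registration survives until reboot, so subsequent builds for any registered architecture need no extra setup.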
To scale the approach out, you can also do this on an OpenShift or Kubernetes cluster, where you configure dedicated worker nodes for your ARM builds to do this magic for you. During the Hackfest we prepared some standardized base images which let you easily produce multi-arch container runtime images based on RHEL, and because we are using Quarkus, a Java framework, we provide two types of base image: one with the standard JVM builder image, and one with the native builder image. This way you just use these images, with the QEMU-enabled static binary, and you are able to deploy your application on your device. May I ask you a question here, just to make it clearer for our audience? You're telling me that through QEMU I can build starting from a base image that provides hardware emulation at the container level, in a static way, so I don't need to spin up a whole QEMU instance; and also that I'm able to build my image, whether in Quarkus native mode or Quarkus JVM mode, for each and every architecture supported by the QEMU emulation? Yes, that's correct. Also consider that when you run the container on your laptop, which is x86, you will see the emulation active; but when your container runs on the ARM device, it actually uses the architecture of the device. So you're telling me it works as a pass-through: I don't pay the QEMU overhead or latency when I run on the target architecture? That's also correct. Thank you again, thanks for sharing. And talking about creating a consistent developer workflow, you could extend your current pipeline, for example with Tekton, to provide a cluster task that developers can use to build an image for a specific architecture; that way you can use a similar workflow not just for classic applications running on a classic Kubernetes cluster but also for applications running on the edge device.
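As an illustration of how such builder and runtime images combine, here is a hedged multi-stage Dockerfile sketch for a Quarkus native build targeting arm64. All image names here are placeholders standing in for the real Hackfest base images, which live in the repositories linked later.

```dockerfile
# Build stage: a hypothetical native builder image (Mandrel/GraalVM inside),
# pulled for the arm64 architecture so the compiler runs under emulation
# when the host is x86_64.
FROM quay.io/example/ubi-quarkus-native-builder:arm64 AS build
COPY --chown=quarkus:quarkus . /project
WORKDIR /project
RUN ./mvnw -B package -Pnative -DskipTests

# Runtime stage: a minimal UBI layer; the build stage produced a genuine
# aarch64 binary, so the final image runs natively on the device.
FROM registry.access.redhat.com/ubi8/ubi-minimal
COPY --from=build /project/target/*-runner /application
ENTRYPOINT ["/application"]
```

The same split applies in JVM mode, with a JDK builder image and a JRE-carrying runtime image instead of the native pair.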
If you look at the links we provide on the slide, those are the Git repositories that help you spin up these projects on your OpenShift cluster. What happens is that they launch a DaemonSet which configures the specific worker nodes for your builds, with the right QEMU settings, so the next time you launch the Tekton pipeline it leverages the QEMU emulation for you. So let's switch to the demo. The demo is simple; and don't mind the picture you see, it's just so hot these days that I'm really sweating right now. Let me switch screens; I'm going to show you the container running on the Raspberry Pi. If you have any questions for me while I prepare the setup, please go ahead. It's funny, because we were supposed to be ready for the show and you were supposed to have the setup ready, but it's fine, we can discuss from the technical standpoint in the meantime. Instead of asking Mattia a question, let me give you a more interesting perspective on this. Mattia mentioned the possibility of creating a DaemonSet to dedicate a few worker nodes, or just one, depending on the volume of compilations or Tekton pipelines you want to run on your OpenShift platform. That can be the starting point for building, testing and deploying at scale, also for your devices. That's something we will discuss, and hopefully demo, during the next Hackfest office hours episodes and sessions, and it's definitely something you can think about for your customer implementations. Cool, thanks. So, in the sample repository, the multi-arch builder demo, there is just a simple Quarkus hello-world application; the difference is that we added two Dockerfiles, the JVM arm64 one and the native one.
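The DaemonSet mechanism mentioned above can be sketched roughly as follows. This is an assumption-laden outline, not the exact manifest from the linked repositories: the namespace, node label and the use of the `multiarch/qemu-user-static` image for registration are placeholders, and the pause container is one common pattern for a one-shot privileged setup per node.

```yaml
# Hedged sketch: register QEMU binfmt handlers on the nodes dedicated to
# multi-arch builds, selected via a (hypothetical) node label.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: qemu-binfmt-register
  namespace: multiarch-build
spec:
  selector:
    matchLabels:
      app: qemu-binfmt-register
  template:
    metadata:
      labels:
        app: qemu-binfmt-register
    spec:
      nodeSelector:
        multiarch-build: "true"        # only the dedicated build workers
      initContainers:
      - name: register
        image: docker.io/multiarch/qemu-user-static
        args: ["--reset", "-p", "yes"]
        securityContext:
          privileged: true             # binfmt_misc registration needs it
      containers:
      - name: pause                    # keeps the pod alive after setup
        image: registry.k8s.io/pause:3.9
```

Once the handlers are registered on a node, any Tekton task scheduled there can build foreign-architecture images without extra configuration.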
As you can see here, we have a builder image based on our multi-arch builder, which lets you build your container using the QEMU emulation; this way you are able to build the container for your Raspberry Pi in this case, and the same goes for JVM mode. If I switch the screen on top, you can see... Sorry to interrupt, Mattia, could you increase the font size? Yes, that's what I was asking. Let me increase it here as well; I forget every time. Much better, thank you. Okay, so we have a builder image for the native Quarkus way, and then the JVM builder image for Quarkus. Now let me switch to my Raspberry Pi; just to prove I'm running on this device, I can show you some information. This is our device. I already built the container on my machine and pushed it to Quay, but just for the demo, in this additional tab I'm running the container build of the native image for my ARM device. As you can see it takes a bit longer than I expected, because everything is running under the container engine on my MacBook; you need to size the engine with enough CPU and memory to make it faster. But if you are rich like Andrea, with a bigger machine, it takes a few minutes, right Andrea? That's pure provocation, that's all! Anyway, we can give a few numbers here. Andrea is also kindly sharing links to the most relevant repositories on GitHub: you will find the repository to build the builder image, the repository with the demo, and the repository containing the instructions to create the DaemonSet on OpenShift. Running this on an Intel i9 or similar takes a couple of minutes, so the emulation is quite fast; compiling a basic Quarkus getting-started application takes roughly six to seven minutes on a standard OpenShift reference architecture. We also have GitHub workflows to run this with GitHub Actions in our GitHub workspace, and those can be read and downloaded from the repository called, what's the name, the UBI multi-arch builder demo. There we have implemented several workflows in the modern GitHub way: reusable workflows that show how you can compile several versions of the same application in parallel, in JVM mode or native mode, but also for different CPU architectures, based on your technical needs. On GitHub, though, the amount of resources is limited: the recommended available memory for a native compilation is seven to eight gigabytes, and it already takes a bit longer there. A standard x86 compilation, same CPU architecture and no emulation, takes a few minutes with only four gigabytes of memory; compiling for a different architecture with QEMU in place takes roughly 30 to 32 minutes. That's with a limited amount of resources from both the vCPU and the memory standpoint, but it works, I have to say, quite nicely. We are also in discussions, and we will give you more details shortly, with the Quarkus technical team about merging our efforts; that's something we will probably discuss and announce later this year. Is it still compiling, Mattia? Told you, this is something we should have started yesterday. No, I have it ready already; I just wanted to showcase what it does, and there are other things we want to show. Yeah, because I guess the Quarkus discussion could take a while. And I can also hear in the background the noise of your fan, your computer's fan; is that a worker node? No, that's not a worker, that's me working! Okay, so be
careful with your computers; and you have a good point, that is my computer. So, in this tab on my Raspberry Pi I'm running the JVM version, as you see here. I can run it right now, and it starts pretty fast: with the JVM it usually takes about five seconds, and yes, there it is, five seconds. In this other tab I launch the native mode, which usually takes, as you see here, less than one second: 0.10 seconds, which is really impressive. What I want to show you is that the native one runs natively with no issue, because it's a binary compiled for aarch64; and for the Java one, that it is actually using a JVM built for arm64, even though the container was built on my x86 laptop. There is a nice command to showcase this, which I prepared right before; you can run it on your Raspberry Pi directly or inside the container, because, you know, a container is just a process running on your machine. Where is it... here, you can see it's using my JDK for aarch64 in the actual container. And just to check from the container side as well, let's go with Podman: podman exec, oops, minus ti, and we get the container, which is this one; let me paste the command from before, otherwise I can't do it by heart. Same command, and you see here process one, of course, and then the JVM for aarch64; so it's a real aarch64 container. So, Mattia, you are telling us this works; again, just to repeat it, because it helps: if I created a base image and then a layer on top that adds a JVM, or GraalVM, or Mandrel, for whatever target architecture, could that be M1 or M2 or Power for example, would that work exactly the same? Yes; for now we showed it just for aarch64, but nothing stops you from providing a specific container for a specific architecture. That's great. So, Mattia, I kindly invite you to stop the compilation on your machine, because you're breaking up and it's draining all your resources.
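The check Mattia runs can be sketched with a couple of commands; the container name `quarkus-jvm` is a placeholder, and the exact paths depend on the image.

```shell
# On the Raspberry Pi host: confirm we are on an arm64 kernel.
uname -m

# Inside the running JVM container: the machine architecture reported is
# that of the device, and PID 1 is the Java process itself, confirming the
# image built on an x86_64 laptop runs natively here.
podman exec -ti quarkus-jvm sh -c 'uname -m; cat /proc/1/comm'
```

Because a container shares the host kernel, there is nothing left to emulate at runtime on the target device: the aarch64 binaries simply execute natively.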
your resources poor poor laptop and what i would like you to uh present in if you're keen to is an overview of those um of those github workflows we discussed so in the in the builder demo project um if you could go into the github workflow um folder on github and show to our audience how we do that build that would be also okay just give me one sec maybe m1 you can mention what will be the next step for this build because we were discussing as well with the quarkus engineer maybe we can talk about m1 to prepare the screen yeah that's that's um interesting to say so this um this tool the multi arc builder tool um has been implemented by bentel yard and contributed by myself and matthias over time but the the first implementation is uh happened one year ago um this is definitely something that fills the gap between um the customer needs and technically speaking what the crowd vm or monitor vm they are not providing which is cost completion which is complex heavy um and takes a while so instead of waiting for our friends from grant vm grant vm and monitor vm for a cross compilation um our friends from quarkus community they came to us proposing a collaboration now again this is going to be announced uh probably in the August early September but our idea is to make sure that each and every architecture is covered when at the moment we do one to one we do x86 to arc so we want to be able to compile quarkus natively or in jvm mode from any source of a source platform to any destination platform that will bring us the the necessity the need for creating a matrix of um of builders right and of course in turn a matrix of uh base images because one um to build the image we use our builder to run the image we don't use the builder but we just use the base image that andrea kindly shared in our chat so um we also invite our audience here to have a look to test and try and also to contribute if they feel like they would like to do more um that would be interesting to see how 
It would be interesting to see how the users of the latest MacBook Pro would actually benefit from this, and whether it could be beneficial for them to compile, on their M-series processor, a container that runs for example on a different architecture like x86 or arm64. That could definitely be awesome. So feel free to reach out to the community: on the blog landing page that Andrea kindly shared earlier, at the bottom of the page, there are the links to our Google group, the link to our weekly meeting on Google Meet — which is definitely open — and also the link to join our Slack workspace, where the technical community usually discusses each and every kind of technical aspect of potential distributed architectures focused on edge use cases. Have you got anything to share, Mattia? — Can you see the screen? — No, I can't. — Oh, now we can. — Okay, this one? — No. Mattia, will you share the GitHub? — Yeah, sure, but it looks like it's not really smart enough — it's not the first time — I need to share the entire screen. Okay, now you can see it. And just to finish your discussion around where we are going with the Quarkus multi-arch build, I think the nice thing will be to talk about the next step, because we discussed with the Quarkus engineering team last time providing a standard way to build Quarkus container images, so that it will also be supported by the Quarkus engineering team. So based on our work, we want to do a joint effort together, and this will be the next step for this multi-arch builder, for Quarkus specifically. So, talking about how we do it: we leverage our repository on GitHub, we use GitHub Actions, and we created a specific workflow to build the JVM container or our native container. You can see here the one for the ARM devices, the aarch64 architecture — let me open it.
So, based on Andrea's skills around GitHub Actions, we created a reusable workflow, and then, with specific parameters, you can adjust and define the workflow. This is a snapshot workflow, so every time you do a push it's going to build and create a nightly-build container. If you're not familiar with GitHub Actions: you define the jobs — meaning the job with the tasks that you want to launch, and on which container — and the steps that you want to run. For example, here we are saying: I want to check out the project, and then cache my Maven repository, because otherwise every time you launch Maven you download half the internet, and you want to speed up the process as much as you can. Then you define the Java build based on a specific JDK — in this case we are using JDK 11 — and then you specify where you want to push the artifacts. We are using Artifactory, because they provide an artifact repository that is free to use for the community. We build the container, and — indeed, maybe this is more interesting, which I didn't show during the demo because the images were already prepared — with this command we are telling the container engine to be ready for the emulation. It provides the information to your container engine, in this case running on GitHub Actions, and then your container engine is ready to launch the container build for your arm device. And this is just a classic workflow, as you can see: build the application, build the container for your device, and then, when the container is built, push it to your container registry — in this case quay.io. — Yeah, just one thing to add here on the registration of qemu-user-static. This is completely different from using the dedicated QEMU GitHub Action: that action enables QEMU on the runner to emulate, while we are just registering qemu-user-static to be able to use QEMU within our container to build the final image. So we are not registering QEMU on the GitHub runner instance.
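[Editor's note] The workflow Mattia walks through could look roughly like this — a sketch only, with placeholder names, registry, and secrets rather than the project's actual values:

```yaml
name: snapshot-build
on:
  push:
    branches: [main]
jobs:
  build-arm64:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Cache the local Maven repository
        uses: actions/cache@v3
        with:
          path: ~/.m2/repository
          key: maven-${{ hashFiles('**/pom.xml') }}
      - uses: actions/setup-java@v3
        with:
          distribution: temurin
          java-version: '11'
      - name: Register qemu-user-static for arm64 emulation
        run: podman run --rm --privileged docker.io/multiarch/qemu-user-static --reset -p yes
      - name: Build the application and the arm64 container
        run: |
          ./mvnw package
          podman build --arch arm64 -f src/main/docker/Dockerfile.jvm \
            -t quay.io/example/quarkus-demo:arm64-snapshot .
      - name: Push to the container registry
        run: |
          podman push --creds "${{ secrets.REGISTRY_USER }}:${{ secrets.REGISTRY_PASSWORD }}" \
            quay.io/example/quarkus-demo:arm64-snapshot
```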
We are allowing QEMU in the container to run, so that's definitely much faster and dedicated to that process, which is easy and specific to the creation of multi-arch images. So that's definitely working well. — Okay, for our audience: if you have any questions, feel free to put them in the chat, we're always keen to give you an answer. Or guys, do you have something else to add — Andrea, Gunther, Mattia? — No, I think that was pretty much it for the demo. — Thanks a lot — thank you, Mattia — that was quite insightful. So, working with multi-arch in this edge, distributed world is crucial, I believe. And also, I think that going more and more towards sensors — also cameras, so camera sensors as a service rather than smart cameras, for example — it's crucial to have the opportunity to manage workloads: building, updating, decommissioning workloads at specific layers. The sensors are there; the way we connect to sensors and manage the flow is something we should definitely focus on in the near future, and that's why I believe our collaboration with the Quarkus community — but, I also hope, with different languages and frameworks — is very, very important. And that was my question — you guys may be more aware than I am. If this is the roadmap, and, as you said, it's very important to have base images that already contain the native frameworks pre-compiled for the target architecture: is there a plan to provide, for example, base images for those languages — let's say Python — or frameworks like Fuse, to run on edge devices for different architectures? I know it's not perfect... — No, no, it's an interesting question. Actually, I wouldn't provide any dedicated image for frameworks: Quarkus is actually a modular framework, with its universe of extensions and plugins.
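[Editor's note] The qemu-user-static registration described above can be sketched as a single command — the image is the commonly used multiarch one, and details may differ from the project's actual setup:

```
# Register statically linked QEMU emulators with the kernel's binfmt_misc
# handler, so foreign-architecture binaries run transparently during the build
podman run --rm --privileged docker.io/multiarch/qemu-user-static --reset -p yes

# After that, an arm64 image runs on an x86_64 host under emulation
podman run --rm --arch arm64 registry.access.redhat.com/ubi8/ubi-minimal uname -m
```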
But in that case, if you think of Fuse or Camel integration — Camel K and so on and so forth — that's Quarkus, so it's fine. On the other hand — and Mattia, please correct me if I'm wrong — when you think of language runtimes, then yes, it's also interesting to have the opportunity to compile a final version for those languages that can be compiled. Otherwise there is no reason — I mean, Python you won't compile — but the actual language libraries have to be in that base image. — That could be, yes, that could be; but you definitely build the image, and even if you don't compile, you still have to have those things in it. — Can you say that again? — All I'm saying is — and maybe I'm wrong — that I'm not aware of there being, at the moment, base images for all the architectures you mentioned, for things like the usual languages that are supported, for example, on RHEL. There should be, but I'm not sure there are Red Hat base images. — There are, there are. And also, as we mentioned, Andrea, we use the UBI Red Hat image, which is multi-arch by definition and can run on several architectures. So we use the basic flexibility of the UBI minimal images — minimal and not micro, simply because we need microdnf to install the additional components we require. — We only have a couple of minutes left, but there is a question from the chat. — Yeah, and the question is: is the multi-arch builder designed, I guess, for OpenShift platforms? — Yeah, it definitely is. We have a repo, which I believe we shared — otherwise we will share it offline as soon as we end the session — but definitely yes: in our qiot GitHub repo there is also a multi-arch setup for OCP, so you can see and learn how to run those multi-arch builders on the OpenShift platform. We covered that earlier today. So shall I say goodbye, Andrea?
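[Editor's note] The UBI choice mentioned above — minimal rather than micro, so that microdnf is available — might look like this in a runtime Containerfile; the package list and paths are illustrative only:

```
FROM registry.access.redhat.com/ubi8/ubi-minimal:latest

# microdnf exists in ubi-minimal (but not in ubi-micro) and lets us
# install the extra components the runtime needs
RUN microdnf install -y ca-certificates \
 && microdnf clean all

# Copy in the native executable produced by the builder
COPY target/*-runner /work/application
EXPOSE 8080
ENTRYPOINT ["/work/application"]
```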
Yeah — maybe just quickly: we tested it for OpenShift, but you can translate it to Kubernetes, because we use Kubernetes resources. I think the only missing piece is that we use ImageStreams, which are a specific OpenShift component; but you can just use the image directly instead of the ImageStreams, and then deploy the demo on Kubernetes. — So thanks, Mattia. It's time to say goodbye. We will meet again next month, in August — same time, last Wednesday of the month — so stay tuned, and thanks for joining. And just a reminder for OpenShift Coffee Break: next week we have our friends from Run:ai for our AI/ML series. They'll be talking about their cool technology around smart scheduling — for example, fractional GPU allocation for large-scale AI and ML workloads. See you next week. Cool, thank you for being here, everyone. Bye!