So, hello everyone, and welcome to the afternoon of the openEuler and operating system track. It's great to have you here, and we are starting right into it. I'm Mario Behling from the FOSSASIA team, and I'm honored to welcome Dr. Xiong Wei, Executive Director of openEuler, here on stage now. Xiong Wei is an accomplished figure in the tech industry, currently serving as the Executive Director for openEuler and holding the position of Chief Architect for the server operating system at Huawei's Central Software Institute. Since joining Huawei in 2014, he has played a pivotal role in the development of the openEuler project, initially establishing the Kunpeng basic software stack, the server OS, the container engine, and other infrastructure of the openEuler project. His expertise extends across a broad technical spectrum, including processor architecture, operating systems, and containers, backed by years of experience and technological innovation at companies such as Turbolinux and Wind River. With a distinguished educational background from Nankai University, Xiong's academic training laid the foundation for his extensive career in operating systems and underlying software. His tenure includes significant roles as EulerOS Chief Architect at Huawei, Technology Senior Manager at Wind River, and R&D Director at Turbolinux China. Thank you very much for joining us here today, and I hand over to you for the welcome to the openEuler track.

I'm very happy to have such a big hall for the openEuler track. Yesterday in the keynote I gave a brief introduction of openEuler: what it is, what its technical label is, what's going on, and what will happen in the next stage of openEuler's development.
Nonetheless, I want to emphasize that openEuler is an innovation community. In openEuler we have developed a lot of totally new projects, covering DevOps, cloud native, security, AI, and also the embedded area. There are over 400 projects which are totally new, and we welcome developers worldwide to get involved in these projects and in new innovation ideas. For this afternoon, we have brought some projects which are very interesting, very important, and also fairly mature, to introduce these new technologies to our audience, our developers, and even our partners. I believe these projects can bring a lot of new ideas and opportunities for young developers. That is the value of this track. So let's start this afternoon's track. I hope everyone here will enjoy the technical details. Thank you very much, and a big round of welcome applause for openEuler.

The first speaker I would like to welcome on stage is Yanjun Wu, from the Institute of Software, Chinese Academy of Sciences, and a standing committee member of the openEuler community under the OpenAtom Foundation. OpenAtom is a foundation in China. I'm very glad to welcome you here on stage; we've set everything up, and you can choose whichever microphone you like. Thank you very much for joining us here on stage.

Good afternoon, everyone. Glad to be here to give a brief introduction about openEuler's future. I'm Yanjun from the Institute of Software, Chinese Academy of Sciences, and also a standing committee member of the openEuler community under the OpenAtom Foundation. The topic today is the future forward: exploring the vision of openEuler.
First I would like to give some of the latest data about openEuler; this data was just updated this morning. We already have over 11,000 software repositories, 162,000 pull requests, and 1.3 million global downloads. Most importantly, we have more than 1,500 enterprise members and 18,000 contributors, so openEuler is a very large community. Going back four years, openEuler had only about a 1% market share among server operating systems in China, but by the end of last year we had reached more than 36%. So it's a really fast-growing OS distribution.

However, that is not my topic today. My topic today is: where shall we go? What are we going to do in the next step? As we all know, the era of intelligent computing has arrived. We can see a lot of new hardware technologies, like non-volatile memory, domain-specific architectures, and high-speed interconnects, as well as energy constraints. On the other hand, we have diverse new application demands. So we treat the operating system as an innovation engine to meet these versatile scenarios. How do we achieve this? From two aspects: one is the operating system for AI, and the other is AI for the operating system.

Here are the details. On one hand, we offer end-to-end AI developer tools, like programming assistance: we have EulerCopilot, which was mentioned this morning, and an AI shell tool for command auto-completion and recommendation. We are also working on a coding copilot, similar to Copilot from GitHub. We have also done a lot to improve the performance of openEuler through AI-based configuration and AI-based tuning. On the other hand, we have AI-assisted operations and management. We offer a lot of AI frameworks, libraries, drivers, and operators. We also encapsulate AI applications in containers, with Docker and with iSula. We support heterogeneous hardware with unified scheduling, and we have improved inference throughput by more than 50% through concurrent inference.

As Dr. Xiong mentioned yesterday, we have a lot of developers, around 18,000, so we have collaborative development for full-stack innovation. We now have more than 400 code repositories for innovation projects, with an average of 10 new innovation projects each month. openEuler is not just a server operating system but also targets many other scenarios, like cloud and embedded computing. We have already deployed openEuler in some typical scenarios, such as multimedia conferencing and smart factories.

We are building openEuler as a contribution platform for global developers together with other open source foundations. For example, openEuler already has native support for OpenStack; this was done with the OpenInfra Foundation, and members from openEuler have become leading kernel contributors there. We also have native support for major projects in the Linux Foundation, full-stack native support for big data projects like Spark and Flink under the Apache Foundation, and BiSheng JDK, which is compatible with OpenJDK. We also have compatibility certification from Eclipse. And we have many communication platforms for global developers, like LinkedIn, YouTube, Twitter, and Reddit. For the convenience of global developers, we host our projects not only on Gitee but also on GitHub and GitLab. If you want to experience openEuler today, you can download it from the website as different images: ISO images, cloud images, and container images. We also have a Windows Subsystem for Linux (WSL) version of openEuler, you can install openEuler in a virtual machine, and you can also play with it on a Raspberry Pi.

Last, but I think the most important part, I want to give a brief introduction to the Open Source Promotion Plan, OSPP. I think this is very important for the students here today. OSPP was initiated by ISCAS and openEuler in 2020. It's an alternative to Google Summer of Code; I think many students may have heard of Google Summer of Code, GSoC.
GSoC started in 2005. Actually, I was a student at that time, and I was one of the first participants in GSoC. So in 2020 we started an alternative to GSoC called OSPP. We want to bring together mentors from communities and students from universities to make contributions to open source. This year is Summer 2024, and each year we have more projects, more mentors, and more fellow students. Specifically, we have many students from different countries in Europe, like Britain and France. We also have students from Canada and the United States, and students from India and Pakistan. But it's a pity that, until now, we don't have students from Vietnam. So we strongly welcome mentors and students from Vietnam to join OSPP. Together, we build a shared future. That is my talk for today. Thank you very much.

Thank you very much. I just wanted to ask you about the last slide, the student program: when will the next season be, and when can people apply for it?

There's an official website; maybe you can take pictures of that slide. And there's a guide menu for students from different countries; there's an English version of the website you can read.

Yeah, so please take out your phone and take a picture of this so you have it. Hong Phuc, can you please say this in Vietnamese?

I know nothing about what Hong said, but I think it's a very successful project. In China, there are a lot of colleges and universities whose students can apply for these projects. The money is not so much, but it's very interesting: if you take a project and finish it, the project will fund some money. So for the students, maybe, Doctor, you can explain the project.

The standard of accomplishment is not only to submit pull requests, but to have them merged into the project's mainline. As for the money, you can see it: 8,000 and 12,000. These are two different levels.
The 8,000 is for the simple and basic levels, and the 12,000 for the hard and advanced levels. So there are two levels. I think it's not easy, maybe it's not easy, but through this process, just like Hong said, I can guess that you'll get a lot of open source technology experience.

So it's not only about the prize, but also about the experience and the knowledge they will gain, so that they can prepare themselves for the industry later on, after university. This is what you wanted to say, right? The 8,000 and 12,000 is up to about 47 million dong; it's not so much, right? In the past, when I was a student, I couldn't find a stipend like this; it would have supported my studies and my life. Because only a few people want to learn about open source, what you have to do is find out about the program and write the code; every time you write code, if your code is able to be merged into the project, it's a success, right? The reason they are here is that they want to find talent in Vietnam to contribute to open source projects.

So, who would generally be interested in becoming a summer intern for openEuler and Huawei in these kinds of projects? Who would be interested? Show me your hands. Okay, so quite a few people, yeah? Okay, cool. So, some insights here: it's up to 42 million Vietnamese dong. I think that's quite nice for a summer internship; I don't think it's low, it's really good. We've talked about the opportunity for you here in the room, and of course for everyone in Vietnam and around the world. The question now is: how can you start? How can you start your career in this sector, in Linux, in openEuler, and in the technology scene? That is what the next presentations are about, so we can learn more about the technology. Our next speaker is coming up: it is George Carl. So George, please come on stage.
And I would like to quickly share a few facts about George. George is a senior software engineer in the openEuler community, where he plays a pivotal role in shaping the community's infrastructure. As a maintainer of the Infrastructure SIG and a member of the technical committee, George's responsibilities span crucial areas including community CI/CD, service pipeline construction, and comprehensive community membership and repository management. Quite a lot of responsibilities, and we are eager to learn more from you about this. You can control the microphone here, or just take this one. Okay.

Good afternoon, everyone. I'm George from the infrastructure team of the openEuler community. Glad to be here to share with you the development of openEuler's infrastructure and how openEuler became a developer-friendly community, which is the topic of my speech today. I will cover the following four aspects: our team overview, cloud native DevOps, cloud native application design, and future plans.

First of all, the infrastructure team overview. The infrastructure team is one of the earliest teams built in the community. After four years of development, we have established five special interest groups: the Infrastructure SIG, Gatekeeper SIG, CI/CD SIG, Open Design SIG, and Reproducible Build SIG. Our team has grown from five members to nearly 20, and the scope of our work is also expanding. We are mainly responsible for the community development process, such as code hosting, pull request checks, package building, and software signing. We also support community operations, such as the community websites, member management, meeting management, email service, and the data dashboard. That is the infrastructure team and its scope; you can look at this picture.
As you know, openEuler is a completely open source community, so when we built our own infrastructure, most of the services were built from open source projects of well-known communities. Take a look at this one: GNU Mailman. Maybe you have used it before; we use it as our mailing list service. We also use Jenkins to build the CI pipelines, and OBS from openSUSE to build and distribute our packages. You may also find a bunch of projects from the CNCF. We have the Nginx ingress, which is used as our reverse proxy server and as a simple API gateway. Argo CD is used to synchronize our applications, configuration changes, and code into our production environment. We also use Vault from HashiCorp as our sensitive-data backend, as well as Copr. Copr is from Fedora; we have enhanced Copr in several ways to make it run better in a Kubernetes cluster. Lots of projects.

In this part I will give you an overview of the infrastructure architecture. There are two tiers in our infrastructure. For the first tier, we use Terraform files to declare and create all the basic IaaS resources, including VPCs, databases, storage devices, virtual machines, and physical machines. The second tier is all about the Kubernetes clusters: we have a master cluster to run key components and worker clusters to run different applications in different regions and on different clouds. Right now, we have 160 services, 6 Kubernetes clusters, and more than 300 virtual machines to support nearly 20,000 developers in the openEuler community. It's a huge infrastructure.

Okay, on to the next part, about cloud native DevOps. I will share some experience and best practices with you on how we deploy applications in the cloud native way. There are four parts. The first part is about deployment standardization.
Most infrastructure services of the openEuler community are deployed on Kubernetes clusters, so services need to be containerized. Containerization brings us many benefits, such as easy migration between different clouds, easy tracking of every deployment change, and reduced deployment time and application recovery time. So we have made some standardization efforts: containerizing applications, unifying log output, managing the image repository, and health checks.

The second part is about configuration separation. In the whole service deployment process, we divide responsibilities among the application developer, the DevOps engineer, and the infrastructure maintainer. Application developers are only responsible for service development and container construction. DevOps engineers are responsible for service resource interconnection, such as ingress and database configuration. Infrastructure maintainers are responsible for service release and use Argo CD to synchronize services and IaaS resources.

Parts three and four are about GitOps and automation. For part three, we use Git repositories to store configuration files and synchronize them with Argo CD when deploying infrastructure services. In this way, services can be brought online quickly, historical versions can be managed, and service reliability and security can be further improved. The last part is about automation. There are three different cases; take a look at this picture. It shows the flow from pull request submission to website preview, from code merge to Jenkins build and publish, and from new configuration files to service cache clearing and notification. All steps are automated.

Okay, we are going to the next part. Here I will share some useful applications that we have been working on to make them run better in the Kubernetes cluster. The first one is about bots.
If you have ever been in an open source community, you must have noticed that there is a bot application used to handle all the different operations on a pull request. For example, when a developer submits a new PR, the maintainers use the bot to check the commits, perform the CLA check, assign or unassign the PR to someone, and post comments on the pull request. Our bot application is named Yobbert; it is based on the ideas of the Prow project from CNCF, and we improved it in several aspects. The first is isolation: each plugin is separated into its own container, so it's easy to replicate, and if one of the plugins crashes, it does not affect the whole application. The second improvement is higher throughput: we use Kafka to receive the messages from the code platforms and deliver them. The other improvements focus on multi-platform support and support for multiple development languages.

The second application is the meeting bot. Holding meetings is an essential activity in the community, and we have about 10 meetings per week, so we have created our own meeting bot. It supports multiple meeting platforms, such as Zoom, WeLink, and Tencent Meeting; you can book a meeting via the website or the WeChat app, and watch the meeting video on YouTube or Bilibili. The architecture of the meeting bot is quite simple, and most of the job is done by Kubernetes: once a meeting event comes in, we use Kubernetes jobs to create the meeting, send email to the maintainers, download the meeting videos, and publish them to the social platforms.

Okay, now let's talk about Signatrust. Signatrust is a project based on the ideas of OBS sign from openSUSE, while providing a more comprehensive, efficient, and cloud native solution for package signing. It has an end-to-end security design with high throughput, and it can support almost all binary file types.
And it has user-friendly key management.

The next one is MooC Studio. MooC Studio is based on a Kubernetes operator and provides an openEuler terminal environment for community developers, especially for developers from universities or high schools. It's a common case that it is troublesome to get a real environment, especially an Arm environment, to run the operating system. To solve this problem, we developed MooC Studio, a kind of native terminal playground in the browser. It has some good points. First, we can establish a connection to a brand-new environment in only 20 seconds. Second, it supports multiple environment types, such as application containers, system containers, or virtual machines. Third, the environment can be highly customized, including the base OS, the architecture, and additional files. Last, the environment is released when you disconnect.

This part is about EUR. EUR is short for openEuler User Repository. It is based on the Copr project from Fedora, but its components have been customized for openEuler. First of all, all the components have been adapted for the Kubernetes environment, and we use serverless Kubernetes pods as the backend package builders. Second, it can resolve package dependencies and auto-import them. And EUR is highly integrated into our package development process.

Okay, in this part we're going to talk about our future plans. The first one is a software marketplace: we will make an effort to provide a convenient platform for community developers to query and obtain software. The second one is a cloud IDE: we will support the development and test process with a cloud IDE. The third one is a message center: we will integrate all applications with the message center via CloudEvents. Okay.
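As a small aside, the plugin isolation idea behind the community bot described earlier can be sketched in a few lines. This is only an illustrative sketch, not the real bot's code: the plugin names, event fields, and CLA data here are all hypothetical, and the real bot runs each plugin in a separate container rather than a try/except block.

```python
# Illustrative sketch of the bot's per-plugin isolation: every plugin handles
# an event independently, so a crash in one cannot take down the others.
# Plugin names and event fields are hypothetical, not the real bot's API.

def cla_check(event):
    # Pretend CLA lookup: flag authors with no agreement on file.
    signed = {"alice", "bob"}
    return "cla/yes" if event["author"] in signed else "cla/no"

def welcome(event):
    return f"Welcome @{event['author']}, thanks for the pull request!"

def broken_plugin(event):
    raise RuntimeError("this plugin has a bug")

PLUGINS = {"cla": cla_check, "welcome": welcome, "broken": broken_plugin}

def dispatch(event):
    """Run every plugin on the event; isolate failures per plugin."""
    results = {}
    for name, plugin in PLUGINS.items():
        try:
            results[name] = plugin(event)
        except Exception as exc:  # one failing plugin must not stop the rest
            results[name] = f"error: {exc}"
    return results

results = dispatch({"author": "alice", "action": "opened"})
```

Even with `broken_plugin` raising an error, the CLA check and the welcome comment still go through, which is the property the container-per-plugin design gives the real bot.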
You can scan the QR codes and visit the website here to join us. Okay, thank you. Thank you, everyone.

Thank you very much. Before we check if we have any questions, we would like to have a short wrap-up in Vietnamese, so that everyone understands. Yeah, please go ahead with your wrap-up. Okay, we're finished. Thank you very much. So, are there any questions in the audience? Any questions? At the moment no questions, but I think there will be more questions later. George will be at the openEuler booth, so people can come directly to you with their questions. And you can contact us by scanning the QR codes or visiting the website. Okay, cool. Thank you very much; a big round of applause for George.

And we're coming to the next session here in the openEuler track, which starts in a moment: an introduction to A-Ops, an intelligent operation and maintenance platform of openEuler. I'd like to ask Yuncheng Zhu, who is already coming on stage to set up, so let me share a few key facts about Yuncheng. He's a software engineer at openEuler, a very dedicated engineer working directly at Huawei and contributing significantly to the open source community, particularly within the openEuler project. As a major force behind A-Ops, intelligent operation and maintenance software tailored for operating systems, Yuncheng brings innovative solutions to enhance system reliability and efficiency. He holds the role of a committer in the openEuler Ops SIG, where he is responsible for maintaining multiple A-Ops repositories.
Yuncheng's expertise and commitment to advancing open source software development underscore his valuable contributions to the openEuler community and the broader field of software engineering. So thank you very much for being on stage; we're still setting up, and we'll get started in a moment. In the meantime, we're of course interested in how many people already use, for example, Linux, and how many people use different operating systems, just as a quick check so we have an idea of where we are in the room. How many people already use Linux on their desktop? Okay, quite a few. How many people use macOS, for example? Ah, okay, some people use both, I understand. And other people, maybe the Windows operating system? How many people? Okay. And the others are probably mainly on mobile phones using Android. So now we have an idea, and we're getting started. Thank you for teaching me how to say your name correctly. Thank you very much, and a big round of applause. Welcome.

Welcome, everyone. I'm Zhu Yuncheng, a software engineer at Huawei. I have been working in the operations and maintenance area for about three years, and I'm also actively involved in the openEuler community as a committer in several SIGs. I'm glad to be here to share with you our latest progress on the project called A-Ops. I'm going to introduce it from four aspects.

Firstly, what is A-Ops? Broadly speaking, A-Ops is an end-to-end solution for users, from detection to diagnosis to repair. We divide the whole maintenance workflow into three layers. The bottom layer we call the data collection layer. Here we have an agent on the user's host that controls many different collection tools, and the collected data includes metric data as well as log data. One of the most important tools is called gala-gopher, which uses eBPF to collect very low-level data from both kernel mode and user mode. The middle layer we call the initial diagnosis layer. In this layer we can use some tools to conduct an initial diagnosis so that we don't have to push heavy data to the outside. For instance, if you want to diagnose an OOM (out of memory) problem, you don't have to transfer the whole vmcore outside, which is very heavy. On the top layer, we build a set of microservices supporting many features, so users can, for instance, run routine inspections, AI diagnosis, CVE hotfixes, and many other interesting features, and scan and repair the entire cluster, not just one host.

Here I'm going to introduce the three main features. The first is full-stack observability. It is provided by gala-gopher, a daemon that provides a lot of eBPF-based probes, combined with uprobes, tracepoints, and kprobes, to collect data from the kernel, drivers, and syscalls, so it's very convenient for users to develop their own plugins. Right now we have covered the kernel, runtimes, and some user-mode software like Redis and Nginx. When we get the data from the probes, it can be stored as metrics in a database, for instance Prometheus, or it can be transferred via Kafka for other uses. With the data we have, we can do many interesting things. We can draw a real-time topology based on the TCP/IP communication data, and with the topology we can do cluster fault localization: using data like the response time or latency of requests, you can easily localize a fault and see which part of the cluster is wrong. Finally, you can also do fault diagnosis; we have covered areas like I/O, memory, and even performance, where you can do online performance profiling, which is pretty cool. Here are some charts made by gala-gopher; you can see it covers most of the useful data for daily maintenance work, like application API performance and TCP/IP data. It can also draw flame graphs of CPU and memory and do online profiling based on on-CPU or off-CPU analysis.

Another function of A-Ops is vulnerability management. As we all know, the security of the OS is very important, and openEuler publishes update versions every week. For instance, if you have a CVE in your kernel, you can just download the upgraded kernel, reboot the system, and the CVE will be fixed. In most cases this works, but in some emergency cases the user cannot simply reboot the system to activate the new kernel. In those cases we can use hot patches. We built a brand-new hot patch production pipeline that produces a hot patch from the code patch PR, which I will describe later in the demo. With the published hot patch, we can scan the hosts on the A-Ops website, and we can also generate a fix task that defines the hosts you want to fix, the CVEs you want to fix, and even the fix method, that is, whether you want to use a cold patch or a hot patch to fix the CVE. After the CVE is fixed by the cold patch or hot patch, we scan again and update the state in our database. Since we can scan and repair in batches, efficiency is increased significantly.

And how do we manage the hot patches on the system? We try to be consistent with users' habits: overall, we use DNF to manage packages, so we developed another DNF plugin to manage the hot patch life cycle. We support different DNF subcommands to scan for CVEs and to install and upgrade hot patches. At the lower level, we actually call SysCare through its command line. SysCare is another open source project in openEuler. It unifies the external interfaces of kpatch and the kernel livepatch mechanism, so it is easy to manage both of them at the same time, and it provides complete life cycle management, from not-applied to deactivated to active. A traditional hot patch will be deactivated after you reboot the system, so the CVE will be exposed again, which is not what we want. So we added a new state called accepted: if you accept the hot patch, it means that after a reboot the hot patch will automatically become active again, which is pretty cool.

The final feature I want to introduce is configuration tracking. According to the statistics of our maintenance and SRE teams, over 80% of OS failures are caused by configuration changes. For example, if you change /etc/fstab, it can easily cause a boot failure, right? So we set a baseline for the hosts, regularly scan all the hosts, report the detected modifications, and also do version control of the configurations.

Finally, we have a future plan of combining GPT models with A-Ops. We have chosen a model called LogGPT to help us predict the next line of a log, so that we can compare the predicted log with the actual log and see if anything goes wrong. We can just give the model log files, like the messages file and other log files from maintenance work, and it will do a great job of telling us whether anything in the log is wrong, without us having to give it specific rules.

Here I prepared two demos of the hotfix feature. The first one shows how we make a hot patch in the community. Firstly, we find a PR in the kernel repository; here I just randomly picked one. We just need to make a comment here (I omit some parameters), and it will automatically trigger a new PR in the hot patch metadata repository. This PR records some basic info about the hot patch, like which source RPM and which debuginfo RPM will be used, and the specific content of the patch. The PR then runs through the CI system, and after the build is done, we test it in the community. You can also see the advisories we have published here: you can see the details about the CVEs and also download the hot patches online. With the metadata in the PR, you can actually rebuild the hot patch by yourself.

The next demo shows how we fix a CVE with A-Ops. Firstly, we log in, and we have a dashboard showing some basic info about the cluster. Here we can see all the hosts in the cluster and their online status; we can also add hosts in batches from an Excel file, and we can separate them into groups. On the CVE page, we can see all the CVEs in the cluster, how to fix them, the specific RPMs, and their versions. We can also see how many CVEs we have fixed, and we can view the CVEs from the host perspective. Here we just enter one host and scan it; you can also export the CVE information about the host, which generates an Excel file showing the details of the host's CVE info. Here I choose a CVE which can be fixed by hot patch, choose the hot patch method to fix the CVE, and generate a task. I accept the hot patch so that it stays active even after a reboot. Then the task starts running, and we can see the details in the task. After the task is done, you can see the log of the task, which is basically the DNF log. Here you can see the CVE has been fixed, and we can also check the hot patch status through our hot patch plugin. And if you regret fixing this CVE, you can also generate a rollback task; after the rollback, it changes back to the previous state, so when we list the hot patches again, it is gone.
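The hot patch life cycle described above can be sketched as a small state machine. This is a simplified illustration of the behavior from the talk, not SysCare's actual implementation or exact CLI; the state and method names are paraphrased. The key point is that a merely active patch is lost on reboot, while an accepted one survives.

```python
# Simplified sketch of the hot patch life cycle described in the talk.
# State names are paraphrased, not SysCare's exact interface: an ACTIVED
# patch is deactivated by a reboot, while an ACCEPTED one comes back.

class HotPatch:
    def __init__(self):
        self.state = "NOT-APPLIED"

    def apply(self):      # load the patch onto the system, not yet active
        self.state = "DEACTIVED"

    def active(self):     # start patching the running system
        self.state = "ACTIVED"

    def accept(self):     # persist: re-activate automatically after reboot
        self.state = "ACCEPTED"

    def reboot(self):
        # Only an accepted patch survives a reboot; an actived-but-not-
        # accepted patch falls back, and the CVE is exposed again.
        if self.state == "ACTIVED":
            self.state = "DEACTIVED"

p = HotPatch()
p.apply(); p.active()
p.reboot()
state_without_accept = p.state   # patch no longer protects the host

q = HotPatch()
q.apply(); q.active(); q.accept()
q.reboot()
state_with_accept = q.state      # patch still in effect after reboot
```

This is why the demo accepts the hot patch after applying it: without the accept step, the fix would silently disappear at the next reboot.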
Yeah, that's it. Feel free to scan the QR code. Thank you, guys. Okay, thank you — and please stay a moment, we'll have a wrap-up in Vietnamese about this session. Thank you very much. Any questions here? Actually this is AI-related — you could also have brought this talk to the AI track, but we had another AI talk, so yes, AI is a big topic nowadays. So you have big plans with your own GPT — any specifics, any algorithms you're planning to use? Can you already tell us a little bit about that? Actually, we are working with our Copilot team to try to combine them together, so that you can use openEuler Copilot to call our A-Ops. And what about support for Vietnamese? For Vietnamese — right now we support English; we will actually try to support it. So anyone interested in supporting Vietnamese in this project could, for example, work with you, or become an intern over the summer with a coding program? Correct, yeah. Okay, cool. Then thank you very much for this interesting session, and we're looking forward to hearing more from you in the future. Thank you very much. So we're coming to the next talk: an introduction to the openEuler community's unified build platform, EulerMaker. While we set up, I'm already sharing a bit about the speaker, Xia Senlin — Xia is the family name and Senlin the given name, is that correct? Yes. So let's hear a bit about Xia Senlin: he is a proficient software engineer from openEuler, deeply involved in the community, particularly within the CI/CD SIG, with the role of overseeing the community's daily version building and release processes. Xia's contributions are fundamental to the community's operational excellence. He spearheads the development, launch and promotion of the EulerMaker system, an innovative platform that streamlines RPM
production, software-layer customization, and image customization. Through Xia's efforts, the openEuler community benefits from enhanced efficiency and customization capabilities, solidifying the foundation for continuous development and deployment of open source solutions. This is really great — open source is of course a big topic and we want to hear all about it. So we're going to change the laptop and make that happen. And I have a question: RPM is a widely used foundation for package management, and there are different platforms using it, so if you've ever worked in one of those areas you can also use this platform. Good afternoon, everyone. My English is not very good, so I will give today's talk in Chinese — sorry. My name is Xia Senlin, a software engineer from Huawei. In the openEuler community I serve as a maintainer in the Application, Desktop, Ruby, Runtime and EPOL SIGs. Today I will introduce the openEuler community's unified build platform, EulerMaker. I am one of the designers and developers of EulerMaker. This is the outline of today's content. First of all, I would like to explain why we built EulerMaker. We analyzed the users of the openEuler community, and there are roughly three types. The first type are community developers. They are usually individual developers, from all over the world. They are interested in their own software — how to design and implement it — and they hope their software can be published and updated quickly in the openEuler community, so they can grow their influence there. Their development environment is usually a personal computer. For them, there are four pain points. First is the development environment itself; second, the build time of a software package is relatively long; third, it is difficult to analyze the impact of a PR in their own project on other packages.
Finally, it is difficult to debug problems in the current build environment. The second type of users are OSV and ISV vendors. Their team size is between tens and hundreds of people. They hope that the packages they contribute, or their main projects, can be published in time and with high quality; what they care about are contribution reports and personal growth. Their original build environment is their own servers, not a shared third-party build platform. For them, there are four pain points. First, they cannot analyze the impact of a PR in their own project on reverse-dependency packages. Second, build results are easily lost. Third, the package build cannot support their contributions or main projects well. Fourth, they cannot debug the build. The last type of users are OS distribution maintainers, with teams of around hundreds of people. What they care about is the development of the community ecosystem and the foundation of the build platform — the influence of the community and the foundation of the distribution. Their build environment is their own servers plus cloud environments, again not a shared third-party build platform. There are three pain points for them. First, a build environment that is not a shared platform cannot improve project efficiency. Second, the build servers and the build service are two separate systems that cannot be managed together. Third, their build environment cannot be opened up to the community to support community builds.
Based on the user analysis above, we analyzed the different build scenarios, including single-package builds, PR gate builds, full and incremental builds, layered customization, and image building and customization. Next, I will give a detailed introduction to EulerMaker. First, EulerMaker is a software build system that covers everything from source builds to second-party package builds. It allows users to assemble and customize an OS that suits their needs, in the way they want. EulerMaker's functionality can be divided into three parts: the software build system, layered customization, and image customization. EulerMaker provides complete build capabilities covering single-package builds, full builds, and incremental builds, with dynamically scalable build workers to speed up builds. On top of a public base OS, EulerMaker provides package-level customization and can deliver layer-by-layer customization up to a full OS. Finally, EulerMaker provides image customization, which lets users customize images for server, container, virtual machine and other scenarios to meet their needs. Next, the main capabilities of the EulerMaker build system. First, full builds — EulerMaker has four main features here. The first is dependency-aware scheduling of build tasks: when a new build task is launched for a project, the DAG module takes the set of packages in the task, analyzes their build dependencies, and creates a DAG, where each node is a package build and each edge is a build-time dependency between packages.
In this way, we can analyze the longest path in the graph to optimize the overall build time of the project. The second feature is that when a new build task is launched, it ensures the current packages are built against the latest dependencies. The third feature is that when build bottlenecks are found, the scheduling is adjusted to shorten the build time. The fourth feature is analyzing package API changes to skip unnecessary rebuild tasks. Together, these complete the build acceleration for full builds. The second capability of the build system is flexible task scheduling. We observed large-scale build scenarios and found that even when the build cluster was running at low capacity, many tasks were still waiting. We analyzed this and found three problems. First, the resource requests of build tasks were poorly sized. Second, the parallelism of build tasks was reduced. Third, some build tasks needed to be handled first but were still waiting in the queue. EulerMaker proposes solutions on two sides. On one hand, EulerMaker automatically estimates the resources a build task needs: by learning from the resource usage of historical build tasks, it can place new tasks appropriately. On the other hand, EulerMaker combines the actual build scenario, the resource size of the task, and the priority of the task to define an adaptive scheduling strategy. The third capability of the build system is incremental builds. We analyzed similar build systems and found some drawbacks in their incremental builds. First, they do not have a snapshot concept, so the scope of a build is not stable.
Second, they do not have a complete cache or a consistency guarantee. Third, they do not preserve the build environment, so builds are very hard to reproduce and replay. EulerMaker first uses a snapshot to determine the scope of a build task, covering the commits of the software repositories, the build dependencies, the build machines, and the environment. It diffs the commits in the current snapshot — say SNAP2 — against the previous snapshot, SNAP1, analyzes the differences, walks the reverse dependency chain, and finally obtains all the packages that need to be rebuilt. At the same time, build tasks reuse the historical RPM repos of the same kind and the dependency repos of previous build batches. In this way, EulerMaker makes sure that the build dependencies, the build environment, and the run environment are reproducible, which is much more convenient when locating problems. The fourth capability of the build system is supporting developers' build-and-test workflow and the community's CI/CD. I am a developer myself, and I think the current workflow has some disadvantages. First, the PR workflow does not support local debugging; you must rely on mock to create a similar environment, which is inconvenient and inefficient. The second problem is that the community CI/CD has only limited resources, which often leads to many developers waiting a long time for their tasks to complete. The third problem is that the community currently only supports single-package verification, which does not help developers verify whether reverse dependencies are affected. For these problems, EulerMaker takes a few measures. First, EulerMaker provides a CLI that unifies local builds and online builds.
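The snapshot-diff step described above — take the changed packages and walk the reverse dependency chain to find everything that must be rebuilt — amounts to a reachability computation over inverted dependency edges. A minimal sketch, with invented package names (not EulerMaker's actual data model):

```python
from collections import deque

# Forward build dependencies (invented): package -> packages it builds against.
deps = {
    "app": {"libfoo", "libbar"},
    "libbar": {"libfoo"},
    "libfoo": {"libc"},
    "libc": set(),
}

def rebuild_set(deps, changed):
    """All packages whose transitive build dependencies include a changed one."""
    # Invert the edges: dependency -> packages that depend on it.
    rdeps = {}
    for pkg, ds in deps.items():
        for d in ds:
            rdeps.setdefault(d, set()).add(pkg)
    # BFS from the changed packages along reverse edges.
    todo, seen = deque(changed), set(changed)
    while todo:
        for user in rdeps.get(todo.popleft(), ()):
            if user not in seen:
                seen.add(user)
                todo.append(user)
    return seen

print(sorted(rebuild_set(deps, {"libfoo"})))  # libfoo plus everything above it
```

A change to `libc` would, by the same logic, force a rebuild of the whole chain — which is exactly why the snapshot boundary matters for keeping incremental builds small.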
This supports developers building and debugging locally, consistent with the online pipeline. Second, EulerMaker shares the community build infrastructure, so community build resources can be used for developers' tasks. Third, EulerMaker provides reverse-dependency verification for packages: it can verify whether the packages related to the current PR, and their reverse dependencies, still build successfully. The fifth capability of the build system is support for external build workers, implementing a distributed build system across community developers and community partners. A partner can connect their machines to EulerMaker as build workers, and then EulerMaker's task distribution and automatic resource matching enable shared building and shared testing. EulerMaker has already connected RISC-V, LoongArch and PowerPC64LE devices this way. This slide shows the front page of the build system. On the left is the navigation, then the build project list, the build overview, and the build configuration; the upper part shows a project's build history and build user management; the lower part shows the build overview of a single package; on the right is the build history of a single package and the download links for the built RPMs. This slide shows a demo of the build system building a single package: the upper part shows the results and the pipeline flow of a full build, the lower part shows the source list of the project, and this picture shows the build report of the project, including the build status and the changes of each software package.
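The dependency-DAG scheduling described earlier — build each package only after its dependencies, and use the longest (critical) path through the graph to decide which chains to prioritize — can be sketched roughly as follows. The package names and build-time weights are invented for illustration; EulerMaker's real scheduler works on much larger graphs:

```python
from graphlib import TopologicalSorter

# Build dependencies (invented example): package -> set of dependencies.
deps = {
    "glibc": set(),
    "zlib": {"glibc"},
    "openssl": {"glibc", "zlib"},
    "curl": {"openssl", "zlib"},
    "git": {"curl", "openssl"},
}

def build_order(deps):
    """A valid build order: every package comes after all its dependencies."""
    return list(TopologicalSorter(deps).static_order())

def critical_path_len(deps, costs):
    """Longest build-time path ending at each node; the maximum bounds the
    total wall-clock time of the whole (fully parallel) build."""
    longest = {}
    for pkg in build_order(deps):
        longest[pkg] = costs[pkg] + max((longest[d] for d in deps[pkg]), default=0)
    return longest

costs = {"glibc": 30, "zlib": 5, "openssl": 20, "curl": 10, "git": 15}
paths = critical_path_len(deps, costs)
# Packages on the longest chain deserve the highest scheduling priority.
print(max(paths, key=paths.get), paths)
```

Here `git` sits at the end of the longest chain, so its dependency chain is the one the scheduler should never let sit idle.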
Next, I will introduce EulerMaker's layered OS customization. Overall there are about six parts. The first is using YAML as a new description format for software packages, to support package-level customization. The second is providing a public base OS, including the YAML documents converted from spec files, as the basis for customization. The third is letting community partners customize their own fields and products on top of the public base OS. The fourth is providing unified image customization through the EulerMaker platform. The fifth is covering all scenarios with the customization — server, cloud, edge, and embedded. The last is supporting further multi-layered extension. This is how we convert a spec-described package into YAML; here you can see the YAML with Python expressions — it's simpler and more flexible. This part converts the package's spec build scripts into a unified build script, which is convenient to edit, customize and debug. This part shows all the settings EulerMaker supports for customization, including the package list, the architecture, the build options, the configuration and selection, and so on. The most important feature is that it allows users to assemble their own custom OS like Lego bricks. EulerMaker provides different levels of customization on top of the public base OS; for example, here is a chip add-on layer — you can choose a chip to add to the OS image. Here is a demo of the customization. On the left is customizing Redis and rebuilding it into an RPM package; the configuration includes the compiler options, the architecture selection, the pre-build-stage scripts, and the strategy for generating sub-packages.
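Purely as an illustration of the idea — the actual schema is defined by the EulerMaker project and these field names are invented, not its real format — a YAML-style package description of the kind shown in the demo might look something like:

```yaml
# Hypothetical sketch of a YAML package description; field names are
# invented for illustration, not the actual EulerMaker schema.
name: redis
version: 6.2.7
release: 1
arch: [x86_64, aarch64]
build:
  compiler_flags: ["-O2"]
  prep: |
    tar -xf redis-6.2.7.tar.gz
subpackages:
  - name: redis-server
    files: ["/usr/bin/redis-server"]
```

The appeal over a classic spec file is that a structured document like this is easy to patch, merge, and layer programmatically, which is what the layered customization builds on.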
On the right is customizing the new Redis into a Redis-server-oriented image configuration. Finally, I will introduce EulerMaker's image customization. Different applications and deployments have different requirements for the operating system, so EulerMaker provides simple and flexible customization: it lets users choose their own package set to customize an OS that suits their own scenario. The basic principle of tailoring is to start from the minimal package set for a scenario and let users add their own software on top. EulerMaker's image customization consists of two tools: oemaker and imageTailor. oemaker is mainly used to assemble RPMs into a standard image; imageTailor can deeply tailor an image and lets users modify files and configuration. This slide shows several common tailoring scenarios. The first is using oemaker with an openEuler or local repo to create an ISO with a chosen package set. The second is using oemaker to add or delete packages based on an ISO released by openEuler. The third is using oemaker to modify the kickstart configuration of an ISO to customize the installation process. The last is using imageTailor with an openEuler or local repo and a local configuration to deeply tailor an image.
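The tailoring principle just described — mask the full package set down to a scenario's minimal set, then apply the user's additions and removals — is essentially set arithmetic over package lists. A toy sketch with invented package names (not the real openEuler package sets or the oemaker/imageTailor interface):

```python
# Toy sketch of scenario-based tailoring: start from a minimal base set
# for the scenario, then apply user customization. Names are invented.

FULL = {"kernel", "systemd", "bash", "openssh", "gcc", "docs", "X11"}
MINIMAL_BY_SCENARIO = {
    "server":   {"kernel", "systemd", "bash", "openssh"},
    "embedded": {"kernel", "bash"},
}

def tailor(scenario, add=frozenset(), remove=frozenset()):
    """Return the tailored package set for a scenario."""
    base = MINIMAL_BY_SCENARIO[scenario] & FULL   # mask down to the minimal set
    return (base | set(add)) - set(remove)        # then user customization

print(sorted(tailor("embedded", add={"redis"}, remove={"bash"})))
```

The real tools of course also resolve dependencies and edit files inside the image, but the "minimal set plus user deltas" shape is the core of the design.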
Thank you very much. So we'll have a short wrap-up together about this session. Thank you very much. Okay. First, I'm from the openEuler community — I'm a member of the technical committee — so I want to add a supplement about what EulerMaker is. As Dr. Xiong already said, openEuler is an operating system for all scenarios. Currently we always use OBS, but the problem is that OBS cannot cover the embedded and edge scenarios, which require cross-compilation and fine-grained customization. So our current status is: for the server version and the cloud version we use OBS and similar mechanisms to build. So why can't we unify them — just use one unified platform to build different OS variants for different scenarios? Because, as we know, whether server, cloud, or embedded OS, they are all Linux-based systems with a kernel, with partitions, with containers, with init, right? So we could definitely use one system to build them, but today they are separate. That's why we had the idea of EulerMaker. This is a young project, and Senlin and all my colleagues have already done some initial work to improve the distributed compilation of the server OS. The challenge of the server OS is that it requires a cloud-native, distributed mechanism to build at large scale, right?
So this cannot be handled by an embedded build system like Yocto, because Yocto builds OS images of, for example, hundreds of packages, right? That's why our current focus is on large-scale distributed builds, and once that is done, in the future we will try to extend EulerMaker to the embedded scenario. For example, you just need to write the recipe or spec or YAML — whatever — one time. Just once. Then the build system, according to your configuration and your needs, builds different OS variants for different scenarios and requirements. There is no such system in the world yet, as far as we know, right? And the journey to make it happen will be long. So we have just started EulerMaker, and we welcome more and more developers and users to help us make EulerMaker better and better. Okay, thank you. Thank you very much. As we are an international event, we have a short wrap-up here also in Vietnamese. So thank you very much, please go ahead. Okay, thank you very much. That's all for this session, so we move on — and thank you to our translation team, our interpretation team. We have the next talk coming up. I would like to ask Mr. Liang Li to come here and set up his computer while I share a bit about him. Thank you very much for joining us. Liang Li, a developer at Huawei, specializes in the design and architecture of low-level software, including BSPs (BSP stands for Board Support Package), bootloaders, and architecture-specific parts of the Linux kernel. With expertise spanning x86-64, ARM64 and RISC-V architectures, Liang's comprehensive knowledge base underpins his significant contributions to the field.
Recently, his focus has shifted towards advancing virtualization technology and memory management, further enhancing his profile as a key contributor to the foundational technologies that drive modern computing systems. Thank you very much for joining us — let's check the microphone. Yes, does it work? Yeah, all good. Okay, perfect. So, a big round of applause — thank you for being here with us. Good afternoon, everyone. I'm glad to share this topic about a new shared-memory mechanism design. Here is my agenda: a brief review of memory-model basics, then the shared-memory system design, a brief demo, and then a discussion of application scenarios. First of all, let's have a brief review of the memory model. In short, a memory model is a set of rules for memory users about how to use memory. Generally, there are two kinds of memory users: hardware and software. Hardware users are usually processing elements such as general-purpose processors, accelerators, ASICs and so on — essentially any computing unit that can access memory. From the hardware point of view, they particularly consider the memory hierarchy and memory attributes; memory ordering and atomic semantics are also of interest. From the software designer's point of view, the memory model is mainly about the memory capabilities presented to programmers. Software designers care about points like how memory is organized — flat or sparse; whether the virtual memory architecture is enabled in segmented or paged mode; and moreover how the memory model's access ordering and atomic semantics support software mechanisms like locks and other lock-free facilities. Considering multiple users of a memory system, we extend the single-user memory model to a shared memory model.
Software designers particularly care about the tricky parts of a shared memory system: the cache and write-buffer behavior. Software mechanisms need special effort to account for these hardware behaviors, to make sure software runs with predictable behavior. Sorry, my slide is a little out of date, so I'll jump to the correct page. Here we take the memory pool as an example. Against this memory-model basis, a memory pool raises several questions: does the memory pool support caching or buffering; what kind of data caching does it support; how many operations can be buffered, and where; and if caching is enabled, is cache coherence implemented by hardware or by software? These are all things memory-pool software needs to care about, along with points like atomicity, data-consistency assurance, and access ordering — big problems for memory-pool software. Before we jump into the shared-memory software for memory pools, let's review some existing shared-memory mechanisms in Linux. The first is System V shared memory. Each shared region in System V shared memory is identified by a unique key. A user process calls shmat to map the shared region into its address space based on that key; after attaching, the process gains access to the shared region, provided it has the proper permissions. When the user has finished consuming the data, it calls shmdt to detach and shmctl to control the segment, through the C library interfaces. In a real application, if tasks need a shared region to exchange data, two additional mechanisms are also needed: one is a communication channel that distributes the key for the memory segment to the tasks, and the
second is a synchronization mechanism so the real workload can order its uses of the shared resource. These two mechanisms are not contained in System V shared memory itself. The second mechanism I want to review here is the memory file descriptor (memfd) design in Linux. As its name indicates, each shared region has a memory file as its backend, but the memory file has no filesystem entry — it's just an anonymous file in memory, so there is no content on persistent storage. The shared region is represented by a file descriptor, which enables descriptor-based operations like read, write, mmap and so on. One notable feature of memfd is the sealing and unsealing of the memory file: once the file is sealed, the corresponding memory region enters a state where certain accesses are forbidden until it is unsealed. And unlike System V shared memory, a file-descriptor-based shared region is not system-wide across all user processes — that's just the nature of a file descriptor, a per-process resource — so it has more security benefits compared to System V shared memory. On the two aspects I just mentioned — a communication channel for distributing file descriptors, and synchronization mechanisms for the different users of the region — memfd and System V shared memory are in the same position: neither provides such capabilities by itself. So next, I will present our shared-memory system design for memory pools. First, we introduce a kernel module, mpsham, to provide the interface for libraries. It provides an ioctl interface: users invoke the create or delete operations to create and remove a shared region. One notable option at creation time in our design is the global shared-memory cache-coherence option, which means that if this option is specified when the shared region is created, then all
subsequent users of this shared memory region must be in the same cache-coherence domain. In short, a cache-coherence domain means all users may have a local copy or local cache of the shared data, but these local copies are kept consistent — maintained either by hardware or by software. For the memory-pool shared-memory system, this option means the mpsham mechanism itself does not handle the coherence; it offloads that responsibility to some other module. Now consider a usage scenario: the shared region resides in the memory pool, and different users — systems of different architectures, say an ARM system and a RISC-V system — all share data from the memory pool. As I just said, each computing system defines its own memory model, and clearly different architectures have different memory models, so a single-system shared-memory mechanism like System V shared memory is not suitable for this scenario. That's why we built this new memory-pool shared-memory system, and we provide two important options here: the first is the DMC option, which stands for data-cache maintenance, and the second is the GK option.
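The basic idea underlying both System V shared memory and the mpsham design — multiple users attaching to one named shared region, producing and consuming data in place — can be illustrated with Python's portable shared-memory analogue. This is only a sketch of the concept, not the mpsham interface, and the region name here is invented:

```python
from multiprocessing import shared_memory

# "Producer": create a named shared region (analogous to shmget with a key).
region = shared_memory.SharedMemory(create=True, size=64, name="demo_region")
region.buf[:5] = b"hello"

# "Consumer": attach to the same region by name (analogous to shmat).
view = shared_memory.SharedMemory(name="demo_region")
data = bytes(view.buf[:5])
print(data)

# Detach and destroy (analogous to shmdt / shmctl IPC_RMID).
view.close()
region.close()
region.unlink()
```

Note that, exactly as the speaker says, the name gives you attachment but nothing more: ordering the producer's writes against the consumer's reads still needs a separate synchronization mechanism.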
The GK option stands for gatekeeper. The data-cache-maintenance option means mpsham will take care of the cache and cache-coherency operations, and the GK option — similar to the seal and unseal operations of the memfd design — means that after the gatekeeper call, the user gives up its access rights to the shared region. Let's take an example. The code block on this page is a simple abstraction of an application scenario: it starts with an mpsham attach to some shared region; then, in the worker function, it produces or consumes data within the shared region; after the work is done, it calls the mpsham detach interface to give up the access rights; and it does this repeatedly in a loop. As the comments in the pseudo-code show, there are two data-maintenance sub-functions performed on behalf of the mpsham functions. This is a high-level abstraction of the workload for a user of the shared-memory system. Next, on this page, you can see that the interface design of our system is not yet compatible with existing applications. For the memory-pool scenario, there are certain existing workloads — such as large-scale database systems, big-data applications, and some cloud applications — that could potentially benefit from the memory pool and would be worth modifying to adopt the mpsham facilities. It would also be worth adding native support for this shared-memory mechanism into the operating system, to natively enable openEuler to be ready for memory-pool systems. That's all I'm sharing today — thanks for listening. Thank you very much, and please stay here while we have the short wrap-up about the talk; we definitely appreciate sharing some things in Vietnamese as well. Okay, thank you. So now we are getting ready for the next session. Yes, please come here on stage, Mr.
Weijang Kang. Yeah, okay — I was very happy when I got it right. So while you prepare, I would like to share some key facts about you. Weijang Kang, with over a decade of expertise in technical development and project management, currently holds the position of senior manager at Jiangsu HopeRun Software. His vast experience spans a broad range of foundational software fields, including Linux OS, virtualization, cloud computing, automotive electronics (for example AUTOSAR), HPC/supercomputing, and artificial intelligence. Weijang has made significant contributions to various open source software communities, such as the kernel, Kdump, LTP, QEMU, libvirt, OpenStack and openEuler, and to the advancement of open source technology. In addition to his professional achievements, Weijang is actively involved in the openEuler community as a member of the openEuler user committee, where he continues to influence the development and adoption of open source solutions. Thank you very much for joining us here on stage; please go ahead. Okay, thank you. Hello everyone, I'm Weijang Kang from China. I'm a bit nervous — thanks for your patience. I have been working on open source software for more than 10 years. I used to be a maintainer of the LTP project, a very famous test framework used around the world and supported by Red Hat. Now I spend more time in the openEuler community, as a member of the openEuler user committee. Today I want to share a new technology — we call it the distributed soft bus — through which we can share computing power and data. Okay, let's get started. I will follow this agenda to introduce the technology. In general, the Linux platform provides the most basic communication technology, but these traditional IPC mechanisms only provide basic capability and are not connected to the end user. For example, a pipe can only work in half-duplex mode, a signal is a one-way asynchronous event, and
shared memory supports a large amount of data, but it requires the processes to be on the same host to work together. So the distribution of computing power and data depends on a reliable communication mechanism. Of course we have some newer technologies such as D-Bus, but they also have disadvantages; for example, D-Bus is not suitable for network transmission. So what we need is not only IPC, but also a cross-platform, feature-rich middleware for the network. On this page I want to talk about the origin of the distributed soft bus. We all know the hardware bus: the hardware bus couples the CPU, memory and other peripherals together, and the data and control information are transferred on this hardware bus. The hardware bus has very typical features, such as plug-and-play, high bandwidth and so on. So we got inspiration from the hardware bus, and we decided to develop the distributed soft bus. We want to build an invisible channel between different devices with different protocols and different transmission rates. In fact we have a big plan: we plan to develop an ecosystem based on the distributed soft bus. The distributed soft bus masks the differences between the different protocols and different devices. At the base, the soft bus supports Ethernet, Wi-Fi and other protocols; based on the distributed soft bus, we develop a collection of distributed components, such as distributed data and distributed files; and based on this layer, we develop scenario-based distributed services and mechanisms, such as remote control and data sharing. So the end user can use the distributed capability at the upper layer to develop their own applications, which is very convenient. This page shows the design and the architecture. At the bottom, we can see the soft bus needs a bus driver in the kernel. Above the kernel layer, the middle layer shows the soft bus's most core functions, such as topology management, messaging, and other very important features like service management and so on. At the upper
layer, we provide some high-level functions, and this is where I give an example about the distributed camera. Outside, at the openEuler exhibition area, I give another example of the distributed calculator; if you are interested, you can experience it by yourself. On this page I want to talk about the innovation brought by the distributed soft bus. In general, if we want to develop a networked application, we should create sockets at both the client end and the server end, then bind and listen on the ports at the server end, and then we repeat the cycles of sending and receiving. It looks complex, and it's not convenient. So let's look at the distributed soft bus: if you use the distributed soft bus to develop your own application, registering a callback is the important pattern. We can register the function and the listener on the server end, and then, if there is data from the client end, the server end will call the callback function to process this data. So it's very convenient and low-cost for the users. So let's look at the example. This is the example I just mentioned a second ago; you can experience it at the openEuler exhibition area. The mathematical expressions can select the suitable device, for example, to send the complex calculations. At the bottom layer we use the distributed soft bus, and on top of it we use the distributed data subsystem to support the distributed collaboration. Yeah, this is a video to show an example. Okay, you can see the mathematical expressions; it's a collaborative case, and you can operate it by yourself at the openEuler exhibition area. Lastly, I want to talk about how to evolve: we want to support more protocols, such as industrial protocols, and increase the maximum number of device connections; the value is 20 now and we want more. The distributed soft bus should also be able to run on other distributions, such as Ubuntu, CentOS and so on, and we hope more programmers get to know this technology and use and contribute
to the distributed soft bus, and visit the main page of openEuler; if you have any interest, please join us. Okay, that's all, thank you. Thank you very much, and we have the translation brought up in Vietnamese, yes, so thank you. [Vietnamese translation] ...and other languages. There are programs at openEuler that you can participate in and earn up to 42 million Vietnamese dong over the summer. So please go to the openEuler booth and inquire about these opportunities, and discuss this with the developers directly at the booth. If you have any questions or want to learn more about openEuler, you can go outside; on the left side you will find the openEuler booth, where you can hear more details about the programs that openEuler has organized. Thank you very much. And I would like to give a big round of applause for the entire openEuler track, and for you and everyone else here in this track. Thank you very much. Okay, and we now have the break. So there's a 30-minute break; we can get some refreshments outside, and in 30 minutes we continue with more topics on operating systems. Thank you. Good. So yeah, people are still streaming in, but guys, we are already starting here. We need to keep to the time and there's a lot to share; we have very interesting talks coming up. First I would like to welcome Sayan Chowdhury. Sayan, you have quite an impressive bio, and I would like to share it first. So he's a senior software engineer at Microsoft, and a Linux software engineer indeed, and a maintainer of Flatcar Container Linux as a release manager. He works to maintain and build Flatcar with a strong passion for open source, and that's absolutely true, because of his open source engagement for many years. Sayan has been involved in other communities, namely Python, Fedora and Mozilla. He's a PSF fellow and former chair of PyCon India 2020. And PyCon India is like a huge event, right?
So we are kind of a small event compared to PyCon India. Not at all. Okay, but PyCon India is huge indeed, so that must have been an amazing thing to do. And he fosters community engagement and provides mentorship to aspiring open source contributors. So if you're interested, maybe show your interest to Sayan; he might be able to help you, give you some advice and mentor you. Beyond technology, he's an artist and enjoys bouldering, treks and photography. Just very interesting, what I read about you. So thank you very much for joining us; we're very much looking forward to the talk. Thank you, Sayan. Thank you. Thank you, Mario, for the introduction. So good afternoon, everyone. I hope you had a good break and had some tea or coffee before the talk. So let me start with the talk now. My topic is: so you want to run containers, but which OS do you want to run them on? We have seen an uptake in the number of people using the whole container ecosystem, with Kubernetes and Docker in the picture, and there has been a strong focus on the OS side as well, to make things better for this whole ecosystem and support the ecosystem of containers. So moving on. The agenda of the talk is: I'll be having a 25-minute talk, and I'll be giving my intro at the beginning. Then we will discuss why we need a container OS, why there is a need for a container-focused OS. Moving on, if you have read the CFP that I submitted, this container OS topic focuses on four pillars, which are minimalism, speed, security, and immutability. Once I cover all these four items, I'll give you a brief introduction to a few of the existing container OSes out there, and after that, how you can use this talk as a leverage point to decide which OS you would like to pick for your next container workload. And then finally, I'll give a brief introduction to Flatcar Container Linux; that's the project I work on in my day-to-day life.
I've been associated with it for four years now, and I'd definitely love to give a brief introduction about it so that you can use it for your next workload. And finally, a few rounds of questions; I would love to hear your thoughts, and any feedback or questions you have about the talk. Cool, so moving on. Hi, I'm Sayan Chowdhury. I am from India, I work for Microsoft, I'm based out of Bangalore in India, and I have been working on the Flatcar team for the past four years now. Just to give a small history: Flatcar was initially spawned in this company called Kinvolk, which was later acquired by Microsoft in 2021, and that's how we ended up being at Microsoft and building this amazing OS with better support from the company. We are essentially under the Azure umbrella, so if you have any questions about Azure as well, feel free to contact me after this talk. So, the four pillars. What is a container-focused OS? The focus areas, if you see, are targeted at four pillars: minimalism, which focuses on how minimal we can keep it; then speed, security and immutability. I'll go through them one by one. But before that, why do we even need a container OS? The thing is, if you have worked in the DevOps space, then in recent times you have seen that the space is widely divided into two categories: one is the mutable infrastructure, and the other is the immutable infrastructure. Now, what is a mutable infrastructure? Mutable infrastructure means that you have your setup, you have your servers, and if things are going wrong, what you can do is SSH into the server and quickly edit things, update your configs, restart machines, or touch the servers wherever and whenever you need. Whereas in an immutable infrastructure, the idea is that if things break, you don't patch them in place.
You basically kill that particular server and spawn it up again, and it should be in the same state as the previous one. So in case your OS breaks, what you can do is maybe update the base image and then spawn up a new instance immediately with the updated configs. Now, why do we need this? The answer is in my next slide: cattle, not pets. This is a term that has been going around for ages now in the container ecosystem. To understand it in finer detail: when you have a pet, you take care of it. You have a name for your pet; if your pet falls ill, you take it to the vet, take care of it, love it, give it some care. Whereas with cattle, if you have seen a cattle farm, they are just numbered, maybe categorized by breed, and they are doing just the job they have to do. Taken into the perspective of infrastructure: pets are basically the servers you might have in a mutable infrastructure, whereas with cattle, you have your servers grouped by their categories. Suppose you have a database cluster, your web cluster or your backend cluster; whenever there is some issue, you don't really cater to their individual needs. If there is something wrong, you just kill that particular machine and spawn a new machine up. That's the ideology that has been going on in the microservices space with the container ecosystem: you get faster deployments, much more reliability, and it's easier to debug issues in case things go wrong. So this is where the container OSes come into the picture, because they aid you in building a system that can support the life cycle that you have for your apps. Now, going into what a container-focused OS is: the first topic is minimalism.
With the minimalism part, we are seeing three primary aspects: one is a minimal package set, second is minimal human interaction, and third is minimal resources. What we are targeting here is that most container OSes don't have a package manager inside them, so you are limited to the packages that are provided by the OS that you are using. And they are usually small: you have only the minimal packages that are required inside the OS. Depending on the goals of the OS you are using, a few OSes ship fewer than others and a few ship more, but all of them have a very small set of packages, just what is required for the containers to run. The next is minimal human interaction. As I mentioned, we cater to building an immutable infrastructure where you, as a person, would not be required to SSH into the machine. There are a few OSes which actually restrict you from doing so, and others have the option to SSH, but at the end of the day, we want you to tear down machines and, if required, spin up a new machine with updated versions or updated packages; if your apps are getting upgraded, you tear down the machine and bring up a new one. And then finally, minimal resources: with a smaller number of packages, you end up using fewer resources, and more of the resources can be used by the apps that you are running. The next is speed. With speed, we have faster deployments, then being optimized for container workloads, and boot speeds. Given you are essentially tearing down and booting up new machines, a machine would be ready in a few minutes, or actually seconds, so to speak. This is because there is less overhead and a smaller dependency chain involved during the boot process, so you would see a tangible reduction in your deployment times.
Machines would be accessible and available in a shorter span of time. And because our focus is only on containers, you would see that most of these OSes are very highly optimized for your container workloads: all the tuning has been done for Docker or Podman, whichever container runtime you are using. Or, if you are using an engine like Kubernetes, it would definitely be closely paired and in tune with Kubernetes releases, so that you get much better performance at the end of the day. The next is security. Among all these four points, one of the biggest points that every OS puts focus on is security, because given the number of CVEs getting released on a daily basis and the security leaks that are happening around the world, security takes a very high center stage, and you would see this pillar has a lot of points in it. One factor is immutability; that's something I'll discuss at the end. The next is automatic updates. Every OS has a different implementation: some do A/B updates, some do some other kind of update mechanism, but all of the OSes have the option that whenever there is a new release by upstream, either a signal is given or the payload is directly delivered, and then all the machines on a particular version are automatically updated, so that you have the latest patch versions on the machines in your infrastructure. The next point is reducing the attack surface. This essentially happens because when you have a smaller number of packages in your OS, you have a smaller attack surface: there are fewer packages that could have faulty areas, and also, because the OS is immutable, we see a sharper drop in the number of CVEs that we need to attend to.
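The A/B update mechanism mentioned a moment ago can be sketched as a toy model: the new payload is written to the *inactive* slot, and the system only switches slots if the new version comes up healthy. The class and slot names below are invented for illustration; real implementations live in the bootloader and the update agent.

```python
# Toy sketch of an A/B update with atomic rollback (illustrative names only).
class ABUpdater:
    def __init__(self):
        self.slots = {"A": "v1.0", "B": None}  # two bootable slots
        self.active = "A"

    def inactive(self):
        return "B" if self.active == "A" else "A"

    def stage_update(self, version):
        # Write the new image to the inactive slot; the running system is untouched.
        self.slots[self.inactive()] = version

    def reboot_and_check(self, healthy):
        staged = self.inactive()
        if healthy:
            self.active = staged  # boot into the new slot and mark it good
        # else: fall back to the old slot automatically — the atomic rollback

u = ABUpdater()
u.stage_update("v1.1")
u.reboot_and_check(healthy=True)
print(u.active, u.slots[u.active])  # → B v1.1
```

If the health check fails, `active` simply stays on the old slot, which is why a broken update "would not even be seen" by the operator.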
Now, this is one of the target areas for most of the container OSes: they keep this small, so that your infrastructure is usually more secure than with traditional OSes like, say, Ubuntu, Fedora or others. One thing I noticed while building this talk was that I actually gave a similar talk in 2019, at the last FOSSASIA I came to, but it was about the desktop. I saw that these last three points were kind of the same ones I mentioned in my previous talk, because if you categorize container OSes, they can be broadly divided; I am talking about infrastructure here, but there are a few desktop container OSes as well, which you can run your day-to-day laptop on. Then you have the secure container runtime, as I mentioned, with the tuning happening for the container runtimes; we try to have the latest container runtimes in all of the container OSes. And finally, the principle of least privilege is maintained, so that whichever services or users need access, they only have the access they require. And finally, immutability. In immutability, as I mentioned, there is no individual package management. We highly focus on you running your container workloads; anything you need is driven through a container backend, but you cannot really install anything on any of these container OSes. A lot of them have the root file system read-only, or, for us, it's /usr that is read-only, so that you really cannot alter anything inside the root file system, which again increases the security of the OS overall. Then, during updates, whenever there is a new update, it updates automatically. But what if there is an issue? Suppose something broke down in between, what do you do in that case? Most of the OSes have atomic rollbacks for updates: the OS tries to update, and if there's a failure, what happens next differs.
All the OSes approach it in different manners, but at the end of the day, the result is that if something fails, it rolls back to the previous stable version, and you would not even notice. Suppose there's an update on a Friday evening: if there's a breakage, you would not even see a thing; it would either update to the next stable version or, if something fails, roll back to the previous one. Next, a single version identifier. For example, in Flatcar, we really don't have multiple versions; at any given point of time, there is only one version that is getting tested. We have an epoch from when CoreOS did their first release, which was July 1, 2013; since then we have been doing various releases, and each of them has a single version identifier, so each image pinpoints to exactly one version. And with immutability, one great factor is that it again prevents a lot of attacks. For example, in 2019 we had the runC vulnerability, which was totally not affecting the container OSes, because of the way they operate; immutability was one big factor there. Now, coming down to the OSes that are there in the market. As I mentioned, we have Flatcar, which I'm part of; we have Talos, Fedora CoreOS, MicroOS, Bottlerocket and Photon. These are a few container OSes that are out there, and people use them on a daily basis. But they are architecturally very different: though they all implement the fundamentals that I discussed, the minimalism, security and speed, how they implement them is different. Just to give you an example: Flatcar has this release policy of maintaining three channels, which are Alpha, Beta and Stable, whereas Fedora CoreOS does it via streams: there is a testing stream, and then they have a production stream.
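The version "epoch" the speaker mentions can be sketched in a few lines: CoreOS-style (and hence Flatcar-style) major version numbers count the days elapsed since the July 1, 2013 epoch. This is a simplification of the real release tooling, for illustration only.

```python
# Single version identifier derived from a release date, CoreOS/Flatcar style:
# the major version is the number of days since the July 1, 2013 epoch.
from datetime import date

EPOCH = date(2013, 7, 1)

def major_version(release_date: date) -> int:
    """Days elapsed since the epoch = the release's major version number."""
    return (release_date - EPOCH).days

print(major_version(date(2013, 7, 1)))  # → 0
print(major_version(date(2014, 7, 1)))  # → 365
```

Because the number is derived from the calendar, every build gets exactly one identifier and versions are trivially ordered, which is what "each OS image pinpoints to one version" relies on.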
Or, for example, provisioning: Flatcar and Fedora CoreOS both use a provisioning agent, a piece of software called Ignition. So we use that, but some other OSes use a different mechanism. For Flatcar, /usr is locked down, but for others, the root file system is locked down as well. And there are a few OSes which have SSH blocked too, so you cannot really SSH in; Talos, for example, is very interesting here, because you have an API to manage your workloads instead. Now, how to choose? What I would suggest is that if you're choosing a new container OS, the first thing you should look into is configurability: how you configure your system. Suppose you currently use cloud-init, or suppose Ignition: you are limited by the options that you have. Fedora CoreOS and Flatcar both use Ignition, and we contribute to Ignition together to make it better. In that scenario, you would need to look at how you provision your machines and how you manage your configuration, so that you choose a good OS. Then comes security: how locked down you want your system to be, or how free you want your system to be. That becomes a challenging aspect as well. You basically do an analysis of your current setup and see how much security you need; if you want SSH totally locked down, then you would probably choose from one of these OSes. Not sure what happened here, but yeah. So you could probably choose from one of these OSes. And then finally, immutability. Immutability is roughly the same for all of them, but it becomes a factor because of how you update things: for Flatcar, Ignition runs just on the first boot and applies a config.
But if there are other OSes which give you the option to append things later, that could also be added to your decision. Okay, so yeah. As I mentioned, I work with the Flatcar Container Linux team, so I wanted to give a small introduction to Flatcar Container Linux. It's an OS that was forked out of Container Linux, which was maintained by CoreOS. After Container Linux was EOLed, we implemented a lot of features; we actually went quite our own way, and the architecture of what CoreOS Container Linux had has been substantially changed in what Flatcar has. For example, we are working on systemd-sysext system extensions and other features as well. Then you have the releases, which happen at a monthly cadence right now, but if required, if there are security patches, we increase the cadence, sometimes to bi-weekly. We have, technically, Alpha, Beta and Stable: Alpha is released roughly every few weeks, then Beta at a bigger cadence, where we try to soak things, and finally Stable, which lands a lot of features at once. And we maintain an 18-month cycle for LTS: we release every year, but the support cycle lasts 18 months. This was a requirement because with LTS you try to find some stability, so this also becomes a choice: if you want more stability, Flatcar LTS becomes a good choice for you. Next is provisioning. Provisioning for Flatcar happens through Ignition, and it's a first-boot-only thing; you have Butane and Ignition working together. So you can have configs which really help you: it's a provisioning agent, so you can set up your system and then reuse the config again and again to get the same system at the end of the day.
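For a flavor of what such a first-boot config looks like, here is a minimal Butane file of the kind the speaker describes; Butane transpiles it to Ignition JSON, which runs once on first boot. The SSH key is a placeholder, and the exact keys should be checked against the Butane `flatcar` variant specification.

```yaml
# Minimal Butane config (transpile with the `butane` tool; applied on first boot).
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - "ssh-ed25519 AAAA... placeholder-key"
storage:
  files:
    - path: /etc/hostname
      contents:
        inline: worker-1
```

Because the same file is fed to every new instance, tearing a machine down and re-provisioning it reproduces the identical system state, which is the "reuse it again and again" point above.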
Then, on the immutability and security aspects: we give a very high focus to security, so we keep a very high cadence for any security fixes when CVEs are released, and we try to do a release whenever there is a high-severity security issue that needs to be taken care of. And the last one, the slide got messed up, but it's the availability on cloud providers. We have a lot of cloud providers that we support, and we keep adding more cloud providers all the time. You can see Flatcar on DigitalOcean; we have a server where we host all of our images if they're not on a marketplace, so you can take those images and upload them to the cloud providers. Where they are on a marketplace, we are there on the Azure Marketplace, on GCP, on AWS, and more that we have been working on; recently, I guess, we added Brightbox. And the other thing, the last thing I would talk about, is the community, because it's an open source project and it's growing day by day. If you're looking to contribute, we have open office hours every month, on the second Wednesday, and a developer sync meeting, so that you can directly interact with the developers on the team; these cater to any needs the community has. And if you're looking to contribute, we are available on Matrix, where you can talk with us directly. So, just to give you a list of the things that we have: flatcar.org/releases shows all the releases; you can just go to flatcar.org, which has all the information for you, and the GitHub repo is flatcar/flatcar. Documentation is there in the Flatcar docs, and we actually built a tutorial, because the whole container OS concept is pretty new.
So we built a tutorial around it, which is a four-step process; you can read through and follow it on your local machine, in the Flatcar tutorials. And then finally, our communication channels: we do office hours on our Flatcar Jitsi server, every month on the second Wednesday; you can find it on GitHub Discussions. And we are there in the Flatcar Matrix room; if you have any issues, you can drop into that channel directly. We are also on the Kubernetes Slack, in the #flatcar channel, and if you have any questions related to Flatcar, you can jump in there. So, now time for questions. Do we have time for questions? Any questions? I've experienced that people are a little bit shy in the big hall about asking questions, but I got a question from the back: somebody said they saw this slide from you with "immutability", and they weren't very clear what it actually means, and asked if you could explain it very shortly. That's a very good question. I don't know if you can go back to the slide. So that's a very good question, actually. The question was: what is immutability? In plain English, "immutable", when you refer to it in computer science, is usually something which cannot be changed. In programming constructs, if you have any item or any type that is immutable, you cannot really change it. So immutability, in the OS aspect, means that things are locked down, in a lockdown mode or in a read-only mode, so that you cannot really alter them even if the user wants to.
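The programming-construct analogy in that answer can be shown in a few lines of Python: an immutable type simply refuses in-place modification, just as a read-only /usr refuses writes.

```python
# An immutable object rejects in-place modification, like a read-only /usr.
config = ("read-only", "/usr")    # a tuple is immutable
try:
    config[0] = "writable"        # attempt to alter it in place
except TypeError:
    print("mutation rejected")    # → mutation rejected

# To "change" it, you build a new object instead — the same idea as tearing a
# machine down and respawning it from an updated base image.
new_config = ("writable",) + config[1:]
```

The second half mirrors the immutable-infrastructure workflow: the original is never edited, a replacement is constructed from a known base.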
The reason for this is to increase the security aspect of the containers, so that your container workloads are more secure, and this is actually done through all the points that I mentioned here: having no package manager, having the root file system locked down, having atomic rollbacks, so that you have less control over the actual Linux host. If you need to alter things, you go to the provisioning stage, change your configs, and then boot up a new instance. The reason for Ignition being first-boot-only is exactly that: you cannot really alter things afterwards. Any more questions? Yes, there's one question here. The usage of Docker images is often, how to say, a transitive issue. Most of the usage does not go back to the original OS; you start from whatever image there is. How do you plan to deal with that? I mean, if you have a Python application running, you pull from the Python image, and they run on Debian; so you pull python-slim or whatever, so you get what they use, and that's not optimal. Have you made any plans to work in this direction? So, just to understand your question: you mean there are issues that come more from the container or the Docker side? No, no, this is independent of Docker; it's purely about the Docker image. Most of the Docker images we use in daily life, we don't start from the OS; we start from a specific image, for example for Go, for Rust, for Python, for whatever. And by this, we transitively use what the Rust developers, the Go developers, whoever, decided as their base OS for the Docker image. So it's often not up to us to choose a different operating system, because we would have to go through a lot of steps and set everything up ourselves. So that's the question: do you have any ideas on how to deal with that?
So the question is primarily that a lot of times, when you're deploying your workloads, you have an OS, but when you're working with Docker, you really don't have a lot of control over the image that you are getting from upstream, right? In this scenario, I would say we really don't have much control in that manner. We would probably recommend you to build up your own Docker image, or any container image, in that sense. But we don't really have much control there: when we release an OS, whenever there is a stable release, we do see an uptick in the number of issues that get reported, but a lot of the time, those bugs are actually coming from the Docker images as well. So we don't really have much control in there; we would probably help by working with the Docker folks to fix an issue if it comes from interaction with the OS. But then, yeah, if there are security loopholes, or issues like, suppose, a fraudulent mirror or something, we cannot really help there. Yeah, so as an example, I would be more than happy to have a Docker image with an immutable root and /usr file system, and Python already installed, right? A system that is locked down, but provides a Python environment. So these are the things where you might, on your side, provide these kinds of base images for certain programming languages, based on a locked-down /usr and root, with different changes, whatever, right? So one thing that comes to my mind is that we are working on this new technology that systemd is bringing in, systemd system extensions, and we are on the way to implementing it. This helps you build your own images on top, because the OS root file system is locked down; this gives better control, as you can create your own images. And in that scenario, we also provide our own images as well.
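To make the sysext idea concrete, here is a sketch of the on-disk layout of a systemd system extension, which overlays extra content (say, a Python runtime) onto an otherwise read-only /usr. The paths follow the systemd-sysext convention of an `extension-release.d` file matching the extension name; the extension name itself and the `ID=flatcar` match are illustrative assumptions.

```python
# Sketch of a systemd-sysext extension tree (illustrative extension name).
from pathlib import Path

name = "python-runtime"                      # hypothetical extension name
root = Path(name)
(root / "usr/bin").mkdir(parents=True, exist_ok=True)
rel_dir = root / "usr/lib/extension-release.d"
rel_dir.mkdir(parents=True, exist_ok=True)

# The extension-release file must be named after the extension and declare
# which OS it may be merged into (ID=flatcar here, or ID=_any).
(rel_dir / f"extension-release.{name}").write_text(
    "ID=flatcar\nSYSEXT_LEVEL=1.0\n"
)
# On the host, `systemd-sysext merge` would then overlay <name>/usr onto /usr.
```

This is how a locked-down image can still ship an NVIDIA or Python layer without giving up the read-only root.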
So this is where we could provide maybe an NVIDIA image or a Python image, so that people can consume those and work with those instead of working with the upstream ones. Yeah, I think that could be one of the solutions there. Thanks a lot. Yeah, thank you. Okay, so I think there are more questions, but unfortunately we don't have time anymore; the next session is up. But Sayan, are you available here afterwards, out in the hallway track, so people can come to you and ask you more questions? Yeah. Yes, so if you're interested in anything about containers, container OSes or Azure, definitely find me in the hallway track; I'm mostly on the ground floor, so feel free to contact me. My Twitter handle is Udoka, or you can contact me on my email, a gmail, at Udoka.in. So, yeah. Perfect. Thank you. Thank you, everyone, for coming to my talk. Yeah, thank you, Mario. Thank you. Thank you very much. Yeah. Okay, so we just have a two-minute break to set up the next person here, for a talk about the practice of developing the Sogou input method on openKylin. Okay, so we are ready here, and yeah, I'm very glad to welcome Mao Joe with "A Practice of Developing the Sogou Input Method on openKylin". Let me share a few key facts about Mao Joe. Mao Joe is a member of the openKylin technical committee in the openKylin community, and also a member of the technical community in the infrastructure SIG and packaging SIG. With a prolonged engagement in operating system software research, he has authored and presented several papers related to operating systems at international conferences, sorry, for example ICACTE and SEAI, and he has additionally obtained multiple software copyrights and inventions. So thank you very much for joining us here today with the talk on openKylin. Thank you very much. Welcome. Good afternoon, everyone. Today I'm excited to share with you our journey and experience developing the Sogou input method on openKylin.
First, let me introduce myself. My name is Joe Mao. I'm an openKylin technical committee member and packaging SIG maintainer. The openKylin community is an open source community founded by many people from different places, aiming to be an easy-to-use desktop operating system. The image on the right shows the UI of openKylin, which has desktop and tablet modes and supports running on x86 and ARM architectures. And below is our home page; you can go there for more information. This is our main platform architecture, from source code to package compilation to image generation and unit testing. Document management and translation management are also connected with our GitHub repo. We try to automate the process as much as possible, including automatic code uploading and automatic testing of daily images. The Sogou input method is really popular, not just in China, but all over the world. The main job of the input solution SIG is to develop and keep up with the Sogou input method software and to release its community version. The input solution SIG is mostly maintained by folks from a Shanghai-based company team. Moving forward, openKylin's input solution SIG plans to share more with the open source community, especially in distros and secondary development. We stick to a simple approach, focus on touch, and always put the user first. This way, we make sure our input solution is easy and smart to use. We are the go-to team for Linux input solutions, trusted by many businesses across different fields, like insurance, government, and public security. Our goal is to keep making things that meet the real-world needs of our users. The input solution SIG has also obtained a number of certifications and certificates, such as AAA-graded enterprise, a five-star certification certificate, the ISO 27001 information security management system certificate, the ISO 20000 information technology service management system certificate, and so on.
Let's talk about the next generation of the Sogou input method, based on the cross-platform input framework Fcitx 5. This new version includes several important modules. The engine service module ensures our input method is fast and reliable. The keystroke flow module captures your typing efficiently. The panel system module allows for customization and supports various plugins. Lastly, the front-end module connects everything and makes sure the input method works smoothly across different platforms and devices. Our focus is to provide flexibility through engine plugins and to ensure that the Sogou input method is easily adapted to any platform. This makes typing more convenient, no matter where you are or what device you're using. The Sogou input method engine service currently supports several services and can be customized and extended. Each service module supports running either as an independent service process or in-process. Multi-language and multi-module input functions can be extended by engine plugins, such as national-standard minority-language input, remote character input, and so on. UI plugins for the Sogou input method include a handwriting pack, function key buttons, data buttons, and more. The callback modules include Fcitx 5, the input engine, the key flow, and so on. The Sogou input method is the first commercial input method adapted on top of Fcitx 5, and some problems were encountered during the adaptation process. Currently, Fcitx 5 is based on one of two event loops, libuv and the libsystemd event loop. Different event loops are enabled in the input method framework depending on the current system configuration environment, so a plugin for Fcitx needs to be consistent with the event loop of Fcitx 5, otherwise it cannot receive signals. Another problem is that different Fcitx 5 versions have different interfaces.
The interface of Fcitx 5 has been changed in higher versions, which leads to errors in compilation; these can be solved by changing the parameters of the interface calls. Here is a demo of the Sogou input method for openKylin RISC-V. It shows you can type in many languages using the Sogou input method on the openKylin RISC-V OS. There were also some problems with typing in Sogou on openKylin RISC-V. The first problem is image rendering: calling gdk3's interface returned "bad parameters" error messages. The problem was solved by recompiling and loading the static library from the updated package. Another problem is a dependency conflict: the build depends on the libgdk3 development package when compiling, but installing it from the repo results in a dependency conflict. It can be resolved by updating the version of the conflicting package in the repo. In order to provide scalable multi-language functionality for openKylin, the input method solution team has worked deeply with openKylin. In terms of national standard support, it provides input support for the GB/T standards and the new minority-language characters in GB 18030-2022 to improve the user experience. All features have been integrated into the community solution. The next generation of the Sogou input method also provides virtual keyboard input support for all minority-language characters and a handwriting input function. Below is a list of openKylin multi-language support, including 20 minority languages such as Tibetan, Korean, and so on. There are also 10 national standards supported, such as GB/T 12510-2015, GB/T 31918-2015, and so on. And now, with a series of products going overseas, such as Bethan Row, the Sogou input method is able to support the need to provide input in local languages and scripts. The cross-platform input solution can provide input for over 2,000 languages around the world through engine plugins, self-developed or combined with third parties.
In addition, it can provide customers with keyboard-like input devices, including control keyboard input, remote control panel input, and non-language signal input. openKylin's story is one of national and international collaboration, starting in 2006, when we participated in an open-source effort among China, Japan, and South Korea as the leader of the Chinese enterprise group. A milestone was reached in 2016 when we became a full member of the Linux Foundation. In the same year, we became the vice chairman and the deputy secretary general of the China Open Source Software Promotion Alliance. Just last year, we became a platinum member of the Over 2 Foundation. We are continuously involved in open-source work. We have contributed millions of lines of code to projects like OpenStack, the Linux kernel, OpenNabra, and Fedora. It's not just about the quantity, it's about the quality and the impact. Our code contributions are integrated into the official repos of 11 international Linux distributions. Furthermore, we are not only contributors but also active participants. We integrate ourselves into the community by attending international conferences such as FOSSASIA, EPCON Asia, and the Linux Security Summit. As we come to a close, let's continue to work together, share, and create for a bright future. Thank you. Thank you very much. We have a Vietnamese translation here to wrap up your session. I can already come to somebody who has a question, potentially. In the meantime, I can already give the translation. Translation to Vietnamese. Sorry. This microphone. Thank you. To translate to Vietnamese. Translate? Yeah, what he said. I mean the wrap-up. Okay. Okay. Thank you very much. I hope that helps, with sometimes some short translation, so we can already set up the laptop for the next speaker while we answer the questions. Sorry. I have two questions.
The one is: besides Chinese and the minority languages, do you support languages like Japanese or Indic languages, especially Indic languages, where it's complicated to create the glue stacks? For the input method or for the OS? Input method. Input method. Your question is how to use the input method to input Japanese, Korean, or other languages? Yes. Okay. Most of the languages are based on an alphabet. To input them, it's very easy. But for Chinese, Japanese, and Korean, we need a conversion. Yes. So we need an input method engine, and we need a dict, and we need a UI. Because if we use full Pinyin, the UI will show the A, B, C, D, E. But if we use nine-key Pinyin, the UI will show the 1, 2, 3, 4, 5, 6. I understand. I use it. I use these kinds of input methods daily. So my question is, do you support Japanese? Yes. Okay. And how is it with Indic languages? Sorry. Indic languages. No. No. Okay. Then the second question: maybe I misunderstood something, but I had the feeling that part of the input methods is a commercial service. Is this wrong? Sorry. A part of the input system, the input, is a commercial service. Okay. Yes. So you would have to subscribe to a service. Yes. We use the Sogou engine. Okay. That part is commercial. Okay. And it is closed. Okay. So that's the reason for the compilation problems you had with the libraries. Okay. Thanks. Thank you. Any other questions? Okay. Not at the moment. Yeah. But thank you very much for answering that. Thank you. Appreciate it. Yeah. Thank you very much also for the presentation, and we're coming to the next session in a moment. Of course. Yeah. Thank you. Okay. So give us a moment. We're setting up the laptop and then this track will continue. Yes. Great. So I would like to then also share some information here. So the next session now is about how I built a check-in kiosk for UbuCon Korea 2023 using Ubuntu Frame, Flutter, and Raspberry Pi, with Yongbin. Yeah. And yeah, Yongbin has quite an impressive background.
He is an organizer of the Ubuntu Korea community, a member of the UbuCon Asia Committee and the Ubuntu Local Communities Council, and he's helping people interested in Ubuntu, its ecosystem, and other relevant open source projects to join and gather together by organizing a lot of community events and local community gatherings in Korea. But actually you're also reaching out to a lot of other countries and regions across Asia, which is of course the focus. And apart from his community engagement in open source, he's a software engineer at CloudMate, a cloud MSP company in Seoul, involved with developing web products for internal usage and for client companies, such as integrated cloud billing portals, sales ops, and much more; that's just a short glimpse. But Yongbin, I know our team has met you all over Asia in different places, and you're quite well connected with the FOSSASIA community. So it's a special pleasure to welcome you here. Thank you very much for joining us, and yeah, a big round of applause for Yongbin. Thank you. Thank you Mario for the introduction. So welcome to my talk, and my talk will be about, yes, Mario already gave you some introduction. It will be about how I built a check-in kiosk using Ubuntu Frame, Ubuntu Core, and what else, Raspberry Pi. That was a long title. Yeah, so yes, of course, Mario already gave some introduction about me, but to give you some more: my name is Yongbin and I'm mostly involved with the Ubuntu community, usually with organizing local communities, meetups, or events like that. So yeah, I'm a member of the organization of Ubuntu Korea, and I'm also a part of the Ubuntu Local Communities Council, which is a team for helping out the Ubuntu people who'd like to run their local community in their region, and helping them make their connections across the world. And I am also involved with organizing a lot of events this year.
So I'm a part of the UbuCon Asia Committee, as mentioned, and I'm also involved with UbuCon Korea, the annual conference happening in Busan this year; I'm involved as part of the local team. And for my work, I'm working with CloudMate, building some services there. So, a few notes before starting my talk. It's not really about best practices, because I was new to trying out Ubuntu Frame last year. It's just about how I tried to use it for my own project. So keep that in mind before listening to my talk. So let's get started. So why did we build a check-in kiosk for the events? When I was organizing UbuCon Asia back in 2022, we were actually using multiple event platforms for registration due to some payment issues. If we used the local platform, foreigners couldn't register and make payments, and Koreans couldn't make payments on the foreign platforms. So we just decided to use two. And the problem was that we were not able to streamline the check-in process. There were some missing check-ins and some people didn't get their tags. So there were some problems. If I ran this kind of event once more, I wanted to improve the check-in process at the event. So I started working on my own kiosk project. And why did I choose to build on top of Ubuntu Core and Ubuntu Frame? Well, I've been to the Ubuntu Summit and UbuCon Asia back in 2022, and there were actually a lot of talks and workshops about Ubuntu Frame and how you can build a kiosk or digital signage on top of it. So it really caught my attention. After participating in those kinds of sessions, I wanted to try building some projects on it. So that was the reason why I chose it. So by the way, what are Ubuntu Core and Ubuntu Frame, to give you a short introduction? I think you already know about Ubuntu, right?
It's one of the most popular Linux distributions in the world. And Ubuntu Core is a kind of operating system optimized for building your IoT or edge computing or embedded devices. One of the biggest notable differences between Ubuntu and Ubuntu Core is that everything is snap packages in Ubuntu Core. Even system updates are managed with snap packages. So that would be the biggest difference. And Ubuntu Frame, while Ubuntu Core is like that, Ubuntu Frame is basically a kind of fullscreen Wayland shell, built on top of the Mir display server. Since it's basically a Wayland compositor, if your Linux GUI app is compatible with Wayland, you can launch it inside Ubuntu Frame and use it for a kiosk or some digital signage. Since Ubuntu Frame is also packaged as a snap package so that you can run it on Ubuntu Core, there is something special you need to set up for using Ubuntu Frame, which is called interfaces. They allow you to access some system resources, something like the Wayland socket, your text input, the screen, things like that. So you usually configure the interface connections to allow your GUI app to communicate with Ubuntu Frame. So that was a short introduction to Ubuntu Core and Frame. So this was basically my original plan for building my kiosk. My plan was to, of course, build on top of Ubuntu Core and Frame, and I had some existing webcams, so I wanted to use them for scanning QR codes. And I also wanted to try out working with Flutter apps, because at that time the Flutter support for Linux was also promoted a lot, so I wanted to try that with yaru.dart, which is an Ubuntu-style theme for building your Flutter applications. And well, at that time, I also got a RISC-V board, and yeah, I wanted to try it, but I was not able to use it because Flutter didn't have support for RISC-V at the time, so I just replaced it with the Raspberry Pi.
And yes, of course, we also need a cheap label printer for printing some tags. So the first thing I did for my project was finding a cheap label printer, because if you search for label printers, you usually find the printers from something like Epson or Zebra, and of course they're very good printers, but they're very expensive, right? And it doesn't make sense to buy those expensive ones for just a hobby project, so I started to search AliExpress for cheap printers that seemed to be working with Linux, and this is what I bought. And yeah, it worked with Linux, but the driver only worked on x86; it didn't work with the Raspberry Pi, so I was not able to use that driver to print labels. So instead, I tried an alternative way: if you look at label printers, they have their own programming languages, something like TSPL, ZPL, or ESC/POS, and you can use them to directly control your label printer. So I tried out TSPL, which wasn't easy for me, well, and I also did a lot of workarounds to get printing working properly. I was also trying to see how other mobile labeling apps were sending commands to the printer, and if you look at the printer, it has a dump feature: you can dump the command data from the application. And this is what I got, a lot of labels printed in just a second. So this is how a TSPL command looks. It was one of the command languages supported by the printer I bought, and it looks much easier than ESC/POS, so I decided to use it. When you're using it in Flutter, you can build the command like that. After that, I started to work on the Flutter app, implementing some features. The first thing I did was building a QR code scanning feature. Well, Flutter has a camera plugin, but it doesn't have support for Linux, so I just looked for some plugins that work with GStreamer, and I think I was able to implement it. So I implemented QR scanning with the webcam first.
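To make the TSPL idea above concrete, here is a rough sketch of building a raw TSPL job for a name tag in Python rather than Flutter (the command names SIZE, GAP, CLS, TEXT, and PRINT are standard TSPL, but the fonts, coordinates, and label dimensions here are purely illustrative, not the speaker's actual values):

```python
def tspl_label(name: str, width_mm: int = 60, height_mm: int = 40) -> bytes:
    """Build a minimal TSPL job that prints one attendee name tag.

    TSPL commands are plain ASCII lines terminated by CR+LF; the whole
    job can be written to the printer device as raw bytes.
    """
    lines = [
        f'SIZE {width_mm} mm,{height_mm} mm',   # physical label size
        'GAP 2 mm,0 mm',                        # gap between labels on the roll
        'CLS',                                  # clear the printer's image buffer
        f'TEXT 30,30,"3",0,2,2,"{name}"',       # x, y, font, rotation, x-mul, y-mul, text
        'PRINT 1',                              # print one copy
    ]
    return ('\r\n'.join(lines) + '\r\n').encode('ascii')

# On Linux, the raw job can often be written straight to the device node:
#   open('/dev/usb/lp0', 'wb').write(tspl_label('Alice'))
```

The appeal over ESC/POS is visible here: each TSPL line is a readable, self-describing command, so a label layout can be assembled with simple string formatting.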
I had built some part of my app, and started testing it on my desktop first, like my working machine, my laptop. You can actually test it very easily: if you're running snaps on your laptop, you can just install the Ubuntu Frame snap and run Ubuntu Frame with some environment variables, like the WAYLAND_DISPLAY environment variable for configuring which Wayland socket you will be using, and then you can just launch your app inside, like this. If you launch Ubuntu Frame, like I said, it's just a fullscreen shell; it gives you an empty screen first, then if you launch your application inside, that's what you get. Yeah, you see your application launched inside the frame, and you can interact with the application. Now that I had tested my application, the next thing I did was writing a snapcraft.yaml. The snapcraft.yaml is kind of like a configuration file for building your snap package. A snap is actually similar to a Docker container in terms of bundling the dependencies you need to run your application, so if you're familiar with Dockerfiles, maybe you can also get started with Snapcraft easily. So you define your snapcraft.yaml, and inside you define what kind of dependencies you need, and yeah, something like that. And there's something a bit tricky when you're working with a snap for Ubuntu Core, because the environment is quite different from packaging for the desktop, so you can't really take advantage of the desktop extensions. But fortunately, there are some example projects that you can build on top of, so you don't need to start from scratch. And another thing that's different from your normal desktop application is that it's launched as a daemon, so it will be launched automatically on boot of the Ubuntu Core environment, and it will also not interrupt your command line prompt after you've launched your application.
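For readers who haven't seen one, a snapcraft.yaml for this kind of kiosk app might look roughly like the skeleton below (a hypothetical sketch, not the speaker's actual file; the app name, version, and part definition are placeholders, while `daemon: simple` and the `wayland` plug reflect the daemon-on-boot and interface points discussed above):

```yaml
# Hypothetical snapcraft.yaml skeleton for a Flutter kiosk app on Ubuntu Core.
name: checkin-kiosk
base: core22
version: '0.1'
summary: Check-in kiosk app
description: A Flutter app launched under Ubuntu Frame on Ubuntu Core.
grade: stable
confinement: strict

apps:
  checkin-kiosk:
    command: bin/checkin_kiosk
    daemon: simple              # start automatically on boot, run in background
    restart-condition: always
    plugs:
      - wayland                 # talk to Ubuntu Frame's Wayland socket
      - opengl
      - network

parts:
  checkin-kiosk:
    plugin: flutter
    source: .
```

The `plugs` list is what later gets wired up with `snap connect` on the device so the app can reach Ubuntu Frame.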
Then, after I had defined my snapcraft.yaml, I started building the snap. You would normally build your snap package for your laptop, but in this case, you're going to build it for the Raspberry Pi. Usually cross-compiling is quite complicated to set up, so you can actually leverage existing infrastructure. If you look at the Snapcraft CLI, which is the tool for building snaps, it has a remote-build command, which you can leverage to use remote build infrastructure to build for other CPU architectures, like arm64. Or, if you want to automate the build of your snap, there are some GitHub Actions that you can use in your workflow, like the snapcore/action-build action that allows you to build your snap. Well, you may also need to build the snap for multiple architectures, in which case you can use the snapcraft multi-arch actions for cross-compiling your snap for multiple devices, like your PCs, the Raspberry Pi, and much more. Then, since I had built my snap, it was time to test it on the actual machine, the Raspberry Pi used for the kiosk. Setting up your application on Ubuntu Core is also quite straightforward: if you already have Ubuntu Core installed on your Raspberry Pi, you'll need to install the Ubuntu Frame snap, then copy your snap over with something like scp, and then you can just install it. But you need the --dangerous flag because you're installing your local package; that flag allows installing the package from a file. And one thing you need to notice: I mentioned the interfaces earlier, so you will also need to connect the interfaces, like the Wayland interface, so that your application can talk with Ubuntu Frame and you can interact with your application. For my use case, I also had a scenario where people need to enter their emails for checking in, so I also added the on-screen keyboard.
Adding the on-screen keyboard is also quite straightforward. There is already an Ubuntu Frame OSK snap. You just need to install it on your device and it will basically configure and launch itself automatically. So there's really nothing to do; it just works. So this is how it works in action, in my kiosk setup built on-site. If people touch the text input on the screen, the on-screen keyboard is shown. But actually, not everything was trouble-free. If you're new to working with snaps, working with interfaces can be a bit tricky, because you might not know what kind of interfaces you need to add. For that, you can use snappy-debug. It helps you find what kind of interfaces are missing from your snap, so you can use it for debugging. But even after that, there were still some things that were tricky to work with. One of them was that I found QR code scanning with the webcam was working on my laptop, but it wasn't working on the Raspberry Pi. The problem was that I was not able to fix it while the deadline was approaching. So I just decided to give up, borrowed a barcode scanner from my friend, and replaced the scanning feature with text input, because the barcode scanner gives keyboard input, so I was able to easily replace the implementation. And I also had a problem with the quick_usb plugin: it only had support for x86, so it didn't work with the Raspberry Pi. So I also needed to do a somewhat weird workaround: I wrote a simple Python server using PyUSB. And yeah, that was the... well, I made the setup work anyway.
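The kind of small Python server mentioned here might look something like the sketch below (an illustration under stated assumptions, not the speaker's actual code: a tiny HTTP bridge that accepts raw print jobs from the kiosk app and forwards them to the printer; the real version used PyUSB for the final write, while here the transport is injected as a callable so the logic works without hardware, and the port is made up):

```python
# Minimal HTTP-to-printer bridge. The kiosk app POSTs raw label-printer
# bytes to this server, which hands them to whatever transport is
# injected (in the real workaround, a PyUSB bulk write).
from http.server import BaseHTTPRequestHandler, HTTPServer

def make_handler(send_to_printer):
    """Build a request handler that forwards POST bodies to the printer."""
    class PrintHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get('Content-Length', 0))
            job = self.rfile.read(length)   # raw printer bytes from the app
            send_to_printer(job)            # real code: PyUSB bulk write here
            self.send_response(204)         # accepted, no response body
            self.end_headers()

        def log_message(self, *args):       # keep the kiosk console quiet
            pass
    return PrintHandler

# A real deployment would pass a PyUSB-backed writer, for example:
#   HTTPServer(('127.0.0.1', 8631), make_handler(usb_write)).serve_forever()
```

Splitting the USB access out into a local server like this sidesteps the architecture-specific Flutter plugin entirely; the app only needs to make an HTTP request.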
Another problem I got on-site while setting up was a network configuration issue, because the venue did provide Wi-Fi, but it had a captive portal, which means you need to open a web browser to log in; but on Ubuntu Core, we don't have a web browser, so I was not able to connect to the Wi-Fi. Instead, I just plugged an ethernet cable into my laptop and shared my internet connection. Well, it was a workaround, but it worked anyway, I think. So, let's look at how it works in action. So, that was how it actually worked in action. Yes. So, I told you there were still a couple of issues even though I was able to set up on-site. So I'm still trying to improve it after the event, because I need to use the same setup at the same event this year. The first thing I'm trying to work on is adding a network configuration interface so that I can configure the network on-site. I was actually able to implement it using the nm package, which allows you to interact with NetworkManager so that you can configure the network connections on your Linux system. And there are actually many packages available already that allow you to interact with the Linux system, something like D-Bus and much more. Yeah, I'm also thinking about how I can deploy it much better. Maybe I can try out the OTA features that the Snap Store provides. Or, I haven't tried it out yet, but I think I can also try using gadget snaps. I heard they're for including configurations in the snap package so that you can deploy to multiple machines. So, those were my use cases for Flutter and Ubuntu Frame. To sum up my thoughts on these technologies, I think most things were quite straightforward, such as working with Flutter for Linux. They already have many tools available for Linux, and packages are also available, so I could work using my codebase on my Ubuntu desktop.
And using Ubuntu Frame and Ubuntu Core themselves was also quite straightforward, but I think building a snap for that environment was quite tricky, because what I needed to bundle was quite different from desktop snaps. So that was something tricky. And yes, working with the network on-site: I think that was one of the trickiest things when I was working with Ubuntu Core. So, that was my talk. And just before I finish, I have something to advertise. There is a UbuCon Asia happening in Jaipur, India, around late August to September 2. We are currently calling for proposals, so if you would like to join, you can scan the QR code for more details. And you can also sponsor the event. And the annual UbuCon Korea conference is also happening, in South Korea. We will be calling for proposals quite soon, so if you're interested, you can also scan the QR code for details. And yes, we are also calling for sponsors for the conference. All right. That's all for my talk. Thank you for joining me. And if you have any questions, feel free to visit the Korea FOSS community booth on the second floor. Thank you. Okay. Thank you very much. And you're there tomorrow the whole day at the booth. We can find you, right? Yeah. Cool. Thank you. And we have the next speaker coming up. Yeah. With the next talk. The talk is about open source operating systems in higher learning and research, with Jerry. Jerry Vasquez. Yeah. Is that correct? Or do you say it the Spanish way? Vasquez. Vasquez. Yeah. Like the English way. Okay. So, Jerry is a seasoned Linux and networking engineer at the Facility for Rare Isotope Beams at Michigan State University, where he is known as the OJ leader, Linux bird cheer pet, and an accidental network engineer. Okay.
With a fascinating journey from chef to tech leader, Jerry brings a unique blend of skills to the table. His background in professional kitchens honed his abilities in quick decision making, accountability, effective communication, and urgency, all of which proved invaluable in the tech domain. Transitioning from culinary arts to network engineering, Jerry has applied his leadership and operational skills to significantly improve network uptime from 96 to 100% within a year, manage a diverse team of 20 network engineers, and foster a culture of continuous learning. His leadership philosophy emphasizes personal accountability, the value of each team member's contribution, and the importance of a collaborative team environment. I'm sorry for the mistakes. It's getting a bit late here. But we still have quite a number of people who'd love to hear your story, and we have 30,000 people watching online at the moment, from all over the world, all over Asia and China, everywhere. So a big round of applause for Jerry. Welcome. You didn't have to tell me how many people are watching online. That is nerve-wracking. Thank you. I also didn't expect him to read the entire biography, because I put some really funny stuff in there. So, 15 minutes. We're running super late. I know everybody wants to get to the event, and the people online probably want to go and have a snack. So I'm going to jump right into it. He gave me a good bio, so I'm not going to give much of an introduction. So yeah, let's move right into it. Open source software, in higher learning specifically. So there is a big timeline of computers and computer things. Back in 1939, Bell Labs created a first real calculating machine. In '44, the Colossus machine came around, which actually introduced vacuum tubes into computing. So you had two of the big primary things for computing, which are processing and storage.
The Colossus machine was also one of the machines that was being used for deciphering codes and such during World War II. That's pretty cool. But at that point it was mostly governmental environments and organizations that were doing all of the complicated stuff with computers. So 1948, the University of Manchester made one of the real big moves of computing from governmental organizations into higher learning organizations, to actually start pushing computing forward. That was 1948, when they made the Manchester Baby. That was the first machine that introduced RAM. So now at this point we have processor, we have RAM, we have storage. We've got the three things that actually make up a computer. Now, at this point computers were freaking huge, right? The entire size of a room. So Digital Equipment Corporation came through and created the first self-contained programmable data processor. They didn't want to call it a computer, because a computer was massive, and this was self-contained and trying to get smaller and smaller. That started in '57, and it went all the way through '90. The first couple of iterations broke some great ground, because they were starting to get self-contained, but they didn't actually sell very much. The PDP-3 only sold one unit, and it was to the CIA, which is kind of wild. However, when they moved into the PDP-6 through PDP-11, those started going primarily into the universities. And that's kind of where we get started with operating systems, and where they started to take hold in universities in particular. So at that point the PDPs were given to a bunch of universities, and universities were actually starting to work together a bit, and in universities, like most of you guys here, you would see these kids who were able to just hack up code and hack up stuff to make the thing work. And that's kind of where the term hacker came from.
And so you end up with these college kids who were actually a community of tinkerers trying to get this stuff to work. And the cool thing was, as a community like that, they were able to share their successes and share their failures with one another, commiserate with one another, and really share the things that they had, that they came up with, and that they were able to solve. So one of the awesome things at this point, because of the sharing and the collaboration that was going on: it was considered a professional courtesy that any time you put out some sort of instructions, some sort of operation, some system that would manage operations on a computer, you would offer the source code along with it. But that changed a little bit in 1980, when Richard Stallman decided to fight a printer. Richard Stallman was working at MIT at the time, and MIT was given a printer by Xerox, and this was the very first real printer, like the printers that we know. At the time, Xerox was making the photocopier: you put a piece of paper down and it makes a copy of it. With this one, you actually took data, sent it over the network to the printer, and the printer printed it out. And it was fantastic, except it had a few problems. There were some issues with the networking specifically. Lots of paper jams, of course, because they always had those, but there were some situations wherein the data would go from the machine over the network to the printer and get lost somewhere in the pipes. And the only way to address that was to actually work with the operating system that was maintaining the computer itself. Here's the problem: Xerox decided to send the code without paying attention to the courtesy of including the actual source code with it. They compiled it into binary, so all it was was bits and bytes, and Stallman, although he was a good programmer and a great engineer, didn't really do too well with bits and bytes and such.
So remember that community of folks, that community of hackers, I talked about. He decided to go to Harvard. Now, Harvard had two machines as well. They had a PDP-10, actually the kind of machine Bill Gates wrote the first version of BASIC on, and they got themselves a PDP-11, and the two weren't able to talk to each other. So after he figured that out, Stallman reached out and told them: I'm working on a machine that's got a network issue similar to what you're seeing with your PDPs; can I have your source code? And they gave it to him, because that's what hackers did at the time. That's what people in a community do when they're working with community-based operating systems: we share stuff. And so they shared it with him. He was able to update the code and fix the issues, and that's fantastic. He was able to go forth and do the same for others. Now, at the time, that was one of the first real situations where an operating system was being openly shared, where the source code was actually available to him. That's fantastic. Of course, operating systems went all crazy from there. Unix was made at Bell Labs; BSD came out of it, based on Unix, and turned into this big legal nightmare. DOS eventually came out of Seattle Computer Products. The GNU Project was started by Stallman, and then it kind of fell flat; it didn't really take off the way he wanted it to. Microsoft ended up purchasing DOS and then building Windows off of it. And at that point you had a machine with an operating system, but the operating system belonged to the people that sold it to you. You couldn't touch the operating system. You could change configurations, you could make it do stuff, but you didn't get the code. And tinkerers, hackers, people like us like to look at the code. We like to have our own systems, like the gentleman before me who was working on his system: he wanted to take a piece of machinery and make it work for himself.
He needed access to the code. And so, in 1991, Linus Torvalds out of Helsinki decided to write an awesome operating system that we know as Linux and just give it to everybody. Anybody who wanted it could have access to the source code of something that could actually drive a computer like the ones we have right now. And that's fantastic. Linux just blew up from that point, because it was free, it was open, it was available. So now we've got Linux on all of the top 500 supercomputers, and on three of the top five websites in the world right now. Actually, these figures are from 2023, so they might be a little dated. Something like 96% of the world's top million web servers and 90% of all cloud services are running on Linux right now. It's free and open-source software that everybody uses, and people are making lots and lots of money off of this stuff, and that's fine, that's fine. But the base of it is a free and open-source piece of software that started out in higher learning and research. So, here's the weird thing when we talk about research specifically. Before I went into the research area, before I became an engineer — right now I work at a linear particle accelerator in East Lansing, Michigan, in the US — I worked in web hosting, and like my bio said, I was a chef before that. This is the first time I've ever worked in a place where we don't create a product, and that blew my mind when I first started, when I first considered it: how do you pay for stuff? How do you create things? What's the drive behind that? The reality is that because we don't make a product, it fundamentally changes the way you look at the work you're doing. You don't get funding quarterly based on how many things you sell.
Most of the time funding comes in the form of grants, where you try to get as much work as you can from people who are just really interested in the science. And the crazy thing about grants, at least in the US where I'm based, is that they usually come from the government, and the US government can change over every four years. So you can get a grant that's great and it's moving and gives you all this money, and then in four years it's just gone. So what do you do when the money can change out of nowhere? You end up employing people who are genuinely seeking out knowledge and looking to push forward the boundaries of society as we understand it. That's what science does. Technology changes quickly: every six months you get a new version of something, and after three years you're obsolete. After three years, most folks working in technology are looking for a new job. But science doesn't change like that. The numbers don't change; only the technology moves quickly. You plan, you plan, you test, you test, and you keep going from there. So let's get back to Linux. At CERN, at FRIB, at Fermilab, at NERSC, they're all using Linux to drive so much of their great research that is, like I said, changing the world as we see it today. Why? Well, like I said, that first part about the funding is a big thing. Linux is free, so you don't have to worry about licensing costs whenever the funding just changes randomly because we get a new administration. And not just new administrations: you're college kids. Well, I'm not anymore, I'm an old man at this point, but you guys are college kids right now. You're not going to spend money on the stuff you're studying; you'd rather spend it on gaming, right? I'll tell you, I spent plenty on my gaming stuff, especially when I was your age, that and going out and having a good time every so often and a few drinks.
Anyways, I didn't say that. The other cool thing is that it's customizable, super granularly, all the way down to the hardware itself. You can change the actual way the bits and bytes interact with the specific hardware as it goes. And along with that, built into the way it was actually created way back in the day, is this community that comes with it. All these folks: we're a community of tinkerers who just like to make things work and like to share what we've got, share our successes. And that means Linux really moves forward, really helps move this stuff forward. So, we're looking at science specifically. What's the big deal with science? I had to learn the hard way, honestly, that science isn't science unless it's reproducible. You can't just make one quick discovery and decide that you've changed the whole world or come up with a new paradigm. Science isn't science unless you can make those discoveries, do the testing, and then share it with the rest of the people who have been working on the same problems. And it's not important unless we're actually learning something and doing something new. Also, science moves incredibly, incredibly slowly, like I was saying earlier. And we don't create things; we break ground. So, with that in mind, with the idea of where science is right now, we're going to move into what we consider the future. And any time I talk about the future, somebody wants to ask me: how do we use AI with this? The future of research isn't AI, and I'm very sorry I have to say that. I can't tell you when the Matrix is going to happen, and I can't tell you when the Terminators are going to attack, because that's not how science works. Science moves much more slowly than the technology we have.
So, when we're looking at research, the operating systems that research is going to favor as we move into the future are going to take those fundamental ideas I gave you about science early on and build off of them. The first thing is that when you're breaking ground in research, you're trying to figure out how things work, and fundamentally, as our technology has grown, the scale at which things work just keeps getting smaller. My job specifically involves subatomic particles: little neutrons shooting into the nucleus of an atom. It's tiny, tiny, tiny. We're talking picoseconds, and a picosecond is a trillionth of a second, a millionth of a microsecond. It's crazy the precision at which they're measuring this. So the operating system not only has to be smaller, it has to be lighter weight, and it has to move much more quickly. Because you're capturing things like the Higgs boson, which was recently confirmed experimentally at CERN. It had already been predicted by the math, but we actually found it, and it exists for something like a millionth of a picosecond, a tiny, tiny amount of time. So the operating system has to be able to keep up with those sorts of discoveries as well. On top of that, our world is getting smaller, and that's good: we can share this stuff with the rest of the world and the rest of humanity and help push us forward. However, that also means there's a chance that bad actors can come around. So operating systems also need to maintain some level of security, and some forward thinking about how we make sure that bad actors don't misinterpret or take the research we've been doing and use it in a nefarious manner. And the last thing: because science moves so slowly, we're no longer looking at three- and five-year support terms. The longer the support term, the better.
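Since the talk leans on these time scales, here is a quick sanity check of how the standard SI units relate (my own illustration, not something from the speaker's slides):

```python
# Standard SI time units, expressed in seconds.
SECOND = 1.0
MILLISECOND = 1e-3 * SECOND
MICROSECOND = 1e-6 * SECOND
PICOSECOND = 1e-12 * SECOND

# A picosecond is a trillionth of a second...
print(round(SECOND / PICOSECOND))       # picoseconds per second
# ...equivalently, a millionth of a microsecond...
print(round(MICROSECOND / PICOSECOND))  # picoseconds per microsecond
# ...and a billionth (not a millionth) of a millisecond.
print(round(MILLISECOND / PICOSECOND))  # picoseconds per millisecond
```

The `round()` calls just absorb floating-point noise in the ratios; the point is simply that "a millionth of a millisecond" would be a nanosecond, three orders of magnitude longer than a picosecond.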
I got the great opportunity to go and visit CERN, and they're working right now on a project, breaking ground, doing planning, and finding funding, that doesn't actually start until 2045. We're talking 20 years in the future. So if we're starting to look at the operating systems that are going to manage that project, we have to look at the longevity of the support that's going to come with them. So, that's what I had today: a quick 15-minute introduction to where we are with operating systems and science. We have a whole lot of ground we can break. We're learning stuff so quickly, and we've got a whole bunch of fresh minds, you guys specifically, out there willing and able and looking forward to doing something awesome. You'll find, I think, that using this sort of open-source software is going to be one of the fundamental backbones of how we actually push ourselves forward. So, I hope that was alright. Thank you guys for the lovely time. Thank you very much. Yeah, I'm quite impressed how you got the audience really clapping so loudly and excited. But, yeah. Are there any final questions in the room? Okay, I have a question here. Do you use only Linux for this open-source science work? No, there are lots of different pieces of software involved in it. There's lots of Windows, lots of Linux, lots of Mac, a whole bunch of different flavors. You find that the pieces of hardware that require more customizability do turn to Linux, however, mostly because you can change things and cut things out as you need to. Have you applied what you explained to us in any science project or any of your work? Oh yes, absolutely. Most recently at FRIB, there was a science experiment which mapped the neutron drip line for sodium isotopes.
And what happens, I'm going to try to dumb this down as quickly as I can, is that a nucleus can only hold so many neutrons, and you're trying to add more neutrons to it. In the process we've got these two big beams moving together, converging right in the middle, and they splash against the backstop, and the hardware that captures the result is running Debian right now. Is it electrons? Electrons and neutrons? Yes, thank you very much. Thank you. So, thank you very much. That concludes the session and the track. We're back here tomorrow. Yeah. Jerry, people love you. Great, great. Thank you very much for giving this energy back to the stage the whole day. You've been a fantastic audience, and we can't wait to see you again tomorrow here at the FOSS Asia Summit 2024. Goodbye.