So, everything big happened. Before everything big happens, something always happens first, right? Every time, it's always like this. So I'm very happy to come here to give a speech about openEuler. The topic is: openEuler brings new opportunities to the diversified computing era. I'm Yixing, a member of the technical committee of the openEuler community. Okay, so the first question: when we talk about openEuler, maybe somebody says, okay, is it just another boring OS distribution? The answer is no. So let's clarify what problem we are trying to solve. Personally, I think there are three tough challenges for the OS industry. The first one is that the fast development of chipsets brings tough challenges for OS development. Chipsets have been booming in the recent three or four years, and we can forecast that more and more new types of chipsets will appear in the future. The second one is that the OS needs more aggressive innovation to become lighter and faster. Personally, I think that nowadays the OS, especially the Linux system, becomes bigger and bigger and heavier and heavier. Our idea is to reduce its size, to make it lighter and faster. And the last problem, I think, is the gap between the server and cloud side and the embedded side: the Linux world is divided into server systems and embedded systems, two worlds that don't connect well. So that's another kind of challenge. So what's openEuler's way to solve this? I think the one word is: aggressive. We will do some aggressive innovation and modification to resolve these kinds of challenges. I think Linux has been the most successful OS of the last several decades, spreading from the server and cloud to edge computing to embedded systems. It has been very successful in the industry. Everything looks fine. But is it perfect? Is it really so perfect? I think that's still a question for us. So the first one is the chipset challenge: how the OS can work with the chipset. If you look back at chipset development, maybe five or ten years ago chipsets evolved slowly; just picking up Intel's chipset was enough. But more and more chipsets have appeared in recent years. So suppose you are a chipset vendor. What kind of situation will you face? It's very easy to understand. You tape out the chipset and you push the features to the upstream kernel. That's not very easy; maybe you spend half a year to do that. And if the upstream kernel accepts the features, then the OS vendor will take the features down and build the OS. They also spend another half a year putting the OS in front of the customer, and it maybe takes another year to become mature. So it takes around one and a half to two years. But generally speaking, a chipset has a three-year life cycle. That means half of the chipset's life cycle is spent on this kind of job before the customer can access it. And consider that the kernel evolves very quickly: 4, 5, 6, and 4.1, 4.2, 5.1, 5.2. And there also exist a lot of OS vendors, each with a lot of distributions, right? So that is also a problem between the chipset vendor and the OS vendors. And in addition, consider more and more chipset types. If everything is x86, it's fine; it's not so complicated. But now more and more of the ARM architecture has come into the market, especially for the server and cloud; more and more companies will adopt ARM servers, right?
And in addition, a very shining star, RISC-V, appears. Especially in China, I guess there are maybe 50 companies, not big ones, small or medium companies, producing this kind of chipset, as I said. So when we consider all of these things, the OS and chipset combinations become a big mess. It's very complicated. So how can we handle this, at least to some degree? In openEuler we adopt this kind of way: we release a version two times per year, one in March and another in September. These releases we call the innovation releases. You can put immature and bleeding-edge features into those releases and give them a try; we provide an environment to do the verification job. And then every two years we release an LTS, a long-term support version, which can help companies: we make sure of the quality and we maintain the LTS for six years, to ensure companies adopt a mature product. So that is the way. If you want to try something new, to verify some chipset, you can go very early into the innovation release cycle, do the verification, and then go back to the LTS version. So we adopt a more aggressive release cycle to match the aggressive chipset development. Especially for the new architectures, ARM and RISC-V, we adopt a very aggressive strategy for taking in new features. You know, for a new feature, especially a big feature, it's not easy, and it takes a very long time, for upstream projects, for example the kernel or glibc, to accept the feature. But in openEuler we open the door to those kinds of features to promote the adoption of the new architectures. And then we have another very important property: we share the same code base. At this point we have released two LTS versions. The second LTS, 22.03, is a real milestone version: no matter which chipset you use, x86, ARM64, or RISC-V, they share the same code base. That means one code pool creates the three kinds of versions. So if you are a developer, you just use the API; they share the same API. If you are a device producer, your driver faces the same kernel version. So we can reduce the gap in adopting a new architecture. Okay, so that's general information on how we handle the chipset issue. But it looks not so aggressive, right? That's a regular way. Okay, let's move on. If we just consider general CPUs, as you can see, it is already very complicated. But something more complicated is coming, because we have more and more GPUs, DPUs, TPUs, or XPUs, a lot of kinds of PUs. So for those kinds of PUs, how do we handle them? They evolve very, very quickly. For example, for the DPU in China, I guess there are over 10 startup companies doing DPUs, each with a very different SDK. And also, for a very complicated OS, every week we can receive over 40 CVE issues, vulnerability issues; how do we handle them? And for that you have to update the kernel, update the OS in the cloud, even if you have millions of machines in the cloud. So the only way out is to make the kernel more flexible; it cannot stay fixed. If you consider the kernel, the kernel is the interface between the hardware and user space and the developer. This is the very traditional way we do it. But the problem is: why can't we handle both the chipset environment and so many new chipset types? The root cause is that the kernel is highly coupled and unchangeable.
Unchangeable doesn't mean you cannot adjust it. You can put some parameters into the kernel to do some adjustment, but it's very slight; it cannot change a lot of things. Generally speaking, when you change something in the kernel, you have to compile a new kernel and then deploy it again in the cloud. So that means changing anything means changing everything. That is the problem we are facing. So here we have a new idea: is it possible to make those key components of the kernel more modularized? For example, we know that nowadays the kernel has adopted the eBPF mechanism. eBPF is very good, but in the beginning it was only used in networking. Now it has expanded further, but most usage is still limited to networking. It's a very good mechanism, because you can write some policy in user space, compile it, and inject it into the kernel, and the kernel will run that policy (a minimal sketch of this pattern follows below). It's a perfect design: the policy is decoupled from the framework. The framework is a framework; the policy is a policy; the framework only loads and runs the policy. So we can borrow this idea: is it possible to modify or redesign the kernel to follow this way? That would mean the kernel only runs the framework, while we write different kinds of policies in user space, compile them, and inject them into the kernel, so the kernel stays flexible; it can change things. Another idea, on the left of the slide, concerns the drivers. In the system, especially in the kernel, we have a lot of kinds of drivers. Drivers are very complicated and not easy to migrate. But it's possible for us to make a new driver framework, meaning you write a driver once and it can run on different kernel versions. That is also the kind of idea. If we can finish these two steps, we can handle some of the difficulties we are facing. That is the framework we are building; we call it Kernel as a Service. For the first step, I suppose we can release the first prototype by the end of this year, so welcome to come to openEuler to see it this year. So, our idea is to make the server kernel more lightweight and streamlined. But an OS is not only the kernel; we have a lot of other things, for example the virtual machine level and the container level: the kernel, virtualization, and containers, three levels. With the same idea, we think Docker is too big, and we also think QEMU is too big; they pack everything together into the same image. So we developed two products. One is StratoVirt, which tries to replace the QEMU system; and we also developed iSulad, which likewise is meant to replace Docker. They are very light, maybe only 10% of the size of QEMU or Docker; for example, StratoVirt is only, I guess, four megabytes. It's very small and very fast, so it can be deployed from the cloud to the server to the edge, even to embedded. Embedded systems can also enjoy virtualization and containers. That is the idea. And for this kind of structural component of the OS, we also want to make some aggressive changes. We want to replace the number one process, PID 1; we all know that systemd is very big. So we developed a new project called sysMaster. It uses Rust to rewrite process one, to make it lighter and safer.
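To make the eBPF idea above concrete, here is a minimal sketch of the plain upstream Linux mechanism the talk refers to: user space assembles a tiny "policy" program, asks the kernel to verify and load it, and attaches it to a socket so the kernel runs it on every packet. This is not openEuler's Kernel-as-a-Service framework (its first prototype had not been released at the time of this talk); it is just the standard pattern that framework generalizes. Depending on your kernel's sysctl settings, loading may require root or CAP_BPF.

```c
/* Minimal eBPF sketch: compile a policy in user space, inject it into
 * the kernel. Assumes a modern Linux kernel. Build: cc -o policy policy.c */
#include <linux/bpf.h>      /* struct bpf_insn, union bpf_attr, BPF_* */
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef SO_ATTACH_BPF
#define SO_ATTACH_BPF 50    /* older libc headers may lack this */
#endif

int main(void)
{
    /* The "policy": two eBPF instructions meaning "return 0", i.e. drop
     * every packet. A real policy would inspect the packet before
     * returning a verdict; the framework/policy split stays the same. */
    struct bpf_insn policy[] = {
        { .code = BPF_ALU64 | BPF_MOV | BPF_K,
          .dst_reg = BPF_REG_0, .imm = 0 },        /* r0 = 0    */
        { .code = BPF_JMP | BPF_EXIT },            /* return r0 */
    };

    char log[4096];
    union bpf_attr attr;
    memset(&attr, 0, sizeof(attr));                /* unused fields must be 0 */
    attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
    attr.insns     = (unsigned long)policy;
    attr.insn_cnt  = sizeof(policy) / sizeof(policy[0]);
    attr.license   = (unsigned long)"GPL";
    attr.log_buf   = (unsigned long)log;
    attr.log_size  = sizeof(log);
    attr.log_level = 1;

    /* Inject the policy: the kernel verifies it, then loads it. */
    int prog_fd = (int)syscall(SYS_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
    if (prog_fd < 0) { perror("BPF_PROG_LOAD"); return 1; }

    /* Attach it: from now on the kernel runs our policy for every packet
     * on this socket, with no kernel rebuild or reboot. */
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0 || setsockopt(sock, SOL_SOCKET, SO_ATTACH_BPF,
                               &prog_fd, sizeof(prog_fd)) < 0) {
        perror("attach");
        return 1;
    }
    puts("policy loaded and attached; the kernel is now running it");
    close(sock);
    close(prog_fd);
    return 0;
}
```

The split the speaker praises is visible even in this toy: the kernel only provides the verifier and the execution framework, while the policy itself, the two instructions, is produced in user space and swapped in at runtime, with no recompile and no redeployment of the kernel.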
Coming back to sysMaster and Rust: if you are used to QEMU or systemd, you will find that they always have some CVE problems, are always affected by bugs. So there, too, we use the Rust language to write the programs. So we have done many things, and across all of these projects we want to make things lighter and maybe faster. But is it only for this? No, we have a bigger vision: a universal OS platform. Somebody will challenge that: universal is too big a word, right? Okay, let me explain what universal means. Actually, if you check our industry, you find that there is a very clear line between the server side and the embedded side. For the server side, we often use CentOS, SUSE, or Ubuntu. In the embedded world, you often use Wind River Linux or the Yocto system, right? But unfortunately, the two worlds are totally unlinked. They have different architectures; they share nothing; they don't have the same code base; and the way you use them is totally different. And when you build a system, you want to do CI/CD: on the server side we use Koji or OBS, but for the embedded system we always use Yocto. They're totally different. But everything should be connected. Not just should: the connection is already happening, from device to device, to edge computing, and on to the cloud. Everything in the system will connect together. So in theory, in the ideal world, everything should share the same base, and an application should run smoothly from the cloud to the edge to the embedded device. And if I'm a chipset vendor, I just need to work with one group, and they can make the embedded, edge, and cloud or server sides share the same thing. I don't need to work with this vendor, that vendor, this OS, or that OS; that's very tedious and consumes a lot of effort and money. So is it possible to do that? Okay, let's consider: what is an OS? No matter whether it's embedded or edge or server or cloud, an OS is actually a collection of packages, of components. No matter whether you use it as embedded or server, you share the kernel; the kernel is one of the packages. So we came to an idea: can we make the OS truly componentized? Then, if we want to build some OS, we can use some language or some config file, for example a YAML file, which is very easy to understand and very easy to write, and just give the system a YAML file describing what kind of OS I want; the composer will then produce the OS I want (a sketch follows below). In China we call this a mortise-and-tenon structure; it's the kind of structure used in traditional Chinese buildings. The components are similar, but you can use those similar components to form a very huge building and also a very small building; internally it's the same idea. So we are building a system called EulerMaker, though of course it's still under development. In the middle of this year we can release maybe the alpha version, or call it a 1.0 version. It can meet the requirement that we have a very big component pool sharing the same code base; no matter whether it's x86 or ARM64 or RISC-V, they share the same one. And then we provide the EulerMaker composer system: you give your description, your requirements for the OS, to EulerMaker, and EulerMaker produces this OS for you. So openEuler is not an OS. It's not an OS; it's an OS platform which builds different OSes. That is, I think, the more accurate definition of openEuler.
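As a flavor of what "give the system a YAML file describing the OS I want" could look like, here is a hypothetical sketch. The schema is my own illustration, not EulerMaker's actual input format; the point is only that one declarative file selects the kernel, the init system, and the package set, and the composer assembles a matching image out of the shared, multi-architecture package pool.

```yaml
# Hypothetical OS description for an EulerMaker-style composer.
# Field names are illustrative, not the real EulerMaker schema.
name: edge-gateway-os
arch: aarch64              # same code base also builds x86_64 / riscv64
base: openEuler-22.03-LTS  # the shared LTS package pool
kernel:
  version: "5.10"
  modules: [netfilter, overlayfs]
init: sysmaster            # lightweight Rust PID 1 instead of systemd
packages:
  - iSulad                 # lightweight container engine
  - gazelle                # user-space network stack
image:
  format: raw
  size: 512MiB             # small enough for an embedded target
```

Under the one-code-base promise described above, swapping `arch: aarch64` for `x86_64` or `riscv64` would, in this model, yield the same OS for the other architectures from the same component pool.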
Okay, so let me do a quick summary. We optimized the OS components to be lightweight and modular, because we want those components to be deployable in the cloud and on devices; so we need to make every component smaller. And we can also reorganize those components into different kinds of OSes, to meet different scenarios. So that is the idea. Okay, when I talk about replacing a lot of things, it doesn't mean we abandon the old ones. Docker and QEMU are still in our package pool, so you can still use them to compose a system; we just provide a second choice. Okay, I'm coming to the end of the speech. Beyond that, we have developed a lot of new things; you can come to the website to see them. For example, A-Tune, an AI-based performance tuning tool: you can do performance tuning with AI. We have a tool for kernel hot replacement, which can enhance cloud ops technology. We have BiSheng JDK, which is optimized for ARM64 and RISC-V. And we have NFS+: we reorganized the NFS protocol. NFS is very popular, right? But it's not fast, it's not robust, and it's not stable, so we redesigned this protocol for up to six times the speed and more robustness. And we have QMS, Gazelle, and AIOps, which uses AI technology to do DevOps. Until now, we have over 300 new projects, so you can go to our website to find those components. Of course, you can also use the components on other platforms, for example Red Hat or Ubuntu. Okay, here are the community members. It's very big; at least in China, it has a very, very big size. We have accumulated a lot of developers and a lot of organizations and companies. Okay, finally, we are at the end of the speech; it was a very quick introduction to openEuler. Here is the website. Of course, some of the material is still written in Chinese, but if you find something unclear, you can report an issue to help us improve the project. Besides the website, we also have the project repos; we have a Gitee repo. And here are our channels: you can take a picture and we can communicate in those channels. Okay, everything is new, so we are trying to bring some new things to the old OS industry. Welcome to join openEuler and enjoy the new world of openEuler. Okay, thank you. Thank you very much, Wei Xiong. Unfortunately, we don't have time for questions because we're running out of time. But you can find them in the booth area, right? Yeah, they have a very nice booth down there. So please, if you have any questions, come by and ask them later. I will stay tomorrow. I have to leave now because I have another meeting, but tomorrow and the day after tomorrow, I will be here. Okay, nice to meet you. All right, so you can find him tomorrow at the booth? Yeah, all right. Thank you very much. And next we have our panel. I think Hong Phuc is going to come over, so we're going to set up the chairs for the panel. I'm going to be facilitating the conversation today. Today with me, I have four wonderful speakers who come from very different backgrounds. They're going to share their experience and their insights on the topic. So before we begin, I would like to do a very quick introduction of all the panelists. On the left hand side right here, I have Dr. Lim Tit Meng.
He is the president of the Singapore Association for the Advancement of Science, the president of the Singapore National Academy of Science, a member of the Singapore Bioethics Advisory Committee, a former board member of the Association of Science and Technology Centers, and the president of the Asia Pacific science centre network. Welcome, Dr. Tit Meng. This is not the first time you are here; you are a regular speaker at FOSSASIA and you supported us in the past as well through the Science Centre. Thank you very much for being here. Thank you for inviting me. Welcome. And next I have Xin Hu Cheng from Microsoft. She is heading the Azure business for SAP, Digital and App Innovation, and Azure Hybrid for Microsoft's Asia-Pacific region. She has been with Microsoft for eight years. She is responsible for defining the GTM strategy to deliver customer and partner success and for the revenue growth of the businesses. Xin Hu, once again, welcome to the FOSSASIA Summit. Four years. Four years. Yeah, but a few long days like this. Yes. And next I have here Kiwi Tang. He is not only a friend of mine, but also one of the previous organizers of the FOSSASIA Summit. Kiwi is at the moment the CEO of a local hardware production company called Lions Forge; this company actually produces laser cutters. So if you're not aware, when people talk about Singapore, we are very famous for shopping malls, a lot of services industry, but we actually produce something here locally, and Kiwi will talk more about it later. Before starting his company, he was actually a fighter pilot for the Singapore Air Force for many years. And as mentioned, Kiwi used to be in the organization of the FOSSASIA Summit previously. Welcome Kiwi. Welcome back. Thank you. Thank you. Last but not least, very excited to meet Cheryl Tukub. This is your first time here at the summit, Cheryl? For the summit itself, yeah? Yes. So Cheryl is the regional director of marketing for Asia Pacific at Grafana Labs. She recently established a marketing team for Grafana. Originally from the Philippines, Cheryl has been here for over 16 years now, with over 20 years of experience in the IT industry. Very happy to have you here. Thank you for joining us. Thank you for having me. Okay, so let's go into our first question. I have a few questions prepared for the panelists before we open up to the audience. My first question is actually: I only did a very brief introduction, so I would like you to share with the audience what inspired you to pursue a career in science and technology, and how did you get started? I'll start. Would you like to start? Hi, everyone. Good afternoon. I hope everyone's having a good time. Yeah? Yeah. Sorry, I'm in marketing. I need to do it. So I started in tech really oddly. One of the things that I put in my bio is that I actually graduated with a finance degree, majoring in banking. So I should be in a bank right now, right? I graduated in 2000, and anyone here who was graduating in 2000, don't be shy, it's okay. That was the dot-com boom, right? Everyone was talking about tech, the dot-coms and whatever, and I was thinking: I want to be there, right? So instead of going to a bank, sitting there talking to your clients and getting them to open an account, I decided to go into tech. That's basically it. Yeah, and never looked back. Yeah. Well, since you passed the mic to me, I was hoping that you would go that way. You talk about graduating in year 2000. Who here was born in 1960? Okay. Right.
Hong Phuc, you know that I'm not a so-called IT person. I'm a biologist, and I have always loved science and nature since I was a little kid. I can claim the status of having done the first honours thesis using computer simulation. I was always very curious, so I'm a learner by nature. I'm still learning; that's why I come here to learn whenever I can. I remember when I was doing my honours thesis, I was introduced to a programming language called BASIC. I mean, it's way ancient to many of you, and it was so basic that it's no longer in existence now. But I learned how to do programming. And I think that was the first such thesis ever in the NUS department. Without touching a single animal, I did a project simulating the sampling of animals in the field and using that to calculate so-called indices, meaning you can compare this community and that community: what was the difference? And at the time when I presented it, the professors all didn't understand what I was talking about; I heard a professor saying, ah, let's give him the benefit of the doubt and give him a good grade. But my professor was telling me, let's try and see whether the field out there believed in what we came up with. I was very happy that the paper was accepted in a journal. And now that I'm in NUS, where they all talk about the ranking of a journal, at that time it was considered a top-class journal. So that was my first foray into using IT, and that's the reason why I believe in tech. And when you came to the Science Centre to ask whether we could support you for FOSSASIA, I saw that it was a very meaningful proposition, and I am very happy that you have grown to this stage now. Every year you are back, every year you become stronger and bigger. So thank you very much. So that's my journey: out of curiosity I continue to stay in the science and tech environment. I'm a curious person, yeah. And then, Xin Hu, before we move on to the next question, would you like to go next? Sure. So when I was thinking about my options in 2000, which is when I graduated as well, tech is where the money and the opportunities were. And so many years later, that's still where the money is. So fast forward all this time, I've still been in tech. I started off my first job at Dell, because my undergrad degree was in computer hardware. I didn't want to do what everybody else was doing, which was software. So I said, okay, I'm going to do something different, and I went the hardware route. But nevertheless, I did a part-time software course. So I started my first job with Dell in hardware, and then I moved on to my MBA a year later, because I always thought I wanted to get into the business side of things. And guess what? Even after my MBA, on day zero, when all of the companies were coming to campus for hiring, the tech companies were the ones taking the day zero slots. Not the consulting companies, not the finance companies, but the tech companies were taking the day zero slots. And that's how I landed my second job in another tech company, but this time in services. So I landed in the IT services industry, with one of the global players in that space. And I've never looked back again. I've been moving continents, I've moved countries, but I have continued staying in the tech industry. That's where it is. So you and Kiwi have something in common; you're both into hardware. How about you, Kiwi? Okay, so I think the story goes back to about 2015.
Back then I was still a major flying the F-16 in the Air Force. So what happened back then in 2015 is that the government was promoting innovation and entrepreneurship; I think that's when the Smart Nation initiative was rolled out. At the back of my mind, I totally believed in this movement, at least for Singapore, coming from a very patriotic kind of place. But one thing that I had seen is that the manufacturing ecosystem in Singapore was more or less reaching the tail end of its decline. So I imagined: how can an average Singaporean, using open source or whatever resources they have, really be enterprising and create a product that's made in Singapore? That took me on a journey from 2015 to 2017 where I did a lot of open source projects. In fact, that was the only resource that I could find. And that is how I actually met Hong Phuc at one of the Science Hack Days; that's how I joined FOSSASIA. Then in 2017, as I was mentoring Singapore students, telling them, okay, you need to innovate, you need to be enterprising, I realized: I'm holding a secure job while telling people to take risks. That doesn't make sense. So I needed to lead by example. So I quit my job and decided to start a company called Lions Forge. And then, same thing: I have no engineering background, but purely using whatever I could learn from the community, I developed a product that can compete on the global stage. And those lessons I now pass down to all the young folks here who are willing to listen to me. And you took part in a lot of Maker Faires; that's how we got to know one another. Again, we're happy to see you growing from strength to strength. I walked downstairs and saw your booth; indeed, Lions Forge is doing very well. Thank you. Thank you. Feel free to come down, see the booth, and then see what we can do with open source. Right? Yeah. So it's really inspiring to hear these stories: some of us didn't start out in a tech major, but we all ended up here in tech. And that brings me to the main question that I have here today, which is related to open source: how important is having experience in open source projects for getting hired in the tech industry? We are at an open source conference. What is your opinion on this? Kiwi, do you want to talk? You mentioned already that you want to show the potential of open source to younger students in Singapore. The context is slightly different, because here you're talking about how to get a job, right? For me, it's more about how to be an entrepreneur, even if you do not have any kind of background. The wealth of knowledge and resources in the open source community, or I would say the open source universe, has grown from strength to strength. It is amazing. I couldn't imagine, let's say back in the 1990s when I started in Poly, doing the exact same thing that I do now; it would have been impossible, not without open source. In a way, as you have seen if you've listened to a few of the lectures, the technology refresh rate is so fast and so rapid that, trust me, no education system can support that. If you want to keep yourself up to speed, you need to be part of the open source community to keep up. I'd like to add on to that. What Kiwi said is that open source, by its definition, is open for access; it is democratising the tools, so to speak.
And in the science centre field, we also look into open source. And I also encourage our people, especially young ones, to get into that, because through open source you get the opportunity to co-create many things. In the science centre, there are many things that can be co-created because of open source, including even making a very immersive kind of omni-theatre show. Because now, with a lot of data coming in, you can actually pull data from NASA, you can pull data from Google Earth, and using open source you can stitch it together and narrate a program, and you can share it with the community who can create those. Recently we just created one that brings you from Earth all the way out to the edge of the universe and back to Earth again. And that is because of the open source platform. I can probably attest to that as well. Open source is about creating an open mindset. It's all about collaborating and making sure that you're sharing so that others can benefit: you build on the value of creation and let others use it. So today, if you think about it, even as a statistic, about 90% of enterprises are using open source in some way or another. So there's no running away from open source. Even large enterprises like Microsoft use as well as contribute a lot to GitHub; they contribute and reuse code from GitHub. So there's a lot of open source being used across the board. So as for hiring, I would think yes; it will definitely be a useful skill to have. Yeah. And to me, to reiterate everything that was said, the fact is that it's a lifestyle now. Kids, when they Google something, they only go for the free ones; that's technically open source. It's inevitable. And most big tech companies that are technically legacy, like the one that I came from, F5, are investing in open source, acquiring companies like NGINX, or IBM with Red Hat, right? It is a lifestyle, it is the way to go. So if you want to advance, you will inevitably get onto an open source journey anyway, to be able to actually advance your career. I think it's just the way it is. Yes. So Cheryl, since you have the mic, I want to continue with you. You mentioned that you were in finance and then later on moved into tech, right? We regularly receive questions from people in the community: how can I get into tech if I did not major in computer science or technology? And the next question that I have for you, and also perhaps Xin Hu: you mentioned that you got into tech because of the money, right? People here, and we have a very young audience also online, all want to get a job with corporations, well-known companies, right? And it's even more important in Asia; you know that we all need a job to support our families, and there are a lot of challenges these days in society. So the question is: what are the most important qualities employers look for in candidates applying for tech positions? For instance, you can give an example of such a quality from Grafana or Microsoft. Well, so I'm in marketing. I recently just hired someone in India, my first marketing manager there; I have one in Australia too. And for Grafana, because we are a start-up in APAC, we only started a year and a half ago, right? So I don't have the luxury of the big tech companies, or getting an agency to do everything. No, we have to do it ourselves. I'm a director, but I also took the pull-up banner from Mario yesterday.
I was the one pulling up the banner. I was carrying luggage, right? And to me, what I look for is a person's resilience. A person has to be able to do the strategy part, also the tactical part, and the hands-on part. That is at least from a start-up perspective, because that's my forte, at least from my perspective, right? So yeah, I think that's really what's important. And that's not just for tech; I think that's where the career opportunities are anywhere in the world right now. Yeah. So what you're saying is, people need to say yes to doing things. Yeah, you have to be willing to learn new things and to be able to actually do them. Like, not just talk the talk, but also walk the walk, right? Yeah. Xin Hu, how about Microsoft? I'll sum it up in three Cs. Okay, so I'll give you three Cs that you can easily remember. And these are, again, like Cheryl said, skills that you would take across industries; tech can be applicable across industries as well, so they're relevant to all of these. So one is strong communication skills. That's something that any employer would look for. Make sure that you're able to communicate your ideas, that you're able to influence for impact. You need to make sure that you're getting the message across, whether in the form of writing or in the form of speaking; you have to get your ideas across. The second is collaboration. You have to collaborate. No one is an island; none of us can exist alone. Everything involves a lot of teamwork. Whether it's creating code or building an application, or whether it's marketing, we end up doing a lot of teamwork. So collaboration is your second one. The third would be curiosity, which means: be a lifelong learner, and continue to keep learning. I think there's no limit to our learning, no matter what job it is. And sometimes you get disrupted; think about how COVID disrupted everybody, a lot of industries. There were people who went right from being in the travel industry to learning DevOps and moving into a DevOps role. So you have to be open to learning and being a lifelong learner. And there's no better place to start, because we're at the Lifelong Learning Institute. So those are the three Cs. Thank you. Yeah, so talking about lifelong learning: Dr. Tit Meng, you have a lot of experience in education. As we mentioned earlier, whatever we learn in school or university might not be relevant anymore by the time we graduate. So what do you think? What should people do to keep up with the rapid changes in technology and stay relevant for the job market? I think you have to recognize that the world is changing so fast that you cannot stop learning. So first things first, get it right: get it into your genes, into your DNA, whatever you call it, lifelong learning. This is something that you cannot afford not to invest your time in. And of course, be curious. If you're interested, you can find things out. And coming to a platform like FOSSASIA is good, because there are a lot of new ideas. Just now, sitting in and listening to the previous speaker, when he talked about why two systems cannot come together, and how, if you change one thing, you've got to change everything: these are the kinds of things that stimulate you to think, yeah, how can I learn new things to adapt and apply?
So lifelong learning, and always looking at applying it, not necessarily to solve problems but sometimes just for your fun, your enjoyment. Don't just always work, work, work; also enjoy, and then learning becomes a joyful process. So I would say: always invest in yourself and discover the joy of learning. I think these are important things for us to move on. That's the reason why I always enjoy being the CEO of the Science Centre. That's the place where we play with learning: we encourage people to play and learn, and we don't just instruct learners, we also inspire learners. Thank you. How about Kiwi, do you have anything to add from your own experience? You did not start as a hardware maker, but now you make machines? I think Dr Lim has mentioned most of it, like having the curiosity to learn. The only thing I can probably add, coming from industry, is that in this era of technology change, open source has become so established that most of the disruptive technology comes from open source. So rather than seeing technology as a threat to your current job or expertise, you should embrace it and use it. Since it's open source: use it, enhance yourself, and then compete again. Have that kind of self-improvement mindset rather than a self-protective mindset of: I want to protect my job because AI is going to take over my job. Why don't you make use of AI to create more products and more jobs for yourself? Can you also give some advice for the younger generation on how to get started in tech as a career? Coming from the entrepreneur's standpoint: when you go hiring, you at least want someone who can be very adaptive, innovative, and willing to really look out for problems to solve, rather than someone who is just waiting around taking instructions. I think we are right now in a new era where things are just changing nonstop, whether it's geopolitics, whether it's climate change, and so on and so forth. The young generation find themselves needing to adapt to changes and to deal with a very uncertain world. Those who can do that will survive. Yes, thank you very much. So we've talked about how the young generation can get into tech, but I also want to think about the people who are already in tech. What helps them advance in their career? What are the opportunities? And of course, I know there are many developers here in the room. How can you create joy in your work? If you find joy in learning, find joy in the work that you do, you can advance much better, right? So the question is: what are the requirements for advancing your career in tech, and what keeps people motivated? Well, speaking from experience, you just have to love what you do, right? Like, I came from sales. I was there for eight years, right? And I got sick of having a quota. Sorry to any sales people here. I got sick of having a quota and was thinking: I want to try something else. I want to change my life, so I want to try something else. I tried marketing and never looked back. So you have to love what you do. And I stuck with startups because I like the hands-on part. Some people cannot do that; that's fine too. It is what it is, right? As long as you're comfortable with what you do, you're happy. And then career advancement will always follow, because when you're happy, you're productive, and you keep on learning because you want to learn. But if you're not happy, that's when you get stagnant.
And that's when you sort of lose productivity. Just a curious question: you don't have a quota in marketing, but what are the KPIs? What we do is help the sales people create the pipeline; we call it marketing-sourced pipeline. We do have a quota, but it's not detrimental, I guess. If you don't meet your quota, your boss will kill you. Obviously, everyone has KPIs. So for marketing, it's marketing-sourced pipeline: we have to help the sales people create the pipeline. And FOSSASIA, events like this, obviously helps to create adoption, and then ultimately some of these people will hopefully buy our product at some point. Just to pick up on the keyword there: she kept saying create, create, create. I think in the tech field, if you can create value for your company, for whatever agency you're working for, that is a very good indicator that you can progress more. And on the creating bit, one thing I like to encourage people to do is to create with a community. You do not create all by yourself, because once you come together, one plus one is greater than two. And therefore, again, to advise the young people: you must be able to work with a team of people as much as possible. Just this morning at Meta, I asked the people how they work. They actually form teams, and when they find that they need to solve a client's requirement, they bring in expertise to make up the team, and so on and so forth. So I think this co-creation to create value is how you can progress. Yeah, to create value. How about you, Xin Hu, do you have something to add here? Yeah, I would say: be a team player who demonstrates results. I know it sounds corporate, but when you get down to what it really means, it simply means show results, right? You have to show the impact of whatever you're doing. The results have to speak for themselves. And that's what eventually lets you grow, because people see what you're capable of, and that's what then gives you more opportunities to take up. Yes, I totally agree. And being a good team player doesn't only apply in corporations; I think in the open-source community we also need very good teamwork in order to succeed. Thank you very much. Kiwi, do you want to jump in? Okay, good. How are we doing with the time? I think that it's time now to open up the floor to the audience. Let's see if we have any questions from the audience. So, everyone is very successful now with their career; they all have very good jobs. Can I ask a question? You're more than welcome. So I'm going to go a bit towards the hiring part of finding a job. We see these big companies, especially the big ones, Microsoft, Google, Amazon maybe: they do these very focused data structures and algorithms interviews for software engineers, where you have to type or write on a whiteboard and all that. This has been very controversial in some tech communities. Are we assessing candidates correctly? Or are we just kicking them out because they couldn't figure out how many windows there are in New York? So given this, I think there's a very good option in looking at a candidate's profile if they're an open source contributor, because you can see what they do. I want to know what's your answer to that. And because not everyone has the luxury of being able to work on open source in their daily life, what would be, and that's the second question, your advice for the people that are not 100% in open source to somehow showcase their value through their work?
So I'd like to answer that, both questions. The primary thing that these companies are testing for is your problem-solving skills. Some tests, yes, may be more difficult, or the questions may be slightly harder, and they are constantly revising all of that, but the one main thing they test for is problem-solving: how are you able to break down a problem where you may not have all the answers? This happens all the time in your day-to-day work, where we do not have all of the answers, we don't have all the resources, but you still have to go solve. So they're trying to see what you would do if you lived within all of these constraints and how you would think and approach it. And if they're able to see that you have a clear way of approaching a problem, that's what they're looking for. And if you don't have a project, or if you're completely new to this, the one way people should go about this would be to just self-learn. These days it's a lot easier to, let's say, go after some online training, consume content, take up a certification; and even people who are not technical can go take up certifications if they have the interest or inclination. So the first step would be: yes, go consume some of that content, because then you're able to get to a point where you can meet some like-minded people and get a foot in the door in terms of doing projects, etc. The other thing, and I think there was a person called Annie this morning who talked about it, is something called Code Without Barriers. This is a program which we initially started through Microsoft, with a lot of partnering companies, to build women's diversity in the tech industry and to upskill them. The idea was that all of these companies came together to say: how do we upskill women and give them some of that content? So in this program, Code Without Barriers, we give them a way to upskill, which is to consume some of that content and take up the certifications, and then, to apply it, we give them the opportunity to participate in hackathons. That way they're able to use some of these skills to apply to real-life problems and solve them. So we invite them to be part of hackathons, and then we also give them mentoring. My evening job, or my non-day job, is being the mentoring pillar lead for Code Without Barriers. So I look at pulling together mentors and mentees: mentors who've been in the tech industry, and mentees who are probably starting off or are career shifters as well; we try to get them together and see how the mentees can get more guidance in that direction. So getting a mentor, again, is a way to start if you don't know where to start, and that's probably the journey towards landing an internship or eventually landing a job. Thank you, Xin Hu, for being so insightful. Are you happy with the answer, Marco? Okay. Thank you. Unfortunately, that is all the time we have for today. I would like to once again thank our panelists very much for being here today, and thank you all for joining us. Enjoy the rest of your day at the summit. Thank you. Our next speaker is Richard "RichiH" Hartmann. Richie is the Technical Advisory Group Observability chair, a CNCF Technical Oversight Committee member, a governing board member, and more. And he's been organizing, leading, and helping run a bunch of conferences.
I can see here a lot of them: KubeCon, PromCon, FOSDEM, DENOG, DebConf, CCC, a bunch of cool ones. Yeah. All right. And he's going to talk about how Grafana Labs builds and sustains communities. The floor is yours, Richie. We're 10 minutes ahead of schedule, but I can also get started now; I'm completely okay with that. That's good. Okay. Thank you. Thank you very much. This deck is largely the same one we use in our internal onboarding and for our internal training on how to interact with communities. That's no mistake. We do believe in transparency and we do believe in sharing knowledge, so this is quite deliberately us externalizing things which we do and how we do them, not only talking about them: externalizing what we actually do and live internally. So just out of interest, because I always do this when I'm in somewhat adjacent communities: who knows what Grafana is? That's maybe 30%, which is great, because I always like having audiences where people don't really know it. So I probably don't have to ask you if you know what Grafana Labs is. Do you know what Prometheus is? Anyone? Kubernetes? If you're using Kubernetes and you're not using Prometheus, you might want to reconsider. I mean, they're literally made for each other, and they are two founding projects of CNCF. Anyway, I'm going to start with a few theoretical bits and then we go into the application part. So what is a community? A community is usually a group of people who have a certain commonality. This commonality might come in various shapes or forms: if you are into fishing or skateboarding or whatever, you have some reason why you feel the need to meet with other people of that certain group. Like, for example, open source. (By the way, in the source files of these slides you can click on things and download them; there are also source links.) When you come from German, and I am German, there are a few nice translations: Gemeinschaft and Gesellschaft. Gemeinschaft is community: it is based largely on personal roles and shared values and shared interactions, and a lot of this is intrinsically motivated. People can't force you to go to this one bicycling club; you go there because you want to do this out of your own desire. Contrary to a society, Gesellschaft, where a lot of interactions are not very direct: a lot of things, like sending a letter, are very indirect; you have a lot of impersonal roles and a lot of formalized values, and a lot is driven by external or extrinsic motivation. For example, you don't want to be fined; I mean, Singapore is a good example. Or other people will look down on you or something. So a lot of this is extrinsically motivated, not intrinsic. There's another translation of Gesellschaft, another way people come together, and that's the corporation, which is very much based on formalized roles, and it always has an inherent structure. The others also have inherent structures, but not as raw, as easily seen, as in corporations. And part of why you are at a company, if you're here for work, is literally called compensation: you are being paid for using your time, for not being, I don't know, at the beach with your family or whatever, to do the thing you're doing at work. So this is very, very much extrinsic motivation. And yes, any healthy company has a healthy internal culture. So what is a community not? And again, this is what all people joining Grafana Labs, including marketing and including sales, are taught. We take this seriously. Communities are not a sales channel.
They're not a marketing target group. They do tend to reject outside commercial interest quite strongly, as most people in this room will probably attest to. And communication with and within communities often has its own rules. It's not that you can just go from one to the other and everything is the same. Over time they develop certain patterns of communication, certain memes, certain in-jokes, certain ways of how to interact with each other, and it's important to take this into account and honor this kind of thing, because it's really, really easy to get this kind of thing wrong. So why do we care? And again, this is from the internal perspective. A large, arguably the largest, part of the success of Grafana Labs is built on this. If you don't know what Grafana and Grafana Labs are: Grafana is a visualization tool which visualizes data from basically all databases you can think of. People use this primarily for operations of IT, of cloud, of cloud data, microservices, whatever. But you can also use this in industry. I literally know someone who runs a port, and they have a weighing thing on the conveyor belt, and when they take on coal, from this they deduce how much moisture is in the coal, and they can stop the conveyor belt when it's too moist, because of course it would spoil the rest of the batch. So that's the kind of thing you can do with this, and Grafana Labs is the company which provides this as open source but also sells services based on it. So, and this again is something we teach people internally, and it also comes directly from the founders: Grafana Labs honestly tries to do well by the community. And the communities, as you can see, are repaying this with engagement, with attending conferences, with speaking positively about the thing. So there are also quite some egoistic motives in this from the company perspective, but the good thing is we found a way to align those incentives. We don't do open core or anything; it's true, actual open source. As a company which is actually built on open source, we do believe that this is a strategic requirement for the continued growth of the company. And if done right, and this is also part of being honest, for anyone who invests in community: I'm Director of Community, Grafana Labs pays me to do this. Yes, there is something we get back, and that is an integral part of the sales motion, if done right. Another really nice survey; they did one again, I think last year, but the results are basically the same: Stack Overflow asks how developers choose software. And if you look at the first three, starting a free trial, asking other developers, and visiting developer communities, those are things which are done really well within open source. Open source enables all three of those. Of course you can just use it yourself, and you can find people at conferences who are excited about it. So you can leverage this dynamic where software is eating the world, and as such developers are eating the corporations, or defining the corporations, and more and more power is given to developers. As a company, you can actually shape what people think of you way before they ever have a potential commercial conversation with you, and you do that by honestly making the open source absolutely stellar and supporting the open source users as if they were paying users. There is also a personal example of this. I have been in open source for 25 years by now, which is a long time. And there was something within Prometheus.
Again, Prometheus is a monitoring tool. It is a database for metrics, for monitoring or observability data. The data which Prometheus emits is precisely in the format which Prometheus ingests. So they are super, super tightly coupled: anyone who is using Prometheus in any way or form is using some part of Prometheus or Prometheus-compatible software. And in late 2015, early 2016, it was me, as a member of the Prometheus team, looking at Grafana and recommending within Prometheus: can we deprecate our own visualization solution, because Grafana is so much better? Is this something we would be willing to do? And yes, we were willing to do it, and this has had an outsized impact on the trajectory and on the growth of Grafana Labs, the company. That was years and years before I personally ever even thought about joining Grafana Labs; I didn't think I would ever do this. It was still a purely community perspective: I just trusted the people who did this. So what makes a healthy community? Well, again, every community forms around a rather common cause, for example ethics and such; in open source, we believe in carrying stuff forward. Communities tend to coexist and work together, and the cause tends to endure long term. So if you have someone who is KDE and someone who is GNOME, or vi and Emacs or whatever, Gentoo and Debian: yes, they might interact, and you might even have people who are part of more than one community, but those communities tend to really carry whatever their cause is forward, long term. They tend to not merge. Even with Vim and Neovim you see that there are different groups: the ones who really use vi or Vim, and the others who really use Neovim. There's not huge overlap; some people migrate, but the actual communities, the mailing lists and everything, are relatively static. You have to just accept this and, both as a company and as a community manager or as someone who works with communities, honor this kind of thing, and not just enter something and try to shove everyone in a different direction. On respect and on trust: humans are herd animals; we are optimized for social interaction. Being an introvert myself: some of us are optimized for social interaction to varying degrees. But as a species, one of the reasons why we have survived and thrived is of course that we have social interaction, and as such we can do more than the individual can do. And this means that a lot of the things which you do while interacting with other humans, or with any other system made of humans, is built deep into your psyche, coming from your DNA, coming from thousands and thousands of years of people dying or not dying because they starved, because they didn't work together with a group. So a lot of this is automatic, in the background. If you don't feel respected in a certain group, you will not want to interact. And if you can't trust your environment, you will either just leave, or you will have significant overhead in your interactions with this community because you just don't feel safe, which means you might retract from that community, or just not be as open, because you are always thinking about protecting yourself and not about actually engaging within the community. On the flip side, if you are accepted as you are and if everything is positive, then yes, social interactions will actually feel positive and energize you; if not, they will feel draining. So yes, we need to safeguard communities.
That's why we have codes of conduct, that's why we have diversity drives, that's why FOSSASIA and FOSDEM and others are working so much on those kinds of things. And there are a few caveats. As a species, we really are hardwired for fight or flight, and it is fully automatic, in the background; you can influence this to some extent, but you cannot fully do away with it. So automatically, when you are exposed to negatives, all of this starts running in your head. If you don't feel secure, you will act more insecurely, you will act more defensively, you will also act more aggressively, and others also fall into this pattern. So if you come to this inflection point of a vicious cycle where things just devolve, it's really, really easy to basically lose whole communities and just have them go down the drain, because a few people were initially maybe not very nice. Which is why, for example with my FOSDEM head on, or many, many moons ago with my freenode head on, we were extremely quick and vigilant about stopping things early, because once they devolve to a certain point, it's almost impossible to get things back. And there's also the iron law of institutions: anyone who is in a position of power within any organization is much more likely to defend their position and see this organization wither and die than to just give up whatever position of power they have and be like, okay, it's someone else's time. By extension, communities tend to fade away over time. I mentioned a few Linux distributions; some of them change, maybe, but they are very, very unlikely to actually change. So it's much more the case that communities simply go away and fade than that they really change trajectory or change what they are about. And speaking from a company level, this is really important to take into account, because if you have, I don't know, a big migration or something, doing it in a consistent and respectful manner which actually pulls people with you is exceedingly important to your success. So yeah, safeguarding: as we are social animals, we are really good at detecting situations which just don't feel right, and all of us will have had those situations where we just don't feel safe, where it just feels weird. Children are really good at externalizing this; then we are taught to not externalize as much, because we fight less, but it also leads to people being less honest. But the thing is, communities really depend on open and honest and transparent communication, like, for example, literally putting internal slides out into the open as a way of saying: hey, this is how we think about it. And you can really only safeguard communities from within. It's literally impossible to just swoop in from the outside and be like, okay, I'm going to fix whatever. No, you won't. But if you are a member within the community, and you have good standing and some position of power or influence or whatever, you can. You can't put outsiders in a code of conduct enforcement or safeguarding situation, because they will just be rejected. If it's really bad, yes, get the police and everything, but then we are not in community, then we are in society; or get your boss's boss, but then we are in corporation. Within community, you can only do it from within. For anyone who interacts with a community, so 100% of the attendees here, I highly recommend creating culture canaries for yourself. You will find certain things about a culture or about social interactions which you like and which you do not like, and you deliberately think about them and you deliberately write them down, and every X amount of time, six months to years, whatever, you actually go through them and check yourself:
yourself: am I still in the same community? Did certain things go in a direction I don't want? Can I change it? Should I pull back? Can I ignore it because my own outlook on things changed? Having this, and doing this on a somewhat regular basis, is really, really powerful. And the best examples are usually the ones where things go wrong: someone does something bad, says something bad, whatever. How is enforcement being handled? Is it a super painful process for everyone, or is it as okay as it can be made, with a public and transparent summary at the end, so that everyone moves on and is able to move on, except perhaps the people who were kicked out? Things like these tell you much more about how people, how communities, actually behave and interact than some polished words on a mission statement. The summary is basically: to avoid drifting in your own definition of what you accept within community interaction, write it down, keep yourself honest about it, and never, ever talk about it. If I tell you what my culture canaries are, for example around interactions while giving talks, they become more or less useless, because people can start optimizing for them if they want to push me in a certain direction. So yes, you can suggest to others that they create their own, but don't just tell them yours, because as Goodhart said, when a measure becomes a target, it ceases to be a good measure. So, formalizing all of this: most communities have at least informal roles; that's the person who always shows up and does a certain thing, over years and years. We have this at FOSDEM, and I'm certain we have it at FOSSASIA, where people don't even have real formalized roles; they just show up, do the thing, go away, and keep doing it. There are also, obviously, formalized roles, in particular as you grow, as you for example need a legal entity once you start handling money and such. You need some structure, and you need people within the structure who have formalized roles, because that's how you interact outside of the community; but communities always keep the informal ones too. Visible and transparent structures: how can I actually interact with this community if I have a complaint, or if I want to help with, I don't know, shepherding speakers or whatever? How can I get started? Which, by extension, means writing good documentation about the communities you care about, about the communities you run, to enable people to actually come and start helping out if they so choose. Some structures are a signal as much as a tool; a code of conduct, for example. Because yes, the best code of conduct is well written and everything, but the best code of conduct would, in theory, be the one which never, ever needs to be enforced, because everyone already acts in a positive way; and that's not a function of the code of conduct being written well, that would be a function of that community being perfect, to some extent. So it is both. It is a signal, in that by having a code of conduct you show that you take things seriously, and that you formalize, write down, and externalize how you think about things, how you will treat things if certain things happen, what you find acceptable and what you don't, what you list and what you don't list. But it is also the tool for actual enforcement. And obviously, as the person who always gets pulled in when we have anything code-of-conduct related, which is thankfully very, very
little: the point about enforcement is quick, early, transparent. If you drag it out, oh, we're going to form a committee, and we're going to talk about that thing for, like, ten years, and maybe you'll get a reply, maybe you won't, that doesn't build trust. It's not a soap opera, so you don't have to do all the details in public; you absolutely shouldn't. But at least at the end you should post a summary, or a once-yearly report or whatever, where you just say: okay, this is what we did in the end. There are also limits. Communities exist within a context, within society. If, here, the fire protection brigade comes and tells us we are not allowed to have this (random example, I don't know), we can't just continue having this talk; we have to follow whatever the rules of the society we are within are. Your fishing community can't overthrow the government, I hope. And also, if someone external to the community, or within it, decides to unilaterally change a thing which the leadership of the community or of the project decided differently, it won't work. So there are still limits to this, limits to the impact, and it's really important to be aware of them. Applying all of this (and again, these are the internal slides, on purpose, no changes): the way Grafana Labs chose to interact with open source is that the products are actually based on open source projects, and the vast majority of what you have in open source is what makes up the product; it's maybe 98% of the code base. We keep a very thin veneer of differentiation on top of the projects, on purpose, because we want to have healthy long-term projects and healthy long-term communities. And the thing is, the communities themselves usually don't really care about the products; they mainly care about the projects, and that's fine. That's something where, as a company, you have to be really, really careful not to overstep the boundaries. Again, open core is a good example, where you basically put all the really important and really useful bits behind a paywall, and people stop using your product as much, and your project as much. We've seen this with several competitors to Prometheus, to Mimir, and others, which tried this and failed. And that's part of why, for Grafana, it's important to have open governance. Speaking with my Grafana Labs hat on: we have a lot of projects which are under the umbrella, the legal umbrella, of Grafana Labs, but the governance is not completely tied to Grafana Labs; you can join a project, you can have votes about stuff, without Grafana Labs deciding every single thing. In my specific case, all of the governance within the Grafana Labs projects is based on the Prometheus governance; of course, I'm part of Prometheus and I liked it. All the communication happens on GitHub, on mailing lists; obviously you have some work stuff internally on Slack or whatever, but the vast majority is in issues, in tickets, and things like these. It's important to have the membership truly open to everyone, even competitors. We have people working, with formalized roles, as full maintainers with full voting rights on Grafana Labs projects, who are not employed by Grafana Labs and never planned to be. And no matter whether you, as a company, sponsor someone else being a member of some project, membership is always personal. It's not the case that, oh, they pay X amount of money, or we
have this and that partnership, and now we make this and that person take this and that role. It's a person who does the work. They might be sponsored by their company, or they might not, whatever, but the point is that they, as a person, are the one who is actually part of the project. And if they move companies, they still retain the membership; they're not kicked out. There are other projects who do it differently, and I strongly, strongly disagree, and I strongly implore you to never, ever do this, because it kills the community over a relatively short amount of time. So what does it mean to be project first? Every project which you nurture should have a community lead. That doesn't need to be a full-time role; ideally it's not a full-time role, ideally it's someone who's driving the tech or the people or whatever within the project forward anyway. And it's important to identify the people who like doing this work, as opposed to just assigning someone who doesn't find a lot of fun and positive sentiment in it and then basically doesn't do it. There are a few mechanisms we like to apply. For example, community calls: for us this means a monthly call, always the same day of the week, always the same time. Yes, time zones; and yes, UTC, sorry, still jet-lagged; summer and winter time, things like this. But still: keep the same time slot, make sure everyone can join, don't gate on subscription, don't require registration, just let people join. Make certain that everyone can actually bring topics, so it's not just you talking at people; it's an exchange of ideas, actually talking with each other. Make certain that everyone can edit the notes. It's not the case that you write and define what the reality of that meeting was; no, everyone should be able to put stuff on the agenda and help write the notes. And make the recordings public. Another thing we like to apply: we have a meetup series called Grafana and Friends, in various cities around the world. It's always in cities where we have one, two, three dedicated people who are actually committed to running the thing long term, and then we help them with finding venues, or getting pizza money, or getting speakers, maybe paying for a speaker to travel to a place to speak at the meetup. Things like these keep local communities healthy, and that's what we as Grafana Labs do: keep our communities healthy. Public speaking: Grafana Labs is paying for me to be here, and not to talk about this and that product which you must buy; I'm literally giving you the internal presentation from the onboarding. Grafana Labs does this because we do believe in open and transparent communication, and just by externalizing what we do, we find other like-minded people who either use the stuff, or maybe want to work for us, or maybe want to support us and buy from us. All of those are fine, and all of those are intended, obviously, but we are not forcing anyone; we are actually paying for people to speak about commercially irrelevant stuff at conferences big and small. Blog posts at Grafana Labs are written mainly by engineers; not only by engineers anymore, but at least for the tech topics they are written by the engineers themselves. Which is sometimes hard, because upper management is like: okay, there's an engineer, that engineer is really expensive, why should they spend half a day or so writing a blog post when I can have them implement a feature in that time? So this is an actual
investment into making certain that the people who built the thing also have a public voice, speak to the wider community, and show that yes, there are humans behind it and how they went through it. And obviously a ton of webinars, where you just explain your stuff and do it again and again and again, because one of the truisms of the fundamentals of tech, like this talk for example, is that the actual content doesn't change; the audience changes. Like your math teacher at school: they will probably start their career and end their career teaching the same math, and that's fine. It's not that the content really changes; it's that the audience changes. So yes, webinars, and just repeating what you've been saying forever, are also good. As I said, engineers are expensive, travel is expensive, but if you only send marketing people, or if you only send field engineers, at some point you will lose authenticity within the wider communities. You have to actually send the people who do those things. So one of the reasons why the Grafana projects are so successful, and one of the reasons why Grafana as a company is so successful, is that we actively invest engineering time into public speaking, writing blog posts, interacting with the community, and basically just doing good stuff; people take notice, and they appreciate it. Yep, final words of warning. Again, this is internal, and every sales and marketing person being onboarded sees these things: keep marketing and sales mechanisms away from community mechanisms. You will mess up interacting with your community; you will also mess up in other ways and forms in life, in your work, wherever, because that's part of being human. When you mess up, be honest about it, be transparent about it, learn from it, document it, whatever it was, and move on, and everyone else does the same. And again, this builds towards psychological safety. We are social animals; you don't want to put someone into fight-or-flight, or have them think that you lied to them. Because if you're dishonest, or treat communities just like a random sales or marketing initiative, they will absolutely turn away, that community will die, and your project will also wither. Thank you. I think we have time for some questions. Question: I was just curious, where can we see which cities have Grafana and Friends meetups? Answer: meetup.com. Just go to meetup.com and search for Grafana and Friends. Question: With the code of conduct, some conferences do transparency reports; they anonymize them, but everyone knows who they're about, which can lead to defamation cases. What's your thought on transparency reports for code of conduct? Answer: I think they're good. I also disagree that everyone knows what they're about; it depends on the size of the community. If you have a group of 50 people, yes, everyone will know everything, of course. But, for example, just yesterday or today the Linux Foundation put out a transparency report on code of conduct enforcement, and I knew about one single one of the cases, out of maybe half a dozen, maybe a dozen. I strongly believe we need to have them. Of course, the thing is, any investigation into a code of conduct issue is not a public soap opera. Part of protecting the victim, or potential victim, or at that time alleged victim, and everyone else, is to not make everything public, and to actually take everything private: have actual deep conversations, talk to the people, have interviews, and so on. So by definition you pull everything very, very private. The one way to validate this kind of thing is by being transparent afterwards, to the extent that you can, so that you
actually prove that you did something. So you can talk about: okay, as a result of this complaint, one person was told not to interact with this community anymore, or they will not be invited to the events anymore, or they need to have a cool-down period of X amount of time. The Linux Foundation also apparently had someone go to counseling and provide proof that they went to counseling; whatever. But you have a paper trail of: yes, we did engage with this, and yes, this is roughly what happened, and this is roughly the outcome. So people who are wondering whether they should consider joining this community actually see proof positive that this is what you do. Anyone else? More questions? Question: That's a question... I feel honored, Richard. I'm just curious, your title is Director of Community; what do you do, exactly? Do you write any code? Answer: I don't write any code. Question: So what do you do daily? Answer: Very good question. The answer changes more or less daily, and I'm not joking. At a very basic level, I care about the continued strategic success of the projects we care about, which is primarily all the projects which we have ourselves, plus in particular Prometheus and OpenTelemetry, which are highly relevant to us as a company. Which means, for example, organizing the dev summits for Prometheus; until last year, running PromCon, and this is the first year I'm not running it; writing the governance; making certain the code of conduct is applied everywhere; all of this. But way, way more: giving public talks, helping internal people give public talks, polishing their CFP submissions, a lot of license work; it's endless. Right, one more question, anyone? Question: Going a bit further on the structures of communities, the governance and how they work: do you see different communities having different governance methods, do you see certain methods being more effective than others, and would you advocate for some of them? Answer: That's a very, very deep question. So, I know of at least one legal umbrella which had to actually break their own bylaws because they were in such a stalemate; people didn't vote anymore, they had too many members who had become inactive, and with a little bit of wink-wink and a lot of closed eyes, they basically did one move to get back into an operating state. As a member of that thing, I fully agree this was the right move. So one of the things is: be careful that you stay operational with your community and with your governance, which means you need to prune people who become inactive every X amount of time, either by asking them nicely or by kicking them out, because most people don't leave by themselves; they just stick around. That's one of the things. The other thing is: try to prevent hostile takeover. There are so many examples where a company is paying, directly or indirectly, to have enough people in positions of power, and then they flip the switch and they own the thing; that's also not great. As a general rule, I would say: if you're still small, full democracy is good, but don't require a vote for everything. Put in consensus mechanisms; for example, similar to the IETF, you have rough consensus, and once rough consensus is achieved you just move on. You don't have to wait for every last person to voice their opinion; everyone can progress and keep pushing the project forward. For important things, like changing governance or adding people, have votes, have formalized votes with a very rigid structure and time boundaries: you can only run a vote for X amount of time, for example two weeks, and you have different thresholds of acceptance. It's a different thing to add a person to a project than to kick one out or to change the governance, so have different levels for this.
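None of this is tied to specific tooling in the talk, but as a rough illustration of "rigid structure, time boundaries, different thresholds", here is a hypothetical vote-checking sketch; all names and thresholds are made up, not taken from any real project's governance.

```python
from datetime import datetime, timedelta

# Hypothetical thresholds per decision type; purely illustrative.
THRESHOLDS = {
    "add_member": 0.5,         # simple majority
    "remove_member": 2 / 3,    # harder to kick someone out
    "change_governance": 0.75, # hardest of all
}
VOTING_WINDOW = timedelta(weeks=2)  # "you can only run it for two weeks"

def vote_passes(kind: str, yes: int, no: int, opened: datetime) -> bool:
    """A vote only counts inside its time window and above its threshold."""
    if datetime.now() - opened > VOTING_WINDOW:
        raise ValueError("voting window closed")
    total = yes + no
    return total > 0 and yes / total > THRESHOLDS[kind]

# Example: 5 yes / 2 no on removing a member is ~71% > 66.7%, so it passes.
print(vote_passes("remove_member", 5, 2, datetime.now()))
```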
And once you are above a certain size, it is usually most efficient, just because human communication overhead grows quadratically, to have an inner core of trusted people, like a governance committee or steering committee or something; usually it should be an odd number, and three, five, or seven is good. Hello, thank you very much. I think we are going now to the coffee break, so everyone, the coffee is probably downstairs; we'll see you back in 45 minutes, and after that, questions and answers. Big Data, AI and You. So, what is your response to big data, AI and you? Let's have a show of hands. Is this the first emotion you encounter: I don't know anything, I know nuts about it? I'm confused, I'm worried? I am perplexed, because everybody says ChatGPT is so good and AI is so good, and I know nuts; it's so difficult that I can't achieve it, can't do anything about it? Well, I hope this will be your last response: you are happy, you are able to use technology, you are able to use data, you are able to use AI to grow your company, to ride the wave of digitization. This is something I hope for you; maybe if you are in the middle, or frustrated, hopefully my slides will give you some light of hope for the future. So this may be the current emotion we feel about big data and AI: confusing, just like a scribble, full of noise, full of things, with everybody talking about it. But do you know, the solution to AI is not about technology; it's about you. It's not how good the technology is, how smart the technology is. It's about you: you want to use AI or big data to solve your problem. You are the solution, because you know the business. If you do not know your business, please do not use data, do not use AI. You must understand your business first; then you are able to use data to visualize, to predict, and to come out with a solution. Many people miss this; everybody blows the technology out of proportion: wow, this is so good, wow, this is so good. Hype is for the stock market, and this is what I tell my students: hype is for the stock market, it's not for you. You need to know and understand the problem; then you are able to solve it. The next thing about AI and big data: you are human. I teach my students these things: if you want to use big data, if you want to use AI effectively, you must be a human first. You understand human nature, you understand human behavior, and then you are able to use data and AI correctly and properly. In fact, when I look at all the big AI technology out there nowadays, those in the news, my personal thought is that they are going to destroy the very products they have built up all these years. Why?
They forgot about trust; they forgot how to be human. So be not perplexed, be not worried: to use big data and AI effectively, you are the solution, because you know the business and you are human. That's why I'm here. Therefore I offer three steps to help you grow the AI and data talent pool with us. Step one: proof of concept; start small. Second: internship; engage students, be it with us at ITE, the polytechnics, or the universities. Engage them, get them, because they are the future. Next: grow the talent pool with us. So, three steps: one, proof of concept; two, internship; and next, talent growth. Let's look at proof of concept. Start with a proof-of-concept project, one small step, and, to borrow the famous quotation about the moon landing, it becomes a giant leap for the human race. We must take one small step into AI, into using AI and data; don't make it too complicated. And that is actually what happened during COVID. One company came in and talked to us, saying: William, we want to try AI in my business. We are a traditional business; we sell, well, let me show you in the next video, construction chemicals, to people wearing helmets, vests, personal protective gear. I want to transform my traditional business into this area: can we use AI, can we use data? So we said, okay, let's try; let my students try it. And you may ask: must you pay my students? The answer is no, it's free. Of course it's free of charge, because it is an industry project for our students to practice what they have learnt. During COVID, I had one group of students take on the challenge of using AI to test whether construction workers or factory workers are wearing their personal protective gear or equipment correctly. This is their proof-of-concept video. I gave the students a few months for this research; they used their TensorFlow Python code, their experience, and they got hold of Intel OpenVINO, which is an open source library you can use and test (a rough sketch of what such a detection loop can look like follows below). So you see: without wearing gear, it shows red; those wearing full gear show green. And that's my student doing the testing; again, a proof of concept, nothing fancy. Now he takes the gear off and turns red, puts it on and turns green; sorry, he turned green, then red. Look. And we showed it to the traditional company doing this kind of sales, the personal protective gear sales. After this, the company said: alright, if ITE students can do it, I believe we can get this project up and do it at a bigger scale. Of course they did not ask us to continue with the full-blown project; they engaged an external party to do it. But it was one small step for the company to move further. So, step one: engage us, give us your problem statement, make that problem statement small enough for a proof of concept, and let our students start thinking about it. It's a simple one-, two-, or three-month project: let them think about it, and let them show you the results.
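As referenced above, here is a minimal sketch of the kind of PPE detection loop the students built, assuming a pre-converted detection model (the file `ppe.xml`, its input size, and its output layout are all hypothetical) and OpenVINO's Python runtime. The actual student project is not published, so this is illustrative only.

```python
# Hedged sketch of a PPE proof of concept: run a detection model over a
# webcam feed with OpenVINO and draw green (full gear) or red (no gear).
import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("ppe.xml")          # hypothetical model file
compiled = core.compile_model(model, "CPU")
output = compiled.output(0)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize and reorder to the model's assumed 640x640 NCHW float input.
    blob = cv2.resize(frame, (640, 640)).transpose(2, 0, 1)[None].astype(np.float32)
    detections = compiled([blob])[output]
    # Assume each row is [x1, y1, x2, y2, confidence, class_id], with
    # class 0 meaning "wearing full gear" in this made-up labeling.
    for x1, y1, x2, y2, conf, cls in detections.reshape(-1, 6):
        if conf < 0.5:
            continue
        color = (0, 255, 0) if int(cls) == 0 else (0, 0, 255)  # green / red
        cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), color, 2)
    cv2.imshow("PPE check", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
```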
Next, therefore, we have two courses that look at data and also AI: one is the Higher Nitec in Data Engineering, and the other is the Higher Nitec in AI Applications. So what are these two courses, and how can you engage them for proof-of-concept projects? For the Higher Nitec in Data Engineering: basically, you have big data; you need to clean it up, put it inside a database or a data lake or a data warehouse, process it, clean it, visualize it. Those are the skill sets the students learn: supporting the collection of data, processes such as data labeling, data acquisition, data digitization, conversion and organization of raw data, and so forth; most of you here in the AI and data field will be familiar with all this. It's about getting them to visualize, to make sense of, and to take insights from what you have. And let's say you have a lot of data and nobody to label it: engage our students to do it as a proof of concept. So that is the Higher Nitec in Data Engineering. The next course, its sister course, is the one I'm handling: AI Applications, where we train students to become associate AI engineers. First we start with mindset change, where we teach AI ethics and also software development practices; that's where agile development comes in. We have to teach them to be human first, and how to do software development the correct way. After changing the mindset, we look at coding skills. We teach two things: Python, and mobile application programming, which is Android. Why Android, you ask me? Because we are seeing applications moving down to the handphone, the mobile phone, and all the way to microcontrollers, where you talk about tiny machine learning. And our students go through very tough training, because they have three practical exams where they have to code for one and a half hours, from top to bottom. The first practical exam is about computer vision, and the application involves drones and another open source project from Intel; a fun project. Alright, one minute, so I will quickly wrap it up: natural language processing also plays a part after that, then data, where we talk about data all the way to AIoT, which is the current module they are doing now. Okay. After this comes the internship: we have six-to-nine-month internships for these two courses where you can engage our students as interns and work with us. And I gave all my students one project, one target, at this FOSSASIA: please find an AI company that wants to engage you for an internship at FOSSASIA. So please engage them and talk to them. And the last one: you may do a full-blown project with them. This is an example from one group, who did a full-blown project during COVID; let me speed it up. It detects whether you are wearing a mask or not, and whether you are keeping social distance. It's a very full-blown project where they send the data to a Telegram chat app and tell the enforcement officers that somebody is flouting the rules. Okay, and this was featured by Channel NewsAsia, and that is one of our students. This project, of course, is longer term: six months; they really spent a lot of time on it, and they channel the information to a Telegram app to alert the enforcement officers that somebody is flouting the rules. You can engage them to do that during their internship. The third step is the Work-Study Diploma, which ITE is now doing, where you employ the trainee as your employee and then give them time: maybe three months working in your company, then one month back at ITE for intense training. What kind of training will they cover? Let me quickly run over what this Work-Study Diploma entails. Very importantly, it is heavily funded by SkillsFuture, so you get benefits, the trainees get benefits from SkillsFuture, and companies get benefits from SkillsFuture too; and you get a ready-trained worker in the area of data and AI. A quick summary of the certification:
this two-and-a-half-year work-and-study program is quite similar to the German model. We have the foundation, where the students learn programming all the way through exploratory data analysis, data preparation, and project management. In the second year we split into two tracks; let me quickly run through them: an AI track and a data track, because at the end of the third year they are supposed to do a capstone project in the company and showcase it. Within these two and a half years we also encourage students to get industry certifications from Google, AWS, IBM, Apache, and so forth. So that is what we are getting our students to do. And the key point is: we are launching this in 2024, so we encourage companies who are interested in growing their talent pool to join us in this program too. With this, I come to my last slide: Q&A. Feel free to connect with us; this is my QR code, you can connect with me on LinkedIn, and my email is very simple: william_tan at ite.edu.sg. If you have any questions, or if you can't even get my contact, no worries: my students are out here at FOSSASIA for these two and a half, three days; get hold of them and they will refer you to me. So maybe we have time for a quick question: is there anybody with any questions? Alright, I think we don't have much time, so we are going to keep going. Thank you very much. Thank you very much, William Tan; let's give him a round of applause. And yeah, if you want to connect with him, you can scan the QR code; we also have these QR codes on the badges, so if you want to exchange contacts, that's another easy way to do it. Next up we have Frank; our next speaker is Frank Karlitschek. Good, thank you. The introduction said CEO and open source strategy, but I think I'm probably most well known, and invited here, as the founder of Nextcloud. Today, though, I want to talk not specifically about Nextcloud, only a little bit at the end, but more about AI and the challenges around it. I mean, we all saw the hype and the explosion that happened around ChatGPT and generative AI in the last few months; it's obviously a huge topic. I personally believe that the job of computers and software is to make our lives easier, to remove boring tasks from us, to remove mental load, and just to make our lives and our society better; and of course there's a lot of AI in that, and a lot of potential behind it. Unfortunately, over the last few months we also saw the dark side of AI, the problems it might cause: problems around privacy; problems around CO2 emissions and climate change, because it takes a lot of power; challenges around discrimination; and some other things. This actually brought us at Nextcloud into a challenging situation: on the one hand, we want to provide our users the latest technologies, the latest tools, and there are a lot of opportunities to do cool stuff with AI; on the other hand, we have our own ethics and our own values, from open source to protecting privacy, and so on. After a lot of discussions, this led us to start an initiative around ethical AI, which we basically use to judge and to categorize the different functionalities based on what we think is ethical, what is not so good, and what is really, really bad. This helps us reflect on the features we implement ourselves and also on the AI features we integrate into Nextcloud. Let's talk a little bit more about the problems first, and then later about a possible framework to address them.
As I already said, there is of course the CO2 footprint. If you look at this statistic here, you can see the energy consumption of different things: on the left side is a flight in a plane; then we have a human life overall, what we as humans cause in energy consumption and CO2 output; then there are other things like cars; and on the right side you see the training of a mid-size AI model. The reason is obvious: it takes a huge number of GPUs and a huge amount of power, so the CO2 footprint is really significant. This is a real problem with the whole AI thing. The second thing is discrimination. AI systems can be used in lots of unethical ways; examples, as you can see here, are systems where, for example, the police in the US judge who might be a criminal and who might not, based on certain things, like, in this case, the color of their skin. How does that happen? The reason is obvious: AI systems are trained on data from the internet, publicly available data, and as we know, our society has a lot of discrimination, racism, and other problems, and this is fed directly into the AI models, which amplify it; and that is of course a huge problem. The next challenge of AI is the whole area of privacy and security. There was a story last week that Samsung employees sent internal data to ChatGPT to have it analyzed, and it's obviously a data breach, because the data has left the company and is no longer protected. I really loved that story, because this is not new; it happens all the time. It happens with translation systems: who of you has already sent internal documents to DeepL or Google Translate or other services? This is something that happens all the time, and it is of course a huge problem. The next challenge of AI is the concentration of power. We all know that the really big machine learning models, GPT-4 and so on, are something that at the moment only big organizations can build, so we are basically heading into a future where only, like, five big companies can really do all these advanced machine learning things, and that is of course a problem. The last one is availability. There was a study recently from Oxford which researched who has access to AI technologies and who doesn't, and this map is the result of that study. As you can see, in the rich areas of the world it's obviously easy to do all the advanced stuff, and in the not-so-rich areas it's really hard; so the digital divide is getting bigger and bigger. So these are the challenges. Again, what can we do? We could of course say, okay, AI is bad, let's not use it; but that's not really a solution, because there are also lots of good things that can be done with AI, and again, we want to do ethical AI at Nextcloud. So after lots of discussions with experts, we came up with three requirements against which we want to measure different AI systems. The first requirement for an ethical AI system is that the code is open source, and with code I mean both the code that's used to train the model and the code that's used to run the model. Why is this important? Because only if the code is open source can you run it yourself, actually look inside, optimize the power consumption, even measure the power consumption. With a lot of hosted AI systems, OpenAI for example, no one really knows even what the power consumption is; no one knows what's happening behind the doors. So open source is key.
The second requirement is that the model should be freely available. Why is this important? Because if it's freely available, you can run it yourself, you can run it on premise; you don't have to send your data to some web service where you don't really know what's going on. The second reason: freely available also means everybody can use it, even if you're not a big company, even if you're just a student, and there are tons of students here. It means we can all play with it and use it. So this is a requirement. The third requirement is that the training data should be freely available. Why is this important? Because if the training data is available, you can actually check whether there's discriminating data inside, and you can remove it and improve it; if the training data is not available, then what's happening is a complete black box, and that's not really good. So based on those three criteria we developed this ethical AI scoring system, where code availability, model availability, and training data availability each give points: if you have all three, we consider it a green, ethical AI solution; if you have only one or two, it's yellow or orange; and if none of them is available, it's red. We have already used this system, as I will show you in a second, to judge our own features, the integrations we do, and also the roadmap for the future, where we invest energy. Okay, let's go through it; now it gets Nextcloud-specific, but I think you can translate this to other software too. Let's go through the features we already have and see how we are doing. You will see that at the beginning there will be a lot of green, and at the end a lot of red. Let's start with the easy things. Here, if you know Nextcloud, then you know that at the top we have a system which recommends files to you, based on your behavior in the past; we do some intelligence features here. This is a system which is, according to us, ethical, because the data is available, the training happens locally, no data is leaking anywhere, and there's no discrimination. The next feature we have in Nextcloud is a system that learns login behavior: we learn who logs in from which IP address at what time, so we can detect that if someone logs in from a different continent in the middle of the night, it's maybe something a bit weird. Again, this is machine learning that happens on the server; no data leaks anywhere, the code is available, and so on. Then there's face recognition: if you use Nextcloud Photos to upload your photos, we can recognize faces and group them by people; that's a feature we implemented last year. This is a system where the model runs on your server; it's trained on three million publicly available faces, and it can be verified to have no discrimination. Again, the data is available, the code is available, and the training data too: this is a green, ethical AI system. Similarly for object recognition: we can automatically detect certain objects in photos uploaded to Nextcloud. Again, this is a model that runs locally, everything is open source, and it can be verified that there's no discrimination; so that's an ethical system.
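To make the scoring rule above concrete, here is a trivial sketch of the three-criteria traffic-light logic. This is my own illustration; Nextcloud's actual rating is a human judgment process, not a script.

```python
def ethical_ai_rating(code_open: bool, model_free: bool, data_free: bool) -> str:
    """Map the three availability criteria to the green/yellow/red scale."""
    points = sum([code_open, model_free, data_free])
    if points == 3:
        return "green"   # fully open: code, model, and training data
    if points >= 1:
        return "yellow"  # partially open (the yellow/orange band)
    return "red"         # complete black box

# Examples as described in the talk:
print(ethical_ai_rating(True, True, True))    # on-device face recognition: green
print(ethical_ai_rating(True, True, False))   # genre model trained on Spotify: yellow
print(ethical_ai_rating(False, False, False)) # a hosted SaaS black box: red
```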
Now it gets a little more interesting. Another feature we can do is music recognition: we can detect the genre of music, so if you upload your music collection to Nextcloud, we can automatically detect whether a certain song is rock or metal or whatever. This again uses a machine learning model that runs locally, so your music is not sent anywhere else; but unfortunately this one is only yellow. Why is that? Because the training data comes from Spotify, and the music on Spotify is obviously not publicly available; but at least the model is available, and the code inside Nextcloud is open source. The next feature I want to mention is related resources: based on the document you're viewing at the moment, or the folder you're in, we automatically show related chat channels, calendar entries, and other things. Again, this is something we do on device, with no external training data, which is ethical. Then we have a feature in Nextcloud Mail, a priority inbox, as you might know from other solutions: which mails might be especially important for you, and which not. Again a machine learning model; we improved it lately and made it more sophisticated, and it runs on device; no mails are sent anywhere else, and so on. I don't know if everybody realizes how interesting this is: other solutions, like Gmail, or even some plugins that exist for other mail clients, all require that all your mail sits on a remote server, managed by a big company, basically out of your control. So this is the only solution where you can keep your mail under your control and still have these innovative features; again, privacy-aware, completely local, and ethical. Another feature we launched four weeks ago is document classification: you upload a document and we can automatically detect that it contains credit card numbers, social security numbers, bank information, or other critical things. We can automatically flag the file as critical and trigger certain rules, for example that it cannot be shared outside the organization, or that if it is shared, a watermark is forced onto it, and other things. Again, this is completely ethical. Now it gets more interesting still, because translation is one of my favorite topics. Basically all the good translation systems that exist at the moment, DeepL, Google Translate, and others, are SaaS solutions: all of them require that you send your documents to them, and then you get a translation back. That's obviously very, very critical from a security perspective, so we wanted to do a little bit better here.
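The talk doesn't name the model behind Nextcloud's own translator, so as a hedged illustration of what "translation that never leaves your server" can look like, here is a sketch using the freely licensed Helsinki-NLP Opus-MT models via the Hugging Face transformers library; the model choice is mine, not necessarily Nextcloud's.

```python
# Fully local German-to-English translation: the model is downloaded once,
# then inference runs on your own machine; no text is sent to a SaaS API.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")

text = "Der ganze Text bleibt auf dem eigenen Server."
result = translator(text)
print(result[0]["translation_text"])  # roughly: "The whole text stays on your own server."
```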
So we implemented Nextcloud translate. How does it work? It actually has two different parts. The first one is that, in any place in Nextcloud, you can mark any text; that's German text here, obviously. You mark the sentence, you click on translate, you get this nice dialog where you select which language it should be translated into, you click, and it's inserted back into the document. Here we have two different modes. The first one is that we integrate DeepL, so if you want to use DeepL for this, it works; but it's obviously a red solution, because no one knows what the DeepL source code looks like, no one knows what happens with the data, no one has the model, and no one knows what the training data looks like. It's a complete black box, so this is red. But we wanted to do better, so we actually implemented our own translation system: a machine learning model that runs completely on device, on your server, is completely open source, and leaks no data anywhere. It works very similarly: you have some text, by copy and paste or by selecting it, you choose the language, press the translate button, and it's translated. This is quite unique if you really care about sensitive information: as far as I know, it's the only translation system in the world that really runs completely locally, which makes it a green AI system. The next topic is dictation, because maybe you want to dictate something into your document, or a chat message, or an email. Again, there are dictation systems available in the cloud, but they are not necessarily trustworthy. So what we can do, in any place, like here in a chat conversation: you press the button, you get this menu, you say you want to dictate something, and then you can dictate directly into the chat. Or here it's an email: if you're writing an email, you press the button, choose insert speech-to-text, dictate into your browser (it can even be translated on the fly if you want), press submit, and the text is inserted. This uses a machine learning model that actually comes from OpenAI, which, surprise surprise, is called Whisper; it comes from a few years ago, when they were still nice. It is actually open source and can run on premise, which is a very nice solution that works all over Nextcloud. Unfortunately, it's still only a yellow ethical AI system, because the training data is not there, so we cannot really verify how the model was trained; but it's a freely available model that runs locally, on device.
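Whisper really is published as open source by OpenAI, so a minimal local transcription looks roughly like this; the audio file name is a placeholder, and the `openai-whisper` package must be installed.

```python
# Local speech-to-text with the open source Whisper model: the audio
# never leaves the machine the model runs on.
import whisper

model = whisper.load_model("base")          # a small, CPU-friendly checkpoint
result = model.transcribe("dictation.wav")  # placeholder file name
print(result["text"])
```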
Now we move into slightly cooler things, for example image generation; of course you want to generate some images and work with them in Nextcloud. There we have a system where, for example here in a chat, you just press the slash key, you get this menu, you say you want a picture, you play around with descriptions (maybe you're discussing some jewelry project, I don't know), you press generate, and boom, the picture is generated and inserted into your chat channel. That's a very useful and handy feature. Unfortunately, this uses DALL-E from OpenAI, which means the data is sent to them and the result comes back, and no one really knows what happens in between, which unfortunately makes this a red, unethical AI system. But we want to do better, and fortunately there are solutions for that: for example Stable Diffusion, an open source alternative that we also integrated, where you can again do the same thing: type in any prompt, and it generates the image and inserts it into the chat or whatever document you want. That is a relatively ethical AI system: it's yellow, because the model is freely available and runs on device, and the code is open source; the training data is not, unfortunately, but at least privacy is protected. And last but not least, there are text generation features, as you'd expect. For example, in a document you can at any time say: I want to insert a text; you state your requirements for the text, and the text is generated and inserted directly into the document, which is super, super handy. This works in other places too. Here, for example, we have Nextcloud Office, which is based on LibreOffice, where you can say (oops, I was too fast, I was too fast again) that you want to insert something into the office document, for example a draft for a contract, an employment contract, and it's inserted directly into the document. That's obviously very handy, and it works elsewhere too: in Nextcloud Mail, say I want to write a funny invitation email to my friends for my birthday party next weekend; I can do that, and it directly generates a nice email that I can just modify and send out immediately. Same in other places: here in chat I can say I want to send a nice message to my team members about a status meeting tomorrow, and it generates the message and I can send it directly. And in our project management tool I can say: write me a project plan, generate some documentation, and it automatically generates a project plan right there that I can work with. This is obviously an integration of ChatGPT, which, as you all know, is not an ethical AI system: again, it's just a web service; you send your data to it; you don't know what's happening there; the code is not available; you cannot run the model locally; you have no idea what the training data looks like; it's completely closed. But it is of course a nice productivity feature. We are actively working with other projects on a large language model that we can integrate into Nextcloud in line with our ethical values. I'm very positive at the moment that in the next version, or the version after that, we will be able to have a large language model which is directly part of Nextcloud, where you can actually check what's happening in the background, and where you can see that no data is leaking anywhere. So this is our ethical AI framework, which I think is really, really important. It's also important, if you're a student here, if you want to learn, if you want to build something up, that we follow our open source values: the code is available, the data is available, and you can really innovate and build something on top of it. This is so important, because if you're just using a web service, you're not really in control of anything; not of your innovation, your new products, or your digital lives. To illustrate this a little, I created this chart where we compare ourselves, on a strategic level, to other collaboration solutions: on the X axis you have how powerful and intuitive a solution is, and on the Y axis how private, ethical, and open source it is. As you know, there are some proprietary solutions out there,
like those from Google, from Slack, or from Zoom, and others, which are definitely quite powerful and intuitive; unfortunately, they're not very high on the ethical and open source axis, and they're actually not open source at all. The good thing is that there are some open source alternatives, which are usually a lot better on the ethical and privacy side; but unfortunately they're not as powerful and intuitive. We as Nextcloud want to challenge that: we want to build something that sits up here, strong on privacy and ethics, and also super innovative on the intuitive and powerful side. The question is, of course: okay, great, but Nextcloud is a small organization, so how can you manage to do that? The answer is that we use our superpower, and the superpower is our community. We are open source, and this means we can all work together on it. We are not one small company; we are a big international community, as you can see here in this room, and if we all work together, I think we can build something in AI that is actually ethical and powerful, while respecting our values. Thanks a lot. Alright, next up: he's the CEO of Precursor, he's best known for his work on Microsoft's Xbox, and he's going to talk about debugging Rust with Verilog. When you don't give people a title, I guess they make one up for you; there is no Precursor organization and I'm not the CEO of it, but it's fine, thanks for having me here. I always feel a little bit of an imposter, because this is an open source software conference, and I'm like, well, what do I talk about here? As you saw, I had a little struggle setting up; I had a live demo I was going to show, but then I had a video backup, anyway, it's a long story, it's not happening, but we'll talk anyway. And I apologize for the font mismatch, because my slides are now on a computer that doesn't have the right fonts, so things are going to render weirdly. So, the topic of the talk is debugging Rust, actually just generic code, with Verilog. If people don't know what Verilog is: Verilog is a language for describing hardware, basically. This is in the line of stupid open hardware tricks, trying to explain a little bit of what open hardware is good for. I think most people, when they hear open hardware, their reaction is: that's cute, but what do I do with it? I don't actually have the time to build this thing; I don't actually really care to fix it. They just want the thing delivered to their doorstep, now, and they want it cheap. If you had the option to buy a house that came with the blueprints versus a house that did not, that doesn't matter to most people; the fact that you could have the source to your house doesn't move the needle in most situations. But there are some things you can do with open hardware even if you're not into hardware. One of the typical, canonical things that we all jump up and down and scream about in the open hardware community is that you can do security audits. That's Kerckhoffs's principle (not to be confused with anything else), the idea that there's nothing up my sleeve: if you're going to have a security system, then disclosing the full function of how the lock works, and where the tumblers and the pins are, allows you to make an accurate assessment of the security parameters. But in reality, again, this is an application that's mostly just for paranoid people. Maybe a thing that's a little more relevant to more people in this room is something like Spectre mitigations, still a largely theoretical area, but how many people here know what
Spectre is, in terms of a vulnerability? Okay, a few hands; I'll go over it very briefly. Basically, there's a class of attacks that can happen on your laptop, the machine you're using right now in your lap, where information about a secret computation can leak through what's called a timing side channel. Sometimes, depending on the data you're processing, the data can process very quickly, or it can take a little bit longer, and if you measure the amount of time, it can leak, for example, information about your password or your secret keys. The reason that happens is that even though machines present this abstract model called the instruction set architecture (oh, I have an x86, or I have an Arm in an M1, or something like this), on the inside the time it takes to execute an instruction varies quite greatly, and it depends upon the tricks they play. Do we have a laser pointer or anything like this? Is this a laser pointer here? Me and my old eyes can't see; oh, there you go, excellent. So, this is an example of what's called a branch predictor: every single time you run through a piece of code, it will remember the last direction you took going through a loop, and it will generally say: well, since you went this way the last time through the loop, you're more likely to go that way again than the other way; so it speculates ahead and tries to save you some time by guessing that that's what you're going to do. That internal state becomes a problem, because that's the vector for leaking the information about your key. If we had the source code for your CPU, we could actually have the compiler write provable, automatic mitigations for Spectre. In other words, this whole thing about the patch train you get every two or three months to patch the newest Spectre vulnerability, this whole industry of researchers now employed basically finding this series of vulnerabilities, could be worked around if guys like Intel and AMD would just share the source code; then we could actually write compilers for it instead of having to reverse-engineer the whole pipeline. But this is still a largely theoretical thing, because no CPU that actually matters, that you're using on your lap, has that available. Still, that's something I think would be interesting to most people.
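Spectre itself needs microarchitectural tricks, but the underlying idea, secrets leaking through how long something takes, can be shown with a deliberately naive software example. This is a toy of my own construction, far simpler than a real Spectre attack.

```python
# Toy timing side channel: a character-by-character comparison returns
# early on the first mismatch, so guesses with correct prefixes take
# measurably longer to reject.
import time

SECRET = "hunter2"  # made-up secret, purely for illustration

def naive_check(guess: str) -> bool:
    for a, b in zip(guess, SECRET):
        if a != b:
            return False      # the early exit is what leaks timing
        time.sleep(0.001)     # stand-in for per-character work
    return len(guess) == len(SECRET)

def timed(guess: str) -> float:
    t0 = time.perf_counter()
    naive_check(guess)
    return time.perf_counter() - t0

# A guess sharing a longer prefix with the secret takes longer to reject:
print(timed("zzzzzzz"), timed("hunzzzz"))  # the second call is measurably slower
```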
Another thing you can do with open hardware, it turns out, is debugging and performance profiling of code, which is something a little more software-relevant. So, the source code of the CPU, this is an example here of some source code, can be run and turned into this display here, which probably a lot of you aren't familiar with, but a guy like me would feel very comfortable looking at. This is a set of waveforms that describes the state of the CPU. We're looking at, for example, the data being fetched out of the register file, the instruction being executed at this point in time, whether these are compressed instructions or legal instructions, whether it's in the multiply pipeline, exceptions, the virtual page numbers, the state of the AXI bus on the inside. That's all visible when you run at the hardware level using this type of simulation, so it's an extremely powerful view into the inside of a computer, and we can use it to actually debug code. So, just to review the typical approaches to debugging: print statements. How many people debug by print statements here? Everybody, right? It's awesome; it's tried and true. Even in the most minimal setups, when you have almost nothing available, a print statement will generally work. It's interoperable: ASCII comes out, you can pipe it to a Python script, you can wrap it, you can automate other things with it. So it's awesome, but it's limited for debugging very complex and concurrent environments. Anyone tried to print-debug two threads running at once? You'll see a garble of stuff emerging on your console, two things talking on top of each other; not to mention the inherent performance problem of trying to talk over a 115-kilobaud UART when your CPU is running at a gigahertz. Then you have more sophisticated stuff: oh, we have an IDE, you're debugging line by line, you can see all the state of your Python code or whatever it is; GDB and all this stuff. It's really awesome that we have it, but then there's the question of who debugs the debugger, right? When you bring up a new platform, it's actually a lot of work to instrument it and bring in the debugger. And even in a multi-process system, these debuggers aren't a straight shot: you have to be able to attach to the right process and switch through them, and there's overhead incurred in doing that, especially when you get into things like performance profiling. People who've done performance profiling may be familiar with this guy here: a flamegraph. Basically, a call stack view that shows you how much time you spend in every single call, all the way out to the outer routines, so you can determine very quickly which routines you should focus on optimizing. It's very powerful and has beautiful output, but it can have artifacts due to overhead. There are plenty of stories of people saying: you know, I put a flamegraph on my thing, and I spent all my time optimizing, and I found out I was actually just optimizing the system call for getting the flamegraph to run, or something like that. The overhead of actually getting a flamegraph to work can be a little tricky, so there's a kind of art to doing really good performance profiling: you have to use hardware counters, you have to use instrumented kernels, and there's a whole bunch of tricks that come into play to make sure you're actually capturing the events of interest. If you're going across system call boundaries, bouncing between the kernel and user space, and you want to plot that, it introduces inconsistencies; you can't correlate timestamps as easily; and there's a whole bunch of other problems that happen in concurrent spaces. So people do it all the time in really big systems, but it's not obvious how to do it, and setting it up takes quite a bit of time.
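Flamegraphs themselves usually come out of tools like perf plus the flamegraph scripts; as a minimal stand-in for the same idea, attributing time to call stacks so you know what to optimize, here is Python's built-in deterministic profiler. The workload is my own example, nothing from the talk.

```python
# Deterministic (tracing) profiling with the standard library: where does
# the time actually go, function by function?
import cProfile
import pstats

def slow_helper():
    return sum(i * i for i in range(200_000))

def workload():
    return [slow_helper() for _ in range(20)]

cProfile.run("workload()", "profile.out")
stats = pstats.Stats("profile.out")
stats.sort_stats("cumulative").print_stats(5)  # top 5 by cumulative time
```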
Then you have more sophisticated stuff: you have an IDE, you're debugging, you're going line by line, you can see all the state of your Python code or whatever it is, GDB and all this. It's really awesome when you can have it, but then there's the question of who debugs the debugger. When you bring up a new platform, it's actually a lot of work to instrument it and bring in the debugger, and even in a multi-process system these debuggers aren't a straight shot: you have to attach to the right process and switch through it, and there's overhead incurred in doing that, especially when you get into things like performance profiling.

People who've done performance profiling may be familiar with this guy here, the flame graph. It's basically a call stack that shows you how much time you spend in every single call, all the way out to the outer routines, so you can determine very quickly which routines you should focus on optimizing. It's very powerful and has beautiful output, but it can have artifacts due to overhead. There are plenty of stories of people who put a flame graph on their thing, spent all their time optimizing, and found out they were actually just optimizing the system calls needed to get the flame graph itself to run; the overhead of actually getting it to work can be a little bit tricky. So there's an art to doing really good performance profiling: you have to use hardware counters, you have to use instrumented kernels, and a whole bunch of different tricks come into play to make sure you're actually capturing the events of interest. If you're going across system call boundaries, bouncing between the kernel and user space, and you want to plot that, it introduces a nearly constant amount of overhead, you can't correlate timestamps as easily, and a whole bunch of other problems appear in concurrent spaces. People do do it all the time in really big systems, but it's not obvious how, and setting it up takes quite a bit of time.

So, a quick review of the niches that are not handled particularly well by the approaches I've just overviewed. When your machine hits the reset vector and you have to debug it, how do you debug the reset vector? That's a very tricky problem: you don't have print, you don't have anything else, so what do you do? Then there are transitions between user space and kernel, or machine mode and kernel. Machines start life with physical memory; they don't know about virtual memory, they don't know about your process space. You have to teach the machine where the programs are and where the page tables are going to be, and you have to tell it: okay, on this one magic instruction the program counter is going to magically teleport from this address to that address, but everything's fine, it's totally okay. The debugger, it turns out, does not deal with that very well. Then there's a whole bunch of other performance tuning problems, like going across system calls, which is very difficult, and a whole class of performance issues I call Heisenbugs, because they change as you observe them. For example, if you want to debug a cache or translation lookaside buffer (TLB) performance issue, just the instructions you add to try to extract that information can affect the behavior of the cache or the TLB, and you're no longer able to see the problem. There's also an issue of reproducibility: if you had a regression, found it, and think you fixed it, how do you later know that you fixed it? Reproducibility is a thing a lot of people don't talk about, but particularly when you're debugging things at the hardware level, you want to be able to go back and review the logs.
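(A small software-level illustration of that observer effect: merely enabling Python's trace hook, the mechanism line debuggers and profilers rely on, changes the timing you were trying to measure.)

```python
# The Heisenbug observer effect in miniature: turning on instrumentation
# distorts the measurement itself.
import sys
import time

def hot_loop(n: int = 200_000) -> int:
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(fn) -> float:
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

plain = timed(hot_loop)

events = 0
def trace(frame, event, arg):
    global events
    events += 1
    return trace

sys.settrace(trace)       # what a line-level debugger or profiler does
instrumented = timed(hot_loop)
sys.settrace(None)

print(f"uninstrumented: {plain * 1e3:7.2f} ms")
print(f"instrumented:   {instrumented * 1e3:7.2f} ms ({events} trace events)")
# The instrumented run is typically many times slower, so any timing
# artifact you were chasing may vanish or move.
```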
So the solution I've been working with to get around this is simulating a full-stack open hardware system. When I say full stack, I mean not just the CPU: I'm talking about the memory model, the bus model, the peripherals, everything. From the reset vector onwards we get a cycle-accurate view; all the overhead incurred in the system gets bundled together and thrown into this magic box called a simulator. We combine it with our OS and application code as it loads, this grinds for a while, and it produces a file that contains all the machine state from reset to the point of interest. It's multiple gigabytes of data, but it has everything the machine has done, all the decisions that were made up until that point, and then you can dig through it with a waveform viewer later on.

It sounds a little magical that we can have such a comprehensive model, but this is where the models come in. I design open hardware systems from the ground up, so for me this is a little bit easier. I use an open source CPU core called VexRiscv; for the bus, the interconnect on the inside, I'm using AXI and Wishbone, and it all comes for free as open source. If you're dealing with a proprietary system, good luck. The peripheral models I either write myself or borrow from other people, and that's all open source too. It becomes a little dicey on the memory side, because you have to deal with vendor models. For example, if you're simulating a SPI ROM and you want cycle-accurate behavior from it, it turns out you can go to Micron and just download a Verilog model of most of the SPI parts, which is really cool, so I can get cycle-accurate interaction with those. Some RAM vendors will also give you abstract models of the RAM, and for standards like DRAM there are standard models you can just pull and use. So you can get cycle accuracy all the way down to where the code and the RAM are coming from.

The simulator itself that we've been using is called Verilator. It's not actually a fully spec-compliant Verilog simulator, and it can't run gate-level models of devices. If a vendor gives you a behavioral model, one that says "if you're in the reset state, this whole section of the chip magically turns off" without actually instantiating it at the level of gates, that will screw up Verilator, so there's a whole class of useful models Verilator can't run. When I run into that, I fall back on a more compliant simulator like Xsim, which is unfortunately closed source, but it actually handles those models correctly; it runs quite a bit slower, but at least I can get it to run. The other problem with Verilator (you can go to the Verilator website, it does a great pitch for itself, so I won't pitch it here) is that it's a real big pain to set up: it's basically transpiling your Verilog to C++ code, you have to throw it into a whole test framework and run it, and there's a whole set of tools to deal with that. But once you get through all of that, you get a cycle-accurate hardware model and a fast simulator, and you can boot your OS entirely in simulation.

So this is a log that's generated not by monitoring hardware: we're actually pulling the machine state out and capturing it into a buffer. It takes about 5 minutes to boot, about 14 million cycles, which is about 140 milliseconds of run time. That's enough for us to completely copy the kernel and the user programs in and run some useful applications, and it's about a 2000x slowdown over real time. So you're not going to run Doom in this or something like that (I mean, you could, you'd just wait a long time), but it's good enough for getting into some real loader issues.

Here's an example of what you can do: visualize system call overhead. This is again a waveform view. Up here I have a visualization of the SRAM bus, the traffic on the SRAM. Where you see bright green, the SRAM is active; where it's dim, there's no activity on the SRAM bus. So you can already get an idea of where we're using the caches: when this bus is not active, the caches are hitting; when this bus is active, we're missing in the caches a lot. Then we do a trick where we take the program counter and plot it as a graph by its magnitude. This spiky little graph is the trajectory of the code going through the executable. It generally tends to go up, because programs execute from low addresses to high, and every now and then you see spikes that go up and come back down; those spikes are calls into library code, which tends to get glommed onto the back end of the executable. And we can trace through it: here's a particular call to, for example, just a delay function; this is the message send; we activate a thread; we run the user code; we go back to the kernel; and so on and so forth. So we can see, at very fine granularity, everything that's going on through this whole transition that would normally be really difficult to visualize, and it all happens over a period of 174 microseconds, roughly 17,000 machine cycles.
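(A sketch of that program-counter-as-a-graph trick. It assumes, hypothetically, that you have already exported a two-column cycle,pc CSV from the simulation and that matplotlib is available; real setups would pull the values out of the VCD/FST waveform dump instead.)

```python
# Plot the program counter over time: flat runs are loops, upward drift is
# normal forward execution, and spikes are calls into high-address library
# code. The file format here is an invented stand-in.
import csv

import matplotlib.pyplot as plt

cycles, pcs = [], []
with open("pc_trace.csv") as f:          # hypothetical dump: "cycle,pc_hex", no header
    for row in csv.reader(f):
        cycles.append(int(row[0]))
        pcs.append(int(row[1], 16))

plt.step(cycles, pcs, where="post", linewidth=0.5)
plt.xlabel("cycle")
plt.ylabel("program counter")
plt.title("code trajectory: spikes are calls into high-address library code")
plt.savefig("pc_trajectory.png", dpi=150)
```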
And you can inspect things like page table faults and cache misses. This is an example of a transition out of kernel mode into user mode code, and you can see the program counter just stays flat for a long period of time. Why does it stay flat for so long? Oh, actually, the MMU is refilling: it's doing a page table walk, grabbing the page tables and loading them, we're pulling the instructions in here, and then finally we run an instruction. Then the next one, the next one, okay, cache miss, next one, cache miss, next, and then finally we're hitting in the cache. Here you can see this repeated pattern: that's where we're in a loop that's cache-hitting all the time. So you can see all of this coming out of the system.

In order to facilitate the usability of this, and this is where I wish the video or the demo had worked, I wrote a little extension to the waveform viewer where you can basically mouse over the trace and it browses through the assembly code in real time, so you know where you are once you're looking inside the machine code. This GTKWave viewer, when I was playing with it, felt very homey: it's 90s-era C code. I remember back in the day when we didn't have bounds-checked arrays and structures and we just had to rely on naming conventions to make sure we didn't mess up. But the great thing about C code is you can just jump in there and instrument anything; there are no rules, no problem. So I just stuck a UDP stack inside of there, blasted whatever the mouse-over was pointing at out a port, and had a Python script listen on that port and jump to the right line of the code. At the end of the day you can scan the QR codes and look at it.
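(A minimal sketch of the listening side of that hack. The port and message format are invented for illustration; addr2line is the standard binutils tool, here assumed to be on the path and pointed at the ELF the trace was captured from.)

```python
# Listen for addresses blasted out of the patched waveform viewer over UDP
# and map each one to a function name and source line with addr2line.
import socket
import subprocess

ELF = "kernel.elf"                       # assumed: the image the trace came from
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 6502))           # assumed port the viewer sends to

while True:
    data, _ = sock.recvfrom(64)
    addr = data.decode().strip()         # e.g. "0x40021337"
    out = subprocess.run(
        ["addr2line", "-f", "-e", ELF, addr],   # -f: also print the function name
        capture_output=True, text=True,
    ).stdout
    print(f"{addr} -> {out.strip()}")    # function name and file:line
```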
Oh look, the video plays. So this is a transition from user to kernel space, and you can see here, as I mouse over and click, it automatically goes to the location in kernel code that's running. Here's the trap code that's being executed, and you can see it's doing all the things you would expect: the trap handler is saving the registers and so on and so forth. I'm scrolling around and zooming out to get a little more on the screen so you can see what's going on, and there you go, there's more of the trap code running. One thing I want to emphasize: you don't have to just go forward in time, you can go backwards in time. When you find the particular artifact of interest, you can just scroll back in time and find what caused it. A lot of times, even when the simulation takes five minutes to run, I'll spend two hours analyzing the trace, because it's all there. I don't have to run it again, I don't have to add a print statement and re-run, I don't have to hit a breakpoint and lose the machine state; it's all in this one file. And you can see here we're looking at the SATP register and so on. So that's an idea of what the debugging experience looks like, and the interesting part is that this file just exists on my computer; I was going to show a demo where I load it up and zoom back and forth. It's not like I have to run the program: it's static, offline analysis, and you could write other scripts and tools to figure out what's going on from a single run. So that 2000x overhead sounds pretty awful, but when you consider what you can do with the logs at the end of the day, it's not so bad.

Let's see, I'm almost done, I'm almost on time. This particular technique, I would say, is really useful for debugging things like bootloader issues. Like I said, there's a magic event where you pivot from machine mode to virtual mode, and this is super handy there, because the simulator itself doesn't care whether you're in machine mode or virtual mode. There's no controversy about where the performance counters are or how they're mapped or whatever; you can just walk right through that transition, back and forth, back and forth, scroll through time, and figure out what's going on. And like I said earlier, you can go back in time, which is actually very helpful for certain really tricky bugs.

So again, open source hardware is cute, but it can be useful even for people who are mostly into software. If you were looking at a particular CPU and it had the RTL available to you, you could do tricks like this; that's the difference in visibility and debuggability you get into the system. Hypothetically, if vendors had the source code for the CPU and would give it to you, you could do things like microarchitectural side channel mitigations. More practically, as I just demonstrated, right now we can do debugging and performance profiling. You need a full open source hardware and software stack to do that, but even if you're using approximate memory models at the end of the day, if you're not performance profiling you'll at least capture instruction-level correctness, and you can get through hairy bugs that way. It allows you to root-cause tricky bugs in a single shot: you don't have to re-run the thing over and over and keep loading it into your target hardware, you can analyze performance problems with zero instrumentation overhead, so you're seeing the actual performance issues played out, and you can look at the tricky stuff, the Heisenbugs, TLB and cache state, whatever it is, and figure out what's going on without interfering with it. So that's it. I guess I'm a little bit early, but I think we're running behind, so it's probably okay. Thanks.

Yeah, thank you very much, bunnie. Can we have our next speaker setting up, and in the meantime, does anyone have questions for bunnie? Our next speaker is the CTO and co-founder of Cake DeFi, and he's going to give a deep dive into Bitcoin Ordinals and an introduction to Sado, a trustless Ordinals trading protocol.

Thanks, everyone. Who here has not heard of Bitcoin Ordinals? Let me ask the other way around: who here has heard of Bitcoin Ordinals? That's a lot of hands. Who here actually owns a Bitcoin Ordinal? And who here has heard of the Sado protocol? I don't expect anyone, because I'm going to talk about it for the first time here; we have been working on it. I'm U-Zyn Chua, CTO and co-founder of Cake DeFi. I've been in the Bitcoin space since 2010, active in the Bitcoin, Ethereum and Dash ecosystems, doing a lot of development and building tools from the early days. I was lucky to be the chief architect of Sand Dollar, the CBDC in the Bahamas, built on a Bitcoin protocol and still operating today; I'm no longer active in that project. I also have an interesting story relevant to this talk: I once gave a presentation about uncovering a government API, demonstrating to the audience how to reverse engineer an API from software the government had released on my phone, and I got a police letter at my door the next day because they thought I had hacked a government server, which I did not. There was no case in the end, but it took about three months to be cleared. So, Birthday Research: it's the R&D arm of Cake DeFi.
Is anyone here familiar with Cake DeFi? A few hands, thanks. Cake DeFi is a blockchain platform, not an exchange; it allows you to grow your crypto in a very transparent, open manner. All the yields actually come from the blockchain, from protocols, and you can track where all of the yields are coming from. It's not a black-box model like you would see with FTX or Three Arrows, where they give you some return but you don't know where it comes from; at Cake DeFi everything is very open and transparent. Birthday Research is an arm of Cake DeFi, and what we do there is contribute to open source projects, primarily around the blockchain space: Bitcoin, DeFiChain, Ethereum. Now I'm going to talk about one specific project we've been working on in the Bitcoin space, specifically on Bitcoin Ordinals.

With that, let's talk about Bitcoin Ordinals. If you're not familiar with them, this is what they look like on the website, ordinals.com; I'll get into the details of what they are. They look a bit like NFTs. It has also made the news that Ordinals are catching up and taking a lot of Bitcoin's block space; there are over a million Bitcoin Ordinals floating around right now. If you're a Bitcoiner, you've also seen that the Bitcoin block size has gone up a lot since about six months ago. Ordinals have been around for about a year now, but they only really took off about six months ago, when inscription support arrived; I'll talk about that later as well. Before that, the average Bitcoin block was essentially never over one megabyte, but now it's around two megabytes, and sometimes even at the capacity of four megabytes.

So what are Bitcoin Ordinals? If you know what an ordinal is in number theory, it's basically just an ordering of things: first, second, third, and so on. For those who are new to the Bitcoin or crypto space: one bitcoin can be broken down into 100 million, that's 1e8, satoshi. The smallest unit of bitcoin that cannot be divided further is known as a satoshi, with a small "s", because the big "S" refers to the name. Every single bitcoin can be divided into eight decimal places, and there will only ever be 21 million bitcoin, 21 times 10 to the power of 6, mined in the lifetime of Bitcoin. Right now we've mined probably 19 million, so only about 3 million to go, and the mining rate gradually goes down. So what Ordinals are is an ordering of satoshis: you order from the first satoshi ever mined all the way to the last one that will ever be mined. Naturally, the first one was mined by Satoshi himself, and so on and so forth.

So it's basically an ordering of satoshis, and it started as kind of a game by the creator: how can you make Bitcoin non-fungible? Bitcoin is meant to be fungible: the one bitcoin you get from John is the same as the one bitcoin you get from Alice, indistinguishable. But for NFTs, or if you want to make something interesting, you want to differentiate the bitcoin you get from one person from another bitcoin, so you have to create a rule for how, especially when that same bitcoin is sent on to someone else, you track it all the way to whoever is holding it now. So you need a rule.
The person who created this, Casey, basically just picked an arbitrary rule. In the Bitcoin system a transaction can have multiple inputs and multiple outputs, so how do you even track where the same bitcoin is going? He said: first in, first out. If there are multiple inputs and multiple outputs, you take the first input and match it to the first output, and keep dealing the sats out in order all the way to the last output. This way you can track the provenance of a satoshi, in this case an ordinal, all the way from the moment it was mined. With the Ordinals system, if you get a bitcoin and you run it through an ord server, you can actually see where it was mined and by whom, and you can track its provenance all the way to where you hold it. If you look at it through normal Bitcoin transaction tracking, your coins sometimes get mingled with other inputs and you just can't make out where they're coming from; but if you set a first-in, first-out rule, you can track it all the way.
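(A sketch of that first-in, first-out rule in Python, operating on satoshi ranges. The ranges and values here are invented, and real implementations also account for fees, which are skipped for brevity.)

```python
# Deal the sat ranges carried by a transaction's inputs out to its outputs
# in order (FIFO), the rule that makes individual sats trackable.
from typing import List, Tuple

Range = Tuple[int, int]  # half-open [start, end) range of ordinal numbers

def transfer(inputs: List[Range], output_values: List[int]) -> List[List[Range]]:
    """Assign the sats carried by `inputs` to each output, first in, first out."""
    queue: List[Range] = list(inputs)
    assignments: List[List[Range]] = []
    for value in output_values:
        got: List[Range] = []
        while value > 0:
            start, end = queue.pop(0)
            take = min(value, end - start)
            got.append((start, start + take))
            if take < end - start:            # split a partially consumed range
                queue.insert(0, (start + take, end))
            value -= take
        assignments.append(got)
    return assignments

# Two inputs of 5,000 and 3,000 sats paying outputs of 6,000 and 2,000 sats:
print(transfer([(0, 5000), (9000, 12000)], [6000, 2000]))
# -> [[(0, 5000), (9000, 10000)], [(10000, 12000)]]
```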
To make it more interesting, now that we can track sats and kind of make them non-fungible, what more can we do? The creator came up with a game on top of this, with ideas for how to make some satoshis more rare than others. There are rules (you can go and read up on how they're derived) that basically arbitrarily define how rare a satoshi is. You can see here there's a mythic satoshi, and you can guess which one the mythic one is: the one that has never, ever been moved since day one. The rest fit certain patterns into tiers like uncommon, rare, epic and legendary; most of the bitcoin you get is going to be common. You can also see the supply here: 19 million bitcoin have been mined, multiplied by 100 million, so that's 1.9 quadrillion satoshis, and even an uncommon satoshi is actually quite rare; if you run your bitcoin through the system, it's very unlikely you'll find one. So it's quite interesting.

An ordinal is a digital artifact, and it's more nerdy than its NFT counterpart on Ethereum, because there's a definite rule in place. This is also the reason the Ordinals community doesn't like them being called NFTs: an NFT is basically a pointer to something arbitrary. An NFT on Ethereum can point to a real-world object, to your sneakers, to your virtual-world art piece, to anything at all, and the link is not real. Whereas for a Bitcoin ordinal there's a very specific rule that it has to be immutable: if you inscribe something on a specific satoshi, it stays there, you cannot override it; if you override it, the rule says it's not valid as a Bitcoin ordinal. So there's no external link, and the content is fully on chain. This also split the Bitcoin community in half, because there are some Bitcoin maximalists who say that Bitcoin should not be used to store data, that it should only be used for financial transactions and things related to Bitcoin itself, and right now we're using a lot of Bitcoin block space to store binary data. The thing about Bitcoin is that this data stays there forever: if you download the Bitcoin blockchain, you're going to have to download all this stuff as well. Although, because it uses the witness layer, it's technically prunable; people just aren't pruning it yet.

And inscriptions: that's a feature added to Ordinals so that, instead of just a numbers game about how rare a satoshi is, you can attach something to a satoshi. You can attach an image, audio or video to a specific satoshi; inscriptions are the feature that makes all of this more interesting. They were added quite late, I think six or nine months after Ordinals was invented. You can attach any binary, not just things that are renderable on the web, as long as it fits in about four megabytes, because that's the limit of a block, and everything is fully on chain. Any binary is fine, but usually, because you want to render it on a website, you attach things like text, audio, images and video. You encode the binary, attach a MIME type on top, and put it onto the Bitcoin blockchain. This example here is a text inscription that says "hello world", attached in the Bitcoin witness space. You can see the logic here uses Bitcoin script, but it doesn't really do anything: these opcodes are just there to carry the binary data.
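(For the curious, a rough Python sketch of that envelope, following the publicly documented ord inscription format: the content rides in a taproot witness inside an OP_FALSE OP_IF ... OP_ENDIF branch that never executes. This only assembles the script bytes; committing and revealing it on chain with a real taproot wallet is out of scope here.)

```python
# Assemble an ord-style inscription envelope for "Hello, world!".
OP_FALSE, OP_IF, OP_ENDIF = b"\x00", b"\x63", b"\x68"

def push(data: bytes) -> bytes:
    # Minimal pushdata encoding, enough for payloads up to 64 KiB.
    if len(data) <= 75:
        return bytes([len(data)]) + data
    if len(data) <= 255:
        return b"\x4c" + bytes([len(data)]) + data          # OP_PUSHDATA1
    return b"\x4d" + len(data).to_bytes(2, "little") + data  # OP_PUSHDATA2

def envelope(content_type: bytes, body: bytes) -> bytes:
    return (OP_FALSE + OP_IF
            + push(b"ord")
            + push(b"\x01") + push(content_type)  # field tag 1: the MIME type
            + b"\x00"                             # OP_0 marks the start of the body
            + push(body)
            + OP_ENDIF)

script = envelope(b"text/plain;charset=utf-8", b"Hello, world!")
print(script.hex())
```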
So what's next? We have ordinals, we have kind of a token; what people want to do next is trade ordinals. As we speak, there are probably eight or ten trading sites out there, and a lot of them are very centralized: you have to log in, you have to deposit your bitcoin, you have to send them your ordinals. So what we have here today is: can we make this more decentralized, more trustless? Because that's what Bitcoin is about, and also what free and open source software is about; it's also the reason I got into Bitcoin in the first place, because I like FOSS and I've been working on open source software for a while now.

Let's talk about trading first, before we talk about the marketplace. How do you make trading work on the Bitcoin system? First we have to introduce the concepts of ordinals and cardinals. These are names the Ordinals community came up with to differentiate non-fungible from fungible sats: the non-fungible, tracked satoshis are the ordinals, and cardinals are just plain bitcoin, just satoshis. Technically cardinals are ordinals too; they just don't carry any special value.

The beauty of Bitcoin's UTXO model, the unspent transaction output model, is that it allows multiple inputs and multiple outputs, so it works exactly like real-world dollar bills. If you go to a restaurant and want to buy a drink that costs $7, and you have two $5 bills in your wallet, you hand the cashier $5 and $5; $7 goes to the restaurant and $3 comes back to you. That's basically how the Bitcoin system works. In this hypothetical example I have two inputs, my two $5 bills, and two outputs, the $7 to the restaurant and the $3 back to myself at a new address.

We can use this very unique system to trade ordinals in a completely trustless manner, without any intermediary to facilitate the trade. The way it works is basically a switch of inputs and outputs: the seller contributes the ordinal as an input and takes the bitcoin, the cardinal, as an output; the buyer contributes the bitcoin as an input and takes the ordinal from the original owner as an output. It works natively on the Bitcoin system. At the lower level, either the buyer or the seller has to make the first move by crafting a partially signed transaction. Anyone in the Bitcoin space can craft any transaction; to sign it you need to be the actual owner, you need the private key, but you can craft whatever transaction you like. So the buyer, having negotiated a price first, say "I'm going to buy your ordinal for one bitcoin", crafts a transaction spending the buyer's actual bitcoin and the seller's ordinal, signs the buyer's part, and sends it over. The seller now holds a transaction that is partially signed.
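(This is not a real partially signed Bitcoin transaction, just plain data illustrating who contributes which input and who takes which output in that swap; all field names are invented.)

```python
# The input/output switch at the heart of the trustless trade, as plain data.
trade = {
    "inputs": [
        {"utxo": "seller_txid:0", "carries": "ordinal #123",
         "signed_by": "seller (after validating the offer)"},
        {"utxo": "buyer_txid:1", "carries": "1.0 BTC (cardinal sats)",
         "signed_by": "buyer (signs first, making the tx partially signed)"},
    ],
    "outputs": [
        # Under the FIFO rule, the first input's sats land in the first
        # output, so the ordering is what actually moves the ordinal.
        {"address": "buyer_address", "receives": "ordinal #123"},
        {"address": "seller_address", "receives": "1.0 BTC minus fees"},
    ],
}
for tx_in in trade["inputs"]:
    print(f'{tx_in["utxo"]}: {tx_in["carries"]} ({tx_in["signed_by"]})')
```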
The seller validates that the one bitcoin is really there, then signs their part of the transaction to say: okay, I accept this trade, I'm sending you my ordinal and taking your one bitcoin. It works exactly like that, and you don't need any intermediary to facilitate it. It's completely trustless, and it's also atomic: if the owner of the ordinal doesn't like the offer, they simply don't sign. There's also an additional feature you can add to make the transaction expire, so it has to be signed within a certain time and the seller can't hold onto it forever and sign it whenever they want; you can set a date for it to expire. Bitcoin supports all of that.

So that covers the actual trading part. The next thing to solve is this: if you own an ordinal, how do you announce that you want to sell it, and how do you name your price? You can go to some of those sites I mentioned earlier and list it for sale, but the problem is that some of them require you to give up custody of your ordinal, to upload it to the site, so that you trust the site to hold it for you while it's listed. Can we make it better, completely trustless and completely decentralized? Sure we can. This is what we worked on: it's called the Sado protocol, at sado.space. It's still a work in progress and the website is still a bit rough, but I'm going to explain how it works.

It uses a combination of two technologies: IPFS and Bitcoin. If you're not familiar with IPFS, it stands for InterPlanetary File System. To put it simply, it's torrent-like technology: like BitTorrent, you have multiple nodes seeding all around the world, and a file hops across multiple nodes to arrive at its destination.

Here's how it works. I'll use "maker" for the person who intends to sell an ordinal. The maker, the seller, creates a JSON order and uploads it to IPFS, which gives you a CID, a content ID. The order contains a signature proving that you actually own the ordinal you're trying to sell. You put the order on IPFS, IPFS gives you the CID, and then you broadcast the CID on the Bitcoin blockchain itself, so it gets announced to the world. Sado protocol clients listen to the Bitcoin blockchain to see whether any sale is being broadcast. The CID is really small, so it doesn't cost much to put it on the Bitcoin blockchain. When you pick up a CID, you grab the order data from IPFS and do your evaluation: make sure the maker actually owns the ordinal, that it's still there, and that the signature is valid. If you see the order, you're the buyer, and you like the price, you then create the partially signed transaction, exactly as I described, sign it on your side first, and upload the partially signed transaction to IPFS.
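(A sketch of that maker flow. The two helpers, `ipfs_add` and `broadcast_op_return`, are hypothetical stand-ins for a real IPFS client and a wallet RPC, the order fields are illustrative, and the OP_RETURN prefix is invented rather than the protocol's exact wire format.)

```python
# Maker side of the flow: sign an order, pin it to IPFS, announce the CID.
import hashlib
import json

def ipfs_add(blob: bytes) -> str:
    # Stand-in: a real client would pin `blob` to IPFS and return its CID.
    return "bafy-demo-" + hashlib.sha256(blob).hexdigest()[:16]

def broadcast_op_return(payload: bytes) -> str:
    # Stand-in: a real wallet would embed `payload` in an OP_RETURN output.
    print("OP_RETURN:", payload.decode())
    return "txid-demo"

# 1. Maker creates a JSON order, signed to prove ownership of the ordinal.
order = {
    "type": "sell",
    "location": "txid:vout holding the ordinal",   # illustrative fields
    "cardinals": 100_000_000,                      # asking price in sats
    "maker": "bc1q...",
    "signature": "<signature over the order by the maker's key>",
}

# 2. Upload to IPFS; the CID uniquely identifies the order content.
cid = ipfs_add(json.dumps(order).encode())

# 3. Broadcast only the tiny CID on chain; protocol watchers pick it up,
#    fetch the full order from IPFS, and verify the signature themselves.
txid = broadcast_op_return(b"order:" + cid.encode())
```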
You upload the partially signed transaction to IPFS because you cannot broadcast a partially signed transaction over the Bitcoin network; only fully signed transactions can be broadcast. You then send the offer CID through the Bitcoin system; this is a separate transaction, independent of the partially signed one. So you create another Bitcoin transaction to tell the seller: here's my offer, and here's my partially signed transaction. If the seller agrees, they go to IPFS, grab the partially signed transaction, sign their part, and just relay it. There's no need for another round trip through IPFS, because now that it's fully signed, the seller can broadcast it over the Bitcoin network, and that completes the deal. So you have the full flow, from initially announcing your intent to sell to eventually completing the deal. There can also be multiple offers: if you have a really good piece of art that gets a lot of offers, you can decide which one you want to sign. This is what it looks like on the Sado protocol website; it's running on regtest right now, and it's working well.

The Sado protocol is free and open source, and it's self-authenticating: Sado stands for Self-Authenticating Decentralized Order book. All signatures are validated before anything is even treated as a valid offer. It has a default global order book, which means any site that uses the Sado protocol shares the same order book globally, and there's no fee whatsoever; everything is free. It's also open to private use: you can run your own private Sado order book if you want to. It's all completely open.

To make it more user friendly, at Birthday Research we're also working on a front-end website called Ordzaar, the Ordinals Bazaar. It uses the Sado protocol and it's non-custodial: you don't have to upload anything, you don't have to send us your bitcoin; it's just a viewer that gives you a string to sign. Being non-custodial, you own your own bitcoin, you own your own ordinals. The key thing about Ordzaar is the UX, because Ordinals today are actually quite tricky. Ethereum is very developer-friendly and very decentralized-app-friendly: you have MetaMask, you have a lot of JavaScript tools to build with. On Bitcoin you don't have that; the space is just opening up right now. There's no de facto wallet: if you ask Bitcoiners what wallet they use, you'll get ten different answers, and you can't even find a trend in which wallets people commonly use. So Ordinals support in Bitcoin wallets today is very, very limited. If your wallet doesn't mark an ordinal properly, it looks just like any other bitcoin, so if you paid $10,000 for a highly prized ordinal, you might accidentally spend it and send it to someone else. You have to use a wallet that lets you freeze UTXOs so they don't get spent, or a Bitcoin wallet that supports Ordinals. And that's how it looks. If you want to learn more, you can check out sado.space. Thank you.

Thank you very much, U-Zyn. Any questions?
We'll get our next speaker set up, and in the meantime maybe we can take one or two questions. Okay, there's one question... Our next speaker is from GovTech Singapore, and she's going to talk about a document endorsement and verification framework using the blockchain.

Thank you very much. Hi everyone, I'm from GovTech Singapore. Our mission is to engineer a digital government and make lives better. Today I'll be sharing about OpenAttestation, an open source document endorsement and verification framework using the blockchain. I know I'm the only thing standing between you and probably your post-conference drinks or dinner, so I'm going to get straight into the meat of it. In the public consciousness, when we hear the term blockchain, most people think about cryptocurrency, and we definitely heard a very fascinating presentation earlier on Bitcoin Ordinals. For OpenAttestation, what we're interested in is how the key attributes of a blockchain can be applied to document endorsement and verification. At the same time, the technology has certain limitations, and we think the way we're designing OpenAttestation helps us either get around them or make them less of an issue.

So, how it works. OpenAttestation is an open source framework to endorse and verify documents, and it has two key components. First, verifiable documents: these are tamper-evident documents that cryptographically prove the authenticity and the source of the document. Think of credentials like academic qualifications, proof of identity, proof of employment and so on. Second, transferable records: similar to verifiable documents, but these are documents that can have an owner. Usually such documents confer ownership of assets, so we're looking at documents like title deeds or a bill of lading. What I think is interesting about OpenAttestation is that it allows for decentralized issuance of credentials, and we really do believe in the power of open source software, which is why all of our code is open source. Verification can of course be distributed; anyone can set up a verifier based on our technology. But depending on the use case, we can also centralize the verification process by directing people to a particular verifier, perhaps for a better user experience, or, for certain government use cases, for better public communications on how to use the documents.

Earlier I mentioned a double-edged sword: one property of blockchains is that the data stored on them is public. You may be thinking, hey, some of these credentials are confidential, they may be personal, so should we really be putting that on a blockchain? What we do in OpenAttestation is design it in a way that upholds data privacy, because we only publish a document hash, not the data itself.

Now I'll get into how we do the verification, which is premised on three critical steps. With the verification tech at the core of OpenAttestation, we're trying to provide three assurances: first, that the document has not been tampered with and was issued by the issuer; second, that the document has not been revoked; and third, we want to confirm the identity of the issuer.
So how does this work? When we issue an OpenAttestation document, we first take the raw document in JSON format and put it through a process we call wrapping. The end product of this wrapping process is a unique hash that represents the document; if the document is tampered with, it will not be possible to derive the same hash, so we can prove it has been tampered with. The flow is fairly simple: for each property we add a salt, to prevent rainbow table attacks; we then flatten and encode the properties as strings; each field is hashed together with its value and salt; we store all the output hashes in an array; and finally we hash the array of hashes to produce a target hash. To put it simply, it's a hash of hashes.

What this also lets us do is perform selective redaction on the document. Say we want to redact a particular field of the credential, because it's not necessary to disclose it and people naturally want to minimize what they share. It's possible to redact a particular document property by removing it from the credential and storing its hash under a different field that we call obfuscated data. This does not invalidate the target hash of the document, because the target hash is computed from all of the hashes stored in the OpenAttestation document, including those that were moved to obfuscated data. After performing the wrapping process, this is an example of how it looks: the document properties have been sorted, and we now have a target hash that represents our document.
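(A simplified Python sketch of the wrapping scheme as just described. The real OpenAttestation encoding differs in its details, but this shows the salting, the hash of hashes, and why redacting a field while keeping its hash leaves the target hash intact.)

```python
# Wrap a document into salted per-field hashes plus a target hash, then
# redact a field without breaking verification.
import hashlib
import json
import secrets

def h(s: str) -> str:
    return hashlib.sha256(s.encode()).hexdigest()

def field_hash(path: str, entry: dict) -> str:
    return h(json.dumps({"path": path, **entry}, sort_keys=True))

def wrap(document: dict) -> dict:
    salted = {k: {"value": v, "salt": secrets.token_hex(8)}
              for k, v in document.items()}
    hashes = sorted(field_hash(k, e) for k, e in salted.items())
    return {"data": salted, "obfuscated": [], "target_hash": h(json.dumps(hashes))}

def redact(wrapped: dict, field: str) -> None:
    # Remove the field but keep its hash under "obfuscated" data.
    entry = wrapped["data"].pop(field)
    wrapped["obfuscated"].append(field_hash(field, entry))

def verify(wrapped: dict) -> bool:
    hashes = [field_hash(k, e) for k, e in wrapped["data"].items()]
    hashes = sorted(hashes + wrapped["obfuscated"])
    return h(json.dumps(hashes)) == wrapped["target_hash"]

doc = wrap({"name": "Alice", "degree": "BSc CS", "grade": "A"})
assert verify(doc)
redact(doc, "grade")        # hide the grade...
assert verify(doc)          # ...and the document still verifies
```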
Okay, so now we have a way to wrap the document. Next we need to verify that the document was actually issued by the issuer, and we have two ways of doing that. The first is via a smart contract: the issuer deploys a smart contract that acts as what we call a document store, which keeps a record of the hashes of the issued documents. I do note that this involves an on-chain transaction and consumes gas. It allows anyone to check whether a hash, which represents a document, has been issued; during verification we first check that the target hash of the document is in the document store's list of issued hashes, and we also check that it has not been revoked, that it's not in the list of revoked hashes. I would say this was the first iteration, the first version, of OpenAttestation. A newer method we developed is to use decentralized identifiers (DIDs) to issue documents; currently we're using did:ethr. After signing, the OpenAttestation document contains the issuer's DID and the signed target hash, and during verification we compare the Ethereum address from the DID document with the address recovered from the signature. If it matches, we know the signature is valid, the document has been issued, and it has not been tampered with.

Earlier I mentioned that issuance consumes gas, and this can be an issue for issuers who are issuing documents one by one and have many documents to issue. So we also allow issuers to batch transactions using Merkle trees: each document's target hash is a leaf in the Merkle tree, and we compute the Merkle root, which is then stored in the OpenAttestation document along with the proofs needed to show that the document is part of that Merkle tree. Of course, for a single document that doesn't need to be batched, the target hash and the Merkle root are the same.

The final step of verification is the issuer identity. This is an area we're still working on and developing. Currently we leverage the DNS: we use DNS TXT records to publish the document store address or the DID, and this is then checked against the document, so we know the credential was issued by the entity that controls that domain.
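(A sketch of that batching idea: anchor one Merkle root for many target hashes, and give each document the sibling hashes it needs to walk back to the root. The order-independent pair hashing is a simplification for the demo, not necessarily what OpenAttestation itself does.)

```python
# Batch many target hashes under one Merkle root and verify membership.
import hashlib

def h2(a: bytes, b: bytes) -> bytes:
    # Sort the pair so verification doesn't need left/right flags.
    return hashlib.sha256(min(a, b) + max(a, b)).digest()

def merkle(leaves):
    """Return (root, proofs) where proofs[i] is the sibling list for leaf i."""
    proofs = [[] for _ in leaves]
    nodes = [(leaf, [i]) for i, leaf in enumerate(leaves)]  # (hash, leaf indices below)
    while len(nodes) > 1:
        nxt = []
        for j in range(0, len(nodes) - 1, 2):
            (ha, ia), (hb, ib) = nodes[j], nodes[j + 1]
            for i in ia: proofs[i].append(hb)
            for i in ib: proofs[i].append(ha)
            nxt.append((h2(ha, hb), ia + ib))
        if len(nodes) % 2:                 # odd node is carried up unpaired
            nxt.append(nodes[-1])
        nodes = nxt
    return nodes[0][0], proofs

def verify(leaf: bytes, proof, root: bytes) -> bool:
    node = leaf
    for sibling in proof:
        node = h2(node, sibling)
    return node == root

targets = [hashlib.sha256(f"target-hash-{i}".encode()).digest() for i in range(5)]
root, proofs = merkle(targets)
assert all(verify(t, p, root) for t, p in zip(targets, proofs))
```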
Okay, now I'm going to get to the perhaps more exciting part of my presentation, which is the products currently powered by OpenAttestation. This is by no means an exhaustive list, and we're looking out for more ways OpenAttestation can be applied; that's partially why we're here today at FOSSASIA, and we're really interested to hear any ideas for how this technology could be applied.

The first one is called OpenCerts. It started as a proof of concept to improve productivity: schools were spending a lot of time on verification of certificates. You would have employers who wanted to confirm that a graduate's paper certificates were legitimate, so they might call the school and ask it to check and verify; schools were also spending time on replacement of lost certificates. This was consuming, I think, at least 7 man-months a year. With OpenAttestation, which powers OpenCerts, graduates from Singapore's institutes of higher learning can receive their degrees or certificates in the form of an OpenAttestation document, and these can be easily verified by employers on the opencerts.io verifier. I'm not sure if we're able to open the link here, or when I'm able to switch to a browser... so, this is the OpenCerts verifier. In the demo, this is how the user goes through it: you just drag and drop your certificate into the verifier, this loads up our decentralized renderer and renders the cert, and there can be multiple views of the same OpenAttestation document. Earlier I mentioned that we allow selective redaction of credentials, so, for example, if you're only focusing on the CS courses and the grades, you can redact the other details and download the file, and it will still remain verifiable. If I verify the redacted file again, you can see that the redacted fields have been removed, but we do mention that something has been redacted, just not what it was. And as we covered in the verification steps, this is how we provide assurance to verifiers: the document has not been tampered with, it has not been revoked, and it was issued by this particular entity, in this case GovTech.

So that's one of the first use cases of OpenAttestation. The next one is more recent. During the COVID-19 pandemic it was really important to be able to prove that vaccinations and test results were legitimate, to eliminate fake COVID-19 test results, and to support the opening of borders and the resumption of life as normal. Given the decentralized nature of the assurance and the need for a way to attest to documents in a cross-border way, OpenAttestation was adopted as a solution, and it powers the HealthCerts that Singaporeans used to prove, for example, that they had been vaccinated or had passed their pre-departure COVID tests. Users could go to the Notarise platform to apply for a notarised HealthCert for travel, and cross-border immigration authorities could easily scan the QR code on a HealthCert to verify it. I'm sharing this example because I think it also shows the strength of the free and open source movement: because OpenAttestation was open source and had already been used for other applications, there were providers out there able to help less tech-heavy medical clinics onboard the HealthCert solution and quickly issue vaccination certificates in the OpenAttestation format. So we were able to roll out the solution quite quickly and, I guess, in a scalable way.

The last use case I'm excited to share is called TradeTrust; I'm the product manager for TradeTrust. We're looking to accelerate the transformation from paper-based trade documents to digital documents. The current state of cross-border trade today is largely paper-based: one shipment can involve many parties across different sectors, many exchanges of information, many siloed systems, and this is very inefficient and costly. At the same time, manual handling can be vulnerable to fraud: I could take the same document and use it to take out two different loans from two different banks who are none the wiser. TradeTrust is a framework to accelerate this transformation, and we look at it in two ways. For trade, we have verifiable documents, perhaps things like an invoice or a certificate of origin that tells me this shipment comes from this country and is eligible for certain preferential tariffs. The second is a way to allow electronic bills of lading to be transferred between parties along the supply chain.

A bit of background: what is a bill of lading? In international trade and trade financing, the bill of lading acts as a receipt of goods, and anyone in possession of the bill of lading can claim the goods at the destination port. For example, if I'm trying to sell something to someone in a different country, I might enlist a carrier, a ship, to help me transport it to them. The carrier issues me a bill of lading, I courier the bill of lading to the buyer, and the buyer produces it at the destination port to claim the shipment. That's a very simplified version of how this flow works in international trade and trade financing, and sometimes it's not as smooth, because the bill of lading can go through multiple parties who are financing the trade. It's even possible that by the time the cargo arrives at the destination port, the bill of lading is still being couriered around the world, and the cargo just sits in the port waiting to be released. That's where we see a lot of potential for electronic bills of lading, because the transactions can be really fast, nearly instant. So what we do in TradeTrust is represent the bill of lading as a non-fungible token: we bind the NFT to what we call a title escrow smart contract, which then controls changes to the ownership of the token,
as well as who is allowed to change the ownership status of the token. This setup mimics the current process for paper-based documents in terms of who is allowed to change the holder and the owner of the bill of lading.

So I hope I've given a brief overview of how OpenAttestation works and some of the use cases we've applied it to so far. Here's how you can get started: you can learn more about our framework here, and please come to our GitHub and explore our code; feel free to open issues and talk to us. We're really happy to take questions and to figure out what other use cases we can find for this technology. Thanks, everyone.

Thank you very much. Great project; I love it. A question on the bill of lading example you just gave: the bill itself is issued as an NFT, and is it verified using the OpenAttestation smart contracts? Are there two contracts involved in this use case?

It's not verified with the OpenAttestation smart contract, but it is verified with the OpenAttestation signature and proof method. It uses the same cryptography that takes the document and digests it into a target hash, and then we use the NFT to deal with the ownership transactions.

And regarding the use cases you mentioned, which chain are they using, which EVM chain? Are any of them actually using Ethereum, as that's quite costly? Just wondering if any have adopted a side chain or an L2.

For our use cases, Ethereum as well as Polygon.

I had a question on the rationale for using a blockchain in the first place, given that, at least for the examples you gave, the person relying on the credential, the employer or whoever is trying to evaluate it, is relying on both the signer and a working revocation function. If there's no working revocation function, there's no way to know whether fraudulent certificates have been issued and not cancelled. So the organization has to exist and has to have a working revocation function, and furthermore you're using the DNS to deliver the authenticator, the public key used to validate the signature, which is a real-time service dependency. So I'm wondering what value there is in using a blockchain at all, versus a more conventional web service, because there must be an organization, it must be responsible, and it must have real-time services, otherwise your mechanism won't work. What have you gained by using a blockchain at all?

So your first question was on there being a way for the issuer, the signer, to hash a document and distribute it...

Let me make the counterpoint: if you rely on the organization having the ability to revoke, then I would suggest a blockchain is useless. If you rely on the organization presenting a service that's available to you in real time, then a blockchain is useless. Blockchains are only useful, I would suggest, if both of those things are false, and in this case both of them are true. So why would I use a blockchain? Why wouldn't I just ask your web server about this certificate?

A few things. When it came to this situation, for example with academic certification, what ended up being the case was that verifiers would want to go to the Ministry of Education and ask them to verify these documents, and that was assessed to take quite a lot of effort. So the idea was, is there a way to...

I understand why you'd take away the human process; I'm asking why use a blockchain. It seems to me it would be simpler and more reliable to use a web service,
because it's the same thing: it's DNS, you've got a real-time service giving the key out.

I think DNS is probably not the best way to look at the issuance... Any more questions? One more, from Harish. You're going to make me go up the stairs...

I have a question about what happens when you have a cert issued by an institution and the institution merges with something else. How do you ensure that these things are sustainable 100 years from today? What is the mechanism for something like that, including the DNS portion? There's a lot of dependency that requires this entity to exist, whereas where it's a piece of paper, it's still a piece of paper. This is something I've been struggling with, trying to figure out how this even makes sense from a sustainability perspective. Thanks.

I would say that in the situation where an institution merges, or maybe moves to a different domain, the document store, the smart contract on the blockchain, would remain, and it might just be a matter of linking that document store to the new domain. Also, if institutions merge, I don't think that should invalidate any certificates issued by the old entity, and operationally they may not want to reissue the certs under the new name, because they were issued at a time when that institution was a different entity. So with the immutability of the blockchain, that's one way to ensure that even after the entity has closed down or moved on, the records are still there, although admittedly you'd need to find a different way of tying the issuer identity to the document store on the blockchain.

Just to add on to the answer to his question: if, in the early days of doing cryptography, you ever went to key-signing parties and dealt with X.509 certificates and all the complexity of that, then compared with the ubiquity that Ethereum and these tools offer today, it's just a night and day difference from a practical basis.
From a theoretical basis, yes, it's not perfect, it doesn't do everything, but from a practical perspective it's just so much easier to use, and it's now common how everyone is using it.

My question was: I was kind of surprised, because this was described as just a protocol, but in fact you've actually developed and released software. What is the scope of that software? If I'm an organization and I want to do verification of my own documents and such, is that what this provides?

Also, I guess, what I would call reference implementations, and we release the code for those as well. For example, for tradetrust.io the verifier is open source, and implementers can refer to it to spin up their own verifiers and use our CLI to help users issue documents. So there's the underlying framework, and then there are the applications we've built on top of it, and to the best extent possible we try to make it all open source.

This is very forward-thinking stuff you're doing in Singapore, so I hope it catches on. Outstanding. Alright, more questions?

I was curious, when you did your research on how to implement this, if blockchain was the winner, what was the number two option that you rejected?

Okay, so one question we do get asked is: why not just use DocuSign, or a similar PKI-based digital signature, where you can easily open up a PDF and also have the verification that it hasn't been tampered with and was issued by a certain entity? I think the two benefits of OpenAttestation would be, first, revocation: it's a lot easier to revoke, as opposed to trying to revoke the signature on a digital document that you've already signed as a PDF and sent out. The second, which I think has a lot of potential that maybe hasn't been realized yet, is that OpenAttestation files are in JSON format, so they're more machine-readable; a DocuSigned PDF, for example, is harder to integrate with systems. So I guess that was one of the other options we considered and eventually decided not to go ahead with.
I don't have a question, I just want to add a bit of color to this great presentation. I happen to have worked with the protocol a bit, because I was working on a software project called Next ID that utilizes it, and it's really, really great. I also want to add a bit to Roland's question just now about there being no need for a blockchain, which I actually agree with in this case, because it basically just ties into the Merkle root. The certs themselves are not published on chain: every cert has a hash, you hash that, build the tree, and you have a root that you refer to somewhere, and you could publish that root on DNS or on the blockchain itself. The blockchain is not strictly needed here, but there's one fundamental property of a blockchain that does add value, which is the decentralization part. It helps when, say, the root published through DNS is changed someday and you cannot tell that it changed; the blockchain part ensures that if the root is changed or appended to, you can actually see it. Everything else is just cryptography, just PKI, so there's no strict need for a blockchain or smart contracts, but the blockchain part helps make sure the record is unchanged, and lets you track when it is changed or added to. Correct me if I'm wrong; that's something I wanted to add. Thanks.

Alright, thank you very much, and I think with that we close our session today. Thank you very much; let's give her a round of applause. Thanks for taking the time.

Just responding to what U-Zyn mentioned: I think we don't see this as a pure blockchain thing; it's wider than that. We look at verifiable credentials, and that's not something that is necessarily tied strictly to blockchain, and we're interested in seeing the links between the different ecosystems.

Alright, thank you very much everyone; with this we're going to