All right, everybody. Welcome to the LFX Mentorship Showcase, where our 2022 graduates will be sharing their experiences. Let's get started. My name is Shuah Khan. I'm a kernel maintainer at the Linux Foundation, and I lead the mentorship programs. Today I'm going to show you some resources for learning. Let's start with the beginner's problem: where do we start? We all struggle with this when we want to learn something new, when we want to change career direction, or when we are trying to figure out which open source project to get involved in. Every journey has to start somewhere, and figuring out where is a problem in itself. The first problem is to figure out what we're passionate about: what do I enjoy doing in the long term, or even the short term? What do I want to do in the next five years? Then there is the question of which open source project to choose. There are several to choose from, and it is always hard to determine: am I going to like kernel programming, or am I going to be better at AI? What do I enjoy the most? The kernel, CNCF, Hyperledger, and so on? So then the next question: once we do our research and understand what we want to do, we start figuring out how to get started. That is the second problem. Code bases always look complex, and communities look intimidating. Who do we reach out to, and what's the best place to start? Those are the kinds of questions we all struggle with. Then comes the next part: once we know which project we want and roughly where to begin, where do we find resources? And once we find some resources, who can help us?
And who can we reach out to without being turned away or made to feel ignored? So now we have the what, the how, the where, and the who. What's the next thing? We understand that these journeys are hard, and that is one of the reasons we provide a lot of resources at the Linux Foundation for new developers to get started, figure out where and how they want to contribute, and follow the learning paths. LF Training provides learning pathways, so you can explore those and figure out where you want to be. You can learn many technical topics, get background information, and hear from experts in interactive webinars. We have several webinars uploaded; go check them out. They span multiple technical topics, with deep dives and conversations with experts, and they are interactive sessions. Then you can start applying for mentorship programs, and once you complete a mentorship program, you can participate in the Mentorship Showcase, like our graduates are doing today. I'm going to leave you with these slides, which have information on all the resources we have at the LF to help you with your journey. We also just released a Mentorship in Open Source report yesterday; you can take a look at it on our site, follow this link, download it, and learn more. We continuously improve our mentorship programs and learning resources based on feedback from our graduates, and that is what we did in this research: we went to our graduates from the beginning of the program, 2019 through 2021, and asked them what they want to see more of, what kinds of resources they want, and how they want the mentorship program to look. So take a look, read it, and you can see where we are going.
You'll understand our next steps as well. Okay, with that, I'm going to hand it off to Sanskar Bhushan to get started with his presentation, and we'll follow this order for the rest of the session. Thanks a lot.

So I would like to talk about my mentorship experience working on collaborative cloud-native environments, and I'll discuss how they work. I'm Sanskar Bhushan, and I would love to connect with you all; this is my Twitter. Here's an African proverb I learned from a mentor: if you want to go fast, go alone; if you want to go far, go together. That's really important. It shows the power of collaboration and the kind of value collaboration can provide. Before discussing the problem, we should learn a bit about the different ways to collaborate that exist right now. What is collaboration on a piece of code? By collaboration we essentially mean live collaboration, not Git-based collaboration: live collaboration where someone can show you what they're doing while coding along with you. That's really handy, especially for beginners. As was said earlier, those who are new to the journey do not know what to do, and building and compiling things is really difficult for them, so this first step can be really painful. There are a few tools available right now. GitHub Codespaces is probably the most famous of them, but it is proprietary, which makes it of little use in an open source community. There are alternatives with open source counterparts, such as Gitpod and Coder, and I'll discuss them very soon. pair is an in-house open source tool built at ii, which is where my mentor is from, and we'll discuss pair a bit as well. So what are pair and Coder?
pair is the method humans at ii use to collaborate. Its main limitation is that GNU Emacs is its only IDE, and those of you who have used GNU Emacs know the pain of M-x and all the different key bindings; that's quite painful. Coder is more of an infrastructure management tool built around code-server, an open source project for running VS Code on just about any machine, which you can then access through a client such as a browser or your local VS Code instance. Here are a few of my favorite memes showing why Emacs can be painful to use, even though Emacs users look pretty cool because they even get a browser inside it. You can see how many key bindings exist and how many keystrokes it takes just to get help; that's not really handy. To deal with this, we wanted to provide a reliable cloud-native environment for CNCF projects that those projects can use to onboard new contributors. The main problem a new contributor faces is getting started. As the famous Chinese proverb goes, a journey of a thousand miles begins with a single step, and that single step is often very difficult. We wanted to reduce the friction of taking that single step. Our approach was to use pair, the in-house product, to automate lots of tasks while testing our infrastructure, which was spun up using Coder. This is quite a big project as a whole; the part I did was to spin up the infrastructure locally, test it, and automate tasks using pair. We are hoping to see a fully hosted solution running secured Kubernetes clusters that can be deployed on infrastructure provided by CNCF in the future. So what would it look like? It can look like whatever you want it to look like. For security purposes, we can use vcluster, which gives us virtual cluster instances.
We can even use KubeVirt, which runs virtual machines on Kubernetes, and Talos Linux instances provided by Equinix Metal, which are basically bare-metal servers. There is a Cluster API provider for Talos as well, so we can directly spin up Talos on bare metal to check whether we can set up the infrastructure, because what we are building is not only for cloud-native environments. For example, if the Linux Foundation wants to automate environments for a Linux project itself, we can use bare metal to set that project up, and we are good to go. So what would it look like in the end? We would hopefully like it to be something like falco.cncf.coder.io, kuttl.cncf.coder.io, or kubernetes.cncf.coder.io, where new contributors can just use the project link and they are good to go. That is what we are aiming for. The big idea behind all of this was to make that first step easier for potential contributors because, as I said, a journey of a thousand miles begins with a single step; that is the famous Chinese proverb. Let me show you the loop for a first-timer: these weeks of change, build, deploy. We want to make it like this instead, which is much easier: you just change things, and you do not need to consider what is happening underneath, because the whole infrastructure is managed by someone else. I would like to thank my mentor, Hippie Hacker; during the mentorship I was able to learn a lot, so I'm thankful to him. Since this project was not very code-heavy, because we were doing lots of testing, infrastructure provisioning, and administrative work, I realized that coding is the easiest of all the tasks. The bigger problems are understanding the system architecture, the security attack surface, and a lot of things I was not aware of, including developer productivity. So I'm thankful.
This mentorship gave me experience that will definitely bolster my career. And that's all; I would like to thank everyone for being a great audience. Thanks a lot.

So hello, everyone. I, Edwin Joy, along with Priyanserati, would like to present our mentorship work on feature optimizations for the RISC-V compliance test generator and the RISC-V ISA coverage tool. A little bit about ourselves. I am Edwin Joy, currently working as a verification engineer at InCore Semiconductors Private Limited. I was a final-year undergraduate student during the spring edition of this mentorship, and I'm interested in the arenas of computer architecture and embedded control.

Hi, everyone. I am Priyanserati, currently pursuing my bachelor's degree in electronics and communication engineering at the Indian Institute of Technology Roorkee. I'm really interested in open source development and have taken part in many open source programs. I took part in the LFX mentorship program in the fall of 2022 under the RISC-V organization, and I'll talk more about that experience later in the presentation.

So let's set the ground for this mentorship. We worked on a framework called RISCOF, which stands for the RISC-V Compliance Framework. It is used to test whether a RISC-V implementation compares well against the RISC-V reference model: whether the implementation follows all the guidelines and steps required to make a RISC-V processor. It depends on the following tools: riscv-config, riscv-isac, and riscv-ctg, which we will talk about in the next slides. As input, the framework takes the specification of the device under test (DUT), along with two plugins that connect to the DUT and the golden reference model. The specification is passed to riscv-config, which validates it and emits a standardized specification.
On the other side, we have a test pool containing all the tests. The framework generates a test list and, based on the specification, filters out the tests that matter for the given DUT. We also have a cover group format (CGF) file, which is a blueprint for all the different cover points that need to be covered across the tests. It is passed to riscv-isac, which calculates the coverage achieved by the tests. It also has a dual purpose: it can be passed to riscv-ctg to generate tests. The filtered test list is run on both the DUT and the reference model through their plugins. The model produces an execution trace, which is used to calculate coverage, and the outputs of the model and the DUT are executed and compared. The comparison happens at the signature region of memory: the region where a unique value from each test case is stored, usually the value left in the destination register of an instruction, so we can check that unique value. Each side produces its own report, and that report tells us how well the implementation adheres to the RISC-V standard. This entire box constitutes RISCOF. Here are my contributions during the spring edition of the mentorship. The first task was the design of a disassembler using riscv-opcodes, a repository that enumerates all RISC-V instructions; a typical add instruction is encoded as shown. We designed a hierarchical disassembler using that as metadata, masking and comparing recursively until we reach the final instruction. This keeps our code base future-proof, because riscv-opcodes is continuously updated and maintained. We also had the task of parallelizing the coverage calculation: coverage calculation is an embarrassingly parallel workload, and it was parallelized.
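The mask-and-compare decoding described above can be sketched in a few lines of Python. This is an illustrative toy, not the actual riscv-isac code: the opcode table below is hand-written for three instructions, whereas the real tool derives it from the full riscv-opcodes metadata and recurses field by field.

```python
# Each entry: mnemonic -> (mask, match). An encoding e matches an
# instruction when (e & mask) == match. For R-type instructions the
# mask keeps funct7, funct3, and the opcode; for I-type only funct3
# and the opcode.
OPCODES = {
    "add":  (0xFE00707F, 0x00000033),
    "sub":  (0xFE00707F, 0x40000033),
    "addi": (0x0000707F, 0x00000013),
}

def decode(encoding):
    """Return the mnemonic whose (mask, match) pair fits the encoding."""
    for name, (mask, match) in OPCODES.items():
        if encoding & mask == match:
            return name
    return None  # unknown encoding

# add x3, x1, x2 encodes to 0x002081B3
print(decode(0x002081B3))  # add
```

The real disassembler is hierarchical, first splitting on the major opcode field and then on funct3/funct7, which is faster than this flat scan but relies on the same mask/match idea.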
We also added an option to dynamically remove cover points once they are hit, because it makes more sense to track the percentage of coverage rather than just the number of times a cover point is hit. We also made changes to the CGF to accommodate different pseudo-instructions in the RISC-V world. Here are the contributions to riscv-ctg, the test generator. Support for the new CGF was added here, and we also added a new kind of infrastructure to the framework: cross-combination test generation. In this type of test, we cover instruction sequences. For example, we would like an I-type instruction, followed by three instructions we do not care about, followed by an instruction subject to the condition that the first and last instructions have a dependency through a shared destination register. We would like to generate a test corresponding to that, and this is one of the major additions from this mentorship; a test corresponding to it can be seen on the screen. These tests are very important because they help us detect and isolate the different kinds of pipelining hazards that can arise in the DUT. Now I'd like to hand control of the presentation over to Priyanserati to talk about the fall edition of this mentorship. Thank you.

Okay, it's Priyanserati taking over, and now I'll talk about my contributions. I worked on the same two projects that Edwin worked on: the RISC-V compliance test generator (CTG) and the RISC-V ISA coverage tool (ISAC). Talking about my contributions, my work on riscv-isac involved re-architecting the ISAC code base to make it easier to add support for future extensions. The thing is, RISC-V allows its base ISA to be extended by a number of extensions, and many new extensions keep coming up very frequently. Each time a new extension comes up, we need to account for it in our compliance testing framework too.
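As a toy illustration of the cross-combination idea (not riscv-ctg's actual generator, which is driven by the CGF), the sequence shape just described can be sketched like this; the register choices and instruction mnemonics here are picked arbitrarily for the example.

```python
import random

random.seed(0)  # deterministic for the example

def cross_combination_test():
    """Build: I-type instr, three don't-care slots, then an instruction
    that reads the first one's destination register (a read-after-write
    dependency across the pipeline)."""
    regs = [f"x{i}" for i in range(1, 32)]
    rd = random.choice(regs)                       # destination of the first instr
    first = f"addi {rd}, x0, {random.randint(0, 2047)}"
    fillers = ["nop"] * 3                          # three don't-care slots
    last_rd = random.choice([r for r in regs if r != rd])
    last = f"add {last_rd}, {rd}, {rd}"            # reads rd: the dependency
    return [first, *fillers, last]

for line in cross_combination_test():
    print(line)
```

A generated sequence like this stresses forwarding and stall logic, which is why such tests expose pipelining hazards in the DUT.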
My project addressed exactly this issue. We made the process of adding support for new extensions intuitive, so that it requires only minimal changes, and we took full advantage of the syntactic sugar Python provides to make the process as simple as possible. Moving on, my second contribution was also on the ISAC component: implementing robust tracking of data propagation. First of all, what is data propagation? In an architectural test, an assembly test, two things need to be achieved. First, you need to hit a condition on the architectural state, which we call a cover point. Second, you need to store the output values resulting from that condition, propagating the values from the affected registers into a designated signature region. Here's a simple assembly test that illustrates this: the add instruction produces the condition for a hit, and the store instruction propagates the affected output register's value. Before my contributions, coverage evaluation for this data propagation was done in an ad hoc manner; I implemented a robust register-tracking approach to correctly track data propagation and update the required metrics. Okay, moving on, my third and final contribution was on the RISC-V compliance test generator: adding basic support for checking compliance with the privileged part of the RISC-V specification. The thing is, we had decent CTG support for the unprivileged part of the specification, and our current focus is to add some basic support for the privileged part. So I started out by supporting very simple cover points of this form, where you just check whether a field of a machine-mode CSR is equal to some value.
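In much-simplified form, the register-tracking idea looks like the sketch below. The trace format here is hypothetical; riscv-isac's real tracker parses actual execution traces and handles the full ISA, while this only follows one "result produced, result stored" pattern.

```python
def tracks_propagation(trace):
    """trace: list of ('add', rd, rs1, rs2) or ('sw', rs2, base) tuples.
    Returns True if every produced value was eventually stored
    (propagated to the signature region)."""
    pending = set()
    for instr in trace:
        if instr[0] == "add":
            pending.add(instr[1])      # rd now holds an unstored result
        elif instr[0] == "sw":
            pending.discard(instr[1])  # value propagated via the store
    return not pending

# add produces a hit in x3; sw propagates x3 to the signature region
trace = [("add", "x3", "x1", "x2"), ("sw", "x3", "x10")]
print(tracks_propagation(trace))  # True
```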
We started with very basic support for these kinds of cover points and then added complexity incrementally; now we can support much more convoluted cover points of this form too. So that's it about my contributions to the project. Now I'd like to talk about some of our key takeaways from the mentorship. First and foremost, we obviously learned RISC-V assembly programming. We also learned the importance of incremental software development: when you're developing a complex piece of software from scratch, you cannot add all the complexity from the beginning; you have to start simple and then add the complexity incrementally. We also learned to imbibe a future-proof software design philosophy, which is particularly important in the RISC-V ecosystem as it is constantly evolving. We were introduced to many fascinating RISC-V open source initiatives, and finally we learned to better follow open source etiquette, which I'm sure will help us a lot in our future open source endeavors. Finally, I would like to take this opportunity to thank everyone involved who made our mentorship such a smooth experience. I would like to thank our mentors, Mr. Neel Gala and Mr. S. Pawan Kumar, who took time out of their busy schedules to help us out; thank you so much. I would also like to thank the people involved in the non-technical aspects of our mentorship, and finally the Linux Foundation and the RISC-V organization for providing us with such a wonderful learning opportunity. With that, I'd like to conclude our presentation. Thank you all for your patience; you've been a wonderful audience.

You're coming in a bit soft, Piyush. You can increase your volume.

Yeah, am I audible now?

Yeah, it's good now.

Thank you. So first of all, I would like to introduce myself.

Piyush, we cannot hear you; we just hear noise. Hello. Piyush, we can see your screen now.

So am I audible, ma'am?

Yes, we can hear you.
Starting my slides, I would first like to introduce myself. My name is Piyush Mishra. I'm from a university in Uttar Pradesh, currently pursuing a Bachelor of Technology in computer science and engineering. I started the Linux Foundation Mentorship Program in September. First, how it started: I started using the Linux environment, and curiosity grew in me. I started developing scripts, building whatever I could by searching the web, and eventually I learned that I could actually enroll in the Linux Foundation Mentorship Program. I submitted the application and got enrolled. Moving ahead: my project was the Linux Kernel Bug Fixing project, fall edition, so this is the journey of my open source development and debugging. The major topics we'll discuss in these slides are virtualization, syzkaller and syzbot, sending patches, and the kernel mailing lists. The kernel, according to me, is something that works beneath the layers, delivering messages from your hardware to your software; there is no better messenger for the machine than the kernel. So, as I stated here: what are bugs? Our life evolves and revolves around bugs, and fixing them is what we call life. That should be said before I start my slides. One of my major motivations for working on the Linux kernel was to start working at the architecture level. I first had experience with Android development, then started with open source development, and then thought that working at the architecture level underlying Android could be very beneficial for me, so I started learning. This mentorship program was not a job for me; it was more than that. It was about learning the core components that make everything work.
Coming to my next slide: I started with open source websites like linuxfromscratch.org. That was the first site I came across where you can actually learn how the Linux kernel development environment works. I studied the material there, watched videos, Google-searched everything, and was able to put together a build environment, which is still a work in progress. The major challenges were the bugs themselves and understanding what had to be done. The first challenging thing in this program was the tasks assigned to me: honestly, I did not have any bug-fixing experience before this program, so just studying the material and working out what had to be performed was the first challenge I faced. Starting with the mainline source, my first task was to build and compile the Linux kernel from source and run it in a build environment; that is where virtualization comes in. The tools I used were QEMU and GDB: GDB for debugging in the build environment, and QEMU for virtualization. Next, understanding kernel.org logs: a major part of any bug fixing in Linux development and debugging is understanding the bugs reported by developers all over the world, and building the key foundational resources for debugging the kernel. It all comes down to searching, and to the mentors you get; luckily I had a wonderful set of peers and mentors who helped me understand what I could get from the tools. The tools were QEMU, VirtualBox, and GDB.
Virtualization is what prevents you from destroying your machine when you are testing scripts; in my case I had to test each and every script I could find online, so virtualization really saved my machine. Techniques: booting the compiled kernel with QEMU using GDB and Buildroot. My first task was to build the kernel from source and boot it using QEMU, which took me around five to six days, because first I had to understand what the task was asking me to do and then search for how to perform it. Procedures: fuzzing and stack traces. Fuzzing was the best thing I got to know about, since I had no experience with it before, and one of my tasks was to research and write up everything I could find about fuzzing. Fuzzing goes like this: you have a program, you want to find out what bugs it has, and you keep feeding it inputs until the program breaks. Stack traces: every PID, every process running on your kernel, can be stack-traced, and decode_stacktrace.sh was the script used for that. Syzkaller and syzbot: I took a long time understanding syzbot. Syzbot has a dashboard where you can see which bugs are currently active in the kernel, and the dashboard tells you the severity of each bug. The syzkaller tool generates and runs the fuzzing programs, and you can use it to find bugs in your kernel. Of all these procedures, virtualization was the most helpful, since it let me safely test the build environment before sending patches.
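The "feed inputs until the program breaks" loop described above can be shown in miniature. This is only a toy: real kernel fuzzing with syzkaller generates whole syscall programs and monitors kernel crashes, while this hammers a deliberately buggy (hypothetical) Python function with random bytes.

```python
import random

def buggy_parse(data):
    # Hypothetical target: "crashes" when the first byte is 0xFF.
    if data and data[0] == 0xFF:
        raise ValueError("parser crashed")
    return len(data)

def fuzz(target, rounds=10_000, seed=0):
    """Feed the target random byte strings until it raises,
    then return the crashing input (or None if none was found)."""
    rng = random.Random(seed)
    for _ in range(rounds):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            target(data)
        except ValueError:
            return data  # a crashing input was found
    return None

crash = fuzz(buggy_parse)
print(crash)
```

The crashing input is the starting point for debugging: it reproduces the failure on demand, much like a syzkaller reproducer does for a kernel bug.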
Patches: I generated a couple of patches. I basically used the checkpatch.pl script, which points out leftover work by kernel developers that can be corrected by other developers, and I worked on drivers/staging/android. That was the folder I worked on, and I sent the patch to my mentor. The checkpatch.pl script is really helpful, as it gives you a lot of insight into the leftover work of the developers who built the kernel; there is a lot of work to do. In the meantime, around September, I came to know of a security bug. It was not something I found myself, but it was a good resource to learn from: an out-of-bounds access in Open vSwitch, which was fixed by rearranging the flow access; the size issue, an offset problem, was addressed not by a direct fix but by allowing a greater margin for the next buffer size. That was quite helpful, and what I learned from it was a different method: the declarations were changed in that fix. Sending patches is the best part of how developers all over the world share their work: emails are used, and patches are the messengers of the developers. Every piece of work you do can reach other developers by sending patches, and checkpatch.pl helps with that too, but you have to work with the kernel mailing lists. They are important when you are working on a kernel debugging program: the mailing lists log every bug and can show you where bugs live. Later I came in contact with another bug, this one in the Android audio stack: a vulnerability in the Android API responsible for audio, from which you can learn more about how the system works.
Those vulnerabilities were addressed in the October 2021 and December 2021 patches, but later I saw that the bug was still listed as active. From my own research, I would venture that the bug should be resolved in Android 12 and Android 13, as far as I can tell. Apart from the mentorship program itself, I ran into these bugs too and looked for practical solutions, and keeping your systems updated is the best thing we can do. To wrap up: all the learning came down to good information and knowing what could be done and how to do it. The community-driven methods used by the Linux Foundation and its open-source-driven methodologies really helped me. From my side, this project was all about learning: I faced serious bugs, and I faced my own life's bugs too. I moved forward in development, faced the bugs, and carried on my journey, and yes, we can solve them with the information we carry forward. So please keep your systems updated, everyone. Thank you.

Hello everyone, am I audible? Can I start?

Yes, please go ahead.

Thank you. So hey everyone, I am Dave Mitron, a junior blockchain engineer at cheqd. My project was to create a CLI tool called DRman to provision and administer DID registries. I'll start with decentralized identifiers (DIDs). They are permanent, or at least they should be; they should be resolvable; they should be cryptographically verifiable; and they should be decentralized. Every DID looks like this: it has a DID method, which specifies its CRUD operations.
That is, how it is created, resolved, updated, and deactivated. Generally, a DID resolves to a DID document, and this document contains multiple public keys and service endpoints of a DID subject. The subject can be anyone in the real world. DIDs and DID documents are hosted in a verifiable data registry, which can be a blockchain or a database; as long as the ecosystem trusts that registry, it's fine. Our motivation: there are a lot of DID methods being proposed, and most of them are based on blockchains, such as Indy, which is based on a public permissioned blockchain, and cheqd, which is based on the Cosmos blockchain. They have their advantages, but we wanted to create a very lightweight DID registry that doesn't consume as many resources as a blockchain while still having most of the properties of one. We chose Git because it's distributed, it has data integrity, it's widely used, and, mainly, it has a membership management layer through providers such as GitHub and GitLab. There are some disadvantages: there is a steep learning curve, and you can't manage a complete DID registry using plain Git commands alone. That's why we created a CLI tool that is much easier to use and defines a standardized protocol. The main disadvantage is that the providers are centralized; GitHub and GitLab are going to be centralized. So we made sure in our design that the data in the DID registry is independent of the provider. Our CLI tool has a modular structure with three plugins so far. There is a DID plugin, which creates, updates, resolves, and deactivates DIDs; a registry plugin, which creates a repository with certain rules matching our DID registry design, where an organization can add more rules however they want; and a wallet plugin, which can be used to create and store keys. The wallet plugin can be swapped for any other external wallet.
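To make the DID-to-document relationship concrete, here is a minimal sketch of what a resolved DID document might look like. The field names follow the W3C DID Core vocabulary; the did:git identifier and all the values are placeholders for illustration, not output from DRman.

```python
import json

# Placeholder did:git identifier (provider, org, registry, identifier)
did = "did:git:github:example-org:example-registry:abc123"

# Minimal DID document: one public key and one service endpoint,
# both belonging to the DID subject.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": did,
    "verificationMethod": [{
        "id": f"{did}#key-1",
        "type": "Ed25519VerificationKey2018",
        "controller": did,
        "publicKeyBase58": "<public-key-placeholder>",
    }],
    "service": [{
        "id": f"{did}#agent",
        "type": "DIDCommMessaging",
        "serviceEndpoint": "https://agent.example.com",
    }],
}

print(json.dumps(did_document, indent=2))
```

Anyone who resolves the DID gets this document back, and the public key inside it is what lets them verify signatures from, and connect to, the DID subject.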
For our registry architecture, first you select a provider, which can be GitHub, GitLab, or any other provider. The management layer depends on the provider: generally, all providers have organizations, teams we can create, rules for reviewing a pull request, workflows, and so on. Then there is the data layer, which stores the DID documents and the different resources that can be bound to a DID document; even an image or a credential can be published in this DID registry. We designed it so that the data layer is independent of the provider, because this is a DID registry: there are identifiers, and the document of an identifier is its DID document, but nowhere do we include a GitHub username or GitHub organization name. So the data is completely independent of the provider, and for every identifier a folder is created where as many resources as needed can be published. There can be multiple DID registries, and they can create a sharing environment and exchange data as long as they trust the other organization. Our DID method looks like did:git, followed by the provider (GitHub or GitLab), then the organization name, the registry name, and the identifier itself; and if you want to resolve a specific piece of content under that identifier, that can be done too. Now we are going to show a quick demo, built using DRman and Aries Framework JavaScript. In this demo, the DID registry is an organization. I'm going to create a new registry called hyperledger using the DRman CLI tool. So this creates a registry with different teams: it creates the mandatory teams and the specific rules a pull request needs to satisfy before it can be reviewed.
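The identifier layout just described (provider, organization, registry, identifier, optional content) can be parsed with a small sketch like this. The exact syntax rules belong to the project's own method specification, so treat this parser as illustrative only.

```python
def parse_did_git(did):
    """Split did:git:<provider>:<org>:<registry>:<identifier>[:<content>]
    into its named parts."""
    parts = did.split(":")
    if parts[:2] != ["did", "git"] or len(parts) < 6:
        raise ValueError(f"not a did:git identifier: {did}")
    fields = dict(zip(["provider", "organization", "registry", "identifier"],
                      parts[2:6]))
    fields["content"] = parts[6] if len(parts) > 6 else None
    return fields

parsed = parse_did_git("did:git:github:did-registries:hyperledger:abc123")
print(parsed["registry"])  # hyperledger
```

Because the provider is just one field of the identifier, the data underneath stays portable: moving a registry from GitHub to GitLab only changes that one segment.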
And then, once the repository is created, we can see the Hyperledger repository here. This is going to act as a DID registry. Currently there are no DIDs published here. I'm going to publish a DID using Aries Framework JavaScript. So first I'm going to publish a DID. Let's check the repository now. What is going to happen is that it's not going to be added directly. Instead, it's going to raise a pull request, which can be reviewed by checking the different rules which the organization decides on. Depending on the commit name, we can trigger a workflow and run a few tests, and only if a minimum set of people review it can it be merged. Along with the DID, I can publish a few resources which need to be part of that DID. So I have published two other resources; you can see three commits here. Along with the DID, I have published a schema and also another... These three resources are needed for a verifiable credential, basically. Once it's merged and we can see the repository, it has the identifier's DID document, and it also creates a folder for it. Within that, it has the multiple resources of the DID. I'm going to use the DRMan CLI tool now in order to resolve whatever we have published so far. I'm choosing the did:git method, I'm going to resolve it, and I'm going to enter the structure which I mentioned on my previous slide: GitHub, then the organization name, then the registry name, which is Hyperledger, and the identifier. It returns me a DID document, and it contains a signature. It can have the public key which I can use in order to go and connect with that DID subject. So that is resolving a DID; now if I want to resolve specific content within it, I can show a demo of that too. I published a schema which has three attributes, and I can use this in order to issue a verifiable credential.
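Since the registry is just a Git repository, resolution like the demo shows can be pictured as mapping the parsed DID to a file inside that repository. The path layout below (one folder per identifier, with a `did.json` document inside, fetched from the default branch) is an assumption I'm making for illustration; it is not necessarily DRMan's actual layout.

```typescript
// Hedged sketch: map a did:git identifier's parts to a raw-content URL on
// GitHub. The folder/file layout is assumed, not confirmed by the project.
function didGitToRawUrl(
  provider: string,
  organization: string,
  registry: string,
  identifier: string,
  resource: string = "did.json",  // assumed default document name
): string {
  if (provider !== "github") {
    throw new Error(`provider not covered by this sketch: ${provider}`);
  }
  // raw.githubusercontent.com serves file contents of public repositories.
  return `https://raw.githubusercontent.com/${organization}/${registry}/main/${identifier}/${resource}`;
}

const docUrl = didGitToRawUrl("github", "did-registry", "hyperledger", "abc123");
```

A resolver built this way never needs write access: the pull-request review rules on the repository are what gate which documents ever become resolvable.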
So yeah, we created the CLI tool completely using Bash scripts so that it's very lightweight and can be used even on an IoT device. Take a smart home: those devices can create a GitHub repository and then use that to manage the identities between them. The project itself can be thought of as application-specific DID registries: there can be one organization, and they can create multiple registries for different applications. Right. So, as I conclude, the main advantage I got through open source is the connections I got through this project; my mentors were very helpful. More than the project and what I learned here, the connections were a very big deal for me, and this introduced me to the decentralized identity field, which opened up a lot of job opportunities for me. So many thanks for that. You can check the repository here, and any questions are welcome. Thanks.

Hi, hello. A bit of introduction: my name is Umebe Wee. I'm from Nigeria. I attend the University of Port Harcourt. I worked on the Fablo project, and my mentors were Jakub and Piotr. My project was to enable a Kubernetes operator for Fablo. Fablo is a simple tool to generate a Hyperledger Fabric network from a config file and run it on Docker. One of the main goals of Fablo is to provide an easy way to get started with Hyperledger Fabric. It uses a declarative approach to define the components in the network: components like the channels, the peers, the CAs, and the organizations. That file is called the Fablo config. Before I came in, only Docker was supported; what I did was build the Kubernetes engine, to add support for Kubernetes. The technologies used in this project were Bash, TypeScript, a bit of Kubernetes, and Docker. So this is what the fablo-config.json (or YAML) file looks like.
The orgs are defined here: you can put in the org name, the domain, the number of instances, which DB to use (LevelDB or Postgres), the orderers, and the rest of them, also the tools. From this config, Fablo provisions your network. The objectives were for Fablo to support its current features on Kubernetes: things like bringing up a network, taking it down, pruning it (as in deleting the target network), installing chaincodes, and also upgrading chaincode. Also, Fablo should be able to generate YAMLs for K8s deployment. The second objective did not really come off as we wanted, because at the end of the day we ended up using an operator to deploy these networks into K8s, so there was no actual need for YAMLs. Project deliverables: the first deliverable was to set up a simple network using the HLF operator. The HLF operator is a Kubernetes operator that provisions the different components of Hyperledger Fabric: those are the CAs, the orderers, peers, channels, and chaincodes. What I did at first was write the scripts to set this up using the HLF operator. The second deliverable was to template the created scripts. Fablo has an engine which takes the values defined in the Fablo config and passes them to the templates, which generate the values in the scripts. This is kind of the whole back-end work. Another deliverable was to write snapshot tests for the templates and also write E2E tests, to make sure that what we have built works properly. The final deliverable was to verify that the Fablo commands are properly supported: generate, which generates from the Fablo config, and also start, stop, up, down, and the whole set of chaincode operations. Project execution and accomplishments: I was able to complete the requirements for setting up a network on K8s using Fablo, and I was also able to complete the templating and test cases. Before this, I did not have any experience with Hyperledger Fabric.
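To make the description of the config concrete, here is a minimal sketch of what a fablo-config.json might look like, with one org, one channel, and one chaincode. Field names follow the Fablo config schema as I recall it; the org name, domain, channel, and chaincode details are all made up for illustration, so check them against the actual schema for your Fablo version.

```json
{
  "global": {
    "fabricVersion": "2.4.3",
    "tls": false
  },
  "orgs": [
    {
      "organization": { "name": "Org1", "domain": "org1.example.com" },
      "peer": { "instances": 2, "db": "LevelDb" }
    }
  ],
  "channels": [
    {
      "name": "my-channel",
      "orgs": [{ "name": "Org1", "peers": ["peer0", "peer1"] }]
    }
  ],
  "chaincodes": [
    {
      "name": "my-chaincode",
      "version": "0.0.1",
      "lang": "node",
      "channel": "my-channel",
      "directory": "./chaincodes/my-chaincode"
    }
  ]
}
```

From a declarative file like this, Fablo can derive everything it needs to provision the network, whether the target engine is Docker or Kubernetes.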
I only knew that it's kind of a private blockchain. This project helped me to explore the whole ecosystem of Hyperledger Fabric. At first I faced some initial problems with installing the chaincode using the HLF operator, because there was kind of a bug in the HLF operator. The problems I faced helped me to explore the internals of the HLF operator, understand what is going on there, fix things, and interact with the HLF operator community. Before this, I also hadn't done anything like setting up Kubernetes clusters in a CI environment, so this also helped me understand how Kubernetes clusters are set up in a CI environment and how tests are run there. As for recommendations for future work, a lot of the features and what we are doing are defined in a GitHub issue, but I'll just go over some of them that are really a priority. I'll also continue working on Fablo, because I think I'll be here for a long time, hopefully becoming a maintainer too. One piece of future work we're looking at right now is implementing sharding to improve scalability and performance. We're also exploring the option of using snapshot and restore commands for disaster recovery, and we're evaluating the use of private data collections for sharing sensitive information. Features like TLS, Fablo REST, Blockchain Explorer, and Fabric versions lower than 2.0 are not currently supported, so we're evaluating those as well. We're also looking into adding the dev mode for testing and development purposes. This has been a really valuable experience. I was able to challenge and improve my skills so much. I learned a lot about collaboration, and I also learned a lot about Hyperledger Fabric and how template engines work. For the foreseeable future, I'll continue working on Fablo and other open source projects.
So I would say a very big thank you to my mentors, Jakub and Piotr, and I would also say a very big thank you to Hyperledger and the Linux Foundation for giving me this opportunity to work on Fablo. Thank you very much.

Thank you, everybody. Thanks, everybody, for speaking and sharing your experiences. It's awesome to hear all of you share what you have learned on your mentorship projects, and this is the reason why we do what we do. I thank all the mentors; without them, we wouldn't be able to do what we do, and our sponsors as well. Thank you so much.