Hello everybody. Welcome to the very first LFX Mentorship Showcase. Could we advance? My name is Shuah Khan. I'm a Linux kernel maintainer and a Fellow at the Linux Foundation. In this role I lead the mentorships at the LF, which allows me to learn and share, and at the same time design mentorship programs that empower others to do the same. Next slide, please.

As beginners, when we try to get into a new area, we have to start somewhere. We are always thinking about what we are passionate about and which project we would like. Next slide, please. We are constantly thinking: where do we start? Where do we begin? We are all faced with the same set of problems. Next slide. And once we decide which project we want to get involved in, the next problem comes up: how do we get started? Next slide. And where are we going to find the resources to learn, and who can help us? Next slide, please. Once we figure out the project, how we want to get started, and where our resources are, the next problem comes in: who can help us? Communities look intimidating, code bases look daunting, and at this point we are looking to see where to go next. Next slide, please.

At LF Training you can plan your learning path; several free courses and webinars are available to explore what you want to do and to learn the basics as well as more advanced material. You can also learn from experts in interactive webinars; the LF Live series offers those, which are also free. Next slide, please. Once you have figured all of this out and you want to learn one-on-one, working with mentors on projects, you can apply to the LFX Mentorship programs and work on a project while getting paid. Next slide, please. Once you graduate, what next? At this point we connect new graduates with people looking for talent. That is what we are doing today: having our graduates speak and share what they did during their mentorship projects and what they learned. Next slide, please.

Before I hand it off to Harish to talk about his project, I would like to take a moment to recognize all our mentors who volunteered their time to help train the next generation. Thank you to all our mentors; without you, we would not be able to do what we do. At this point, I will hand off to Harish to present.

Good morning, good afternoon, good evening, everyone. I'm Harish, and I'm working as a blockchain developer at Ironworks. The mentorship project I have been working on is the integration of Aries and Fabric. First, a short description of what Aries is. Hyperledger Aries is basically a wrapper that uses Indy as a ledger and Ursa as a cryptography library. Indy is specifically built for securing the identities of individuals on the internet. With Aries, an individual can selectively disclose credentials that they own: the issuer issues a credential to the user, and the user can prove it to a verifier by disclosing certain information, which is what we call selective disclosure, while keeping the information they do not want to disclose hidden through zero-knowledge proofs.
Now, the scenario is that Aries only supports Indy as a ledger, while most businesses today use Fabric to build their blockchain solutions, say for supply chain management, for the CBDC projects being implemented worldwide, or for energy trading. The businesses that develop these solutions mostly use Fabric as a ledger, and in their use case they usually need to verify some kind of identity. So today they would use the Fabric blockchain for their main use case, such as supply chain management, and Indy for managing identities. But they don't want to do that; they want to use only one ledger for managing everything. That is where this project comes in: businesses can take Aries Framework JavaScript (AFJ) and integrate it with Fabric to verify those identities.

The objectives I had to implement in this project were to write a chaincode and a module: the chaincode would get deployed on Fabric, and the module would be part of AFJ. Then I had to write the test cases that would test the module's interaction with the ledger, and finally do a demo of issuance and verification. For the chaincode, you need to know what kinds of transactions Indy supports. Indy currently has a total of six transaction types: NYM, SCHEMA, CRED_DEF, REVOC_REG_DEF, REVOC_REG_ENTRY, and ATTRIB. So I had to write a chaincode that would support all six of these transactions (a rough sketch of what such a chaincode could look like appears at the end of this section). Then a module was to be implemented in AFJ, followed by the test cases and the demo.

This project let me learn how the Indy ecosystem works. There are various parts to Indy, the first being how roles are managed: there are roles like steward, trustee, network monitor, and endorser, and I had to study how these roles could impact my solution. There is also something called the transaction author agreement (TAA), which I had to study as well. We have not implemented the roles and the TAA for now; I discussed it with the open source community, and they were also of the view that this particular use case of integrating Aries with Fabric does not require roles and a TAA right now.

The next part was adding support for all six transactions that normally go to Indy, so that they go to Fabric instead, and learning how the Fabric network works, how Docker containers run the Fabric network, and how their communication happens. Right now our solution uses two wallets. One is the Indy wallet, for managing the secret and non-secret records in the Indy ecosystem: credential records, connection records between an issuer and a user, and so on. The Fabric wallet contains the identities used by the client application of the Fabric network to connect to the peers of the Fabric network. So the Fabric-related identities are kept in the Fabric wallet, and the Indy-related identity verification records are kept in the Indy wallet. We would like to integrate those wallets into one, and that is one of the next things we are going to do. The second point is that AFJ right now only supports the DID, schema, and cred def transactions.
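To make the chaincode objective above a bit more concrete: the sketch below stores Indy-style objects in Fabric's world state using the standard fabric-contract-api-go library. This is purely illustrative and not the project's actual code; the method names, key scheme, and record shapes are assumptions, and the real module lives in AFJ, which is JavaScript.

```go
package main

import (
	"fmt"

	"github.com/hyperledger/fabric-contract-api-go/contractapi"
)

// IndyContract sketches a chaincode that stores Indy-style ledger
// objects (NYM, SCHEMA, CRED_DEF, ...) in Fabric's world state.
type IndyContract struct {
	contractapi.Contract
}

// RegisterNym stores a NYM (DID) record under a hypothetical "nym:" key.
func (c *IndyContract) RegisterNym(ctx contractapi.TransactionContextInterface, did, verkey string) error {
	record := fmt.Sprintf(`{"did":%q,"verkey":%q}`, did, verkey)
	return ctx.GetStub().PutState("nym:"+did, []byte(record))
}

// RegisterSchema stores a SCHEMA record; CRED_DEF, REVOC_REG_DEF,
// REVOC_REG_ENTRY, and ATTRIB would follow the same pattern.
func (c *IndyContract) RegisterSchema(ctx contractapi.TransactionContextInterface, id, schemaJSON string) error {
	return ctx.GetStub().PutState("schema:"+id, []byte(schemaJSON))
}

// ResolveSchema reads a SCHEMA record back, as the AFJ module would
// during credential issuance or verification.
func (c *IndyContract) ResolveSchema(ctx contractapi.TransactionContextInterface, id string) (string, error) {
	data, err := ctx.GetStub().GetState("schema:" + id)
	if err != nil {
		return "", err
	}
	return string(data), nil
}

func main() {
	cc, err := contractapi.NewChaincode(&IndyContract{})
	if err != nil {
		panic(err)
	}
	if err := cc.Start(); err != nil {
		panic(err)
	}
}
```

Each of the six transaction types reduces to the same put/get pattern against the world state; what differs is the record shape and how the AFJ-side module resolves it.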
Whenever the revocation transactions are added there, we can also add those transactions in the chaincode as well as in the module. Going forward, we would also try to support AFJ with other ledgers if the need arises.

Now I will show you a demo. This is where my Fabric network is running. Here I have packaged and installed the chaincode on the peers, and here you can see the logs of one of the peers. You can see that there are five blocks in my blockchain network as of now. I will just run the test cases. What this does is write three transactions from AFJ to the Fabric network: the DID transaction, which we also call the NYM transaction, the schema transaction, and the cred def transaction. In this console you will see logs showing three more blocks being added to the network. So yes, it's working: it has built a wallet, you can see that one test case has passed, and the sixth block has come into the network; that was the block for the DID transaction. Now you can see that two test cases have passed and the seventh block has come into the network; this is the block for the schema transaction. And then the cred def transaction block is committed: you can see the eighth block coming in, all three test cases have passed, and the DID, schema, and cred def have been registered to the ledger. Similarly, we can add support for the REVOC_REG_ENTRY, REVOC_REG_DEF, and ATTRIB transactions. Once this is done, we can support the full SSI use case with Fabric, starting from issuance of credentials to verification, and we would also be able to support revocation of credentials and verify whether a credential is valid or not. That's it from my side. Thank you.

Hello everyone, my name is Naveen, and today I'll be talking about how I got started with contributing to the Linux kernel's PCI subsystem. So, who am I? I am a software engineer working at Hasura, and I personally like to think close to the hardware level, so systems programming is something that is close to my heart. And to put it simply, I'm a confused human.

Okay, now let's see why I even wanted to participate in this kind of mentorship program. There were three reasons. The first was that I've always been interested in the lower layers of the computer architecture, and I wanted to experience what it feels like to be at the layer beneath most of the abstractions, where the computer does exactly what you tell it to. The Linux kernel was the perfect fit, and I wanted to contribute to it. Second, I wanted to understand how large open source projects work: how people collaborate with each other and how the asynchronous communication happens. And third, I had this childish fantasy of seeing my work included in the Linux kernel; I mean, it's the place where all the legends work, right? So that's what motivated me to apply for the mentorship program. So, what did I work on? I applied to the Linux kernel project under the PCI subsystem, and I was mentored by Bjorn Helgaas.
To give a brief context of what PCI is: PCI is the Peripheral Component Interconnect. Whenever you open up your computer, you see the small slots where you fit your graphics card, memory, hard drives, and so on; that slot is the PCI local bus, and it is the bus that allows a computer to communicate with external physical devices. Your OS always needs to interact with physical devices, right? The code that makes this happen is the PCI subsystem: it is basically the code that helps these external devices communicate with your computer. Now, when you have physical devices connected to software, errors are bound to happen, because we don't live in an ideal world, and those errors are also handled by the PCI code. The PCI subsystem logs the errors that third-party devices give us, and my project was basically to make this error diagnosis easier, along with fixing some long-standing defects I was responsible for.

To summarize, I worked on the PCI core and link code for the PCI subsystem, and the tasks I worked on are the set you see on the screen. A few of them are very close to my heart. The first one was unified PCI error response checking. What is it? As I said before, devices talk to the computer, and for different kinds of devices you have different controllers. Say you have a mouse; this example might not be exact, but just to put it in perspective: there are different manufacturers of mice, and there will be a different controller driver for each mouse. Whenever a read or write error happened, it was the responsibility of the controller driver to detect it and tell the computer that an error had occurred. With this patch series, the PCI core now takes up the responsibility of checking for these errors and relieves the controller drivers of this task. So that was my first task: to unify the error response checking (a small sketch of the underlying idea follows below).

The second task was to implement KUnit tests for PCI resource assignment. The reason this came up was that Bjorn found that when people wanted to test the resource allocation code, the testers had to manually write trace statements and send them to the user, the user had to test it and report back, and then there were many more round trips, and so on. Having a KUnit test fixture would save all this time spent going back and forth with users. This KUnit work is something that is close to my heart because I spent the majority of my time reading the PCI code, the code that actually makes PCI run.
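Going back to the first task for a moment: the real change is C inside the kernel's PCI core, so the little model below is only an illustration of the refactoring idea, with invented names. The one hardware fact it leans on is that a PCI config read the device does not answer comes back as all 1s on the bus.

```go
// Toy model of unified PCI error response checking; illustrative only.
package pci

// A config read that the device does not answer returns all 1s.
const errorResponse = 0xFFFFFFFF

// possibleError is the single, central check that used to be
// open-coded separately in each controller driver.
func possibleError(val uint32) bool {
	return val == errorResponse
}

// readVendorID shows the core applying the check once on behalf of
// every driver; read stands in for a config-space accessor.
func readVendorID(read func(offset int) uint32) (uint32, bool) {
	val := read(0x00) // offset 0x00: Vendor ID register
	if possibleError(val) {
		return 0, false // device did not respond
	}
	return val, true
}
```

Note the "possible" in the name: all 1s can occasionally be a legitimate register value, which is why a central, consistently documented check is preferable to dozens of ad hoc ones.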
And the third thing, which I really like: for the people who would like to see how to get started, I actually wrote a blog about the PCI subsystem, because I personally had lots of hair-pulling moments where I had to read a lot of spec sheets and configuration details; the PCI subsystem is very huge, right? So this blog might help you get started with contributing to the PCI subsystem. And the last thing I personally worked on was patience, because I know that I am impatient when it comes to getting reviews for the patches I've submitted. In the initial stages it was really hard for me to restrain myself from pinging the maintainers about reviews, which is the wrong thing to do. So that's something I worked on too.

Now, the things that I learned. The first thing I wanted to learn was how large open source projects work, and I learned how the kernel development process works: what the workflow is, how to make patches, how to send patches, and all of that. We send patches via email, and that was really exciting to learn, because I had only ever used Gmail or other GUI mail clients; learning the kernel's email-based workflow helped me understand that there are so many possibilities and easier ways to change my personal workflow. The second thing I learned about was the protocols of the PCI subsystem: each device has different kinds of registers, there are different spec sheets, advanced error reporting, and all of this. It was really exciting to read the spec sheets and see what each block of a register can actually do, since each register does indeed have a specific function. And the third thing was how to compile, or cross-compile, the kernel for different architectures. So those are the three things: the development workflow, learning about the PCI subsystem, and kernel compilation.

Okay, so what was the mentorship experience like? Let me go back a bit. I actually learned about the Linux Kernel Mentorship Program during the Open Source Summit in 2019, and the program stuck with me; I really wanted to apply. When I applied in 2019, I wasn't really focused on my application, so I didn't get through, but I would really like to thank Shuah for making the application process so exciting that even though I didn't get selected, I was able to take away a lot of knowledge from it. The application process really takes time, and it inspired me to reapply this year, because I had lots of time during lockdown. This year, honestly, I started the application almost for the fun of it; I knew the application process would be enjoyable, so I started doing it. The different tasks were of varying difficulty, and doing all of them made me more and more interested, until I got fully into the zone of applying to this mentorship. After doing all of that, I did get selected, and once I got selected, my mentorship went through three phases. The first phase: I was really, really worried.
I didn't expect to get selected, and the perception I had was that kernel development would be really hard and the community might not be beginner friendly. Those are the kinds of myths I had read or heard about, but to be really frank, they are myths, not facts. I had a blast with this mentorship because my mentor was Bjorn Helgaas, and he's really awesome. I exchanged long emails with him about how to implement my tasks, and Bjorn was very patient in replying, which cleared a lot of things up. So I really went from being worried that I might not be able to do it, to thinking, okay, this might be possible, I might be capable, to, wow, ending the journey realizing that this mentorship program really grows you as a person, from being underconfident to being confident. Contributing to the kernel is not hard as long as you're persistent and keep at what you're doing. So yeah, that was my mentorship experience, and I would like to thank my mentors Bjorn and Shuah again for being the support system and helping me whenever I got stuck.

And what are my aspirations; what do I want to do next? I really want to get deeper into systems-level programming, because I have always said that I like it but had been too lazy to actually get into it; this mentorship program kick-started that. I really want to explore more, go deeper into systems programming, improve my skills to the level of the people I've interacted with, and finally become capable of being a kernel developer as a full-time job. So those are my aspirations. That's it; thank you everyone, this is my contact slide, and thank you again.

So, I am Puranjay Mohan, and I'm a senior-year student in India pursuing an engineering degree. Today I'll be talking about supporting PCI latency tolerance reporting in the Linux kernel, and even I didn't know the meaning of these words before I applied. Something about me: I'm really passionate about the hardware-software interface and things like operating systems, embedded systems, system software, computer architecture, and so on. This is my first Arduino Uno, which I bought in my seventh grade; this is an image from 2012.

Now I would like to tell you how I got into the Linux kernel and how I started with all this. Initially, I wanted to learn about computers and digital electronics and how everything works, and a very important part of that was the operating system. I wanted to learn how an operating system is built and how it interacts with the hardware. Linux came in because Linux is open source: I can see its code and try to understand how its different parts work. So I got into Linux kernel development, then I got to know about the Linux Kernel Mentorship Program and applied to it to get some experience and learn how things work.

This is the overview of my project. My project was also related to the PCI subsystem; the previous speaker has already introduced it, so I'm happy that I'm talking after him. Specifically, my project was related to the power management of the PCI subsystem.
Like every system that runs on electricity, we need to save power, so there are methods to keep devices in low-power modes when they are not being used. One of them is putting the device into the L1.2 power state, which is the lowest power state. But if a device goes into this state, it will take some time to come back to the normal state; that time is called the exit latency. So if we want to put a device into this state, we need to make sure that its latency requirements are not exceeded by the time it takes to come back out. If something like a GPU needs to respond very fast, then putting it into a low-power state means that every time it comes back, it will take some time, which will make it slow. My project was about calculating these latencies, summing them up, and programming them into the PCI structures.

The latencies add up: if we have, say, three things stacked one after another, the time taken for a message to go through them is the sum of the delays between them, and that is how it works in PCI too. In the PCI subsystem the CPU has a root complex, and below it all the different PCI or PCI Express endpoints are connected, so we can draw a tree-like structure, as you can see on the right side of the image. An endpoint could be connected to a switch, which could be connected to another switch, which then reaches the root complex. So what I did in my project is: whenever an endpoint is enumerated, we walk from the endpoint up to the root complex, add up all the latencies, and program the total into the endpoint's structure. To find out the latencies, there are device-specific methods implemented by PCI endpoints; using those, I could get the latency of each device, sum them, and program the result into the endpoint. Once it is programmed, that value is used to check whether the device can be put into the low-power state or not. This is also done by the BIOS, but only at boot; since PCI also supports hot-plug devices, when a device is hot-plugged the BIOS does not program it, and the kernel has to do it.

Now I would like to talk about the things I learned. The first is Linux kernel compilation and testing: in the course of this project I had to compile the kernel many times, modify it, and build it with custom configurations. I got very good at keeping four or five kernels on my computer and testing them on the go. I also found out how to log the results when a kernel panics or stops, so that we know why it is not working. The next thing is the PCI subsystem itself: as my project was related to PCI, I had to learn the PCI protocol and things like the device-specific methods I mentioned, the configuration space, and the capabilities. I was able to add code to the PCI code base and make it work; to do that I added two new functions, one to find the latency and another to sum it up and program it. The next thing I learned is the kernel development flow: how to use git to make the patches, send them to the mailing list, make changes when a reviewer asks for them, send new versions of the patches, and everything. That was very good.
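Purely to illustrate the traversal just described (the real implementation is C in the kernel's PCI core; the types and numbers below are invented for this sketch), the latency summation amounts to walking the device tree from the endpoint to the root complex:

```go
// Conceptual sketch of summing exit latencies up the PCI tree.
package main

import "fmt"

// pciDev models one device in the PCI tree; upstream is nil at the
// root complex.
type pciDev struct {
	name        string
	upstream    *pciDev
	linkLatency uint32 // exit latency this hop contributes (microseconds)
}

// totalExitLatency walks from a newly enumerated endpoint up to the
// root complex, summing each hop's latency; the total is what gets
// programmed into the endpoint and later checked before entering L1.2.
func totalExitLatency(endpoint *pciDev) uint32 {
	var total uint32
	for dev := endpoint; dev != nil; dev = dev.upstream {
		total += dev.linkLatency
	}
	return total
}

func main() {
	root := &pciDev{name: "root complex", linkLatency: 1}
	sw := &pciDev{name: "switch", upstream: root, linkLatency: 2}
	gpu := &pciDev{name: "endpoint", upstream: sw, linkLatency: 4}
	fmt.Println(totalExitLatency(gpu), "us") // 7 us: 4 + 2 + 1
}
```

With an endpoint behind a switch behind the root complex, the endpoint's programmed value is simply the sum of each hop's contribution, exactly as described above.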
This even helped me contribute to other open source projects. Now I would like to talk about my mentoring experience, which I guess was the main part of the Linux Kernel Mentorship Program. I was mentored by Bjorn Helgaas, who is the PCI subsystem maintainer, and the process was quite enjoyable and enriching. We used to chat every day on IRC, and he even shared code snippets with me when I got stuck. I used to send all my patches to him before sending them to the mailing list, because if I sent something wrong to the mailing list I could get backlash from the other developers, so Bjorn helped me by pointing out my mistakes first. Because this was a complex project, I was initially quite scared, but because Bjorn was with me he could help, and I was able to do things which I would not have been able to do alone. We are still connected, and since then I have started working on another project in the PCI subsystem itself with his help.

I faced a few challenges early on. In the first Linux Kernel Mentorship Program, when it was first launched, I also applied to the PCI project, but I ended up withdrawing because I was scared that I wouldn't be able to complete the project. I made the application, did everything, but then dropped off before submitting. After that, I talked to one of my friends who was going through the mentorship, got the confidence, applied this time, and got selected. It feels very good that what I was not able to do before, I got through now. This is only because we get both technical and interpersonal mentorship to succeed in the project and keep contributing; Shuah and Bjorn both helped me very much with this.

Now, what happened after LKMP? After the Linux Kernel Mentorship Program, I had enough skills and experience with git and kernel compilation that I got selected for Google Summer of Code with the Linux Foundation itself. My GSoC project was related to the IIO subsystem and sensors: I wrote a driver for an accelerometer and upstreamed it. After LKMP and GSoC, I got an internship offer from Texas Instruments, and I started there just yesterday. My aspirations now are to attend a physical conference and give a talk there; I really want to meet other maintainers, especially Bjorn, who was my mentor. I want to meet Linus Torvalds and get a few pictures with him, and Greg, Jonathan, and the others who have helped me, Shuah as well. And I really want to turn this passion slash hobby into a full-time career. You can contact me at these links, and this is the link to all the code that I produced during the mentorship program. That was all from my side. Thank you.

Hello everybody. My name is Shalva Foyesh, from Kenya, and today I'm going to talk about my experience as an LFX mentee on the Tremor project last summer. My project was redesigning Tremor's website, with an emphasis on improving its user experience and its web presence. In order to do that, there were a lot of decisions that had to be made, eventually by myself and my mentors, and that's what I'm going to talk about today. But first, I'll tell you a little bit about why I was interested in the LFX Mentorship program, and that's because of its emphasis on support for first-time open source developers.
I do not have a formal background in technology, and I was still teaching myself how to code on my own. So I thought this would be a good opportunity to grow faster, and it was: it was totally worth it, thanks to the mentors I had at Tremor. Tremor is a small project, a distributed event processing system, and a CNCF sandbox project that became open source quite recently, in 2020. You can read more about it at the link on the slide.

My goal, like I said before, was to redesign the website. At the time, because it was still a small project and the team was trying to publish the documentation and all the content as fast as possible, things had become a bit overwhelming and scattered across different platforms: for example, the docs were on MkDocs while the website was on Hugo. So the first thing for me to do was to unify all these different content forms on one platform, and after that, design it and make it beautiful and user friendly. Some of the criteria we were looking at: an easy-to-navigate theme, with smooth navigation between the different sections of the website, the docs, and the RFCs; a framework that supported documentation versioning (MkDocs had it, but it was not really being used because of usability problems); and syntax highlighting, among other things that were both user friendly and had the features we were looking for.

The first month was just getting familiar with the project and meeting with my mentors, because I was pretty new to GitHub and the GitHub flow; especially in the first week, that was the main thing. I also learned about community collaboration, writing good pull requests, raising good GitHub issues, and a lot of note-taking, which I learned is a skill. Like I said earlier, there were a lot of repositories: the Hugo repository for the main website, the docs repository, the RFCs repository. I was getting familiar with all of this, and then I analyzed the old website for possible improvements. Before I even started working on the project, the team had already identified things I was supposed to work on, but they valued my fresh eyes. So I analyzed it and came up with a few possible improvements, which you can read in your own time at the link over there. The team also really values feedback from the community, so we requested feedback from the CNCF and from other members of the community, which was very helpful and helped us find a way forward and know what to consider from their end.

In the second month, we tested a lot of frameworks, for example Hugo, Docusaurus, and several others, against criteria some of which I've already mentioned: support for search, documentation versioning, and of course other more specific considerations. For example, Hugo is endorsed by the CNCF because that's what they use, so it should have been the obvious choice for us, but for other reasons we were not able to use it. Docusaurus has the dark mode/light mode functionality, for example, which is not available on the other platforms and which we found out the community would like.
We had to weigh all these options and come to a decision about what to use, because we could have just chosen anything, but this way we were confident it was the right decision: we analyzed the pros and cons, and there is a summary at the linked slide that you can look at later. Finally, we selected Docusaurus. This decision-making phase was very important because I learned a lot about working in a community; for example, that a decision does not have to be perfect, it has to be good enough. It does not have to be the technically best solution; it has to be the right solution for your specific project and your specific situation. Midway through my mentorship, I had learned so much and was so happy; I still couldn't believe that I had been selected and had this opportunity, so I wrote about my experience and my excitement, which you can find in this blog post.

On to the third phase. After we had made the decision to use Docusaurus, I started the implementation: setting up and configuring Docusaurus, and migrating the different content forms, the RFCs, the docs, and the website, from the Hugo and MkDocs repositories to the new Docusaurus repository. After that we enabled the features, for example the blog feature, which was also an important criterion, and implemented versioning, dark mode, and so on. When we were confident that it was working, we shared the repository with other members of the community for feedback, and they liked it. Then came the final touches, the design and the layout, and after that we deployed the website. There's a link to the repository down there. One thing I learned from this phase was, again, the benefit of early feedback from the community and the other people involved, because it makes things go faster: you don't have to finish the project and then redo the work just because you didn't request early feedback. It can be really helpful in any project.

And this is how the website currently looks: this is the home page. As you can see, there are the docs, the blog already up there, and the versioning. Dark mode is not visible here, but there's a dark mode switch, and there's a search button. And that's the docs, the latest version, and the RFCs.

On to my mentoring experience. I had the best mentor I could have asked for in Heinz. I got help from all the mentors, but I worked mostly with Heinz, and we met twice or thrice a week. I liked his approach, because he mostly didn't just tell me what to do; he gave me a chance to work on problems on my own. He would offer a few suggestions and some resources to help me come up with solutions, but he did not just hand me the answers, which I liked, because I got to learn a lot during the process, to find different ways of reaching the same solution, and to choose which one was best. That was a great experience. Another outstanding thing was that he exposed me to other members of the community, because naturally I am not very good at networking with people, especially online, and at starting conversations; but with his help, it was easier to reach out to other members of the community, talk to them, and ask for help.
He also introduced me to the CNCF TechDocs team, who were very helpful with the project, and the maintainers also pointed out a few things we could do. Generally it was really fun and challenging, which I appreciate. Thank you very much, guys.

On to the highlights and surprises of my mentorship. First of all, I was very surprised to learn the reason I was selected. I did not expect to be selected at all, because I did not have a lot of skills: I had not worked on a lot of projects, and I did not have a lot of grounding in technology, but I was selected anyway. It was interesting to learn that it was because of my cover letter: just the design and the wording were enough to show them that those skills would be helpful for the web design and documentation parts of the project. I was very surprised to learn that. It was also nice that everyone was so helpful, at Tremor and in the CNCF meetings. I also got an opportunity to attend the Open Source Summit and KubeCon North America last year through the Linux Foundation's diversity scholarship. Although I was not able to attend in person because of COVID, it was a great opportunity and I am grateful; I was still able to attend virtually.

Another great thing that happened was that after my three-month mentorship, the Tremor team wanted to retain me as a maintainer for the docs. That was surprising, and it still is: three months earlier I didn't know what I was doing, and now I'm a maintainer. Another thing: even after the end of the three months, the Tremor team always told us that that was not the end of the mentorship; they can always help, even right now. If you need any guidance, they're always ready to help as much as they can. And also, through my mentorship at Tremor, I was able to get a recommendation for the project I'm currently working on right now in Berlin, thanks to Darach, who's part of the Tremor team.

So, what next for me? In the short term, I hope to continue making Tremor's documentation better with the help of Celeste's review; Celeste from the CNCF made a very impressive and thorough review, which I think, if implemented, could make the documentation really awesome. That's what I hope to do. I also want to become more outgoing and involved in the community and open source, and, in the same breath, maybe speak at an in-person conference to overcome my fear of public speaking, because right now I get very nervous; I'm very nervous right now, but I hope to become better. And one thing I really want to do is perhaps mentor someone else, even a little bit, even half as well as I was mentored at Tremor; I think that could make a big difference in someone's life. In case you want to contact me, here are my email, my Tremor community handle, and my blog. I'd like to thank the Linux Foundation for this really life-changing opportunity, and everyone involved: the CNCF team, the TechDocs team, and the maintainers, thank you very much. Also the Tremor team, I cannot thank them enough; they are really supportive and inclusive. Thank you so much, Heinz, Matthias, Natali, and Ana.
And in general, the whole Tremor community: thank you very much for being so nice and great. Thank you so much.

All right. Hello everyone and welcome to my showcase: a plugin system in Rust. I'm Mario, a last-year computer science student at the University of Zaragoza, in Spain. I've always loved open source; I've been involved in many projects, my own and others out there, and I wanted to get some more real-world programming experience. This mentorship combined the best of both worlds, and that's why I joined. For the project, I worked with Tremor, which is basically an EPS, an event processing system. It aims to be pretty fast, and it has lots of cool features for the most common operations such as pattern matching, filtering, and transformation. For those who aren't familiar with an EPS, the typical example would be a system where you have multiple applications with logs in different formats or even different protocols, and you want to transform and process them so that they can be written to your database for later analysis; that would be perfectly possible with Tremor. But that's the boring example; the limit is your imagination, basically. Another example would be to take commands as input, transform and filter them however is needed, and then output them as Spotify commands to control a Spotify player, for example.

So why a plugin system in the first place? Well, the main issue for the Tremor team was compilation times. When I first tried to compile the project, it took over seven minutes on my laptop, and it was only going to get worse; even for incremental changes it was something like ten seconds, so it's not really a good developer experience. Another good thing that comes with plugin systems is modularity: it's a good idea to be able to decouple the deployment of the executable, or the runtime, from all of its components. Some things are just updated more regularly than others, or it's just good to be able to ship them separately, right? And the last reason would be to learn from others: there are other major projects with similar requirements to Tremor, such as NGINX and Apache, and they have been benefiting from plugin systems for a long time now. So it's a good idea to at least try it out and see how it works.

I also wanted to address the "painful path" part of the title. It's a bit of an exaggeration, but there are basically many ways to implement a plugin system, and after some tests and some research, the only one that ended up being performant enough for Tremor's use case is the one called dynamic loading, which is actually perfectly fine by itself. The problem, which wasn't so clear at the beginning, is that it's not super easy to do in Rust. It is possible with Rust, but you basically have to resort to writing C-style code in Rust, so it's nowhere near as idiomatic, as easy, or even as safe. But yeah, in the end I got it working. I'm super happy that I was able to get the project running and compiling in the first place. It's far from perfect, and it's still not in production, but I think it's a good basis for what the plugin system can look like in the future. And the usability, as I was saying, didn't end up being as bad as I thought, even though the code and the concepts are more or less low level.
I was writing C-style code after all, but with a lot of macros, abstractions, and tooling, it does get a bit better. And the last thing that I really liked about the mentorship is that I was able to contribute to many upstream dependencies. I wasn't really expecting so much of that, and I'm very happy I got to help the open source Rust community a bit.

I learned lots of skills in the mentorship as well. I was researching on my own; at the beginning I had no clue how a plugin system could be implemented or what dynamic loading even was, so I had to figure it out by myself. This was possible in part thanks to note-taking: everything I was researching and learning, my entire progress, is written down on the blog I created when I started the mentorship; you can see the link there. Writing everything down was useful not only for me, because it helped me organize my thoughts and remember things I had done in the past, but also for others: if anyone else wants to implement a plugin system in Rust in the future, I'm sure my blog posts will be super useful to them. It was even useful for my mentors, because they were able to track more or less what I was doing at any moment. So I would recommend that all the mentees watching this do something similar, maybe at a smaller scale, but definitely take notes. I also learned a lot about programming itself, from both the prototype and the final implementation I wrote; I had never gotten into as large and scary a code base as Tremor's, so it's good that I was finally able to do so. And the last thing would be organization. The tasks were more or less simple, and everything I had to do was more or less listed in the issues or the pull requests related to the plugin system, but I ended up adopting a proper organization method called kanban. It helped me keep track of the tasks I had already done, the ones I was doing at the moment, and the ones I was going to do in the future. But the experience is the most important part, after all.

In terms of the development, this diagram more or less explains how it went for me. I started by trying to learn as much as possible about the topic, and this was actually the lengthiest part of the mentorship, which I didn't really expect. Once I was confident enough with the topic of plugin systems, I chose an approach and tried it in self-contained, smaller experiments, because, as I said, the Tremor code base was a bit scary at the beginning, so I tried things at a smaller scale first. Once that was more or less working, I moved on to the Tremor code base; it was good, I think, to be able to do those experiments beforehand. And when I applied it to the real code base, I actually ran into failure, catastrophic failure, actually. But you learn from it: if an approach doesn't work, you can try something different, figure out what went wrong and how to improve it. It's part of the game. It's a bit of a cycle: a few times you fail and you learn from it, and once you more or less know how it works, you eventually arrive at something that's good enough. Success.
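To make the "dynamic loading" approach above concrete: the host loads a shared library at runtime and looks up symbols the plugin promises to export. Mario's work did this in Rust over a C ABI; purely as an illustration of the same mechanism, here is a minimal host-side sketch using Go's standard plugin package, where the file name "./plugin.so" and the Init symbol are invented for the example (plugins for this loader are built with `go build -buildmode=plugin`).

```go
package main

import (
	"fmt"
	"log"
	"plugin"
)

func main() {
	// Load the shared object at runtime, not at link time: this is
	// what keeps the runtime and its components separately deployable.
	p, err := plugin.Open("./plugin.so")
	if err != nil {
		log.Fatal(err)
	}

	// Look up a symbol the plugin has promised to export...
	sym, err := p.Lookup("Init")
	if err != nil {
		log.Fatal(err)
	}

	// ...and assert it to the agreed-upon signature. Nothing checks
	// this contract at compile time, which is exactly the "painful
	// path": the boundary is only as safe as the convention behind it.
	initFn, ok := sym.(func() string)
	if !ok {
		log.Fatal("plugin Init has an unexpected type")
	}
	fmt.Println(initFn())
}
```

In Rust the same shape appears as a `libloading`-style library handle plus `extern "C"` functions, which is why the result feels closer to C than to idiomatic Rust.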
So my advice to the mentees would be, as well: learn from your mistakes and failures; they add up to success, for sure. In terms of communication, my mentors were key as well. Matthias was the official one, but I also got tons of help from Darach and Heinz, and basically from anyone in the Discord server who helped me understand how Tremor works and how the plugin system could be implemented. So another piece of advice is to take advantage of your mentors, because they are there for you, and they want you to succeed, of course.

So what's next? Well, in the short term, I actually decided to make this my final-year project. That's kind of the beauty of open source, right? Even after you're done with your mentorship you can continue contributing to the project, and that's what I will be working on for the next few months: getting the plugin system ready for deployment, polishing it, running some benchmarks and some tests, and just making it a bit better. In the long term, I think Tremor helped me a lot as well. My problem was that I was always into computer science, but I never really knew which field I was into, or what my future job could actually look like. I think Tremor is a good example of what I want, because it has a lot of things that I really liked, one of them being really open source friendly, of course. They also have a really healthy work-life balance, and they were really open minded and very inclusive. The topic is also very interesting to me; it's more or less on the backend side of things, and it's low-level-ish coding, not super low level, but still interesting enough to me. So that's more or less what I will be looking for once I graduate. So yeah, lastly, thanks to everyone, basically: to the Rust open source community for all of those libraries, tutorials, and all of that help in implementing the plugin system; to all the Tremor folks out there in the Discord server and everywhere for bearing with me and helping me understand Tremor; and also to the Linux Foundation for making this possible in the first place. Any questions, you can leave them later and I'll answer.

Hello everyone. My name is Santiago. I have recently finished my PhD, and I am currently working in the Data Analysis and Information Management group of CEIT, a research centre in the Basque Country that is part of the University of Navarra. First of all, I would like to thank the project mentors and the Linux Foundation for the opportunity to present this work on the use of NLP and DLT to enable the automation of telecom roaming agreement drafting, done as part of the Hyperledger 2021 mentorship program. You can see the mentorship project presentation, and another presentation that we did for the Hyperledger Telecom Special Interest Group, at the links in the slide footer.

I will start with an overview of the roaming agreement concept. Roaming agreements ensure business continuity and service access for the end customer across the technology stack, including technologies like 5G and IoT. A roaming agreement addresses the technical and commercial components necessary to provide the service to the roaming customer, so it constitutes an essential part of the business, managing issues such as interoperability and charging. Roaming agreements are mainly bilateral agreements based on an established template provided by the GSMA.
Therefore, conducting a dynamic and transparent roaming agreement drafting process provides an advantage for telco companies. The necessity for this work comes from problems detected in the roaming agreement drafting process, which include, first of all, that the roaming agreement negotiation model is slow and costly. The GSMA approach for the wholesale roaming agreement is also quite generalist in terms of negotiation and drafting, so it lacks standardization. From this necessity we establish a framework that provides capabilities such as, first, a defined template that streamlines the roaming agreement drafting process; second, a transparent negotiation process between two mobile network operators; and third, traceability of the roaming agreement drafting process. Blockchain, integrated with other technologies such as natural language processing, can cover these capabilities. An overview of the project can be found in our first Medium article, published as part of this project; the link is shown in the footer of this slide.

For this reason, the main objectives are, first, to build a library that captures the different variations and variables that make up a telecom roaming agreement, and second, a proof of concept of a set of smart contracts that automate the process of drafting and negotiation.

The reference architecture of the project includes not only the participant entities but also the functionality they perform throughout the application life cycle. The entities participating in the roaming agreement are two mobile network operators and the GSMA as administrator. There are functions that they perform in common, such as the maintenance of the Hyperledger Fabric blockchain network and participation in the network consensus. More particularly, the mobile network operators negotiate and draft the roaming agreement among themselves, maintaining the privacy of the information and using the NLP engine to create the template used as part of the drafting process. In addition, the GSMA is in charge of the registration of the mobile network operators, the network monitoring, and the audit and accountability of roaming agreements conducted between two mobile network operators.

The application life cycle is composed of four stages. The first phase starts when two mobile network operators begin a new roaming agreement, through a document that can be shared with the GSMA as a central authority; at this point, the GSMA can create the roaming agreement template using the NLP engine. The second phase involves the registration of the mobile network operators. The third includes the roaming agreement drafting process, and the fourth stage consists of the proposal to reach an agreement.

The two main components of the project are, first, the NLP engine, and second, the chaincode. The overall architecture of the NLP engine is built over infrastructure that takes as input the roaming agreement draft as well as the GSMA template. The processing layer is the NLP engine itself, whose output is the classification of each article into standard clauses, variations, custom texts, and variables, using the Amazon Comprehend tool. Details of the NLP engine can be found in the second Medium article published as part of this project.
This image illustrates some of the methods that make up the chaincode life cycle. The invocation of these methods drives the transitions between the different states of the chaincode. These transitions take place at three levels: first, at the roaming agreement level; second, at the articles level; and finally at the individual article level. The details of the chaincode design can be found in the third Medium article published as part of this project.

The project implementation is based on a set of microservices using Docker infrastructure. Note how the GSMA is the administrator and maintainer of a set of services including the front end, the NLP engine, Kibana, Grafana, Prometheus, and the documentation of the API using Swagger. In second place, the two mobile network operators include the functionality to maintain and monitor the ledger. One of the most remarkable points of this project is the integration with other open source projects that belong to the Hyperledger mentorship program, specifically a project for analyzing Hyperledger Fabric ledger transactions and logs using Elasticsearch and Kibana.

As you can see here, transactions are committed to the ledger as the chaincode is installed and instantiated. And as you can see, this user's attempted access fails, because the GSMA first needs to register the admin and user identities of each mobile network operator. Once that is done, user one is able to register the first mobile network operator, and as you can see, the transaction is committed to the ledger. User two is now able to register the second mobile network operator, and again you can see the transaction committed to the ledger. One mobile network operator is now able to propose a roaming agreement. As you can see in the pipeline we created, the roaming agreement now shows as available, and the proposal needs to be confirmed by the other mobile network operator, in this case mobile network operator one; as you can see, the transaction is committed to the ledger. Now a mobile network operator is able to propose an article, based on the roaming agreement template loaded from the NLP engine. You can see how the status of the roaming agreement changes, and the article is added to the pipeline. One important thing is that a proposed change cannot be accepted by the same mobile network operator that proposed it. So mobile network operator number two proposes changes: this is one change, and here is another change. Now the article status is updated, as you can see in the article pipeline, and mobile network operator one can accept the proposal for this particular article; you can see the transaction committed to the ledger. Mobile network operator one is likewise able to add other articles; in this example, I add two articles, and you can see them included in the pipeline. Since mobile network operator number one cannot accept its own changes, mobile network operator number two accepts the proposed changes. With these three articles in place, mobile network operator number two goes to the pipeline and proposes to reach an agreement. Finally, mobile network operator number one accepts the proposal, and the agreement is reached. Here you can see the last transaction committed to the ledger. Thank you very much.

Thank you everybody for speaking and sharing your experiences. Thank you, graduates; you did an awesome job. Next slide.
I would like to thank our sponsors, IBM, GitHub, and Intel, who have consistently sponsored these programs since 2019, funding projects and helping train the next generation of developers. Thank you so much. Thanks, everybody.