Welcome, everybody, to the last segment of our Mentorship Showcase 2023, where our mentees will talk about their experiences and what they learned during the mentorship program.

Let's start with the beginner's problem. We all struggle with where to start when we want to do something new: something different from what we have been doing, something new to learn, a new career to explore, new technologies to try. The very first problem we all face is that we don't always know what we're passionate about. Some of us do, some don't; we have to explore to find out what we would enjoy doing and which open source project we want to contribute to, because there are many to choose from. Once we decide on a project, we struggle with how and where to get started. The communities can feel daunting and intimidating, and the code base looks complex. At that point we look for resources to start learning a little before we approach the community and ask questions; those resources are important for gaining the confidence to ask for help. After that, we struggle with who can help us and who can answer our questions. Once we figure out the what, the how, the where, and the who, we can move forward.

At the Linux Foundation, we understand that access to resources and learning paths is difficult, and we provide them. You can explore learning paths and career paths at the LF Training site; here is the link, and there are several free classes to take there. You can explore more learning through the Live Mentorship Series, a 90-minute, webinar-style, interactive series in which experts teach on various tech topics. Once you figure out which project you want to apply to, there is a host of projects hosted on the LFX Mentorship platform, so you can apply to one. And lastly, we connect graduates with people looking for talent; our graduates are trained by experts in their open source projects.

I will leave you with this slide with all of the resources. At the Linux Foundation we recognize that access to resources is a barrier for a lot of people, and that is one of the reasons we provide all these resources for people who want to get started with open source. We also emphasize empowered learning: you take ownership of your learning, which means you have various options to choose from. You can participate in part-time or full-time mentorships, learn from the webinars and training resources, and so on. You own your learning, and you're empowered to do so. We continue to improve our programs: we recently surveyed all of our graduates from 2019 through 2021, asking what they would like to see improved in the mentorship programs, resources, webinars, and so on. There is a report out, so please check the link.

With that, I am going to hand off to Abdul Rafi for his presentation. Take it away.

Hello, everyone. My name is Abdul Rafi, and today I'm going to share my experience of the LFX mentorship program. A little about me: I am pursuing a master's in computer applications from Jamia Millia Islamia, New Delhi, India. I applied for Linux kernel bug fixing.
The goals of this program were to introduce us to Linux kernel internals, to understand the workflow of Linux kernel development, and to analyze and fix bugs. Throughout the program I was mostly working on a bug in file system caching. I faced several challenges and got stuck at several places, but nevertheless I learned a lot, and I would like to share some of it.

I gained a good understanding of Linux kernel internals. I found out that the kernel is divided into subsystems for ease of development; examples of subsystems are memory management, networking, and so on. Because I was going through a lot of C code, it made me a better C programmer, and I picked up a lot of good C programming practices. I also learned to use system logs to troubleshoot errors directly. Finally, I learned about static and dynamic analysis of programs: I learned to use tools like Sparse, Coccinelle, and Syzkaller, and one of the most useful skills I picked up was using GDB to debug remote processes. The slide shows a very simple example where the Linux kernel is running in a QEMU virtual machine, and I use GDB to connect from the host to the virtual machine to analyze the Linux kernel boot process.

The program also made me a better Linux user; I found out about many tools that I still use today and that make me more efficient. One of the problems I faced was that the Linux kernel compilation process would abruptly stop, and that forced me to learn more about how my operating system manages memory and processes. I found out about the out-of-memory (OOM) killer, which was the reason the kernel build process was being killed abruptly: the OOM killer kills the process that uses the most resources and has been using them the longest. I tried increasing swap and other solutions, but they didn't fix the issue. Lastly, I disabled the OOM killer, and that worked, although it is maybe not a good solution.

My mentor has been really helpful, and I'm really thankful that she gave me the opportunity to be mentored by her. She answered a lot of our questions, guided us on which bugs to choose, demonstrated how Linux kernel hacking is done, and also gave us overall guidance on our careers.

There were certain things during the program that surprised me. One was the flexibility of the program: we were allowed to choose what we were comfortable with, and we got the support we needed for that. We weren't just fixing bugs and having discussions about fixing bugs; our mentor would also discuss newer technologies being used in Linux kernel development. The open source community itself was a surprise to me. Once I had a problem with Syzkaller, I emailed the Syzkaller Google group, and the next day I got a reply; later I realized it was actually the maintainer who had replied with a solution.

I do have some aspirations. I would like to be a regular contributor to the Linux kernel, and maybe maintain an open source project. I would also love to speak at an Open Source Summit about some technology or some cool tricks that are worth sharing. At last, thank you very much for having me here for this event. If anyone wants to reach out, they can reach me on LinkedIn; I would be more than willing to help. Thank you.
Hey everyone, I'm Ruchi Pakle, an LFX mentee with Open Horizon for the Spring 2022 cohort, and today I will be talking about my LFX mentorship experience. My talk is titled "Roller coaster ride", about my journey, and it will focus on two things: a quick overview of my work, followed by my LFX mentorship experience.

A little about me: I am currently a software engineering intern at Red Hat and a final-year student at MGM College of Engineering and Technology. You can reach out to me on Twitter or on GitHub.

First, why I chose Open Horizon. I always wanted to start learning DevOps and cloud native technologies, but I kept procrastinating and did not know how to get started. One day, while scrolling LinkedIn, I found out that there were LFX cohort opportunities at Open Horizon, and since I was already actively contributing to it, I thought it was a good chance to try my hands at cloud native technologies and DevOps and see whether I would get selected. So I applied to Open Horizon, got selected, and was good to go.

Let's talk about the concept of edge computing. Edge computing is an emerging computing paradigm that refers to a range of networks and devices at or near the user, located close to where data originates. Edge is about processing data closer to where it is being generated, enabling processing at greater speeds and volumes and leading to more action-led results in real time, rather than sending data to centralized database servers.

Now, what is Open Horizon and what does it consist of? Open Horizon consists of a management hub, where administrative operations are centralized; an edge device agent combined with a container runtime, where the operations of the software engineering cycle happen; and an edge cluster agent combined with an OCP/Kubernetes platform. The management hub itself is hosted on Kubernetes, and the architecture covers edge devices, edge clusters, gateways, and the network edge, which is basically edge for regional and local offices.

Let's talk about the main components of Open Horizon. On the edge side there is the Open Horizon agent, which runs on devices and clusters: for devices we use Docker, and for clusters we use a Kubernetes cluster to host our containers. The node agent handles registering the node, negotiating agreements, model synchronization, and monitoring agreement conditions. The hub side runs in a centralized public or private cloud, or on premises. The Open Horizon agent synchronizes with the Open Horizon management hub and performs operations accordingly. The management hub includes a container registry, the Switchboard, the Exchange, the Agreement Bot, the Model Manager, secure device onboarding, the Secrets Manager, and so on.
The Switchboard and the Exchange are backed by the system state; the Agreement Bot and the Model Manager are based on the model repository; and secure device onboarding and the Secrets Manager are based in the vault of Open Horizon.

Next, let's talk about my mentorship experience. There were a few moments where I thought, should I give up, are things just not working? My mentorship started with me not really knowing how things happen in the examples repository of Open Horizon or how things take place. So I started with learning Go. Then the management hub installation took me around a month, and it was very hectic because I had a Windows laptop and the setup kept failing, but I coordinated with my mentor and things worked out. I learned what edge computing is and how the stack fits together, and I kept up with my tasks, like hosting regular meetings with my mentor. It was a great experience for sure.

Now, my key takeaways. Start small if needed: it is not the case that you need to know each and every thing; take little steps, take baby steps first. Take initiative and ownership of your work and of what you are bringing to the table. Seek feedback from your mentors on a constant basis, weekly or every two weeks, whatever works for you. You have to be resilient when things are not working in your favor, because resilience is very important in these programs; being resilient and patient opens new doors and reveals solutions to your problems. Be active in the community: I am a huge community person, and everything I have done in open source is because of my activity in the community and my interactions with people. Being active on platforms like LinkedIn and Twitter helps you learn new things, and being active in the community means you network with like-minded people, you know what is out there, you know what people are doing, who is doing better than you, and how you can improve on yourself. It becomes a healthy competition with yourself: if you are present in the community, you know the path, and you are not dependent only on your college. And lastly, maintain a balance in everything; it's not that you just spend all your time on social media platforms, you have to maintain a balance. So yeah, thank you for listening.

Hello, I am a senior-year computer science student from MIT, India, and I was a mentee at the ELISA Medical Devices Working Group under the Linux Foundation mentorship program. ELISA is short for Enabling Linux in Safety Applications. It is a project where kernel developers, system experts, and safety experts work together to analyze safety-critical applications deployed on Linux, such as medical devices and automotive systems. ELISA members define and maintain a common set of elements, processes, and tools that can be incorporated into specific Linux-based safety-critical systems. ELISA members also work with certification authorities to explore various applications of Linux in safety-critical systems.
OpenAPS is an open source artificial pancreas system. It is used by patients with type 1 diabetes, and it is designed to adjust an insulin pump's insulin delivery to keep blood glucose in a safe range. It is deployed on a Raspberry Pi.

About my project: as a mentee I was required to use Linux kernel tracing and strace to discover the Linux kernel subsystems used by OpenAPS; in other words, I had to see how OpenAPS workloads interact with the kernel. It is very necessary to see how these applications interact with the kernel, because any failure in such cases can damage the device and can also lead to loss of life. ELISA has several working groups, such as medical devices, automotive, and aerospace, and I was working in the Medical Devices Working Group, where we were doing analysis on OpenAPS. I was also required to write a blog or white paper on my findings, which would help the ELISA Medical Devices Working Group focus on the subsystems, system calls, and modules that make up the footprint for safety.

I will talk a bit about system calls now. If any user-space application wants to access hardware resources, it has to issue system calls to the kernel; user-space applications can't access hardware resources directly. They first issue system calls, and then the kernel services those requests. Broadly speaking, there are two modes in an operating system. The first is user mode: it has very limited control over the hardware, and in order to use any system resources it has to issue system calls. The second is kernel mode: it has full control over the hardware, it can execute any instruction, and it can access any memory location.

What are the advantages of this design? System calls allow the kernel to carefully expose key pieces of functionality to user programs. As an example, if some file is only allowed to be read by the root user, but another user could read it by accessing the disk directly, there would be no point in having that restriction. So it is very important for user-space applications to have very limited control over the hardware, and system calls provide an interface by which the hardware can be accessed in a secure and safe manner. Secondly, this design results in less coupling between user-space applications and the operating system, so application designers don't have to design their applications with different hardware architectures in mind.

I will now discuss some tools that I used during my project. The first was strace. It lets us keep track of all the system calls made by a process, so it is very useful for understanding what exactly is happening behind the scenes, and we used it at ELISA to discover the system resources used by a workload when it runs on Linux. It is fairly easy to use: you run the strace command followed by the command you want to trace. Here I have used ls, and these are all the system calls made by the ls command. You can also see the parameters passed to every system call. This way we can see what exactly is happening behind the scenes, we can troubleshoot many situations, and we can also see which files a process depends on and uses for its execution. Overall it is a very useful command.
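To make the user/kernel boundary concrete, here is a minimal C sketch (an illustration only; the file path and buffer size are arbitrary choices) that issues a handful of system calls. Running it under strace shows each call, its arguments, and its return value:

```c
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[64];
    ssize_t n = 0;

    int fd = open("/etc/hostname", O_RDONLY);  /* appears as openat() with glibc */
    if (fd >= 0) {
        n = read(fd, buf, sizeof(buf));        /* read(fd, buf, 64)  */
        close(fd);                             /* close(fd)          */
    }
    if (n > 0)
        write(STDOUT_FILENO, buf, (size_t)n);  /* write(1, ...)      */
    return 0;
}
```

Compiling this and running it as, for example, `strace ./a.out` prints one line per system call, which is exactly the kind of trace used in the analysis above.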
We can also use the -c option with strace to generate a summary report of all the system calls made, the time they took, and their frequency, which gives a high-level overview of what is happening behind the scenes.

The next tool I used was Cscope. It is a command-line tool for browsing C, C++, or Java code bases, and it is very useful for understanding a new code base and exploring the flow of execution of various functions. I used it in my project to find which system calls belong to which subsystem; this way we can find the kernel subsystems used by a process when it is executed. To initialize the Cscope database on the kernel code base, we have to run the commands shown on the slide. This is how the Cscope console looks: at the bottom we enter the queries we want to execute, and the output appears in the upper half of the screen. System calls are defined in the Linux kernel using the SYSCALL_DEFINE macro in their subsystem directory, so we can search for this pattern in the Linux code base and find the subsystem each call belongs to. Here I have executed this query, and we can see various system calls of the memory management subsystem; hundreds of system calls like this can be found under their subsystems in the file column. This way we can find the subsystem in which each system call lies.

During my mentorship I analyzed some workloads under strace to see how they interact with the kernel. I generated one of the workloads using stress-ng. stress-ng is used for performing stress testing on the kernel; it aims to test the robustness of the kernel under very extreme, real-world conditions. It is basically used for putting more load on the system, and it attempts to crash the kernel by exercising various kernel subsystems like the CPU, CPU cache, and so on. This is how the network stressor works: here we can see it has issued various ioctl commands on the network devices, and this is how the output looks when we perform stress testing on network devices. I executed this command under strace, and these are all the system calls generated by this workload. I then used Cscope to map the system calls to their subsystems, and these are all the system calls with the subsystem in which they lie: there are system calls from the file system, memory management, signal, process management, network, time, and futex subsystems.

So strace is a very useful tool for discovering the system resources in use by a workload. Understanding the path a workload takes in the kernel and the system resources it uses is important for avoiding regressions and for safety analysis, and in this way we can do a high-level analysis of the kernel and identify the components relevant for safety. I wrote a detailed white paper on the best practices for tracing a workload, which is available on the ELISA GitHub; this white paper can also help other ELISA working groups in their analysis.

Now I would like to thank my mentors for believing in me, for their stimulating suggestions, and for their unending support. I am also very grateful to all the members of the Medical Devices Working Group, to Kate ma'am, Nicole ma'am, Jason sir, and Jefferson, for their constant guidance and support.
I would also like to thank the Linux Foundation for providing me with this platform. My mentorship experience was very nice: my mentors were very experienced, friendly, and motivating, and I learned a lot from them and from the tasks they assigned to me. I learned a lot about the Linux kernel during this program, and the ELISA community was very welcoming; its members were always ready to help whenever I needed it. I got to learn very powerful tracing tools like strace, ftrace, Cscope, and perf, and the Medical Devices Working Group members were working on STPA analysis for analyzing the safety of OpenAPS, so I learned a lot from their discussions. During this time I realized that kernel development is what excites me the most, as it allows me to work at a low level, and knowing such things helps me become a better programmer. I also gave my first presentation at a technical conference, the ELISA Summit, where I presented my work and my white paper. I have also compiled a list of useful resources on my Medium blog, so if you are interested in learning about the Linux kernel, you can try out these resources. My future plans are to learn more about the Linux kernel, contribute to it, and pursue a career in it. This mentorship made me realize the immense potential of the Linux kernel, which I was not aware of before, so overall it was a very life-changing experience for me. Thank you.

Hello, everyone. My name is Mayan Kumar, and today I will be talking about my experience building a VS Code extension with cross-platform support for Enarx.

I would like to give you a brief introduction about myself. I am a final-year student at VIT Vellore, and I'm very passionate about cloud and cloud security; it's truly amazing to see the kind of innovation taking place in this field. Through my research and learning I came to know about confidential computing and Enarx. To give you a brief understanding: confidential computing allows you to create secure environments in which you can run a program. As your program executes, the data it uses for computation remains encrypted in memory, as well as while it is being processed by the CPU. That is what is great about confidential computing, and Enarx is a tool that facilitates it. Enarx is a framework that allows you to run WebAssembly applications inside secure enclaves, and it lets you run these applications irrespective of the platform you are using, be it Intel SGX or AMD SEV. That's something great about Enarx.

Talking a little more about Wasm: WebAssembly (Wasm) is a binary file format, and a wide range of languages can be compiled to a Wasm binary. This allows applications to be portable and very secure, as they are executed in a sandboxed environment. I truly believe WebAssembly can lead the next wave of innovation in the cloud native field. Knowing all of this, I got to be a part of the Confidential Computing fellowship, and it was truly a life-changing experience for me.

Moving on, I would really like to thank the superheroes of my journey.
My mentor, Nick Vidal, was really supportive and helped me with everything and anything I needed at any point in time. It was really generous of him to guide me and motivate me at every point, in our discussions about design challenges, about the feature we were building, and about how it was going to make an impact on the community around Enarx. I would also like to thank the entire Enarx team, who were really supportive, very active in giving their feedback, and really helped me complete my milestones and come up with a project that will be genuinely useful for developers who use Enarx.

Moving on, I would like to discuss my work. I was building a VS Code extension for Enarx from the ground up. It's a really useful tool with a lot of features already, and it is still in development, so many more features are going to be built into it. Right now, the extension can validate Enarx.toml, which is a configuration file. It also notifies you of the latest Enarx releases: if you have an outdated version of Enarx on your machine, it will give you a pop-up, and you can update your local Enarx installation straight from there. You can also install Enarx using the extension if you do not have it on your machine, and you can run Wasm workloads with Enarx. All this comes with a very targeted error-management system: at any point when you face an error, the error message is specific and includes a link to the documentation, which you can read to fix and recover from whatever failure you have encountered.

Talking a little about the Drawbridge and Codex support that I built into the extension: Drawbridge can be thought of as an equivalent of Docker Hub. Similar to Docker Hub, where you host Docker images, with Drawbridge you can host Wasm binary applications, which can be pulled on any platform, and Enarx can run those images. Codex can be thought of as an examples hub: anyone, from a beginner to an experienced developer, can pull the example code, build on top of it, and create some amazing applications.

Now I want to talk about some of the challenges we encountered while building this extension. One characteristic of Enarx is that it has a wide range of support and it is rapidly evolving and changing. What we wanted to preserve is that the errors from the validation checks of these configuration files stay humanized, readable, and understandable by any user. So, to accommodate the rapidly changing configurations and the rapidly evolving state of the technology, when we load the configuration into the extension we have a generic interface that validates the loaded configuration, which is then stored as JSON, and that JSON is validated against a JSON schema. We can later build a feature around this in the extension where the JSON schema is pulled from the cloud, so we do not have to make a release every time a change is made to the configuration format in the Enarx binary; even when the configuration changes, we do not need to update the extension.
Another thing is that we use community libraries to humanize these errors when we do validation checks, so that the errors are very simple to understand. For example, if you have an error, the extension can point it out; if you have used a field or parameter that is not valid for a particular attribute, you will be notified; and if you use a port outside the usual 16-bit port range, you will be notified of that as well. There are similar checks for the address the application binds to, and many more validation checks.

Another challenge we faced was the release notifications. At the time we were building the extension, we did not have a dedicated backend that fetches and pushes out release notifications along with the assets or binaries released with them, so we had to depend on the GitHub API itself. We made a very generic interface so that we could pull the latest releases from the GitHub API, compare the version installed on the local system with the latest release, and show a notification, all without needing a separate backend service. That was another challenge we had to handle.

Another challenge was supporting multiple operating systems for updating and installing Enarx. When we update and install Enarx, there are multiple dependencies we have to check for, and these dependencies often behave differently on different platforms, for example on Darwin versus Linux, so we had to do rigorous testing to recheck that everything works across all platforms.

Now, talking a little about the takeaways from my experience: I learned to design a tool or piece of software with extensibility in mind. I worked in a structured way with established milestones, so I was able to finish all the work within the set timelines, which was a great experience because it let me work on my time-management skills. I got to participate in discussions on complex design challenges the team was facing while building Enarx, which was a great learning experience. The tool I have made will help many developers, and I will really rejoice as the project grows and I see the impact it makes on developers' work. Another takeaway was that my mentor and the Enarx team were really impressed by my work, which was a huge confidence booster for me. My work was also featured in an Azure plus Intel confidential computing webinar, which was really amazing for me.

In the end, I would like to talk about the vision I have for my work and the direction I want my efforts to take. My vision is a safe internet, a safe environment for everyone who uses it, where security is built into the tools we use, out of the box, and where someone does not have to be a security expert or a computer genius to figure out and enable the settings that keep them safe online. Everything works out of the box, and everyone is safe on the internet. I want all my efforts to go toward making that scenario possible for everyone.
I think a lot of progress is being made in this field, and I am going to keep working towards contributing to this goal. With this, I come to the end of my presentation. I am looking for my next adventure, so if you want to connect with me, you can reach me through these credentials. Thank you.

Hello, everyone. My name is Gautam Menghani, and I will be talking about the summer I spent working on the Linux kernel as part of the Linux Kernel Mentorship Program. I would like to start by briefly introducing myself. I am currently working as a Linux kernel developer, and my main areas of interest have always been operating systems and low-level programming in general. I have contributed to multiple open source projects, and consequently I am a firm believer in the open source philosophy; I think it is the best way to build software.

Let's start by understanding what the Linux kernel is. Linux is a kernel, which means it is the heart of an operating system and is crucial to its working. The Linux kernel can be found everywhere, from smartphones to servers to automobiles and smart devices, so we can safely say that Linux is running the world. Now, what if somebody wants to contribute to the Linux kernel? This is where the kernel mentorship program comes into the picture. I was part of the Linux Kernel Mentorship Program, an initiative taken up by the Linux Foundation to help people get into Linux kernel development.

Going into the program, I had a few key objectives in mind. First, I wanted to learn more about Linux kernel internals: how areas like memory management, networking, and the architecture-specific code work. I also wanted to fix bugs in the Linux kernel to help make it more stable, and to contribute at least five patches along the way.

Now I would like to talk about what I worked on. The first thing every Linux kernel developer does is build the kernel source code with different configuration options. This is important because the kernel has a lot of features, and not all of them are useful all the time in every situation. On the flip side, this also means there are bugs that are reproducible only with certain configuration options, and that is why, when a new version of Linux comes out, it is important to test it with different configurations to make sure it does not have any regressions. We also have a lot of testing and debugging techniques available for the Linux kernel. One of the most common techniques is using static and dynamic analysis tools to discover bugs. I used a static analysis tool, the Clang static analyzer, to find and fix three warnings in the kernel code base. I also used dynamic analysis mechanisms like GDB and the ftrace event-tracing mechanism to monitor the runtime behavior of the kernel and understand the program flow.

During my mentorship I mainly contributed to the kernel selftests and Masim. Kselftest is a test suite that is used to test the kernel from user space; the advantage is that we get to test the kernel-specific code as well as the system call boundary, so it is more like an integration test suite. Within the selftests, I contributed to the data access monitoring (DAMON) tests: specifically, I added a secure-boot check and a huge-page access test. I also worked on refactoring some of the tests in the networking and seccomp sections of the selftests.
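For a feel of what these user-space self-tests look like, here is a minimal sketch in the style of the kselftest harness. It is purely illustrative, not one of the tests mentioned above, and it assumes it is built inside the kernel tree where tools/testing/selftests/kselftest_harness.h is available:

```c
#include <unistd.h>
#include <sys/mman.h>
#include "../kselftest_harness.h"   /* relative path depends on the test directory */

/* Each TEST() runs in its own child process and reports pass/fail. */
TEST(getpid_is_positive)
{
    ASSERT_GT(getpid(), 0);          /* exercises the syscall boundary directly */
}

TEST(anonymous_mapping_works)
{
    void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    ASSERT_NE(p, MAP_FAILED);        /* mapping must succeed */
    EXPECT_EQ(munmap(p, 4096), 0);   /* and be cleanly unmapped */
}

TEST_HARNESS_MAIN
```

Because tests like this run as ordinary user-space programs, they cover both the kernel code under test and the system call boundary, which is exactly why kselftest works well as an integration test suite.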
I also contributed to Masim, a tool that is an artificial memory workload generator used to test memory systems. Masim earlier did not have support for huge pages, and I added support so that it can use huge pages for its workloads.

Now, what did I learn from my mentorship experience? I learned and understood the entire kernel development process, from the mailing lists to the thorough coding standards in the Linux kernel, and also the do's and don'ts of interacting with the community. Personally, contributing to the Linux kernel re-emphasized for me the importance of documentation, as documentation is used by everybody, from mentees and interns to maintainers and users of the product.

Next, I would like to talk about my collaboration with my awesome mentors. My mentors during the program were Shuah Khan and Pavel Skripkin. Shuah is the maintainer of the kselftest suite, the USB over IP driver, and the usbip and cpupower tools. Pavel has been contributing to the Linux kernel since 2021 and is also a reviewer for one of the Wi-Fi drivers. Throughout the program, our mentors held office hours twice a week to make sure that everybody was progressing and not stuck on anything. The technical discussions we had during office hours on various topics across the kernel were invaluable. Our mentors also guided us on maintaining a high level of discourse with the community so that we could contribute effectively, and Shuah gave us a lot of invaluable career advice, which has proved fruitful for me.

My journey in the Linux kernel was not without roadblocks. The biggest roadblock was that I failed to make it into the spring 2022 edition of the Linux Kernel Mentorship Program. Also, when I started working on the Linux kernel, I initially struggled a lot with fixing syzbot bugs, as they are quite varied in nature: anything from race conditions to missing safety checks to logic bugs. I also constantly doubted whether I was good enough for this. Nevertheless, I am happy that I continued to learn and persevere, and I finally overcame the roadblocks I faced.

Next, I would like to talk about my aspirations as a kernel developer. One of my main objectives is to help build the next generation of technology. I would also like to help build more features into the Linux kernel and make it more stable so that it can be used for almost all workloads. And I would like to give back to the community, since I have learned everything I know from the community itself: by helping build tools, contributing to the different mechanisms we have, writing blog posts, and giving talks to share my knowledge along the way.

Finally, I would like to give some advice to future mentees. The most important advice when working on a complex project like the Linux kernel is to remember that persistence is key. To make meaningful contributions to anything significant, it is important to be persistent and keep continuously learning. Another point I feel is important: while working on complex bugs, it is easy to get stuck and demotivated.
So I feel it can be very helpful to interleave the complex bugs with easy-to-medium-level fixes so that you can keep your motivation going. And finally, it is important to have fun along the way: let curiosity be your guide, and you will end up making a meaningful impact. Thank you for listening to my talk. If anybody wants to connect with me, we can connect on LinkedIn. Thank you.

Good morning, good afternoon, and good evening, everyone, depending on your time zone. Today I want to talk to you about learning from and giving back to communities. In this presentation we will explore the importance of open source communities, the benefits of contributing to open source projects, and ways to give back to the community.

A quick introduction of myself: I am a student pursuing a bachelor's degree in computing, currently in my final year. Apart from academics, I am a software quality engineering intern at Red Hat. I have volunteered on the Kubernetes release team as a shadow four times: twice on the Docs team and twice on CI Signal. Previously, I worked as a GSoC mentee with the Keptn organization. And I am here speaking right now because I was an LFX mentee back in spring 2022 with the CNCF.

Let's start with the importance of community. Open source communities are an integral part of the software development industry; they drive innovation and provide access to high-quality software for everyone. These communities are built around the principle that software should be freely accessible to all, with the ability for anyone to contribute, modify, and distribute it. Communities also provide a way for developers to learn and grow their skills.

Everybody wants to know the benefits of stepping into the open source community, and the benefits of contributing to open source projects are numerous. For one, participating in open source communities allows developers to learn new skills and technologies. This can include learning a new programming language, working with new tools, or gaining experience with different development processes. Additionally, contributing to open source projects helps developers build a portfolio of work that can demonstrate their skills and abilities to potential employers. Another benefit is the opportunity to collaborate with other developers from around the world: working on a project together, providing feedback and suggestions, or mentoring others in the community.

Let me share my own case. While applying to the LFX mentorship, I applied to both a Linux kernel project and a CNCF one. I was fortunate to get selected for both, but according to the rules I could only opt for one, so I had to drop the Linux kernel one. Even so, applying gave me a lot of learning about the OpenAPS system, which I was totally unaware of before applying to the mentorship program.

Now, I get a lot of DMs asking how to get involved in open source projects. The benefits are one thing, but first you need to step into open source projects. So I would say, first of all, identify your interests and skills from the wide range of available tech stacks.
Once you have chosen your skills, there are many ways to find open source projects that align with your interests. One way is to search for open source projects on platforms like GitHub; you can do it by searching for specific keywords or tags. Additionally, you can find projects through online communities and forums; the Google Summer of Code site is a big platform to look at. In recent years, the Linux Foundation has also expanded its support through events, training, and certification, as well as through the open source projects it hosts: the Linux kernel project, Hyperledger, Kubernetes and the CNCF sub-projects, and many others.

After you have found a project you are interested in contributing to, it is important to follow the best practices for collaborating and communicating within the community and to stay consistent. This includes reading the project's documentation and guidelines, as well as being respectful and professional when communicating with other members of the community.

After you have done your part, it is important to play your role in the community as a mentor. Everybody invested their time in training you; now it is your turn to mentor others and continue the chain of learning. As you contribute to projects, it is important to give back to the community in return. This can include mentoring others in the community, organizing talks or seminars, or contributing to documentation: without documentation you can't learn a new project, since everybody starts by following the documented instructions. Another way is financially supporting projects and maintainers, for example through sponsorships; many projects rely on donations and sponsorships to fund their development and maintenance.

On the principle of giving back to the community, I once learned from a mentor, while we were talking casually, that it is sometimes hard to convince senior people in a company to contribute time to mentoring, so there is work to do here. I would highly encourage you, once you are trained to some level, to be open to mentoring newcomers in the community. People often don't know the hidden benefit of giving back. A famous quote from Frank Nagle, an assistant professor at Harvard Business School, says that firms that allow their software programmers to give back to the open source community on company time gain benefits; it has been said that paying employees to contribute to such software boosts the company's productivity from using the software by as much as 100%.

Let's take a scenario: when a programmer or engineer runs into a problem, or an opportunity to improve the Linux kernel code, what they generally do is suggest a change to a Linux maintainer. A maintainer is an experienced user who provides feedback and guidance on making the proposed improvement. Through this collaborative process, the less experienced contributor gains a deeper understanding of the system's structure and functions.
Building on the existing literature that has shown the value of learning by doing, many people argue that this learning-by-contributing principle is also a powerful source of competitive advantage for companies in certain circumstances. With that, I have come to the end of my presentation. To conclude, I encourage you to take the time to find open source projects that align with your interests and to get involved in the open source community. Feel free to ping me on socials; my username is the same everywhere. Thank you so much for joining and listening to me.

Hello, everyone. I will be giving my talk for the LFX mentorship showcase: I will talk about my project and then share my experience throughout the mentorship. A little introduction: I am from Pakistan, a recent electrical engineering graduate from UET Lahore, and I have been working as a hardware engineer at Tenix Engineers. My core areas of interest are computer architecture, digital systems, and VLSI.

This mentorship project was offered by RISC-V International in the spring 2022 Linux Foundation mentorship program, and the title of the project is "RISC-V compressed instructions extension for SERV". I was mentored by Olof Kindgren, who is a senior digital design engineer at Qamcom and also a director and co-founder of the FOSSi Foundation.

Let's get started with the project. I will explain each word of this title and give you an overview of what was intended and what I did in this mentorship project. Many of you have probably heard about RISC-V. RISC-V is the fastest-growing ISA; it is open source, open standard, and royalty free. It has a modular architecture: the base ISA is mandatory, and there are many optional extensions, such as the vector extension, floating point, and so on, and it keeps developing.

We will be talking about the compressed extension of RISC-V. The compressed extension is for deeply embedded applications where we are limited by the size of the memory. By default, RISC-V binary instructions are 32 bits in size, but those instructions that can be compressed will be compressed by the compiler into a 16-bit format, so the instruction size is reduced by half. It is therefore used in very deeply embedded applications, where area is at a premium and we are limited by the size of the memory.

The next word is SERV. SERV is a bit-serial RISC-V CPU, developed by my mentor Olof in 2018. It is an award-winning CPU and the smallest open source RISC-V CPU. The special thing about it is that it is bit-serial. If you look at the diagram on the left: conventional CPUs perform operations in parallel; for example, if we want to add two bytes, we need eight single-bit adders, as you can see in the left figure. In the bit-serial approach, we use one single-bit adder, add bit by bit, and then concatenate the results; this is how SERV works. The overhead is that it takes roughly 32 cycles to complete one instruction instead of a single cycle, so it trades speed for area and targets deeply embedded systems.
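To illustrate the bit-serial idea in software terms, here is a small C sketch (purely illustrative; SERV itself is written in Verilog) that adds two 32-bit words one bit per step with a single-bit full adder, the way a bit-serial datapath works over roughly 32 cycles:

```c
#include <stdint.h>
#include <stdio.h>

/* Add two 32-bit words one bit per "cycle" using a single-bit full adder,
 * the way a bit-serial datapath like SERV does conceptually. */
static uint32_t bit_serial_add(uint32_t a, uint32_t b)
{
    uint32_t sum = 0;
    unsigned carry = 0;

    for (int i = 0; i < 32; i++) {              /* one bit per cycle */
        unsigned ai = (a >> i) & 1;
        unsigned bi = (b >> i) & 1;
        unsigned s  = ai ^ bi ^ carry;           /* single-bit full adder */
        carry       = (ai & bi) | (ai & carry) | (bi & carry);
        sum        |= (uint32_t)s << i;          /* concatenate result bits */
    }
    return sum;
}

int main(void)
{
    printf("%u\n", bit_serial_add(123456u, 654321u));  /* prints 777777 */
    return 0;
}
```

The loop body is essentially one full adder reused 32 times, which is the same area-versus-cycles trade-off described above.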
The SERV CPU is managed by the FuseSoC package manager. FuseSoC allows you to compile, build, and run SERV for different targets, including simulation targets, FPGA targets, and open source ASIC targets, in a very minimal and easy way. SERV also comes with the Servant SoC, which gives you peripherals such as a UART and GPIO. SERV supports the base integer instructions: load, store, arithmetic, and logic instructions. It also has support for the multiplication and division extension, which is a ratified RISC-V extension, but before this mentorship it had no support for the compressed extension. It also supports the privileged architecture, enough to run the Zephyr real-time operating system.

Let's talk about the integration of the compressed extension into SERV. On the left side of the slide you can see the memory interface of SERV: SERV sends an address to the memory along with a strobe bit, and in return the memory sends the corresponding data and an acknowledgement. For compressed extension support, we first need a compressed decoder that decodes an incoming compressed 16-bit instruction into its equivalent uncompressed 32-bit instruction. We also have to integrate a realigner module, because with compressed instructions there will be inherent misalignment of instructions with respect to memory, and the realigner takes care of that. We also have to make some tweaks inside SERV to generate an address of program counter plus two, in addition to plus four. The whole compressed extension support is parameterized and is managed by FuseSoC, as described before.

This is the updated diagram of SERV after the integration of compressed extension support. You can see that the address and strobe bit from SERV are handled by the realigner, which generates the final address and strobe bit; in return, the realigner gets the instruction and the acknowledgement. If it is a compressed instruction, it is decoded; otherwise it is simply passed through to the SERV core. So these two essential modules needed to be implemented, and the most challenging one was the realigner, because it is a digital state machine. This slide shows the whole life cycle of the compressed decoder, but I don't need to explain it here; you can read about it in the final report and get an understanding of it from there.

Once we have support for the compressed extension in SERV, we have to test whether the compiled compressed instructions actually work on the core. For that, we have the open source RISC-V architecture compatibility tests, which include tests for the compressed extension. After adding the support, the next step was to run these tests on SERV and keep debugging until all the compressed-extension tests pass. There are 27 compressed-extension tests, and all of them pass on SERV.
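To make the compressed-decoder idea concrete, here is a rough C sketch of the expansion for just one compressed instruction, C.ADDI, into its 32-bit ADDI equivalent. It is purely illustrative; SERV's actual decoder is written in Verilog and handles all the compressed formats:

```c
#include <stdint.h>
#include <stdio.h>

/* Expand the 16-bit C.ADDI encoding into its 32-bit ADDI equivalent.
 * C.ADDI (CI format): [15:13]=000, [12]=imm[5], [11:7]=rd/rs1, [6:2]=imm[4:0], [1:0]=01
 * ADDI   (I-type)   : imm[11:0] | rs1 | funct3=000 | rd | opcode=0010011             */
static uint32_t expand_c_addi(uint16_t insn)
{
    uint32_t rd  = (insn >> 7) & 0x1f;                              /* rd is also rs1 */
    int32_t  imm = ((insn >> 2) & 0x1f) | (((insn >> 12) & 0x1) << 5);
    if (imm & 0x20)                                                 /* sign-extend the 6-bit immediate */
        imm -= 64;

    return ((uint32_t)(imm & 0xfff) << 20) | (rd << 15) | (rd << 7) | 0x13u;
}

int main(void)
{
    uint16_t c_addi = 0x1575;                   /* c.addi a0, -3 */
    printf("0x%08x\n", expand_c_addi(c_addi));  /* prints 0xffd50513 = addi a0, a0, -3 */
    return 0;
}
```

In hardware, this kind of expansion sits in front of the normal 32-bit decoder, so the rest of the core never has to know whether the original instruction was compressed.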
We also ran the Zephyr real-time operating system on SERV with the compressed extension enabled, just to make sure that the extension is implemented correctly.

After completing the main part of the project, I had time for some additional contributions: I added support for another FPGA board to the Servant SoC, fixed some bugs related to the privileged instructions in SERV, and made sure that all the privileged-architecture compatibility tests also pass. These are some additional contributions on top of the main project. That is all from the project side, but you can learn more about the technical aspects and implementation details in my final report, which is available on Medium. You can also visit the GitHub repository for SERV; it is an amazing CPU, and I have kept contributing to it since the mentorship. My contributions to SERV can be followed through the last link. If you are interested in computer architecture, digital systems, and RTL design projects, or you want to collaborate on an open source project, do reach out to me through email or LinkedIn.

I would like to thank my mentor. Let me talk about my experience: it was really great. It was my first open source experience; I had not really contributed before this mentorship, it was my introduction to open source, and I have been contributing ever since. My relationship with my mentor is not restricted to the mentorship alone; it is a long-term one, and we still sometimes talk about ideas and projects. I would also like to thank the Linux Foundation, which offers such a great platform for new contributors to come and show their capabilities, and RISC-V International for offering this amazing project.

As for my advice for future mentees: this was my first experience, and after it I contributed to the Google Summer of Code 2022 program as well, and I keep contributing to my own open source projects. My advice is that if you find a program like LFX or Google Summer of Code or a similar funded program and you want to get enrolled, the best thing you can do is write a good proposal and list down the things you need to implement. Once you write it down and list the steps, you are more confident in your skills, and you know whether you will be able to do it or whether you need to do more research and work before applying to the project. That was all from my side. Thank you very much.

Thanks, everybody, for presenting, and thank you so much for sharing your experiences. It has been great to listen to all of you talk about what you learned. This concludes our mentorship showcase for this year; we will do one more next year. Thanks, everybody. Bye.