Welcome, everybody, to our very first LFX Mentorship Showcase. My name is Shuah Khan; I am a maintainer and a Linux Fellow at the Linux Foundation. In this role I lead the LF mentorship programs: I get to learn and share, design mentorship programs and launch them, and work on programs that empower others to learn and share.

So let's talk a little bit about the beginner's problem. When we are starting out and trying to get into a new technology, we face multiple problems. The first is worrying about where to start: deciding what our passion is and which open source project is the one we really want to pursue. The second is figuring out how to get started, because we have to learn about the project itself. The next problem is where to find resources, because we want to understand a bit about the project, how its development works, and what the technology behind it is before we start asking the community questions. The community always looks like a group of experts, and we don't want to show up with questions that don't make sense, so we want some level of confidence before we approach them. And after that, we wonder who will help us on our journey to becoming expert developers in that area and that project.

At the LF, we provide resources for you. On the LF Training site, for example, you can explore learning paths in various technical areas: kernel, blockchain, and so on. You can take several courses, many of the beginner courses are free, and there are a lot of webinars, tutorials, and blogs you can explore.
After that, once you figure out, okay, this is my path, you can explore LF Live webinar sessions; we have several webinars uploaded, and we run new ones once a month. You can check those out and learn more about the area you want to get into. After all of that, if you decide you want a one-on-one connection with a mentor and to work with a mentor, you can apply for the LFX Mentorship program. We have mentorships in various areas, and we connect experts in various open source projects with mentees and new developers. So that's the path you can explore.

In this showcase, we are trying to connect with our graduates. Once you graduate from the mentorship program, you are looking at what comes next: how to get more engaged in the project, and how to use what you learned in a professional role. So what we are doing is connecting new graduates with people who are looking for talent in this forum. That's why we're here, and we have eight of our graduates speaking after me. First, I want to take a moment to recognize our mentors. If there are mentors in the audience, please introduce yourselves. I would like to thank you: without your efforts, your time commitment, and your commitment to mentoring, we wouldn't be able to do what we do. So, with that, I'll turn it over to you. Would you like to go ahead and start?

Hi, my name is Hartanto. I'm a university student, and I'm also a committer on the COBOL Programming Course after becoming a mentee back in the summer. My presentation today is titled simply "COBOL: Now and Tomorrow." As you can see from the title, my project for the 2021 summer mentorship is about the COBOL Programming Course, which aims to offer free and accessible education to individuals interested in learning COBOL and the z architecture.
You may have heard of this 60-year-old programming language, and some may have thought: what is a 20-year-old student doing with COBOL in 2021? So, why did I decide to study COBOL and join this mentorship program? First, in my opinion, COBOL is simple. It's designed to be readable even by non-technical auditors, so they can audit the business logic of the application. On the screen now you'll see a very famous example that students all over the world are taught: Hello World, written in COBOL. It has two parts: an identification division, which identifies the program, and a procedure division, which contains the main logic of the program. As you can see, it takes just three lines to display the string "hello world."

But it's not only COBOL's simplicity; it's also that COBOL is everywhere. Let me explain. Back in 2020, when there was a sudden rise in interest in COBOL, the community responded. The Open Mainframe Project, which was founded back in 2015 as a focal point for the development and use of open source on the mainframe, launched its COBOL efforts: the COBOL Working Group, the COBOL Programming Course, and also COBOL Check, which is a unit testing application for COBOL. As a fun fact, the COBOL Working Group recently did a survey, and here are some of their findings. First, based on the survey results, they extrapolated that there are still around 250 billion lines of COBOL in use today. A significant portion of this comes from the financial sector and from software vendors; some companies have more than 1 billion lines by themselves. Furthermore, they know that COBOL will stick around. However, they are concerned with finding talent: many organizations expect that they will have staffing issues in the future, which is why the COBOL Programming Course will help in raising talent. Which brings me to the third reason I decided to join this mentorship program.
I found that the community has been nothing but amazing. Everyone welcomed me from the start, and they are open to listening to someone who is way younger than they are. So I decided to give back to the community and join the mentorship program to help improve the course and make sure that everyone can get an opportunity to learn.

But how are we teaching COBOL now, 60 years from the day it was first designed? Originally, COBOL programs were developed via punch cards, and as time went by, we moved to 3270 terminals, as you can see there on the bottom right. This is an issue in today's era, as many students like myself are taught using modern IDEs such as Visual Studio Code, Eclipse, or even Atom. There are some IDEs intended for mainframe development, but they are almost always expensive and proprietary. Then, a couple of years ago, with the help of our sister project Zowe, it became possible to code COBOL for the mainframe in Visual Studio Code. The Zowe Explorer extension enables you to edit your COBOL program, compile it via JCL from a data set, and see the result of your program, all from Visual Studio Code. The screenshot you are seeing now is one of the labs that we provide in the COBOL Programming Course. And it is not just the extension: we have a CLI tool, Zowe CLI, which enables full CI/CD development for mainframe-based applications. The best part is, our course is completely open source, at no cost to the course taker. It is one of the highest-starred projects in the Open Mainframe Project, and we have successfully created this pipeline.

So, what did we do during our summer mentorship? Back during the 2021 summer mentorship, the course took two mentees: Ahmed from Egypt and myself. We improved upon the existing COBOL Programming Course and added more material to it. We also made sure that the content of the course was updated to follow the latest developments in Zowe Explorer.
We interacted with the community and took ownership of many of the issues that were reported. Additionally, as summer mentees of the Open Mainframe Project, we got a chance to present at the Open Mainframe Summit. We submitted a call for papers along with our mentors, Michael Bauer and Sudharsana Srinivasan, for a panel discussion, and we shared what we had learned so far about COBOL with the audience. All the mentees also had their own dedicated slots to present the projects they did. Furthermore, thanks to our mentors, we were able to network with subject matter experts and learn from their experiences. In total, Ahmed and I managed to submit more than 20 PRs, and we closed around 10 issues submitted by the community, which, considering that both of us were new to open source, is a very considerable feat in my opinion. We also added more topics to the course, such as table handling, among other improvements. These are things we knew were missing from the course when we first took it. There are also some topics being built at this very moment, such as the internal sort and merge facilities, subprograms, object-oriented programming for COBOL, and how COBOL interacts with CICS.

So, what were our favorite parts of this mentorship program? This is a question that was asked of us during our Open Mainframe Summit presentation, so I'm going to give you a few seconds to read the slide, and then summarize what Ahmed and I said. Our favorite part of this mentorship program came mainly from our interactions with the people and the subject matter experts. We were able to network with them, and they were very open to our questions. Additionally, through this mentorship program we were able to learn more about open source contribution and how to start doing it, considering that both of us were new to this field. And if you're watching this and you're thinking of applying for the mentorship program, our advice is simple: don't be afraid to ask questions.
Yes, your mentors may have a lot more experience than you; the community members may have even more; your fellow mentees may know more than you. But don't let that discourage you. Use it as an asset: learn from them and put in the effort to do your project. As my friend Ahmed said, at the end of the day, you will be fine. So our challenge to you is this: give it a go. There's absolutely no harm in trying out COBOL, even today. You may find out that you like it, and if so, it might end up being a career for you. If not, then so be it. It's not a matter of learning this or that, Java or COBOL or C: learn all, explore all, add them to your technical kit. As students, we often have time to explore, so we had better make full use of it.

And that's all for my presentation. If you're interested in taking the course, scan the QR code on your right and you'll be directed to our GitHub page. If you're interested in learning more about the Open Mainframe Project mentorship program, the QR code on the left will take you to the site. Feel free to take a look at our projects and explore the mentorship program. Not all of our projects are written in COBOL or other legacy languages, as they are commonly called; we have projects written in Java, Node.js, and also in Python. So take a look and consider it for your next mentorship program. If you still have any questions, feel free to reach out to me on our event platform, or on Twitter or LinkedIn. And that's all. Thank you.

All right. Hey everyone, my topic for today's discussion is an architecture-independent solution for vectorized 2D convolution in image processing. Let me start with an introduction of myself. I am a pre-final-year undergraduate ECE student from VJTI, India, with a primary interest in high-performance computing and machine learning.
I was a Linux Foundation mentorship participant in 2021, with RISC-V as my host organization. My purpose in applying to this mentorship was to gain hands-on experience in leveraging low-level register functionality to obtain significant speed-ups in high-level algorithms. Apart from that, I was a Google Summer of Code participant in 2021 with the Boost C++ libraries.

Let's have a look at a brief overview of what this project was about, starting with the predetermined goals. The first and foremost goal was to develop a platform-independent implementation of 2D convolution using MLIR in the Buddy Compiler ecosystem. The Buddy Compiler ecosystem is a project specifically dedicated to efficient RISC-V IR generation: we implement techniques that are more favorable for the RISC-V architecture and that increase performance on such machines. Apart from that, I was expected to develop some common image processing features, such as variable anchor point positioning and boundary extrapolation. We also agreed on comparing and benchmarking our results against OpenCV's implementation, in order to get a better idea of where we stand against state-of-the-art methods.

So what was actually completed? All the predetermined tasks mentioned in the previous slide were completed, and the code was merged into the main project. Apart from that, my mentor and I had a discussion and agreed on creating a digital image processing (DIP) dialect for efficient IR generation for image processing. I also developed a custom algorithm, on top of an existing CVSM approach, for performing DIP-specific convolution.

Before we move further ahead, I'd like to clarify some terminology. In the general sense, convolution involves flipping a 2D kernel in both the horizontal and vertical directions.
However, the literature usually does not mention the flipping part, and many times uses the term interchangeably with correlation. So here too, for consistency, I am using the term convolution without explicitly specifying any preprocessing done on the filter kernel, so that we do not have any further confusion.

Let us now understand what this project was actually about. As you might have already guessed from the title, the two main pillars of this project were convolution and vectorization. As many of us may already know, convolution is a mathematical operation; specifically, in the context of image processing, it is used for extracting information from an image using a two-dimensional kernel. The size and values of this two-dimensional kernel determine the transformation effect of the convolution process.

Now let's look at vectorization. In simple terms, vectorization may be defined as the art of getting rid of explicit for loops in your code. This is done by some special, magical registers known as SIMD registers or vector registers, depending on the architecture used. These registers perform the same operation on all the values stored inside them. And since they sit at the lowest level, vectorization is hardware dependent, which makes it a pain to write generic code that is capable of extracting performance from these registers on all hardware platforms. This was the actual problem we were targeting in this mentorship program.

So, as the title says, we applied vectorization to a 2D convolution algorithm. Since vectorization was applied, we were able to process multiple pixels at the same time, which resulted in an implementation that is many times faster than the usual scalar loops.
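To make the two pillars concrete, here is a small illustrative sketch in Python/NumPy, written for this summary and not taken from the project's MLIR code: a scalar 2D convolution with explicit for loops, and a vectorized version that removes the inner loops (both in correlation form, matching the convention above).

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv2d_scalar(img, k):
    """Naive valid-mode 2D convolution: every multiply is an explicit loop step."""
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            acc = 0.0
            for u in range(kh):
                for v in range(kw):
                    acc += img[i + u, j + v] * k[u, v]
            out[i, j] = acc
    return out

def conv2d_vectorized(img, k):
    """Same result, but the loops are replaced by whole-array operations."""
    windows = sliding_window_view(img, k.shape)   # shape (oh, ow, kh, kw)
    return np.einsum('ijuv,uv->ij', windows, k)   # contract each window with k
```

Here NumPy plays the role the vector registers play in the real implementation: the same multiply-accumulate is applied to many values at once instead of one element per loop iteration.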
Though our implementation works on many architectures, it benefits a lot from dynamically sized vector elements, which are available in modern architectures such as RISC-V or Arm SVE.

Let's now have a look at the results obtained. We are comparing our results with OpenCV. This was the original Lena image, and these three are the results I obtained after applying Sobel 3x3, 5x5, and 7x7 filters. All of them were in agreement with OpenCV's output.

Let us now have a brief look at the benchmarks. These benchmarks were implemented using Google Benchmark on an image of dimensions 1024x1024, on an AVX-512 machine. They are plotted as time versus iteration count, where the iteration count is the total number of iterations both algorithms run for in each execution of the Google Benchmark binary. As you can see, for a kernel of size 3x3 the DIP implementation performs better than OpenCV, and we can safely say the performance improvement is a bit more than twice that of OpenCV. Now let's see what happens when we increase the kernel size. As we go from 3x3 to 5x5, the DIP implementation still performs a bit better, but the difference is reduced, and OpenCV is not far behind. And if we increase the kernel size further, to 7x7, OpenCV actually performs better than the DIP implementation. This is an effect we are currently investigating as we try to improve performance for larger kernels as well.

Let us have a brief look at what I learned. The first thing I learned is how compiler optimization works. Specifically, I created a novel MLIR dialect named DIP in the Buddy Compiler project to add support for digital image processing.
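A toy version of such a benchmark can be sketched in Python with `timeit`. This only shows the shape of the comparison, time as a function of kernel size for a fixed iteration count; it is not the Google Benchmark setup from the talk, and produces no comparable numbers.

```python
import timeit

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv2d(img, k):
    # Vectorized valid-mode 2D convolution (correlation form).
    return np.einsum('ijuv,uv->ij', sliding_window_view(img, k.shape), k)

def bench(img, sizes=(3, 5, 7), repeats=3):
    """Time the convolution for several kernel sizes over a fixed number
    of iterations, mirroring the time-vs-iteration-count plots described."""
    results = {}
    for n in sizes:
        k = np.ones((n, n)) / (n * n)  # simple box filter of size n x n
        results[n] = timeit.timeit(lambda: conv2d(img, k), number=repeats)
    return results

if __name__ == "__main__":
    img = np.random.default_rng(0).random((256, 256))
    for n, t in bench(img).items():
        print(f"{n}x{n}: {t:.4f}s")
```

In a real harness you would also pin the CPU frequency and repeat runs, as Google Benchmark does, before trusting the numbers.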
I also created a lowering pass for lowering 2D convolution, in the image processing fashion, via the Vector, Affine, MemRef, and SCF dialects of MLIR, and I interacted with the MLIR JIT tooling for IR debugging and development. The next thing I learned was related to memory layout and low-level register handling. This specifically includes loading, storing, and prefetching, as well as processing some elements of a register in a fixed batch size using masks. I was also exposed to the dynamic vector length feature of the RISC-V ISA's vector design, which helped me improve the performance of my algorithm quite significantly, and to improved memory access patterns for complete utilization of the cache.

The third thing I learned is related to image processing and math. I created a custom algorithm for handling variable anchor point positioning and boundary extrapolation for dynamic image dimensions, on top of an existing CVSM approach. I also developed an outline for adding support for separable convolution with rank-1 separable kernels.

Let us now briefly discuss my future plans. We are currently working on a research paper documenting our custom approach, as mentioned in the previous slide. We also plan further development of the DIP dialect, adding more image processing operations to it. We want to implement multi-threading support in the DIP dialect on top of the pre-existing support in MLIR, which would help us improve its performance further. My long-term career goal is to be part of the upcoming performance evolution and to contribute significantly to increasing the efficiency of modern systems after the end of Moore's law. Apart from that, I would like to attend an in-person Linux Foundation event.

I would also like to thank my mentor, Mr. Hongbin Zhang, for taking time out of his schedule and playing an integral role in the completion of this project.
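As an illustration of what variable anchor points and boundary extrapolation mean, here is a hedged NumPy sketch of my own, not the DIP dialect's algorithm: the anchor decides which kernel cell sits over the output pixel, and padding supplies the extrapolated boundary values so the output keeps the input's size.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv2d_same(img, k, anchor=None, border='edge'):
    """'Same'-size 2D convolution with a movable anchor point.

    anchor: (row, col) of the kernel cell aligned with the output pixel;
            defaults to the kernel center.
    border: any np.pad mode ('edge', 'reflect', 'constant', ...), which
            plays the role of boundary extrapolation.
    """
    kh, kw = k.shape
    ay, ax = anchor if anchor is not None else (kh // 2, kw // 2)
    # Pad asymmetrically so the anchor cell lines up with each input pixel.
    padded = np.pad(img, ((ay, kh - 1 - ay), (ax, kw - 1 - ax)), mode=border)
    windows = sliding_window_view(padded, k.shape)
    return np.einsum('ijuv,uv->ij', windows, k)
```

Moving the anchor shifts which padded region each output pixel draws from, which is exactly why anchor handling and extrapolation have to be designed together.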
He provided meaningful feedback and constructive criticism whenever required. His guidance was really helpful, and his experience was invaluable for the completion of this project. I'm very grateful for his time and his efforts. Finally, I would also like to thank the amazing people at RISC-V and the Linux Foundation for providing me with this amazing opportunity. The staff at RISC-V were really friendly and answered all of my non-technical questions as soon as possible, which ensured the smooth completion of my mentorship project. I'm really grateful for all their effort and time. These learnings will definitely play a significant role in all of my future endeavors, so thank you for that. This slide contains my social media handles; you can connect with me on LinkedIn, Twitter, or by mail. I think that marks the end of my presentation. Thanks a lot for your time; you've been an excellent audience.

Welcome everyone, again, to the Linux Foundation Mentorship Showcase. I'm Sudhanshu Dubey, and welcome to "COBOL Modernization: Renovating Core IT Applications." First of all, thanks to Hartanto for setting the stage with COBOL and why you need to learn it. As he said, just give it a go. One or two years back I gave it a go, and now I'm here talking about COBOL modernization.

First of all, who am I? This is me from a year back, when I was a student. That is the important thing, because when I did the project I was a student at Guru Nanak Dev Engineering College; I did this project as a student. And what was the project about? The project was to demonstrate the options for modernizing an existing COBOL application to address business needs and possibilities. Our goal was not to create a new COBOL application or anything like that; we had to modernize an existing one.
As such, our tasks were, first of course, to modernize the legacy banking application. By modernization I mean changing the architecture of the application: these legacy applications are known for monolithic architectures and very old interfaces, and our task was to move to microservices and modern interfaces within the existing COBOL application, without changing the COBOL code much. That is what we call modernization here. The second task was to generate the documentation required so that everyone can understand and reproduce the project on their own, because that is one of the goals: we want to show that if a student can do it, then any business can too. And the third task was to tell the world, loudly and clearly, that if a student can do it, any business can too, using conferences and webinars, and we had a lot of those.

So that was the project. Now, what did I learn? First of all, COBOL and its friends. What do I mean by friends? What I learned was how to read COBOL code spread across multiple files. This was an existing COBOL application, a demo application, but a working one. It had 60-plus COBOL files, and I had to read all of them to understand the architecture. That was the second thing I learned: how to understand the architecture of a COBOL application, how the code interacts with itself, how it works. Third was interfacing COBOL with other languages: how we can connect COBOL with other languages so that they can talk to each other and exchange data. And fourth, the tools that help you do all of that. Of course, not everything was done from scratch by me; there are tools that help with modernization, and I had to learn those tools as part of the project.
So here we have COBOL. In the project, using JSON as the intermediate, we learned how to interface COBOL with Android and with PHP, and, in fact, directly with Python without any intermediate like JSON. So I learned COBOL, and these were its friends that I learned in the project.

The next thing was the modernization process. This process has four steps. The first is analyzing the COBOL application: understanding what the application already is, what it does and how it does it, understanding the architecture. The second is identifying service candidates. Our goal was to create microservices, so we had to identify pieces of the code that can work standalone and can be taken out of the monolithic architecture. Third, after you identify such a piece, you take it out and put it somewhere else, somewhere like a cloud platform, out of the monolith running entirely on the mainframe. (You can keep it on the mainframe too, but that's a different topic.) The last step is connecting them to a new, modern UI. This is one of the main things with legacy COBOL applications (I just lost my electricity, but we will ignore that): they have very primitive interfaces, so our goal was to give them a new, modern UI.

The third part was presenting. Like I said, we had to get the word out and tell everyone about the project. We presented at six conferences and webinars, including the ECC Enterprise Computing Conference, SHARE Virtual, and a Micro Focus webinar. Micro Focus was the company sponsoring this project, and all my mentors were from Micro Focus, so we had a webinar there. We also presented at the Open Mainframe Summit, the Open Source Summit, and GSE UK. So we covered almost all the major mainframe audiences and organizations. And we also had to write some blogs and articles.
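As a hedged sketch of what "JSON as the intermediate" can look like from the calling side (this is my illustration, not the project's actual code; the executable name and fields are made up), a front end can run the COBOL program as a subprocess and exchange JSON over stdin/stdout:

```python
import json
import subprocess

def call_service(argv, request):
    """Send a JSON request to a service on stdin and parse its JSON reply.

    `argv` would be something like ["./bankapp"], a hypothetical COBOL
    executable that reads one JSON document and writes one back. Because
    the exchange is plain JSON text, the caller's language does not matter.
    """
    proc = subprocess.run(
        argv,
        input=json.dumps(request),
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(proc.stdout)
```

A PHP or Android front end would do the same thing over HTTP instead of a pipe; the point is that JSON keeps the COBOL side language-neutral.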
My blogs are currently on the Open Mainframe Project blog, so you can read them there. There are still more that I need to write, but yes, I have written some blogs and articles.

So, how was my mentorship experience? A mentorship experience of course depends on the mentors, and mine were awesome, so let's talk a bit about them and thank them. First of all we have Misty Decker from Micro Focus, and she got me into the project: she was the one who found me through my application and selected me for the project. We had a lot of interactions before the project started about what the project was and what was needed, and then she helped me with all the conferences. In all of the conferences I just mentioned, she was my co-presenter, and we had a lot of fun doing this; we were on our own little world tour doing all the conferences. She also helped me in writing all the blogs, one of her duties as a mentor, and she was a constant source of motivation. Misty is the kind of person who always keeps you motivated. She's always smiling, so whenever you talked with her (we had weekly meetings), whatever mood you were in, it got better. Thanks a lot, Misty, for all the work and everything.

Then we had Guy Sofer. Guy was my technical mentor. He taught me all the tools that I used, which were Micro Focus tools, and helped me understand the current architecture of the application, what the application was all about. He helped me with the coding and debugging, with everything I added and everything I changed, and he guided me on all the technical aspects, everything that was technical, and that did not stop at the code.
Even for the blogs, if I wrote something technical about the project, we would get it verified by Guy. He was the technical person on the mentor team. And then we had Gary Evans, also from Micro Focus. He provided me with the virtual machine that I worked on; I could have worked on my own system, but it was not good enough for all the stuff we did, so we used a virtual machine, which he provided. He also helped me with coding and debugging: when Guy was not available, Gary was the person I would reach out to with all my questions, and I have a tendency to ask a lot of questions. He was the voice of wisdom on the team, probably the most experienced person on the mentorship team. If there was anything none of us understood, Gary had the answer. He was our guy for that.

Now, three unexpected happenings: three things that happened which we were not expecting at the start of the project. Even the mentors were not expecting them. The first was that I interfaced COBOL directly with Python. This was not in the original plan, which was to create one website for all the interfaces. But, almost on a whim, I decided to give each service a different interface. So we had a mobile interface built in Android, a web interface built using PHP, and another web interface built using Flask in Python. I did that because I could: I am familiar with all of these technologies, so I decided to go with it. But on the internet I could not find any previously documented way of connecting Python with COBOL directly. So I had to experiment and reach out to the COBOL forum and so on. And I was successful: I was able to have Python talk directly with COBOL, much like Python talks with C.
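For the curious, a direct Python-to-COBOL call can be sketched with ctypes, the same mechanism Python uses to talk to C. This is a hedged illustration, not the project's code: it assumes the COBOL subprogram has been compiled into a shared library (for example with GnuCOBOL, `cobc -m add.cob`) whose PROGRAM-ID is `ADD` and which takes three integers by reference; the library path and all names are made up.

```python
import ctypes

def call_cobol_add(libpath, x, y):
    """Call a hypothetical COBOL ADD subprogram in a shared library.

    COBOL linkage-section parameters are passed by reference, so each
    value is wrapped in a ctypes integer and passed as a pointer.
    """
    lib = ctypes.CDLL(libpath)  # e.g. "./add.so" built from add.cob
    a, b, result = ctypes.c_int(x), ctypes.c_int(y), ctypes.c_int(0)
    lib.ADD(ctypes.byref(a), ctypes.byref(b), ctypes.byref(result))
    return result.value
```

With GnuCOBOL-style runtimes you may also need to initialize the COBOL runtime before the first call; the details depend on the compiler used, which is exactly the kind of thing that took experimenting and forum questions.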
So that was something unexpected, and it was great: one of the most challenging parts of the project, and one of the ones I learned the most from. The second was the number of conferences we presented at: we got selected at almost every conference we applied to. Initially we were thinking that getting one or two conferences would be great, because it's a difficult thing to get selected for conferences, but we presented at a lot of them, including the Open Source Summit, the flagship conference of the Linux Foundation. That was unexpected, but it was great: working with Misty, doing our own world tour. And third, the amount of recognition and support we got from the community was huge. We got support from all kinds of people. People would come to our LinkedIn posts and say, "you are doing a great project," and that kind of support motivates you a lot.

So, what are my future goals? First of all, to have a great career in mainframe. I will have my career in mainframe, and I hope I can do a lot of great things, because I have had a great start with this mentorship; hopefully I can build on it. Secondly, to make more contributions to open source mainframe. This is where I started; open source is my root, so I would like to contribute to open source mainframe as much as possible. And third, to make COBOL and mainframe popular with students. I think Hartanto will be able to relate to this: COBOL and mainframe are not the first choices for students, mostly because they don't know about them. Even I did not know about COBOL until the last year of my graduation. I would like students to have this option, to know that they can do COBOL and mainframe work even in college.
Even though it's not taught in any college, or in very few colleges at least, I would like to make COBOL and mainframe popular with students as much as I can. So that would be it. Thanks, everyone, for listening. These are some of my social media handles and my email, so you can connect with me whenever you want. Thanks for listening, and thanks also to my mentors; I could say thanks a hundred times and it wouldn't be enough. Thanks to my mentors, to the Linux Foundation, and to the Open Mainframe Project for hosting such great mentorships and projects. I'll be signing off now. Thank you.

Hello everyone, and welcome to the LFX Mentorship Showcase. My name is Faisal, and I was part of the LFX Mentorship in the summer of 2021; my project was VPP bindings in Rust. A little bit about me: I am a student at Presidency University, India, currently in my final year, and I'm a technical content writer and educator, constantly writing articles related to Rust and its applications, and to data structures and algorithms. I've also been part of various other open source programs, such as the Major League Hacking Fellowship in the spring of 2021, and FOSSEE (Free and Open Source Software for Education) back in 2020. I consider myself a Rust developer, or a Rustacean.

A little more about the project. FD.io is an open source data plane developed by Cisco, and at the heart of FD.io is VPP, which stands for Vector Packet Processor. VPP, as opposed to scalar packet processing, processes a vector of packets rather than a single packet, which results in high performance. And since VPP runs in user space, it generally avoids multiple context switches, which also contributes to its high performance.
So the goal of the project was to study and analyze the various bindings written in different languages and identify ways in which Rust can improve on what those languages cannot provide. It was not enough to have a low-level binding API that was just performant and safe; it also needed to be ergonomic. So those were the objectives of the project, and this is the project roadmap, laid out the same way the layers are. The lowest layer is the VPP API transport. This was mostly done by my mentor; it is responsible for interacting with VPP via the Unix socket or via the shared memory interface, and since it was already done, I didn't have to touch it. Then we have the VPP API encoding layer, which handles serializing and deserializing the Rust structs generated from the low-level API in a way that fits well with the C side. Next is VPP API Gen: its goal is to ingest the API JSON files and generate the low-level APIs, and that's all VPP API Gen does. Lastly, VPP API is what is produced from VPP API Gen. So that was the entire project roadmap; I mostly worked on VPP API encoding and VPP API Gen, and since VPP API is generated from API Gen, I didn't really have to work on it. So what are the things I learned? There are a number of things I learned during this mentorship. Firstly, functional programming in Rust: I am someone who has mostly coded in an object-oriented style, so this was nice to explore and I learned many things through it. Then the part I most enjoyed was procedural macros: I implemented various macros in Rust for our low-level API, and that was really fun to work with; most of them made the code more ergonomic and also reduced the code size, which kept the low-level API as close as possible to the C code. And then I learned Golang to
actually read through GoVPP and understand what GoVPP is doing, since our work took inspiration from GoVPP. Then there is serde, a crate in the Rust ecosystem whose name stands for serializing and deserializing; it allows a lot of customizability, where you can customize how a serializer should work for your struct, your union, or your enum, and I learned a lot about how to do that with serde. Then there are traits: before the mentorship I didn't really have a very good grasp of how Rust's trait system works, and during these three months I learned a lot about it; I'm much more comfortable using traits now than I was before. And lastly, the Vector Packet Processor and DPDK: I learned a lot about how VPP and DPDK address current networking bottlenecks and how they are very good open source solutions. And not just VPP and DPDK; as a student, this really broadens your mind about how many things you can do with packets, and how to do them efficiently. So those are the things I learned. The outcomes of the mentorship are as follows: I had around 8 pull requests spread across around 3 repositories. I created integration tests for all the interface messages; I created tutorials for the bindings that you can easily run, mostly inspired by the progressive VPP examples provided by FD.io; I created the dump and details message functions, which were built upon the single message functions; and I created many custom serializers and deserializers for the encoding. And lastly, VPP unions. This was the most important part of the mentorship; I had to deal with it a lot, and I had to discuss and think about it with my mentor as well. Unions are provided by Rust, but any operation on them is unsafe, so we had to create our own way of dealing with unions. For this I created a proc macro
which would identify the size of the union and then create a tuple struct wrapping an array the size of the largest field, and the derive macro would also derive functions responsible for serializing a given type into the union and deserializing that type back out of it. This is the thing I am most proud of in this project. As for the experience: when I think about the mentorship, the first person who comes to mind is my mentor, Andrew. It was a smooth experience thanks to him, and I don't think I would have been able to contribute without him. He helped me a lot during this process, and one of the nice things about him was that he never micromanaged me or told me what to do; he would just let me explore in the wild, and even when I would go beyond the scope of the project, he would still let me explore things and do them. The way we had this set up was with weekly deliverables: we would get on a call and I would go over the things I did that week, the things I could do next week, and he would weigh in on whether I should do something or not, and this was amazing. Our weekly calls would also always go beyond the project: we would talk about software development in general and various other things. Since we were both Rust enthusiasts, we would often talk about the different places Rust is being used, which was definitely beyond our project. He was always there, almost 24/7, via Telegram, which was our communication medium: whenever I was stuck with something, he would message me and tell me where I was going wrong. Also, Andrew being a maintainer himself, he would often show me the things he does as a maintainer, like how he automates the release notes and various other
things, which saves his time. There was a project philosophy he had created, and the most important point he used to tell me, really the whole project philosophy, was: don't try to reinvent the wheel unless the wheel is square-shaped for your purpose. That really stuck with me, and I always think in those terms now; I think I quoted him well here. I also remember one weekend when I wanted to do something and he told me that a rested brain is better than a tired brain; I still remember that quote. So Andrew really helped me a lot. About the future of the Rust VPP bindings: currently it is an MVP, a minimum viable product, and in the future we would like to run more of the VPP workflows that folks are currently running through the Rust VPP bindings. For me, I want to explore more of the network subsystem in the Linux kernel, and I also want to perform benchmarks against VPP, eBPF, as well as the default kernel implementation. And lastly, I'd like to hack on the kernel, explore it, and break it as many times as possible. That's all about the future plans. With this we come to the conclusion of the presentation. I'd like to thank the Linux Foundation for providing this opportunity, as it really helped me transition into a systems programmer as well as explore Rust in a very good way, and I'd also like to thank my mentor again, because without him I wouldn't have been able to contribute to any effect. So with this I will be signing off. Thank you. So, my project is collation-aware Vitess, and this fall we made Vitess collation-aware. Knock knock, who's there? It's me, Lakshya, and this is me traveling to the Himalayas to meet some new friends. I'm currently a junior-year CSE student at IIT BHU, and over time I have become a polyglot programmer contributing to open source projects. This
summer I was a GSoC mentee, and in the fall I was an LFX mentee with CNCF's Vitess. So why did I apply for this mentorship? A perfect cocktail needs certain ingredients, and a perfect mentorship does too. This mentorship comprised a large-scale project, open source contributions, great experience, and the mentor's guidance, so it made for a great mentorship, and that was the reason I applied. Moving on, what was my project? Vitess is a database clustering system for horizontal scaling of MySQL. It was developed at Google to support their massive user database, and since then it has grown to be used by GitHub, Slack, and many others. The initial plan for my project was to bring support for collations into Vitess. The targets we set prior to the project included fetching collation weights from the MySQL codebase with a Go implementation that works with Vitess and is aligned exactly with MySQL; next in line was integrating the collation module into Vitess's evaluation engine; and not to forget the robust testing we had to do for a production-ready project. As with every mentorship, mine also involved certain learnings, and I will quickly go through them. Before Vitess, I became collation-aware, or I might say I needed to become collation-aware. So what are collations?
Collations are basically rules for comparing strings, and they take into account accent sensitivity, case sensitivity, kana sensitivity, and sometimes also character width. There are various character encodings available for use with collations, and these are the ones that take into account all the sensitivities I cited. To add the support, we had to implement the algorithms from the ground up, because the available collation packages weren't exactly aligned with MySQL. In my slides I have also attached a link to one of the famous blog posts on the bare minimum a developer should know about character encodings; I hope everyone checks it out, because at the end of the day we are all software developers and we should know about character encodings. One of the major highlights of my mentorship was getting to learn the Go language. In Go, I learned about structuring code using modules and how to make my tasks easier using the standard library. There are certain design patterns associated with Go which are very unique to the language, like returning an error as the last value of a function call, and there is the simplicity of the language, which is very attractive to developers, as it is to me nowadays too: the number of keywords in the language is very small, so a developer can quickly get started and make something useful out of it. That was one of the things that impressed me a lot about learning Go. And what did I do with Go? We made scripts to transpile and automate the fetching of collation weights from the MySQL codebase so that they can be used in the Vitess collation module, and this process was automated using the scripts that I wrote. The second major learning was how to write production-ready code. In large-scale projects, everyone expects you to write error-free, quality code; obviously you cannot write it error-free all the time, but they
expect you to. So there are certain standards I was supposed to follow while writing code for Vitess, because it is used in production by many people. A few of them were test-driven development, which was very new to me, and working in an incremental fashion so that I don't break production. And finally, all my contributions were merged into the project. Now, the intricacies of the journey: I would like to talk a bit about how the journey was for me, but before that I will take a moment to thank my mentor, Vincent Marti, aka VMG, for considering my application and guiding me through the mentorship. We worked in a task-based approach, where Vincent would tell me about the task at hand, I would quickly get it done, and he would have the next task ready for me to jump into. Several times in my initial days of the mentorship, when I was pretty new to everything, whenever I needed any help Vincent would quickly jump on calls and help me out by being with me while I coded. We got the collations module into Vitess, and after that I was connected to several other community members alongside whom I worked to get the collation module integrated into the evaluation engine; to name some of them, Andreas and Florent were very helpful while I was integrating the module, so thanks to them as well. Let's talk about my experiences. Joining a new open source community and connecting with people always surprises you; so, experiences, and obviously coffee, because you need it to get through all the coding that we do. The first surprise I had as part of the mentorship was TDD, or as we know it, test-driven development. This felt weird, like, who even does that, right? But everyone does. I didn't ask my mentor much about why we were proceeding this way, writing tests before even writing the code, but later on, when I was listening to a podcast about test-driven development, I got to know that it is a very useful approach that people in the industry use, and I finally
realized its importance; but initially it felt really weird and surprised me, like, why are we even writing tests first before proceeding with the code? Talking about code review: I wasn't even slightly aware of how to structure code in Go, how to deal with memory allocations, or how to communicate errors, but with Vincent's help I got to know more about all of it. Initially I wasn't even aware that there are certain functions in the standard library that can make my tasks easier; I used to implement those things on my own, and those weren't proper. After I submitted my code by pushing to my repository, Vincent would make changes to it, then jump on a meet and tell me: these are the places I made changes, this is the reason, and these are the improvements they bring. That was really helpful for me. Talking about the impact: the reason I applied to Vitess was that at the time I was taking a database management systems course as part of my curriculum, and I thought this would be a symbiotic relationship where both sides would benefit. But when I got to know the story of Vitess and how it was developed, this was really big for me, because by writing a few lines of code I was able to make an impact on the lives of so many developers who use Vitess in production. Again, my mentor; you might think this is repetitive, but it's actually not, because while I was connecting with people and scrolling through tweets, I found out that my mentor was one of the initial members at GitHub, the platform that we all love and use in our day-to-day lives, and he helped make it reach the point it is at today. It's very inspiring to read that he even dropped out of college to pursue what he enjoyed and was passionate about. This was very motivating for me and got me all excited to be even more passionate about software development. Lastly, I will talk about some of my aspirations: I want to contribute as much to open source
as I can and impact developers' lives, be a part of their lives. I want to write more blogs than I currently do; I want to work on developer-empowering projects, actually making software that is used by developers, just like Vitess. I am currently in a DevOps phase, so I'm learning more about it; I want to speak more at open source conferences; and I certainly would like to mentor students so that they can also begin their journeys and be a part of open source communities. That was all from my side; thank you very much, and you can get in touch through my social media handles, my Twitter handle and my LinkedIn handle. Thank you. So, hi everyone, welcome to the LFX Mentorship Showcase. In this session I will be talking about what I learned, which is essentially DevOps. I am Shompal Trivala, and I am currently a CS undergrad at VIT. I started my development journey around two years ago: I started with the basics of web development, soon jumped into cybersecurity and cryptography, and then later into blockchain. I am right now a certified ethical hacker; along with that, I have also contributed to Bitcoin as part of the Summer of Bitcoin program this summer, and through this LFX Mentorship Program I contributed to Kyverno this fall. Kyverno is currently a CNCF Sandbox project, and I worked on processes to enhance their security model and threat definitions and to add some security features to their CI/CD pipelines. So that's roughly about me and Kyverno. Now I will talk about why and how I applied to the LFX Mentorship. Starting with why one should apply to the LFX Mentorship Program: for me it was basically fascination. The concept of DevOps was fascinating to me, and getting a chance to work on real-life, production-grade projects, and to be mentored by such great people, the people who created those projects, sounded surreal. So this was kind of the sole reason why I
applied, and I learnt a lot. One thing that I learnt from this mentorship program is that mentoring is not just about skill; it's about the experience. You can learn on your own, but the guidance and the skill you get from these maintainers is just beyond words. So, in the next few slides I will talk about my entire journey: I will start with how I got in, after that I will talk about what I learnt, and then my work at Kyverno. So, how did I get in? I started by looking at the various CNCF projects that are currently there; I tried to understand what each of them does, and luckily found a project that excited me and aligned with my interests as well. When you are applying, look for a project that excites you; once you have that, you are in for a very fun ride. Once you have a project, compile it locally and understand the what, how, and why of it. It's very important to understand how the project works, what it does, and why; that's what happened for me with Kyverno. Apart from that, start contributing and interacting with the maintainers. Contributing might sound like, okay, are we supposed to open PRs? If you can, that's great, but that's too big to start with, right? You can just start by introducing yourself to the community, attending meetings, having small discussions, and asking questions. For example, just look at the merged PRs and understand what code changes they make and why they got merged; start understanding the open issues; don't just open PRs if you're worried or intimidated; run the functional tests, if they exist, to understand what smaller pieces of code exist and why; and talk to the maintainers. I'm pretty sure they would be really happy to have you on board and helping them. So again,
interaction is a very important part of the process, but again, keep in mind the time of the maintainers as well: make sure that you do thorough and proper research before asking questions, because their time is really valuable. Once you do that, you will get a hang of the project and start to learn the insights of it, so try to document stuff. This is something that I learned while applying to LFX: documenting is a very important aspect, in my view. Everybody has their own unique learning process, and once you start documenting stuff, once you write it down or speak about it, you learn it in a much better way. So start documenting, share it in your community, let people know what you're doing, what your project is, and how you're learning about it. Once you are comfortable with the project, you have interacted with the mentors, and you're part of the Slack channels or other project channels, start understanding the LFX project idea itself: what the project is, what it requires, and the skills it demands. The skills can be Kubernetes, security, documentation; they mention them quite clearly, so have a look at what the project does and why. Build a proof of concept for your application if needed; that is a really great way to boost your application, to understand the project, and to get a high-level view of what you're going to do over the coming months. Once you do that, you'll get into the project and you'll be contributing, and as soon as you start contributing, you learn a lot. I learned a lot even before getting in, so that was basically great, and this is roughly the journey I followed to get into Kyverno as well. So that's roughly my journey; now I'll talk about
what Kyverno is. Kyverno is a policy engine for Kubernetes. It is currently a CNCF Sandbox project, and it helps you write policies for your Kubernetes resources: you can mutate and validate resources and generate policies for your clusters, and Kyverno will check your cluster resources and enforce those policies on them. All of this can be done simply, and there are already a ton of policies written by the community, but you are free to create your own too. So do check out Kyverno; it is very exciting to me, and it would be great to have you. Now, talking about what I learned: I learned literally everything about Kubernetes and DevSecOps. Starting with Kubernetes itself, I understood its architecture and how it works, and the deprecation of Pod Security Policies, which affected the latest versions; that is where Kyverno came in, with its simple way of writing cluster policies in YAML that made it very easy for developers and SREs to make sure their cluster resources behave the way they should. Apart from that, I also learned DevSecOps: I understood how big organizations use it across their entire software delivery lifecycle. I also researched various organizations and analyzed their security mechanisms and how they function, that is, how they integrate security into the DevSecOps cycle. So I have a great path to go further on now, thanks to Kyverno and my mentors. Now, talking a little bit about DevSecOps in detail: DevSecOps is basically how you integrate security into your development and software delivery process. As you can see, it stands for Development, Security, Operations; basically, it's integrating the security of the project into your CI/CD pipeline itself, so that it's easier, faster, and more efficient. This
increases the agility of a project, that is, how quickly you can deliver the product once it's developed. This was the first time I was handling the CI/CD of a project, and it was great: I learned a lot about how you can integrate things and add stages to the pipeline, making sure that software delivery becomes faster, more efficient, and, more importantly, more secure. At Kyverno I did this in the following ways, which I'm going to talk about in my next slide. So yeah, this is roughly my work at Kyverno. I started with defining a threat model. We took inspiration from the Kubernetes threat model, which was also in the works; we attended a couple of meetings to understand how that effort was functioning, and then we made one for Kyverno, so that the community that uses our product knows what threats there can be and how we are mitigating them. Apart from that, we also defined a proper security disclosure mechanism, the one you follow at Kyverno if there are any vulnerabilities, to make sure the entire process is smooth and efficient. We also integrated image scanning into the CI: we made sure that every Kyverno build goes through proper vulnerability scanning, and if there are any vulnerabilities, they are reported beforehand and the build doesn't succeed, so that every PR, every change to the code, passes through a vulnerability scan. Apart from that, we also integrated cosigning of our builds. Cosigning basically makes sure that the release we push is the same one that you download, and that no third party or attacker has mounted a man-in-the-middle attack on it. Basically, what we do right now is that whenever we release something, we sign it with our cosign key, and using our public cosign key, which is publicly available on our
GitHub, you can verify the download and make sure that it is the one that Kyverno shipped. Apart from that, we also generated SBOMs. SBOM stands for software bill of materials; we use it to track which dependencies we use and in which versions, so that whenever there is a vulnerability in any of those dependencies, we know beforehand and can fix it. We also upload the SBOM with every release, and it can be found in our artifacts. So this is roughly it, and I couldn't have done any of this without my mentor, Jim Bugwadia. The best part is that Jim trusted me, a newcomer: I did not know much about Kubernetes, and not much at all about DevSecOps, and he really helped me get on board with Kyverno. We had weekly calls to discuss my progress and doubts; he really helped me with my silly doubts, again and again, and he pushed me to try out new stuff. We also branched out into different projects and tried out new things: we integrated Scorecard, which was completely out of our domain. We also discussed ideas in the contributors' meeting to get feedback from fellow contributors and maintainers and understand their take on the work. Honestly, it is thanks to Jim that I have learned and started my journey in DevSecOps; once again I'd like to thank him, because without him I wouldn't be here. I have said this a million times, but I just cannot stop. Now, adding to my thanks, I would also like to thank my peers, my friends, and the community: the Kyverno community has been very accepting. And apart from that, thanks to LFX for giving us a great chance to get mentored by the best in the world and to learn new stuff. Now on your screen is a famous quote by Philippe Kahn, the person who invented the phone camera: the power of open source is the power of the people; the people rule. This is a quote that I live by: interact with people,
interact with them, learn, grow: how have they done it, what was their journey? Understand and grow, because there are a million people ready to help you out; they want you in open source. So yeah, I think that's it from my end; I hope you all enjoyed the presentation. Feel free to reach out to me; here are my email and my GitHub ID. It was a great time presenting this; thank you, everyone. So, hey everyone, I am Sunskal, and today I will be speaking to you about how I worked with gRPC in Rust in the context of a cloud native event processing system, so let's get started. First, a bit about me: I am a final-year engineering undergrad at Vellore Institute of Technology, India, and I am also currently a software engineer at VFox, where I work on writing Kubernetes controllers and other cloud native things. So let's get started with my project. The first thing you need to know about my project is the programming language, Rust. Rust is a relatively new language compared to Python or Golang. What is Rust? Rust is a systems language with a focus on performance, safety, and efficiency. It achieves this by means of multiple features: it has a borrow checker, it doesn't have a garbage collector, and it has a very strict static type system, which is very safe but can be rather inflexible, as you will see later in my presentation. So what is Tremor?
Tremor is a cloud native event processing system. It's written in Rust, it's very fast and efficient, and the main thing about Tremor is that it is capable of handling huge volumes of events. What are events? They are basically things that happen in the outside world. Let's say there is a Kafka instance running somewhere, or a RabbitMQ instance running somewhere, and they want to send some kind of information: they send it into Tremor, and Tremor provides you with a scripting language that you can use to transform those events into something else. You can feed 50 events into Tremor, and Tremor can convert those 50 events into one big event that is more meaningful for your purposes, and then store it somewhere else, like a Postgres database or a log file. Tremor basically connects to sources to fetch events from, transforms them through pipelines, and then connects to sinks and sends the transformed events into them. The key thing to note is that events should be structured, like JSON or YAML or MessagePack; they cannot be random garbage, but they do not need to conform to any schema. There is no predefined schema that Tremor enforces on you, which gives you more flexibility in how you structure your events. Lastly, one more technology you need to be familiar with is gRPC. What is gRPC? gRPC is an RPC framework. For those of you who are not familiar with RPC: RPC is similar to REST; it has different conventions than REST, but in layman's terms it does pretty much what REST is meant to do, and it uses protocol buffers for exchanging data. Unlike REST APIs, where we use JSON for exchanging data, here we use something called protocol buffers. Protocol buffers are platform-neutral, and they do not restrict you to any one language: you can have a gRPC server written in Rust
or Golang or Python, and you can have a gRPC client written in Python or C# or any other language that supports gRPC. What you do is define your message definitions: if you have one endpoint, you define the body that the endpoint should receive in a protocol buffer file, and that makes your code really fast. It helps you send and receive messages quickly, because the wire format is already predefined: you already know what you are going to get and what you need to send, which saves a lot of computational time. On the left-hand side we have an example protocol buffer message, a pretty simple Person message, and if you think about how you would represent the same message in Rust, you would represent it with a struct. The thing is, protocol buffers do not officially support Rust. My project needed to make Tremor able to send events into a gRPC server; that means, basically, that Tremor needs to connect to a running gRPC server somewhere in the world, and then, whatever events are flowing through Tremor, it needs to be able to send them there. The problem with that is that gRPC expects a predefined message; the message definition is already fixed for gRPC. So how do you get Tremor to send those events without knowing them, given that you need to know the message definition at compile time? And the further problem is that every user will have their own message definitions. Usually, the way you work on a gRPC project is that you define your message definitions in a protobuf file and then use some kind of CLI to generate code, and that generated code is used in your main application business logic. But we didn't want users to go through that, because that is a terrible user experience. We wanted it to just work; we wanted users to say: Tremor, here is the protobuf file, I want to send events to this gRPC server using these message definitions, and
have it just work without writing any code on their behalf. That is a relatively difficult project, because what we need is a generic gRPC client that works for any message definition, for any .proto file, in any context. If you recall, protocol definitions specify the message schema; in the last slide we saw that the schema is fixed in advance. But tremor doesn't enforce any message schema: an event can be anything, you can add another field to it and tremor wouldn't know the difference. And Rust doesn't support dynamic typing. What most people would think, or rather what even I thought at first, is that this is hackable with dynamic typing: we don't need to know the type at compile time if we can just make a hack work at runtime. But Rust makes that really, really difficult. And as I explained, working with gRPC normally requires generating stubs and skeletons, which is exactly what we wanted to spare users from. So there are several approaches we looked into. One was inter-process communication. If you remember, in the previous slide I talked about dynamic typing; if I were to implement this in Python, it would be more doable, because Python is a dynamic language. So one crazy idea that was floated around was to have a Python process running alongside, with the Rust tremor runtime communicating with it over IPC to exchange events. That turned out to be not so great. Another approach I explored was dynamically encoding and decoding the wire format: instead of knowing the message definition beforehand, we look at the encoded wire-format message and decode it by hand into a proper structure. I got this working to some extent, but it wasn't performant enough. So we decided to go with the most robust way, which was to generate more code. Sounds silly: in the previous slides I just explained that the problem
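The hand-decoding approach described above can be sketched as follows. This is an illustrative reconstruction, not tremor's actual code: protobuf's wire format encodes integers as base-128 varints, and each field is prefixed by a key combining the field number and wire type, so both can be decoded by hand without knowing the schema:

```rust
// Decode a protobuf varint (little-endian base-128, high bit = "more
// bytes follow") starting at `pos`, advancing `pos` past it.
fn read_varint(buf: &[u8], pos: &mut usize) -> u64 {
    let mut result = 0u64;
    let mut shift = 0;
    loop {
        let byte = buf[*pos];
        *pos += 1;
        result |= u64::from(byte & 0x7f) << shift; // low 7 bits carry data
        if byte & 0x80 == 0 {                      // high bit clear: last byte
            return result;
        }
        shift += 7;
    }
}

fn main() {
    // Encoded field: key 0x08 = (field number 1 << 3) | wire type 0
    // (varint), followed by the value 300 encoded as 0xAC 0x02.
    let wire = [0x08u8, 0xac, 0x02];
    let mut pos = 0;
    let key = read_varint(&wire, &mut pos);
    let field_number = key >> 3;
    let wire_type = key & 0x7;
    let value = read_varint(&wire, &mut pos);
    println!("{} {} {}", field_number, wire_type, value); // 1 0 300
}
```

Decoding this way recovers field numbers and raw values, but without the schema you cannot name the fields, and doing it per event at runtime is the performance cost the speaker mentions.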
was that we had to generate code for our application logic; that is the problem with gRPC. So how can the solution be to generate even more code? Well, Rust provides two great things: traits and macros. I'll come to traits later, but let's look at what macros do. Macros are a metaprogramming feature that lets us generate code at compile time and include that code in the binary itself. When that code is included in the binary, it is present at runtime, so we can confidently call it at runtime: even though we never wrote it by hand, it was generated at compile time and compiled into the binary, so it is available to call at runtime. And since we wanted to be generic, we use traits, which are kind of like interfaces, to define common business logic that works for any gRPC protocol definition. Now I'll just do a quick demo. Here I have a tremor instance running, and here I have a gRPC server running, and I'm going to send a message. This is my message going out, and here I get a message back from the server; these are all debug statements. I could do this with any protocol: tremor did not know anything in advance, there was nothing hard-coded, this was all generated at compile time, which is pretty remarkable. So what did I learn? I learned that research and design discussions are the most important phase of any project. I spent nearly half of my project time just researching possible solutions, and my mentors were very supportive of that. I would really recommend having a lot of design discussions with your mentor, because that lays the groundwork for the rest of the project. Always have a mental model of what
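The macros-plus-traits idea can be illustrated with a toy sketch (not the real tremor implementation): a `macro_rules!` macro generates concrete types at compile time, a trait gives them a common interface, and because the generated code is compiled into the binary, generic business logic can call it at runtime:

```rust
// A common interface, like the speaker's "traits are kind of like
// interfaces": any generated event type must be able to name its kind.
trait Event {
    fn kind(&self) -> &'static str;
}

// The macro expands at compile time into a struct plus its trait impl,
// so the generated code ends up inside the binary.
macro_rules! define_event {
    ($name:ident, $kind:expr) => {
        struct $name;
        impl Event for $name {
            fn kind(&self) -> &'static str {
                $kind
            }
        }
    };
}

// Hypothetical message kinds, for illustration only.
define_event!(PersonCreated, "person.created");
define_event!(PersonDeleted, "person.deleted");

fn main() {
    // Generic business logic: works for anything implementing Event,
    // without hard-coding the concrete types it was given.
    let events: Vec<Box<dyn Event>> = vec![Box::new(PersonCreated), Box::new(PersonDeleted)];
    for e in &events {
        println!("{}", e.kind());
    }
}
```

The real connector uses procedural macros driven by the user's .proto file rather than a declarative macro like this, but the principle is the same: code generated at compile time, called through a trait at runtime.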
you want to do. Even if you're down the wrong path, know what you're trying to achieve; never be clueless about what you're actually trying to do, because that just causes unnecessary delay. I got to learn lots of cool stuff about Rust, gRPC and protocol buffers, like how messages are encoded to the wire format, how Rust proc macros work, how Rust async works, and much more. And probably the most important lesson was asking for help, which brings me to the next slide: my mentors, Matthias, Darach and Heinz, especially Matthias and Heinz, who were very helpful during the entire project, and even after the project they have been extremely helpful, not just in the context of this project but in the context of life and career in general. They have been very good mentors, they have given me extremely good advice, and they have always tried to help me with whatever issue I've thrown at them. I would like to thank my mentors for this, and I would also like to thank the Tremor community; it's genuinely one of the most welcoming and heartfelt communities I've ever come across in software development. I would like to thank the CNCF and the Linux Foundation, and most of all I would like to thank my peers and everyone else who motivated me to get started in open source, because without them I would not be here giving this talk. Thank you, everyone.

Thanks so much. Hi, I'm here to talk about how I spent the summer fixing bugs for the Linux kernel. First, a little bit about me: I am Shreyaan Shahan, a college student pursuing a Bachelor of Technology at the Jaypee Institute of Information Technology in Noida, India. I really like learning about how things work, and I really like contributing to open source projects, because I think contributing is a great way of learning how these projects were made and how they work. I'm really passionate about low-level technologies like the kernel, virtualization software,
hypervisors, and so on. Apart from that, I have a lot of other hobbies: singing, playing guitar and piano, reading, traveling, piloting, and so on. So that's about me; let's talk about the program. I got to be part of the Linux Kernel Bug Fixing mentorship for 2021, and I think it is a great opportunity for anybody who wants to start contributing to the Linux kernel, because the kernel is a huge piece of software; there are lots of moving gears and levers that one needs to understand before they can start contributing. To me personally it was a daunting task: I was scared of making my first contributions because I couldn't really grasp what I needed to know. The program provides great stepping stones. There are a few prerequisite tasks you have to complete to be part of the program, and they are a great way of learning about the kernel in general, and of course there are the mentors, who are always very supportive and helpful. I don't think I would have been able to complete this program without the help of Shuah Khan, so thank you, Shuah. That's about the program; now, how did I get to know about the opportunity? Fortunately, in my college there is a fair amount of awareness about open source. We have a hub called the Open Source Developer Circle, where various contributors, all college students of course, meet and greet each other and talk about what they have been working on and what's up next. In one of those meet-and-greet sessions, a senior told us about this program and how it got him contributing to the kernel; he had been selected for the kernel mentorship program the year before and worked on the PCI part of the Linux kernel. So that's how I got to know about it. And why I wanted to contribute to the kernel, or why I wanted to be part of this program: as I said, I really like
learning about how things work, and I'm very passionate about computers and basically anything digital. The more I got into it, the more I realized that at the core of all of it lies the kernel, actually doing things. I wanted to contribute to the kernel because I wanted to learn how it worked and how it manages the software and the hardware together. All of this got me very interested in low-level programming, and ever since I came to my college, around four years ago, and learned what open source was and how it worked, I had wanted to send a patch to the kernel and be proud that the kernel runs some lines of my code, no matter how small the patch. The program was a great way of getting into it: as I said, there were mentors, there were the prerequisite tasks, and there was material that helped me a lot in my journey of becoming a kernel contributor. As for the things I learned, they can generally be divided into two categories: things specific to the kernel, and good practices in general that I'm sure will help me later down the line. The kernel-specific things came from the different activities I got involved with. First of all, I worked on bugs: writing patches, debugging, and trying to figure out how each bug actually happened taught me a lot about various kernel subsystems. I fixed bugs in one of the file systems, an obsolete one in fact, ReiserFS; I fixed a bug in the networking stack; and I fixed a couple of other bugs, one related to page locking. All of that taught me a lot about things I didn't even know existed. Another thing I did while I was part of the mentorship program, just to understand the kernel better, was to start reading the Linux
Device Drivers book, third edition, which is publicly available. I learned a lot of interesting things from it: a lot of really cool internals of the kernel, how drivers work, how you could write a driver. I got to know about compiler instrumentation, shadow memory, address sanitization, a lot of cool stuff actually. And of course, as I said, there were the prerequisite tasks, and they taught me a lot about the kernel in general, in fact about operating systems in general: how syscalls work, how you could create one, and all that interesting, juicy stuff. That was what I learned specific to the kernel; there were also a lot of other things I learned, good practices in general that I'd like to follow in the future. First off, since contributing to the kernel requires you to write patches, I got to write better, more concise patches, and my commit messages improved by a lot; that is something I noticed over the three-month period. I am also really thankful that the kernel code is so well documented, and I'll try to write better-documented code in the future, because, as I said, the kernel is a very complex system, and without the documentation it would be really hard to understand what's going on. Most of my time in the mentorship program was spent reading this documentation, the book I mentioned, and various articles from the Linux kernel mailing list; they gave great insight into the kernel. I also learned a lot about debugging and what the workflow looks like when you are trying to debug software; debugging multi-threaded software, something as complex as the kernel, really teaches you a lot. And I really came to realize the value of logs. When I was starting out in open source, I would only read from the logs where the crash happened, basically skip all the rest, and try to figure out what the bug was. But as I continued doing that, I
realized that, when I went back to the logs, a lot of the things I had figured out in, say, 30 to 40 minutes I could simply have read from the logs or from the dump. So I really learned how to better utilize these resources. For example, one of the bugs I fixed, I was able to fix very quickly because of how concise the logs were: it was a sleep in atomic context while holding a spinlock, and all of that was already there in the log file. If I hadn't read it, I'm sure it would have taken a lot more time. So yes, I realized the value of logs and dumps during the program. As for the hardest problem I faced, I think this XKCD sums it up best: the bug isn't necessarily where the crash happens. The crash could be while reading a value, while the bug is in the piece of code that wrote that incorrect value. That was the hardest difficulty I faced, because reading the logs and saying "hey, this is where the crash happens, let's add a bounds check here" means you are correcting the symptom, not the place where the problem really lies. Tracing from the logs and dumps back to where these bugs actually originate was one of the most valuable lessons I'll be taking away from this mentorship program. As for my experience: well, as must be evident from the way I have been talking about the program so far, it was really, really nice, a very fulfilling experience. I was really curious about how the kernel worked, and a lot of my curiosity was satisfied. I still need to learn a lot, I still want to know a lot more, but the things I did get to know, I'm sure I wouldn't have without this program. My mentor, Shuah Khan, was very supportive and helpful; I could always approach her with any problem I had. So thank you, Shuah, once again; without you it would not have been possible for me to complete this mentorship program. So, where do I go from here?
Well, I'd like to continue contributing to the Linux kernel in the ways I can. I'd like to fix more bugs, maybe even add something to the kernel later down the line; I'd like to someday become a maintainer for some subsystem; and hopefully I'll find a day job that lets me work on the kernel, because I really enjoy working on it. And lastly, some advice for future mentees. First, read the docs; they are there for you. People spend their valuable time writing documentation so that you do not have to ask questions or stumble around the internet, so just read the docs and learn as much as you can. The program is not just about fixing bugs; if you focus only on fixing bugs, you miss out on how much you can learn about the kernel and the community in general. The patches that you write, always keep them short and concise. I read about this in some meme, I guess: ask a developer to review a 10-line change and he'll find five mistakes, but ask him to review something like 150 lines and they'll just say "it works." So keep the patches short and concise, because that really helps speed up the review process. The maintainers and the people who review your patches often have day jobs of their own, so respect their time and try to be as short and concise as possible. Never skip reading the logs; they are there for you, once again. Something else I learned from this program and would recommend to future participants: learn to use the tools. Of course, in the beginning you might get lazy and think, why do I need this tool if I can just do it by hand? But the tools are there to help you, and they will save you a lot of time. And keep communicating with your mentors. Something I realized towards the end of the program was that, of course, the number of bugs you fix is important, but what is even more important is your
communication. Keep telling them what you are working on; if you get stuck, keep asking questions, because not asking might lead you to waste a lot of time. And last but not least, do what you love doing. I have always been very passionate about the kernel, and I think that is also a reason why I got the chance to be part of this program. Don't be scared: don't be scared of asking questions, don't be scared of making mistakes, and never give up on what you always wanted to do. That's it from me. Thank you once again to the LFX mentorship program and to my mentor. Thank you, Shuah.

Thanks, Shreyaan, that's a great presentation; it was my pleasure mentoring you. Let me get started sharing my screen here. All right, thank you, graduates. I enjoyed listening to all of you: the projects you have done, the work you have done, it's awesome. Thanks for taking the time to share with us. In closing, we have to thank our sponsors, Red Hat, GitHub, IBM and Intel. They have been a constant source of support since we started this program three years ago, and without their support we wouldn't be able to do what we do. Thanks, everybody. Bye.