Welcome, everybody, to the LFX Mentorship Showcase. This is segment three, and we have several mentees who will be sharing their experiences with you. Let's talk a little bit about the beginner's problem. We all struggle with where to start whenever we embark on a new journey: something new we want to learn, a new project, and so on. This is not a unique problem; we all face it. The first thing that gets us all is the what: what do we want to do, what are we passionate about, what do we enjoy doing, and which open source project do we pick, since there are several across different areas. Once we do our research and figure that out, the next problem is the how: how do we get started with our learning, and how do we break into the community? The code always looks complex, the community looks intimidating and daunting, and it's not obvious where to start learning. Then there is the where: where do we find resources to learn? That's one of the obstacles. And then comes the who: who can we turn to for help, who can answer our questions, who can we reach out to? So we all face these what, how, where, and who questions, and we at the Linux Foundation understand that these are obstacles for new developers, so we provide resources for you. On the site you see here, you can plan your learning path at the training site. You can look at the Explore Pathways page, figure out what kinds of technologies and fields you want to look into, and do all of that planning and research. There are several free training classes you can take to get a feel for what you'd like to do. Once you've done that and identified which area you want to be in, you can benefit from our webinar series.
We have several webinars already archived; you can go take a look at them. They cover a wide range of technologies, anywhere from debugging to open source methodologies, fuzzing, static analysis, and so on. There's a wealth of knowledge there. Once you've done that, you can explore mentorships. We offer several mentorship options: part-time, full-time, paid and unpaid, and so on. And once you've done that, we also connect you with people looking for talent; that's the event we are in right now, the Mentorship Showcase. I'll leave you with this slide, which has lots of resources on where to find information. Keep in mind that we at the LF recognize that equitable access to resources is a barrier for people, so we try to provide resources for everybody, and we welcome everybody, whether you are a student or a career changer looking to explore new career options. We offer part-time mentorships and training resources, keeping in mind that a lot of people are balancing work and life alongside learning and growth. We are continuing to look at what we can improve, what kinds of resources we can provide, and how to fine-tune our mentorship programs. To that end, we recently surveyed all our graduates, and the report came out just a couple of days ago; check it out. Okay, with that, I'll hand this off to Takumi to share his experiences with the mentorship.

Hello, everyone. My name is Takumi Hiraoka. Today I'm going to talk about my experience in the LFX RISC-V mentorship, with the title "Opening the Door to Open Source Software through the OpenBLAS Project." I'm an undergraduate student at the University of Tokyo, and my research topics are computer architecture and compilers.
In my research I spend a lot of time with RISC-V and LLVM technologies, so I'm very interested in systems programming involving them. The OpenBLAS project therefore looked appealing to me, and I decided to join it. After being selected as a mentee, I have been working on improving OpenBLAS. Let me give a brief description of the project I worked on. OpenBLAS is an optimized BLAS library; BLAS stands for Basic Linear Algebra Subprograms. BLAS has three levels of computations: level 1 covers vector-scalar operations, level 2 covers matrix-vector operations, and level 3 covers matrix-matrix operations. OpenBLAS is optimized for each processor to achieve high performance in those computations. The level-3 OpenBLAS kernels didn't support RISC-V vector extension version 1.0, so I was trying to implement that. OpenBLAS uses blocking and packing techniques to speed up processing by using registers and caches as efficiently as possible, as you can see in the figure. Blocking is done with awareness of register and cache sizes, in order to increase register and cache utilization and reduce accesses to main memory as much as possible. Packing arranges data contiguously in memory, which simplifies the memory access pattern and reduces the cache miss rate. These are the goals I wanted to achieve through this project. First, I wanted to contribute to the OpenBLAS repository. I had never contributed to OSS before, but I had always wanted to, because it is very attractive to develop software used by many people, and because I can deepen my knowledge of the area and improve my coding skills. Second, I wanted to deepen my understanding of RISC-V-related technologies. Although I had some exposure to RISC-V, I know the scope of RISC-V technology is broad, and there are still areas I'm unfamiliar with.
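The blocking idea described above can be sketched in plain C. This is only a minimal illustration of the technique, not OpenBLAS's actual kernel code: both functions compute the same C += A*B, but the blocked version walks the matrices in small tiles so that each tile is reused while it is still resident in cache.

```c
#include <stddef.h>

/* Naive GEMM: C += A*B for row-major n x n matrices. */
void gemm_naive(size_t n, const double *A, const double *B, double *C) {
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++)
            for (size_t k = 0; k < n; k++)
                C[i*n + j] += A[i*n + k] * B[k*n + j];
}

/* Cache-blocked GEMM: same result, but iterates over BS x BS tiles so
 * each tile of A, B, and C stays in cache while it is being reused.
 * A real library tunes BS to the cache sizes of the target CPU. */
#define BS 4
void gemm_blocked(size_t n, const double *A, const double *B, double *C) {
    for (size_t ii = 0; ii < n; ii += BS)
        for (size_t jj = 0; jj < n; jj += BS)
            for (size_t kk = 0; kk < n; kk += BS)
                for (size_t i = ii; i < ii + BS && i < n; i++)
                    for (size_t j = jj; j < jj + BS && j < n; j++)
                        for (size_t k = kk; k < kk + BS && k < n; k++)
                            C[i*n + j] += A[i*n + k] * B[k*n + j];
}
```

The blocked version does exactly the same arithmetic in a different order, which is why the two functions always agree; the win is purely in memory traffic.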
Therefore, I wanted to further deepen my RISC-V knowledge through this project. Third, I wanted to get used to working on large-scale software. With large-scale software it is often difficult to understand where to start, how classes relate to each other, and what kind of processing is performed, so I wanted to use this project to get used to it. Now I'll talk about what I learned through this mentorship. First of all, it gave me a better understanding of RISC-V technology. My PC has an x86 processor, so I needed a RISC-V compiler in order to cross-compile. The two most famous compilers are Clang and GCC, each of which has subtle differences, and I used both for generating RISC-V assembly code; through using both compilers, I came to understand the differences between them. Furthermore, I did not have an actual RISC-V machine, and RISC-V PCs are rarely available on the market, so performance measurement had to be done by simulation. So I used simulator software such as Spike and QEMU and learned their usage. Also, since I dealt with the RISC-V vector extension this time, I gained a deeper understanding of its specification. I also learned the importance of breaking difficult tasks into smaller, simpler ones. In the beginning, I was very nervous about contributing to OpenBLAS, because I did not know how to accomplish it. However, my mentor gave us detailed steps toward our goals, and I was able to move forward without feeling too much difficulty. Specifically, I first followed a tutorial on the elementary techniques of OpenBLAS by optimizing GEMM for x86. Since I could visualize the results in graphs, it was easy to see the effect of each optimization on performance. Next, I created an environment capable of executing RISC-V vector instructions. Specifically, I prepared a compiler, a simulator, and the proxy kernel.
The proxy kernel is a set of binaries necessary to run programs on the simulator. It's easy to prepare an environment for a general instruction set such as RV64IM, but the vector extension is a rather recent extension, and the toolchain around it is either newly developed or still in development, so I couldn't find an article that successfully prepared an environment for it. I had to dig through documentation and issues to build it, and I had a hard time getting the correct version of the vector extension and the correct repository branch. So I decided to write and post a technical blog on how to build the environment. Third, I wrote a program using RISC-V vector intrinsics to compute GEMM with a 4x4 kernel; here I was able to familiarize myself with the RISC-V vector instructions to some extent. Finally, I wrote code to compute GEMM using inline assembly of RISC-V vector instructions. At this point the project is still in progress, and I think it will soon enter the stage of actually contributing to the OpenBLAS repository. What I have also learned through this project is that it's not difficult to contribute to OSS. In fact, I have not yet contributed to the OpenBLAS repository, but considering that I am at the stage where I can do so soon, I believe that if you take small steps like the ones I showed on the previous slide and build them up little by little, everyone can contribute to OSS. So if you are hesitant to contribute to OSS, I encourage you to take the plunge. I will also continue to contribute not only to OpenBLAS but also to various other OSS projects from now on. In this slide, I'd like to thank two people for their support throughout this mentorship. First, I'd like to thank my mentor, Cianni. He gave us appropriate assignments and guided us on our way to becoming contributors, and he also followed up with us when we had technical difficulties. Also, thanks to Morazan. He is a fellow mentee and worked with me on the project's issues.
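The 4x4 GEMM kernel mentioned above can be sketched in scalar C. This is a hypothetical scalar stand-in, not the actual vector code: it assumes A and B have already been packed (A as a row-major 4 x kc sliver, B as a kc x 4 sliver), and it keeps the sixteen partial sums in local variables for the whole k loop, just as the real RVV kernel would keep them in vector registers.

```c
#include <stddef.h>

/* Scalar stand-in for a 4x4 GEMM micro-kernel: computes a 4x4 block
 * of C from a packed 4 x kc sliver of A and a packed kc x 4 sliver
 * of B.  The 16 accumulators live in c[][] (i.e., registers) for the
 * entire k loop and are written back to C only once at the end. */
void micro_kernel_4x4(size_t kc, const double *A, const double *B,
                      double *C, size_t ldc) {
    double c[4][4] = {{0}};
    for (size_t k = 0; k < kc; k++)
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                c[i][j] += A[i*kc + k] * B[k*4 + j];
    /* Single write-back pass: C is touched 16 times, not 16*kc times. */
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            C[i*ldc + j] += c[i][j];
}
```

The point of the register-resident accumulators is that the hot loop reads only the packed slivers of A and B and never touches C, which is exactly what a vector-register implementation achieves.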
He gave me technical advice when I had problems, and helped me when I couldn't follow the meetings, being aware of my English skills. Finally, I'd like to talk about my future plans. First of all, I'll continue to study and develop OpenBLAS after this mentorship is over. With the rise of machine learning, matrix computation will become more important, and so will the performance of linear algebra libraries such as OpenBLAS. Next, I will be more active in OSS. Through this project I have deepened my understanding of RISC-V, and I have always been interested in compiler technologies, especially LLVM, so I'd like to work on OSS related to those technologies. Also, I'm currently a senior in college and will be entering graduate school in the spring, and I'm looking for an internship. I'd like to experience a remote internship at an overseas company to make the most of this experience of working on a project with people from overseas. This is my email address and GitHub account; if you are interested, feel free to contact me. Thank you.

Hello. I hope I'm audible; could anyone quickly confirm? Okay, sure. So, hi everyone. Good morning, good evening, good afternoon, or good night, depending on where you live. I'm Anubhav, and I'm going to talk about my experience as a Linux Foundation mentee for the spring of 2022. A little introduction first: who am I? My name is Anubhav Chaudhary, and I'm currently an intern at Veritas. I'm a CS undergraduate from IIIT Bhubaneswar. I love fiddling around with code and exploring things. I've contributed to some open source projects before, like Calamares, the system installer. You can Google me by the name DIPRO447; I'm quite famous. Not really. So, let's start. I was interning under the Linux Foundation with an organization called CNCF. So what's CNCF?
CNCF is a nonprofit organization founded by the Linux Foundation in 2015, with the goal of promoting the adoption of cloud-native technologies and methodologies. Under CNCF there are various projects, most notably Kubernetes, Envoy, and Pixie. I was contributing to the project called Pixie. What's Pixie? Pixie is an observability tool for Kubernetes, and it works without requiring any changes to the main codebase. Let's see a bit more about what Pixie does: it lets you audit your Kubernetes environment, see different events, monitor different things, and log what's going on inside a Kubernetes application. A little bit about my project. When we moved from a monolithic application structure to more of a microservices environment, the number of messages going between these services increased drastically, almost exponentially. So naturally there should be some way to look at the messages going between different services. And of course, messages require protocols; protocols are basically rules. There are different protocols available for microservices to talk to each other: HTTP, gRPC, and so on. My goal for this project was to add support for automatic tracing and parsing of the AMQP protocol. Pixie already had support for various protocols like HTTP, Redis, and so on, and what was needed at that time was support for AMQP. AMQP stands for Advanced Message Queuing Protocol; it's an open-standard application layer protocol for message-oriented middleware, quite similar to MQTT if you've heard of that. What I had to do was implement parsing of the different types of messages in AMQP version 0-9-1.
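To give a flavor of what parsing AMQP 0-9-1 involves, here is a small C sketch of decoding the general frame header from that spec: a 1-byte frame type, a 2-byte channel, and a 4-byte payload size (both big-endian), followed by the payload and a frame-end octet of 0xCE. The struct and function names are illustrative, not Pixie's actual C++ implementation.

```c
#include <stdint.h>
#include <stddef.h>

/* AMQP 0-9-1 general frame layout:
 *   type (1 octet) | channel (2 octets BE) | size (4 octets BE)
 *   | payload (size octets) | frame-end octet 0xCE                */
typedef struct {
    uint8_t  type;     /* 1 = method, 2 = content header, 3 = body, 8 = heartbeat */
    uint16_t channel;
    uint32_t size;     /* payload length in bytes */
} amqp_frame_header;

/* Returns 0 on success, -1 if the buffer is truncated or the
 * frame-end octet is wrong.  `len` is the number of bytes available. */
int parse_amqp_frame(const uint8_t *buf, size_t len, amqp_frame_header *out) {
    if (len < 8) return -1;                       /* 7-byte header + end octet */
    out->type    = buf[0];
    out->channel = (uint16_t)((buf[1] << 8) | buf[2]);
    out->size    = ((uint32_t)buf[3] << 24) | ((uint32_t)buf[4] << 16) |
                   ((uint32_t)buf[5] << 8)  |  (uint32_t)buf[6];
    if (len < 7 + (size_t)out->size + 1) return -1;   /* payload truncated */
    if (buf[7 + out->size] != 0xCE) return -1;        /* bad frame-end octet */
    return 0;
}
```

A real tracer then dispatches on the type field to parse method, content-header, or body payloads; but even this header-level check is enough to reject malformed or truncated frames early.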
There are different versions of AMQP available, but I had to work on this one. Proceeding further, let's see how my three months went; there's a bit of story and excitement around everything, so I'll share that. First of all, selection. My selection happened around the end of February. At the time I was learning about networking in general and was highly interested in it, and I came across this mentorship on the Linux Foundation mentorship website, where Pixie was proposing implementing a parser for the MongoDB protocol. I was somewhat familiar with MongoDB, so I tried creating a demo parser myself. Then I joined the Pixie Slack and told my mentor what I'd done, and he loved the parser I'd made. I was selected a few days later. One of the biggest steps of these three months was setting up the dev environment. Pixie works with Kubernetes applications, and for an undergraduate, Kubernetes itself is somewhat tough. First I tried to set up the environment on my local laptop, but it was very hard. Then there was a Docker-based dev environment, which I wasn't familiar with beforehand, so it was a great lesson that, yes, you can develop inside Docker too. Next was reading the Redis protocol. There might be some confusion here: initially the idea was Mongo, now I'm reading the Redis spec, and finally I would implement the AMQP protocol. But protocols are like that: you get a sense of how protocols work in general, and then you can implement any of them. I guess that's universal for all of computer science: you get a sense of something and then proceed with the specifics. Redis was easy to understand since it has a shorter spec, so I read the Redis protocol first, and then I read about the AMQP protocol. As you may have observed, that's a lot of reading.
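The Redis protocol (RESP) really is short: replies are type-prefixed, CRLF-terminated lines, e.g. "+OK\r\n" for a simple string and ":123\r\n" for an integer. A minimal C sketch of those two cases shows why it makes a good first protocol to parse; this is an illustration of the wire format, not Pixie's parser.

```c
#include <string.h>
#include <stdlib.h>

/* Minimal parser for two RESP (Redis serialization protocol) reply
 * types: "+<string>\r\n" (simple string) and ":<number>\r\n" (integer).
 * Copies a simple string into `out` or stores the integer in *val;
 * returns the leading type byte on success, or -1 on error. */
int parse_resp(const char *msg, char *out, size_t outlen, long *val) {
    const char *end = strstr(msg, "\r\n");
    if (!end) return -1;                     /* no CRLF terminator */
    size_t n = (size_t)(end - msg) - 1;      /* bytes after the type char */
    switch (msg[0]) {
    case '+':                                /* simple string */
        if (n + 1 > outlen) return -1;
        memcpy(out, msg + 1, n);
        out[n] = '\0';
        return '+';
    case ':':                                /* integer */
        *val = strtol(msg + 1, NULL, 10);
        return ':';
    default:                                 /* other types omitted here */
        return -1;
    }
}
```

Full RESP also has error strings, bulk strings, and arrays, but they follow the same prefix-plus-CRLF pattern, which is exactly the "get a sense of it once, then generalize" point made above.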
I guess for almost 45 of the 90 days, I was just reading and learning. Then it was time for implementation. I started implementing the various messages and parsing techniques in C++ inside the Pixie environment. It was kind of a tough thing: I was somewhat familiar with C++, but not with development-grade C++, and my mentors helped me a lot there, so I'd really like to thank them. And finally there was testing. Testing was also kind of different: I manually tested a lot of data. I would take raw message strings, parse them by hand, and then check whether my parser gave the same result. So, what did I learn? I learned a lot about AMQP. I learned a lot about Docker; Docker was really one of the great things I picked up. C++, of course; I was programming in C++, so I learned a lot of programming techniques. Build tools: I used Bazel for building, so I learned a lot about build tools, and about Kubernetes in general, since Pixie depends on Kubernetes. But the main thing I learned was computer science itself; this mentorship really helped me understand the deep parts of computer science. So if anyone is thinking about applying for this mentorship program, I would say it was one of the best decisions of my life. These two people are my mentors, Yaxiong and Omid, and I would personally like to thank them a lot. They are very senior people: principal software engineers and founding members of Pixie. But they helped me around almost every corner of my project; even for very small things, when I hit issues with the build and so on, they were helping me hands-on, especially Yaxiong, who was my main mentor. And yeah, that's pretty much it. Again, I go by the name DIPRA447.
You can Google me, follow me on Twitter, and message me at DIPRA447. About my future aspirations: if you have an opportunity for me, let me know; I might be interested. And I will always keep contributing to open source; it's one of my main loves. Thanks a lot for hearing me out.

Hey, everyone, I am Sandipan Panda, and today I will be sharing my experience of working as a mentee to improve the supply chain security of Cilium as part of the Linux Foundation Mentorship Program. So let's get started. A little bit about me: I am a senior undergraduate majoring in information technology at Maulana Abul Kalam Azad University of Technology in India. I started my journey in open source as a mentee in the Kubernetes community, where I am now a member, and I am passionate about open source and cloud-native technologies. Now let's get a quick overview of the project. Cilium is a software-defined networking and network security solution for cloud-native applications. It provides visibility and security for applications running on Kubernetes, using an open source, high-performance model that is simple to set up and configure. Now let me share my project goals. I improved the security posture of the Cilium family of open source projects as part of the Linux Foundation Mentorship Program. This included implementing container image signing, generating software bills of materials (SBOMs), and bringing Cilium's CLOMonitor score to 100%. As you can see, after the project's completion we are generating SBOMs and signing the container images and the SBOMs using cosign, and we have achieved a CLOMonitor score of 100%. Now let me share the key takeaways of my mentorship experience at Cilium.
While working on the project, I came to understand the importance of container security and of signing container images: signing provides a means of verifying the integrity of the images, which ensures that only trusted images are deployed in production. Signed images also help protect against malicious actors who may try to modify or replace images with malicious code. But why is keyless signing better than conventional signing? Because it eliminates the need for manual key management and reduces the risk of maliciously signed images being deployed into production. Keyless signing also simplifies securely managing large numbers of container images, as there is no need to maintain a key per image. That's why we implemented keyless container image signing in Cilium. I also came to understand the importance of generating software bills of materials while working on the Cilium project. An SBOM gives organizations visibility and control over the components used in their applications, which helps them ensure they are using secure and reliable components and reduces the risk of security vulnerabilities or compliance issues. You might remember me mentioning that Cilium attained a CLOMonitor score of 100%. But what is CLOMonitor? It is a tool that periodically checks open source project repositories to verify that they meet certain project health best practices. Now let me introduce my mentors and share how we worked together. My mentors André Martins, Natália Réka Ivánkó, and Jed Salazar played a pivotal role in my successful graduation from the Linux Foundation Mentorship Program, providing valuable guidance and feedback throughout the entire project development process. This included helping me refine my ideas and develop an achievable plan for completing the project.
They offered moral support and encouragement throughout the entire time I was working on the project; their positive attitude kept me motivated even when things got tough or I felt overwhelmed by the task ahead of me. Finally, they provided technical advice, resources, and connections that enabled me to complete the project successfully. Their constant support and dedication to helping me reach my goals made all the difference. We worked together by scheduling weekly check-ins to review progress, blockers, and upcoming tasks, communicating via Slack to discuss issues and get feedback, and coordinating with folks from the Kubernetes and Sigstore communities. I would like to extend my heartfelt gratitude to the awesome communities of developers at Cilium, Kubernetes, and Sigstore for their valuable input and for helping us adopt the Kubernetes bom tool, used to generate the software bills of materials, and Sigstore's cosign tool, used to sign the container images. So, what's next? I plan to utilize my skills in coding and development to create open source applications that serve a greater purpose; to become a mentor for new students interested in open source and help guide them through the process of getting started; to join or even start an organization or group of students devoted to furthering the advancement of open source technology in both education and industry; and to attend conferences, workshops, hackathons, and other events related to open source software in order to stay informed on the latest developments and trends in the field. With that, I will wrap up the session. Thank you so much for joining. I would like to take a moment to thank my mentors who have helped me throughout my term; your guidance and support have been invaluable, and I am grateful for all the knowledge you have shared with me.
Finally, I would like to thank the Linux Foundation for organizing and hosting this showcase and giving me the opportunity to build relationships with the open source communities that will last far beyond this program. Thank you.

I have worked with and learned about the Kyverno policy engine: how dynamic admission control works in Kubernetes, how you can validate and mutate resources, and I gained a deep understanding of various Kubernetes resources as well. I have also developed a solid understanding of YAML and JMESPath, the JSON query language used in the Kyverno policy engine. Along with that, I learned how to handle and understand a heavy codebase, of which Kyverno is a good example. During this mentorship I also developed an understanding of, and a keen interest in, learning Go, and learned more about Git and GitHub. I learned how to write test cases for various scenarios, how to meet deadlines, and how to handle multiple things at a time. It helped me become a better communicator; it pushed me out of my comfort zone to ask questions of various people and clear my doubts; and it helped me grow both professionally and personally, including learning from the mistakes that came up while solving issues or facing roadblocks. My mentorship has been an amazing experience, and one of the reasons for that was my mentors. My mentors, Venkatesh Kuttarkar and Pratik Pandey, are both senior software engineers at Nirmata. They really helped me understand the Kyverno policies, from daily updates on Slack to weekly meetings for discussing progress and roadblocks. They also shared the right resources with me for learning about the various Kyverno policies and how to understand them.
They also helped me with how to approach a particular policy and solve issues related to it. My exams came up in the middle of the mentorship period, and they were quite supportive and friendly about that as well, and they also gave me career guidance. I would like to express special gratitude to Chip Zoller, who is a technical product manager at Nirmata. Chip helped me understand the Kyverno policies very deeply; he helped me resolve each and every doubt, merge PRs in the Kyverno policies repository, and improve the test cases. The surprise of my mentorship was the Kyverno community's support: all the community members were very helpful in making me understand the Kyverno policies. The mentorship also increased my self-confidence, and I met some amazing people I can add to my network. And the last thing, which I never expected: my mentorship experience blog got published on the Nirmata website itself. As for the future, Kyverno introduced me to software supply chain security, so I will be exploring that field. I am also planning to increase my open source involvement in the cloud-native world and looking for more opportunities to learn and improve my knowledge. You can connect with me on different social media, like LinkedIn and Twitter, or drop me an email, and you can check out my work and read my mentorship blog. Thank you so much for joining.

Hello, my name is Juhi. Today I'm going to talk about my experience with Linux kernel contributions, especially troubleshooting kernel panics. Before we go any further, please allow me to introduce myself. My name is Juhi Kang, and I'm currently working as an open source developer; I have been contributing to the Linux kernel networking subsystem for years. So, let's get started. When we develop with the Linux kernel, sometimes a kernel panic occurs.
When I was a newbie to the Linux kernel, solving a kernel panic was difficult, so instead of debugging it I would just reboot my computer. That wasn't a proper solution; it's an endless loop. To get out of the loop, I tried to find an effective way of troubleshooting kernel panics, so what I'm going to show you is a step-through of kernel panic debugging with an example. Before we dive in, I'll show you how to raise a kernel panic. The command is pretty simple: just write the character 'c' to /proc/sysrq-trigger. This command raises a kernel panic immediately. But why does this command cause a kernel panic? Let's find out. First, let's look at the kernel panic log. Skimming through it, we see many details that may not be familiar, so let me give you a brief description of how the log is constructed. At the top is the kernel panic log header, which provides a summary of the crash, such as the cause of the panic and the version of your kernel. Next is the call trace. The call trace provides context information about the crash; as you can see on the left side, it shows the function symbols and function offsets of the execution flow. At the bottom, you can find the register information, which provides a dump of the CPU registers at the time of the crash. The RIP register holds the address of the currently executing instruction, and the Code line shows the currently executing code bytes. Now, before we start to analyze this call trace, we need to save the panic log; here I've saved it as error.log. In the kernel tree there is a script called decode_stacktrace.sh, which decodes the stack trace's function symbols into source code locations, so we can easily look up the kernel source code to figure out what caused the crash. Now, let's take a deep dive into how the kernel panic actually occurs.
First, let's look at what command triggered the kernel panic. The command is pretty simple: as you can see, output redirection is used to write to the sysrq trigger file under /proc. So what is this command actually doing? It just writes the character 'c' to /proc/sysrq-trigger. To figure out why this raises a kernel panic, let's analyze the call trace. As you can see on the right side, the call trace is reported from bottom to top, so entry_SYSCALL_64_after_hwframe is called first and panic is called last. Before we start, I will ignore the symbols that have a question mark; a question mark in the call trace means the information about that stack entry is probably not reliable. As you can see here, I've visualized the call trace on the right side. I can't explain everything because I don't have enough time, but writing a file in a Linux environment involves several steps, and the functions on the right are color-coded by subsystem. So let's begin our panic troubleshooting with the kernel code; here we'll use kernel version 6.2-rc2. Initially, at the system call interface, entry_SYSCALL_64_after_hwframe is called, and as you can see in the code on the left side, this function calls do_syscall_64. Then from do_syscall_64, do_syscall_x64 is called, and this function indexes into sys_call_table, passing the registers as arguments. Let's take a closer look at sys_call_table: it is actually an array of syscall handlers, and at the second entry of this table (syscall number 1) you can find the write handler. From this handler, the ksys_write function is called. So, back to sys_call_table: it calls the write syscall handler, and eventually ksys_write is called. After that, ksys_write calls vfs_write, the write function of the virtual file system layer, and as you can see from the source code, vfs_write calls the file's write operation.
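The sys_call_table lookup described above can be modeled as a plain array of function pointers indexed by syscall number. This is a toy sketch of the dispatch pattern only; the handler names and numbers here are hypothetical stand-ins, not the real x86-64 table.

```c
#include <stddef.h>

/* Toy model of syscall dispatch: an array of handler function
 * pointers indexed by syscall number, as with sys_call_table. */
typedef long (*syscall_fn)(long arg);

static long sys_read_stub(long arg)  { return arg + 1; }  /* placeholder */
static long sys_write_stub(long arg) { return arg * 2; }  /* placeholder */

#define NR_READ  0
#define NR_WRITE 1

static const syscall_fn toy_call_table[] = {
    [NR_READ]  = sys_read_stub,
    [NR_WRITE] = sys_write_stub,
};

/* Look the number up in the table and invoke the handler, the way
 * do_syscall_x64 dispatches through the real sys_call_table. */
long do_syscall(unsigned nr, long arg) {
    if (nr >= sizeof toy_call_table / sizeof toy_call_table[0])
        return -1;  /* out-of-range syscall number */
    return toy_call_table[nr](arg);
}
```

This is also why a call trace through a table like this shows the concrete handler (the write handler) rather than the generic dispatcher: the dispatcher is just an indexed indirect call.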
As we dig deeper, we can see that this write operation maps to procfs's write function, which lives in the proc filesystem layer; from there the proc directory entry's write function is called, which eventually reaches the proc handler registered for this file. Looking deeper still, that handler maps to write_sysrq_trigger, which is located at the sysrq device driver level. Moving on, write_sysrq_trigger calls __handle_sysrq, and from __handle_sysrq, the sysrq_get_key_op function checks which sysrq command was issued by consulting the sysrq_key_table data structure. sysrq then finds that the character 'c' maps to the crash operation. Back in __handle_sysrq, this eventually calls the crash operation's handler function; at last, sysrq_handle_crash is called, and finally it raises the panic with the panic() function. So far, we've looked at how the kernel panic is caused, diving from entry_SYSCALL_64_after_hwframe all the way down to panic. As a last note on analyzing kernel panics, you can look at the syzbot dashboard for additional resources; from it you can gather various resources related to your bug, using the kernel panic analysis methods we have learned. Through all of this, I was able to learn how to do kernel debugging in various ways, and this opportunity gave me a chance to boost my Linux kernel contribution skills. Thanks for listening to my session; if you have any questions, feel free to ask me via email. Thank you.

My title is "The LFX Mentorship and Me." A bit about me: hi, I'm Sitchul Maurya, and I'm an LFX mentee for the 2022 spring term on the KubeArmor project. I'm a 2022 engineering graduate, and currently I'm working as an associate product engineer for cloud. So, what is KubeArmor, and why did I select this project?
So KubeArmor is a cloud-native runtime security enforcement system that restricts behavior such as process, file, and network operations. It uses LSMs, that is, Linux Security Modules, like SELinux, AppArmor, and BPF-LSM. KubeArmor has policies that we define on Kubernetes clusters, and by using KubeArmor, users get rich alerts and telemetry that they can use to identify malicious actors. The other question is why I selected to work on this project. Security has always been an interesting topic for me since my college days, when I used to explore and tinker with how Linux works and how things work around Linux. I also learned about web security during college, and eventually, when I got introduced to Kubernetes, I started exploring projects related to security. That's how I ended up working with KubeArmor, which is also about security. So now, the problem. As I already discussed, KubeArmor provides rich alerts and telemetry. The image you see on the screen is one of the alerts we get whenever there's an attack on the Kubernetes cluster. Having a look at it, you can see that it gives us some basic details like the timestamp, cluster name, host name, pod name, and container ID. Some of the more important details in the alert are the policy name, which was defined by the user, and the operation field. Here the operation field is of type Process: as I already discussed, KubeArmor provides three types of protection, for process, file, and network operations, and the telemetry you see here is for the process type. Another field is the resource field, and when we check it, it is pointing to the sleep binary.
So a policy was applied to the sleep binary, and whenever a malicious attacker tries to run the sleep binary on the Kubernetes cluster, the attacker gets permission denied, and this telemetry is sent to the user who applied the policy to the cluster. This is one of the alerts we get from KubeArmor. But consider a scenario where we have applied many policies to the cluster and an attack is done on the system. At that time, a lot of telemetry will reach the user, and it gets really difficult to see which alerts need more attention than others that are not so severe. That's the reason we have a CLI for KubeArmor, called karmor, and that's the part I worked on during my mentorship project. My task was to add flag options to the current CLI, and these are the flags I added. The first was the namespace flag. On a Kubernetes cluster we can have different namespaces, and based on requirements, different policies are applied to the cluster. If users want to see the alerts for a particular namespace, they can use this flag. The other flag was the log-type flag, which has two types: host logs and container logs. KubeArmor can run both on a Kubernetes cluster and on a normal host system like Ubuntu. So if policies are applied on a host system and an attack is done on the host, the user can easily use this filter and see the alerts given by KubeArmor. The other flag was the operation flag; as we have three types of operations on the KubeArmor side, that is process, file, and network.
There are many scenarios for this: for example, consider a scenario where you have MySQL running on your system, with a password file already stored on it. You can apply a KubeArmor policy to keep a check that no malicious hacker is trying to access those MySQL password files. Then, if an attacker tries to access the files, we get an alert quickly, and we can use this operation flag to easily filter it out. The other flag was the limit flag. This flag was a bit tricky to implement, and it took quite a while for me to understand how things work with it, but after understanding the whole flow I was able to implement it. The limit flag is a very basic flag that can be used to limit the number of alerts the user is getting. If there's an attack on the system, and multiple different types of attacks are done, it gets really difficult to keep up, because there will be a plethora of alerts coming in. The user can use this limit flag and specify the number of alerts they want to see; for example, if they want to see 20 alerts, they can specify 20, they will get 20 alerts, and the CLI will stop. Basically, the last flag I worked on was the label flag, which is very similar to kubectl's label flag: by using it, we can easily get the alerts based on the labels applied to the pods. So these were all the flags I worked on during the project, and basically this was the solution given from my side. Now, my learnings. There were many things I learned through this mentorship project, and some of the things I'd like to highlight are these. First, Golang, as the project is written in Golang.
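Taken together, the flags described above boil down to a predicate over each alert plus a cap on how many results are returned. Here is a minimal Go sketch of that filtering idea, using only the standard library; the option names and types mirror the talk (namespace, operation, label, limit) but are assumptions, not the CLI's actual interface.

```go
package main

import "fmt"

// alert carries just the fields the filters look at.
type alert struct {
	Namespace string
	Operation string            // Process, File, or Network
	Labels    map[string]string // pod labels
	Message   string
}

// filterOpts mirrors the CLI flags described in the talk.
type filterOpts struct {
	namespace string    // empty means "any namespace"
	operation string    // empty means "any operation"
	label     [2]string // key/value pair; empty key means "no label filter"
	limit     int       // 0 means unlimited
}

// filterAlerts keeps alerts that match every set filter and stops
// once the limit has been collected, like the CLI stopping after
// the requested number of alerts.
func filterAlerts(alerts []alert, o filterOpts) []alert {
	var out []alert
	for _, a := range alerts {
		if o.namespace != "" && a.Namespace != o.namespace {
			continue
		}
		if o.operation != "" && a.Operation != o.operation {
			continue
		}
		if o.label[0] != "" && a.Labels[o.label[0]] != o.label[1] {
			continue
		}
		out = append(out, a)
		if o.limit > 0 && len(out) == o.limit {
			break
		}
	}
	return out
}

func main() {
	alerts := []alert{
		{"default", "Process", map[string]string{"app": "web"}, "sleep blocked"},
		{"kube-system", "File", nil, "passwd read blocked"},
		{"default", "File", map[string]string{"app": "web"}, "db file blocked"},
	}
	// Show only File-operation alerts in the default namespace.
	for _, a := range filterAlerts(alerts, filterOpts{namespace: "default", operation: "File"}) {
		fmt.Println(a.Message)
	}
}
```

The limit check living inside the loop is what makes the flag cheap: filtering stops as soon as enough matches are found instead of scanning every alert.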
Golang was something I had a bit of experience with, but while working on this mentorship project I was able to learn how Golang works, and about some of its packages, like the Cobra CLI library that was used to build the KubeArmor CLI. Another thing was gRPC. I was already familiar with REST protocols, but gRPC was pretty new to me. I started exploring how it works, and when I got to know it, I was very surprised and happy to see how fast gRPC is in terms of speed compared with REST. The other thing was eBPF, the current buzzword in the cloud-native space. There's a quote I like to use to define it: what JavaScript is to the web, eBPF is to the kernel. It becomes really easy to program your kernel by using eBPF code. I got introduced to it and started exploring more about eBPF, and that's how I ended up learning the CLI; I'm still continuing to learn about KubeArmor and contribute to the CLI project. Other than this, I also learned many soft skills: how to present yourself, how to communicate with people, how to do asynchronous communication, and much more. Next, some pointers when applying, since there are many people looking to apply for the LFX mentorship program. A bit about myself: I was selected for this mentorship on my third attempt, so I had already tried twice and was not selected, but on my third attempt I made it. Some of the pointers I would like to give you when applying for this mentorship: the first is community involvement. It is always good if you join the Slack channel of the project, introduce yourself, start taking good first issues from the repo, try to install the project in a local development environment, and see how things work with the project. Another thing is to contact mentors.
It is always good to be in the eyes of the mentors and let them know that you want to work as an LFX mentee for the term; try attending community calls, show up to the meetings, and ask questions openly in the Slack channel. The third point is: don't doubt yourself. There's the imposter syndrome that everyone has; I have it too. Many people think, "I don't know enough, I will not be able to make it." But I would say just keep applying; you have nothing to lose at the end of the day. You will still have PRs done, and you can show them in your interviews and anywhere else you want to use them. The last point is to learn the tech stack. It is always good if you know the stack, because that put me way ahead of other people: mentors look for people who already know the tech stack the project is based on. It is not a hard requirement, but still, if you know the stack, it really helps you get selected for the project. I would like to thank both of my mentors, Barun and Rahul. They really helped me throughout my mentorship. I still remember trying to set up KubeArmor on my local system; KubeArmor is a bit heavy to run locally, as it requires quite a lot of RAM. At that time I was using Windows with a virtual machine inside it, so it was really difficult to set up. I asked a question on the Slack channel, and Rahul was kind enough to ask me to join a call, where he helped me figure out how to set it up, how to use dual boot, and how to run it on an Ubuntu machine. Barun also helped me throughout my whole journey with all my small doubts and was very helpful while I worked on this project. So I would like to thank both of my mentors for their help. Finally, thank you. I would like to thank the whole Kubernetes community, because that's where it all started.
I still remember joining the KCD Bangalore meetup, where I was able to meet a lot of Kubernetes people, and they asked me to join a Kubernetes channel where many people hang out and talk about Kubernetes. That's the channel where I got introduced to many people and started exploring more about Kubernetes, and that's how my journey started. So I would like to thank the whole Kubernetes community, the Linux Foundation, and the CNCF. Thank you, and these are the handles where you can contact me and reach out to me. Once again, thank you. Thank you, everybody. Those were awesome presentations, and thanks for sharing your experiences with us today. And thank you to all the mentors: without them, we wouldn't be able to offer the mentorship programs as we do; we wouldn't be able to run them. Thank you very much, everybody, and good luck. That's a wrap. Thank you.