Welcome, everybody, to the LFX Mentorship Showcase; we'll get started. I am Shuah Khan, Linux kernel maintainer and Linux Fellow at the Linux Foundation. What do I do at the Linux Foundation? I lead the mentorship programs. My two passions are learning, and sharing and empowering others to do the same, so I get to do both in my role at the Linux Foundation.

Let's talk a little bit about the beginner's problem. When we are trying to get into a new area or learn a new technology, we are faced with the problem of where to start, and the first step is always very difficult. First of all, we have to figure out what we want to do and what we are passionate about, and whether learning something new would let us do something we're passionate about. Once we know that, how do we get started? The second problem is that when we are learning a new technology area, we're not confident. We are trying to figure out where we can learn, where we can find resources, and who we can ask for help. And when we want to approach a community, we often feel that it is a community of experts: intimidating, daunting, and the code bases always look very complex.

At the Linux Foundation, we have resources for you to explore learning pathways: what you want to learn, and what you can learn in different pathways such as the kernel, blockchain, CNCF, and so on. If you go to the Linux Foundation training website, you will be able to explore various pathways and also take some free courses and webinars and try out the tutorials. That gives you a feel for the area and will hopefully help you figure out what you're passionate about. You can also explore the LF Live webinar series, free webinars that cover various technology areas and software engineering overall. Experts lead interactive sessions where you can ask questions; live sessions happen once a month, and past webinars are on the website. Once you know the area you want to pursue, have learned a little bit, and want to connect with a mentor and work in a formal mentoring program, you can apply to our mentorship program. This is our way to connect open source community experts and maintainers in each of these areas with new developers who want to learn. And once you have applied, learned, and graduated from the program, what's next for you? This LFX Mentorship Showcase is designed to connect new graduates with people looking for talent. We have seven graduates talking in this segment. Before I hand it off to Aditi Ahuja, I would like to take a moment to thank our mentors; without our mentors we wouldn't be able to do what we do, and we wouldn't be able to offer the mentorship programs. And with that, Aditi, you can go ahead; I will stop sharing.

Hello, I'm Aditi, and I will be talking to you about my experience as an LFX mentee at Thanos. My talk is titled "A Mentee's Foray into Thanos". Thanos is a highly available metrics system with unlimited storage, built on top of Prometheus. I was a Fall 2021 mentee, mentored by Ben Ye. I'm currently an intern at Couchbase, and I'm from Bangalore, India. You can find me on these handles. My talk will be divided into two parts: first, a quick overview of my work at Thanos, and second, a bit about my mentorship experience. Let's start with why I picked Thanos and the specific project.
The observability ecosystem had intrigued me for quite a while, and the mentorship with Thanos would give me the perfect head start. As a databases enthusiast, it was also an opportunity to learn more about time series databases, and hence I picked the project related to compaction of data. Familiarity with Go, the language of the code base, was an added bonus.

Now, a quick detour to introduce a couple of concepts. Compaction is the creation of a new, compacted block from one or more source blocks; the compacted block then replaces the source blocks. The reasons for compaction include deduplicating overlapping data, mostly in adjacent blocks since this is a TSDB, so that there are no duplicate data copies, which saves disk space, and so that queries which require data from multiple blocks do not need to dedupe data. Downsampling can be understood as reducing the resolution or density of the data, thinning it out without reducing its accuracy. Instead of storing a data point for every one second of time, downsampled data stores a data point for every five minutes, and then, at the lowest resolution, for every one hour.

Compacting a large amount of data can be time consuming, and at the beginning there were no metrics provided for the user to track the progress; the user had to manually check the UI repeatedly to get an idea of the progress. To solve this, my project involved adding Prometheus metrics to show the overall status of compaction and its related processes, downsampling and retention, by simulating these three processes. Compaction is also a multi-stage process, and it is useful to track the status and time for each step of a compaction run, which is more fine grained than the overall metrics. This was a stretch goal, and it was done by adding OpenTracing support and instrumenting the different steps, such as block planning, download, etc.

Now, on to the latter half: making the most of my mentorship. I started out knowing close to nothing about the observability ecosystem. My mentor Ben guided me from starting out by making toy applications that export Prometheus metrics, to adding metrics related to compaction and downsampling, and finally working on the stretch goal of adding tracing support for compaction. Thanks to his guidance, it felt like a gradual progression, simultaneously challenging yet supportive. These are my main technical takeaways from the mentorship. I improved my Go skills; although I did have some prior experience with Go, the mentorship polished my skills and has made me somewhat nitpicky where Go style is concerned. I learned about Prometheus and about developing Prometheus exporters for an application in Go; I also wrote a dev.to article about this. I learned about tracing and instrumenting applications, and the basics of working with Jaeger, during the latter half of my mentorship. And finally, I improved my understanding of distributed systems and TSDBs, and of Thanos compaction in particular. This was pretty fascinating, since I hadn't delved deeply into these topics earlier.
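To make the metrics work described above concrete, here is a minimal sketch using the Prometheus Go client (prometheus/client_golang); the metric names and values are illustrative, not the ones Thanos actually registers.

```go
// Minimal sketch: exposing compaction-progress metrics with the
// Prometheus Go client. Metric names are illustrative only.
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
	compactionsTotal = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "example_compactions_total",
		Help: "Total number of compaction runs completed.",
	})
	blocksPlanned = prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "example_compaction_blocks_planned",
		Help: "Blocks remaining to compact, from a simulated plan.",
	})
)

func main() {
	prometheus.MustRegister(compactionsTotal, blocksPlanned)

	// Simulate one planning pass: 42 blocks queued, one run done.
	blocksPlanned.Set(42)
	compactionsTotal.Inc()

	// Serve the metrics endpoint for Prometheus to scrape.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```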
The key non-technical takeaways, just as important as the technical ones, were these. Take initiative; start small if needed. Come up with ideas or improvements for the project and take on small issues; no issue is too small. My first PR was a doc change, and my next two PRs were code refactoring and fixing an incorrect log message, but they definitely boosted my confidence and familiarized me with the process. Don't consider time spent on a PR that wasn't merged, or an idea that wasn't accepted, as a sunk cost. The first major PR I worked hard on was not merged, since it turned out it wouldn't be needed any longer, and I had spent a couple of weeks on it by then. I was really looking forward to it getting merged, and it did sting for a bit, but then I viewed it as yet another hands-on learning opportunity, one of the many I received at Thanos. Seek out feedback proactively and frequently, and when you do, try not to get frustrated by mistakes and lose your morale; keep trying even in the face of them. This is still a work in progress for me, but it was definitely one of the most important learnings. Tap into the community; the Thanos community is an amazing one, like many other open source communities. Interact with and learn from them as much as you can. Apart from Thanos-specific learnings, it also made me aware of newer opportunities and taught me quite a bit about software engineering in general. And lastly, the importance of balance, or downtime: this is emphasized by a lot of mentors in the community, and it's only now that I'm realizing how important it is.

Here are some points for future applicants to keep in mind while applying. Get involved in the community as early as possible; contact the mentor and understand the project. This has two benefits: it helps in assessing which project is best suited to your skills and whether a project is actually what you think it is, and it lets you tailor your cover letter accordingly. Basically, show initiative. Keep applying and don't doubt yourself; I got selected only after applying the second time around. And finally, learn the tech stack if you don't know it already; this gives you an edge. Thanks! Feel free to reach out to me with any questions or feedback, and for a more complete account of my experience, head to this dev.to article. Thank you so much.

So, my project was to develop an E2E dashboard for LitmusChaos. Giving a brief introduction about myself: I'm currently an undergraduate student at IIT Dhanbad. I was a Google Summer of Code student with FOSSology and an LFX mentee with LitmusChaos, and currently I'm an SDE intern at ChaosNative, which is the parent organization of LitmusChaos.

Before moving on to my project, the main question is: what exactly is LitmusChaos? The very basic understanding is that LitmusChaos is a tool used for chaos engineering. Now the question is, what exactly is chaos engineering? Chaos engineering is about testing your production environment to see whether it can withstand turbulent conditions or not. In theory, when you have a multi-microservice architecture, all hosted on cloud native technology like Docker and Kubernetes, you need to be sure that there's no downtime in your application. Say, for example, you have five copies of your server running in production, and due to some fault, two of the servers crash. You would expect the requests to be diverted to the remaining three servers. Due to the decrease in the number of servers, the response time might increase a bit, but later you would see that two new copies of the server are created automatically, everything comes back to normal, and requests are rerouted to all five servers again. This is what should happen in theory, but in practice, sometimes things don't happen as you expect.
Even after the new servers are created, it might happen that requests are not rerouted to all five servers and only three servers are fulfilling requests. An I/O fault might occur, or network latency; there can be many such faults which cannot be caught by a generic end-to-end test or any other usual type of test. You have to test your cloud native architecture as a whole, including how your different microservices are connected.

The general steps of any chaos engineering exercise are these. First, you identify the steady-state conditions; by that I mean the current response time, server usage, the load on the CPU, and so on — you observe and identify all of these. Then you introduce a fault. It can be any fault: you crash some servers, or you introduce some I/O delay or some network latency. Afterward, you check whether the steady-state conditions are regained after some time. If they are, then yes, your system is resilient — very good. But if not, then congrats, you have found a weakness, and now you have to resolve it. Chaos engineering is all about this. One more thing about LitmusChaos: just two days ago, LitmusChaos finally became an incubating project in the CNCF community. Earlier it was a sandbox project, but two days ago LitmusChaos moved to incubating, which is the next level of project maturity in the CNCF ecosystem.

Moving on to my project, the next question that arises is: why was this project required in the first place? What were the limitations of the earlier architecture such that a fresh UI was needed? Earlier, the E2E dashboard was created using static HTML files. Yes, you heard that right: there were around 20 or 30 HTML files which they were maintaining. That's fine as a beginning — for documentation it's completely fine to have HTML files, there's nothing wrong in that — but with time, new requirements came up. We have tests running daily, nightly, so we needed to show the progress of a particular test and its result on the website. Now things get complicated. For this, they didn't remove the HTML files; instead, they created a Python batch script which updated the HTML files daily. So instead of creating a server, they relied on that Python batch script running every day. You can see that as more new requirements came in, things were getting much more complex. This architecture is not at all scalable and has very limited reusability, because you have 20 or 30 HTML files, all with the same kind of theme, and you cannot reuse markup between different files. And obviously, maintaining so many HTML files was a difficult and tiresome task. What was required was a new UI in a framework like React, with a fresh design and new functionality that cannot be added in a static HTML file, plus a server to back it; earlier we were mainly relying on the GitHub APIs for those details. And at the end of any open source work, we need testing and documentation. These were the limitations and the project deliverables.

So this is how it ended up: we created an E2E dashboard, and here you can see the GitHub API results displayed very nicely — whether the last pipeline that ran on GitHub succeeded or failed — and I even added a dark theme.
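As an illustration of the kind of Golang server the dashboard talks to, here is a minimal sketch that proxies pipeline status from the GitHub Actions REST API; the /api/runs route and the OWNER/REPO placeholders are assumptions for illustration, not the actual LitmusChaos endpoints.

```go
// Minimal sketch of a Go handler that forwards GitHub Actions
// workflow-run status to a frontend. OWNER/REPO is a placeholder.
package main

import (
	"io"
	"log"
	"net/http"
)

func runsHandler(w http.ResponseWriter, r *http.Request) {
	// GitHub REST API: list recent workflow runs for a repository.
	resp, err := http.Get("https://api.github.com/repos/OWNER/REPO/actions/runs?per_page=5")
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()

	// Forward the JSON payload as-is for the React UI to render.
	w.Header().Set("Content-Type", "application/json")
	io.Copy(w, resp.Body)
}

func main() {
	http.HandleFunc("/api/runs", runsHandler)
	log.Fatal(http.ListenAndServe(":8081", nil))
}
```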
Then, in tabular format, you can see how many pipelines failed and how many succeeded, the various jobs and steps within them, the total coverage, and a description. The most difficult part for me was fetching the GitHub API logs, because it wasn't as straightforward as it might look. Initially it was decided that we just had to build the front end in React, and the server part would simply be the GitHub API. But the log-fetching part came up as a new requirement, and I learned that I now had to create a new Golang server. I was not very familiar with Golang, so it came to me as a positive surprise that I had to build a Golang server on my own from scratch. The mentors were very helpful in guiding and supporting me and provided good resources, and in the end I created the server and learned a lot in the process.

The next thing is what I learned from this wonderful LFX mentorship experience. I learned how to write production-level React code. There are many differences between writing code for a personal project and writing for a big open source organization. In a personal project you can write code in a suboptimal way, or use any library of your choice, but in open source, whenever you use a particular library, you have to answer many questions: why is that library required; if it is required, is the license compatible; how well is the library maintained; when was the last release; how good is its community? There are many things to keep in mind before contributing to an open source project; for every line you write, you have to be sure that this is the best way to do it. In this process you learn a lot about how things are done in a big industry or a big open source project. As I mentioned, I learned how to create a Golang server, I explored more about the GitHub APIs, and of course I gained a deeper understanding of chaos engineering. That was the technical part. On the interpersonal side, I learned about work communication: we had weekly meetings with my mentors where we discussed the project's progress and details, to ensure that there were no blockers and that we were all on the same page. It also taught me time management, which helps in life in general, because sometimes you have to manage your school, your studies, your work, and other extracurricular activities all at once; it's a good lesson that you have to give time to everything and know how to manage these things. You learn troubleshooting, self-confidence, work etiquette, and how to be a good listener — all of it was a great learning experience.

Next, my complete LFX journey: how it started, and why I chose the LitmusChaos project. Being a web developer, I was interested in making my projects highly scalable with zero downtime, and I was quite fascinated by cloud native technologies; I wanted to learn about them and contribute to them. It was then, while exploring CNCF projects, that I came to know about the LitmusChaos project and that it needed an E2E dashboard built in React. It was the best project for me, because I could contribute my skills to the project and give back to the community, and in return I would learn a lot about chaos engineering and cloud native technology and how things work at such a big scale. So I learned a lot from them.
Then I joined the community Slack channel and took part in the discussions; the organization has regular monthly meetups, community meetings, and stand-ups, so it was really fun discussing those things. Then I created a detailed proposal with a detailed timeline and applied. After I got selected came the coding time, where I developed the front end and the Golang server. Perhaps the most difficult part was modifying and incorporating the changes requested by the mentors. The mentors were very helpful and supportive, and I learned a lot through PR review as well: how things are done in a production environment and in a big open source project. They made me understand how things should be done the right way and how you can improve — it can be even a small change; if you are writing one line, how you can improve that one line as well. And in the end, I keep maintaining the project, because it's not a one-time thing that you create; you need to keep maintaining it.

Now, what's next? I will keep contributing to more open source projects and learning more about cloud native technology. I am always fascinated by how these cloud native technologies work, at such a big scale, and how they can shape the future. And I want to help other people take their first step in open source, because I believe the first step is generally the hardest — it is more difficult to go from 0 to 1 than from 1 to 10 — so I would like to help all the people who are about to take their first step. If you want to reach out to me, you can reach me at my email or on my LinkedIn, and I have also written many articles on Medium about my open source journey, where I have shared my experience, so you can go through them. If you have any query regarding anything, you can reach out to me at my email or on LinkedIn. Thank you; if you have any questions, you can ask.

Hello everyone, and welcome to the LFX Mentorship Showcase 2022. At the very outset, I would like to wish everyone a very happy new year. I am Anushka Mittal, a computer science junior from India studying at Ramaiah Institute of Technology. I worked as an LFX summer mentee with CNCF Kubernetes, following which I interned with Nirmata for about four months, working on the development of Kyverno. Apart from loving technology, I absolutely enjoy dance and dramatics. Today I will be presenting the making of the Falco adapter, so let's dive in.

A quick look into the problem: the Kubernetes Policy Working Group has created and defined a PolicyReport CRD. Its purpose is to unify the outputs produced by various policy engines so they can be used as a Kubernetes resource. This helps cluster admins manage clusters using any Kubernetes management tool, like kubectl. The scope of the project was to build a tool that would run Falco in any Kubernetes cluster and then generate, and periodically update, a report from the Falco alerts received. There were many steps involved, many phases, and every day was a new journey answering different questions, but the three major steps were the following. One: gRPC output versus HTTP output. We had to decide how we would get the Falco alerts into our adapter. We researched and compared the options: output via a gRPC client versus HTTP output via Falcosidekick. We finally decided that the easiest and best way to go would be HTTP output via Falcosidekick, and we decided to integrate our adapter as an output for Falcosidekick.
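To make that approach concrete, here is a minimal sketch of an adapter endpoint that accepts Falco alerts forwarded by Falcosidekick over HTTP; the payload struct is a simplified subset of a Falco event, and the route and port are illustrative, not the actual adapter's configuration.

```go
// Minimal sketch: an HTTP endpoint receiving Falco alerts forwarded
// by Falcosidekick. Fields shown are a simplified subset of a Falco
// event payload, for illustration only.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// falcoAlert holds the parts of an alert an adapter would map into
// a PolicyReport result.
type falcoAlert struct {
	Rule     string `json:"rule"`
	Priority string `json:"priority"`
	Output   string `json:"output"`
}

func alertHandler(w http.ResponseWriter, r *http.Request) {
	var a falcoAlert
	if err := json.NewDecoder(r.Body).Decode(&a); err != nil {
		http.Error(w, "bad payload", http.StatusBadRequest)
		return
	}
	// A real adapter would translate the alert into a PolicyReport
	// result here and update the report resource in the cluster.
	log.Printf("rule=%q priority=%q: %s", a.Rule, a.Priority, a.Output)
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/", alertHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```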
The second very important step for us was mapping the information received via the Falco alerts, through Falcosidekick, to the fields defined in the PolicyReport CRD, and this had to be done in a meaningful manner. This was finally settled after a lot of discussion with, and approval from, the folks in the Policy Working Group. The last important step in my project was generating reports with optimized configuration options. Once we were able to generate a single, plain report, we wanted to make sure we addressed the customizations a user would want — the number of reports, namespace-specific versus cluster-specific reports — and we wanted to make sure we didn't run into problems when there's a huge cluster with multiple Falcosidekicks running. Over the entire three months of the project we researched, worked on, and implemented all these steps, and we had a final result. I have made a very short video of what it looked like in the development environment on my local system, just to give you an idea of what the policy report looks like.

Now that we know what my project was and what I built, let's look into what I learned over the three months. Honestly, I learned a lot of things — many new things, and some old things that I refined — but the three major things I want to highlight today are these. First, my Golang and GitHub learning. I learned a new language just a month before the project, applied it, and of course gained in-depth knowledge over the three months of the mentorship. I'm really happy that happened, and really happy that LFX provided this platform, because I use Golang to this day. I was able to understand the code base and write production-level code and tests in Golang. I used GitHub extensively, learned most of it during these three months, and made my first big project. The second thing I want to highlight is the knowledge I gained about policy engines, CRDs, and CNCF technologies. I studied using documentation and online resources, used Kubernetes and Falco on my local system, and learned a lot about them. I also learned about policy engines while working with Falco, and about CRDs and how powerful they make Kubernetes. And finally, I cannot emphasize enough how much personal growth I saw in these three months. I was able to deal with presentation anxiety, which was a big deal for me back then — well, I did well — and I worked with a team from all across the globe and built a project from scratch to what it is. I met deadlines, came across unexpected roadblocks, and addressed them with responsibility. This was truly an incredible experience, and I owe it all to the LFX team for this platform, to my mentor, and to the entire community.

Here I would like to take a minute and express my gratitude to my mentor, Jim Bugwadia. He is the co-founder and CEO of Nirmata and the co-chair of the Policy Working Group, and above all of this, an awesome person. We used to connect almost every day via Slack and weekly on Zoom to discuss the project. I could always bank on him to help me, guide me, and just be there; he was always willing and ready to help, and I'm so grateful for that. He was very supportive through all personal issues and did not miss any opportunity to provide very relevant career guidance. I'm very grateful for my mentor. A special thanks also goes to Thomas from Falco.
He is the creator of Falcosidekick, and he worked with me through each step. He made it very easy for me to contribute to Falcosidekick and to build my project into what it is; he used to sync with me, correct my mistakes, and review my PRs, and it was just a great experience. It wasn't really just a three-player game — it was the entire community I worked with. I remember having open conversations about questions and doubts on Slack channels for weeks, everyone pitching in what they thought about the issue, and multiple people in community meetings suggesting directions I should also look at and posing very relevant questions, which helped me refine my project and bring it to the point it is at now. I think this was one of the surprises that came out of this mentorship: the community experience. I am so grateful for the community experience that the LFX Mentorship and the Kubernetes community have created. Kubernetes is one of the communities I have been actively part of and wish to explore more; I worked with so many people, drew inspiration from them, and became a better person and open source developer because of them. The second surprise was self-awareness: by getting introduced to security, I realized my passion for it and how my interests lie in that domain, and I hope to take that forward from here. And finally, growth and opportunities. The LFX Mentorship opened a lot of doors for me, some of which were speaking on a panel at KubeCon North America '21 and interacting with people at that level — just having that presence in the community — as well as my internship at Nirmata, which came out of this mentorship itself, and many more. This brings me to the question: what's next? LFX opened an ocean of opportunities, and I wish to explore more and more open source communities over the coming years. I hope to increase my open source involvement, especially in the security domain. Involvement, for me, does not just mean coding contributions; it also means non-coding contributions — interacting with new people, being active on Slack, and contributing in any and every way I can. Being in my pre-final year, I am constantly looking out for internship and research opportunities to upskill myself, to learn, work, and contribute, and in the future I wish to work in places and organizations where I can make a key difference as an engineer on important projects. With that, I have come to an end, but here are some of my handles where you can reach out to me to discuss technical or non-technical stuff; that would be really great. If you want to know a little more about my work, I have prepared a really detailed, more technical blog on everything that happened during the three months, and you can definitely check it out; there is also a link to my PR, my work in Falcosidekick. Finally, thank you so much for joining, thank you for listening to me, thank you LFX team for this opportunity, and that's me signing off. Thank you.

Hello everyone, and welcome to the Linux Foundation Mentorship Showcase. I am Ash, and this session is going to be about how we evaluate dependency updates in the upstream Kubernetes project, which is what I worked on during my Linux Foundation mentorship. Let's get started. Before we begin, I want to give a brief overview of what I will be covering in this talk. I want to keep this session very beginner friendly, so without assuming any prior knowledge, we will start with a brief intro into what dependencies even are, how Go handles them, and why you should care about your project's dependencies in the first place. Then I will introduce you to depstat, the command line tool I created during my mentorship project, which we use in the upstream Kubernetes project to analyze dependencies; I will go over all the subcommands it offers, and then we will see how exactly we integrated it with the Kubernetes project. At the end, I will touch on how I got the opportunity to work on this project, and I will go over some other mentorship opportunities available if you are looking to contribute to Kubernetes.

First things first: what exactly are dependencies? Dependencies, put simply, are external packages which your code uses. These external packages are distributed as modules. As per the definition in Go, a module is nothing but a directory containing nested and related Go packages, with a go.mod file at its root. If you aren't familiar with what a go.mod file is, don't worry — I wasn't before the mentorship; I will cover it in the next few slides. For example, you can see here that we make HTTP requests in our code, and we are using this very common module, julienschmidt/httprouter, which then ends up being a dependency of our project. If you look closely, there are other packages we import too, like log and fmt, but these are internal to Go — part of the standard library — so we don't consider them dependencies of our project.
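As a concrete version of the example just described, here is a minimal sketch of a Go program that takes on julienschmidt/httprouter as an external dependency while otherwise using only the standard library.

```go
// Minimal sketch: a program whose only external dependency is
// julienschmidt/httprouter.
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/julienschmidt/httprouter" // external module -> a dependency
)

func hello(w http.ResponseWriter, r *http.Request, ps httprouter.Params) {
	fmt.Fprintf(w, "hello, %s!\n", ps.ByName("name"))
}

func main() {
	router := httprouter.New()
	router.GET("/hello/:name", hello)
	// log and net/http are standard library packages, so they are
	// not counted as dependencies of the project.
	log.Fatal(http.ListenAndServe(":8080", router))
}
```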
Once you put the code in its own module, you will see that a go.mod file appears in your project directory. A go.mod file simply describes the module's properties, including its dependencies on other modules and the version of Go it targets. When you add dependencies, Go also creates a go.sum file that contains the checksums of the modules you depend on; Go uses this to verify the integrity of the downloaded modules. Note that the go.sum file is auto-generated, and you should never have to edit it manually. To keep your managed dependency set tidy, you can use the `go mod tidy` command. Using the set of packages imported in your code, this command edits your go.mod file to add modules that are necessary but missing; it also removes unused modules that don't provide any relevant packages; and lastly, it updates your go.sum file accordingly. Sounds technical, but long story short: if you stop using a package from, say, the example.com module and then run `go mod tidy`, it will simply remove that module from the go.mod file.
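The resulting go.mod for a program like the httprouter sketch above might look like the following; the module path and versions are illustrative, not from a real project.

```
module example.com/hello

go 1.17

require github.com/julienschmidt/httprouter v1.3.0
```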
So why should you even care? You know what dependencies are and how Go handles them, but why did we need a tool and all that? The thing is, sooner or later you will have to update the dependencies of your project. This might be because you want the changes in the latest release of a dependency; but even if you're satisfied with the current features, you might have to update because of a security vulnerability found in the older releases that got fixed in a newer one. And updating dependencies brings with it a whole set of headaches: you have to make sure it doesn't break the current code and that it is compatible with the existing dependencies you are already using. So when it comes to dependencies, it is safe to say: the fewer, the better. This does not mean you should try implementing the functionality of each external package on your own — no. The reason fewer dependencies are better is that it means fewer releases to keep track of, and a much easier time updating them. All of this might seem trivial for a small project — and frankly, it is; you could get away with not caring about dependencies at all — but when a project grows to the size of Kubernetes, all of this becomes very important. Updating dependencies can often mean breaking things, and skipping a crucial dependency update could mean exposing a lot of users to a security risk. Long story short: the simpler the dependency chains, the better, and being particular about your project's dependencies right from the start, and tracking them, is extremely helpful in the long run. It was to solve all these problems that we created the command line tool called depstat during my mentorship.

Before I tell you what depstat does, let me tell you how important it is — something I learned — to first analyze the problem you're trying to solve before you start working on a project. I would say this is one of the major key learnings that came out of the mentorship: to stop looking at how we are fixing things, take a step back, and ask why. Why are we fixing this? What do we need out of it? We knew we needed something to analyze dependencies, but what should this thing do? The biggest problem we wanted to solve, we realized, was that the Kubernetes repository was receiving so many pull requests that it was getting tough to notice which of them were changing dependencies — and not only that, but how those dependencies were changing and what the impact of the changes was. We also wanted a setup where PR authors can themselves see the impact of their dependency changes without one of the maintainers having to ping them. Once we knew what we needed, we came up with the command line tool.

depstat is a command line tool for analyzing the dependencies of Go-modules-enabled projects. You can install it by running `go install` with the depstat module path — whatever the exact URL is — or by grabbing the latest binaries from the depstat repository. It provides four subcommands: stats, graph, cycles, and list. `stats` gives you statistics related to your dependencies, which I'll show on a later slide. The `graph` subcommand creates a graph of all your project dependencies — useful to visualize what your dependencies look like, track the paths, and help you figure out where a security vulnerability might lie. `cycles` shows how many cycles you have in your dependencies; cycles are something you should try to avoid. A cycle is when your project depends on A, dependency A depends on B, and B again depends on A. And `list` is a simple command which just lists your project dependencies.

Now that you know what depstat does, let me go over how we use it in the upstream project. depstat runs as a Prow job — Prow is a Kubernetes-based CI/CD system. For depstat we have two Prow jobs. One is a periodic job which runs every six hours on the master branch of k/k (k/k is short for the kubernetes/kubernetes repository). The other is a presubmit Prow job which runs automatically on pull requests that change the go.mod or go.sum files or any of the files in the vendor directory; it can also be triggered manually on PRs by commenting `/test check-dependency-stats`, thanks to Prow. What this job does is run depstat on the code present in the PR and then print its difference against the output of running depstat on the master branch of Kubernetes.
This way, we can figure out how dependencies would change if we merged a particular PR. depstat produces an output that looks like this every six hours — this is what the stats command does; it gives you these four crucial statistics. And if your PR to the Kubernetes project changes dependencies, Prow will catch that and run the job I just mentioned. You can see here that it tells you the number of dependencies being changed is one, which is reported thanks to depstat.

One of the things I wanted to touch on, which came out of this mentorship, is that I got a sense of how good it feels when you actually see a project you created being used in the community, being used by people. That is a very fulfilling feeling, and I feel it comes through open source; this is one of the reasons I believe everyone should contribute to open source. I got the opportunity to be mentored by Dims, whom most people involved in Kubernetes would know. Going in with almost no prior knowledge, I learned a lot while working on this project. My only advice would be: if you are applying to these mentorships, please don't self-reject thinking that you don't know enough, because all of us are learning all the time — and don't hesitate to ask questions. I've listed some other mentorship opportunities available here; personally, I've been part of almost all of these, and I can say they have helped me grow and learn a lot. You can also learn more about such opportunities by visiting the link shown. And lastly, if you have any questions or want to reach out to me, you'll find all my social handles there. Once again, thank you so much for attending, and I hope you learned something new from this session.

Hi everyone, I am Devupratha, and here's my talk on AWSChaos: empowering resiliency through chaos engineering. A bit about me: I major in biomedical engineering and am currently in my final year. Over the years I have developed an interest in open source technologies and been passionate about cloud and web technology, and that's the reason I applied to this mentorship program. An overview: the organization I mentored with was Chaos Mesh. Chaos Mesh is a cloud native chaos engineering platform with powerful chaos toolkits and a friendly interface to use and program. Before starting, I would like you to know what chaos engineering is. Chaos engineering is for microservices-based architectures: you inject faults into your systems and gain observability from that — how the fault affects your system, and how resilient your system is to those faults. What's especially important about chaos testing is that it happens in production, so you have to define a specific boundary: which specific nodes, machines, or instances should be affected by the test. This is quite helpful for uncovering the vulnerabilities in your system. My mentorship topic revolved around enriching the AWS chaos already present in Chaos Mesh. Chaos Mesh supports various types of chaos — network chaos, input/output chaos, stress chaos — and the first word (AWS, network, input/output) defines what the chaos is applied to. For example, with AWSChaos we inject faults into the AWS ecosystem, such as EC2 instances. The earlier AWS chaos was limited to only EC2 start and stop, and even that was not structured and stable, so the major part of the project was to make the already implemented AWS chaos more
structured and stable, and also to implement complete service failure in AWS, which could be useful for testing infrastructure automation tools.

Before starting this mentorship program, I didn't have much idea about Kubernetes or orchestration tools, though I knew about container technology. So I had to start with the basics: what pods, ReplicaSets, and DaemonSets are — DaemonSets in particular, since ConfigMaps and DaemonSets are heavily used by Chaos Mesh to implement any chaos. I learned about those and then tried to deploy some workloads on my own machine — you can go through this link; it is basically the "hello world" of Kubernetes. Then I jumped into Chaos Mesh as an end user: I started using it, and then followed the development guide to start building my own chaos. I was pretty successful in that, and then I started taking up some good first issues — updating the Makefile was one of them — and I tried to implement chaos target support for more than one container, to get habituated to the project's large code base and to set up my local system for writing optimized code.

So what did I learn? The whole project required me to write my own custom controllers, which generate ConfigMaps to be injected into a pod; those act as the definition for the chaos to be implemented in AWSChaos. This whole thing is built upon a concept called the Kubernetes operator pattern, which I learned about. As I said, I wrote my own custom controllers — these were the chaos controllers — and we needed to adjust the routers accordingly so that everything connects with the Kubernetes controllers properly. We also needed to manage what gets shown on the dashboard. The chaos controller manager was already present, so we didn't need to patch it, but we did need to make some changes to the chaos daemon's executor component. As you can see in the diagram, when an end user interacts with Chaos Mesh, there are a few entry points: you can interact with the dashboard, or apply resources with a client library or kubectl apply. The Kubernetes API server interacts with the chaos controller manager, which is the important part of every chaos we write: it needs to be properly defined, and the timings and the blast radius we define in chaos testing are handled by it. It then sends signals to the chaos daemon, or to the kubelets, which have their own process IDs; from there it goes to the container runtime, and the chaos is injected by the chaos daemon into the container directly, or as a sidecar through the kubelet API. So, as you see, my work was to create the custom controllers and to generate the CRDs which would define my chaos experiment.
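For a sense of what "generating the CRDs" looks like in Go, here is a minimal sketch of chaos CRD types in the kubebuilder style that Chaos Mesh's controllers follow; the field names are illustrative stand-ins, not Chaos Mesh's actual AWSChaos schema.

```go
// Minimal sketch of chaos CRD types in kubebuilder style. Field
// names are illustrative, not the real AWSChaos schema.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// AWSChaosSpec defines which AWS fault to inject and where.
type AWSChaosSpec struct {
	// Action is the fault to perform, e.g. "ec2-stop".
	Action string `json:"action"`
	// AWSRegion scopes the experiment's blast radius.
	AWSRegion string `json:"awsRegion"`
	// EC2Instance is the target instance ID.
	EC2Instance string `json:"ec2Instance"`
	// Duration bounds how long the fault lasts, e.g. "5m".
	Duration string `json:"duration,omitempty"`
}

// AWSChaos is the custom resource the chaos controller reconciles.
type AWSChaos struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              AWSChaosSpec `json:"spec"`
}
```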
Let's go through how we started. The initial idea required implementing only one type of AWS chaos as part of the project — EC2 start and stop were the easy ones, and we could have built EC2 network fault injection the same way. But when I researched the features already available for chaos engineering in the AWS ecosystem, I found something called the AWS SSM Chaos Runner, and a lot of functionality was already available with it. So instead of creating our own ConfigMaps for each chaos type separately, we thought: why not integrate the whole thing with Chaos Mesh, so that you can define your chaos experiment through the chaos controllers and observe the fault injection through the Chaos Mesh dashboard? We planned it in two parts. The first part was a runner that integrates with the AWS SSM Chaos Runner; that part — the CLI — needed to be written in Kotlin, with a Docker image built out of it. The other part, as I said, was writing the chaos controller that defines AWSChaos, written in Go; the AWSChaos controller creates a pod with that Kotlin CLI image and sends commands to AWS. The CLI requires you to write the definition of the chaos as a JSON file, and we could create that with a ConfigMap and then mount it into the pod to provide the JSON file. The real issue arose afterwards: once we had created the CLI, referenced it in the controller, and mounted the ConfigMap into the pod, we tried to test it in LocalStack — LocalStack is basically a tool to simulate AWS environments — and there were inconsistent datetime-format errors, which matter a lot in chaos engineering. We needed to put a PR into the LocalStack repo and apply a patch ourselves to mitigate that issue; after that, everything worked properly and AWSChaos could be implemented.

The biggest surprise that came to me in this whole project was contributing to a large project with a lot of people already involved, a lot of directories, and a lot of changes happening every day. When we tried to implement multi-container support for StressChaos, we made a lot of changes, and it took me around 15 to 20 days to work through everything; but unfortunately we couldn't merge the PR, and the only reason was that it broke existing code. The reason this happened was that we didn't discuss it with the community early on and didn't raise an RFC. There is a pipeline for how to implement a feature in Chaos Mesh — an RFC process where you first propose your idea, and only then start contributing. As I was experimenting with stubs and reporting to my mentor, he said I could go ahead with it — so yes, this was a learning for me: whenever you try to add a feature to a large code base, you need to discuss it with the community through the pipeline that is already in place. It prompted me to go through all the documentation available for Chaos Mesh, and it was a learning lesson for me.

About my mentoring experience: Zao was a great mentor. We had a few problems communicating because of the language barrier, but being a very skilled developer, he understood the problems I brought to him. More than just implementing chaos features, which is technically advanced work, he encouraged me to experiment with stubs and to learn the basics of how these things work, and he was very patient when clearing my doubts. The one channel of communication we settled on was Slack; I used to send him any problems or other hiccups I had, and he used to resolve them right there on Slack.
He also made sure my development environment was highly optimized, because the code we write lands in large code bases: it mustn't be buggy, and it should be understandable by other people using it. He even helped me set up my IDE — earlier I was using VS Code, and I didn't even know of the existence of GoLand and its capabilities as an IDE — so he helped me with that, and it was an amazing experience.

So yes, I finally graduated after three months — and what now? I have since given multiple talks about the work I did during my mentorship. After my mentorship I was really inspired by Kubernetes — I hadn't even known its vastness — so I started contributing to upstream Kubernetes. I now work actively with SIG ContribEx, I'm a shadow on the release team, and I'm also a maintainer of the contributor Katacoda scenarios, which host tutorials to help developers who are new to the upstream community. I enjoy contributing to open source communities a lot; the learning is very fast — I could say my upskilling has been exponential since I got involved with the community — and seeing how a project goes from scratch to the end, how a feature is implemented, has been a great learning experience for me on this large project. Thank you; that's all from me as a mentee. If you have any questions, you can reach out to me on Slack or Twitter; I have uploaded my presentation, and you can go through it — the necessary links are available. Thank you.

Let me begin by first of all welcoming everyone who has joined today — thank you so much, everyone, for joining the LFX Mentorship Showcase. As we move ahead, let me first tell you what we are going to talk about. Today we are going to dive into the kube-bench Policy Report Adapter. And what is this?
This is the summary of my spring mentorship experience with LFX. Let's move on to the introduction. Talking about myself: as it is written here, sometimes I'm a poet, sometimes I code. So this was also a little poetic attempt at introducing myself — all these titles we carry come and go: Spring LFX Mentee 2021, then upcoming intern, then what I am now. Underneath that dilemma, that internal strife between titles, the ultimate title we have is "I"; titles come and go, and what remains after that is what I wish to talk about here — for as long as "I" remains, shall I remain. That was the little poetic side; now we are moving back to the technical side.

Before we do, there is something I haven't mentioned in the slides, but since I was just reminded of it, I would like to tell the story of how I got introduced to this program. Shuah is here with us today. It was at the Open Source Summit in Lyon, back in 2019 — those pre-COVID times, what wonderful times they were — that I was introduced to this program, and I still remember the talk Shuah gave to introduce us to it, back when it was known as the CommunityBridge mentorship program. At that time I was interested in the Linux kernel as well, and I applied three or four times; this was, I guess, my fourth attempt, when I finally got selected. Almost three times I was redirected — I would not like to call it rejection, because almost all rejections are redirections.

Now, moving on to what we were trying to solve in this mentorship. First of all, thank you so much again, and thank you to everyone who's listening; I also hope that you and your families are safe in this pandemic. What did we solve? As my fellow mentee Anushka just mentioned, she was also part of the Policy Working Group's effort. The Policy Working Group has defined a PolicyReport custom resource definition, and its purpose is to unify the outputs from multiple policy engines. There can be many policy engines: she worked on Falco, I worked on kube-bench, and there are other examples, like KubeArmor, which another mentee worked on. My task was to build a policy report adapter for kube-bench. What does kube-bench do? kube-bench runs CIS benchmark checks, and what we wanted was to produce a policy report from them, running as a CronJob — a CronJob basically runs the work on a schedule.

How did we start implementing? Every step was incremental. The first major challenge that came to us was creating the client code for the CRD — once created, it could be reused by all the other adapters as well. It was one of the biggest challenges I faced early on, because I had little experience with Golang; I was very new to Golang, very new to the libraries that help generate client code, and even new to the concept of a CRD. But as the mentorship rolled on, it was an amazing experience, and the first part got done. The next step was another interesting one: mapping. The PolicyReport custom resource definition has certain fields, and kube-bench emits its own output format, so we had to map the two structures onto each other. That was done in the second step.
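As an illustration of this mapping step, here is a minimal sketch that converts a kube-bench-style check into a PolicyReport-style result; both structs are simplified stand-ins, not the Policy Working Group's generated CRD types.

```go
// Minimal sketch of the mapping step: turning a kube-bench check
// into a PolicyReport-style result. Structs are simplified stand-ins
// for the real generated CRD types.
package main

import "fmt"

// BenchCheck mimics one CIS benchmark check from kube-bench output.
type BenchCheck struct {
	ID          string
	Description string
	State       string // "PASS", "FAIL", or "WARN"
}

// ReportResult mimics a single result entry in a PolicyReport.
type ReportResult struct {
	Policy  string
	Message string
	Result  string // "pass", "fail", or "warn"
}

func toReportResult(c BenchCheck) ReportResult {
	// Map kube-bench states onto PolicyReport result values; unknown
	// states fall through to the empty string in this sketch.
	result := map[string]string{
		"PASS": "pass", "FAIL": "fail", "WARN": "warn",
	}[c.State]
	return ReportResult{
		Policy:  "cis-benchmark-" + c.ID,
		Message: c.Description,
		Result:  result,
	}
}

func main() {
	check := BenchCheck{ID: "1.1.1", Description: "Ensure API server pod file permissions", State: "PASS"}
	fmt.Printf("%+v\n", toReportResult(check))
}
```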
The third part was wiring it all up — lining up all the pieces and making things work. And finally it worked. But how did it work? I would like to show a demo, so I'll stop sharing the screen for a moment and share it again; the video is just a 30-second clip showcasing what happens. Just give me a moment... yeah, I'm sharing the screen again, sorry for this. I hope this is visible. Let's move back a few seconds — OK. The first step was to create, or define, the CRD. The next step was to create the RBAC rules; this is a fully built project, it has already been built. As you can see, the first step was the policy report creation; after that we created the RBAC rules, which help the CronJob run the pods behind the scenes. The CronJob was running there, it ran the job, and here we can see the cluster policy reports — the cluster policy report for kube-bench (69 pass, 11 fail). That is how the project runs now, and its documentation can be found in the Policy Working Group repository. Moving back to the slides — let me share the screen back so we can return to the slides.

After the demo, let's move to the next part: what was my experience, and what shaped this project? This project would not be in its present form without the tremendous support of my community, as well as my mentor, Jim. If I have to talk about Jim, I can tell you that he supported me even today. I would like to share that my family was reported COVID positive yesterday, and I was a little nervous in the morning — thankfully I am in the hostel and my family is in another town; almost all of them are COVID positive, but they are being treated well. I spoke to my mentor Jim again this morning, and he helped reinstate my confidence, boosted me up, and checked whether I would be able to put the slides together at all — and that speaks a lot about him. Coming back to the experience of this mentorship: daily async meetings — async, of course. I used to update him daily on Slack, ping him daily on Slack, annoy him daily on Slack, and he used to reply daily to almost all the blockers and doubts I had. He is a mentor who makes you learn intuitively; it's not that he will direct you or hand you a roadmap — he will make you learn on your own, and that is one of the best qualities a mentor can have. We had bi-weekly community meetings where I shared progress. After the project, I continued my involvement with the community and with fellow mentees like Anushka and Stephen. Along with that, I was a mentee myself in spring, and thanks to Jim and this community, I became a mentor alongside Jim: I had the opportunity to mentor Hardik, who built the KubeArmor policy report adapter, and he also graduated recently. It was really an amazing experience, going from mentee to mentor in just a few months — that is what this community does for us.

And for the folks who want to know what I learned: I learned a lot of things,
to be honest. A broad worry for people starting out with a community is usually that it is a very huge project to begin with, so I have shared in this slide a thread by Dims which actually helped me start — this was the thread that helped me begin. Other than that, we learned about CRDs and the various policy reports, about client-go and code generation, and about Golang itself — I learned Golang and even gave a talk introducing Golang at KubeCon North America, which I will talk about later. Beyond the technical things, this mentorship helped me become more empathetic, more kind, more humble, and more grounded.

What surprises did it bring me? The first surprise came through a Twitter conversation: Priyanka Sharma tweeted that she wanted to hear a great story, I replied to her, and what happened after that was one of the watershed moments of my life — I became a guest speaker with her. Other than that, I presented two talks at KubeCon North America: one was a panel discussion on introducing open source to students, and the other was, again, about Golang for students. And beyond that, internship opportunities — at HackerRank, at Nirmata, and offers from other cloud native companies — came because of this mentorship, plus some amazing connections that I built for life, especially the friends. That was my learning and my experience. If you wish to connect with me, you can find me on Twitter. I would like to thank everyone for joining; thank you so much, and I look forward to connecting with and learning from all of you. Have a great day.

Hello everyone, I'm Sachin, and I'm here to present my talk. It will be a pretty chill talk — I will go into a bit of technical detail, but it will mostly be about what my mentorship was like and how one can learn from my experience and the experience of others. The title sounds like a mouthful, but I promise it's not as complicated as it sounds. I'll start with who I am. You can see a good photo of me — you can also see me at the back; we were at the HackerRank offices during our internship, which was cool. I just completed my internship at HackerRank, and I was an LFX mentee in Spring 2021, last year. As for my experience with communities and this collaborative ecosystem: I started with communities — as you can see, Dims' thread and my interaction with him helped a lot to get me into this — and on my own I learned, at a high-level overview, how the different components work before diving in. Then the community helps you along: if you want to find something, you just have to ask in the Slack channel or reach out to someone, and they will gladly help you out. That is how I got into it. Somewhere along the way I found out what SPIFFE and SPIRE are, and what the LFX mentorship is. So in this talk I'm going to cover what SPIFFE and SPIRE are — the two things I worked on during my LFX mentorship — and what I learned about learning from mistakes. That idea is well known, but there is a difference between realization and understanding: when you realize it, that's when you truly get it; understanding is just knowing the stuff. I'll talk a bit about that and about what you can take from my experience. And if you want to know more about the projects, you can just go to spiffe.io — the documentation there is great; I really appreciate the documentation and its formatting.
The documentation there is great; I really appreciate the documentation and its formatting.

So what was the problem we had initially? In distributed systems we have a large number of services: services come online, they go offline, they talk to each other, and so on. So how do we know the services are authentic? How do we authenticate all these services? There is no small, fixed set of services, so we cannot just track every service and manually assign each one an identity and a certificate so they can talk to each other. Authentication is the main problem that SPIFFE and SPIRE deal with. How they deal with it I will get into in a bit, but this is the main problem they set out to solve, and they solve it quite neatly, as we'll see.

SPIFFE is the Secure Production Identity Framework For Everyone. It's a long name, but essentially SPIFFE is just a set of specifications rather than a concrete implementation. When you go to github.com/spiffe, you will see the set of specifications that a system needs to satisfy in order to be SPIFFE compliant. SPIRE is one implementation of SPIFFE, but it's not the only one; if you go to the documentation you'll find there are more. Essentially, what SPIFFE provides is an authentication method for every workload, every service, in the distributed system.

SPIRE is the production implementation of SPIFFE, created by the SPIFFE maintainers and developers. It is platform agnostic and, I should mention, heterogeneous: you don't have to worry if some services are deployed on AWS and some services are deployed on Google Cloud; it gives everyone a way to authenticate each other. The way it does this: SPIRE has two parts, an agent and a server. The agents sit next to the services, like a sidecar, and interact with the server, requesting certificates for their workloads. Attestation happens at two levels. First, the agent verifies the workload itself, through something called workload attestation. Then there is something called node attestation, in which it authenticates the environment itself: if it's on AWS, it will collect the relevant documents from AWS and send them to the server. The server does its own verification, creates a SPIFFE Verifiable Identity Document (an SVID), and sends it back to the SPIRE agent. The agent holds these certificates itself and provides them to the workloads whenever they ask, and using these certificates the workloads can interact with other workloads and authenticate themselves.

So, my project was to implement the new health check system, and for that we have to know what the old system was like. Earlier, SPIRE used go-health, a library to make health calls and collect the responses, but it was pretty simple for a project this complex. For liveness, it would essentially make an HTTP call and treat a 200 response as meaning the system is live. For the readiness check, it verified that bundles could be fetched, but that was rudimentary too, because SPIRE has a lot of internal moving parts, and these simple tests would not have sufficed in the long term.
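To make the agent and server flow described above a bit more concrete, here is a minimal sketch of how a workload might fetch its identity from the local SPIRE agent over the Workload API, using the go-spiffe v2 library; the socket path is an illustrative assumption, since real deployments configure their own.

```go
// Minimal sketch: a workload asks the local SPIRE agent for its
// X.509-SVID over the Workload API. The agent attests the caller
// and returns a short-lived certificate.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/spiffe/go-spiffe/v2/workloadapi"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// The socket path here is an assumption for illustration.
	svid, err := workloadapi.FetchX509SVID(ctx,
		workloadapi.WithAddr("unix:///tmp/spire-agent/public/api.sock"))
	if err != nil {
		log.Fatalf("unable to fetch X.509-SVID: %v", err)
	}

	// The SPIFFE ID identifies this workload, e.g. spiffe://example.org/my-service.
	fmt.Println("SPIFFE ID:", svid.ID)
}
```

The returned SVID also carries the certificate chain and private key, which a workload can plug into its TLS configuration to mutually authenticate with other workloads, as described in the talk.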
For the new system, we decided on a global health check. Sorry, just a second, I have to switch back to the screen to see if there's anything left. All right, sorry about that.

Okay, so for the new system we have to check all these components. SPIRE has a certificate authority, a datastore, a manager, and so on, and these all work differently; we cannot just ping them and conclude that this one is working and that one is not. So what we decided is to give each component a small task, gather the results of those tasks, divide them into the liveness and readiness of the system, and take all these results together to check whether the system works globally, as a whole (a rough code sketch of this idea appears below). For example, we have the CA manager, the catalog manager, and other components; these are just some of the ones I worked on, and there are more. As a small example, say we have to check the health of the CA: we give the CA a small task, like minting an X.509-SVID, and if it mints one, we know it is both live and ready. Similarly, for the manager and the catalog, we can test their ability to store these SVIDs and check whether they can do their tasks; if they can, they are live and ready too, and likewise for the other components. Then you collect all these results, and if they all pass, you can say: okay, the system is globally healthy and ready.

The mentorship was a great experience for me. I had two amazing mentors, Andrew and Evan, who are both maintainers of SPIRE, and they are awesome folks. Whenever I had a problem, they would get on a Zoom call with me and we would talk it through, and we kept a doc to track progress and all the ideas I had. There was a time conflict, obviously; there's about a 12-hour difference between us, but we had a perfect system. We synchronized our timings so that whenever I made some design decision, I would comment on it in the doc, and they would comment back: okay, this is fine, here's what we can change; or, if you have a problem, set up a meeting. They are just awesome folks, and I would like to thank them.

As for what I learned: initially I didn't know about SPIFFE or SPIRE at all, so I was a bit tense when I started, because it's a security project, an authentication project, and I really didn't have much knowledge about that going in, not the kind of knowledge required for such a huge system. But I learned that we can make mistakes and learn from them. That's easier said than done, because we make silly mistakes and worry about what people will think, but if you have really good mentors, like I had, you will genuinely enjoy the process of learning from mistakes. I made a lot of mistakes in my initial PRs, small mistakes that shouldn't have been made, but my mentors really helped me: we went through those PRs in code reviews and meetings, they filled me in wherever I was lacking in some concept, and slowly, over time, the number of errors came down, until finally I was able to create good PRs without any conflicts. Regular feedback really helps, and not just from mentors but from the entire community you are part of; you just have to interact, and you will surely get an answer.
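Returning to the health check design for a moment: below is the rough sketch promised above, an illustrative Go program (not SPIRE's actual internals; the component names and self-tests are stand-ins) showing how per-component task results can be aggregated into one global liveness and readiness status.

```go
// Illustrative sketch only, not SPIRE's real internals: each component
// runs a small self-test and reports liveness and readiness, and the
// results are aggregated into a single global health status.
package main

import "fmt"

// State is what one component reports after running its check task.
type State struct {
	Live  bool // the component is running at all
	Ready bool // the component can do real work
}

// Component pairs a name with its self-test.
type Component struct {
	Name  string
	Check func() State
}

// globalHealth aggregates per-component results: the system counts as
// live/ready only if every component is live/ready.
func globalHealth(components []Component) (live, ready bool) {
	live, ready = true, true
	for _, c := range components {
		s := c.Check()
		fmt.Printf("%-10s live=%t ready=%t\n", c.Name, s.Live, s.Ready)
		live = live && s.Live
		ready = ready && s.Ready
	}
	return live, ready
}

func main() {
	components := []Component{
		// Hypothetical self-tests standing in for the real ones, such as
		// "mint a test X.509-SVID" for the CA or "store and read back a
		// record" for the datastore.
		{Name: "ca", Check: func() State { return State{Live: true, Ready: true} }},
		{Name: "datastore", Check: func() State { return State{Live: true, Ready: false} }},
	}
	live, ready := globalHealth(components)
	fmt.Printf("global: live=%t ready=%t\n", live, ready)
}
```

In practice, aggregated results like these are what /live and /ready HTTP endpoints would serve, which is roughly how SPIRE exposes its health checks to orchestrators.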
I can give a quick example of what I mean by mistakes. There was an issue in Golang that I found while working on a PR. I thought it was a really stupid mistake that I should have caught before submitting the PR, and it turned out to be a problem with Go rather than with the PR. So you make mistakes, you try to find them, and along the way you might find something valuable, like I did. This is the issue I found: I went to the Golang Slack channel, and some incredible folks there helped me figure out that it was a stack overflow error in gopls, not an issue with my code. So you should never shy away from asking for help; ask, and you will find. That's it. Communication is key: just interact with the community, talk to them, and it will be fine.

And the takeaway is not that you should just walk away from your mistakes, but rather that you should figure them out before things start to go wrong. In my earlier example, I could have nearly broken the system, but talking to the community, interacting with people, and reaching out for help led to finding the cause before the PR was merged, and that was the takeaway for me. So do figure things out, but never shy away from asking for help. And the most important thing, which everyone knows: have a great time. It's a really great experience; you will meet a lot of people and make a lot of friends and new acquaintances whom you might meet in person someday. So yes, have a great time, don't shy away from asking for anything, be a part of the community, and interact with people. If you have any questions, I will be in the chat and you can ask me. Thanks.

Thanks, everybody, for speaking. Graduates, congratulations: you have done a great job, and I loved listening to all of your experiences. This is why we do what we do; providing all these resources is fulfilling for us when we see you use them and see that they make a difference to you. Thank you, Matt; thank you, Shannon. On a final note, I'm going to thank our sponsors, Red Hat, GitHub, Intel, and IBM, who have been supporting this program since day one, three years ago. So thanks, everybody, and good night on this side of the globe and good day to you on that side of the globe. Thank you.