Welcome, everybody. We are starting the second segment of our Mentorship Showcase for this year. My name is Shuah Khan. I'm a kernel maintainer and Linux Fellow at the Linux Foundation; I lead mentorships and also do kernel maintenance. Today we have several graduates coming in after me to share their experiences with the mentorship program and what they have learned. Let's start by talking a little bit about the beginner's problem. Where do we start? That is something we all struggle with when we are embarking on a new journey: a new career path we might be looking at, or which open source project we want to start contributing to. The first thing we have to figure out is what we are passionate about and what we enjoy doing. And there are so many open source projects out there, in so many different technologies, that it is difficult to figure out which one to start contributing to and which one to learn from. Then, once we finally figure that out, we have the how-do-we-get-started problem: the code base always looks very complex, the community looks intimidating and daunting, and what's the best place to start? Once we piece those first two parts together, we look at where to find resources, and after that comes who can help us with our questions and who to reach out to. The what, the how, the where, and the who: those are all the things we struggle with. At the Linux Foundation, we understand that these are complex problems for new developers, so we provide resources and learning paths. You can explore learning paths at the Linux Foundation training site; here's the link where you can go and explore the different paths.
We also have live webinars, recorded and archived on our webinar site. You can take a look at those and learn about various topics, anywhere from software engineering to community and open source, all the aspects that are important to learn in the open source ecosystem. Then, once you are comfortable and want to learn from experts by getting involved in an LFX Mentorship program, you can apply for one and start working, like our graduates have done in the last year. And here we are today: the graduates are sharing their experiences at the showcase, and this is our attempt to connect our graduates with people looking for talent. As I mentioned earlier, we understand that access to resources is a barrier for a lot of people, and we design our programs with that in mind. We empower people with part-time mentorships, webinars, and training resources, enabling women and people juggling work and life commitments to participate. We are trying to remove some of the barriers people experience when they embark on these journeys. This is a packed slide with a lot of information in it, so please check it out. We also surveyed our graduates from 2019 through 2021 to understand how we can improve our learning resources as well as our mentorship programs. That report is just out; it was released yesterday. Take a look; here's a link for you to download and read it. It will also give you more insight into where our heads are at and our approach to solving some of these problems of equity and access to information. And with that, I will hand it off to Siddh to get started with his presentation.

Hello everyone. Just a minute, let me stop sharing here. Okay, go ahead. Hello everyone.
Today, I'm going to talk about how one goes from being an absolute noob with respect to the Linux kernel to sending multiple patches in multiple subsystems. This will be my journey from being absolutely clueless to being confident with the Linux kernel, and I welcome you all to my talk. A brief introduction about me: I am 21, an electronics engineering student in my final year. I like interesting things, especially low-level stuff, and the Linux kernel is one of them, hence I'm here. Back to the main crux: how does one go from being an absolute noob to sending multiple patches? There are many ways. One can watch tutorials, and of course there is plenty of stuff out there, or just read a book and tinker around. But one way is LKMP. LKMP stands for the Linux Kernel Mentorship Program, and quoting from the program page, experienced Linux kernel developers and maintainers mentor volunteer mentees. This is a very good opportunity, because you cannot ask for a better way: you have an experienced person mentoring you. One might wonder what people actually do in the program, so I can say what I did. I participated in Linux Kernel Bug Fixing Summer 2022. In that program, we had to fix bugs in the Linux kernel, which are usually reported by tools like syzkaller and so on. To do that I had to improve my debugging skills, and I came to understand a great deal about the kernel core during the mentorship. Over the duration of the program itself, I sent bug fixes in various subsystems like Wi-Fi, the loop device, x86, and so on. Some of my patches went into the stable trees as well, and I recently sent a patch set of ten, a sizable patch set of mine, to the DRM tree. I was literally clueless at the start. Over time, I became confident in my ability to tinker with the kernel. Since this was a bug fixing project, I sharpened my debugging skills and saw how the kernel works and how it connects from one subsystem to another.
If you search for me on the mailing list, you can find that I keep on contributing; I respond to feedback and get involved in multiple patches. I'm inside your machine: if you run the latest kernel, my contributions are there, running on your machine. That is an awesome thing to think about. In this diagram, on one side there is an absolute noob, and on the other side there is someone who is no longer a noob and knows his way around. So what's behind this arrow? I bucket it into two things: brain stalls and surprises. "Brain stalls" is a play on kernel stalls. The first part is understanding the challenges. One needs to know what they are doing and why they are doing it, because otherwise there is no point. So I had to learn how the kernel works and why bug fixing is important. Let me tell you briefly about that. The kernel is huge. When I was starting out, I was surprised and somewhat overwhelmed seeing the huge code base; the first download took a sizable amount of time to complete. There are millions of lines of code, thousands of contributors, and millions of commits. Whatever these code statistics say, suffice it to say the kernel code base is huge. The kernel itself is very modular, because we can use various config options and compile accordingly. But bugs are bound to seep in, because this is a huge code base and developers are humans after all. One oversight here, one oversight there, coupled with millions of lines of code, and soon enough we have a sizable number of bugs. So how do we find them? There are broadly two ways: static analysis and dynamic analysis. In static analysis, tools like GCC, Clang, and Coccinelle scan the entire code base for bugs that affect all executions of the kernel. For instance, in this screenshot, line 870 is apparently a NULL check, but the check is done in an incorrect way.
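As an aside, the shape of that bug class is easy to show. Below is a minimal C sketch, with hypothetical names rather than the actual kernel code, of a NULL check that comes after the pointer has already been dereferenced, which is exactly the kind of ordering mistake static analyzers can flag:

```c
#include <stddef.h>

struct device_priv { int id; };

/* Buggy shape: the pointer is dereferenced before it is checked,
 * so the NULL check below can never protect anything; if priv were
 * NULL we would already have crashed on the first line. */
static int probe_buggy(struct device_priv *priv)
{
    int id = priv->id;    /* dereference happens here ...   */
    if (priv == NULL)     /* ... so this check is dead code */
        return -1;
    return id;
}

/* Fixed shape: validate the pointer before touching it. */
static int probe_fixed(struct device_priv *priv)
{
    if (priv == NULL)
        return -1;
    return priv->id;
}
```

An analyzer can report the check as dead code, because in every execution that reaches it, `priv` is already known to be non-NULL; the fix is simply to hoist the check above the first dereference.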
This one was actually found during compilation with a particular config setting with GCC. Fortunately, the bug was not very critical, but it had been in hiding for 13 years. That shows how important static analysis is. This was a true positive, but static analysis might not always be correct, and it won't catch all bugs, because some are dynamic in nature. For those, we go for dynamic analysis, where bugs are detected at runtime. There are many tools, like KASAN, the kernel selftests, and syzkaller, which is a fuzzing tool. I mainly worked on syzkaller bugs. There is a dashboard, you can go see it, and we were at liberty to choose whichever bug we wanted to work on. The other challenge is patching the bugs correctly. Okay, I understood what I had to do, but the doing part is also hard. Once I started debugging the bugs, it was very apparent that a bug is not always localized: the bug may be in one place, but its detection might happen somewhere else. So one has to do careful debugging. And there will be mess-ups and setbacks. Sometimes I really messed up hard and took a break, thinking I could not do it. Sometimes I told my friends that I had taken on something I couldn't do. But in the end, I could do it. Then there are the pleasant surprises, and mentors are one of them. This would have been impossible without my mentors, Shuah Khan and Pavel Skripkin. They are very experienced kernel developers, and they volunteered to mentor us, taking out their precious time. The interactions were extremely helpful. We had bi-weekly meetings on Zoom, and we could also interact outside of Zoom. They gave insightful answers to all our doubts, be it technical questions or navigating the community. They were kind enough to review our patches when we were starting out, and they provided overall guidance, sharing their knowledge in the Zoom sessions and otherwise. The second surprise was the community.
The kernel community is very direct in reviews. If you mess up, they say you messed up, but they also say how to improve and what's wrong with the patch. This is very helpful when someone is starting out, as I was when I was a noob. And lastly, I was surprised because I really didn't think I could do it. I talked about mess-ups and setbacks, but I could finally do it, and that was a very pleasant surprise. So brain stalls and surprises: that is what's behind the arrow, and it was a fun learning experience in which I got exposed to a lot of new things. For anyone who is starting out, going from absolute noob to sending multiple patches is very doable. One just has to start doing it, and if possible, go for LKMP. These are the sources I used throughout the presentation. In summary, I went from being clueless to being confident with the Linux kernel. I started absolutely clueless and insecure, got exposed to new challenges and surprises as I talked about earlier, and LKMP was very useful on this path. I became confident in my ability to tinker, started making patches, and then sent various patches during and after the Linux kernel mentorship program, which got merged: in DRM, Wi-Fi, x86, and there will be more to come, I assure you. For this amazing experience, I would again like to thank my mentors Shuah and Pavel, the Linux Foundation, and the kernel community as a whole. And that, ladies and gentlemen, was what I wanted to share with you. Have a good day.

Okay, hello everyone. I'll be presenting my mentorship experience as well; in this case, it was in the context of the Hyperledger mentorship program. Just a few words about me: my name is Andre, and I'm a PhD student in Lisbon, Portugal. I have been researching blockchain interoperability for about a year and a half now, and it was in this context that I applied to this mentorship in Hyperledger. The project was called Fabric-Ethereum token bridging.
My mentors were László and Imre from the Budapest University of Technology and Economics. We basically had a prototype of a CBDC, a central bank digital currency, and the problem we were trying to solve was interoperating two different blockchains. In this case, we have a blockchain that has control over the CBDC, managed by, let's say, the central bank and financial institutions, and powered by a Hyperledger Fabric network. Then, on the right, we have an EVM-based ledger supported by, for example, Hyperledger Besu. On this one we have retail businesses that expose their services to clients, but those clients need to pay, of course, for the services being offered, so we need access to that core CBDC ledger on the left. This was basically our goal: to interoperate these different blockchains, and the cross-chain bridge in the middle that enables interoperability was exactly what we needed to work on. As for the main takeaways I take from this mentorship: first of all, open source communities are really interesting, and we can both provide help and receive help from a lot of people. Whenever we need anything, we can just ask, and there is someone willing to help us. We had some interesting meetings with people from other projects who gave us interesting insights about our own. Second, the power of planning. Planning is very important, and I believe I underrated it for a lot of my life, but this project proved that if we plan, we can do it. We started on the first day, and everything we proposed to do, we accomplished. And third, new technologies: I learned a lot of new technologies during this mentorship.
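To make the bridge's bookkeeping concrete, here is a tiny C sketch of the lock-and-mint idea behind such a token bridge. All names are hypothetical and this is not the project's actual code: tokens locked in escrow on the source (Fabric) ledger back the wrapped tokens minted on the target (EVM) ledger, so total supply is conserved.

```c
/* Hypothetical lock-and-mint bridge state: one user's balances on
 * the two ledgers plus the bridge escrow that links them. */
struct bridge {
    long fabric_balance;  /* CBDC balance on the Fabric ledger     */
    long escrow;          /* tokens locked by the bridge           */
    long evm_wrapped;     /* wrapped tokens minted on the EVM side */
};

/* Move value Fabric -> EVM: lock on the source, mint on the target. */
static int bridge_transfer(struct bridge *b, long amount)
{
    if (amount <= 0 || b->fabric_balance < amount)
        return -1;               /* reject invalid or unfunded moves */
    b->fabric_balance -= amount; /* lock on the source ledger        */
    b->escrow += amount;
    b->evm_wrapped += amount;    /* mint on the target ledger        */
    return 0;
}

/* Move value back EVM -> Fabric: burn wrapped tokens, release escrow. */
static int bridge_redeem(struct bridge *b, long amount)
{
    if (amount <= 0 || b->evm_wrapped < amount)
        return -1;
    b->evm_wrapped -= amount;    /* burn on the EVM side       */
    b->escrow -= amount;
    b->fabric_balance += amount; /* release the original funds */
    return 0;
}
```

The invariant a real bridge must enforce, atomically across two chains, is that `escrow` always equals `evm_wrapped`; the hard part is keeping that true when the two updates happen on different ledgers.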
This is exactly where I'm going right now: we touched on at least three Hyperledger projects and also researched a few other Hyperledger projects and Hyperledger Labs. Fabric, Cactus, and Besu were the main ones. I already talked about Fabric and Besu, but Hyperledger Cactus was in the middle; it's a project in Hyperledger that is focused on interoperability. We also touched a lot of different technologies, including IPFS and the EVM, which powers Ethereum, and so on. And of course, about my mentoring experience: it was really interesting to work with László and Imre. We had daily meetings for three months in the summer during the mentorship, and we also met sometimes after that. I need to thank them for the support, the guidance, and the sharing of knowledge between us. We basically divided our project into different phases. We had a planning phase; we did some research on existing cross-chain interoperability solutions; then we designed our own bridge implementation; and finally we implemented that bridge and produced a demo prototype to showcase to the world. We produced some deliverables, which are accessible on the mentorship page. I'll have a link here, and if you go to the Hyperledger mentorship program page, you have the links as well. We have a small report on cross-chain interoperability solutions, we have the design specification document where we specified the tests we wanted to perform on our prototype, and we implemented this proof of concept, which is also available. As for relevant resources, the first one is the Hyperledger mentorship page, which leads into all of these: the GitHub repository where we hosted our materials, and the pull requests we opened on Cactus.
Or Cacti, in this case; the project changed its name. There is a Hyperledger Cacti workshop you can also see on YouTube, where I present this project in a more technical way, and an academic paper that also resulted from the work. So this was basically the final result: a web application interface where one can interact and watch tokens bridging from one ledger to the other. It's interesting, and you can try it yourselves if you want; the Hyperledger Cactus GitHub repository has it in a pull request. You can check out that branch and take a look at the code, and if you have suggestions, anything you'd like to say, or want to contribute, feel free to reach out, open an issue there, or contact us through social media. I also have here a QR code with a video demo of our application running. That's it; you have here my social media if you want to reach out. Thank you for having me. I'll pass the word to the next speaker.

Hello everyone. In the summer of 2022, I participated in the Open Mainframe Project mentorship as a reviewer and adviser, contributing to the Mainframe Open Education project. A brief overview of myself: I'm a student at the University of Johannesburg, and I recently obtained my qualification for a diploma in business information technology. I'm also an IBM Z systems ambassador and a Mainframe Open Education project student lead. For this project, I was curating content, reviewing, and advising the core team on changes they could make. It is a no-code open source project sponsored by the Open Mainframe Project. What we're trying to do is open-source the learning of mainframe skills to close the gap that exists in the industry.
I don't know if you know about this, but many seasoned professionals have reached retirement age, and we're trying to train new, younger professionals to come in and replace them once they have retired. We have a GitBook that we use to collaborate, and anybody who is enthusiastic about mainframes can come in, either to consume the content or to share the knowledge they already have. On our GitBook you'll find foundational mainframe content that you can go through to become knowledgeable about mainframes. We're not limited to that: we also have links that lead you to more advanced courses on external websites, and links that lead you to communities where you'll get support on your learning journey and can ask questions. From this project, I upskilled my knowledge of mainframes. As I've mentioned, I participated in the IBM Z systems ambassador program; that's where I first got to learn about mainframes. Then, from reading the GitBook that this project uses, I got to learn even more, because I had to go through all the content on the GitBook and find the gaps on the platform, finding out what was missing that I could add. I went on to find more information from external websites and YouTube channels to bring that content to our GitBook. Another thing is that I learned to manage people. I was tasked with establishing a student user group, which I did successfully. I collaborated with students from different universities across the globe; off the top of my head, I can remember the University of El Salvador, Virginia Commonwealth University, and the University of North Texas. I learned how to form a team of leaders that I managed. We meet every now and again to share a way forward and draft a plan for how we can make the student user group more interesting.
So I work closely with them and also manage them, together with the students in the user group. I've also learned to communicate effectively. As I've mentioned, I had to review the content on our GitBook, so after reading through that content I make notes, and from those notes I go back to the Mainframe Open Education project core team and share my findings. I have to communicate with them on their level; it's not the same as communicating with students, because we understand things differently, and I have to make sure I communicate with them on their level. The same goes for the students: when I approach them, I have a certain way of approaching them. My mentoring experience was really awesome. I worked with my mentor, Lauren Valenti. She's the Director of Mainframe Education and Customer Engagement at Broadcom Software. Lauren and I had bi-weekly meetings. At the beginning of our mentorship, we met and established our goals; we wrote them down in a Google document, and then bi-weekly we came back and reviewed them to see how far I'd come, which goals I had accomplished, and which ones I still needed to do. Lauren would give me pointers on what I could do to achieve my goals effectively. She was open-minded like that, and she would also teach me some of the communication skills she uses so that I could effectively communicate my findings to the rest of the mainframe community. Sometimes we'd have personal conversations; she'd check up on how I was doing, whether my schoolwork was okay, and whether the mentorship was hindering me from accomplishing my university tasks. I found that really helpful, as it showed that she cared about me and wanted me to do better. She also gave me career advice here and there, and I was grateful for that. She made me realize that there are certain careers that are more suited to my personality and that I'd be better off in those careers. She also gave me more opportunities.
I was really surprised when she recruited me to their core team to work with them on a weekly basis (we have our meetings every Friday), and she also recruited me to be the university student lead. That exposed me to a lot more than was planned for my mentorship, so I'm really grateful for the opportunities she gave me. In the long term, I'd like to become a university professor and a researcher, so this was one way of starting that journey. I was experimenting to see if I really belong in the education space, and I found that I like working in education. As I build up to my ultimate career, in the meantime I'd like to use the technical and soft skills I've acquired. I'd appreciate getting into a role as a mainframe system admin; I've started training for that and have certifications from the Interskill learning platform. You can find more about that on my LinkedIn; I'll share the link with you in the chat. Also, as someone who is technically inclined, I realize how difficult it is for other developers to take on, or let me rather say maintain, code that they have found. It's a real challenge if the documentation is poorly designed. I'd like to help with that by being a technical writer, being the bridge between the old developers and the new developers. It would be nice to still participate in the software development space while being less technically involved, as I grow and experiment with more careers in the field. Also, if there's a possibility, one other career I'd like to get into is the software sales engineer role. So these are the three career roles I'm considering now as a graduate; I'd like to gain industry experience while I'm still pursuing my studies.
Being a sales engineer would also allow me to work more with people and use my technical knowledge to inform customers about the products I'm offering them. With that said, thank you for giving me your time. Thank you.

Hey everyone, my name is Ryan Humphrey. Thanks to the Linux Foundation for having me, and thank you all for being here. My project dealt with Hyperledger Fabric, more specifically a miniaturized version of Hyperledger Fabric that is part of the Hyperledger Labs ecosystem, known as Minifabric. Before we get into what Minifabric is and what it's used for, I want to quickly introduce myself. Again, I'm Ryan Humphrey. I graduated from the University of North Carolina in 2021, where I was part of a club ultimate frisbee team. It was a great team; I met a lot of lifelong friends, and one of those friends actually introduced me to this program. Aside from that, in my free time I like to play a little Texas Hold'em poker, and over quarantine I got into playing chess, which I've been playing for the past couple of years. As far as career goals, I'm an aspiring full stack developer. I have a long-term dream of creating a startup, but I feel like I need experience before I do that, and full stack experience will allow me to fully carry out my ideas and have knowledge across the entire tech stack. Currently I'm learning AWS; about a month ago I got the AWS Cloud Practitioner certification, and right now I'm transitioning one of my projects from Heroku to AWS. So, what is Minifabric? Minifabric is a tool to quickly bring up Fabric networks. As the name suggests, it's a miniaturized version of the Hyperledger Fabric tooling, and it's nice because you can quickly and very simply bring up a Fabric network. The tool supports both Docker and Kubernetes environments, and you can run it on a personal machine.
Normally, a big network like this would run on separate servers across different organizations, which can be a big, complicated infrastructure. What Minifabric allows is for the developer to work on their network all on a personal machine: they can simulate nodes across different organizations, all on their personal computer. The tool is really good for beginners getting familiar with the ecosystem. Hyperledger Fabric is a big, daunting thing to get into; I had no prior experience with it when I started this project, so this tool was actually really helpful for me to get familiar with Hyperledger Fabric too. That being said, it's not only good for beginners. It really allows developers to focus on chaincode instead of infrastructure: you can quickly get a network up and running and start writing chaincode within around 10 minutes, as you'll see here in a second. So what did I do on Minifabric? I added Fabric Operator support to Minifabric and also built a deploy-nodes operation. Combined, these allow you to automatically deploy nodes to Kubernetes-based Minifabric networks. Previously, we had to deploy these manually, and it took a lot of time. I also created a CI pipeline to automate testing for incoming pull requests. This is kind of self-explanatory: for each new pull request that comes into the repository, a Kubernetes cluster is built, and it tests the code to make sure each operation that Minifabric runs will work properly with the new code. I'm going to go through a quick demo. In the interest of keeping this short, I set up the Kubernetes cluster already, and I'll show what I did previously. You see here we have the Minifabric tool and all the different commands that come with it. I should point out that this is a command-line tool.
So we have a Kubernetes cluster that's already running, and you'll see the pods running on the cluster. You'll notice we have an NGINX ingress controller as well as a MetalLB load balancer, and those you will add to your cluster yourself. Then, in your working directory, you'll need a vars directory, and inside vars, a kubeconfig and node specs; you'll need to copy the kubeconfig file from your cluster into the kubeconfig directory. Then we can just start up the network with a simple command. It takes around two minutes, and you'll see that within the vars directory, the tool has now created all of our certificates and everything else that comes with bringing up Fabric networks. Again, you see what we have in our cluster, but we don't have a Fabric operator yet, and this is what I added over the course of the project. With this one quick operator-deploy command, within 20 seconds you can have an operator up and running on your cluster, and you'll see here a Fabric operator running on our system. With the Fabric operator running, it becomes very easy to deploy nodes. What you need to do is put the YAML files for the nodes you want to deploy into the vars node-specs directory and then run the deploy-nodes operation. This one takes a bit of time, around four minutes, but it gets them up and running. You see here we have CA, orderer, and peer nodes running on the network. You'll see at the top right that this recording took me around nine and a half minutes; I fast-forwarded in some places, but basically you can go from zero to a Fabric network in around 10 minutes, and from there you can go off and running into writing your chaincode. All right, so what did I learn? I learned how to contribute to an open source project. Coming into the project, contributing to open source seemed like this daunting, difficult experience, but it was the exact opposite.
Once you understand the basics, it's not that difficult, and I shouldn't have been so scared. I'm really glad I was part of this project, because it taught me how simple it is to get into it. On the complete other hand, with some of these technologies I came in thinking that I understood how they work, and I learned that I don't. It's complicated and frustrating, but I got great experience with it, and I'm glad for it. I had no prior experience with any of these technologies, but I feel like I left with a good grasp of each. And then patience and communication: I got stuck a lot during my time with the project, which was very frustrating at times, but it was a good opportunity to practice communication. I had a mentor I could just reach out to for help, and he was always nice enough to give me great advice. That leads me to giving thanks. I have to shout out my mentor, Tong Li. He's the man. I'm sure he got tired of all the emails I sent him asking questions, but he was always there to answer with intelligence and kindness. He gave me a great experience with the project and made it such a worthwhile one. Shuah, with the Linux Foundation, was always there to keep us mentees updated and always answered my administrative questions really quickly. And there was one maintainer I never met and never spoke to, but he would merge my pull requests within the hour, every time, no matter what; I think he lived on GitHub. But yeah, thank all of you for being here, and thanks for listening.

Jose, can you please turn on your microphone? We cannot hear you. Jose, can you try selecting a different microphone? It's at the bottom left, next to the microphone button; you have Zoom buttons along the bottom of your screen. There we go. Okay. Sorry for the inconvenience. No problem. Okay. Well, thank you. Well, hello everyone.
It is a pleasure for me to be here presenting the Learning Tokens project. This is a project that was originally proposed by the Hyperledger Latin America regional chapter. First, I'm going to talk a little bit about me. My name is Jose Marvin, and I'm a web developer and junior blockchain developer, an open source and blockchain enthusiast. I have been in this industry for over a year and a half, and I have also contributed as a co-organizer of the CNCF chapter of El Salvador. About the Learning Tokens project: this is basically a mechanism to produce token definitions. For that, we decided to use the composable InterWork Alliance Token Taxonomy Framework, so that we could produce token definitions to accomplish these goals: first, to recognize and register the learning process; second, to reward community engagement; and third, to certify the acquisition of skills and competencies. We also wanted to make use of the unique properties of blockchain technology. As we know, blockchain offers some unique properties: control over the supply of tokens, resistance to modification and tampering, and, most important, tokens that are capable of being transferred peer to peer. The project goals since the beginning were to understand the process of tokenization for collective learning. We tend to think that tokens are just a digital thing, but for this project we believe that tokens can be a unit of value representing the process of learning in communities and in educational institutions. The second goal was to learn how to use the InterWork Alliance Token Taxonomy Framework. The Token Taxonomy Framework is an open source tool, and it was the main tool we used to produce these token definitions.
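The Token Taxonomy Framework describes a token as a base type plus composable behaviors. As a rough illustration only (the field and type names below are mine, not official TTF artifacts), one of the learning-token definitions could be captured as plain data like this:

```c
#include <stdbool.h>

/* A rough, TTF-inspired token definition: a base type plus a set of
 * composable behaviors. All names here are illustrative only. */
enum token_base { FUNGIBLE, NON_FUNGIBLE };

struct token_behaviors {
    bool transferable;  /* can move peer to peer                */
    bool divisible;     /* can be split into fractions          */
    bool mintable;      /* supply can grow (e.g. new badges)    */
    bool burnable;      /* supply can shrink                    */
};

struct token_definition {
    const char *name;
    enum token_base base;
    struct token_behaviors behaviors;
};

/* A learning token for a completed achievement might be defined as a
 * non-fungible, transferable, but indivisible badge. */
static const struct token_definition achievement_badge = {
    .name = "learning-achievement-badge",
    .base = NON_FUNGIBLE,
    .behaviors = { .transferable = true, .divisible = false,
                   .mintable = true, .burnable = false },
};
```

The value of keeping definitions as data like this is that they stay platform agnostic: the same definition can later be implemented on any ledger that supports those behaviors.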
And while using the Token Taxonomy Framework, we were able to produce the first four definitions of learning tokens, and that way to contribute new artifacts that are platform agnostic and implementation neutral. So the purpose of all of this is to empower learning communities with blockchain educational opportunities. Some of the accomplishments, and what I learned in this whole process, were to understand and create business definitions of the targeted tokens. All of this is in the GitHub repository. Well, I was able to get a good understanding and take a deep dive into the InterWork Alliance tools, specifically the Token Designer and the TTF, which is the Token Taxonomy Framework. It was complicated at first because there was not much information about how to use it, but it is an incredible tool that I believe could be of help when it comes to designing and creating token definitions. Well, that helped me to create the first four definitions of learning tokens, and this was helpful because we could establish some agreements with educational institutions to potentially use these definitions, at first here in El Salvador with the university. Well, it was complicated, but in this whole process I had my mentor, Alfonso Govella, who is a founding member of the Hyperledger Latin America regional chapter and a Linux Foundation mentor. Well, he helped me quite a lot: he provided me support and connections, we had weekly mentoring calls to keep working on and developing the project, and he was in the process of creating the token definitions as well. He provided me with guidance through the whole mentorship process, but not just that; he was so much help for me, and we set up goals for the future. This is not the end. This was just the beginning of this project.
I know that the mentorship was just for six months, but we still talk, and we'll keep working on this project. So, well, I am really grateful to him. And what about what's next, and the aspirations I have? One of the aspirations I have is to keep on contributing to open source projects. Well, as I said, I will keep working on this Learning Tokens project, for the implementation part, and the aspiration I have is to learn much more about blockchain development and contribute to building solutions using the Hyperledger solutions, for example. So thank you. Thank you so much. Well, everyone, can you hear me? I'm attempting to share my screen at the moment. So, let's see. Okay. Hello everyone. My name is Francis Mendoza. Let me just start the timer here, and I will be presenting on Byzantine swaps. So Byzantine swaps are a rigorously secure method for cross-chain atomic asset exchange. So, in a nutshell, I am a new-grad blockchain engineer currently working at Rebel Labs. I do have prior experience at Intel, Fujitsu, and CertiK in terms of blockchain R&D, all across the technology stack. In terms of academic experience, I bring five-plus years of experience in R&D specializing in blockchain security and interoperability. I worked in two different labs: the ASU blockchain research lab and the ASU cyber-physical systems lab. But that's just a little nutshell about me. Byzantine swaps was the mentorship project that I had under four different researchers of the IBM Research division, in partnership with the Hyperledger mentorship program. So for context, Byzantine swaps are an alternative and deterministically secure asset exchange mechanism, sort of an alternative to bridges. For context, current interoperability solutions contain several fundamental security flaws.
So, in prior art, bridges are the go-to mechanism. Bridges are very fast, but they are extremely centralized, meaning there's a central point of failure: if you're able to exploit the bridge and compromise it, you're able to basically cut off the flow of assets from one layer one to another. And previously it was impossible to roll back the state of all the applications using that bridge, or other interoperability solution, to the last stable state. What this means more concretely is: as long as you are acting within the parameters, within the definition of the protocol, we are able to guarantee that you will not end up worse off than you were when you entered the agreement, the agreement being utilizing the bridge. Currently there are no security mechanisms or guarantees for that. So what we want to provide is a guarantee that the honest party and counterparty are not able to lose any of their funds, provided there is no arbitrary violation of the protocol. What that means is, whether the violation is in a semi-honest fashion or a malicious fashion, in the absolute worst case you will not lose any money, and in the best case you will get your asset, a fungible asset of some kind such as a token, or a non-fungible asset such as an NFT or a bond asset, for example, transferred cleanly. So the contribution, concretely, is a decentralized alternative method for cross-chain asset exchange that builds upon Maurice Herlihy's original proposal for atomic cross-chain swaps, with these additional security guarantees baked in. This is compatible with both fungible and non-fungible assets. This is all operating across the Hyperledger Weaver framework, and in our current implementation we're demonstrating asset transfer between two disparate Fabric networks.
And there are several security features in place to protect the honest parties; more concretely, what that means is, again, nobody ends up worse off than when they entered the cross-chain agreement, and in the best case assets are able to transfer cleanly in a reasonable runtime. So this is a sample output. Basically we have an asset swap between two Fabric networks; I really should say non-fungible, because we are talking about a bond asset here. So in the terminal output, Alice is the locker: she is locking an asset in relation to Bob. The way that Byzantine swaps work is that there is an explicit time limit within which both the party and the counterparty have to complete the transaction. But if you violate the protocol, again in a semi-honest or outright malicious fashion, or if you exceed the time limit in which you're able to transfer an asset, then we are able to roll back to a stable state where both the party and counterparty, provided they followed the protocol honestly, are able to walk away with what they originally staked or set on the table. So basically Alice is locking a bond asset, in this case bond01:a03, on behalf of Bob. Then we query the network to confirm to Weaver, and also to Fabric, that the asset has been locked. And as we do a status check, the asset in question has disappeared: you do not see bond01:a03 within network one, because it has already been locked, so it's not available here. Then, when we query it again, you will also notice token values. This is demonstrative of the transfer of fungible assets, which are tokens, but since we only have a 10-minute limit today, we're only demonstrating non-fungible. You can see we were able to transfer tokens from the counterparty Bob back to Alice, who now has 10,000 tokens.
And then when we do the status check again: we had an arbitrary time limit back here of, technically, 15 seconds. And over here, when we query the state again, 15 seconds have elapsed, so technically we are not able to query whether the asset has been locked in network one, because what happened was Byzantine swaps rolled back the transaction. So Alice, who initially locked her asset on behalf of Bob, now has her asset returned to her ownership. If we were to query the network again, we can confirm that yes, bond01:a03 has been returned to Alice's ownership. So that's Byzantine swaps in a nutshell, because we only have four minutes remaining. In terms of my mentors, I worked with four researchers from the IBM Research division: Venkatraman Ramakrishna, Dhinakaran Vinayagamurthy, Krishnasuri Narayanam, and Sandeep Nishad. In terms of my mentorship experience, it was extremely positive. I grew the most in particular regarding cross-chain interoperability research and how it compares to prior solutions, as well as how we can make novel contributions atop
Maurice Herlihy's original proposal for atomic cross-chain swaps. Previously, that design did not have the ability to guarantee a rollback to a stable state, let alone to do it in a way that, one, is automatic, so no user input is able to intercept or disrupt that process, and two, involves the creation of witnesses. The reason why it is named Byzantine swaps is that network observers are passively observing the transaction: they are able to confirm whether either transaction has succeeded or failed, and they are able to record the state in the event of a malicious lock-up. One of the key weaknesses of Herlihy's design for atomic cross-chain swaps is that if at least one of the party and counterparty is malicious, they can continue to re-attempt an atomic swap, and what that does is hold the asset in question in escrow, so other parties that may be interested are unable to receive said asset. This can go on forever in the worst-case scenario, where Alice and Bob hold this bond or this set of tokens forever by just continuously repeating the transaction, and the protocol has no way to prevent that from happening. With Byzantine swaps, that attack is impossible. So the most impactful takeaway is learning how to strike a proper balance between theoretically sound protocol design and industry implementation. Again, this is just version one of our demo, and in a future talk we will also have a demonstration of the fungible asset swap, not just across a Fabric-to-Fabric network but also Fabric to Corda, as well as other platforms that Hyperledger Weaver is fundamentally compatible with.
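For readers following along, the lock/claim/refund-with-deadline behavior described in the talk (an asset locked against a hash, claimable with the secret before a deadline, rolled back to the locker afterwards) can be sketched in plain Python. This is a minimal standard-library illustration of the general hashed-timelock idea only, not Weaver's or Byzantine swaps' actual API; the class name, method names, and asset identifier are hypothetical.

```python
import hashlib
import time

class HashedTimelock:
    """Minimal single-ledger escrow sketch: lock an asset against a SHA-256
    hashlock; the counterparty claims it with the preimage before the
    deadline, otherwise the locker can take it back. The clock is injectable
    so the timeout behavior can be exercised deterministically."""

    def __init__(self, locker, asset, hashlock, duration, clock=time.monotonic):
        self.locker = locker
        self.asset = asset
        self.hashlock = hashlock          # hex SHA-256 digest of the secret
        self.clock = clock
        self.deadline = clock() + duration
        self.claimed_by = None
        self.refunded = False

    def claim(self, counterparty, preimage):
        # A claim is valid only before the deadline and with the right secret.
        if self.refunded or self.claimed_by is not None:
            raise RuntimeError("escrow already settled")
        if self.clock() > self.deadline:
            raise RuntimeError("deadline passed; only refund is possible")
        if hashlib.sha256(preimage).hexdigest() != self.hashlock:
            raise ValueError("wrong preimage")
        self.claimed_by = counterparty
        return self.asset

    def refund(self):
        # After the deadline the asset rolls back to the locker, so an honest
        # party never ends up worse off than when they entered the swap.
        if self.refunded or self.claimed_by is not None:
            raise RuntimeError("escrow already settled")
        if self.clock() <= self.deadline:
            raise RuntimeError("deadline not reached")
        self.refunded = True
        return self.asset
```

In an actual cross-chain swap, two such escrows run on two ledgers with the same hashlock and staggered deadlines, so revealing one secret unlocks both sides; the witness mechanism described above is what Byzantine swaps add on top of this basic pattern.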
So in terms of skills gained: definitely the intricate details regarding cross-chain swaps, what constitutes sound protocol design, as well as threat modeling, as in, what are all the various ways that we could break the protocol, either in a semi-honest or outright malicious fashion. In terms of software engineering skills, TypeScript was heavily utilized, as well as Go. Platforms included Hyperledger Weaver and all of its constituents, namely, for the purposes of this mentorship, Hyperledger Fabric, and basically the agile process in a free and open source software setting. So that's the end of my presentation, and I have linked my email and my LinkedIn. Thank you very much, and we will open it up to the next speaker. Yeah. So, hi, my name is Kevin, and today I will be presenting about my experiences with the Linux Foundation mentorship and the OpenHPC project. So a little bit about myself: I am currently a junior studying computer science at Brown University, and I'm interested in high performance computing and robotics, as well as just tinkering with things in general. And a quick overview of the project: last summer, for my mentorship, I got the chance to work with OpenHPC, and what OpenHPC is is an open source project focusing on providing a reference collection of recipes for high performance computing. So essentially what that means is it's a package of tools, instructions, and support to build your own supercomputer. And the general goal is to broaden access to state of the art tools, lower the barrier to entry, and promote best practices. So if you're interested, you can learn more about the project by going to the link here at the OpenHPC community. And during my mentorship, some topics I was able to dig into included setting up my own high performance computing cluster and getting to know the HPC ecosystem a little bit better.
Additionally, through book recommendations from my mentor and workshops at a related conference, I got the chance to play around with parallel programming and run parallel jobs on some of the supercomputers at national labs, which was a pretty exciting experience for me. I was also able to learn more about concepts related to containerization through some hands-on experience with my project contributions, and I'm still applying those concepts today. So, diving into the details a little bit more: part of my mentorship was setting up a local cluster through virtualization and Vagrant. With guidance from the community, I followed the OpenHPC recipe to install a small three-node cluster, with one login node and two compute nodes, all running on my laptop. Through this I was able to familiarize myself with some of the technologies and tech stacks in the world of HPC, as well as alternatives to them and what benefits or inconveniences each one brings. During the mentorship, I was also able to learn much about the hardware and benchmarking aspect of things. I created a custom script that measures training time of different TensorFlow models and ran it on different hardware architectures on an HPC cluster at my university. There's a little bit more on that on the next slide, but some of the difficulties involved included differences between personal devices and the actual cluster, where on the former I would have admin access but not as much on the latter, and also running a script on bare metal versus through virtualization. My project with OpenHPC also involved using NVIDIA's HPC Container Maker to generate both Dockerfiles and Singularity definition files through Python scripts. Singularity is another container system that's widely used in HPC, and it was interesting learning about the differences as well as figuring out compatibility between Singularity and Docker.
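The speaker's benchmark script isn't shown, but a training-time measurement of the kind described can be sketched as a small standard-library harness. This is a hypothetical stand-in: the function name and the warmup/median choices are my assumptions, not the project's actual code.

```python
import statistics
import time

def time_training(train_fn, warmup=1, repeats=3):
    """Time a training callable: run a few warmup iterations first (to keep
    one-off costs like library initialization or GPU context creation out of
    the measurement), then report the median wall-clock time over several
    measured runs."""
    for _ in range(warmup):
        train_fn()
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        train_fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)
```

In practice `train_fn` would wrap something like a model's fit call; because the harness has no hardware-specific code, the same script can run on a laptop, on bare metal, or inside a container, which is what makes cross-hardware comparisons like the ones described possible.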
So in the chart on the right, you can see the iterative process I went through, as well as some of the issues encountered, such as permissions, accuracy, and versioning issues. The end result of the project was a container image based on OpenHPC's tech stack, with added support for NVIDIA GPUs and the appropriate library versions to run the benchmark from the previous slide. And combining the two projects, I ran the benchmark scripts from the slide before in the container I created versus the officially released containers. It was pretty interesting to analyze and attempt to match the training time and accuracy of the two, finding ways to gradually improve my version over time. In terms of my mentorship, I had weekly meetings with my mentor Reese, and we would go over roadblocks, with me laying out options I could think of and him breaking down the pros and cons of each approach, sometimes suggesting additional tools. With Reese's guidance, I learned to take a more breadth-first approach to debugging instead of tunneling into the same attempt over and over again. For me, this method of getting feedback on my ideas, as well as discussing their potential challenges, was an effective approach, and coming out of the mentorship I feel like I am able to make more informed decisions in the future. The discussion-based methodology also helped me improve my skills in communicating difficulties I encountered and verbalizing my own thought process. It was also interesting to learn about the formation and development of an open source project from a mentor and receive guidance on industry practices. So I guess a few main takeaways I have include getting to know how an open source community operates in terms of documentation, release cycles, and organization funding. I met various people who applied their knowledge of HPC in different ways, intersecting with different fields while tying it back to high performance computing.
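For readers curious what the HPC Container Maker workflow mentioned above looks like: hpccm recipes are short Python files that the `hpccm` command-line tool renders into either a Dockerfile or a Singularity definition file. The sketch below is a generic illustration under my assumptions, not the project's actual recipe; the base image tag and the particular building blocks are placeholders.

```python
# ohpc_cuda_recipe.py -- an hpccm recipe, rendered with e.g.:
#   hpccm --recipe ohpc_cuda_recipe.py --format docker      > Dockerfile
#   hpccm --recipe ohpc_cuda_recipe.py --format singularity > Singularity.def
# `Stage0`, `baseimage`, `gnu`, and `python` are names hpccm provides when it
# evaluates the recipe; this file is not a standalone Python script.

# Start from a CUDA development image so the GPU libraries are present
# (placeholder tag, not the one used in the project).
Stage0 += baseimage(image='nvidia/cuda:11.4.3-devel-ubuntu20.04')

# GNU compiler toolchain building block.
Stage0 += gnu()

# Python 3 for the benchmark scripts.
Stage0 += python(python2=False)
```

Generating both container formats from a single recipe is what makes the Docker-versus-Singularity compatibility work described above tractable: one source of truth, two outputs.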
And one more takeaway was experiencing some hurdles myself, and then seeing people present state of the art solutions at a conference that specifically addressed those issues, and getting a better understanding of how research and pushing the boundaries of the field work. Additionally, I've begun to incorporate more parallel programming into my projects, and being less intimidated by clusters has led me to consider larger scales and optimizations in my own work. So, in terms of future steps: in the future, I hope to be more involved in the open source community and contribute more to HPC. I would also like to pursue the topic of parallel programming beyond my introduction to it throughout my mentorship. And on top of that, I would love to build a physical cluster out of Raspberry Pis with the OpenHPC recipe. I guess in general, I want to do more hobby projects related to HPC and think about ways to improve my workflow and existing tools. So here are the acknowledgements. I would like to thank my mentor Reese for all the encouragement, guidance, and excellent advice. I would also like to thank project lead Chris and the OpenHPC community. I was able to attend the Supercomputing conference with them last November, and it was amazing to be able to meet everyone in person. And here's a picture of the customized Legos they handed out at the booth. Yeah, with that, if you want to chat, you can reach me at my email here, and if you're interested in seeing some more of my work, you can navigate to my website that's also listed here. So thank you for your time, and thank you all for listening to our presentations. Thank you, everybody. Those were awesome presentations, and I am so happy to see all of you. Thank you for sharing about the projects you took on, and also about what you want to do in the future career-wise, and figuring out what would make you happy in pursuing fulfilling careers. That's awesome. Thank you. Thank you to all our mentors.
Without them, we wouldn't be able to do what we do. And thanks to our sponsors Red Hat, GitHub, IBM, and Intel. And thank you so much for all the presentations today.