Alright, good morning, good afternoon, good evening, and welcome everyone to the Open Mainframe Project Mini Summit. We're excited all of you were able to join us. We have a great lineup of presentations today to give an overview of the open source ecosystem on the mainframe and a lot of the great things happening here at the Open Mainframe Project. I'll be your host today. My name is John Mertic, and I'm the director of the Open Mainframe Project on behalf of the Linux Foundation. I want to kick things off today with our governing board chairperson, Len Santalucia, who is also the CTO and VP of Business Development at Vicom Infinity, and he's going to give a quick overview of the project. So Len, take it away.

Everybody see my charts? Yes. All right. Well, thank you, John, and thank you everybody for the opportunity to present to you today. As John said, I am the Open Mainframe Project governing board chair and also CTO of Vicom Infinity, an IBM Gold Business Partner. I've been in this business for quite some time, folks. I was just thinking about this: 1978 is when I started with IBM, but I was with some other companies before that, from 1973 through '78. I was with IBM from '78 until almost 2009, and I've been here at Vicom since then. It's been a great ride in and around this thing called the mainframe, and I really have enjoyed it very, very much. When this Open Mainframe Project came along and the chance arose to become its chair about five years ago, I jumped on the opportunity because of what it could mean for the platform.

The Linux Foundation Open Mainframe Project is really all about creating sustainability for the open source ecosystem in and around the mainframe. People can have these open source projects and do their own things, but the key word here is sustainability. That is really what the Open Mainframe Project is all about.
And the concept of open source really goes back quite a way. I'm familiar with SHARE; SHARE was started in 1955 and is the oldest, longest-running IT conference in the world. Back in that timeframe, when it was founded, the whole concept of open source really got started, in and around one of the predecessors to the mainframe, the IBM 704. If any of you know the history of the mainframe, its inception was April 7, 1964. I was 11 years old at the time, and I was at that debut because my grandfather and father took me there, being third-generation IBM. As a little kid I didn't know what I was doing, but it certainly was an exciting time, with all the excitement around the announcement. This thing called the System/360 mainframe was named 360 for the 360 degrees around a compass, because it was all-encompassing, spanning seven different architectures, one of which you see pictured here. It was a big gamble for the company, and it turned out to revolutionize the whole world and what computing was all about. So it is nice to say I was sort of a part of that, and little did I know that in '78 I would be a full-time part of it. It was, I guess, meant to be. But we're not here to talk about old stuff; we're here to talk about really good new stuff. The IBM z15 is the current version of the mainframe, and as you can see from the statistics posted on this chart, it is really quite a powerful system. There is nothing else like it on the planet. It runs open source software, and it runs Linux too, actually very, very well; in fact, probably better than anything else on the planet, available from any other vendor or alternative. What's nice is that this open source concept has been part of its story since its inception, and it has carried forward to today, right through to when the Linux Foundation's Open Mainframe Project formed five years ago.
It was just a perfect merger of the IBM mainframe and the project itself. So let me take a look at what has occurred over time, starting with 1955. One of our recent additions, CBT Tape, was one of the very first open source projects around, and it is now part of the Linux Foundation Open Mainframe Project. Then through 1999, when I was still there at IBM, what was really cool is that IBM decided to make a significant investment in Linux on the mainframe and across all of its server family, and the rest is history, through the timeframes and other things you can see here on this timeline. If you take a look at what's going on with open source in the market, when it was first getting off the ground and then as time went on, just look at this exponential growth. As we're saying here, there is no way it's going to slow down anytime soon; it is just growing at such an exponential rate. If you take a look at open source code unto itself and what it entails, it's custom code, it's frameworks, it's libraries of things that can solve many, many problems. It is what we term a code club sandwich. The key thing about open source projects, and what really matters about them, is which ones are going to continue and which ones will really be of value. That is really the question of which projects matter to the open source world and within the realm and domain of the Linux Foundation Open Mainframe Project. The answer to that question is centered in and around sustainability: successful projects depend on their members, the development communities involved with them, the infrastructure and standards being put into the product, and what the market will actually adopt. So it is a community, it is a movement. You could almost look at it as a cult, but in a good way.
And it really helps with what we're trying to accomplish in the end; this is the mantra of the Open Mainframe Project unto itself. If you look at some of the challenges of the ecosystem, you know: getting past the glass ceiling, developing project needs, managing project assets, paying attention to governance, and watching out for fragmentation, lack of integration, and lack of cross-industry projects. These are the kinds of things that the open source projects on the mainframe that are governed by the Linux Foundation Open Mainframe Project address very, very well. And we welcome people to become part of this organization. If you take a look at the projects that have occurred since the inception of the Linux Foundation Open Mainframe Project: it was launched in 2015, when the LinuxONE system was announced. I was there, in Seattle, and it was really quite exciting. As you can see, the first project there was ADE. Then, as you look through time here, 2016, '18, '19, '20, we have actually grown into quite a phenomenal set of projects underneath this Open Mainframe Project. There are now 15 projects, and many of them have been quite successful and quite influential in many, many different ways across the industry as a whole. Zowe is one of the big ones; it has actually brought z/OS itself into the open source world, and later today we will have people talking about what's going on with a working group based in and around COBOL. There are also really cool projects like Polycephaly, which works with Jenkins, and Feilong, with its connections through the z/VM cloud connector, plus education programs with the academic initiative. You'll be hearing many, many of these discussions following mine here today. So take a look at this: the mission of the Open Mainframe Project is really quite cool. It eliminates barriers.
Bring it to us: if you run into barriers with open source being brought to the mainframe, whatever the case may be, bring it to us. This is what we're here for. We're here to help with demonstrating the value of open source on the mainframe from a technical, business, and financial perspective, and we are here to strengthen the collaboration between the different groups that need to collaborate on these kinds of things. And I might add, just so you know, that in the back of this presentation deck I picked out quite a number of the high-level charts so I could fit into my time slot; in the back end of this deck, you will be able to see a lot of the details behind each of these things I'm covering at a high level on each of these charts. To learn about these Open Mainframe Project projects, some of which I just touched on quickly, there is a very nice link here to go to for more details, not only to learn what they're about but also to learn how to become part of them. We're always looking for new people to participate and become part of the projects themselves. The Linux Foundation Open Mainframe Project did a very nice job putting together this landscape, where you can click on the link provided here and get to each of these projects; it's a nice big window on the entire project and all the projects within the project. And that's a tongue twister; try saying that three times fast, right? The reason innovation really thrives here is because it provides a vendor-neutral home for many of the mainframe-centric open source projects, along with code hosting infrastructure. Our company is one of the places that has donated resources, on our z14 system, for many of these projects, so they have a place to develop and a home to keep things.
We watch over the governance of these projects; we make sure we're doing everything legally and without any violations of trademarks. We continue to build a very powerful ecosystem, with a lot of good staff support, and we create this natural collaboration opportunity between mainframe and open source projects. More can be read here. If we take a look at what makes it all sustainable, it is the infrastructure, the developer support, the market awareness, and the governance that goes along with all of this. Other participating open source projects include many of the ones you see here; there are literally a ton of them, and we couldn't even fit all of them on the screen. There is a really cool podcast that takes place; my buddy Steven Dickens at IBM is the moderator for it, and we bring a lot of great people and technology onto these sessions. Recently we had the general manager for IBM Z, Ross Mauri, present, and he probably had one of the most-watched sessions, because he talked about the strategy and future of the platform. You can see that very nicely replayed right here, so go take a look at that, and come to these sessions when they're made available. The inaugural Open Mainframe Summit took place this year in September; what a great event that was, and it was so nice to be part of it. This little mini summit that we're conducting for you today is only a small taste of what went on there. It was well attended, and the platform that the foundation chose to use was so intuitive and easy to use that we got a lot of great compliments about it. It covered a whole gamut of things; you can go back and replay any of the programs that were covered during the Open Mainframe Summit of September 16th to 17th, if I remember correctly. So: five years since launch, 38 supporting organizations, 15 hosted projects, and 40-plus sponsors and mentees in this project's mentorship program.
A couple of guys on my staff were involved with some really cool projects, Llama and Zebra and a number of others with funny, cool names, that came out of this, along with 280-plus projects. Here is how to participate: take a look at some of the things you could be doing, from a community perspective, from an R&D perspective, or through corporate sponsorship. We cover the gamut. To learn more about the Open Mainframe Project, we've provided these links for you to click on and go learn. Academia can join at no charge, but there are different levels of membership, from silver to platinum and everything in between; that is all included in the back end of my deck, but you can also get all of that information very nicely right here at these links. And please make sure you follow us on social media. We have a great group of people in our organization who use Twitter, YouTube, LinkedIn, and many others, and we will also be posting what you see here today on YouTube for replay and sharing across your organizations, wherever you might be. Thank you, and I'd like to turn it back over to our moderator.

Thank you. Thank you so much, Len. Great overview of what's going on here at the Open Mainframe Project. We're going to shift now to one of the initiatives announced at Open Mainframe Summit, and also one that we announced early this spring, which came out of the wake of the COVID crisis and some of the challenges that state, local, and federal governments had in responding with some of their systems and the unprecedented amount of need there. So I want to introduce Sudharsana Srinivasan, who is the lead for the COBOL training course, and at the same time Derek Britton, who is the leader of the COBOL Working Group. So I'll let you both go; I don't know who's going first of the two of you.

I think I drew that straw, John. Thank you very much, John, and thanks for the intro as well, Len.
Hope everyone can hear me and can see the charts. (Loud and clear.) Wonderful, thank you. So, my welcome to everyone who's joined us at this mini summit, or, as us Brits would call it, a hill. I promise no more jokes. My name is Derek Britton, and I happen to work at Micro Focus, who happen to be quite well known for their COBOL products. But more importantly than that, I'm delighted to be one of the founders of the new COBOL Working Group, part of the Open Mainframe Project, so I just want to talk about that a little bit for the first few minutes here. The work around COBOL falls into two fairly broad areas, as part of two projects in the Open Mainframe Project, so we'll talk about both of those here today. I'll be covering the first, and Sudharsana will cover the second in ten minutes or so. But before we do any of that, let's talk about this thing called COBOL, shall we? Of course, many of us on this call already know most of the facts and most of the background behind this most celebrated of languages, but for those who don't, or perhaps don't have all of the facts, let's start with the staggering fact, as you can see here, that the programming language COBOL is, as of 2020, in its seventh decade of loyal service to the IT community. It's one of the few things that gets anywhere near as close as the age of the SHARE community, which actually predates it by three or four years, so it was great to see Len share some of that history. Interestingly, COBOL predates the first IBM mainframe, which is a staggering fact if you consider it. The term COBOL was first coined and used in September 1959, and of course since then it has evolved to become one of the world's, if not the world's, most widely used core IT business systems languages. And what is it? Well, a few statements here that are reasonably accepted wisdom; these things are actually harder to measure than you would imagine.
What does it do? It runs system-of-record applications and back-office applications at tens of thousands of organizations worldwide, supporting a whole variety of sectors, and of course government agencies as well: banking, insurance, transportation, government, health care, whatever. It is of significant value to the global economy, with hundreds of billions of lines of application code worldwide; so the estimates say, though that's actually quite hard to count, as you can imagine. One thing we certainly were able to count in a recent survey is how overwhelmingly strategic these systems are: 92% of respondents in a market survey earlier this year said so. And to a certain extent, as Len already sort of raised the point, COBOL was one of the original open projects of its time. A little-known fact, perhaps, for some people attending this call: COBOL isn't owned by anybody. You don't buy it from one vendor and one vendor only; it's an open project that conforms to an open standard presided over by an international standards body. So no one can lay claim to owning COBOL; plenty have a close association, but actually COBOL is one of those really great examples of what happens in the IT community when people collaborate with a clear common goal. That started a very long time ago, and we're still reaping the benefits of it. It is perhaps also useful to know, as John alluded to, that there's been some press about COBOL recently, and not all of it fantastically positive. But actually, there have been 100-plus separate positive articles around the topic of COBOL in the last year alone, plus the social media impressions; it's even made the top ten of computer language discussions in the Twittersphere in 2020. I don't know how important such metrics really are, obviously, but perhaps more interesting is the membership of the COBOL Programmers Facebook group.
Yes, one exists, and it has nearly doubled in the last couple of years: 17,000 members and counting. So, something of a resurgence, you might say. What's the secret? For something to last a couple of years in the IT world is pretty good going, so to last 60-plus is pretty phenomenal. It can't be an accident that it's stuck around for so long; there must be something good about it. We've researched this at Micro Focus, and we've certainly talked about it with our friends in the vendor community and our users, and it boils down to effectively four things. These are the attributes. I don't have time today to go through them in any detail, but there are several presentations, white papers, and such like that uncover this. If you know COBOL, you kind of know this already, and if you don't know it so well, let's just cover it real quick. Many would argue it's better at number crunching and handling data than anything else, which is quite important when you consider what it's being used for, you know, running business systems. It can also run unchanged on whichever platform you need it to run on, and not just various versions of an IBM mainframe; it runs wherever it needs to run. It certainly ran on the original System/360 that Len mentioned, and it runs just as fine, unchanged as necessary, on a z15 in 2020. It's also readable: anyone who's seen COBOL will instantly recognize that it's actually child's play to read. If it's easy to read, it's easy to learn and easy to teach, which I think brings us to the question mark about the next generation of COBOL talent: can you train someone in this? Well, we'll cover that in the next session.
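The number-crunching point above has a concrete basis: COBOL declares fixed-point decimal fields (for example, `PIC 9(7)V99` for a currency amount), so money arithmetic is exact rather than approximated in binary floating point. As a rough sketch of why that matters, here is the same contrast expressed in Python's `decimal` module (Python is used purely for illustration; the amounts are made up):

```python
from decimal import Decimal

# Summing 10 cents three times in binary floating point picks up
# representation error, because 0.10 has no exact binary form:
total_float = sum([0.10, 0.10, 0.10])
print(total_float)   # 0.30000000000000004

# Decimal arithmetic, the native behavior of a COBOL PIC 9V99
# field, keeps the cents exact:
total_dec = Decimal("0.10") + Decimal("0.10") + Decimal("0.10")
print(total_dec)     # 0.30
```

That one-cent-scale drift is exactly what a ledger or payroll run cannot tolerate, which is a large part of why decimal-first languages stuck in those workloads.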
And thanks to the vendors and the standards bodies, it continues to adapt to support whatever it needs to work with, whether that's cloud, containers, object orientation, microservices, or whatever. In fact, bearing in mind it predates pretty much everything else, it has obviously had to adapt along the way and cope with many and varied tech innovations, which it has done brilliantly. So it is in fact 2020 technology; it's just based on a 60-year-old idea. For those of us who have a cool mobile phone, you don't regard that as old technology, but the telephone is a very old idea. Those two things can coexist quite comfortably. Well, if that's the positive news and why it's sticking around for good reason, it doesn't mean that everyone enjoys celebrating it; in fact, it's always had its detractors. Look at this quote from 1960, which is quite telling, actually, because that 1960 quote predates the first release of COBOL ever shipping. So before they'd even built version one, someone was ready to shoot it down. And John referred to some recent press where, not entirely accurately, there were attempts to use COBOL as a scapegoat for things going wrong in certain organizations or agencies, not necessarily particularly well reported or well researched; the issues reported had nothing to do with COBOL, of course. Why does it have its detractors? Well, there are those who might say, why not use an alternative technology, and of course the people who detract from COBOL might be the ones trying to push a different kind of technology. But it can also be based on ignorance: when something is that old, how can it possibly still be useful? Well, actually, technology doesn't rust; it has enduring value. So you have to be careful of reputation.
And that's a genuine threat that you really have to consider, which is of course where the Open Mainframe Project came in. It recognizes that particular challenge: the reputational one that valuable technology, not just COBOL but the mainframe itself and other mainframe componentry, suffers from just because of its age. It's clearly something that needs to be looked at factually and professionally. So the COBOL Working Group was set up earlier this year with a couple of specific goals. One is to establish enough factual evidence, thought leadership, and clarity around why COBOL remains valuable, and the facts that support that evidence, to help influence today's decision makers to make an informed choice about the use of that technology. It also sought to look into what we need to do for those who are looking to acquire those skills, which we'll come on to shortly. Now, the working group is effectively just starting out, and we're already up close to 100 members, but it's a community project, as all of these things are, and the power of community is when everyone gets involved. So, you know, I'm personally inspired by that Facebook group with 17,000 members, avid COBOL protagonists just wanting to talk about the technology. I would love to see the same thing here, because there's a lot of COBOL interest around. Our job here, I think, and your job, listener, if you choose to accept the task, is to get on board and be a vocal supporter of the factual truth around COBOL. So before I finish, let me give you the opportunity to get yourself immersed in some of those facts. There are plenty more besides, but the recent OMP Summit recording is probably a good place to start; there we laid bare the truth behind COBOL.
There are also recent articles from the IT press, fairly well researched and fairly well informed, which I think give a much more balanced, much more professional view of COBOL in terms of what the decision makers of today need to be aware of in order to make the right choices for tomorrow. If that's our plan to influence decision makers, what about the coders themselves? With COBOL's age being what it is, there is the obvious requirement for a new generation of COBOL coders to be fully versed, fully trained, and fully aware of what the tech can do. And that's not a bad segue, as if I wrote it deliberately, into how we can teach the next generation that technology, which is where I'm going to leave you for the time being and hand over to our sibling OMP group, all around COBOL programming, which is led by Sudharsana. Over to you, Sudharsana.

Thanks, Derek. I'm going to quickly steal your screen. I hope you can see my screen. Can you see the screen and hear me? Anybody? (Yes, yes.) Okay, perfect. All right. Thanks again, Derek; that was a great overview of the COBOL Working Group, and, as you rightly said, we're a sibling group here at the Open Mainframe Project. My name is Sudharsana Srinivasan. I'm a program manager at IBM, in the Z influencer team, so what we basically do is work on putting out technically engaging, relevant content and engagements to really touch base with our global community of developers. Now, we've talked about the skills shortage in the space of mainframes and COBOL, but I just want to put things in context here for a brief moment: if you look at skills shortages, or just Google it, there is a skills shortage in several industries, manufacturing of course, and we've been talking about a cybersecurity skills shortage for a while now as well. So, just putting things in perspective; and I know both Len and Derek already touched upon this, and of course John in the introduction.
This has been the year of COBOL, as I like to say: the good, the bad, the ugly, but the optimist in me takes it as good news. Just as the mainframe has been silently, quietly running the world, so has COBOL; COBOL code has been running the world for us. But here we are in 2020, and no one can stop talking about COBOL. Amid the whole pandemic, we had all of the unemployment checks having issues, and everyone came out talking about COBOL, the lack of COBOL skills, what have you. We could look at that as a silver lining. So if that is the problem, then what really is the solution? IBM was in a collaborative project long before any of this happened, well into working on a solution, and I'm really excited that I could be part of it: we came together in a collaborative project to develop that next-generation, really modern, tooling-based COBOL course. I know Derek touched upon what we can do to bring in that next generation of COBOL programmers, and there is no better place to land such a course than the Open Mainframe Project, right? All of this came amid the big COBOL news that hit back in March and early April, and we landed the course on April 14th, with the help of John Mertic and his amazing team at the Open Mainframe Project. Very interesting times, and quite the way that we ended up finally pulling all of this together.
So the back story here: back in the fall of 2019, we were already in the process of planning the COBOL course, what kind of tooling we would use, how the whole content would play out, things like that, with a conscious effort and the goal that this should be a collaborative project, bringing in clients, academia, and IBM SMEs, all coming together to create content that would be relevant and really new-age for our next generation of COBOL programmers. So the project kicked off, and we were well on the way in February, when we met up here in Sacramento, California. Just a week into a four-week residency to get this content all done and published, everyone had to go back home and work from home; it became a remote, virtual residency. This was the team that came together, and we had a team member from Hursley in the UK. So here's a little tidbit about what the mainframe is and how modern it really is: our team member was in the car, on the way to drop his wife, I think, at the airport, working on the project, connected through his VS Code to a mainframe back end, all while on the way to the airport. So there you have how modern the mainframe is, along with all of the tooling that is now available as an interface to work with it. The COBOL course content ended up being put together in two parts. Part one is really all about the tooling; like I said, the goal here was to bring in modern tooling, so it leverages VS Code, the Open Mainframe Project's Zowe, and IBM Z Open Editor, which brings in a lot of the language editing aspects. Part two then dives into COBOL itself: the basics of COBOL, all the various specific aspects of the language, and some of the really cool features that COBOL has to offer, like the intrinsic functions and more.
And as I said, all of this COBOL content finally landed on the Open Mainframe Project back in April. Thanks to the team and John, it was quite the month of April, as I remember it. And this was our biggest success: about three days after we actually landed the course, a brand-new learner came along, took the content, worked on it in the Open Mainframe Project GitHub repo, and had this little tidbit to share. A very proud moment, not just for the learner, who goes by "go one," but for all of us on the team, to see how well this was being received. To help a lot of the COBOL learners, since COBOL was suddenly revived and resurrected, we also put up a landing page on IBM Developer, to bring in any and all content and resources we could find as one gateway for folks to come in and learn about COBOL. The link is here again at the bottom of this page. So the COBOL journey continues. That's a little snapshot of the GitHub repository for the content. As you'll see, it has been quite a while since April, right? We've been able to fork this content into course number one, which is the original course, to which a lot of our community members have actually contributed; it is still live and thriving, with over 1,700 forks. And then the advanced course, course number two, is currently where a lot of our community members have been contributing. There is a Db2 API project that is now wrapped into it (how do you interact with Db2 through COBOL), and there are some hands-on projects that really help the advanced learner keep engaging with the content. I want to take a brief moment here to also showcase our TSC members. As with every Open Mainframe Project project, there is a TSC, and so our project has one here too.
John Mertic, of course, is on the team, and we have amazing contributors: Jelly Ziggura and Mike from Broadcom, and Martin, Paul Newton, and myself from the IBM team, who form the core TSC here, and we meet every second Tuesday of the month. A few other highlights about the COBOL course itself that I'd like to talk about; they say the proof is in the pudding, right? Since landing this content, we've had over 1,600 members in our COBOL Slack channel, which is on the OMP workspace, and over 1,700 forks, like I said, of the content. It also comes with hands-on labs, which are hosted on a machine that IBM has offered for the labs, and we've issued over 3,500 system IDs for folks to actually do the labs that go along with the course. Another thing that we've been able to do in the monthly TSCs, as Derek also pointed out, is work with our sister organizations within OMP. We've brought in guest speakers from Zowe and from the COBOL Working Group, and our community members (I mentioned the Db2 API project) have been coming and presenting content that they have been contributing to this project. So I'd like to say thank you to John, to the Open Mainframe Project, and to everyone here for giving me the opportunity to talk about COBOL and share a little about this project. Thank you.

Awesome. Thank you both. Yeah, sorry, I was just going to say: any questions for Derek or myself? If you have questions, feel free to use the Q&A that is on the screen here. But definitely, this is, I think, the COBOL investment, the COBOL effort, really the whole community coming together around COBOL.
If we had all sat here this time last year, we wouldn't have thought we'd be talking about this, but it really showcases how strong this community is and how invested in its future it is. If you just think about it: the response time for this community, from when this really started to hit the forefront to when the community already had public ways of coming together, was less than a week. The programming course itself had hundreds of stars on GitHub before the code even fully landed. We've had nearly 1,800 individuals to date raise their hand and say, "I'm available for hire in COBOL," and that's not just people at the end of their careers; it's people at the beginning of their careers, the middle, the back end, different genders, different ethnicities, different locales. It has really showcased how strong this is. More importantly, it has shown how strongly this community pulls together, and what a great investment area it is. We thank both of you, and the whole community around it, for doing a marvelous job of pulling this together. All right, we're going to skip ahead here; we have no questions, and again, if you have questions during this, please use the Q&A and we'll try to get to them between some of our sessions. Next I'd like to shift over to Zowe. As they talked about, the interaction between a lot of our projects is really where the strength of the foundation comes in; you can collaborate in open source all you want, but having these natural collaboration opportunities that open up with a strong foundation is something that we just see time and time again.
Across the various efforts we have at the Linux Foundation, Zowe has certainly been one of the spearheading ones, and you hear themes of that through all of the presentations here. So with no further ado, I'd like to turn it over to Mike, Michael DuBois, to talk a little bit about what's going on with Zowe. Thank you, John. I'll just share my screen; I assume everybody can see it now. Loud and clear, looks great. Okay. Hello everybody, and thanks for joining. My name is Michael DuBois and I am a product manager at Broadcom. In my current role, I lead our open mainframe product management team, which is responsible for our CA Brightside offering and all of our contributions and activities around Zowe. I also participate in the Zowe Leadership Committee and the Zowe onboarding squad, and I work closely with several of the Zowe squad leaders from my organization here at Broadcom. For the next 20 minutes I'd like to provide you with a very quick introduction to Zowe, intended for Zowe beginners. But you can see my email address on the screen, and if you ever have any questions about Zowe or you need help getting started, please don't hesitate to reach out to me. As I mentioned, I've got a number of the Zowe community leaders on my team here at Broadcom, so I can probably help you find the answers you're looking for. And you can also find me on the Open Mainframe Project Slack, where most of the Zowe community shares information through the channels there. So, a few very quick facts about the Open Mainframe Project, not to repeat anything that you heard from Len earlier, and then we'll take a quick look at Zowe, and then we'll drill down into Zowe LTS, or long term support, and the Zowe conformance program. So really quickly, the Open Mainframe Project, or OMP, is of course a project under the Linux Foundation, as Len said earlier, focused on the sustainable use of open source in the mainframe environment. 
And part of their very bold vision is that open source on the mainframe becomes a standard, and part of their mission is to enable the adoption of open source on the mainframe and to help build the community around it. It's been five years now since the launch of OMP, and it's gained a lot of momentum and almost 40 supporting organizations, including Broadcom, IBM, and Rocket, who were the three founders of Zowe. And over those five years, both the network of contributors and the impact of their contributions just continue to grow. There are currently 15 projects hosted by OMP, and Zowe is only one of them. As Len mentioned, you'll hear about a number of them today, but if you're interested in learning more about any of those projects, you can learn more at openmainframeproject.org. Which brings us to Zowe. So what is Zowe? It's funny, I realized this just this morning: my first slide in the "what is Zowe" section answers three questions, but none of those questions is "what is Zowe." I mentioned earlier that the founders of Zowe were Broadcom, at the time CA Technologies, which was acquired by Broadcom at the end of 2018, and also IBM and Rocket. The idea was to create a new, modern, easier way for modern engineers to interact with the mainframe through an open, extensible ecosystem that makes application development and other engineering functions on the mainframe much more like any other platform. The idea was to make it no different than other platforms. So Zowe is that extensible open source ecosystem for the mainframe, the first of its kind. Zowe offers a set of interfaces that provide a consistent way of accessing the mainframe, so engineers can consume the services they need from the mainframe in order to get their jobs done. It offers interaction with the mainframe in a modern way, enabling those engineers to use the skills that they already have and already bring to the table. 
And to work with the mainframe using modern tools and scripting languages that they've already learned and are already familiar with. And Zowe is open source, so you can build it yourself, but it's also offered as a binary download, and it includes both core Zowe components as well as a set of applications and plugins out of the box that help Zowe adopters get started on their modern mainframe journey much more quickly. And before we move forward: I mentioned modern engineers a few times already, and I just wanted to point them out. There they are at the bottom of the screen. This is not necessarily all inclusive, but there's Ravi, he's a DevOps architect responsible for engineering automation and CI/CD pipelines. There's Michelle, she's a modern developer tasked with developing COBOL applications on the mainframe; I think we just met a few of those. There's Tyler, a modern systems administrator, and there's Isabelle, the modern operator, who, by the way, looks exactly like Michelle; they could be twins. As of today there are four main components of Zowe and a couple of additional components that are currently in incubation, which basically means they're in research and prototyping mode and not necessarily ready for general consumption yet. Okay, so let's start here. If this was a video game, it starts with our four modern heroes at the top of the screen and the services that they need to get their jobs done at the bottom. And what happens in between, in all that white space that you see right now, to enable their modern interaction with the mainframe? That's where you're going to find Zowe. So let's build it piece by piece. Sitting on top of the services is an API mediation layer. Its job is to make it easier to access the services, providing consistent access with consistent security, including single sign-on of course, enabling multi-factor authentication, with load balancing and much more. 
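To make the mediation idea concrete, here is a minimal sketch of a gateway plus discovery registry. This is purely illustrative: the class names, paths, and payload shapes are invented, not Zowe's actual implementation or API.

```python
# Services register with a discovery registry; a gateway routes incoming
# paths to whichever registered service claims them.

class DiscoveryService:
    def __init__(self):
        self._services = {}          # service id -> base path

    def register(self, service_id, base_path):
        self._services[service_id] = base_path

    def catalog(self):
        # the "API catalog" idea: every registered service, findable in one place
        return dict(self._services)


class ApiGateway:
    def __init__(self, discovery):
        self._discovery = discovery
        self._handlers = {}          # service id -> callable

    def attach(self, service_id, handler):
        self._handlers[service_id] = handler

    def route(self, path):
        # single entry point: find the service whose base path matches
        for sid, base in self._discovery.catalog().items():
            if path.startswith(base):
                return self._handlers[sid](path[len(base):])
        raise LookupError(f"no service registered for {path}")


discovery = DiscoveryService()
discovery.register("datasets", "/api/v1/datasets")
gateway = ApiGateway(discovery)
gateway.attach("datasets", lambda rest: f"datasets handler got {rest!r}")
```

The point of the sketch is only the shape: one gateway entry point, one registry that doubles as a catalog, and handlers reached through consistent paths.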
The API mediation layer itself is actually comprised of three parts: an API gateway, a discovery service, and an API catalog. Zowe conformant API services are discovered and appear in the catalog, making it really easy for people to find the services that they need, to learn how to use them, and even to try them out right from the catalog. Next up, the client-side SDK. It's one of the incubation projects; it's currently incubating with the Zowe CLI squad. The SDK is essentially a set of programming libraries that developers can use to easily integrate their applications with Zowe and with Zowe conformant services. The SDK targets specific programming languages, and it's intended to give a better programming experience when working with Zowe and with Zowe services by providing a simple and consistent way of accessing the REST APIs on Z. Users of the SDK don't need to deal with the complexities of coding directly to REST APIs from their programming language, and the SDK lets them pull in the specific libraries they need to get their job done. There are actually three languages currently in incubation. The slide only shows two, Node.js and Python, but there's also a library being developed for Swift, and there are other languages already being discussed and considered for the future. Okay, next comes Zowe CLI, providing those same modern engineers with a command line interface to access mainframe services, to do things like creating and managing data sets, submitting and reviewing jobs, sending commands to the console, things like that. Zowe CLI enables automation and the use of common tools like IDEs, shell commands, bash scripts, and build tools for mainframe development. Zowe CLI is extensible using plugins which create new commands, and some of those are provided with Zowe, like IBM Db2, CICS, and more. 
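To illustrate what a client-side SDK buys you, here is a hypothetical sketch: callers use a plain method instead of hand-building REST requests. The class name, endpoint path, and payload are assumptions for illustration only, not the real Zowe SDK; the transport is injected so the sketch runs without a live mainframe.

```python
# A thin client library that hides the REST details behind method calls.

class ZosFilesClient:
    def __init__(self, transport):
        # transport: callable (method, path) -> dict, standing in for HTTP
        self._transport = transport

    def list_data_sets(self, pattern):
        # the SDK hides the endpoint and response parsing behind one call
        response = self._transport("GET", f"/files/ds?level={pattern}")
        return [item["dsname"] for item in response["items"]]


def fake_transport(method, path):
    # canned response standing in for the real REST service
    return {"items": [{"dsname": "USER.COBOL.SRC"}, {"dsname": "USER.JCL"}]}


client = ZosFilesClient(fake_transport)
```

A caller writes `client.list_data_sets("USER")` and never touches URLs, headers, or JSON parsing, which is the consistency the SDK is aiming for.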
And you'll also find that vendors will create CLI plugins for their commercial products, like CA Endevor, for example, and there are many, many others, and I'll show you some of those later. Next, there's Zowe Explorer, a plugin extension for Visual Studio Code that provides a user interface to access mainframe data sets, USS files, and jobs. Many modern developers and sysadmins already live in VS Code, so Zowe Explorer has become extremely popular in the very short time since it was first released. As of today, Zowe Explorer has been installed almost 20,000 times. Okay, next up the app framework, a web user interface that provides a virtual desktop for the mainframe, plus a number of built-in apps allowing access to z/OS functions. Core Zowe includes apps for common capabilities such as 3270 and VT terminals, an editor, and explorers for working with JES or MVS data sets and USS. The other incubator project is Zowe Mobile. That's a mobile visualization layer that lets you interact with Zowe services, those services which are integrated with the API mediation layer, which I mentioned earlier. Zowe Mobile is incubating right now as part of the app framework squad. Zowe Mobile needs more validation and it needs more contribution. So far, developers have contributed a native client SDK for iOS and also for Android, there's been a contribution of a Cordova plugin, and there's been a lot of user validation. We just need more contributors for Zowe Mobile to be successful. We need more developers willing to spend time coding, more validators willing to do some testing. And of course, like any incubation project, we need more feedback and we need more use cases to focus on. So if you're looking for something fun, maybe a fun entry point into the Zowe community, Zowe Mobile may be just that opportunity. You can be involved in something modern, something exciting, collaborative, open, and fun at the same time. Did I mention fun? 
Yeah, I mentioned it a few times, right? It's fun. Okay, several of the Zowe components include built-in extensions, including the app framework, Zowe Explorer, the command line interface, and the client SDK. But of course any of those components can also be extended by anyone, including individuals who need additional functionality, community members, vendors, and so on. So the possibilities for what Zowe has to offer with additional extensions are virtually limitless. And of course, developers and sysadmins and anyone else in need of mainframe services can also talk directly to those services programmatically if they like. And there you have it, the Zowe ecosystem, including all of the different possibilities for Michelle, Tyler, Ravi, and Isabelle to interact with the mainframe in modern ways to get their jobs done. Okay, so next I'd like to talk for a few minutes about Zowe LTS, or long-term support. Mainframe customers, as you probably know, set a pretty high bar for software that they adopt: the way that their software is acquired, the way that it's packaged, the way that it's maintained, their reliance on stability, knowing that critical defects will be corrected, that the features they're using will continue to be supported, and that all their tools will have a certain level of interoperability regardless of upgrades. Those things are all very important to mainframe customers. Traditionally, open source technologies are not as well known for those specific attributes as mainframe technologies, so it adds another set of concerns for mainframers about introducing open source technologies like Zowe into an existing mainframe ecosystem. So let's take a look at how Zowe LTS helps with this. First things first, the packaging and the installation. The Zowe Cupids squad, I always trip on that, was formed to address some of those concerns. What they came up with was a standard SMP/E-managed installation package, just like any other mainframe product. 
Any mainframe product requires SMP/E for management; it's a showstopper if it doesn't have it, and now Zowe is available in an SMP/E-managed installation package. Good start. Next, make it easy to acquire Zowe without having to worry about whether or not the Zowe that you've just acquired is genuinely Zowe, right? Next, make sure that it's easy to drop Zowe and all of its components down very easily, and then make sure that you only need to configure the components of Zowe that you need. This is all good and all available. Enable customers to maintain a single instance of Zowe, so that other solutions that have Zowe as a prerequisite can leverage it as they need to. They don't have to always distribute new copies of Zowe, which makes a nightmare for maintenance, but also be flexible enough to allow an additional instance of Zowe as needed, for any reason. And finally, make it easy to upgrade. Open source solutions like Zowe update frequently, so just apply a PTF and that'll bring you up to the latest service level, or whatever level you really want to come up to, just with a PTF. This is all available with Zowe LTS today. So this slide's a little bit complicated, so don't worry too much about digesting it; it's a great reference though. I'll summarize the highlights of LTS for you from left to right, green to blue to gray. That's the life cycle of a Zowe version. From left to right, the amount of change goes from most active to least active. In green, you'll have the frequent changes to Zowe: lots of innovation, lots of new features, and even breaking changes. But then as you get into the blue, you mature into a long term support version, and new features are still available. They come out with new features every month, but no breaking changes. At that point, any breaking changes go into the next version. And eventually you get to a maintenance-only mode, and all new feature development goes into the next version at that point. 
So Zowe version one is in the active LTS stage now. And the total time each version will be in an LTS phase is at least two years; it could be longer. But once a version reaches LTS, you've got yourself a stable version for the next two years without breaking changes. So you can be confident that upgrades from release to release, say from 1.14 to 1.15, will never break your environment. And that's also very important for mainframe customers. So, just a few more specifics and details. Zowe will consider the need for a new version about once annually, but obviously also anytime a potentially breaking change becomes necessary or is being considered for Zowe. A new version will typically stay in a current state for six to nine months before maturing into LTS; that's to provide time for feedback and for adjustments as needed. Active LTS versions are ready for general consumption, and they will continue to have new releases anytime a new feature is introduced. And the maintenance phase will consist of additional modifications for bug fixes only. Okay, some more details. The LTS phases are active and maintenance; I've already said that. During LTS, all critical defects will be addressed. A conformant extension will continue to work without any modification. So if you've built an extension to Zowe and you have a conformance badge, which I'll get to later, you don't have to change it; it's going to continue to work. Again, the total time that a version is in the LTS phase is at least two years. And if you're planning to use Zowe in some business-critical or production manner, you just want to make sure you're using an LTS version. Okay, so if you want to know more about Zowe LTS, you can find out as much as you need on zowe.org. Just go to zowe.org, click Download, then scroll down till you see the LTS diagram, and then click Learn More. Okay, just one more topic, then you'll be rid of me. 
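The lifecycle just described (roughly six to nine months "current" with breaking changes allowed, then at least two years of LTS split into active and maintenance) can be sketched as a small timeline helper. The month thresholds below are assumptions drawn from the talk, not an official Zowe policy table.

```python
# Classify where a version sits in its life cycle, given months since release.
# Defaults: 9 months current, then 15 months active LTS + 9 months maintenance,
# which satisfies the "at least two years of LTS" guarantee from the talk.

def lifecycle_phase(months_since_release, months_current=9, months_active_lts=15,
                    months_maintenance=9):
    if months_since_release < months_current:
        return "current"            # new features, breaking changes possible
    if months_since_release < months_current + months_active_lts:
        return "active LTS"         # new features monthly, no breaking changes
    if months_since_release < months_current + months_active_lts + months_maintenance:
        return "maintenance LTS"    # bug fixes only
    return "end of life"
```

With these defaults, a release is stable (no breaking changes) from month 9 through month 33, i.e. the two-plus years the talk promises.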
I mentioned conformant apps, or conformance, or conformant extensions a few times during the session, so I think it's a good idea we take a closer look at exactly what I mean by that. Okay, I've explained how Zowe can be extended, how some of those extensions are available right out of the box, and how anyone can create more extensions for Zowe. Well, the Zowe conformance program is there to guarantee a certain level of experience when you're acquiring, installing, or using Zowe extensions. You get that expected level of experience of a Zowe extension because the program provides a set of minimum guidance for extenders to ensure a consistent experience. When an extension meets the minimum guidance, the extender can apply for a conformance badge to let everybody know that their extension fits nicely into the Zowe ecosystem. There are already conformance guidelines for CLI plugin extensions, for web desktop applications, and for APIs; more conformance programs are expected to be added in the future as needed. The program today focuses on a number of different things, like common functionality that you might expect from a Zowe extension, interoperability between components, between extensions, and with Zowe, integration quality, and of course user experience. Obviously, there are benefits of the program, not only for the extenders who provide the conformant extensions, but also for those who consume or use the extensions. The conformance program was updated back in March when LTS was announced, to stay aligned with the new features in the LTS version. The previous conformance program from 2019 was discontinued at that time. The new program is the v1 conformance program, corresponding to Zowe LTS v1. And on the right side of this screen here you can see a sample of the checklist that extenders complete and submit when applying for conformance badges for their extension. 
And as a consumer, you can easily see all of the conformant extensions on a single page, kind of like going to an app store, and I'm going to show you how to get there in just a minute. Okay, any extension can apply, including new applications as well as existing ones; even those that were 2019 conformant were able to apply for v1 conformance, and those were fairly easy. All you do is review the terms, complete the evaluation, and submit your form. The application is then reviewed by the Open Mainframe Project, and once approved, you'll be identified as a conformant extension, you'll be notified, and your badge will be displayed with all of the other conformance badges in the place I'm going to show you in just a minute. So Zowe has 43 conformant extensions at this time, as of this morning, and that number just keeps growing. So here's how you get to the list of conformant extensions, if you want to see all of these badges in one place: just go to zowe.org, click Zowe Conformance Program, then click Learn More, and you'll be able to see all of the current conformance badges for all the extensions that have submitted and been deemed conformant. Okay, conformance badges are version specific. So the current badges say Zowe v1 on them, just like the one you see on the screen now, and eventually there will be a Zowe v2 conformance program aligned with the next LTS version. So if you're thinking about building a CLI plugin or a service API or a web UI application, just be sure to consider the Zowe conformance program. And here are just a few quick tips for ensuring that you earn the badge as quickly as possible. Most important on this slide, probably: be sure to complete all of the required fields in the form. And if you need any help, please just reach out through the OMP Slack using the Zowe onboarding channel; somebody from the onboarding squad will definitely be there to help you. 
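The "complete all of the required fields" tip amounts to a simple completeness check before you submit. Here is a toy validator of that idea; the field names are made up for illustration, and the real checklist lives on zowe.org.

```python
# Report which required checklist fields are missing or left blank
# in a submitted conformance form.

REQUIRED_FIELDS = {"extension_name", "extension_type", "vendor", "zowe_version"}

def missing_fields(form, required=REQUIRED_FIELDS):
    # blank values count as missing, the same as absent keys
    return sorted(f for f in required if not form.get(f))

form = {"extension_name": "My CLI Plugin", "extension_type": "cli", "vendor": ""}
```

Running `missing_fields(form)` on the sample above flags `vendor` (blank) and `zowe_version` (absent), the kind of gap that would otherwise delay a badge.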
Maybe even me. And with that, I'd just like to end by saying thank you for spending your time with me today. Enjoy the rest of the sessions and have a great day. Awesome. Thank you. Thank you so much, Michael. Great overview of what's going on with Zowe, how to get involved, and some of the downstream impacts we're already seeing there. We definitely have the Q&A still open here. I'm not seeing a ton of questions, which means we are probably addressing everyone's questions in these great presentations, so that is fantastic to hear. So we want to shift forward a little bit. Zowe was a project launched a little over two years ago now, and since then we have just seen a ton of new interest in bringing open source efforts to the Open Mainframe Project. We had a few projects announced this year, and so we want to talk to a few of them here and have some of them present what's going on. The first is GenevaERS, which we announced at Open Mainframe Summit last month, and we have Kip Twitchell who will be presenting an update on that project, and we'll switch over to him right now. Thank you, John. I actually did a video presentation, so we'll watch that, because I'm in the midst of some activities here at home, and I'll be here at the end to answer questions if anyone has one. Hi, I'm Kip Twitchell. I'm the chair of the technical committee for the GenevaERS project, and I want to talk to you a little bit about what GenevaERS is and what the open source project looks like now that we've contributed GenevaERS to the Open Mainframe Project. GenevaERS is a little bit like Apache Spark, but it predates it by almost a decade, on z/OS on the mainframe. It's a scanning engine that resolves many queries in a single pass through a database, and that ability to resolve many queries in a single pass makes it incredibly efficient. That efficiency means that companies can hold greater levels of detail in their database repository. 
The ideas behind GenevaERS come from generalized event architecture, meaning an event-based accounting system or event-based reporting system. If you can report from transactional detail, that detail has all of the attributes we capture when the transactions are recorded. That transactional detail is very rich in terms of what you can know and understand, what you can aggregate, what you can select, what you can do with that data. When you're dealing with summaries, you have to drop transactional detail. GenevaERS's ability to scan through large volumes of transactional detail in a parallel processing engine means that you can get greater information out of the data that you've gathered at such painstaking cost in our business systems. And GenevaERS allows you to do more than just analytical processes: because it has piping mechanisms whereby one process can feed another process, companies have built whole applications on top of GenevaERS. GenevaERS doesn't manage storage for you. In fact, most of our customers over the years, because they want the scale, have just used sequential files on z/OS, but it can also read databases and VSAM files as well. That sequential file access means that Geneva doesn't do anything with the data storage per se; like Spark, it is simply a processing engine. Unlike Spark, GenevaERS doesn't have machine learning and some of the sophisticated statistical engine processes that you would find within Spark, so it's a little bit more simple that way. But it is a reporting, formatting, selection, processing, and sorting engine that allows you to access your data very efficiently and create applications through multiple API points and aggregation processes on the outbound side of GenevaERS. The basis of GenevaERS being open source is that we want to increase its use and connect it to other open source packages. 
Recent thinking about GenevaERS is that we should think about using it as a more efficient z/OS map engine in a map-reduce sort of construct, where you use Apache Spark for the reduce engine to get all of the capabilities and power out of the reduce phase of Spark, with all of its machine learning, AI, and other sorts of capabilities. But in the map phase, GenevaERS can do many things in a single pass through the database, mapping the data as opposed to simply distributing the data across multiple nodes, as is done in most map processes. GenevaERS has very efficient join algorithms as well. The join processing allows it to make sense of your data, so that when you need to take codes and turn them into understandable values for reporting and analytical purposes, or you need to join to create other views of your data, GenevaERS's join processes are really unparalleled in the way they go after and do this. We have processes where, in a matter of minutes, we do millions of joins in GenevaERS. Our code base has a user interface where people today enter process definitions. It's a structured environment. You begin by selecting the transaction, or the base table, upon which you want to report. You select which field you want to sort your output by. You put in general selection criteria to decide what the rough-cut filtering will be for the process. Then you select the column outputs that you want your process to produce. In those columns, you can put in selection criteria to decide what should happen, relative to the rough-cut filtering, at the specific column level. The column output can include calculations, arithmetic calculations; you can put in constants, you can do joins to other things. So that's the basic process. When you run GenevaERS, our process is called the performance engine. 
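The core idea, many queries resolved in one pass, with joins turning codes into readable values, can be sketched in a few lines. This is a toy model only: the record layout, the view definitions, and the reference data are invented, and the real engine generates machine code and runs in parallel rather than interpreting Python.

```python
# Several "views" (each with its own selection logic) are resolved in ONE
# scan of the event detail, instead of one scan per query. A join against
# reference data turns account codes into readable names on the way out.

ACCOUNT_NAMES = {"100": "Cash", "200": "Receivables"}   # join reference data

events = [
    {"account": "100", "amount": 50, "region": "EU"},
    {"account": "200", "amount": 75, "region": "US"},
    {"account": "100", "amount": 25, "region": "EU"},
]

views = {
    "eu_cash": lambda e: e["region"] == "EU" and e["account"] == "100",
    "all_us":  lambda e: e["region"] == "US",
}

def single_pass(events, views):
    results = {name: [] for name in views}
    for e in events:                       # one pass through the detail
        for name, selects in views.items():
            if selects(e):
                # join: resolve the account code to an understandable value
                row = dict(e, account_name=ACCOUNT_NAMES[e["account"]])
                results[name].append(row)
    return results
```

Every additional view here costs one more predicate per record, not another full scan of the file, which is where the efficiency claim comes from.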
The performance engine gets the greatest scale if it can resolve lots of different queries in one pass through the database. That scale means that we actually like to execute periodically. It doesn't mean that it has to execute periodically; you could set it up and make it an end-user query tool where end users run it as often as they want to. But that's less efficient than running it periodically and producing all the outputs from a batched-up set of queries in one pass through the database. Often our process is used in re-engineering batch processes that happen in the middle of the night. Batch processes are typically about creating balances; it's about posting processes. That posting process of turning transactions into balances is fundamentally an analytical process, because a balance is the beginning of any analytical process. We always go and look at the balance to see where we're at today, and a balance is an accumulation of all the transactions that made that balance possible. So GenevaERS's ability to do the posting process, turning transactions into balances consistently and at a very high scale, means that companies are able to re-engineer their nightly flows. Instead of producing large, highly aggregated values at the end of the night in the typical general ledger process for the enterprise view, companies are able to scan detailed transactions and produce many more varied outputs against much more granular data for much more interesting enterprise views, within the same hardware environment that they're using today. Because often, when it comes to the mainframe, the lowest point of utilization for z/OS is those early morning hours just before the opening of the day. We aggregate data there to run the GL processes. So when GenevaERS uses those unused MIPS in the early morning, you're basically using free hardware, and Geneva is using it very effectively to produce analytical outputs. 
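The posting process described here, folding transaction detail into balances, reduces to a simple accumulation per account. A minimal sketch, with an invented record layout, just to pin down the step the talk calls "the beginning of any analytical process":

```python
# Fold transaction detail into per-account balances in one pass,
# optionally on top of an opening balance set.

def post_balances(transactions, opening=None):
    balances = dict(opening or {})
    for t in transactions:                 # each transaction updates its account
        balances[t["account"]] = balances.get(t["account"], 0) + t["amount"]
    return balances

txns = [
    {"account": "100", "amount": 50},
    {"account": "100", "amount": -20},
    {"account": "200", "amount": 75},
]
```

The point of keeping the detail is that the same pass which produces these balances can also produce any number of other, more granular views, as in the single-pass idea above, rather than only the aggregated GL totals.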
Our GenevaERS project is open to people who want to be involved in the exciting realm of analytics and renewing the mainframe. The engine for GenevaERS is a parallel processing engine that generates machine code on the fly and then executes that machine code in parallel. As for the scale of GenevaERS, I don't know of any other tool that does the things it does at the speed that it does them. So we're interested in having people who want to help us transform the user interface. We'd like to develop a new programming language to open up the power of GenevaERS, perhaps using Groovy as a language, with domain-specific constructs in Groovy, to open GenevaERS up to more capabilities and to integrate with Spark. We're looking for all types of people to join the project, and we'd love to have you be a part of it. For anyone interested in doing a proof of concept on GenevaERS, we'd love to help you understand more about what the power is and how you could use it. You can go to GenevaERS.org for more information. We have a YouTube channel as well, GenevaTV, that you can link to from GenevaERS.org. We've got training videos. GitHub has our repositories. We'd love to have you be part of our process and part of the GenevaERS story as we continue to transform legacy systems and improve analytics and our financial systems, from those that were basically automated decades ago. It's time for us to increase the power that we get out of the data we capture. We're glad to be part of the Open Mainframe Project, and glad to be part of this mini summit here for Europe. Hope that you're enjoying your time. Have a great day. Thanks for your time. Awesome. Thank you so much, Kip, for that great project coming together there and some interesting collaboration across a lot of different folks. 
Next I'm going to turn over to the software discovery tool, which is a new project that came in here, and we have Elizabeth Joseph, who's the project lead, to talk a little bit more about that. Hi, John. And Kip, that was great. You were outside; I'm bringing it back inside. And I've got a deck here, so let me share my screen and bring that up. All right, so, as we said, we're here to talk about the software discovery tool, another new project from the Open Mainframe Project. I come from doing open source software for about 15 years; I've been working on Linux that whole time. One of the things I discovered when I started working in the mainframe space was that the landscape was a little bit confusing. The Open Mainframe Project has helped that considerably, because there are now all kinds of pages that link to open source software and other things. But one of the gaps that I saw was that it was hard to search for exactly what you were looking for. So that's what the software discovery tool aims to solve. It's essentially a website that you can search for software on. It's got a JSON backend at the moment where you can add your software sources. Right now we have SUSE, Red Hat, and Ubuntu as the backend sources, so you can search just for those. And then the goal is that you can easily add your own JSON-based sources, and then we'll be developing some new ones in the project. So it's actually a fairly simple project. It's written in Python and Flask, so you can host it yourself, or we're going to get our own version up and running with the Open Mainframe Project, hopefully in the coming months. So, what it looks like. It's forked from a project that was started at IBM, the Package Distro Search, and the goal there was just to search the three Linux distributions that are official on mainframe. 
And all it had was the package name, the software name, and the version that was supported in each Linux distribution. One of the things we want to do is expand that, so it won't just have the package name and the version. We want to add things like descriptions of the software where applicable, and who's supporting it, because we have a bunch of organizations involved in supporting open source software, and add in just whatever details anyone is interested in adding. And we want to expand it beyond Linux, because obviously there is, as we just heard from the talks before me, a whole lot of software out there for z/OS and the rest of the mainframe world. So we want to do a package search that covers not only Linux but everything else as well. So, that's where we are now: we're in the process of importing the code from the PDS project. We just upgraded to Python 3, which is really exciting, but it was one of our blockers. One of the things we want to do right away is add z/OS support in the search UI, and we want to improve the UI design, because it is very simple at the moment. And then a lot of our work is going to be putting together these JSON backend lists of software and the metadata to go along with them. One of the other things we want to do is just get more people involved in creating this metadata. You don't have to write a JSON file, but if you say, hey, my open source project is not included in this, we want more people to come forward and make sure that we're offering a comprehensive view. The last thing we want is for this to become just another place where you search for your open source; we want this to be the comprehensive spot where you go. So, again, it's pretty simple, but we definitely saw a need here in the community for this. If you're interested in this project, there are several ways you can join us: it's a project now on the Open Mainframe Project website. 
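The search the tool describes, JSON-style source lists queried by name across distributions, can be sketched in a few lines. This is an illustrative sketch in the project's own language, not the actual tool's code; the package data and field names are invented.

```python
# Query JSON-style source lists (one per distribution) by substring,
# returning the package name, version, and which source carries it.

sources = {
    "SUSE":   [{"name": "git", "version": "2.26"},
               {"name": "python3", "version": "3.8"}],
    "Ubuntu": [{"name": "git", "version": "2.25"}],
}

def search(term, sources):
    hits = []
    for distro, packages in sources.items():
        for pkg in packages:
            if term.lower() in pkg["name"].lower():
                hits.append({"source": distro, **pkg})
    # stable ordering so the same query always lists results the same way
    return sorted(hits, key=lambda h: (h["name"], h["source"]))
```

Adding a new source (say, a z/OS software list) is just another key in `sources`, which mirrors the project's goal of easily pluggable JSON backends.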
So you can check out what we're about, join our mailing list, join us on Slack, and the code will soon be up on GitHub once we're finished scanning and importing it. Then we can really get to the fun work of adding these new sources, adding z/OS support, and all of that. And that was all I had. Awesome. Thank you, Elizabeth. This project is making some great progress. I saw they just landed some of the Python 3 support this past week as well, so it's really exciting to see some early progress happening there. So let's move ahead, and again, as you have questions, please use the Q&A to ask them. Up next is our Mainframe Open Education project, where a number of individuals from Broadcom, IBM, and other organizations came together. We have Lauren Valenti, who is a member of that project and will be presenting on it. The presentation is prerecorded, so if you have any questions you can ask them during the session, and I'm more than happy to answer during the session as well. Hi, my name is Lauren Valenti, head of education and customer engagement within the mainframe software division at Broadcom. I'll talk to you about the Mainframe Open Education project. For decades, the technology industry has had a need to replenish the aging workforce with talented and skilled professionals. Take a manager, for example, who has new hires to train and upskill. Think about it: where do they get their training? Yes, some have built their own internal new-hire programs, but there might be areas they're missing that are needed. Maybe there are learning paths, but some of them are not fully baked. Then there are some who leverage partners like ourselves at Broadcom and IBM for our education materials. And then some go to other sources.
There is not a collection of educational materials out there today that includes the unique expertise of the mainframe community. There's also a lack of a community support platform where these experts can share their knowledge and their own education materials about the mainframe. What if there was one place these managers in the enterprise could go to get what they need to train their new hires? What if universities had a place where they could send their students to learn more about the mainframe? The Open Mainframe Project has helped fill the skills gap with its mentorship program and the recently launched COBOL project, and it will now take it one step further with the new education project. The Mainframe Open Education project will offer a convenient, easy-to-use platform through which experts can share their up-to-date education materials, and it will also foster collaboration with the broader mainframe community. And the result? Let's talk about the results. It will provide a clear learning path toward a rewarding career. It will close the technology gaps by offering comprehensive materials at no cost. It will also open opportunities for community support and engagement, where experts and seasoned professionals can share their knowledge and even their best practices. So where are we with this project? The project has recently started, and we now have a core team that not only represents Broadcom and IBM but also includes representation from industry as well as academia. Since we can't boil the ocean with the mainframe, because there's so much to learn and so much information about it out there today, we need to start somewhere, and we thought it was best to take a phased approach. As you can see, we will begin building out our material for phase one: what is a mainframe, what value does it bring, and who uses it?
So we want to be able to explain what specifically it is first, and educate our folks, whether students or people changing from another career. We want them to know the value it brings and how critical it is to enterprises today. Then the core team will put out a framework and determine what materials should and could be added, and we'll follow the same approach for each of these phases. Now, I'll be honest, we can't do this on our own. We need the community's help. We need your help. There is a wealth of passionate people out there, and I know for a fact there are subject matter experts who would want to contribute to something like this, especially to a platform like this, and have their materials help other people understand the criticality of the mainframe. So for those who are interested in learning about the project and want to contribute, we host a monthly project meeting that you can attend, where you can learn about the progress and how you can contribute. Help us make the learning experience with the mainframe easy, intuitive, and available to all. Thank you. So thank you for that. If there are any questions, you're more than welcome to put them in chat, and I'd be more than happy to answer since we're on the line here as well. Awesome, thank you so much, Lauren. Great presentation. You know, it's great to see all these companies, a lot of folks in the mainframe space who have been working on education in their own silos, coming together as a community effort. I think this is going to add a lot of value for everybody. Exactly. Thank you. So moving forward here, one of the big focuses that we have, not only at the Open Mainframe Project but even if we look at the Linux Foundation.
You saw earlier this week that we made an announcement of the Software Developer Diversity and Inclusion project. We are huge proponents of diversity in our communities. We see that when you have a diverse community, you have better outcomes: you get more opinions and more views at the table, and more downstream value happens. And I think, given where our society is at the present time, it's more important than ever to be focusing on this topic. So we're thrilled that we have a great panel here, hosted by Rick Parrot, and let's turn it over to him. Let's get into today's panel on diversity: what does it mean to be a woman in tech? My name is Rick Parrot. I'm on the Open Mainframe Project marketing committee, I'm head of content and analyst relations for Broadcom's mainframe software division, and I'm pleased to be your host. You know, the whole open source concept is about transparency and inclusion. The more contributors, the more committers, the more insights. And it's that diversity of insights which leads to better design choices and better code. That's why you're here: because you believe that open source results in better releases and more innovation. And that's why the open source community and ecosystem has established itself so well in such a short time frame. Now, like any ecosystem, it's important that we adapt, and the best way for any ecosystem to adapt is through vibrancy, and the only way to maintain a vibrant ecosystem is through diversity. Our open communities are working on some of the world's most difficult and important software, and it's very important to include the voices of a diverse range of people. So today, we're going to hear from three amazing women on how they got their start in technology, what their current roles are, and how they have built diverse teams over time.
And through their voices, we'll learn that there really isn't one journey into the tech domain, and that as you progress through your technology career, there are many ways to build and maintain these vibrant communities. Joining us today is Silva, a product owner at Broadcom responsible for solutions supporting the modern developer experience for mainframe. Currently, she leads a team developing the CA Endevor Source Code Management Enterprise Git Bridge, which enables a whole new generation of developers to interact with the mainframe through Git. Previously, Silva worked in engineering as a scrum master for mainframe DevOps teams and as a consultant for CA Clarity, the project portfolio management product. She joins us today from Prague. Next, Rashmi Agarwal, who is director of software engineering at Rocket Software. Rashmi has been managing large software development teams for over 20 years. She has built teams from the ground up and managed them across the globe. She is absolutely passionate about technology and focused on bringing teams and technologies together to help enhance the customer experience. She joins us today from Pune. And finally, Jen Francis, developer advocate at IBM. She works with customers and vendors on leading-edge technologies and teaches developers how they can be used. She's also extremely passionate about tech, about teaching and mentoring others, and about giving back to the community. As a member of the American Indian Science and Engineering Society, she is of Cherokee heritage, loves networking with others from similar backgrounds, and volunteers to keep Cherokee culture and heritage alive. She joins us today from Romsey in the UK. So, let's get started with the panel. Silva, let's start with you. How did you get your start in technology, and what led you to be interested in open source? Okay, so thanks for the question, and hello, everybody.
My path to how I became what I am today, a product owner at Broadcom, was quite unusual, I think, maybe even a funny path. I did not really study IT, I did not study engineering, and I was not even deeply interested in technology when I was younger, other than maybe being the first in the family who jumped when there was a new TV or a new camera: I studied all the manuals, I studied the thing, I wanted to know how it worked, and I helped fix things. But that was about it. However, on this path, which was not really the usual one, I learned quite a few things, and it finally got me where I am today, and I would like to share that path with you. So, first, I studied English and mathematics, and people often asked me, why did you study English and mathematics? They're so different; what led you to do that? And when I thought about it, I found out that these things are not that different. In fact, they have one thing in common, and that is a logical structure. So that's how I learned that my brain liked doing things which were logical, which I could imagine, and which were somehow stable. Mathematics is exactly that, and so is grammar, right? These studies led me to teaching English at the beginning, and I loved that a lot. However, I still felt like I wanted to do something more, I wanted to learn more. And that's how I started my second course of study at university, which was business. That led me to France, where I did half of the study program, and it led me to my first internship, and then to my first job. This first job still was not really technical, but it was already at an IT company, so a little step forward on my journey, which was not intentional. I was never really intentionally going towards IT. But this first job at an IT company was my first peek into how an IT company works and how software gets developed.
I could see what roles there are in an IT company, from developers to operations to support; all that was quite new for me. And living abroad, I learned a lot of things on a personal level. As you may know if you've lived abroad yourselves, usually your colleagues become your friends, your family during that time. And those friends of mine encouraged me to look into coding, because they knew me, they knew how my brain operated and what made me passionate. Thanks to them, I started looking into online courses like Codecademy. I even participated in a number of events, for example Rails Girls in Paris, which was a two-day crash course for women who were interested in coding but had little or no background. So, exactly me. So I went there. That was probably my first contact with a development role where I actually played a part, and it was also my first contact with open source. Not much, but it was something that already happened. Still, all this experience at the company, and these first steps in coding, did not lead me directly into IT yet. At some point, after four or five years in France, I decided I wanted to return home to the Czech Republic to be closer to my family. One of the triggers to come back at that given point was that I found there was an intensive training in language interpreting happening in Prague, and since at the time I was fairly fluent in English and French, I thought that was exactly what I wanted to do. And I can tell you it was probably one of the most intense trainings I've ever done in my life; your brain literally starts boiling and doesn't stop until the end of the year you're studying. So if you want brain exercise, I recommend simultaneous interpreting. You listen, you speak, you try to translate in your head, and you also try to basically guess what the person is going to say.
So it's very interesting. After this training, I internally felt that was the moment to decide what to do with my life: what would I choose from all the things I had tried and seen? And that was when I consciously decided to go into IT, because I thought it was exactly the right combination of challenge and stability that I was looking for. So what got me this first job in IT? It was my knowledge of French and English (so something I had learned until then was actually useful), and also my promise to the hiring manager that I was really ready to learn everything they would ask me to learn. And I did. I spent the first year really learning a lot, getting a lot of help from my colleagues as well, which somehow confirms that IT and coding is a very community-minded environment. There are people who want to help you and who want to share their knowledge. Thanks to them, after probably a year, I became quite independent in things I never thought I would be able to do, things I maybe didn't even know existed, right? Like coming to a customer, trying to understand what they need, installing the software, configuring it, scripting it (a word I didn't really know the meaning of before), querying the database, testing, fixing bugs, all these things. So this all was quite a lot of hard work. On the other hand, it led me to a state of flow. I felt like, wow, this feels so good: I can do things, I understand them, and things that were so complex yesterday, today I can actually do. So that was my experience. And then this leads me to my second encounter with open source, which happened, paradoxically, in the world of the mainframe. After my first job, where I was a services consultant in the distributed world, I moved to the mainframe division in CA, now Broadcom.
At the time you would probably think that was the least likely place to see open source, yet it became a thing, and the first open source mainframe-related products popped up. I started thinking about how open source can help the mainframe and what it is really about. Obviously, open source means fewer boundaries and all these things that we all know. What I thought as well is that it enables people to contribute to a product when they feel like it, because they are motivated by something they see is missing, for example. And the one regulating their efforts is not necessarily their manager but the community. It's not your boss telling you to do something; it's the community working together on something. And I hope, and believe, that the mainframe can slowly transform things toward open source. The mainframe is still at the beginning of this journey, obviously, but I am personally really happy that I can witness this period, and I'm really curious how things will evolve for the mainframe and open source. Maybe just one last thing to say about diversity. I think that diversity can lead to things that are quite unexpected in a team. For example, I myself, at the beginning, and even today sometimes, felt anxious about asking a stupid question, a question that everybody around the table would think, oh my God, how come she doesn't know that? But often I learned that asking that question can lead to deeper understanding: the entire team would start discussing, and they would learn things only because there was one person who dared to ask, who dared to be different and show who they are, rather than trying to hide and pretend that they are like the others and know everything like the others do. So this is how I got where I am now. And although it was maybe not a usual path, and it was not always super easy, I wouldn't change anything.
So that's all I wanted to share, and I want to give space to others as well. Well, that was a great story. I don't think there's any one way or one path to anything in life. And what I loved about what you said was how you discovered fewer boundaries. From my perspective, your own career, as it's progressed, has no boundaries, right? You went from English and math, to being an interpreter, to coding, distributed, and now mainframe and open source. Same thing: no boundaries. So that's a great story. Jen, over to you. What open source communities are you involved with today, as you work with a lot of partners and customers? What do you like about these communities, and how might you change them if you could? Wow. So there's already been a lot of change in open source. I think open source has been throughout my whole career. When I was still at university and I took an internship, it was my first exposure to open source. I had a roommate while I was interning, and he was contributing to a lot of different open source packages, and my internship was actually on building a Linux distribution: taking the open source Linux kernel and making a proprietary version for retailers. That was really my first foray into working with open source technology. At the time, I was incredibly intimidated to post questions on forums or try to contribute, because people weren't that supportive. They would make comments like, oh, you must be new, or things like that. And yeah, I was, but what's wrong with that? What I see now, as I work with things like some of the Hyperledger projects, like Hyperledger Fabric, is that everybody's absolutely supportive. You can ask a question, and you're not going to be ridiculed because maybe it is a basic question and maybe you are new.
People are very good at saying, oh, hey, that's a great question, instead of just answering it privately in a chat. A lot of the open source communities will have chats where you can talk with the people contributing, maybe some of the core developers, to get help. And they'll say, hey, that's a great question, why don't we post that, like on OpenStack, where that question is going to be searchable, because other people are going to have it too. So it's no longer this attitude of, oh, you're not the expert, you can't contribute, you don't know it. It's: let's have everybody, we want the input, we want you to learn it, we want you to be contributing. And that's actually been really important, because it's been easier to engage customers, it's been easier for customers to adopt the technologies, and really it's what's driving the whole open source community. It doesn't matter what our gender is, what our ethnicity is, or where we are in the world. It gives us all equal footing. Most of the time I'm behind a screen, contributing under a username, and nobody knows who I am. A lot of people I've met say, oh, you're this user ID! I don't know you, but I know that user ID; I'm used to seeing it on Git issues or on the forums. So it's actually made it easy to work with things like Node, or the different Hyperledger projects, or the Open Mainframe Project and the projects that are starting there. It's just been a really easy way to have diversity, to have everybody be represented and feel included. Excellent. You know, when you talked about your initial early reactions, and someone saying you're new: yes, we've all faced that. I think people tend to forget that you may be somewhat of an expert in one area, but you're a newbie in another area.
So, you know, I think we all need to have this degree of being humble and somewhat tolerant, because everyone was a newbie at some point, in some place in their lives. Thanks, Jen, for sharing that story. I appreciate it. So, Rashmi, you've built a lot of teams and managed them globally for a long time. How do you build diversity and inclusion into your teams? Rick, from my past 22 years in technology, I would like to bring out some of my challenges and also provide insights into what I have done to build diverse and inclusive teams. But before that, let me tell you a little about my background. I'm an Indian, and India is a country with diverse cultures; India believes in unity in diversity. I belong to a very conservative family where probably nobody except me thought I would be an engineer like my father. My mother had a different expectation; she probably wanted me to be a homemaker. It is considered a woman's responsibility to be the primary caregiver for the home and the secondary breadwinner. But don't take me wrong: I consider homemaking skills equally important for both men and women. At the same time, I believe if you follow your passion, you can succeed, and that's what I did. So my first advice to women is to listen to your heart and break the barriers. Do whatever makes you happy, and it will eventually make everyone who loves you proud. So I have led a lot of teams, and I will share certain incidents to bring out the challenges, and how I brought those experiences back to my teams to create a diverse and inclusive team. To me, diversity means bringing unique people into a team who can build better products, bring different thought processes, and make better decisions. And this reminds me of one of those incidents, from very recently.
I went for an interview for the role of a director, and I was sitting across from the CEO of the company. He was very nice, and the interview was going really well. Then all of a sudden I was surprised, even shocked, by a question: I was asked how, in a country where things are still largely decided by men, a male-dominated country, and with childcare responsibilities, I would carry out my responsibilities as a leader. Truly speaking, it was a question I was not prepared for. And thank God I'm not part of that organization, because I would like to be part of organizations which value diversity. Learning from this, in all my selection processes, and in fact for any organization, the selection process should call for diverse candidates from all backgrounds, genders, races, and so on. I've also carefully devised the interview process in a very structured format, so that all candidates go through similar questions and all have an equal opportunity to perform. Evaluation is based on performance, not on gender or family status. As you all may know, interviews are challenging anyway, and it can get very uncomfortable to get that kind of question in an interview. Moving on to inclusion and equity: once you've created the diverse team, it is the leader's responsibility to make sure that everyone on the team feels included and has an equal chance to succeed. That means creating a fair environment in terms of roles and responsibilities, pay, progression, and opportunities. Inclusion also means respecting each other's boundaries and working flexibly to accommodate the needs of others. Again, moving back to my own experience: as I was growing as a leader, I realized that not only were my responsibilities at work growing, but so were my responsibilities at home.
I had small children and parents to take care of, and I distinctly remember a day when I signed off at 12 o'clock midnight after giving a status update to the US, and woke up again at 6am because I had to start another status call with the US. I was exhausted and felt that it was never-ending. It is an organization's responsibility to provide flexibility and support, without the fear of being judged, at these times. Another example, this one about equity: I remember one of my peers came to me to talk about a team member who he said wasn't performing well, and I asked why he thought she wasn't performing well. He told me that she goes back home at 5pm, and that she had also gone on a longer maternity leave. I had to explain that performance should be based on deliveries and timelines, not on how much time one spends in the office. Of course, I support core hours, and I also believe that diversity does not mean entitlements, but we all need a fair chance to succeed. So what I do in my teams is try to create an open and fair environment and follow an open-door policy. I coach my leaders to be more inclusive and supportive, so that everybody has a fair chance to succeed purely based on performance. Another challenge I remember, and it's very human for all of us: being a woman, and being in India, it is very difficult for me to network over a glass of wine. When I was growing as a new leader, one day I entered a room full of leaders where we had to do performance discussions, and they were all men. I found myself all alone. Being the only woman can be challenging; in fact, I realized that a lot of decisions were already made outside of the room. So, to create a fair environment, I now encourage my teams to have office gatherings or parties at hours when all team members can participate equally. And I proactively seek feedback from diverse team members, not because I have to do it.
I know that many of them feel shy at times; they may feel, what if I speak out and it doesn't turn out to be a good question? So I proactively seek feedback from diverse team members, and it has always helped me make better decisions. I have also created one-on-one channels with various leaders individually to build my own connection and trust, which I certainly cannot do over a glass of wine. So it's not easy to be different, and moreover, it can be uncomfortable at times. But we all need to feel confident, bring ourselves out of our comfort zone, and create a completely new comfort zone by stretching our limits. And we have to remember that when we are offered a seat on a rocket, we don't ask which seat. We just get on. Thanks, Rick. Excellent. Rashmi, thank you for those words of wisdom. And yes, I believe in this balance: it's sometimes hard for people to open themselves up and feel confident, and it's really a two-way street. It's the individual having the courage, and also feeling that they have a support system, that the organization is willing to be open and fair. So it's something that both sides need to keep working on. So look, what I want to do now, as I always do with these panels, is give everyone an opportunity for a 60-second, no more than 60 seconds, parting shot, a final thought. And what I'd like you to think about is: what words of wisdom would you give people today, women in particular, enjoying and participating in the open source community? Or for those who are actually managing people: what do they need to do and be cognizant of? So let's start with you, Silva. What's your 60-second shot? What I would like to say is: if you are interested in working in IT, no matter your background, just start. You can make it. It's not impossible, and it's not even exceptional.
However, what you need to make sure of at the beginning is that you know why you want to do it, because you need to have a driver. It's going to be hard at some point, and you need something that will keep you motivated when things get tough and you get tired. I personally have been driven by learning most of that time, and in that case the hard work can also be fun. And just one more thing that all of us mentioned today: don't be afraid to bring diversity. It can be challenging sometimes. You may sometimes want to hide and not show that you are different, because you think maybe that's worse, but just show it. Ask the questions you want to ask. Raise whatever you want to raise, because that's how you will enrich the community. Okay. Thanks, Silva. Rashmi, why don't we have you go next? Sure, Rick. I would say that for anything you want to do, be it being in technology or participating in open source: if you're passionate about it, go for it and never give up. I've always seen that it's in those last moments, when you're just about to give up, that I've gotten what I wanted, because I didn't give up for those 30 or 60 days longer. So learn; I think that's another trick. We have to continuously learn, learn to adapt, learn to mix with people, and see how you can work your way through. Always think of the longer term and the broader picture rather than what is happening to you at a particular moment. And don't give up when you have a window of extreme challenges, because when everything seems to be going against you, remember: the plane takes off against the wind, not with it. So always listen to your heart, and always, always go and do what you believe in. Awesome. I like that a lot. Jen? All right, so if you're here, you probably already have an interest or are already active in open source technology. If you happen to have just stumbled across us, I encourage you to take a look.
I think open source is the future of technology. We need the diverse input, we need multiple people collaborating together, and we need the support that comes from that, but it's often quite intimidating to get started. So have a look: try out the technologies, look at the forums as you're Googling for help. Whatever question you have, there are going to be thousands of others who have had the same issue and the same question. So don't be afraid to ask. Post that question. Open that issue. It's a small thing, but it starts to get you contributing and participating in projects. And before you know it, you may be an active and main contributor to a project. It's just small steps. It takes a couple of risks those first few times you publicly post, ask questions, or open an issue. But then you're on to something, and that's the first step. And once you do, it's amazing how that builds a community. Silva, you talked about this early on: I did move, from the US to the UK, and because I've been active in open source projects, that actually helped me immediately build a community here in the UK through the people I've been able to collaborate with. So reach out. You never know where it'll take you, how it'll help you grow into the future, and how you'll change the world. Excellent. Thanks, Jen. First of all, I want to thank each one of you for taking us through your journey. And speaking again of the world, we live in interesting times now, and diversity is very much part of our zeitgeist: political, social, gender, economic, cultural, whatever it happens to be. We need to embrace it, we need to shape it, and most importantly we need to make it part of everything we do or care about. And frankly, it's time to step up and include the voices of the diverse range of people our software is intended to serve. So with that, thanks for listening, and have a great day.
Hello, everyone. Welcome to the mentee presentations for the Open Mainframe Project mentorship program. This is our fifth year of doing this program, and we've seen some really great work over these last five years. Personally, this has been one of the most enjoyable parts of not only the Open Mainframe Project but of participating in open source overall in my career. We have no shortage here today of great young minds who are excited not only about open source but about the mainframe as well, and who have made some great contributions to projects within that space. So I want to introduce this group of folks today; you'll get to learn a little more about them, their projects, and their thoughts on the mentorship, and I would encourage all of you to connect with them afterwards. These are some great students, they're excited about the space, and they are definitely looking for great opportunities to continue their journey. So with no further ado, let's turn it over to the first mentee. So, I hope you all can hear me. Hello. Yep, we can hear you loud and clear. You're good. Great. So, hi, all. I'm very happy to present on what I worked on with the Zowe team last summer. I'm Shmanta Pereira, now working as a software engineer at Salzburg Research in Austria. I studied information technology in Sri Lanka, and I'm interested in full-stack development, including DevOps practices. I'm also an open source enthusiast — you can find me in a couple of other organizations as well, as a committer in OpenMRS and the Apache Software Foundation, and you can contact me via the Twitter handle and the email on the slide here. So that's a bit about me, and I'd like to move on to the presentation. I'm not going to talk a lot about Zowe here, as we are in the mainframe mini summit, but I would like to highlight the mission of Zowe.
Zowe has a mission focused on providing a simple, open, and familiar tooling platform for z/OS system programmers and developers. I wanted to highlight this since the File Transfer Application development project is also about providing functionality to help z/OS programmers and users. Next, I thought of giving a brief idea of the Zowe big picture. We have the Zowe Application Framework, which provides the virtual desktop, and we have the API Mediation Layer and the Zowe CLI. The File Transfer Application that I worked on during the summer resides inside the Zowe Application Framework as a plugin developed for the Zowe virtual desktop. So now that we have an idea about Zowe, let's have a look at the project. The main requirement of the File Transfer Application is to provide an application that makes it easy for z/OS users to transfer large files and data sets from the mainframe, and to the mainframe from the user's desktop, in an easy, secure, and scalable manner. When I was applying to the project this idea impressed me, because it seemed to be a great addition to Zowe and the mainframe — lots of z/OS users who work on the mainframe deal with large files and data sets in their day-to-day lives. From here on, instead of "File Transfer Application" I'll use FTA, because that's short and sweet. I worked with the ZSS team, who are working on exposing the functionality of z/OS as C-level APIs, in order to fill in the missing functionality around UNIX files and data sets that is required for the FTA. To give a small idea of how the Zowe components interact with each other and where the FTA stands in the ecosystem, let's have a look at the architectural diagram below. We have the Zowe virtual desktop, with the FTA installed there as a plugin, and it interacts directly with the Zowe App Server, which underneath handles the communication between the plugin and the ZSS server.
The ZSS server is the API layer that exposes the functionality of z/OS as C-level APIs — so this is an overview of how the components work with each other in Zowe. Since we don't have much time I'm not going to do a demo here, but with the diagram below I'll walk you through what we have done so far. As you can see, it has a File Explorer, which is a great tool developed by the Zowe team. It has the capability to visualize the USS and MVS file systems on the mainframe, showing the file and folder hierarchy as a tree. Functions like creating files, copying files, moving files, deleting files on the mainframe, and many more are provided with the Explorer. I got the opportunity to work with the team to improve some functionality on the file tree as well. The Zowe file tree — or we can call it the Explorer — is a very important feature for the FTA, because the FTA is all about files and data sets, and the first thing we did was to integrate the File Explorer into the FTA. With the File Explorer in place in the FTA, as you can see, we have lots of metadata related to a file — the ownership of the file, the size of the file, the file's actual path on the mainframe — and this made things very easy for me to continue development. Next, we wanted a solution that lets us download large files in the browser without consuming lots of the end user's machine memory. We tried out different libraries and finally went with StreamSaver.js. It has around 10,000 weekly downloads, so it seemed to be doing a great job for our project; it was very stable and had been around for a couple of years, so we decided to move forward with it. StreamSaver.js takes a different approach: instead of saving data in client-side storage or in memory, it creates a writable stream directly into the file system. I'm not talking about the Chrome sandbox file system or any other web storage — it interacts directly with your file system.
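The StreamSaver.js approach described here is JavaScript, but the constant-memory idea behind it — copy a stream chunk by chunk instead of buffering the whole file — can be sketched in a few lines of Python. This is an illustration of the concept only; the names and chunk size are made up, not the FTA's actual code:

```python
import io

CHUNK_SIZE = 64 * 1024  # 64 KiB per read keeps peak memory constant

def stream_copy(src, dst, chunk_size=CHUNK_SIZE):
    """Copy a readable binary stream into a writable one chunk by chunk,
    so memory use stays at one chunk no matter how large the file is."""
    total = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:  # an empty read means end of stream
            break
        dst.write(chunk)
        total += len(chunk)
    return total

# Simulate a 1 MiB "download" flowing straight into a sink stream.
source = io.BytesIO(b"x" * (1024 * 1024))
sink = io.BytesIO()
copied = stream_copy(source, sink)
```

The point is that peak memory is one chunk (64 KiB here) regardless of file size, which is exactly why a writable stream to disk beats buffering a large download in browser memory.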
Still, when we were testing things in different browsers after integrating StreamSaver.js into the FTA, we found that readable streams are available in all browsers, but writable streams are not, so the download could sometimes fail. To work around this we used web streams polyfills, and that's how we came up with the solution for large file downloads. After single-file download, we thought of adding support for downloading folder content, and in the File Transfer Application we defined a separate UI workflow that informs the user about the folder download and downloads the content as a tar file. This was done with tar since it requires less compute power than creating a zip on the mainframe. Compute power is always a critical factor to think about, since there are a lot of tasks running on the mainframe at any given time, so we went with tar. Then we thought about adding more functions to the FTA that help users and are obvious to have in an application like this. We added support to queue downloads and to maintain the priority of downloads. We also maintain three states — in-progress, canceled, and completed downloads — which gives a clear separation to the user during a download activity. The application handles these state changes in real time as downloads happen, and also gives the user notifications on activity status, which seems a very user-friendly way of handling things. For later use, we also maintain a list of previous canceled and completed downloads in the scope of the particular user; these are displayed in data tables, so the user can easily find previous activities. Moving forward, we thought of giving the user the ability to define the number of history items they want to maintain in the FTA. We have defined a separate UI workflow for that — as you can see, in the FTA we have a separate workflow to maintain the user configs.
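The tar-over-zip reasoning above — a plain tar archive only concatenates files, while zip additionally compresses them — can be illustrated with Python's stdlib `tarfile`. The file names and contents are invented; the real FTA builds the archive on z/OS, not in Python:

```python
import io
import tarfile

def folder_to_tar_bytes(files):
    """Pack a {name: bytes} mapping into an uncompressed tar archive.
    Mode "w" means no compression, so the CPU cost is essentially a copy."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

archive = folder_to_tar_bytes({
    "folder/report.txt": b"hello mainframe",
    "folder/data.bin": b"\x00" * 128,
})

# Reading the archive back lists both members in order.
members = tarfile.open(fileobj=io.BytesIO(archive)).getnames()
```

Because mode `"w"` writes the member bytes verbatim, the original file contents are findable inside the archive as-is — no deflate pass, and therefore very little CPU spent on the host.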
In the meantime, users also have the ability to see the history of searches they have performed in their previous logins and previous interactions with the FTA application. For end users, I have to highlight that all in-progress downloads will be canceled and moved into the canceled section if you close the application during a download. Finally, files on the mainframe will be in either EBCDIC, ASCII, or UTF-8 encoding. When downloading, it's easiest to handle UTF-8 files — they are converted automatically — but for EBCDIC and ASCII we have to define what the source and target encoding types of the files should be in order for the download to work. In the File Transfer Application we have addressed that in the UI in a very easy and user-friendly way. So that's what we have achieved so far with the FTA, and there are some features in progress as well, like data set downloads, which we are currently working on with the team. I hope you enjoyed my presentation. To see the full presentation, with all the work and a demo, you can follow this YouTube link, and all the work related to the project and the PRs is stored under the Open Mainframe Project internship — you can see the repository as well. And that's all from my end on the File Transfer Application. I would like to thank all the Zowe team members who worked with me, especially Sean and Lenny. Thank you all. Awesome, thank you so much. It's great to see that sort of contribution and a great collaboration with the Zowe community — we can definitely tell your contributions are making a positive impact there. Let's move on to Ayush Jain, if you want to take over and present your project work. Yeah, sure. Sorry, I had some trouble setting up. No worries, no worries — whenever you're ready. Is my screen visible? I don't see it... it's visible now.
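Returning for a moment to the encoding conversion described in the file-transfer presentation above: Python's stdlib happens to ship EBCDIC code pages such as cp037 (US/Canada), so the source-to-target re-encoding idea can be sketched in a few lines. This is a toy illustration of what such a conversion does, not the FTA's actual conversion code:

```python
def convert(data: bytes, source: str, target: str) -> bytes:
    """Re-encode raw bytes from one code page to another."""
    return data.decode(source).encode(target)

text = "HELLO ZOS"
ebcdic_bytes = text.encode("cp037")  # cp037 is an EBCDIC code page
ascii_bytes = convert(ebcdic_bytes, "cp037", "ascii")
utf8_bytes = convert(ebcdic_bytes, "cp037", "utf-8")
```

The EBCDIC bytes differ entirely from the ASCII bytes for the same text ('H' is 0xC8 in cp037 but 0x48 in ASCII), which is why a download tool must know both the source and target encodings before the transferred file is readable.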
So this is the presentation for the Feilong storage module. I am an undergraduate computer science student from BS University, Bangalore, India, and I really love open source — I've actually explored a lot of open source systems. What got me curious about the Open Mainframe Project was the mainframe's six-nines reliability: I was very interested in knowing how the mainframe achieves that reliability and efficiency. That's how I got to know about mainframes, and one of my friends told me about the Open Mainframe Project. So I started exploring the projects and found the Feilong Ansible project — I was quite curious about storage systems as well, so this seemed to be quite a fit; I applied to it and luckily got through. Previously I also interned at AboutCode through the prestigious Google Summer of Code program, where I did Debian packaging and got to know Linux systems pretty well, and this time I got a taste of the s390x architecture, on which LinuxONE systems basically run. That's how I got interested, and my mentor for the project was Vincent Theron, who works at Vicom Infinity. So that's basically about us, and now I'll take you to the deck and show you the journey of how we built the system from scratch. Basically, we had to build the system for the storage Ansible module for Feilong. Feilong is a development SDK for managing z/VM in a very simple way; it's a project being made to advance the use of z/VM, or make it easier to use z/VM, and it can do a whole lot of things — it can create guest images and networks, allocate volumes, and all that sort of stuff. Our work was to make it easier to allocate block storage through Feilong — that's where the project's name comes in. And now I'm going to tell you about Ansible.
So we are basically creating an Ansible module, or playbook, to allocate a block of storage to a guest host inside z/VM. That was our work. Ansible is basically a configuration management or deployment tool that makes it easy to work across machines — it can handle, say, 100 machines at a time. If you want to install some package on 100 machines, you put all the addresses of the machines into an inventory file and run one command, and that command will be run on those hundreds of machines via SSH. For Ansible to run, you need Python and SSH, and basically all Linux systems — all the distros — have Python built in, so that's a good fit, and that's why my mentor thought of creating an Ansible module. Now I'll take you through the architecture of the project — the inner details, as we call them. On the leftmost side you can see a Feilong block: that's basically the Feilong SDK, which has an allocate-storage or block-storage function. That function does a REST call to our server — a POST with certain details — and in the response it gets the block storage. What we have done is create a server in Python using Flask, and inside that server we parse the request to get the parameters from the body, and using those we run the Ansible playbooks via Ansible Runner. As for playbooks versus modules: a module basically does the work, and a playbook is written on top of the module. The playbooks we have written have a general structure — the first step in the playbook is to create a volume, then map it to the host — and whatever variables we need are taken out and returned to the Feilong project.
What our playbook does is contact the storage via REST call or SSH. It contacts the storage, allocates the storage, and does all the necessary things, like mapping the storage to the host; whatever variables we need from the storage are pulled out and then passed back to the Feilong project. As I told you, the playbook contacts the storage — in this case we were using an IBM DS8K. You can use any storage: you just need to create a general playbook for that storage and put the playbook in place, and then it works like magic. As I said, there are two ways our server can talk to the storage: SSH and REST API. The IBM DS8K has both enabled, and most storage systems do have REST APIs, so it's pretty easy to contact them. We have enabled both modes in our project, SSH and REST API, so you can use either one to create storage on any of the storages, like Dell EMC, etc. So what parameters do you need to pass in the REST call to create a storage? In our case, Feilong needed to create a storage, so we pass the host ID, size, and pool: the host ID is the ID of the host we are running, the size is the size of the storage, and the pool is the pool from which the storage should be created. You pass those in the form of JSON, and you get a reply in JSON with the SCSI LUN ID, which identifies the storage block attached to the host, and the WWPN, the unique World Wide Port Name of the storage, used to map it back to the host. Those are the inputs and outputs we pass through the server. And how does the server do that? As we have seen, the server has a playbook that will create a volume, then map the volume to the host, and pull out the variables we need. This is all done in our server using the Ansible Runner package.
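The request/response contract just described — host ID, size, and pool in; SCSI LUN ID and WWPN out — can be sketched as a plain function with the playbook run stubbed out. The field names follow the talk, but the exact key spellings and the returned identifiers are invented for illustration; the real service runs the playbook through Ansible Runner instead of a stub:

```python
import json

def run_playbook_stub(params):
    """Stand-in for the Ansible Runner call that would create the volume
    and map it to the host; the identifiers returned here are made up."""
    return {"lun_id": "0x0001", "wwpn": "c05076ffe5633a10"}

def handle_create_volume(request_body: str) -> str:
    """Validate the JSON request and build the JSON reply."""
    params = json.loads(request_body)
    for field in ("host", "size", "pool"):
        if field not in params:
            raise ValueError("missing parameter: " + field)
    result = run_playbook_stub(params)
    return json.dumps({"lun_id": result["lun_id"], "wwpn": result["wwpn"]})

reply = handle_create_volume(
    json.dumps({"host": "LNX0001", "size": "10G", "pool": "POOL0"})
)
```

Validating the three required parameters up front keeps playbook failures from masquerading as missing-input errors, which matters when the caller is another service (Feilong) rather than a person.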
Ansible Runner is basically the package underneath AWX — Ansible Tower is the commercial product that handles this kind of thing behind REST APIs, and Ansible Runner is the open source core component it uses. So using Ansible Runner we run the playbooks; our playbooks execute against the storage and get our work done, and then we parse the result. Ansible gives a lot of output, so we need to parse the output to dig out the variables we need — the WWPN and the SCSI LUN ID — and after parsing, those are sent back to Feilong. For running the playbook we also need to give the storage address, and that's given in the inventory files. So that's basically it. If any one of you has any queries related to the project, you can ping me or my mentor Vincent as well; the GitHub repo for the project is easily accessible from the Open Mainframe Project repository, and here's the link. Yeah, that's it from my side. John, over to you. Awesome, thank you so much. It's great seeing Ansible and the Feilong project come together — I think that's a great value proposition, helping connect z/VM, the long-standing virtual machine technology in the Z world, the mainframe world, back to modern tooling like Ansible. I know there's a ton of interest in that space, and it's great to see a project kicking this off. I definitely look forward to where this goes, and hopefully a lot of this work makes its way upstream into Feilong. So I want to move next to our next mentee. Oh yeah, that's right. Can you hear me? I can — if you want to present your screen. Oh yeah. All right, is my screen visible? It is. Awesome. So, just to start us off, my name is Irish. I was an OMP mentee for four months, working with James Caffrey on the project. Over the course of the next few minutes,
I'll try to summarize what I did, how this particular project fits into the grand scheme of things, and what we plan to do with ADE in the coming future. Just for some reference, like I said, my name is Irish. I'm a recent computer science graduate; you can reach me on LinkedIn at this handle, and I'm also pretty active on Twitter. I've been involved with open source software development for pretty much three years now, ever since my sophomore year. I've previously been a Google Summer of Code student, focused mostly on machine-learning-based technologies, which is kind of why my interest was piqued by ADE — I found this entire concept of using data science for anomaly prediction to be pretty cool. So I'll just start off with what it is. Naturally, the first question we need to address in order to have a clear picture is: what is ADE? ADE stands for Anomaly Detection Engine, and the name is pretty self-explanatory, but just to give you some idea: this is a project supported by the Open Mainframe Project, and it's been an active project for about three to four years now. The idea behind this project is to use statistical learning and unsupervised learning to learn useful trends from huge amounts of log data, and then use this acquired information at inference time to predict whether some kind of log might indicate anomalous behavior in our system. That's what ADE does. At the moment, ADE supports two formats of Linux syslogs, RFC 3164 and RFC 5424. My project was to expand support from just syslogs to more complex middleware logs, such as Spark logs in this case. Spark logs are comparatively tougher to deal with, mostly because they are way denser and way dirtier — they don't have a common pattern — and that makes this problem really difficult.
The next question we should ask ourselves is: why exactly do we need ADE? These are just some news excerpts from leading media sources highlighting why software failures have been a major pain in the industry. These failures are really hard to deal with, mostly because most of them happen at very great scale, and they cause massive delays in terms of time, loss of resources such as money, and most importantly, loss of data. The worst part is that debugging these things is a hard job, and these crashes are bound to happen. As we develop more and more advanced software that runs on thousands of systems and serves millions of clients, we are going to see more and more problems — scalability issues, small bugs that pop up and might cause these systems to crash at some point. As they like to say, the only bug-free software is the software that was never written. So we focus our attention on the second part of the problem: if we can't stop the problem, why not look at solutions to deal with it efficiently? And that's where data science comes in. Data science, as a very broad topic, sits at the junction of computer science and mathematics. The broad idea behind data science is to implement algorithms that learn from prevalent data and apply the learned information to data we've never seen before, in order to predict usable and valuable results. That's also what ADE does: it learns various trends from data we have already seen before — really huge amounts of log text data.
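As a toy illustration of the unsupervised idea just described — learn trends from logs already seen, then score logs seen later — here is a frequency-based rarity score in Python. This is not ADE's actual model, just about the simplest possible instance of the concept; the message IDs and the smoothing choice are invented:

```python
import math
from collections import Counter

def train(message_ids):
    """'Training' here is just counting how often each message ID
    appeared in the historical logs."""
    counts = Counter(message_ids)
    return counts, sum(counts.values())

def anomaly_score(message_id, counts, total):
    """Rarer messages score higher: -log(probability), with unseen
    messages smoothed as if observed once (a simple smoothing choice)."""
    freq = counts.get(message_id, 0) + 1
    return -math.log(freq / (total + 1))

# Invented history: mostly routine messages, one rare one.
history = ["MSG_A"] * 90 + ["MSG_B"] * 9 + ["MSG_C"]
counts, total = train(history)
routine = anomaly_score("MSG_A", counts, total)
rare = anomaly_score("MSG_C", counts, total)
unseen = anomaly_score("MSG_X", counts, total)
```

A message never seen in training scores higher than a rare one, which scores higher than a routine one — the same monotonic "surprise" behavior an anomaly engine needs, even though ADE's real statistics are far richer than a single frequency count.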
Then, once it is up and running, it can actually predict an anomaly score: it comes up with a mathematical model that helps assign a number to every log slice we see, and the higher the anomaly score, the greater the chances of that particular log slice being responsible for anomalous behavior. The objectives of this project were broadly to add support for Spark within ADE. ADE had not really been active for about two years, and we decided a good way to make the project active again would be to introduce Spark support, which increases its usability from just a syslog-based perspective to more middleware, web server, or high-performance-computing-based applications. Although this goal looks simple enough, there are a number of steps involved. Most importantly, at the very low level, we needed to develop a parser that reads these Spark logs, implement some kind of heuristics matching to extract information from the Spark logs, add masking to make sure we do not include sensitive data, and integrate everything with the prevalent system that was developed before this. At a very high level, we integrated all of this with command-line arguments that can easily be toggled to switch between Spark logs and Linux syslogs. Once most of these things were done, we needed to set up the data, train our models, and analyze. The analysis results are then written out as XML templates, which can easily be viewed in a web browser to get an idea of what might be going on under the hood. As an example of an analysis: this particular screenshot is what the analysis result looks like for 10 minutes of analysis.
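The masking step mentioned above can be sketched as a small heuristic pass over a log line. The patterns here are illustrative stand-ins, not ADE's actual masking rules, and the sample log line is invented:

```python
import re

# Heuristic masking: strip values that could be sensitive or overly
# specific (IPs, user IDs, hex addresses) before a line is used for
# training, so the model learns the message shape, not the payload.
MASKS = [
    (re.compile(r"\b\d{1,3}(\.\d{1,3}){3}\b"), "<IP>"),
    (re.compile(r"user=\S+"), "user=<USER>"),
    (re.compile(r"\b0x[0-9a-fA-F]+\b"), "<HEX>"),
]

def mask(line: str) -> str:
    for pattern, replacement in MASKS:
        line = pattern.sub(replacement, line)
    return line

raw = "17/06/09 20:10:40 INFO Executor: user=alice fetched block from 10.10.34.11 at 0x7f3c"
clean = mask(raw)
```

Besides hiding sensitive data, this kind of normalization also collapses thousands of nearly identical lines into one template, which is what makes statistical learning over "dirty" Spark logs tractable.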
As you can see, we have various message IDs, and we have assigned a number of scores to each ID; these scores get combined to give a total interval anomaly score, which is on the top right of this particular image. And mind you, this is just for 10 minutes — this thing essentially runs 6 × 24 = 144 times per day, and this number can change based on how dense we want the Spark logs to be. These four months have been a great learning curve for me personally. Of all the things I learned, it's really hard to summarize them in just four points, but to put it one way: I personally think that designing and experimenting is way more of a job than implementing — we start off at one point and then usually pivot in another direction, and getting the intuition for how to proceed is really tough. I hadn't really thought about this much earlier, but I'd definitely say it has been a very big plus point. Secondly, dealing with huge amounts of unstructured data is obviously a hard job, because one, it is huge, and two, it's unstructured. Third, communication is vital — that's pretty self-explanatory. And fourth, Java is pretty cool. I never thought I'd say this — I've not really been a big fan of Java — but I really enjoyed working with a huge code base written completely in Java in this case. So, just to wrap this up, I'd like to thank a few people for what they've been doing, starting with James Caffrey, who has been my mentor for this project. I'd like to thank him for taking time out of his really busy schedule and agreeing to mentor this project; I found his ideas pretty insightful, and he helped me often when I faced certain dilemmas.
I'd like to thank John Mertic and Robert Dubbock for running this entire program and conducting frequent calls to get us on board with whatever we needed to face next, and the Linux Foundation and the Open Mainframe Project for supporting ADE. I am really looking forward to seeing more people use ADE, and this particular direction of having ADE for Spark logs, in the coming future. Also, I'd like to thank Loghub. Loghub is a large project started by a lab at the Chinese University of Hong Kong; they have a collection of really large data sets that they provide publicly. I'd like to thank them for providing me with a bunch of production-environment-based Spark logs that I was able to use for the testing and analysis part of the project — I've mentioned the reference down below if anyone wants to take a look. I'd also love to have more people look at this project, use it for their use case, and get back to us — either me or James Caffrey — with suggestions or even just general ideas. It's all hosted online, available on the Open Mainframe Project GitHub. Yep, that's about it from my side — over to John. Awesome. It's great to see contributions to this project — it is actually the first project that was in the Open Mainframe Project, a code donation from IBM — and it's great to see it continue to evolve; we're excited by these contributions. So we have four mentees left, I believe — it might actually be five — and we're getting close on time, so we're going to try to move through these as quickly as possible. I think next up is Dan, and again, Dan, just try to keep it under the four minutes so we can have time for everyone. Dan, can you hear us? Yeah, I can hear you. Yep, I can see it — it shows the presentation here, right? Correct. Okay. So, hi everyone, my name is Dan Babachikovic.
I am an Open Mainframe Project mentee and a student at the University of New Hampton. This is my second year in the Open Mainframe Project mentorship, and my mentor is Vladimir Panov, who is a lead at SUSE. I'd like to begin with the reasons why porting KubeCF to Z is a good idea in the first place. So, KubeCF: Kubernetes is the best orchestrator for your workloads, and Cloud Foundry is the best friend of any developer — we can bring these two together and get the simplicity of both; Kubernetes clusters and components are everywhere. We want to use KubeCF because developers can focus more on their apps and the logic behind them, and less on the underlying infrastructure. As for Z, as we know, this is the system that runs on the mainframes, and it's a very good system for crucial applications: very performant, fault tolerant, high security, and so on. So bringing these two together is a no-brainer — it really does make sense. Next: what have we actually done? We have built Docker images for Z — Z-compatible images to run KubeCF. There are two categories: the ones based on BOSH — BOSH is used to deploy software packed in releases — and the images for Eirini, which is the new Kube-native scheduler for Cloud Foundry. Next, we built a Docker image for MySQL from scratch, because we couldn't find an image elsewhere for Z, and we packaged a bunch of packages and OS images built on the openSUSE Build Service. You can find all of these images on Docker Hub. Next: how have we built the Docker images? To build all the Docker images and make KubeCF easy to deploy on Z, we first had to build a system and some tooling. We use Jenkins as a build server and created jobs for all the images. First we use the script that we made last year in the Open Mainframe Project mentorship, which can take any BOSH release and test it,
and then change the packages according to the errors we get. Once the errors are fixed and the package successfully builds on Z, we use Fissile for Z, which we also built last year, to create the Docker image. After that we push it to the Docker Hub repository. Next, on to testing — this is how we make sure all the applications are working as expected in KubeCF on Z. First we deploy KubeCF using Helm and start debugging. We use K9s, which is an awesome tool that helps a lot with debugging everything related to Kubernetes: you can view logs, you can describe a pod, you can do anything, and it has a nice interface. Following that, we fix the errors and build all the images again, with the workflow described earlier with Jenkins. And finally we update the deployment settings — for example the Docker image names, versions, and things like that — because we built the images again and those might change. So that's it for me. I want to thank everybody at the Open Mainframe Project and the Linux Foundation, and especially my mentor, who was very, very helpful and from whom I learned a lot. If you want to learn more about KubeCF or contribute to the project, you can visit the repository on GitHub, and if you want to see my work and the files from this project, you can visit the Open Mainframe Project GitHub repository on the screen. Thank you. Awesome. Thank you, Dan — a great project, and great to see contributions to Cloud Foundry to support the mainframe even more. Moving ahead, we have Matish Goplani. Matish, if you want to jump in — then we have two more presentations after you. Matish, are you able to present? I think you're having some audio problems; we can barely hear you. It's getting better. Hello. There we go. There's a little bit of static. Hello, can you hear me now? I can — there's a little background sound, but it seems okay at the moment. Is it fine? Hello. Yeah, I think this works. Can you hear us? Yeah. Hello.
Awesome, go ahead and do your presentation. All right. Hello, everyone. This is Matish Goplani, and today I'm going to present my work, done as part of the OMP mentorship, on the Zowe desktop application state persistence mechanism. A brief background about myself: I'm currently working as a software engineer at JPMorgan Chase, and I'm also very interested in computer security — I have been certified as an OSCP, OSCE, and OSWE; these are some cool certifications from Offensive Security. I pursued my computer science degree at Umber University. I was also a principal developer and maintainer of an app called Med Student, previously known as the Health Savvy app, and I have co-authored a paper on animate object detection and key ground control. So why did I choose this project? The state persistence mechanism was an interesting project provided by Zowe, which required the developer to be well versed in the internal workings of Zowe, and it also provided an opportunity to contribute to a piece that would be consumed by many Zowe apps to restore their states when a session expires. I thought my experience with web development and Angular, along with a bit of a security background, would really help with the development of a state persistence mechanism for the Zowe desktop, considering the security aspects of the methodology involved. So what was this project all about? As you all know, Zowe is an application framework where a lot of plugins run on top of it. Plugins running within the Application Framework did not have access to a secure state persistence mechanism that would allow them to restore their state on the next session login. So the idea was to develop a state persistence mechanism that allows Zowe applications to save their states to the Zowe desktop storage and persist them to the next login, so that the last saved state gets restored automatically. This is a quick overview of how the system was developed.
We start with the applications. We had two types of applications on the Zowe desktop: the first were normal desktop applications, and the others were single-page applications. These applications send the data they want to save to components such as the window manager and the simple window manager at regular time intervals. This time interval is configurable according to the requirements of the user. Depending on the interval, these apps constantly send data to the window manager and simple window manager, which in turn send the data to desktop storage. The data is saved separately, taking the authentication of the user into account, so we have two different storages: one is app-based storage, which is for desktop applications, and the other is single-app storage, which is for single-page applications. On the next login attempt, the authentication manager retrieves the application data on its own from desktop storage and spawns the applications with that data.

Consider a scenario where you're working on something really important and you forget to save it, and either the browser crashes, or you log off, or the session expires, or something like that. This mechanism helps you get back to the last state, depending on what was saved to the Zowe desktop.

I got done with this project a bit early in the course of my internship, so I went ahead and took on some other projects. A quick overview of what those were: they were basically focused on improvements to the ZLUX editor. The first project was a tab restore mechanism, where we keep track of all the open tabs in the ZLUX editor and restore them on the next launch. This is a bit different from what we did in the state persistence mechanism, because in this case we are using editor storage, not desktop storage. So this is specific to the ZLUX editor and an enhancement to its current workings.
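Conceptually, the save-and-restore flow described above behaves like a keyed store: each application's serialized state is written under a (user, app) key at a configurable interval, and read back at the next login so the app can be respawned with its last state. Here is a minimal Python sketch of that idea; it is not the actual Zowe implementation, and the class and method names are purely illustrative:

```python
import json
import tempfile
from pathlib import Path
from typing import Optional


class DesktopStateStore:
    """Toy model of a per-user, per-app state store (illustrative only)."""

    def __init__(self, root: Path):
        self.root = root

    def _path(self, user: str, app: str) -> Path:
        # Separate storage per authenticated user and per application,
        # mirroring the talk's "saved separately considering authentication".
        return self.root / user / f"{app}.json"

    def save(self, user: str, app: str, state: dict) -> None:
        # Called at a configurable interval while the app is running.
        p = self._path(user, app)
        p.parent.mkdir(parents=True, exist_ok=True)
        p.write_text(json.dumps(state))

    def restore(self, user: str, app: str) -> Optional[dict]:
        # Called by the login flow to respawn apps with their last state.
        p = self._path(user, app)
        return json.loads(p.read_text()) if p.exists() else None


# Example: an editor saves its open document and cursor position,
# then gets the same state back on the next login.
store = DesktopStateStore(Path(tempfile.mkdtemp()))
store.save("alice", "editor", {"openFile": "notes.txt", "cursor": 42})
print(store.restore("alice", "editor"))  # {'openFile': 'notes.txt', 'cursor': 42}
```

The real mechanism splits storage by application type (desktop vs. single-page) rather than by a simple directory layout, but the contract is the same: save returns nothing, restore returns either the last saved state or nothing at all.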
We also added copy and cut operations to the ZLUX file explorer, which is embedded in the ZLUX editor. While this project got completed, it supports only USS, but there are plans in the future to also support MVS data sets. These are some of the links to my work, the links to my PRs. This one is related to the state persistence mechanism, which was my main project; this one is for the tab restore part, and this one is for the copy and cut operations. This is the general work I did. All my work has been documented and saved in this GitHub repository, Zowe desktop application state persistence mechanism. If you go to the documentation folder, you will find a very detailed demo video of the actual working of the system; in notes and research you'll find an architecture diagram which is a bit more detailed; and the link to the entire source code is there in the src folder. If you're interested in this project, please do check it out. That's it from my side. Thank you so much for listening to me.

Awesome. Thank you so much. Great presentation there, and great seeing those contributions into the community. So we have two presentations left. Salis, if you want to pop up next, and then we have Kai here on video. Okay. Awesome, whenever you're ready.

All right. Hello everyone. My name is Salis Ali. I am one of the mentees for the OMP mentorship 2020 program. Together with my mentor Alex Kim, we were able to work on the project SMF/RMF parsing engine, which we like to call ZEBRA. A little about me: my name is Salis Ali. Like I mentioned, I'm a fourth-year medical student at Bayero University Kano, Nigeria, and a software programmer. I started learning programming in 2016. I participated in IBM Master the Mainframe for the years 2017, 2018, and 2019, and luckily for me, in 2017 I emerged as the regional winner for the Middle East and Africa.
So I like to put my efforts into learning and getting certified in mainframes, IoT, machine learning, Android, and the cloud. I also like building projects on my own, especially when they are related to smart homes and telemedicine. I applied for the OMP mentorship to get real-world experience in developing software for the mainframe, and also to build on what I had already learned from Master the Mainframe. In 2019, the final challenge for Master the Mainframe was on syslogs, so when I saw another project related to system logs, I decided to apply for it; that's the RMF side.

This is a brief description of the project. Zowe, as we know, is a great system operations tool. Part of a system programmer's or performance analyst's job is to decode SMF/RMF reports to check system health. If we can create a generic parser for SMF data sets or RMF, this will give these analysts an opportunity to create and reuse the many open source monitoring tools out there. This will really make their job easier.

This is the project architecture. As we can see, everything starts with a user's request. When a user sends a request to ZEBRA, ZEBRA forwards the request to the RMF DDS server through the HTTP API gateway for RMF. RMF then returns an XML file. This XML file is parsed in ZEBRA and converted into JSON format. At this point, the JSON can be returned to the user through the Zowe API Catalog or through the browser. At the same time, we took some time to convert this JSON into what we call custom Prometheus metrics, which are then stored in a Prometheus server. Users can then connect Prometheus to Grafana and plot real-time graphs.

I don't want to interrupt you, but you're not sharing your slides, if you are talking to slides right now. Okay, I'm so sorry. No, no worries. No worries. You were talking great and we were just trying to follow along.
So I apologize. Also, can you see my screen now? Yes, we can. Welcome to Grafana. Yeah, perfect. Let me get started again.

Hello, everyone. My name is Salis Ali. I am one of the mentees for the OMP mentorship program 2020. Together with my mentor Alex Kim, we worked on the project SMF/RMF parsing engine, which we like to call ZEBRA. A little about me: as I mentioned, my name is Salis Ali. I'm a fourth-year medical student at Bayero University Kano, Nigeria, and a software programmer. I started programming in 2016. I like putting my efforts into learning and getting certified in mainframes, IoT, machine learning, Android, and the cloud. I also like building projects in my spare time, related to smart homes and telemedicine. I applied for the OMP mentorship to get real-world experience, experience in developing software on mainframes, and also to build on the knowledge I had already gotten from Master the Mainframe.

So, as we all know, Zowe is a great system operations tool, and part of a system programmer's or performance analyst's job is to decode SMF/RMF reports to check system health. If we can create a generic parser for SMF data sets or RMF, users will have the opportunity to create and reuse the many open source monitoring tools out there.

This is the project architecture. Everything starts with the user's request. Whenever a user sends a request to ZEBRA, the request is forwarded to the RMF DDS server. The RMF DDS server processes the report and returns an XML file. This XML file is then converted to JSON. The JSON data can be returned to the user through the Zowe API Catalog or through the browser. The same JSON data that we converted from the XML can be stored in a Prometheus server, and then from Prometheus the user can connect to Grafana. Another use case we made use of is saving this JSON data into MongoDB.
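To illustrate the XML-to-JSON step in this pipeline: the RMF DDS server returns reports as XML, and ZEBRA's job is to turn them into JSON that a browser, the API Catalog, or a monitoring tool can consume. Here is a rough, self-contained Python sketch of such a conversion; the element and attribute names below are invented for illustration and do not match the real RMF DDS schema:

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical XML loosely resembling (but not matching) an RMF DDS report.
xml_report = """
<report name="CPC">
  <row><metric>CPU_BUSY</metric><value>37.5</value></row>
  <row><metric>MSU</metric><value>365</value></row>
</report>
"""


def report_to_json(xml_text: str) -> str:
    """Parse the XML report and re-emit it as JSON."""
    root = ET.fromstring(xml_text)
    rows = [
        {"metric": row.findtext("metric"), "value": float(row.findtext("value"))}
        for row in root.findall("row")
    ]
    return json.dumps({"report": root.get("name"), "rows": rows}, indent=2)


print(report_to_json(xml_report))
```

From the intermediate dictionary, the same data could then be re-shaped into Prometheus metric lines or inserted into MongoDB, the two storage paths the talk describes.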
So, the project use cases. The project has about five use cases: parsing RMF Monitor III reports to JSON; parsing RMF Monitor I reports to JSON; parsing RMF static XML files to JSON; saving RMF Monitor III reports to MongoDB; and finally, plotting RMF real-time metrics with Grafana.

So how can users access data through ZEBRA? The first option is to use the RESTful API directly against ZEBRA; this can be done using the browser, curl, or Postman. Users can also make use of the Zowe API Mediation Layer, through the API Catalog, to access the JSON data. Then, through MongoDB, if data has already been stored, users can use the command prompt or MongoDB Compass to access it. Then we have graphical data access, mostly through Grafana. And the most recent one, by my mentor, uses Viva to access this data directly on his phone.

At this point, I would like to show a simple demo of how ZEBRA presents this data to the user. The first one is the browser. The user just needs to issue a GET; in my own case, I'm running ZEBRA on localhost, on port 3090. We have specific endpoints; this endpoint is for RMF Monitor III. In this case, I'm requesting the CPC report. Let me try to do that again. Yes, this is the updated report; this is Eastern time, I believe it is 10:10 a.m. now. There are other reports too, like the usage or proc reports, and all of this can be done through the browser. That was RMF Monitor III; there is also RMF Monitor I, and then the static XML. But for the sake of time, I would just like to move on to the API Mediation Layer. As you can see, I'm also running the API Mediation Layer locally on my system. We have already onboarded ZEBRA to the API Mediation Layer using simple onboarding, without any code change. This is ZEBRA, and then we have the set of APIs; this is from the Swagger definition we have provided. In this case, all I need to do is execute this.
This now returns the report. In this case, I'm using one of the filters we have provided: I am not requesting the whole report, I just want the CPC MSU value, which is 365 in this case. There are also other APIs that users can test; here, I will now request the complete CPC report, and this returns the complete CPC report.

Then, moving to Grafana. As you can see, I've already built some dashboards; these are simple dashboards I built using ZEBRA. There are some procedures that have to be followed, like creating a data source and then creating a panel to get the chart, but all of this has been provided in the user documentation, so I will not go through it. I would now like to move back to the slides.

So finally, the progress made. ZEBRA provides flexibility for users: users can configure the app to fit their own needs, and this configuration can be done using REST APIs. We try to secure these REST APIs using JSON Web Tokens. Then we have conversion of RMF Monitor III XML reports to JSON; conversion of RMF Monitor I CPC and workload reports to JSON; importation and conversion of RMF Monitor I CPC and workload XML static files to JSON; saving RMF Monitor III data to MongoDB; exporting custom metrics to Prometheus; and finally, plotting real-time charts using Grafana.

So, lessons learned. As I mentioned earlier, I'm a medical student, but I've learned a lot through this mentorship when it comes to software development. For one thing, my first proposal was likely garbage, but my mentor was able to sit me down and show me how to write a good project plan, and what to look for when writing one. He also set up design thinking sessions where I learned a lot.
Then working as a team: as a medical student, I normally work alone when I want to program, because I don't have friends around me who do this. But during this mentorship I learned that when experience is brought into a team, great things can be achieved; my mentor has 20 years of experience, and that sped up the development of this app. Then the energy: when we have to achieve something, we spend the time needed to achieve it. We don't normally work on weekends, but when there is a need, we put in that energy and work on the weekend. Then collaboration: when we had issues, we collaborated with other teams, like the Zowe API Mediation Layer team, to get this running on our localhost. And finally, tools: I've learned how to use tools like GitHub, Trello, Docker, and Linux. The project can be found at this URL. Finally, I would like to say thank you to the OMP for giving me this opportunity, and thank you to my mentor for taking all the time to support me through this journey. Thank you. Thank you all.

Thank you. Definitely appreciate your presentation and all of your work there, and apologies for the technical difficulties. So we have one more, which I'm going to do real quick here: we have a video from Kai, and I'm going to try to share my screen with that. Let's see if this goes. Oh, if you could stop sharing your screen. I'm trying to do that. Oh, stop sharing. There we go. All right. No worries. No worries. All right. And we're going to try to share the sound here, so we'll see how this goes. This may not work. All right, let's see if this works here. I'm going to start playing it, and we'll go from there.

Hi, everyone. I'm Kai Wong. I am a software engineering junior student from Beijing Institute of Technology, and an open source lover.
I have experience in multiple programming languages like C, Java, Go, JavaScript, Python, and Rust. I love trying new stuff and making fantastic projects. During my freshman year, I was a student participant in Alibaba Summer of Code 2019 and wrote an online GUI designer for AliOS Things. Now I am a mentee of the Open Mainframe mentorship program. I am also maintaining a tool written in Go, which is an open source project of the Tsinghua University TUNA Association. I am interested in open source, and I think the mentorship gives me a really good chance to learn about awesome open source projects and to experience modern, formal development. Without the mentorship, it would be impossible to share my ideas and code all over the world.

The mainframe has transformed over the decades and still is the go-to platform for high-volume, transactional, and secure computing. Before the mentorship, I didn't know much about the mainframe, and this mentorship opportunity from the Linux Foundation will surely improve my mainframe skills. What's more, during the COVID-19 pandemic, the mentorship provided me with the opportunity to work remotely, so I could stay at home.

The existing z/VM Prometheus exporter is not maintained anymore. Feilong is an open source z/VM cloud connector project; it provides an easy way to manage z/VM, so it is well suited to improving z/VM Prometheus support. My project is to write a new Prometheus exporter for z/VM metrics based on Feilong. I have just released the second version of this exporter. It's available on the Python Package Index, and you can install it via the Python package installer, pip.

Before the mentorship, I had no clue what the mainframe was, or that it even existed. Through the mentorship I became a mainframe software developer, and most importantly, I learned coding and software architecture skills. Thank you for watching.

Awesome. Thank you, Kai, for that great presentation. And I want to thank all of you.
I also want to thank all of you mentees: you've done amazing work and made great contributions to the open source ecosystem on the mainframe. You can check out more at the links they provided. They'll also be around the conference here all week, so go reach out, meet them, and connect with them as well. Thank you all, and have a great rest of Open Source Summit Europe. Thanks, John. Thank you all for presenting today. And for everyone attending, remember the videos will be up on YouTube very soon. Thank you all very much.