Yes. Do you hear the sound? Yes. We will start today with Matiasi and his goal with the super-translator. Should you have any questions, please feel free to ask them here or on Matrix, also for the folks watching online. Thank you.

Great. Thank you for the introduction and thank you for coming. It's the first session, not after the party but still, the first session, so I definitely appreciate it. So what can you expect from today's talk? It will be about adopting existing agile methods for environments that are generally big: big projects and biggish teams. In the first part of the talk I will try to bring us onto the same page, then I will be concrete about how to do this adoption, or about a proposal, and then there will be a demo and some questions and answers, as time allows.

A disclaimer at the beginning: agile is a term which covers a big surface. One subdivision is that it's about doing the right thing, delivering value for customers; another aspect is being efficient, working well, not necessarily fast but at a sustainable pace, predictably, and so on. In this talk, when I say agile, I mean the second aspect. I won't talk about the first one at all, not because it isn't important, but because that's not what this talk is about, just to be clear.

I am a Red Hat developer and also the team lead of the security compliance team. When I joined the company about six years ago, the Scrum framework was introduced to us. We were one of the first teams, and we found out that it's not so easy, not a smooth ride. We introduced it and then noticed that we still had issues, mainly with predictability. Because we are an open-minded team, we looked at what we could change to improve the situation, and I think we made our agile practitioners both happy and desperate, because we were experimenting, we tried various things, we evolved, and the evolution still continues. We simply found out that the problem is a difficult one, so even a good approach is not good enough; we need a great approach. We'll see.

The main pain points we identified are the need to have a groomed backlog, which is a very important concept in agile as far as I understand it, being a software engineer and not a person with an agile education. And also, as I'll get to later, we are a rather big team, so a lot of things are in progress during execution, and it's a mess to some degree: it's very difficult to find out whether things are going well or not.

So why is it so difficult? Very briefly, it might be that one is not developing a web app from scratch for a well-defined customer, but developing a small feature on top of some big product. Because the product doesn't have one customer with whom we can communicate and say "we will do this and that will be different", we need to keep everybody more or less happy. Then a lot of the work is integration rather than the deliverable itself: the presentable piece is small, but the work that has to be done around it is really huge. So things are kind of slow.
Also, the team can be big. I don't think big teams are necessarily faster, but they can definitely do more things and they are to some degree more stable. If conditions change, a big team can really adapt, take on more responsibilities or even exploit opportunities, but it probably won't do it in a particularly fast way. One common suggestion is: if you have a big team, split it into small teams, which work much better. But maybe you can't do that, because the tasks that come in keep changing, they are not stable, they are not always the same, so if you subdivide, you will find out that you need to rethink the subdivision, and so on. So it's not always possible to get rid of a big team and make small teams.

So we are in this big environment, which means things are slow. To accomplish anything interesting, the iteration can't be two weeks tops; more time is really needed. During execution, a lot of things are in progress at the same time as a result, especially if the iteration is longer, and what suffers is predictability.

As an engineer, I definitely came across the idea: screw predictability. What counts is the work being accomplished. What about giving engineers the right tools, the trust, the resources, and they will do the thing in the fastest possible way, and screw predictability? Why is this not a good idea? Some people know, but some might doubt it, so I have an example. Imagine you are a student learning for an exam. You know you can get questions from three boxes: the green box with 50% probability, the red box with 40%, the blue box with 10% probability that you will draw a question, a topic, from that box. The question for the student is: in which order should you learn for the exam? It looks like you should start with the green box questions, because that is the most likely. But the right answer is that you can't really know the right order unless you know how difficult it is to master the questions in each box. If you know the cost, then you can make a good decision. In this particular example, where the red and blue boxes are much cheaper to master, it is clear that you should start with red, continue with blue, and only then do green, because red and blue together are also 50% and their cost is much lower.

So where am I aiming? Predictability basically means that you know the costs, and you need to know the costs in order to prioritize. Whenever we do prioritization, we need predictability. So if we have a big team, big projects, and issues with predictability, we simply can't ignore it; we need to address it.

If you think about the previous example, it also turns out that we estimate all the time. We estimate when we go shopping, when we travel, when we do pet-project software. So why are the groomed backlog and the estimations so painful? Here is another example, and we see situations like this in software development quite often: how much effort will it take to remove the rock? We see the top, we don't see the bottom, so those gentlemen might have different ideas. And whoever has played Scrum Poker has definitely experienced that. Anybody who has played Scrum Poker, could you raise your hands? About half of the audience; I would expect more. So the other half will experience it, maybe, if they are engineers, at least.
So what happens is that one person says: I think the task difficulty is five of something. Another person says: the difficulty is 15. They discuss, and they find out there is nothing that one person doesn't know; everybody knows everything, but still one thinks five and one thinks 15. What to do with that? It is a very difficult situation because it's about the unknown, not about the known. And it definitely makes people uncomfortable, because the number will be settled at 10, the number will make it into the tracker, it will be visible to managers, and people are not so happy.

In order to know how to help this situation, we need to take a look at what an estimation actually is. If somebody says, "I think the rock can be removed in two weeks", what happens in your brain? What happens in your brain when you hear from somebody that they will do something in two weeks? It means that they probably won't be able to do it in the first week, because it's very difficult to do things fast; it's easy to do them slowly, but doing miracles is very difficult. However, it is somewhat possible that they will finish earlier, between the first and second week. It is most likely that they will finish slightly later, between the second and third. And when the third week has passed, you start thinking it might never be done. It happens; it happens in software, it happens anywhere.

So what is an estimation? We all know it: it's actually a probability distribution. And as with the notion that everything has already been invented, here it is: it was introduced in the late 1950s. The concept that an estimation is a probability distribution is called PERT, and it's taught, I think, as part of project management. It introduces the concept of a three-point estimation. Instead of estimating tasks with one number, they are supposed to be estimated with three numbers, which sounds quite scary, because if one number is a problem, three numbers might be a much bigger problem, right? But maybe that's not the case.

The guidance is that we come up with the optimistic estimate, which should be so optimistic that we think we can do even better with only one or two percent probability. So it's not a completely rosy dream, it's just optimistic. The most likely estimate is what we normally target. And finally we have the pessimistic estimate. It can be story points, it can be weeks, it doesn't really matter. Again, the pessimistic estimate is not a promise that it won't be later than that, that it won't take more effort than the estimate; there is supposed to be a couple of percent probability that even that will slip. Whether it's one or two percent probably doesn't really matter.

One more thing: usually we are required to put a single number into some tracker, JIRA or whatever. The good news is that, like any probability distribution, this three-point estimation distribution, the PERT or beta distribution, whichever we like to call it, can be represented by its mean, and the mean value is not the same as the most likely value. The mean is somewhere between the most likely value and the long tail. In this case we see that the pessimistic estimate has the long tail, so the mean is shifted toward the pessimistic side.
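For reference, here is a minimal sketch of the textbook PERT arithmetic behind this: the three numbers are combined into an expected value that is weighted toward the most likely estimate, plus a rough spread. The concrete numbers are the "fix blockers" example used later in the demo; the formulas are the standard PERT ones, not necessarily exactly what the tool implements.

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Textbook PERT (beta) approximation for a three-point estimate."""
    mean = (optimistic + 4 * most_likely + pessimistic) / 6   # expected value
    std_dev = (pessimistic - optimistic) / 6                  # rough spread
    return mean, std_dev

# "Fix blockers" estimated as optimistic 4, most likely 5, pessimistic 8 points:
mean, std_dev = pert_estimate(4, 5, 8)
print(f"expected {mean:.2f} points, spread {std_dev:.2f}")   # expected 5.33, spread 0.67
```

Note how the expected value (5.33) sits between the most likely value (5) and the long pessimistic tail, which is exactly the point being made above.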
If, on the other hand, the task were more optimistically inclined, for example if there might be no bugs to fix after we run the tests, then we say: we think it might be five points, but maybe there will be no work to do at all. In those situations the expected value is actually skewed toward the optimistic end.

So what problem does this solve, from another point of view? There is a distinction we don't have in Czech: estimations, or quantities in general, can be accurate and they can be precise, and in Czech we don't discriminate between those. When we put single numbers on estimations, we are infinitely precise: a number is simply a point. But as we all know, we won't be accurate, so we are in this unpleasant situation. However, if we substitute the point estimate with this interval estimation, we probably slightly improve the accuracy by using the expected value, and we lower the precision, but the precision becomes more proportional to the actual accuracy. So we actually have better odds of hitting the target with lower precision. Interesting, if you think about it, but undeniably true.

This is a very interesting theoretical concept. Has anybody in this room ever used it? Apart from my team? Nobody, nobody, okay. So my team and I have been giving it a shot for a couple of months, and there is one piece of excellent news: we have concluded that the process of estimating and using the estimations, when you use three-point estimations instead of traditional ones, is not worse. The process is not more painful; it's not three times more painful to use those numbers. Maybe it's even more comfortable. We are not completely sure about that, but we will continue our investigations. At the same time, this is a difficult problem. Estimating is a difficult problem, so even a good solution, even a better solution, doesn't completely solve everything. We found out that you need some practice and some guidance to estimate using three-point estimations, and it's also difficult to explain; there are misunderstandings, and we don't know exactly how to avoid them yet.

The second pain point was the execution, which is busy, with plenty of things going on. Sometimes things are in progress for a long time, which is usually a problem, but if you have a lot of things in the sprint, you don't notice. On the other hand, some things are not in progress, and maybe that's okay, maybe it's not, maybe they should be, and so on. When you have a long iteration, you can't use the traditional approach where after two weeks you do a retrospective and say "this was wrong". If the iteration is very long, you need to act while it is in progress, as soon as the problem is obvious enough. So it's necessary to have some tooling to introspect what is going on, and we will see that in the demo.

I put a screenshot of JIRA here, because this is the only software we use which even has a field to input estimations. I'm not really sure how far JIRA can be configured.
However, I think that for big teams, which have more challenges with estimating, having this small field next to some other stuff, a field that is basically invisible when you are not looking at the issue, doesn't make estimating easier. For some reason, when I realized that, I thought it might be a great idea to develop software that addresses this issue, that provides an interface to the data that may live in JIRA or wherever. GitHub doesn't even have a field for estimations; GitLab has something, I guess. The software would allow team members to estimate, and it would allow them to see the execution of their own long sprint and maybe even do things like a delivery forecast. The project exists. I wouldn't advertise it as something that generally works; it's more like a demo. If you are a Red Hatter who uses JIRA, it might be a good idea to reach out if you like it; otherwise I would suggest waiting, or maybe using the GitHub channel that the project has. However, I hope the demo will work.

What will we try to model? A sprint with two epics, let's say. One is an upstream release, which includes fixing bugs, running tests and writing a blog post. The other is the downstream release, which again includes running tests and then making the downstream package out of the upstream source code. So, five minutes' delay, but I guess we will manage. Let me see.

What you see right now is the interface of the program. It allows us to estimate, to track the execution, and also to simulate that the execution is in progress. We go to the planning interface, and there we can see that somebody, it was me, input those two epics. It's not software that can be seriously offered; it's more something that works. We have this "fix blockers" issue and somebody already put that its cost is five. Out of this one-point estimate we can make a three-point estimate. Fixing blockers is usually pretty difficult, so we say the optimistic estimate is four and the pessimistic estimate is eight. Then I click save, it stores it somehow, and we get this plot. I will do the same for the other issues. Releasing upstream will probably be easy, so I will say pessimistic three, otherwise it stays the same. Writing the blog post is three points, but sometimes people are very nitpicky, so we say optimistic three, most likely four, pessimistic, I would say it can be eight. And let's estimate the remaining two issues: running downstream tests and fixing issues in downstream tests. I think there might be nothing to fix, everything will work, so I will say the optimistic estimate is two, and the pessimistic estimate can be something like five, because there might be problems with infrastructure, and so on.

If we take a look, it can all be added together: the individual estimates can be statistically correctly added. So we see that the cost, the aggregate estimate of the entire sprint, is somewhere between roughly 15 and 22 story points. All right. Any questions about that?
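As an aside, the "statistically correct" addition mentioned a moment ago can be sketched roughly like this: if the tasks are assumed to be independent, their expected values add up and their variances add up, which is what turns per-task three-point estimates into a sprint-level range instead of a single number. The values below only loosely mirror the demo (some of the demo's numbers were not stated explicitly), so treat them as illustrative.

```python
import math

# (optimistic, most_likely, pessimistic) per issue, in story points;
# these values loosely follow the demo and are illustrative only.
tasks = {
    "fix blockers": (4, 5, 8),
    "release upstream": (2, 2, 3),
    "write blog post": (3, 4, 8),
    "run downstream tests, fix issues": (2, 3, 5),
    "build downstream package": (2, 3, 5),
}

total_mean = 0.0
total_variance = 0.0
for optimistic, most_likely, pessimistic in tasks.values():
    total_mean += (optimistic + 4 * most_likely + pessimistic) / 6  # means add up
    total_variance += ((pessimistic - optimistic) / 6) ** 2         # variances add up

spread = 2 * math.sqrt(total_variance)  # a rough ~95% band around the mean
print(f"sprint estimate: {total_mean:.1f} +/- {spread:.1f} story points")
```

With these assumed inputs the sprint lands in roughly the same 15-to-22-point range quoted in the demo, which is the whole point of aggregating distributions rather than single numbers.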
Right, so the question was what the points are, what they mean. They are story points, which are supposed to be proportional to the objective complexity of the task. One could estimate in person-weeks, that's not a problem, but estimating in points is an alternative. Fernando? Yes, the question was whether we have a story point definition table. We don't, at the moment. We currently use, basically, a conversion ratio to the earlier units we had been using. The plan for the next planning season, not the current one, is to come up with a definition table and so on. The next question was about the epic-level summaries: whether they just reflect the expected value, which would be the traditional way, or whether they reflect the whole distribution. What you see now reflects everything, all three values; it's a statistical sum of random variables.

Ten minutes left, so I will proceed with the demo of the execution. What can we do here? In a separate tab of the same web app I can say that the team delivers a certain amount of story points, so there is some progress. I choose that the team works on fixing blockers and delivers two story points. I click next, hopefully it won't crash, and I refresh the web app. The not-burndown chart, and I call it that because a burndown chart doesn't distinguish between states, basically says that nothing has been delivered yet, which is correct. I continue, let's say the team delivers 1.5 story points; it shows that something is in progress, but still not done. I continue with another 1.5 story points, and we have our first task complete.

What we can see in this not-burndown chart is that the planned burndown is not linear. Why is that? Because there are deadlines on those epics, so we don't expect to work on certain things at certain times, and conversely we expect to work on other things at other times. To make it clear, the individual epic burndowns look like this: we expect the upstream release to be finished early and the downstream release to start a little later.

All right, one thing has been finished, and when a task is finished it already makes sense to talk about velocity of, or in, the sprint. Because one task has been accomplished, the team, during the time the task was in progress, had a velocity of the cost of the task divided by how long it took. This is the measured velocity at that time. I continue with the upstream release: 1.5 story points, one story point, and we have the upstream release done. Again we see something happening in this kind of burndown, and we also see a velocity increment around the upstream release. Because we have some data about the velocity and some data about the size of the remaining tasks, we can estimate when everything will be completed. Right now, if the team continues like this, it looks like they will be done somewhere after the second week, and with 95% confidence before the sprint is supposed to end. This vertical line is the day of the simulation. Now I will just click some numbers here to leave time for questions; I will put two everywhere, hop, hop, hop, and the team has delivered everything.
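To make the "measured velocity" and the completion projection concrete, here is a rough sketch of that kind of calculation, under my own assumptions rather than the tool's actual code: velocity is the estimated cost of finished work divided by the time it was in progress, and dividing the remaining cost by that velocity gives a naive completion projection. The 95% confidence bound the demo quotes would additionally require propagating the spread of the estimates; that part is omitted here.

```python
from dataclasses import dataclass

@dataclass
class FinishedTask:
    cost_points: float        # expected cost of the task (e.g. its PERT mean)
    days_in_progress: float   # how long the task was actually in progress

def measured_velocity(finished):
    """Story points delivered per day, computed from completed tasks only."""
    points = sum(t.cost_points for t in finished)
    days = sum(t.days_in_progress for t in finished)
    return points / days

# Illustrative numbers only: two tasks done, roughly 13 points still remaining.
done = [FinishedTask(5.3, 4.0), FinishedTask(2.2, 2.0)]
remaining_points = 13.0

velocity = measured_velocity(done)
print(f"velocity ~ {velocity:.2f} points/day")
print(f"naive projection: {remaining_points / velocity:.1f} more days to finish")
```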
They did it fast because I put in twos and there were three weeks, while the total cost was about 18 story points, so that was very comfortable for the team. But this chart gives you information: you can imagine that if there are not two epics but ten, and the sprint is maybe three months long rather than a month, the number of tasks can be quite big. This chart gives you an overview of whether things are in progress and whether they are getting done or not. At the same time, the velocity plot, when one works on only one task at a time, looks like this: well, not a very nice piece, so to say. And technically it's possible to compute the estimated completion based on the data.

All right, you have seen something, so I guess that's pretty good, but let me leave a couple of minutes for questions if you have some. The question is: we have been using this for a couple of months, which is correct, and have we been able to validate that the estimations match reality? Since we are using abstract story points, the only thing you can really check is the proportions. You measure the capacity based on previous executions, so the only thing you need to be sure of is that you estimate consistently: a task which is two times more difficult should get roughly two times more story points, and so on. In our team's concrete case there is definitely room to improve; one quarter is a little better and another quarter is a little worse, and that's usually a symptom that things are not clear. We have also made other mistakes, for example the composition of epics didn't really match the execution or make it easy to collaborate. So we are not even in a position to run that comparison and hope it would give a positive result; we are mainly removing the big problems at the moment, if that roughly answers the question. Yes, it's a prototype.

What are the outcomes of using the tool? We definitely found that it has value during our secondary iterations. Our primary iterations are long, but every two weeks we reevaluate; we don't have any deliverables then, it's mainly an internal event, and in those secondary iterations this tool undeniably provides a good overview of what's going on and whether there are problems that should be addressed. This is what really works: it's not so much about three-point estimations, it's more about these charts, taking a look and saying, this doesn't look so good. Mainly decision support. The outcome is that it provides decision support for those interested in completing the work, and also, as I mentioned at the beginning, there is probably a little less friction during the estimation process, and we hope we will be able to use the completion projection to see whether we are actually slipping or not.

We are out of time, technically. I'll be around; the slides will be online; the tool is open source, it can connect to JIRA, and so on. So you will know what to follow. Thank you for your attention and have a great rest of the conference.

Thank you. There will be an after-party tonight.
And tomorrow I would also like to invite you to the last session, the closing of the conference. Now we will continue with another session; I will hand over to our speakers.

Thank you very much, and welcome to our session, How to Contribute to the Ansible Community. We are Anwesha and Carol, and we will do a brief intro of ourselves. I'm Carol Chen, and I listed a bunch of dates because I wanted to show that I have been a Linux user for a while. My first distribution was actually Red Hat Linux, not Red Hat Enterprise Linux; it was Red Hat Linux 6.0 something. I finally joined Red Hat, one of my dream jobs, in 2016, so it is such an honor to be here. I started contributing to FOSS projects around 2004; I'm actually a bit hazy on the exact time because I had to check my own LinkedIn page to find out when that was. If I had seen a talk like this before then, I probably would have contributed earlier, because there are actually many different ways to contribute, not just code; we'll go into that a bit in the talk. The rest is just showing how much of an old geek I am: I started using IRC a long time ago, and nowadays I am on Matrix. Believe it or not, I can learn new tricks. I actually heard about Matrix in 2016 or 2017 and signed up for my Matrix ID around then, so find me on both my work and personal Matrix IDs listed there. And I'm going to hand off to the much younger Anwesha, who, despite her youthfulness, has a lot of open source experience, so don't be deceived. Plus she has some legal chops to back her up, a very powerful combination. But I'll stop talking now and let her continue.

Thank you, Carol, and welcome, everyone, to the talk. Hello, my name is Anwesha, Anwesha Das. I'm a lawyer by education and a technologist by passion. I am one of the newest additions to the Ansible community team; I'm a software engineer at Red Hat. I help free and open source communities around the globe with my technical, legal and organizational skills. I'm a proud PyLady; I don't know how many of you know about PyLadies. I am a PyLadies organizer and I led PyLadies efforts in India. I have a blog, anweshadas.in, where I translate legalese to English, among many other things. And I'm a fellow at the Python Software Foundation. And wow, I'm representing Red Hat. Amazing.

I want to share a story with you. It was 2020, during the pandemic. I was talking to a friend of mine, a senior technologist who maintained kernel.org for a decade and who is an ex-Red Hatter. I was curious what he was doing during the pandemic, so I asked: what are you doing? He said he was Ansibilizing his whole infrastructure. I asked why, and he gave me an answer which changed my whole perspective on Ansible. He said: if something happens to me, then people will be able to understand the what and why of my infrastructure. Ansible helps you think declaratively. The comment was indeed deep, and it's a fact. But is thinking declaratively the only feature that makes Ansible cool? Not really. There is a lot on that list: agentless architecture, cross-platform support, an easy way of deploying to servers.
So there are many things on the list, and it also enhances security, mitigates human error, and makes things easy. I started learning system administration through Ansible, or at least quite late, with Ansible. So that is why we are pretty cool, actually. No wonder Ansible is the most popular free and open source automation project.

Now, when we talk about Ansible, we are not talking about a single project; rather, we are talking about a whole ecosystem. Right now we have 20-plus projects inside the Ansible ecosystem. And like every other successful open source project, there are certain things which lie at the very core: open source ethos, collaboration, and community.

What is the Ansible community? Rather, who is the Ansible community? You are the Ansible community. We all are. Whether you are a user, a developer, or people like us who get paid to work on Ansible, all of us together form the Ansible community. If you are an Ansible community package user, or you use the automation platform, or you are a Red Hat partner, you are a part of the community. Also, if you contribute content or code, or if you evangelize and tell other people how cool Ansible is, you are a part of the community.

Ansible is blessed with a very strong community in terms of contributors. Ansible used to be among the ten most popular open source projects on GitHub before 2020, before Ansible core and the collections were split. If we talk in terms of users, there are millions of you; the user base is in the millions. And when we want to see how big our contributor base is, let's use this chart. It is dated April 2023, and it shows contributions, staff versus non-staff. If we look here, 57% of the contributions have been made by staff and the rest by the community. If we look at the unique authors in this set, 184 have been staff and a staggering 4,170 non-staff. We considered PRs and review comments for the last three years on GitHub, where we indexed 50,000-plus items and covered 376 GitHub repos. Now, is this actually the graph that shows how big our contributor base is? Actually not: we haven't considered the 30,000-plus roles and collections on Galaxy. We are pretty big.

Now, are you interested in Ansible, and do you find our ecosystem interesting? Maybe some of you want to start contributing, or you have already started your contribution to Ansible. Let both of us give you some pointers on how you can contribute to Ansible, and how anyone and everyone can become a hero in the Ansible land.

So, how to become a hero in the Ansible land? There are many options. If you want to contribute via code, which is generally the norm, the first thing that comes to mind, of course you can: we are a Python shop, our code base is mostly in Python, and as I mentioned, we are a free and open source project, licensed under GPLv3. Then collections: if you want to share your plugins, your roles and modules, and you think they can be helpful for the community, please come and share them. One of the major reasons Ansible is so successful and has been adopted by so many people is its documentation. I am living proof of that: I learned Ansible from the documentation.
Now, this documentation not only describes the tool, it also gives you practical examples, which is very, very helpful. So if documentation is your passion, please join our documentation team.

Ansible meetups. A meetup is a place where upstream and downstream, the users and the contributors, the whole community, collaborate with each other. We have 139-plus meetup groups and 52,000-plus members all over the globe. Find one of these meetup groups, join it, and share your automation journey. If you're interested in web design: we are currently trying to build a website and we are in dire need of website designers, UX and UI. Given that need in our Ansible ecosystem, please join our group; Carol is going to tell you more about this later in the talk. And most importantly, run Ansible meetings: share your knowledge, share your user journey, share your stories. This is your chance to give back to the community.

In the next part of our talk, we are going to describe each of these in detail, starting with how you can contribute via code. As I said, Ansible is a big ecosystem, so you might ask: Anwesha, it's a big ecosystem, where should I look if I want to contribute to Ansible? These are the spaces to look at: three GitHub organizations and Ansible Galaxy. Then the next question: there are so many projects in the Ansible universe, which projects might be interesting? You might want to check our new Ansible ecosystem page, where you can find the definition of each project and what it does. I have mentioned a few of them here, but you can find all of them on the Ansible ecosystem page. As I said, we are a Python shop, so if you are a Python programmer, please consider contributing. You can fix a bug, or you can report something that is a problem for you; you never know, it might be a problem for others as well. So please start contributing by fixing a bug or adding a new feature. How do you find a place to start? Look for labels like "easyfix" and "good first issue"; these are the labels to look for when you want to start contributing.

The next part is Ansible collections. A collection is a very good place to start contributing. Ansible collections are what give Ansible its superpower, and here is why: there are thousands of collections, as I mentioned before, maintained by domain experts, which keeps Ansible ahead in its game and also gives us agility. So if you want to start contributing to a collection, that's great. How to do that? There is a link, the contributing-to-collections document, which will guide you step by step. You can also upload your own collection to galaxy.ansible.com and share it with a bigger audience, with the community. And that is how you can become a guardian of the galaxy. Now, if you want to start contributing to any of these projects, I would strongly recommend that you come to our booth and talk to Andre and the other Ansible engineers over here.
If you have any questions, you can also turn around and talk to Adam and Chad, who are sitting over there. Please do that. And now Carol is going to tell you about the other ways you can contribute.

Thank you, Anwesha, there's so much great information in there. Let's continue. Anwesha already touched a bit on documentation, and one of our main docs people is over at the Ansible booth, Don Naro. If you have any documentation questions, you can go talk to him. Actually, tomorrow we will have a talk at 2 p.m. about some of the progress and things we've worked on this year, and one of the major topics is using personas to define user journeys on the documentation website. Don will be at that talk covering the details, so I'll let him do the honors tomorrow; please come to the talk if you're interested.

Then, oops, my speaker notes. We think about documentation, and sometimes you're thinking: oh, I'm new to Ansible, how can I help with the documentation? Actually, new users can be a great help, because as you go through the documentation to get started, you will probably notice ways to improve it, to make things clearer, that some of us who have used Ansible before take for granted. So if you find that something is missing a step which, if specified, would help the process of learning and explaining how to use Ansible, you can create an issue or a PR to bring it to the community's attention, and we can improve the documentation with your help. If you're interested, we have a documentation working group, the DaWGs, and we have a cute doggie mascot. It meets every Tuesday at 1800 UTC in our #docs:ansible.com Matrix room, which is bridged to the ansible-docs IRC channel. This room is also bridged to a Discord channel, because we have a group of writers from Nigeria, I think, who are also contributing to the Ansible docs. So it's definitely a global, community-wide effort to improve the docs. Similarly to code, there are issues in the docsite repo that are easy fixes to get started with; you can search with these labels to find them, and the bit.ly link below, for contributing to the docs, outlines the process of creating a pull request, what information you need to contribute to the documentation repo, and how to improve it.

Again, I want to touch a bit more on meetups and how you can contribute by organizing one. These are great numbers, tens of thousands of meetup members, more than a hundred meetup groups, but honestly, not all the meetup groups that we support are active. Partly due to the pandemic, partly for various reasons: some people started a meetup group, changed jobs, lost interest, things in life happen, so they may have abandoned it. So please, if you know of an Ansible meetup group in your area, go to meetup.com and search for it. If you find it and it's not active and you're interested in making it active again, come talk to us. If there's no meetup group in your area and you want to start one, also come talk to us. And if you find a meetup group with Ansible as one of the organizers, that's probably one supported by us.
Besides meetups, which are more local and regional, sometimes in your own languages depending on your audience, we also have a presence at many major FOSS events, such as this one. But as much as our team would like to be everywhere at once, meeting all of you, we also depend on the community, people like you, to represent us and represent the community at events. So I want to give a shout-out to Daniel Shire, I think, who did exactly that for Chemnitz Linux Days in Germany. We were not there; he approached us and said he would like to have an Ansible workshop or talk or something, and he even had a booth. We sent him some stickers and he was there sharing about the Ansible community, contributions, the joy of using Ansible, and things like that. These are little things where, if you have the interest, we want to support you and help you do them.

Sure, sometimes, oops. Yes. Organizing a meetup is a lot of work, not sometimes, it is a lot of work, but there are ways to help even if you don't have the time to do the main organizing. If you have a space for a meetup, you can help host it. If you don't have time to take care of the organization but you have a topic you want to talk about, approach a local organizer and offer to speak at their meetup. And even by attending a meetup you are contributing, you're helping, because a meetup is nothing without the people, without the audience. So there are many ways you can help make meetups happen and be a part of the community.

Another thing we want to mention, which Anwesha is leading: we're working on a meetup toolkit for organizers. There are sample emails for finding locations and speakers, templates for social media posts, things to get you started. Of course this will also be open to contributions, because we realize that different regions have different challenges and variations in how meetups are organized, so as a meetup organizer you can also help other organizers improve their meetups. One example: if you want to do a presentation on how to contribute to Ansible, we will upload these slides to the schedule on Sched after this, and if you noticed, there was a Creative Commons license on the first slide. So feel free to use these slides at your next meetup to talk about how to contribute to Ansible.

Chatting sounds like a fun, easy way to hang out with people you share common interests with, right? How is that contribution? Well, by chatting with people in the Ansible space, a space on Matrix, similar to the DevConf space that some of you are now in to participate in this event, you can help by answering questions, because a lot of users come to these chat channels to ask questions about basic usage, about contributions, code questions, event questions. A lot of you have a lot of great knowledge to share. And even if you don't know everything, well, I usually don't have most of the answers, but I know where to look for them, so I help: oh, maybe you can talk to Anwesha for this, you can talk to Andre for that. You can be the person connecting more people in the community. That's a great way not just to help each other, but also to build the community spirit. What's a community without strong connections and links to each other, right?
So just chatting is a great way to contribute. And if you're watching and chatting on Matrix and have questions, we'll hopefully be able to answer them soon; I'll look at the Matrix room after this talk. This is actually an XKCD comic; by the way, the slides were put together by Leo Gallego, our teammate. I remembered it because somebody tweeted it at me a while ago, back when I was a hardcore IRC user; I was like, I'm staying here for life. But I got converted, and now I'm mainly a Matrix user, although I do have my two accounts bridged to IRC nicks. Anyway, join us on Matrix.

Now it is audience participation time. Hopefully you'll be able to scan this QR code; take out your phones now and start scanning, I'll give you a couple of minutes. Again, this is because Leo put together this initial set of slides for the Red Hat Summit community day last month, so thanks to him for finding a bunch of cute cat pictures. If you scan it, you'll find out what I'm talking about. Please vote, and like Leo said, don't just vote for the cutest or funniest cat you see, but think about what you might be interested in contributing and what your strengths and interests are. I'm just going to pause for a while. If there are any questions at this time, we can maybe take one; if not, we can wait till the end. All right.

We've gone through quite a few of the many ways you can contribute to the Ansible project, and it's definitely not an exhaustive list; there are other ways we have not mentioned. There are also a couple of things we're working on that will give you more opportunities to contribute in the future. If you are aware of some of our strategies from the beginning of the year, we talked about how Ansible is a large, growing project; there are a lot of different sub-projects and different parts of the community, and one of the growing pains is that things can get fragmented. To address this fragmentation, two of our main strategic items for this year are to create a new website and to create a new forum. We have discussed this very thoroughly with the community; we have had meetings and GitHub issue discussions and things like that. Just curious, how many of you here knew that we were working on a new website? Good, thank you. It's a work in progress. There is a repo, we're doing this with the community, and you're welcome to join the website working group in our #website:ansible.com Matrix room. As with any website, there's a lot of content that needs to be added and a lot of visual design that needs to happen to pull everything together. We have great engineers and writers and so on, but unfortunately we are lacking in UX and UI designers, people who have the ability to create engagement through an online presence. So if you have that interest, it's an easy way to showcase your talent and also help the Ansible community. As the website launches in phases, there will be additional ways to contribute, like writing blog posts and creating video content, and if you have great ideas, please come talk to us and join the website working group to share them. The other thing I mentioned is the forum. It will be based on Discourse, and it's needed because we have, what was it, how many hundreds of GitHub repos, right?
Each of them has discussions and issues, and sometimes it's really hard, if you are looking for something specific, to find the discussion that's happening. So we hope that this forum will address some of that and pull the community together in one central place where you can share different ideas. We're almost at the finish line, and hopefully within the next couple of weeks, or at least the next month, we'll be able to share the availability of the forum. It's 99% probably going to be forum.ansible.com; once it's confirmed, we will let you know. Please subscribe to the Bullhorn, if you haven't already, to stay updated on this.

So how do you subscribe to the Bullhorn? How do you follow all these things we've mentioned? Since we don't have the new website ready yet, we do have the ansible.com/community page, which has the Bullhorn information and some of the project information. In the meantime, please keep your eyes on that page to stay on top of the updates. Once the website is ready it will definitely be announced there, and you can continue following us and interacting with us on the website and the forum. And if you have any questions and want to reach out to us directly, of course Matrix is one way, the Ansible space, and ansible-community@redhat.com is another way to reach our whole team directly.

And speaking of the whole team, Anwesha, would you like to tell us about this? Okay, Carol and I might be the ones standing here, but this is the work of all these wonderful people. Carol and I are also included in that list of wonderful people, but you can see many of them at our booth, and some of them who are part of the bigger Ansible ecosystem, as I said, will be at our booth as well. So please join us at the booth; if you have questions, we'd love to be interrupted. Whenever you find us, grab us and ask whatever you want to ask about Ansible. So here is our wonderful team. This slide, as Carol has mentioned a couple of times, was made by this person with a red hat, Leo. Thank you, Leo. Many of the questions we've answered in this talk have really been answered by them; we are just echoing their thoughts. So thank you, our team.

Every project, as we were discussing, advances and progresses, and it is very essential that we work on the core of that project: what is the thing we are working for? The mission statement. So we are working on our mission statement. If you go to this QR code, we want your feedback; we need your feedback for describing and shaping the new mission statement for Ansible. Please consider giving feedback there; we would really appreciate it. And with that, do we want to say thank you? Of course we want to say thank you, but before that, any questions? From here or online, perhaps? Thank you very much. And again, if you haven't scanned the QR code, it will be available at the booth as well. Please come talk to us; we'd love to hear from you. Enjoy the rest of the conference. Thank you so much. Thank you so much. Thank you.

Hello. Check, check. Good. Thank you. Yeah, hi all, good morning. Thanks for coming and attending this talk.
Today we will explain how programmers and testers can be productive together, and we will share our experience. Nancy and I have been working for the last nine to ten years on multiple projects, so we have faced a couple of challenges. What are the things we do, what are the principles we follow, so that everything is on time and we deliver a good product? Mostly we will share our experience in this talk.

Let me introduce myself. I'm Anu Singla, working as a principal software engineer at Red Hat. Mostly I work on front-end technologies, and apart from my work I also share my knowledge and teach people on YouTube.

Good morning, everyone. I'm Nancy Chan, a quality engineering manager at Red Hat, and I have over 11 years of experience in software quality. As a QE manager, my team and I are responsible for overseeing the development and implementation of quality control processes, and we make sure that our products reach the highest quality and standards. That's my quick introduction. Now Anu is going to talk about what we're going to discuss today, along with why we chose this topic.

Okay, thanks Nancy. So today our agenda is: why did we choose this topic, what are the top five conflicts, how do we resolve these problems, what happens when developers and testers collaborate with each other, a couple of case studies we follow in our projects, a couple of suggestions from our side, and then Q&A.

So why did we choose this topic? Developer-tester collaboration is very important. We need to work together, and if we are working together, productivity will definitely increase and customer satisfaction will also increase. In the 1990s, before agile, the waterfall model was used, and in that model developers and testers had very little collaboration. I didn't get a chance to work with the waterfall model, but one of my friends, Deepak Cole, shared some experience with me because he worked with it. At that time the tester worked at the end and the developer at the start. Once a requirement came in, an SRS document was prepared; one copy was shared with the developers and one with the testers, and there was very little communication between them. The testers worked on their test cases as per the SRS document. But take an example: if at the end, when the tester is testing the whole product, some critical bug is found, then maybe it requires an infrastructure change. At that point it was very difficult to make changes, because the tester is already testing at the end, and it would delay the release. That's why the agile process came along, and in agile everything is about collaborating: we work in sprints, for example every fourth week we deploy our changes to production, and there are a lot of meetings going on, retrospectives, stand-ups, grooming meetings. It helps us grow and deliver our product on time.

Before I jump to the top five conflicts, I would like to ask a quick question of our audience. You might work with different development teams, or interact with QE teams and programmer teams. As per your experience, what do you think?
What are the reasons that create conflicts between programmers and testers? Anybody in the audience? Yes. Right, I totally agree with your answer: miscommunication, and some pressure around releasing, because we always have deadlines by which we need to release our changes to production. So those are the reasons: miscommunication, time constraints, pressure. And what do you think, how does this impact a team's efficiency and productivity? Anyone? Yes, please? True, you are right. So the question was how this impacts the team's efficiency and productivity, and she said that sometimes we fix a bug, then we test it again, and it becomes a kind of loop, right? I agree: it creates a situation where errors increase, and it also impacts overall job satisfaction and associate morale, because these two teams have conflicts.

These are the five top causes of conflict, as per my experience working with different teams. It's not accurate to say that testers and programmers can never be productive together or be friends, but there are a few challenges, a few scenarios or situations, where the relationship is impacted because of these challenges.

The first one is competing goals. Programmers and testers have different goals and priorities, as per their jobs. Programmers are mostly focused on developing and delivering the software; on the other side, testers mostly work on testing, finding bugs, and making sure the software is of the highest quality. A quick example of how this plays out is quality versus delivery, or bug-free versus functional software. Testers usually try to find as many bugs as possible, aiming for bug-free software, but on the other side the programmer really wants to release the changes to production as quickly as possible. If the testers find a bug and say, okay, we have a bug related to UI and UX, some font or alignment is not good, they are prioritizing customer satisfaction; but the programmer says, we can take that later, let's focus on releasing quickly. So here the conflict arises.

The next one is the blame game, which is very common. Here I will also give you a real-world example about new feature testing. When it comes to new work, we already have an existing system and we are just adding a new feature to it. What testers usually do is pick that chunk and test it, but when testing a new feature we also need to test the regression part. Regression is nothing but checking that the new changes have no impact on the existing functionality, which is very, very important. If they find a regression and reach out to the programmer saying that it's a regression, the programmer might argue: no, that's not mine, you need to focus on the new feature, why are you going into regression?
And I'm sure this is not because of my change. That kind of thing is very common.

The third one is the communication gap, which is a very major point. Communication gaps happen when testers find bugs and log them in different tools, like Jira or Bugzilla, and sometimes they don't explain every detail and piece of information. How does this impact things? The programmer picks up the bug and is unable to figure out the problem; they say it's not working on my machine, which is again very common. The conflict arises because they say the tester hasn't shared the exact steps to reproduce, so they can't work on the solution. So again a conflict arises, and due to the lack of information the resolution is delayed.

Another one is limited interaction. Limited interaction basically depends on the team structure and the project. There is a high chance that the programmer and tester teams get very few chances to interact, and if they are not interacting, how are they going to understand the end-to-end business strategy behind the work they are doing? They might work in silos, and if they are working in silo mode, they have very few opportunities to connect. So limited interaction also creates conflicts.

Time conflicts: both roles have totally different priorities and goals, which we need to keep in mind. As I already explained, programmers mostly focus on delivering things on time, and when it comes to quality, there may be a case where the testers say: okay, we are not ready, we are not confident enough to give the acknowledgement to release, and we need extra time. So again a conflict arises, because the tester needs more time before the release.

So these are the top five causes that create conflict, as per my experience. And this is a very basic scenario you can probably relate to: the developer says "it's working on my machine", and the tester says "no, no, it's not working, you are wrong somewhere". And in the other image the tester says "that's why I'm filing a bug", and the developer says "no, it's not a bug, it's a feature, it's an improvement". So that's how it goes. Now, how can we solve these points as developers? I think it will have more impact coming from a developer, so over to Anu.

Yes, so Nancy has shared a couple of problems; there are a lot of problems between developers and testers. Now we will see how to solve these types of problems. As per me, developers and testers should be friends; they should sit together. After COVID that's not always possible, maybe we are not going to the office, but at least when we do go to the office we should contribute and collaborate with each other.

The first step is supportive language. Always support your tester or developer. For example, if a tester finds some last-minute bug, don't blame them: "what are you doing, man, why didn't you find this at the start, why are you coming in now?" At least thank them: we found that bug before production, and if the customer had come up with that bug, it would have created a negative impact. So always support them. The second point: one testing guru, James, always says that testers don't hate developers.
It's like a wife telling her husband, before he goes out, that there is a stain on his shirt, so that he won't feel embarrassed - something like that. Developers and testers should be productive together, should be friends, should know each other - know each other's strengths and weaknesses. Another example: if the tester is not able to reproduce an issue, help them reproduce it. Give them some system logs, because as developers we know what the inputs are, and if we provide those inputs, maybe the issue reproduces. Provide that information so that they can find the issue and log it properly.

The second practice is the defect triage meeting. We follow this every week, because bugs come from all sorts of places: from testers, from developers, from other teams, and from customers as well. Every week we hold one meeting where we discuss the priority, whether something is actually a bug or not, and what the possible solution is; we put the basics into the ticket, along with story points - the difficulty level of the bug.

And then the code freeze - it's very important. A tester is human; they can't test each and every part of the app. If we keep pushing code until the very end - the production push is tomorrow and we are still pushing code now - then how will the tester test everything? They need to run regression across the whole thing. We have automation now that helps, but we still need to do some manual testing too. What we follow is a four-week cycle: every four weeks we do a production push. For three weeks we push our changes into the testing environment, and the last week is given entirely to the testers to test everything. If they find a critical issue, we fix it and push it to the testing environment, but apart from that the full week belongs to the testers.

The next one is resource alignment. In the IT world, a resource crunch is normal: somebody joins the company, somebody leaves. The standard developer-to-tester ratio is about three to one, depending on the product and people's experience, but that is the usual baseline. So if there is a resource crunch, then as developers we should push less code into production or the testing environment so the testers get time to test, or we can help by testing another developer's code, or even our own. We follow this ourselves whenever we have a bit of a crunch.

I would also add that the resource balance between developers and QE is very important. If the team isn't balanced - for example five QEs to one developer - that's not good, right? And vice versa, five developers to one QE means a lot of pressure on that one person. As a QE manager, I always sit together with the project team and decide what the ratio should be. The industry ratio for dev versus QE is around three to one or two to one, depending on the complexity of the project, the timelines, and what we have committed to our stakeholders. So don't always just run to your manager
saying, we have a crunch, you need to help us. The manager can't do much, right? We have the crunch, so we need to manage it ourselves.

Then there is pair programming and pair testing. I follow this one: I always try to share my knowledge, including architecture knowledge, with the testers. If we share our knowledge, they won't come to us with small bugs like a data issue or a field not showing in the UI; they can analyze it themselves, or ping the backend team directly - I'm not able to see this field, please take a look. It saves a lot of time between developers and testers.

So now we have seen how to solve these problems. Next: at what points do developers and testers collaborate the most? One is shift-left testing: always try to involve the tester in the initial phase. When the requirements come in and we are analyzing how we will deliver the product, if we involve our testers at that point, they think like a customer - how will the customer actually use this product? If they provide their suggestions then, we can use their knowledge and their use cases while designing the architecture of the product or the feature. And if we think through all the scenarios at the start, it will definitely help us later as new things come along.

Then defect reporting and resolution. It's a normal thing for developers to say, I'm not able to reproduce this issue. So when the tester reports a bug, they should attach a video, attach screenshots, and provide proper steps, so that when the developer starts working on it they have the information and can reproduce the issue, and they won't have to come back to the tester saying, I can't reproduce this, what do I do? It saves a lot of time for both sides.

The next one is recognition and appreciation. At Red Hat we always recognize anybody who helps us - we give reward points and thank-you notes - so that the person feels appreciated and knows that their help was noticed.

Now we're going to discuss some case studies. Before I do, I'd just like to call out that these are real scenarios which we are actively using in our teams, built in collaboration between the development team and the program team. The first one is implementing a CI/CD pipeline, which resulted in improved software quality. As part of this case study I'll discuss how testers can help the development team during the development phase. You can see how a developer usually works: they take the requirements and they build. So how can testers contribute during the building, the development phase, together with the programmers? They can add an automation suite.
The automation suite here is essentially a smoke suite, which gives us confidence that the basic, critical functionality of the software is working fine. What we do is this: when the developer is building their code, before it is merged to the master branch, we integrate our automation suite alongside the unit tests. So along with the unit tests, our automation suite runs; if the basic functionality covered by the suite is working fine, the change is ready to go into the merge request, otherwise the suite detects the problem and shares a report with the developer. That way the developer can see, in the early phase, what the problems are - what regressions this new change introduces - and it also saves the testers time, because they don't need to worry much about the basic scenarios; those have already been tested as part of the build process.

The next one is cross-functional collaboration for comprehensive testing. Comprehensive testing is very important because in an agile model we make quick changes and move them to production. One way we achieve cross-functional collaboration is by conducting bug bashes. A bug bash is an event where we invite representatives from different teams - the development team, the program team, and different QE teams. They set aside their normal jobs for a while and start testing the piece we are planning to release. Take the example of an e-commerce site planning to push major changes to production: we conduct a bug bash event where representatives from the development and QE teams come together and do exploratory testing to find bugs and exercise different scenarios - for example the shopping cart workflow and the payment workflow - so that we cover all the relevant scenarios. At the end of the event we call out a few winners on the basis of the quality of the defects they found, their severity, and the detail they shared. Every organization has some mechanism to show appreciation - it could be reward points, it could be gifts. That is how you can encourage cross-functional collaboration.

The next case study is about active participation in the feedback loop - how you can involve testers in it. Customer satisfaction is very, very important in software development, and if we act on customer feedback in a timely way, we increase both overall customer satisfaction and product quality. And as you know, testers and QEs are known as customer advocates, so involving them definitely has a good impact on the feedback cycle. Every team and every product has a different mechanism for gathering customer feedback: it could be email, a form attached to the system, or an account team that connects with the customer and passes the details along. We gather all the feedback in one place, and we usually have monthly meetings where we discuss everything we have received.
On that basis, developers and testers both contribute and share what needs to be prioritized. We log tickets in our ticketing system, and then, in parallel, development works on fixing the issues while the testers work on the test plan. Then we deploy, test, and inform the customer: these are the feedback items we received, we have fixed them, and it's now ready to use. So this is how a tester can also contribute as part of the feedback loop. With that, I'll hand over to Anuj to talk about the overall development flow.

OK, so Nancy has shared the three principles we follow - if you are not following them, try to. CI/CD is very important: before pushing our code changes we test things, so we don't create regressions. And the feedback loop is very important; we follow that too. Now let's discuss how we build and deploy features. This is the full diagram I made to explain how we involve the different people: the testers, the developers, the UX engineers, and the PM level.

The first box is requirements. Requirements come from many places: the PM level, analytics, or the customer. After that we analyze the requirements, and all the team members are there - testers, developers, UX engineers, and the PMs and managers. Together we decide whether we need to do the thing or not, what the priority is, and how the customer will use the feature. The testers give suggestions because they think like a customer; as developers we weigh in on whether it's feasible; the backend team also decides how they will deliver it, whether the UI needs some backend support or some data from the backend. In this way we analyze each and every part.

Once we've figured that out, we go and build: UX gives us the design and we build the feature. After that comes testing, and in the testing phase we developers also help the testers test things and share our knowledge. Then the customer demo: before going to production we give the customer a demo, and if it's a critical or brand-new feature we also provide a pre-production environment and a URL so they can do some UAT testing. After that we deploy the feature, and we continuously take feedback from the customer - that's very important. This is how we try to build the product and keep improving its quality.

Finally, a few suggestions from our side to everyone. Try to build a network with testers and developers: go to offline meetups, go to conferences - like I'm doing here. You meet a lot of people, and you get to know how other people are using the technology and different products.
Continuous learning: it's genuinely difficult. As a frontend developer, something new comes along every day - a new library, and there are a lot of frameworks in the UI world - so it's hard to keep learning continuously, but try to stay up to date with whatever tools you are using; when a new feature comes out, learn it. Always keep updating yourself. And be kind to yourself under stress: in IT, being stressed out is normal, we have a lot of work to do, but take a break rather than starting to blame people - blaming is a bad habit, a break is a good one. Take feedback in a positive way - I try to take feedback from multiple people, and if somebody gives you negative feedback, take it positively and thank them: at least they found something you can improve, and in the end you will be better for it. And build a personal brand - I follow this rule too: try to write blogs, try to educate, for example via YouTube; I do that for web development, and it helps a lot.

With this quote we will wrap up our talk. The quote means that testers and programmers should work together rather than competing with each other; by sharing their knowledge and skills and collaborating, they can create a better, more productive product. Thank you. Any questions?

OK, so the question is how we maintain quality while pushing changes to QA. If you look at this diagram, what we do is this: when we merge our changes - when we create our MR - once the code builds and everything is fine, we run the unit tests and the automation tests. The testers give us a URL; they are also running their automation tests, and in GitLab we run their suite alongside our unit tests. So suppose you have made a one-button change: we can be quite confident that this button will not create a regression, at least in the existing, critical functionality. For unit tests we also follow the rule that we write them every time, otherwise we don't merge the code. Once the existing functionality is confirmed to work, we go ahead with the merge request and deploy to the testing environment, the testers do their own testing - manual or whatever they want - and after that we go to production. And if you see the red marker here, "problem detected" in the unit test stage: if there is an issue while running our changes, the developer takes a look at whether it's caused by their change or whether there is an issue in another unit test or in the automation test, and then we connect with the tester. When we execute the automation run, we get a report explaining what isn't going well, so developers can take a look, because we have already shared the whole workflow and framework with them, along with the integration with the unit test report.
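To make that merge-request gate concrete, here is a minimal sketch of the idea - not the team's actual pipeline. It assumes pytest as the test runner, a hypothetical tests/unit directory for the developers' unit tests, and a hypothetical "smoke" marker selecting the testers' automation suite under tests/automation; in GitLab this kind of script would typically run as a CI job on the merge request, and the merge only proceeds when both suites pass.

```python
# Sketch of a pre-merge quality gate (assumed layout: tests/unit for unit
# tests, tests/automation with a "smoke" marker for the testers' suite).
import subprocess
import sys


def run_suite(label: str, pytest_args: list[str]) -> bool:
    """Run one pytest suite and report whether it passed."""
    print(f"Running {label} ...")
    result = subprocess.run(["pytest", *pytest_args])
    return result.returncode == 0


def main() -> int:
    # 1. Developers' unit tests: no merge without them.
    if not run_suite("unit tests", ["tests/unit"]):
        print("Unit tests failed - merge request is not ready.")
        return 1

    # 2. Testers' smoke/automation suite: basic critical functionality must
    #    still work, so regressions are caught before the merge, not after.
    if not run_suite("smoke suite", ["-m", "smoke", "tests/automation"]):
        print("Smoke suite found a problem - report goes back to the developer.")
        return 1

    print("Basic functionality verified; change is ready to merge.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```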
The next question is how we spread this collaboration across teams, because there is a good chance that some teams are not following this. So, we usually have a QE team, a development team, and a program team, and we have monthly - or at least twice-a-quarter - meetings with the leads. There we have a bi-directional discussion and exchange feedback between the QE side and the development side. As a QE manager I have one-on-ones with my team members to see whether they have any feedback for the developers, and they are open about it: they share with me the challenges they are facing. Because I don't want the developers and QEs to end up in direct personal conflicts, I act as a moderator: I connect with the development manager and the program manager and say, this is what we are facing, how can we close this gap - and vice versa, if the development manager has feedback, for example that there is a resource crunch or that a critical bug reached production in the last release, then as leaders we discuss it internally and make sure it stays a bi-directional conversation. I follow a monthly connect because we run agile, or Kanban, with two- or three-week sprints, so that cadence works.

And the second way: we have the retrospective meeting. If something isn't going well, we discuss it in the retrospective, raise the points, and record action items - and we follow up on them: OK, we will take care of that next time. A first-time mistake is normal, but repeating it is a bad thing, so the retrospective is valuable. We also have a weekly grooming meeting and daily stand-ups. With all of these, at least, we manage to maintain quality. Any other questions? Anyone in the back?

OK, so I'll share my experience, and maybe Anuj will add his. The question is how we can encourage pair programming and pair testing. A few years back, Anuj and I were on the same team, and just before a release I found a very critical bug - within a few hours of when we were supposed to release, while I was just doing smoke testing. When I informed Anuj, he said, Nancy, come to my desk, I'll show you what's going on. He explained the code to me and then said, Nancy, you sit here and fix it. I said, how can I fix it, that's your job. He said, it's just a missing full stop, that's why the blocker is appearing. So I sat at his desk, he walked me through the whole code, and I fixed that bug. That is how you can encourage it: developers explaining things, even very small tips. We encourage that culture in the team - they explain how the console and the network logs work so that testers can understand them. If something is failing, then before directly logging a bug we first check the console - are there any errors - and look at the network logs, because there is a chance it's simply an internet issue, a network connection problem, and that's the real reason it's failing.
Learning these things and collaborating with developers helps a lot. Yes, and I follow this too: I always share my knowledge. If a tester comes to me saying, I'm not able to see this field, I ask them: have you thought about why you can't see it? Have you checked the network call, are you getting any console errors? They think it through, and then I explain what we are doing and why the issue appears, so in the future they won't come back with the same question. The second thing we follow: every month we have a guild meeting where everybody shares their own experience - if somebody did something good, they explain it there, and both developers and testers are in that meeting. And the third way is around the automation tests: if I have some bandwidth - a day, five hours, ten hours - we tell them, give us one automation test, maybe I'll help you, or we review their code and they review ours. Sometimes they can't get to everything, so we ask, is there any scenario pending on your side so that we can write a unit test for it? Because they think like a customer, they have a lot of scenarios we don't think of. And we always help them decide on the framework - which framework is good, what the benefits are, what the best practices are. There's a lot to it; this is just our suggestion. Thank you - if you want to discuss more, we can do it offline.

I don't know where my actual screen is relative to that one... OK, it has decided that that one is the screen it wants to put things on. Yeah, help out, yeah. And one minor note: please do not forget our speakers and our after-parties, and I would also like to remind you about the community and the press. And now, thank you.

OK, welcome to 35 Fedora releases in about 30 minutes. I have timed this talk and gotten it done in 30 minutes, but I have to talk really fast, so I asked for more time here so that I could talk a little more slowly. I will still try to go very fast, and it won't be comprehensive even so, but we should be able to hit some important highlights in the history of the Fedora Project, with some questions after. I think this will hopefully be fun for people who have lived through all of this, informative for people who are newer to the project or just interested, and there are some lessons we've learned along the way - hopefully we can avoid repeating our own mistakes, and maybe other people can learn from them too. Dan, all heckling should go at the end of the talk - I have a slide on that later; yes, heckling should be saved for the end. This is me; no time to explain - if you don't know me, come talk to me later, I'll be happy to, but you know, I'm from the internet and stuff. I have 85 slides here, so I'm going to try to get through them. There's a whole bunch of history about Red Hat Linux and the split to RHEL and all of that - that's a whole other talk, somebody else's talk; I wasn't there for the behind-the-scenes. So this is going to start with Fedora Core 1.
This is the basic format of the slides: the release name and number at the top, and little bits of information around it. I'm showcasing the desktop wallpaper as the visual focus. The desktop has always been important to Fedora, but it isn't the only use case - a lot of people use Fedora for servers, for IoT, for all kinds of non-desktop cases, so those shouldn't be left out - but the wallpaper gives us some visual flair. I have the kernel version over here and things like which init system was in use - some technical details; the actual numbers aren't that important. Over here under events are the big headline events; we've always had a lot of smaller Fedora events too, and those aren't listed - there is a lot going on in Fedora. I also have "most active on the devel list," which is a way to get some people's names up on the screen - you may recognize some of them as we go. It doesn't necessarily mean that was the most productive person that release, though sometimes it is, and there are obviously a lot of quieter people who did lots of work and deserve credit who won't show up here. I also put the Fedora Project Leader's name up there, which means my name will show up a lot - it doesn't mean I'm the most important person, but I get to be the face of things. At the bottom, from my metrics, I have the release popularity over time - a little bit of a spoiler, but we'll go through it as we go - and then some pop-culture references as well. I'll take a sip of water, because here we go.

So, flowers in a pond - a nice quiet beginning for the background. This is Fedora Core 1, and it was basically what would have been Red Hat Linux 10, if that had been produced. It was made essentially entirely inside Red Hat, with the same mechanisms; it wasn't really a community release. It was open source, but all developed internally - basically the name Fedora Core was put on what would have been a pretty nice Red Hat Linux release. And then there was Extras, a collection of add-ons that you could put on top of it, and the community was welcome to maintain those. But during this release a really important thing happened, which I think set the tone for the project overall. AMD64 - the 64-bit extensions to the Intel architecture - was brand new, and it was seen as a server-class, enterprise feature, so Red Hat was thinking: that will be a differentiator. RHEL will have 64-bit, Fedora will be 32-bit, and people will obviously want to pay for that distinction. There was a lot of worry at the time about how people would understand why to pay for RHEL, how it was different from Fedora, so they were really looking for these differentiators. But then someone in the community - Justin Forbes, actually, who is still a kernel maintainer - working under his desk in Texas, I think, rebuilt everything for x86_64 and built a version of it. I wasn't there, I didn't work for Red Hat at the time, but as I understand it this caused a lot of uncertainty: should this be allowed, should we say anything about it? Eventually the decision from Red Hat was: OK, look, these Fedora community people are doing this; we should accept it.
And so it actually became part of the distribution and was officially released at some point as a 64-bit version of Fedora. I think that set the tone, both for supporting multiple architectures and for the Fedora community as an independent force - not just a community doing add-ons, but something that could make fundamental changes to the project. That was a really key early moment. I also want to point out the logo here, a little red hat, which is funny to me because we're not supposed to use hats in Fedora - again for that differentiation reason, which is amusing given the name - and it also looks like Red Hat's new logo, the one they changed, after much expense, from the Shadowman to this little icon.

The next one, Tettnang. There's a complicated process by which these names were derived - each has to relate to the previous one - and I've explained at the top what the connection theoretically is. This was basically another what-would-have-been Red Hat Linux release; it actually looks very much the same. There were some important technical changes under the hood: it had the 2.6 kernel, which was a big jump back then, and SELinux was enabled. Dan Walsh is here in the corner; he can do a talk about that - my notes actually say "that's a whole other talk, and Dan Walsh is the one to give it." But a big question here was: what does this community stuff mean? How involved is the community? A lot of people talked about this - it was actually an email mimicking an IRC chat log that mricon, Konstantin Ryabitsev, sent around to show how people were feeling: the Red Hatters don't seem to get it; we don't really know how the community is supposed to engage here. And in the release announcement, the release is presented by Red Hat, and it isn't at all clear what "the open source community as executive producer," as the announcement puts it, actually means. So from the outside, as a community member interested in this stuff, it was all pretty frustrating. People who were involved with CentOS and watched CentOS Stream and Red Hat's communication struggles with that may experience some déjà vu - it was the same kind of thing in a lot of ways.

FC3: a little bit of progress. The release announcement - no, I don't have it on the slide - says "the Fedora Project and Red Hat would like to announce," rather than just Red Hat, but Core was really still an all-Red-Hat thing. Still, things were happening: the Fedora Extras add-ons could now be downloaded from the same server; it wasn't a whole separate thing anymore. There wasn't a central build system yet - the build system was that you would send your spec file to Seth Vidal, and he would build it for you and put it up - but things were moving: there were about four times as many packages in Extras as before, so people were coming together and working on it.

Then FC4: the same wallpaper - the only time we've ever duplicated a wallpaper, and it's kind of a boring one too, oh well. This release was, I think, probably the worst ever. It took forever to show up, and SELinux was in a horrible state - it wasn't usable, it was very frustrating. Reviews said this is all about features and nobody cares about stability, and that was probably all true. The release engineering process wasn't very good either, and there were actually changes partway through this
long cycle, which meant you couldn't install it directly anymore - you had to install the original release and then do a bunch of updates. It was a whole horrible thing. But there were a lot of exciting community things going on. I may be biased here, because I personally helped organize the first Fedora conference, FUDCon - the Fedora Users and Developers Conference - in Boston, and again, I'm biased, but I really think that was an important turning point. A lot of Red Hat people came, and a lot of people who weren't Red Hatters, and it all felt collaborative and friendly and fun. There were no Red Hatters telling people how things were going to be; it was Red Hat engineers talking with other people in the community, figuring things out and deciding what to do, and community members presenting how they wanted it to go. It felt exciting and collaborative, so that was a really nice moment.

I'm really fond of this background. I'm not a graphic designer, but I looked at it and thought: look how fancy that is - somebody who is an artist made this one. (Maybe the other ones too; that's not fair.) The big news with this release, though, is really a story about yum - the precursor to DNF, the meta package manager. Before this, in the olden days, there was just RPM, and if you wanted to update your system you basically had to download a bunch of packages and hope you got all the dependencies; it was a whole mess. Some tools were developed to handle that, and Red Hat had a huge investment in a thing called Red Hat Network and up2date - which I think was only discontinued a few years ago. It was supposed to be another monetization path, the product, and they put a lot of effort into it. Meanwhile, Seth Vidal took an open source project and put it into Fedora, with the idea that rather than downloading everything yourself, the tool would figure out what was needed for an update, retrieve all the packages from the mirrors, and do it in a nice polished way - you just run an update and it updates your system. It was amazing. And again, as I understand it, there was some consternation inside Red Hat: should we allow this thing? What should we do with it? It got put into Fedora, and eventually Red Hat realized it was actually better than the thing they had tried to develop internally, and it ended up going into RHEL as the update system there as well. That was huge: a letting go of the not-invented-here mentality - OK, innovation can come from the community; it doesn't have to be something designed at Red Hat, even if it runs contrary to where we thought we were going with our business plans; we'll take a leap and figure out how to work with it. That was really important. Although Red Hat still had direct control - I think the release announcement may even have talked about that - all the engineering decisions were still made internally at Red Hat, in, I imagine, some sort of shadowy back room (actually not). One funny thing happened when this release came out: all the Fedora systems were on the same network as Red Hat, and Red Hat was a small company
at the time. The release landed on an end-of-quarter, some sort of important business day, and all the downloads completely broke the network and ruined everything. Mike McGrath tells this story better than I can - Mike is now the VP of Linux or something, maybe an even fancier title ("Big Cheese," says Dan). At the time he was the Fedora infrastructure lead, but he didn't work for Red Hat; he was a community volunteer working for a different company in Chicago. The CEO of Red Hat at the time called in the FPL and said, who did this? You need to get your sysadmins and networking people in here right now. And Max Spevack explained: well, I could, but he's in Chicago, and also I can't tell him to do anything - he doesn't work for us; he doesn't work for you. Very shortly after that, I gather, Mike was hired and then told to fix the problem. That's one way to get a job. This is also where the now-classic Fedora logo was designed. It was made by a brand consulting firm, building on ideas from Red Hat marketing - Red Hat marketing actually used to do marketing work for Fedora - and they came up with the slogan "infinity, freedom, voice," and that's what came together to make the classic logo. I have some fondness for it.

This is a wallpaper a lot of people love - when I asked around, people said, oh, that 3D underwater rendered one, wow - it's one of the favorites. People also like the name Zod; it was a whole in-joke - we've got lots of in-jokes. This release actually happened on exactly the day my second daughter was born, so personally I don't remember much about it, or about the several months after. But I went back and looked, and there was still a lot of time between releases: 218 days, a lot more than six months. There's a whole long story about why there was no Fedora Foundation - that's probably also a whole other talk - but at this point the Fedora Board became an official governance body. Before that there was the Fedora Advisory Board, which was really just a mailing list where anybody could give opinions and nobody made any decisions. Now there was an actual constitution for the project, with the Fedora Board as the governing body: a mix of people elected by the community and people appointed by the FPL, with the Fedora Project Leader holding veto power over that structure. And we finally had a MirrorManager system that let me get a count, so you can start seeing the statistics appear at the bottom of the slides.

Another nice wallpaper: Moonshine. The big thing here was that Core and Extras got merged together. That split was not only becoming genuinely problematic in practice, it was producing a lot of frustration in the community and stress over who was allowed to do what. So Red Hat made another big decision - again after a lot of internal wrangling whose details I don't know - and eventually decided to merge everything into one unified project: no more Core and Extras split. It actually turned out to be easier to move everything from the inside out into Extras, so basically Core got merged into Extras, Extras became the thing, Core went away, and it became one unified release. This is also the first release with live
images. Fedora wasn't the first distro to do that, but live images go all the way back to this release. Also, at the FUDCon we held in Boston again, we talked about this new filesystem called Btrfs - it was fast, should we make it the default? At the time we decided it would be about two more years before it was ready, so in about two more years we'd take a look at it.

Werewolf. Seven was where things were brought together, but eight was really the first community, collaborative release - in some ways this is actually the first Fedora Linux, because it was not just released as a unified thing but actually planned and put together in the open, as a community project. So this was a really fundamental release. This is where the feature process - which we now call the Changes process, or Change process; Ben Cotton wants me to say one or the other and I cannot remember which, sorry Ben - was introduced. The idea is not to have a big bureaucracy, but to stop surprising each other with changes: if you're going to do something big, let's talk about it first and about the impact it will have. It's not meant to slow things down or stop them, just to make sure we're all talking together so we can make things work as well as they can. This was a very, very popular release - there are still at least a hundred systems out there checking into the mirrors every day to see if we will release any more updates for it, just in case. I enjoy that. This is also where the idea of Fedora spins came out. A spin is a version where, instead of the one official desktop release, you have a different way of putting the software together - a different desktop, a different installation. I think this is fundamental to what's interesting about Fedora as a distribution: instead of saying, oh, you want to make it different, well, fork it and make your own project based on it, we say, hey, come into the project - if you want to make your own version that works this way, cool, we'll try to find a space for you to do that. This wallpaper was the first one, I think, with a time-of-day feature, so it changes over the course of the day; this is the evening version.

Sulfur. The features process brings us new features: we got a new init system here, and this is another really interesting story. The init system, after the kernel boots, brings up everything else - it's responsible for making sure all your services are running, that the desktop starts, all of those kinds of things. The old approach, SysV init - somebody can correct me on the details later - basically used a bunch of shell scripts that were slow, buggy, and inconsistent. At the FUDCon - which I think was in Raleigh that time - a guy named Casey Dolan stood up and said, hi, my name is Casey, I'm going to replace the init system; if you want to stop me, come to my talk. People came, and I guess nobody ended up stopping him. Casey was an intern at Red Hat, so technically a Red Hatter, but this wasn't his job - it was just something he was doing for fun; he wasn't working on the OS at all. So this was a community-led improvement. The init system was replaced with - did I say Upstart? - Upstart is the system that was used, and it was an interesting case of an improvement that also went into RHEL, coming from the community rather than starting as a Red Hat decision.

Cambridge. Cambridge was actually going to be,
as I understand it, the codename for what would have been the Red Hat Linux 10 release, so there was some wrangling behind the scenes to get it to be the Fedora 10 name, even though nominally there's the usual link to the previous name - that's actually the real reason this one is called Cambridge. This release is important because it's where the Fedora Foundations came from. There was that "infinity, freedom, voice" slogan, which had basically been invented by somebody external to the project - which is fine, it's nice to have that support, but it didn't really reflect who we are as Fedora. So there was some work on articulating who we are as a community, and out of it came the four foundations that are still used as our guiding principles today: Freedom, Friends, Features, First. I think Paul Frields helped finalize that, and I think Max Spevack, as FPL, really started us down that road. This is a special wallpaper - I had actually never seen this lion before I went and did this: you had to have two monitors in a widescreen setup; the lion didn't show up unless you had a second monitor. These are the icons for the four foundations. Also around this time we talked about Btrfs again and decided it would probably be ready in about two more years. And this release took exactly 196 days - two releases in a row with that number.

I'm going to go fairly quickly through the next few. We had the first FUDCon in South America. This was a pretty nice but boring release - and boring can sometimes be good. I have a note here that we had a beautiful new website, and I also see that I did not take a screenshot of the website but of Wikipedia - this is actually not my screenshot, I took it from Wikipedia ("stole" it - there's CC license information at the end of this talk; it's not stolen at all). There's a lot going on here, but this wallpaper is very dramatic, and I got a lot of "is my screen broken?" on this one. I love our community process for developing wallpapers, but for some reason there's a tendency to make things that look like a broken LCD screen, and this one got through. Behind the scenes in this release, Jesse Keating, who was the release engineer at the time, did a big piece of work switching everything over from CVS as our version control to dist-git. That was actually pretty neat technology at the time, because Git does not do a good job of storing large binaries. Git was starting to take off in popularity, but it wasn't yet the case that everything lived on a Git forge somewhere. Just like today, our primary way of packaging things is to take the official upstream tarball, so you have this official downloaded archive somewhere, and those couldn't really be stuck into Git - so it's a clever system for handling that. It was all pretty neat, innovative work coming out of Fedora, and I think Jesse got a job at GitHub on the strength of it. Also, as part of that transition, we - I say we, but it was probably Jesse or somebody - made a tool called fedpkg that, instead of exposing the CVS commands, abstracted all the package-maintenance operations, so it didn't really matter what version control system was underneath. We still use that fedpkg tool a lot, and it still hides some of the implementation details. I think that's actually also a nice lesson,
because change is hard so yeah you can see huge growth here we've got a nice growing growth trend around there things are going really well yeah so this really lovely wallpaper for this release this release is going to be great right I hear laughter from the audience here yeah so yeah uh oh goodbye to half our users here so there's not enough time to talk about whether system D or GNOME are good we could have a whole yelling fight about that but this is a lot of change all at once and I think if you look at these technologies today I think they're good this was the right choice to go in this direction but wow they were not ready and people were not ready and we actually went back and looked and I was like well could we have documented things better we actually put a lot of work into making this smooth there were videos about all this and it was like we tried but it was just too much change I think in retrospect like that Fed package tool if we would have worked on some things for compatibility and done some user experience things and it would have been a much better experience and it's always kind of a trade off so one of the things that we switched to upstart as in a net system before but there was so much emphasis on making it compatible that basically not only did you not notice any of the change we ended up just keep writing the same shell scripts for it and we never took advantage of any of the possible improvements from it so there's something where if you're focused too much on the compatibility it can hold you back so going ahead is sometimes important but maybe let's go this way again because that was not great this wallpaper obviously a reference to Jules Verne and 20,000 Leagues Under the Sea but I kind of can't help but feeling that the sort of murky depths and gloom was just kind of the feeling that my kids say the vibe of the project at the time and it was kind of depressing and so we got a depressing wallpaper here Jared Smith who is the FPL said that was the awkward teenage years so yeah that made me something to it what did I what architectures just wait sorry I'll fix my slide later I don't know why we dropped anyways the comment from the audience was I've accidentally left my arm off the list here and I should not but yeah we did consider ButterFS as the default file system this time and we considered we decided maybe another about two years that's going to be ready there was actually one big important change that happened here before this Fedora had a contributor license agreement where basically you said that you were giving and at this point and since that we've changed to a contributor agreement which basically says I've got the right to submit this code and it's under an open source license and if it doesn't have an explicit license you can treat it like it has MIT or Creative Commons license so it's not a thing that gives away any rights and so that was a change that I think made being a contributor not working at Red Hat is quite equitable there are a lot of open source projects which have a thing where you have to assign your rights to the company or to some entity that works on it and often that's kind of a unilateral thing where everybody outside of the special entity is bound by copy left it means if you make a contribution that contribution can be shared with anybody and so on whereas the entity can say oh I'm licensing this as proprietary software I'm going to charge a lot for it so even your contributions that are GPL for everybody else for us they're 
special. We don't do anything like that, and I don't think Red Hat does that for anything; I think it's pretty important that we got rid of it. We're even looking at whether the contributor agreement is necessary at all for a lot of cases - there's a concept called "license in, license out," basically the assumption that when you submit a patch for something, it's under the right license simply by virtue of making the contribution - but that's for legal to figure out. I'd also like to point out that on the devel list here, Adam Williamson is both the most prolific thread starter and the most prolific replier to those threads.

Things can't all be murky depths, so we had this release: Beefy Miracle. The mustard indicates progress. The naming process, again - you can see that people were not afraid to bend the rules to get what they wanted with this one. And I'm missing - I'm so sorry I've dropped the architectures - what am I missing now? I think that's supposed to say arm64 and armhfp, whatever. I told them to save the heckling for later, but it's not working. Anyway, many people describe this as their favorite release overall. There were a bunch of features, but I think the important thing was really the camaraderie, the Friends foundation, of this fun-themed release, the silly mascot and everything. Even though things had felt a little depressing, it brought everybody together, and the community felt like, yeah, we are doing something here. That was a really nice moment.

In this release, these tiny dots are actually circles, because it's an idealized physics problem - a spherical cow; there are nerd jokes in the names. This release was kind of a mess, actually. The installer code base, Anaconda, had gotten pretty horrible and needed some serious refactoring, which it got. But the features process was very oriented toward what happens from one release to the next, and it turned out that rewriting the whole installer needed more than one release to get through. We slipped all the way from the fall release to January, which was not great, to say the least. Also, the Anaconda team that does the installer works on both Fedora and RHEL, and RHEL didn't have an upgrade feature - that's actually being worked on for RHEL right now, but at the time there was no upgrade feature - and it was pretty important to Fedora, but somehow it hadn't made it into the requirements. At the very last minute we realized: oh no, there's no way to upgrade, and we kind of need that. So Will Woods threw together something called FedUp, a name I love dearly - I wish we still had it - and this actually, somewhat accidentally, set us on a good path, because it was a better upgrade process. From there we've really worked on making upgrades a smooth process you don't have to stress about - it's not "oh no, Fedora's short life cycle means I need to give up two weeks of my life every year making sure the upgrade works," it's just "run the upgrade and you'll be on the next release," and smooth sailing from there. So it turned out well through serendipity, even though it wasn't the intended result. Another thing that happened around this time: we had a lot of pain with release engineering. I won't name names, but one person had been doing most of the work in that area for a number of years, and had
ended up very overworked - the person who knew how to do all the things, which nobody else did - and also with some amount of pride in that: I am the person who knows how to do all of this. Which is totally understandable; a lot of things in open source are driven by that feeling - yes, I have ownership, this is my thing - but it can also be pretty unhealthy, both for that person and for the project. There was a lot of frustration, and I don't blame the person at all: this was systemic, a Fedora problem we hadn't addressed very well, because we should have found ways to support them, and ways to let them keep that pride in a form that was about the team and about shared success. But it was definitely causing problems around that time, and I think it's a bit of a repeated theme.

Again we've got nerd jokes here - these are boxes to put cats in. The problems, oh yes. This turned out to be a good QA exercise: putting Unicode in the name exposed a lot of exciting issues everywhere. I think this was also the first name with a space in it, which actually turned out to be worse, and an apostrophe, right - I think we solved the apostrophe by making it a proper Unicode apostrophe. So all of that is kind of fun, but the biggest thing around this release was an immense tragedy. Seth Vidal, who I mentioned before as a really important early contributor to the project, was killed by a hit-and-run driver while biking home. Seth was very humble and he would have downplayed this, but he was so important to making Fedora what it is, both in the community and in the technology. When we were both university sysadmins at different universities, he helped convince me that it was worth the effort to be part of the community and give things back rather than just work on my own. He was kind, he was funny, he was brilliant, and he was a dear friend. Seth, we miss you very much.

This wallpaper has a 20 worked into the background for Fedora 20 - very subtle. This release had a lot going on. I was actually going to put a picture of myself wearing the "10 years of Fedora" shirt here, but I decided that if I was going to cut something out, the vanity could go. The badges aren't actually in that screenshot; I just plastered them on because they're fun. We launched Fedora Badges, the gamification thing - something we're working on again and making more central - as a way to bring people into the project, to give people easy ways to get hooked on different areas, and also because some people are very much driven by "can I get a digital sticker for doing a thing?" We have digital stickers; it's very fun. This is also where, at the ten-year mark, we decided it was time to do some strategic planning. We started out great, then we had this big drop, and things were picking up again; we wanted to make sure the next ten years would be awesome. We "flocked" to a Fedora event: we moved on from FUDCon, and the idea - Robyn Bergeron, my predecessor as FPL, had this idea - was that rather than having these little distributed events with no real focus, we would bring everybody in the world together for one big event where we would all talk about what we were going to do, plan the next release, and pull things together. That became a focus of a lot of the strategy. Tom Callaway and Ruth Suehle did a lot to make it a
reality. I'm dropping names, and I probably shouldn't, because I know I'm leaving out a lot of amazing, important people - if I left your name out, I'm sorry, you deserve to be mentioned as well. There's a release engineering comment here: this was a hard release because of all the changes we were making around this time. Also, by this point the Fedora Board had kind of faded into the background. There weren't many big decisions to make; occasionally something like "is this open-source-y enough?" would be raised, the Board would go off and have a debate, maybe come to a decision, and then issue a sort of prophetic announcement that wasn't an answer to the question actually asked. It wasn't really very functional, so FESCo, the engineering steering committee, ended up driving a lot of the strategy work here. This was Fedora.next, where we came out with the different editions. One of the big problems causing stress in the project was that the earlier strategy had declared the desktop release "the default offering," while a lot of contributors - as I mentioned at the beginning - were using Fedora for server use cases, not the desktop. With the desktop as the default, a lot of decisions were made as desktop-oriented decisions, and people with sysadmin interests ended up feeling that all they could do was complain - that saying "no, no, no" was their role in the project - and sysadmins tend toward grumpiness anyway, so that was easy to exacerbate. Part of the idea here was: let's not do that. Let's give people with different use cases - Fedora Server, IoT, the various cloud use cases - a positive way to work on them and say, OK, you can make the decisions that are right for your use case, and the desktop doesn't have to try to placate server needs for a very different use case. I think that has worked out really well. Some of the other ideas, like Fedora rings, aren't quite there yet. Did I get all the architectures right this time, Peter? OK, awesome - no, still got the list wrong. That's another talk. The three editions at the time were Fedora Server, Fedora Workstation, and Fedora Cloud; we've expanded since then, but that was the basic decision. Actually, because of the release engineering problems and everything else, we decided to stop and skip a release - for the first time, and probably the only time ever - to give release engineering and QA time to recover from that hard release and get the tooling into shape to do things. It turned out OK, but I think staying on the cadence is something we should stick to; that's a whole other big conversation. A lot of people were skeptical about the editions idea, which is fair - it was a big change - but I think the idea really worked. You can see it as these numbers go up: it was a hard decision, but the project has seen a lot of growth, and that strategy and approach made things better in the community and made the releases a lot more popular. This is where, as part of that strategy, we decided
to have a brochure site get fedora.org and we're going to also have something that went hand in hand with this a contributor focused site that was going to be a thing called fedora hubs the idea was um there were like eleven hundred IRC text chat meetings every year and a lot of stuff on mailing lists but if you looked at our website and everything that maybe somebody who is new to the project might come and look it looked like we were dead they would look like there was no activity on the internet happening because you know the internet had changed from being something that was a lot of these different text based protocols to you know the web and social media and those kind of things were really uh becoming the focus of what people thought was the internet and so the project didn't really show up so we wanted to have kind of a social media view of showing all the activity um that turned out to not work so well um but um that's um another talk but um we kind of come back to that idea later of making sure that the project is visible and um activity is surfaced um Josh Boyer says that he was the blame for killing the names here but really um so we stopped having the fun code names uh the reason is we were coming up with these lists with these tenuous links we would take them to legal they would review the you know ten candidates and say these two that are the worst possible ones you can use one of those and then that was like we were using quite a lot of legal resources on that so fine we decided okay that's it's not it's fun but it's not worth it for that we have to find our fun in other places I'm sorry um yeah um I mentioned that the board wasn't really working um not just that it wasn't very active but um it wasn't very connected to the project so we had made this you know three editions decision the board actually approved that went through this whole process of that um and then at the flock conference a little bit later we were in a room like this and a lot of people were like no one made that decision that wasn't official and like it was supposed to be official um so like that like clearly when you have a community led project a community driven project you can't have top down decisions that just declare things it's got to be something that's connected in order to understand okay here we have a process for decision making the decision has been made and it wasn't really working very well so we came up with a new structure called Fedora Council um it maybe also could use some improvement but I think it's worked pretty well where it tries to draw people who are involved in different areas of the project into leadership and to have a more active um not just a um behind the scenes governance and dispute um resolution but actually active involvement in leadership in the project um we also got rid of the exclusive veto for the throw project leader by going to a consensus model where effectively everybody has an equal veto on something and we have to figure out how to all get along on things which actually has worked very well we have never gotten to a difficult situation where you haven't been able to reach a consensus um how to do consensus that's a whole other talk I would love to do but again I'm actually running way over my time here because I'm going much slower than I was going to um one thing we did add um we have a thing called the fcake the Fedora community action and impact coordinator that is something Robyn set up and is one of the reasons I've been able to be Fedora project 
leader for so long it is no longer just one person isolated there's a kind of another full-time person to help with community building and activities in the project which is um very very very helpful um yeah um I can talk also about why that's not called community manager and a whole other talk which I will not do right now um change things up Fedora uh KDE Plasma desktop here um we got a diversity inclusion advisor I think that's something um I would like that to be a fully paid role I can't convince red hat to do it if anybody else can come up with funding for this I would love to have that be a fully paid role um yeah a lot of things going on here I'm gonna go here XFE spin here um a little bit of working in marketing Fedora loves python here um one of the things that happened around here though I would have brought the docs team kind of got into a situation where it turned out where we hadn't been paying attention and one person had become the docs team who had all the docs team knowledge and docs team work and that person got burned out and suddenly we didn't have a functioning docs team in the project and again this is not that person's fault it was something we're structurally like it was going along so well um we weren't really paying attention and giving them the support and the team the support that they needed um so uh um yeah here was a really short release cycle this is something we decided on purpose because previously when the release had drifted we decided we said okay well we'll go six months from now to the next release and that was causing releases to rotate all around the calendar and be unpredictable so we decided well we're gonna stick to the fall the October May release cycle and even though that this will be short um it turned out to be an okay release despite that there wasn't a lot of changes that all worked out well um and people actually really liked this one it got a lot of really good press which was very encouraging to people who kind of went through those awkward teenage years you can see in the graph here this big there kind of I think the things we were working on people started to take notice and be like hey this is what's going on in Fedora these days um Peter Robinson I have you name dropped here uh you said you made yourself redundant for secondary architectures we got rid of this idea of having a whole another process for some of these other architectures and kind of merged that into the main thing maybe I have the list right this time who knows yeah alright I got the list right too so that works out um this is I think my favorite wallpaper I it's beautiful um this is actually not my voice print it's the designer Kyle Conway saying Fedora in a voice print of that and then that becomes the trees and it looks pretty um we got a new mission statement I'm not going to harp on that too much but I think it kind of goes back in a lot of ways to that thing about spins and where we try to make it so you can you can as Fedora as a project we want to make something you can use as an end user but we also want to make it easy for people with ideas about how to make an OS how to make things better how to work on those ideas and deliver those to people um it goes yeah uh from Fedora next rather than just like handing out a bunch of building blocks um we also we want to give out some pre-assembled things but we also want to give people those building blocks and like hey you can make your Lego set and give that to people that's um something that we try to empower as a 
Modularity is a whole other retrospective talk; I'm not even going to touch it. We finally got MP3 playback. Software patents are a huge impediment to free software; our basic tactic is to wait them out, which is not a great way, but we actually waited this one out, so there we go. Yeah, the modularity thing was a bit of a stress. This release took way too long... or rather, the previous release took way too long, so we made this one short again; modularity was really the fire here, and again, that's another talk. Sugar on a Stick, which is a user interface designed to be intuitive for children, is actually kind of amazing: you put it in front of an adult and they're like, "I don't understand how to computer anymore," and you put it in front of a four-year-old and they're like, "whoa." So it's actually great. This is another "we broke the laptop screen" wallpaper; I don't know why we keep doing that. This was actually the first ever perfect execution of our planned release schedule: we were exactly on time, and the modularity thing actually landed, so that was nice. Part of the trick here was dropping alpha releases, and the quality team did a lot of work around that; Ben Cotton, as program manager, deserves a lot of credit here as well. Less good: we had somebody who had been working on the websites team and who over time became the websites team, and he felt ownership of everything. Once again, he was doing such an amazing job with it that we didn't really pay attention to that, and when he got a different day job, suddenly we didn't have a websites team. It turned out that when I went to Red Hat and said, hey, we need websites people, they were like, no, that's not important for RHEL. So that was a whole crisis about how we were going to have a nice, pretty website. Actually, it turns out, and this should be a whole other thing, that we now have a nicely revitalized, community-built websites team that made a beautiful new website; if you go to fedoraproject.org now, it's amazing, and that was really done by building a community of volunteers interested in doing it. So we're getting better at this, but we have certainly had this problem over and over and over again. I love this wallpaper; it's cool. One of the big things here: Red Hat surprised everybody by buying CoreOS, the company, and they didn't know what to do with CoreOS, the Linux distribution, so I fought pretty hard to have Fedora be a home for that, and I'm glad that I did. If you look at a different talk, or come to Flock, our next conference, I'll show you some statistics: Fedora CoreOS is doing really well. And from now on, at least through this talk, we're basically 180 days apart, release to release, very consistently, plus or minus one, I think. Fedora Silverblue came out around this time, based on the CoreOS technology, which showed that a spin doesn't have to be just a trivial change of packages. Why are you making a face? What have I done? Oh... oh, we didn't... yeah, whatever, we'll talk about that later. The idea is basically that you can make a big change in a spin; it doesn't have to be just a different desktop technology packaged in the same way. You can change how the OS is put together in fundamental ways and still have a Fedora spin that shows that, and I think that's a big, fundamental thing. Okay, I'm running out of time, so let's move on. We dropped 32-bit x86; that's a big thing, there were not enough people to maintain it. This is a cool release; I feel like that's a really nice 80s-throwback wallpaper there. There's plenty of other stuff, but the big highlight is that Lenovo decided they would ship Fedora Linux: you can buy it out of the box. One of the amazing things about this is, first of all, that we didn't go to Lenovo to make a deal; they came to us, because of customer demand. And more than that, they didn't want a customized version; they wanted exactly what Fedora as a community was producing, with no changes. They wanted the real Fedora Linux, not some sort of Lenovo-ized version; they didn't want to take control over the kernel; they wanted to deliver to their users what we are making. So that was a really strong vote of confidence and a really nice way to work. There have been a whole bunch of supply chain problems and COVID and whatever, and it's still not as available as it should be, but, you know, call Lenovo and ask for this, because sales people hearing that demand is really what determines whether it becomes available worldwide. COVID times: we had Nest instead of Flock, a virtual meeting for everything. I was actually kind of dreading that, but it turned out that our virtual conference was amazing and energizing. I didn't organize it, so I can say this: it was the best conference of all of COVID. Most of the other things I went to, you know, Red Hat Summit, sorry, Red Hat Summit people, were just kind of draining and awful, and at best they were like, oh, here's a webinar. Our conference really felt like the community coming together from around the world and having a virtual party, and that was amazing; again, our community is wonderful. This is the peaceful universe wallpaper, I think, escaping the stress; that was a nice one, and there's a night version of it. Here's a case with GNOME 40. Remember back to where we lost all the users: this was another really big user interface change, moving things from horizontal to vertical, or the other way around. I think both Fedora and GNOME upstream learned a lot from that, and actually spent a lot of time doing real user interviews and research before putting the change out there, and the results ended up being great. I was worried, I thought there would be drama, and it turned out, well, some people aren't perfectly pleased, you're never going to make everybody happy, but I think the big change landed very smoothly. PipeWire, a big audio change, landed; there was a little bit of roughness there as well. But this is actually a funny marketing lesson, obvious in retrospect: most people shouldn't ever have to care about their audio subsystem, but there's one set of people who care a whole lot, and that is YouTubers and podcasters. Suddenly, around this release, we were incredibly popular with YouTubers and podcasters who had an exciting thing to talk about, and they really liked it, and that jumped up how much people were talking about us. So maybe appealing to the audiences that talk a lot about what they're doing is a way to get people to talk a lot about your things. We also have our new logo here. Mo Duffy does a great talk about why we have a new logo and how the design went into it. I really like it, and I'm glad that "fedoro" will no longer be one of our top typos in searches, from people whose languages don't recognize this as a word and who assumed that's what the old logo said, because the old A was very hard to distinguish from an O. Okay, so, up to this point, you know, best release ever: Fedora Linux 35. This is the new overview, the new sideways layout. We didn't quite make the 182 days there, so I guess more than plus or minus one, but still a reasonable range; we got things right instead of rushing. Some of the big changes aren't on the slide: we launched chat.fedoraproject.org, moving from IRC to Matrix, and we're working on discussion.fedoraproject.org. I know some people are very resistant, but I want to move as much mailing list stuff as possible over to that, as a modern, friendly interface that is transparent to people who are not necessarily deeply involved in the project. I also want to make sure that people who are deeply involved have a good experience, so we'll work on that transition; it's not going to be dumped on people, but I think it's actually a really important thing. We also have a renewed Fedora Ambassadors program around this time, getting ramped up after COVID. We're working on some investments in mentoring, and we had a mentor summit; I think that was an exciting thing. And the new websites team that I talked about started coming together as well, really with lessons learned about how to make this a team success rather than something that rests on one individual; I think that was really, really good. Kinoite was somewhere around here; we started having some different variants using OSTree. Was that in this release here, Kinoite? Yeah. So, I was asked at the beginning where the missing three releases are. I started adding them on here, but I realized I didn't really have the perspective to talk about these most recent releases. I'm sure there are interesting things to say, a lot has happened in the last year or so, but I decided to put the seal on this talk here and end it, and maybe in 15 years someone else can do a sequel and talk about what's happened over that time. But yeah, I can stop this here. So, quickly, some of the harder lessons: what should we not repeat? You need to find a balance when you're making big changes. You can't just document it away; you've got to make sure you're listening to users, you've got to make room for mistakes, but you also have to listen and deliver what people want. And having a community that cares about the project and is working together can make the rough parts actually functional. Okay, I'm running out of time for questions. Community teams need momentum to keep going long term. You've got to make sure you're watching the people who are in positions where there can be a bottleneck, so that they don't become one. By the way, this is one of the official Fedora colors, so there we go. We've got to make sure that our teams are not depending on just one person, that people feel recognized and awesome and supported,
but also that if they want to go do something else, they can, and the thing they're working on will continue; that should be a point of pride and success. And, going back to the thing I talked about: Red Hat letting go and letting the Fedora community lead has really brought a lot of innovation into the project, and I think that's one of the things a lot of projects could learn from. It's scary to do it that way sometimes, and it's hard when you have to figure out how to get a community to come along with a decision you want to happen, but you get better results that way. Okay, I have left some time for questions here, so let's go to Peter Robinson. Is it about how I've gotten the architectures wrong? Which release was that in? Yeah, Fedora IoT; I think I mentioned it somewhere there, but I may have just skipped over it. Fedora IoT is an important edition that we added, one of these use cases where we thought we need to make sure we have something that covers this emerging area in technology, so that's a very important thing. And actually, going through the process of adding a new edition made us come up with a "how do we add a new edition" process, which we had not had before, so we have that as well. Yeah, that was the thing. Dan? Yeah, right, so there's another big change: we switched to something called cgroups v2, which is a kernel-level thing that's used for making containers be container-y. I was almost going to say "making containers contain," but then I remembered I've got Dan in this room, so I shouldn't say that. But yeah, that was a big change, and along with systemd and those things, it's one of the situations where if no one takes the first step, it never actually gets exposed to users, it never gets there, and things drag on forever. So at some point you've got to make that change, and finding the balance of when to go for it and when not to is important. In some ways, with the editions split, I think CoreOS was slower on that; some of the editions decided to do it differently. Yeah, right, so some of the releases decided, okay, we won't make this big change right away, and rolled it out slowly so that people had the option; it was the default in a lot of things, with fallbacks, so, okay, this is going to just work.
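As an illustrative aside on that cgroups v2 switch, here is a minimal sketch, assuming a standard /sys/fs/cgroup mount on Linux, of how a program can tell which cgroup hierarchy the system booted with; the unified v2 layout is what container tooling now expects:

    import os

    def cgroup_version():
        # The unified cgroup v2 hierarchy exposes a cgroup.controllers file
        # at the top of /sys/fs/cgroup; the legacy v1 layout does not.
        # (Hybrid setups mount v2 separately under /sys/fs/cgroup/unified.)
        if os.path.exists("/sys/fs/cgroup/cgroup.controllers"):
            return 2
        return 1

    print("cgroup v%d hierarchy" % cgroup_version())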
Yeah, so actually that joke... why did I skip over landing my joke here? I've done another version of this where I landed the joke properly, but I had spent too much time on other things. So yeah, there are actually several other times it comes up: we had considered Btrfs throughout the years, and every time we did, we'd be two years off. And then, between Red Hat storage internally, and this is actually another one of these "let the community do things" stories: the Red Hat storage people decided, we're not going to wait for this anymore, we've got to focus on what works, and they put all their effort into a thing called Stratis, built on LVM and XFS and so on, to give the same kind of features. People in the community who cared about the things Btrfs could bring, including people working at Meta, where they have a big investment in Btrfs, finally said, okay, look, it's ready, let's do it. So we went through our change process, with a lot of debate, and with the Red Hat storage team weighing in to say "please don't do this," and we decided: you know what, we've been considering this for years and years, it brings some neat things that our users would like to see, we're going to do it. And this is again a thing where Red Hat doesn't call the shots, and so Red Hat, for things like that, has to figure out how Fedora can be a good upstream while still making different decisions on some fundamental engineering things. But yeah, we eventually did make the decision to do that. Another one along the same lines is the change in baseline architecture. Basically, as new CPUs come out they get newer and newer features, and if you compile your code for a newer CPU it can have better performance, but it won't run on the older systems. Red Hat, for RHEL, wants to update to a newer baseline, but we have an ironic problem in Fedora: because we move fast, we can't move as fast on some things, because we'd leave people behind. With RHEL, since it's got a long life cycle, if people can't use the next release on their old hardware, that's fine; they'll keep running the old release for as long as they want to pay for it being supported. There's probably a RHEL 4 out there somewhere still being paid for by somebody; Red Hat will take your money. But in Fedora, we're going on to the new thing, and we can't say, okay, sorry, all your hardware from five years ago, put it in the trash and buy new computers. So we had to move more slowly. The people at Red Hat who were working on this were like, well, Fedora will never accept this, and I asked them to bring it to the community anyway to discuss it, which we did. It was a big discussion, but out of it we came up with this idea of having something called ELN, which is a build of Fedora with defaults that are set for what would become the next RHEL. So we actually have a place in Fedora where people can experiment with doing things in a different way; we can do those builds even if we don't make that the mainline thing we deliver to people. In some ways Red Hat is special in being able to do that, because they're paying for a lot of the resources, but if anybody else wants to come along with a thing like that and can help provide resources so we can do it, Fedora is open to that. So if you've got some other big idea you want to do... Intel, if you would like to do your Clear Linux thing as a Fedora build, come to us, we can make that happen.
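To make that baseline idea concrete, here is a rough sketch, assuming an x86_64 Linux machine, of checking whether a CPU actually has the instruction-set features that a build targeting a newer baseline would assume; the particular flag names are illustrative (roughly the x86-64-v3 level), not the exact set any distribution picked:

    def cpu_flags(path="/proc/cpuinfo"):
        # Feature flags the kernel reports for the first CPU (x86 Linux).
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    # Features a build targeting a newer x86-64 baseline would assume;
    # the exact required set depends on the level chosen.
    assumed = {"sse4_2", "avx", "avx2", "fma", "bmi2"}

    missing = assumed - cpu_flags()
    if missing:
        print("older CPU: binaries built for the newer baseline would not run,",
              "missing", ", ".join(sorted(missing)))
    else:
        print("this CPU could run binaries built for the newer baseline")

Older machines that fail this kind of check are exactly the ones a faster-moving baseline would leave behind, which is the trade-off the ELN build lets people explore separately from the mainline release.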
What is the next big technical change for the Fedora project? Predicting the future is hard. I think it's both a technical change and something that is going to be a kind of social change. I think this rpm-ostree approach that's used in Silverblue, Kinoite, CoreOS and IoT is the way we want to go for all of our defaults, at least. It has a lot of advantages for delivering a system that's consistent; you can do cool things like bisect to find exactly where a problem came from; and it goes towards a container-focused separation of what is the base OS and what are your different applications, to solve the problem we were trying to solve with modularity, which is that everybody has different opinions about the speed they want different parts of their operating system to move at, and that approach lets you separate things in a way that allows that. So I think the change towards that, in whatever form it takes, is probably the next really big technical change; there's also the social change of convincing everybody that's the right way to do it. Yeah, so the question is basically: with the changes in the CentOS project, would Fedora make sense as a cloud server operating system in actual production? I can't necessarily speak to your particular use cases, and there are hopefully reasons for RHEL; Red Hat pays my salary, and I hope there's value Red Hat provides for that money. But I also think Fedora can be used in production, and people are using it in production in some pretty large-scale things. Depending on your ability to work with the community, your tolerance for risk, your configuration and setup, and your change management processes, yeah, you can definitely use Fedora in production for things like that; we can talk more about your specifics later as well. Does that cover what you're asking? Or maybe it's crazy in a good way, the right kind of crazy; it can really work. I guess I'm out of time here. Thank you, everybody. Hey, hello, good to have you here. I'm Karolina. I work as a software engineer for Red Hat, but today I'm here not as a company representative but just as myself, a person who started with open source not so long ago, even though my experience in IT is longer than that. The inspiration for this talk was that I haven't met many other people like me who just started, so I thought it would be valuable to share my experience and my observations of how this whole process looked and how I started. I'm not very good at talking about myself, so here is just a little bit about the subjects and concepts I deal with in my daily job: I mostly work on and contribute to the RDO project, where we ship RPM packages of OpenStack for CentOS, so mostly I do packaging, development and debugging, and also some coding. But let's get to the question: how did I meet open source? When I was starting in IT I was fascinated by networking and security, and it became obvious to me that you can't do that without the tooling available for Linux, and tooling for Linux mostly means open source. So I did two large student projects. The first was my engineering project: a sandbox network for security research and testing, a kind of honeypot, using technologies like Snort, Nmap, Kali Linux and Security Onion. Then for my master's thesis I was testing and benchmarking distributed file systems. Some time after I graduated, I realized that two of the systems I had tested, Gluster and Ceph, were Red Hat projects; I didn't know that when I was doing the work, and Ceph came out best of all the systems I tested. Once I knew some Linux tooling, I was really amazed that these wonderful tools are so powerful, open and free for everyone; that idea was amazing to me. That's why I went to FOSDEM, a large conference for open source in Brussels, and there I saw the community from a closer view, and it was an amazing experience for me. How I didn't meet open source was through student interest groups: it's a pity that, at least at my university, there is no way to encourage people, at the moment they are actually choosing their path or their career, to go into open source. Also, none of my friends or colleagues from my studies was contributing to open source; I was alone in this topic. Why do people enter open source? There are many reasons, and they are mostly mixed, but a few are the most common. Generally, people want to gain new skills: they start new projects, some new adventure, just to learn new things, and they decide to share them with a wider range of people. Also, they start something new because
they need it for their work or for another project, and they just put it on GitHub as open source under a proper license. Sometimes they need development in an existing project: they're missing some features, or they discovered a bug which is really painful for them, and they decide to fix it or add the feature, and that's how they become open source contributors. And the third way, I would say rare but becoming more common, is to be hired to contribute to an open source project, and this is my path of growing in open source. Speaking about motivation, there are also some softer, non-technical reasons: people like teaching others, they like sharing knowledge, people have a need for belonging. All these motivations are mixed, but it's worth remembering that most of them are not materialistic. No matter what motivation stands behind it, people somehow have to start the work, and I have observed three mandatory pillars of a good start: onboarding, mentoring and documentation. The basis of a start in any kind of project, no matter whether commercial or open source, is the onboarding process. For open source projects it is crucial for gaining new contributors, and for commercial ones it provides a smoother and quicker start; it also helps to reduce stress. A good onboarding process shortens the time until people stop being a cost, and it also provides value, because, don't get me wrong, even the best engineers are a cost at the beginning, until they get into the project and gain enough knowledge to provide value. Speaking about onboarding, it's worth mentioning a kind of vicious circle. If you have a project where contributors or developers are overwhelmed with work, they don't have time for people, they don't have time for onboarding: "leave me alone, I just want to work." But if they don't onboard new people properly, they keep having too much work, because they don't have anyone to help them. That's why it's very important to have this process in place: it helps with that and automates some of the steps for any new person. Everyone hates writing documentation, including me, but good documentation supports not only newbies but also experienced contributors and maintainers, and it is a far more important tool than just the opportunity to say RTFM. Think about it in a more materialistic way: when some process is undocumented, every new person dealing with it has to do some, let's say, reverse engineering on it, and that takes time; it's a cost. I heard a quite good description of such a situation: it's called tribal knowledge, and this is a piece of the project which is undocumented because it is so obvious to a maintainer that they didn't even mention it in any kind of documentation. It can be, for example, setting a flag before an infra deployment: you just start your deployment, you do everything, and you hit all these horrible bugs and struggle with them. The poor newbie spends time digging around and wastes time. I know that trying is part of the process and a way to grow, but come on, it's pointless; this time could be spent on something more valuable, and it's enough to have this flag documented. That's why it's called tribal knowledge. Also, speaking of documentation, there is one important note: keep it up to date, because if your documentation is outdated, the people using it, users or new contributors, learn that they cannot trust the documentation, they will keep asking you anyway, and the documentation is pointless because it's outdated. The third pillar of a good start is having a mentor. By a mentor I mean a person, on a partnership level, who will show you the project, explain the rules and show you how the team works, but also somebody who will grow you in other ways than just technical ones: somebody who understands what you like and will help you clarify the direction you want to go in as a developer. A good mentor should also understand the tasks and help pick the right task, one that will grow the newbie and that the newbie will like. I have to say that I had an amazing mentor on RDO, and if you have any possibility to find a mentor at your company, please do it, because having a mentor is a kind of highway compared to the ordinary road. In all this there are challenges. The first of them is remote life: if you work for a corporation, you at least have some chance of an office; if you work like me, 100% remotely, prepare for remote communication and a kind of remote life. I have to say that COVID developed the culture of remote working in Poland, where I work; it showed many employers that remote working is possible, it's efficient, and people actually do the job when they are at home. It also proved it to me, because before COVID I never thought I had enough courage or self-confidence to even try to work remotely, and not in my native language; COVID just left no choice. Speaking about language: English is not a native language for most of us, and this fact is often neglected. People are afraid of asking for something to be repeated, or of just saying that they don't understand, for fear of being seen as incompetent. It's good to remember, when communication problems appear, that they may not be related to the technical abilities of a person; it may be a simple, basic communication problem related to language. When I started my work at Red Hat, I was told I would have a meeting, normal stuff, but it turned out to be on IRC. I was so shocked; I couldn't even understand how it was possible that meetings are written. I also thought that IRC was a communication method that appears only in xkcd memes, but it's still alive. And now, from this perspective, IRC is our official communication channel for the OpenStack community, so I understand why we are using it; it has its advantages, because it's very easy for the whole community to use. At the time, though, I was very shocked and not at all prepared for that. I can say that Fedora did it in a nice way, because they moved to a, let's say, more modern communicator: they moved to Matrix, which has a very nice web user interface, and it's easier for anyone from outside to just use it. The challenge of communication is to choose a platform that anyone who would like to have an influence on the project will be able to use. Communities are relations; communities are people. So building relations inside a community, and between the community and the commercial world, is very, very important, and this is a huge role of community architects: to build these relations. Events like this one are very important for communities, because this is a place where you actually go to meet people, exchange your experience, exchange your ideas, and that's how a community grows. It's also very important to build a welcoming culture inside the community.
When you come from outside the community, it's very nice, and I experienced this, to hear: you are welcome here, we are happy to have you here. It's very good for the beginning. It's also very important to have a safe space to ask, to not be afraid to ask, to grow and to solve your issues. Then there is the governance of the project. The world of corporations and the world of open source differ: in the corporate world we have words like tickets, KPIs, performance, measurements, you know them, and in the same form they don't exist in open source communities; it's just not the way they work. The challenge is to find the balance between these two approaches: the workflow in the projects, the way you make decisions, the way you even run meetings, the way you ask for issues to be solved cannot be done the same way in a community as they are done in a corporation, because the voice of the community has to be taken into account while making decisions, or while doing just normal, basic duties. I have some tips for other newbies. First and most important: use the time that is considered your training or onboarding as your time to be safe to ask, because everyone expects that you will ask, that you won't know things but will be looking for answers, looking for solutions. That's normal, that's okay, and you should make the best of that time. But to be able to ask proper and valuable questions, you have to first try to do something yourself. If you want to ask somebody a question, don't let anyone answer you with "let me google that for you" or "here is the documentation, just read it"; try to avoid that, because it's a waste of time and, I think, a kind of lack of respect. So first just try to do the job. It's normal that you will probably fail or get stuck on something, but at least try, at least make some effort to solve the problem; this is something that will improve your skills. Make friends, talk with as many people as possible, because you never know with whom you will collaborate. The onboarding time, usually the first period, depending on the company or community, is the best time to meet new people, to make new friends, to say "hi, I'm new here" and just build a network between people. And as I said, if you have any possibility to find a mentor, do it. There may be a mentoring program in your company or in your community that you don't even know about, so maybe just ask your manager, or try to find one yourself, because you will only profit from having a mentor. Now, tips for mentors. As I said, keep your documentation up to date; I think I have argued that quite enough, so I won't repeat it. And again this word "asking", but now from a different perspective. Why is asking so important? Because it is the root of communication: communication should not go only one way, it should go both ways, no matter whether it's with your mentor or your manager; you should just exchange. I know this sounds very trivial as I say it, but what I observe is that people don't talk, so sometimes they are not even aware of each other's problems. If you are a mentor and you give some task to your newbie, for example "read this documentation" or "try this tool", then after some time ask your newbie and try to figure out their level of understanding: what did the newbie get from that reading, from that tutorial? This is a great way for you to understand how the newbie is looking at the project, and also an opportunity to identify the places in your project that are poorly documented, unclear or just undocumented, because as a mentor you probably don't have this perspective: you have been in the project for so long that too many things are too obvious to you. So this is an opportunity for you to get a different perspective on your project. What's important about this asking is that it is not an exam, not any kind of assessment, because your newbie can feel a little bit in danger under this fire of questions. Just let them know: I'm only asking, it's not an assessment. That's important, because without it your relationship can be damaged. Give tasks to help understanding: don't only hand over documentation, but also give a small task that uses the new stuff that has just been learned. When I was learning our Delorean tool, our very versatile tool for building packages, I got a task: hey, just try to build this one package with this tool. It completely changed my whole perspective on reading the documentation, because I was looking for concrete information that would allow me to solve the problem I had, and I think this is a very valuable way of teaching and learning new stuff. Give background: not only explain the solution to a problem or a bug, but also show how and why it happened. This is an opportunity to explain the workflows and the architecture of the project, because there is an anchor: the newbie has met something, made some effort to solve the problem, so they are inside the context, and it's very easy to build on that and explain the context around it. And hold your horses: don't try to explain everything at the beginning, because I'm sure you will have to repeat it later. Then, general tips for a good start, for anyone. Have a definition of done; this is something I really like to have. I mean that when you are writing a description of a task in some kind of task management system, define the input and output of the task in a clear way. Describe the task in such a way that if, for example, you go on holiday and someone does the task after you, it will be totally clear to that person; then it's a good enough description. Don't assume that somebody will understand what you mean without that description; it's important, and guessing is never a good option, because there is a very high probability that you won't get what you expected. Describe the role: when somebody new is coming, it's very useful to define what this person will do after the onboarding process. It's like setting an aim, a target, for whatever they do, and when the person goes through the onboarding they will be very attentive to information related to the job they are going to do afterwards. For example, as a mentor or a manager, say at the beginning: this person will be in charge of developing our infra, or in charge of new CI jobs, and throughout the onboarding the newbie will be preparing to take on those responsibilities. Show collaboration with other teams or projects: as I said, it's very useful to know where to look for help, to be able to ask for it, of course, and to know who is responsible for what; the onboarding process is a great time for that. And last but not least, give yourself and them time. As a mentor, be patient and don't judge; people have different ways of thinking, different talents, different approaches to problems, as we are trying to build diverse environments in our workplaces. Also be ready that sometimes you will not only develop someone technically or teach hard skills; you will also have to teach soft skills, like problem-solving abilities, handling dependencies and requirements, or just ways of speaking and discussing. Be ready for that as a mentor. And if you are a newbie, don't expect that you will be great from the very first day, or even the first month, because you don't yet have any idea about the complexity of the problems or the projects, and don't worry too much about failures, because if you failed it means that you tried, and trying is what matters most for making you grow. Thank you very much for your attention; I'm here for you, I would very much appreciate your feedback, and if you have any questions, I'm here for you. Question from the audience, partly hard to hear: you mentioned documentation gaps; is it a good idea, or a mistake, to give a newbie the task of bringing the documentation up to date? Yes, okay, so the question is whether it's good to give a newbie a task to fill some gaps in the documentation. I got a task like that, and yes, I think it's a good task, but don't be too demanding with it: don't give somebody the task of explaining something they don't really understand yet. But I think it's a good idea for a start. Okay, the next question was when to stop digging into the project on your own and actually ask for help. Well, I had this issue in my previous job: I tried to do things myself and not ask too much, and I heard from my manager, you know, it's up to you; this onboarding process is for you, and you decide when to stop and move on to asking, because your manager or mentor is not inside your head and doesn't know the moment when you feel frustrated or stuck. It's up to you: if you feel that a few more hours will solve the problem, go get it; it's your onboarding process, nobody will track your hours, or at least they shouldn't. But if you feel stuck, and you feel that even two more days won't solve the problem, just leave it and ask, because otherwise it's a waste of your time and your energy. Next question: so, we have mentoring programs and similar, so you cannot say that there is no way for newcomers to get into a project, but at the same time it's a very hard point, and I can relate from my experience that this is the problem. So, from your perspective, what is the reason? On one side we have people at the university, some of whom are teachers, and we have programs that are actually aimed at providing mentors, and somehow it doesn't work. Okay, I have no idea how to make your question short for the recording, but I'll try to answer it with the context I understood. Remember that the programs you mention are run by big corporations, and what I was thinking about with open source is the small initiative, just "I will start and do it". Google is not widely recognized as an open source company; maybe, let's say, Fedora is, even though it is
also supported and sponsored by Red Hat. I would say people who have a clear idea of joining open source will not first think of Google or the programs of big corporations, and on the other side, with smaller projects, it's very hard, at least for me, to find the way in. There is also another thread here: for example, I wouldn't have believed that I could handle tasks totally remotely, without knowing people, just using my keyboard to solve the problem and actually provide value. So I think it's a mix: people don't recognize Google, or maybe Red Hat and other big companies, as a way into open source, and on the other side they don't have an easy way to start with smaller projects. It is changing, I have to say; since I joined it has been changing. For example, there are a lot of programs to start with, as you say, but I think it's a matter of time, and people inside open source are aware that they need newbies, and I think that's an important change. Yes, it may be branding: you have Google in the name, so does it really have the branding of open source? It can, if you know Google Summer of Code; something like an "Open Source Summer of Code" they probably wouldn't agree to, but if someone hears about, for example, a program for OpenBSD, it's very clear that this is open source, whereas if you hear "Google Summer of Code"... you understand. Okay, come to me afterwards. Thank you. Welcome to my presentation about Linux distribution collaboration on the mainframe. Yes, I'm speaking about collaboration: I don't view any Linux distribution as a competitor anymore. We have a working group at the Open Mainframe Project for all Linux distributions. Therefore I will first say something about myself, then you will get an overview of the mainframe and of the Open Mainframe Project, what that is. Then we come to the Linux Distribution Working Group, which is part of the Open Mainframe Project, and I will tell you something about our goals for all Linux distributions and upstream projects. I would expect everyone here is on board, because you are pro open source communities; we work together upstream, and therefore we have decided to include upstream projects as well, so that we can collaborate better together. Then we come to supporting you as developers: how you can receive support in development and with architecture-specific stuff, for example the s390x architecture, and what the LinuxONE Community Cloud and the LinuxONE Open Source Software Community Cloud are, where you as developers can receive VMs for free on our mainframe. Something about myself: I am Julia Kriesch, and I work as a lead IT/OT software engineer at Accenture. I do that part-time, because I am also a part-time master's student in computer science at Friedrich-Alexander University in Erlangen-Nürnberg. In my free time I have been an open source contributor for 11 years now. Yes, 11 years, because I have an education in computer science, as a computer science expert, and I was a student with work experience: I worked as a Linux system administrator back then. Now I am a member of the release engineering team and responsible for the s390x architecture at openSUSE, and I am the team lead for openSUSE Z Systems. I raised that I wanted to found the Linux Distribution Working Group together with IBM, so I am also one of its founders, and I am speaking for all the Linux distributions that have joined us. What is the mainframe? On the right side you can see all the sizes available from IBM; the smallest one here is the latest release from April, which is a rack-mounted system. Mainframes are well known as large, high-performance computing systems, used especially for banking systems, insurance, and everywhere you have to handle many transactions in parallel, and now it is also available in a smaller version. Speaking about architectures, they are also well known as big-endian systems, and the architecture is the s390x architecture. It is used for mission-critical data, like the banking systems I mentioned, and you can run thousands of VMs on such a system. From that we come to the Open Mainframe Project. The Open Mainframe Project was founded in 2015 and is under the umbrella of the Linux Foundation. The focal point is the deployment and usage of Linux and open source in a mainframe computing environment, and yes, we as Linux distributions said: if the usage of Linux is a focal point, then we want to be integrated. You can see mainframe-centric projects under the umbrella of the Open Mainframe Project, more z/OS-specific or generally Z-specific; different projects here are used mostly for z/OS and not Linux, some of them for improving the command line, so you can connect better, and as open source software. We, on the other hand, are about better deployment of Linux and everything else on z/VM. But I want to highlight one thing: there is also a mentorship program, so if you are interested in mentorship in this area, as a mentee or a mentor, we also support the development of open source projects for the mainframe, especially for Linux, and there are many projects under the Open Mainframe Project. Then we come to the hot topic: working groups. There are three working groups, and I have joined two of them. The Linux Distribution Working Group has existed for two years now. The Modernization Working Group I joined as a Linux contributor, because the focus of mainframe modernization should not be only z/OS related; I want a Linux perspective inside it as well. And then there is also the COBOL Working Group, which is responsible for the COBOL programming course at the Open Mainframe Project. We started two years ago, shortly after an IBM event, with a kickoff with the community Linux distributions, because Elizabeth and I said: we don't want just one Linux distribution for the mainframe, we want support and infrastructure available for all of us. In particular, IBM had said in the past that they provide official support only for RHEL, SLES and Ubuntu; we said we want really good support for all the community distributions too. Therefore I reached out to Fedora, and Dan Horak was available for Fedora, while Elizabeth went to Debian and we got Ryan for that, and with that we created our Linux Distribution Working Group. Now we are more. In the next step we said we want to include all the enterprise Linux distributions: SUSE joined, then Ubuntu joined, and Dan Horak said he wants to be the representative for both Fedora and Red Hat in our Linux Distribution Working Group. So, to the Red Hatters: if you are interested in speaking for Red Hat and supporting Dan on that side, you can join us, so that Fedora and Red Hat have the same split as SUSE and openSUSE; one Linux distribution per person should be enough. But anyway, that was the next
step: integrating the enterprise Linux distributions. Then, after my talk at FOSDEM in 2022, AlmaLinux joined our Linux Distribution Working Group, and Rocky Linux joined as well. And if any other Linux distribution wants to receive support for building for s390x, it can join us. Our structure is that we have our founders as co-chairs: I am one of them, and Elizabeth, an Ubuntu member, was the second founder, from Ubuntu's side, as an IBM employee. Now she is also the head of the Open Source Program Office at IBM, so we have the highest level at IBM for open source software within our Linux Distribution Working Group. Then we have one representative for every Linux distribution; we need that for input and so that we can share our knowledge, so that if anything happens and one Linux distribution has a solution, we can fix it in the other distributions too. Additionally, on top of that, upstream developers are also welcome in our meetings, and our sponsor has been there from the beginning. Our goals in the Linux Distribution Working Group are to create a place to collaborate across Linux distributions, via an Open Mainframe Project mailing list, wiki and chat. Yes, we have a wiki, we have a chat, we have a mailing list; the most-used place is the mailing list: "I have a problem, can we fix that, can we get support?" Mostly you receive support within two or three hours, from the IBM side or from our Linux distributions, and that also provides a space for distributions to request help on the s390x port. Then a topic from Elizabeth: to ensure that any and all infrastructure required to support the ports is available; that is, the mainframe in the background. Red Hat and SUSE have mainframes, Canonical has a mainframe, Fedora uses the Red Hat mainframe, openSUSE uses the SUSE mainframe, and Debian has its own mainframe; AlmaLinux and Rocky Linux have also received infrastructure. As an example of this goal: nobody on the Debian side, among the infrastructure sponsors, wanted to be responsible for the mainframe anymore, and the CI/CD pipeline, for example, didn't work anymore. That was the reason we said, okay, we want to transfer all the pipeline stuff into the LinuxONE Community Cloud, so that IBM can be responsible for the Debian infrastructure on a continuous basis. So if a mainframe is shut down anywhere, our Linux Distribution Working Group has this goal, and you will receive a new mainframe, or a transfer of the mainframe, or, for a community distribution, we will get alternative mainframes where we can receive infrastructure for pipelines and builds. Then the topic I wanted to have in there: better support from IBM for fixing s390x-specific bugs. In the past that happened mostly for the enterprise distributions only; now you can receive support continuously and really fast, I would say. If I create a bug report because of a kernel problem, I receive a bug fix within one day now; that is a real improvement. So if you write to our mailing list or send a bug report to the program managers at IBM, you get a really fast response. From that we come to our collaborative process. We want to collaborate and support each other; we don't view each other as competitors anymore, and we hold our problem discussions on our mailing list as a first step, and we can sometimes reproduce issues in
other build pipelines if anything comes up. We have open discussions on the mailing list about problems, and then we forward issues and ideas for improvement to IBM. As an example, a kernel developer wanted an improvement in the tooling suite for the kernel configuration; the distinguished engineer Ulrich Weigand carried the idea forward, and within two weeks they had created a patch for it. So yes, distinguished engineer Ulrich Weigand is also available on the mailing list and in our discussions. Then we have small collaborative projects; I will bring up the example of openQA on the next slide. We also have monthly meetings, which include a review of what has happened in the last month: is everything resolved that came up on our mailing list as input and as to-do tasks, and then a discussion of what we want to do next, whether there are new releases, and so on. So we review everything and create new goals for the future. One small project that has happened is the s390x test contributions for openQA. Some of you will know openQA: it is a testing suite from openSUSE which has also been used for Fedora, Debian and Ubuntu in the past, and of course for SLES. AlmaLinux joined in using openQA and said they want to enable support for Red Hat KVM in openQA; we announced that on our mailing list, and Rocky Linux stepped up: we want to use it too, we want to integrate our contributions as well. So we are now openSUSE, SLES, Fedora, Debian, AlmaLinux, Rocky Linux and Ubuntu, and we said that further contributions to our general openQA repository on GitHub are of course welcome. It is a good point that we are also starting a collaboration around openQA, so that all the enablement and test suites come together and every Linux distribution can use the available tests. That should not be only for s390x; we want to have it for x86 and other architectures as well. In the end, only the distribution-specific stuff should remain separate, where the needles, the screenshots, are different, or perhaps distribution-specific configurations. From that we come to the topic: are there any problems during your development process for s390x? We want to be the point of contact for all Linux distributions if anything does not work on the mainframe during the process. Problems mostly affect all Linux distributions, and you as developers too, not only one single person or one single distribution, and we receive fast support via the mailing list of the Linux Distribution Working Group. We can have build issues; sometimes we have situations where only one Linux distribution is affected, but that is mostly based on the kernel version: Ubuntu, as an example, uses a special, older kernel version, openSUSE Tumbleweed uses the latest kernel version, the same as Fedora, and others use other kernel versions, so it can happen that something breaks because of the kernel. But that is also in our focus, and we will support you there, although mostly it should be at the application level for everybody. The IBM distinguished engineer responds to development problems. I have also forwarded upstream project issues where the developer didn't know how to enable the software for s390x, where some small thing didn't work; I forwarded it, and the fix was available within three hours, and everyone was happy. That is also possible.
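As a small illustration of the kind of architecture-specific issue that tends to show up on s390x, which is big-endian as mentioned earlier, here is a sketch, assuming only a standard Python 3 install, of how byte-order assumptions surface; it is illustrative, not something from the talk's slides:

    import platform
    import struct
    import sys

    # s390x is big-endian, so code that quietly assumes little-endian byte
    # order behaves differently there than on x86_64 or aarch64.
    print("machine:", platform.machine())   # 's390x' on the mainframe
    print("byte order:", sys.byteorder)     # 'big' on s390x, 'little' on x86_64

    value = 0x01020304
    # An explicit byte order ('<' = little-endian) gives the same bytes on
    # every architecture; native order ('=') depends on the platform.
    print(struct.pack("<I", value).hex())   # always '04030201'
    print(struct.pack("=I", value).hex())   # '01020304' on s390x, '04030201' on x86_64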
Mainframe Project org, and afterwards we receive solid solutions, also for upstream, mostly within hours or a few days. If you want something like that for ARM: I know that Linaro, as an example, provides such a mailing list for ARM-specific problems for all Linux distributions — we had an email discussion about that — and for security there also exists a mailing list for all Linux distributions, so you can find something like this for mostly all architectures already. Now, I expect you want to know: how can I develop on a mainframe without having my own mainframe at home or in the company? That is where you find the LinuxONE Open Source Software Community Cloud and the LinuxONE Community Cloud. In the LinuxONE Community Cloud you can get access for 120 days, for developers, students and professors: you create your user account and you receive access; the account is removed afterwards, and recreation is possible. But the best case is that, as an open source project, you can also receive long-term access in the LinuxONE Open Source Software Community Cloud. You request a virtual machine, write down the reason you want to use it, define which size of memory and everything else you need, and then you can receive it. I, as an example, have received VMs for development and for hackathons for our openSUSE s390x team, and the single thing you need is an open source project as a reference. Now we come to the last slide. Finally I can say: collaboration is the benefit here. Upstream contributions are available for all; we lower research and development costs — that is valid for IBM, for the community and for every open source enterprise company; we get the same solutions for all Linux distributions; we share our knowledge between the communities; and in the end everybody is happy and receives the fixes faster. We also get diverse community ideas — every Linux distribution thinks differently and has other ideas — and with that you can increase innovation as well. In this way we can accelerate Linux development for s390x. Now we have some minutes left for questions. Yes, you can already use QEMU as an emulator, as an example, and there is also Hercules available, but IBM does not really support Hercules, I would say — I see some people laughing. I wrote my bachelor thesis based on QEMU, and IBM also has a KVM team related to that. But if you want to develop and build stuff, I can really recommend these two links: if you only want it one time, use the 120-day version; if you are a Fedora contributor or anything else, you can request one for yourself. The hint is that you can only choose between Ubuntu, RHEL and SLES as a foundation, so you have to pick one of them, and if you want to build on Fedora you have to use KVM or something like that on top. With SLES it is possible to upgrade to openSUSE — we have a method where we use zypper dup with an openSUSE repository — so we have openSUSE in our VMs. I am working on getting IBM to the point that we also support community distributions, so that Fedora and Debian can be integrated as well, but it seems that will need some time because it is not officially supported. Therefore I can recommend using QEMU for s390x; it is a good emulator that has been improved a lot within the last 10 years, the best choice as far as emulators go.
They didn't really like it that much, but my bachelor thesis was about something along those lines: I had to create container images, build and run them integrated into QEMU for emulation, so that, for example, a KVM guest from the mainframe side could be exported and the emulated VM could then be started on x86 — an interesting bachelor thesis project, I would say. You can also find my bachelor thesis project on GitHub under my name. But yes, they are working on it; it was not really wanted in the past, but IBM wants to become open source, so they have improved that, and this method is the method they want to offer to open source projects — on GitHub or wherever — that don't know anything about mainframes. IBM wants those open source projects to get enabled for the platform even though IBM is not responsible for them; it has to be enabled either way. On the other side, IBM wants to be responsible for the foundation: kernel development, KVM, network devices and everything else. So you need a solution where application development is supported on the community side and a stable foundation is provided on the IBM side.

Alright. Hi everybody, I'm Adam Williamson. I've been working on Fedora QA for quite a long time now, since about 2009, and this is Miro Vadkerti, who is helping out with the Fedora CI parts of this talk. I'm going to get right into it because the slide deck is pretty long, so I want to make sure we get through it in the time. Miro, do you want to do a quick personal introduction? Yeah, so I'm Miro, I work mostly on RHEL CI but we have some infrastructure in Fedora as well, and I'm a Fedora contributor — we provide some of the infrastructure for running the testing, the Testing Farm service. So what we're aiming to cover today: it's not going to be a deep dive into the technical details of the systems; what I want to talk about is what we are actually testing, what it is achieving, and, for people who are Fedora package maintainers, how you interact with it and how it helps you. So let's get going with a little bit of quick background: how do we test an operating system? This is a huge topic — it's a challenge deciding what to test and when to test it in the time cycles we operate on. There are four levels right now at which you can really do testing within Fedora. There's package source control: Fedora packages are stored in a Git repo and you can do testing on things that happen within that repo — pull requests, commits, anything you want to do with that. You can do testing at the level where builds happen: when you do a single package build within Koji, we can trigger tests on that. You can do testing at the level of the update, which is the abstraction we have for getting feedback on single builds or multiple builds of packages before they actually go into the distribution. And we can do testing of composes, which is when we build the whole of Fedora together and actually produce the images, Docker containers, disk images, whatever you like, that we ship to users. Then the background: manual testing, ye olde days — this is up to about 2015,
which I'll come back to on a later slide. Back in the days before we started getting better at this, we basically didn't do any testing at the dist-git level: maintainers who were really keen might check things themselves somehow — they might have some little script they run — but there was nothing automated, and Fedora QA wasn't really doing anything at that level at all. Build testing: all you could really do was have a %check section in your package, which many packages do and it's a good thing, but it's not really sufficient — usually it just runs the unit tests from upstream. Update testing: once Bodhi came in — and the point of Bodhi originally was really to allow testing and feedback on updates, so that packages didn't just randomly go into the distribution and break stuff — before we started doing automated testing it was really just down to people doing it. If your package is really popular, if it's Firefox, you're going to get a lot of feedback; if your package is not that visible to users, even if it's really important, it may never get any feedback at all. It was very haphazard: maybe this build gets tested but that build doesn't, and what actually got tested from build to build was again just down to people, so it was very varied and inconsistent. And it's very difficult as a manual tester to test whether an update breaks other things, and especially whether it breaks the system — we had almost no testing of that. Compose testing was the thing we mostly focused on, I guess, and we were getting pretty decent at doing it manually — we had a whole process for it, which I'll cover soon — but it was very time intensive. Anyone who worked on Fedora QA back then knows: my life was just booting up six VMs every day and doing install tests and wishing I was dead; it was not much fun. And with the amount of work we could put into it as humans, you can't test every compose: we would test some snapshots every cycle and then the release candidates, so maybe 15 or 20 composes a cycle, but there are hundreds each cycle, so we were not covering very much at all. A little history of how validation testing worked — I'm a history nerd, so I love this. On the left here is the earliest recorded formal validation testing of Fedora in history, which is Fedora Core 5: it's a wiki table, and there were 26 tests in it, and two people did almost all of them (as a "test" I'm just counting rows, not counting the environments, because counting those is harder). Then up to Fedora 21, which was about 8 years later: we were still using wiki tables, but they're much shinier and more colour-coded, and we had got up to 138 tests. Fedora 21 was the last release before we really started doing automated testing, so that was as far as we got with humans doing all the testing. Did I miss anything in my notes? Yeah — this is about six times as much work as we were doing on Fedora Core 5, but it still wasn't enough: we weren't covering everything, we were missing bugs, and we were taking too long to catch things. We would often only find something when we got to Beta that had been broken three months earlier, and tracing that back was a nightmare. Just as a note on the test counts, Fedora 39 now has 202 tests, so we've continued to scale the amount of testing we're doing. That was as far as we could really push manual testing — we were at our limit at that point — so we came to the conclusion that we need to do automated testing.
So what are the automated test systems that exist in Fedora right now? I'm going to cover other things very briefly later, but the key two are Fedora CI and OpenQA, and I want to do a little bit of explanation here, because one of the questions we often get is: why are there two automated test systems in Fedora? This has been a process and a history and there's a lot of nitty gritty in the background, but we've come to a pretty good understanding of why this is and why it makes sense. Fedora CI: the point of Fedora CI is really to provide CI-type workflows, infrastructure and processes to people operating in the Fedora environment. The people who work on Fedora CI don't wake up every day and ask "is Fedora broken?"; they wake up and ask "are the tools and processes we're providing to Fedora people working, are they good tools, how can we make them better?" That's what Fedora CI is about: providing CI services to people working in Fedora. OpenQA is there to do testing of Fedora. OpenQA is literally the automation of all the stuff the Fedora QA team was trying to do manually and really burning out on. OpenQA is there for us to be able to figure out whether Fedora is being its best possible Fedora; it's not a service we're providing for other people to write tests of their thing in the Fedora environment. With Fedora CI, the system is the point — systems, processes and interconnections — and its job is to provide testing systems for you all to use. With OpenQA, the system isn't the point; we just use OpenQA because it was a thing that was there and worked to do the job. The point of OpenQA is helping us figure out: is Fedora broken, who broke it, what broke, how do we fix it. OpenQA is specifically not about letting other people come in and run tests of their thing — we thought about going down that road several years ago and specifically decided against it. You can contribute to OpenQA, but it has to be within the context of what OpenQA is there for: if you want to contribute something which helps us test whether Fedora is great, perfect; but if you want to test your little widget in the Fedora environment, that is what Fedora CI is for. That's the distinction between the two. OpenQA was started by the QA team in 2015 as a skunkworks project to automate our stuff. Fedora CI started in 2017, and — we try to be transparent about the relationship between Red Hat and Fedora — it really came out of an effort within Red Hat to reduce the distance between Fedora and RHEL where it came to automated testing, because how we do things in Fedora was completely different from how we were doing things in RHEL at the time. Fedora CI is part of a coordinated effort to make it possible to share more things across RHEL, CentOS and Fedora, so a large part of it is making it possible to take things we have within Red Hat that were never applied to Fedora and apply those tests to CentOS and Fedora. Miro will talk about Fedora CI after I've done OpenQA, so I'm going to go into the details of what OpenQA is and what we're doing with it. OpenQA's original strength — what it's really good at and what is really cool — is that it tests like a human. I'm not going to go into details of how it does this, but OpenQA effectively runs a virtual machine,
looks at what's on the screen, looks for little areas of the screen that it expects to see, and clicks on them. It can also type things, so it can type commands and get the results out. That makes it great for automating manual QA testing, because that's what we did most of the time: we spun up virtual machines and clicked on things. It's also great because it doesn't care about the operating system at all — you can use it to test Windows or Linux, you can use it to test firmware interfaces; all it needs is a VNC connection to the computer and a way to click on things. So it is really appropriate to what we're doing, and as I said, what we're trying to achieve with OpenQA is high-level functional testing. It is not here for unit testing; it's here to tell us whether Fedora is broken — whether something we care about, something that is part of what we want Fedora to be for people in the world, is broken. So what do we actually test? We run a subset of tests — I think there's about 200 of them — on every Fedora compose, and we also run subsets on CoreOS composes, IoT composes, Cloud composes; anything that's a compose in Fedora gets tests run on it. We also run a subset of tests plus extra tests — about 50 to 60 — on critical path updates for every branch of Fedora: stable releases, Branched and Rawhide. If anyone doesn't know what critical path is, it's really just the set of packages that are the most important, and this limitation is purely resource-based: I would love to test every update that goes out, but we just don't have the capacity for it. This is what the OpenQA web UI looks like — if you're not part of the Fedora QA team you actually don't need to look at it very much, but I wanted to make this talk practical about what things look like. On the left-hand side here (this is just a subset) is a view of some of the tests that were run on a compose from a few days ago: these are most of the tests that ran on the Silverblue installer image, and down at the bottom you can see one of them failed, rpmostree_rebase, which does what it sounds like — it tests rebasing an rpm-ostree type install. On the right here — these are two different screens — is what that failed test actually looks like. You can see there's a bunch of screenshots: where one has a green outline, OpenQA saw what it wanted or a command did what it was supposed to; where it has a red outline, it didn't see what it was expecting to see. You can click on each of these and get a full view of the screen, and you can see at the top here that the system is in emergency maintenance mode when it was expecting a booted system. So that's what you get with the OpenQA web UI, and this is a real thing: you can't rebase from Rawhide to 38 at the moment, because of, I think, an SELinux thing. So what do we actually cover with the OpenQA compose tests?
We cover 75% of the validation test suite, which is the set of tests that need to pass for a Fedora release to go out. The things that aren't covered are things that are very difficult to automate, or things we explicitly don't want to automate: we always want to have a human in the loop, so we always want a human to test that the images actually boot and install on a real computer, a real container, a real virtual machine, just in case the automated testing system missed something — I never want to release an image that no human has actually tried to boot. So we can't really do 100%, but we do our best. We also cover some stuff that's not release blocking: we test Silverblue, for example, which isn't technically release blocking but is very important, so we want to test it. A lot of the tests are install tests — we really exercise a lot of the installer. We install a bunch of different images; we test different package sets — GNOME, KDE, minimal, a bunch of different ones; different partition layouts — Btrfs, XFS, ext4, LVM, LVM thin partitioning, a whole laundry list of different layouts; languages — we test English, Arabic, French, Japanese and Russian, to cover different fonts, different symbol styles and also right-to-left, which is very important; and we test different firmware types, so we do installs on UEFI and BIOS to make sure both work. But we don't just do install tests. OpenQA started out mainly doing install tests, but these days we do a lot more. The base tests are the core operations for any Fedora install: can you install a package, remove a package, update a package; does logging in work, does logging out work, does rebooting work; is SELinux in enforcing mode, just in case we mess that up some time — hasn't happened yet, but it might; can you start, stop, restart, enable and disable services; system logging — is logging working, can you get the journal. Upgrade tests: we test upgrades of all the release-blocking package sets — Workstation, minimal, KDE, Server — and we test them N-1 and N-2, so 36 to 38 and 37 to 38, both of those; and we also test that a FreeIPA deployment can be upgraded — an entire deployment, server, replica, client, upgrade them all and make sure they're still working after the upgrade — that's one of the more complex tests we have. Graphical desktop and application tests: we test GNOME and KDE, a lot of their core functionality — the overview, the user menu, the stuff that you expect to work on a desktop. We test that every installed application at least starts up and stops — trying to test them all functionally is a lot of work, but we test that they don't just crash or fail to launch at all. On GNOME, my colleague Lucas — I don't know if he's here — has been working on writing a bunch of GNOME application tests, so right now we test about 50 GNOME applications in quite a lot of detail: like the Maps application — can you look up a place, can you do a route — real, proper, detailed functional testing of the apps. We test desktop login, which isn't just login but also: can you log out, can you switch users, can you lock the screen, can you unlock the screen, can you reboot from the login menu — that kind of stuff. We test that desktop notifications work, which includes testing that on a live image you don't get update notifications. We test printing, using virtual printers, obviously. And we test updating and upgrading. So that's pretty intensive, and all of those tests also run on Silverblue.
We also run all those tests after an upgrade: we upgrade Workstation and then run them all again to make sure they still work on an upgraded system. We have a bunch of server functionality tests for the key features of Server — FreeIPA, a database and Cockpit — and we do a bunch of functional testing of those: do they work, do they show what they're supposed to show. So for every single compose we are testing all of this stuff — every time a nightly compose comes out, all of this gets tested. Update test coverage: we don't run all of those tests on every update — again, capacity issues — but we run about 50 to 60 tests, and this got refined recently, because we were doing stupid stuff like running all the GNOME tests on KDE updates, which was just idiotic, so I did a lot of work to split the critical path into groups, and that allows us to only run the appropriate tests. Not every update runs every test anymore, but there's a total of about 60 tests: the tests I talked about on the last slide — the KDE, Workstation and Server tests, not all of them but most of the key ones; a subset of the base tests and the desktop tests; and the FreeIPA, Cockpit and database tests — I think we run pretty much all of those. And then one thing that's really important, and that I'm pretty proud of: for every single critical path update we're building a network installer image, a GNOME live, a KDE live and a Silverblue installer image — which was a lot of work — and making sure you can do that build, you can run an install, and the installed system works. What this is really testing is that the update doesn't break the compose, so we know that once we push this update we can still do a compose — that was really key. And again, just a key thing to note from earlier on: when we're testing an update to, say, libfoo, we're not testing whether this libfoo is the best libfoo it can possibly be; we're testing whether this libfoo breaks Fedora — that is always the goal for OpenQA, all of the testing is in the context of making sure Fedora is okay. I mentioned this already, but the main limitation is really just capacity: I would love to run all the tests on all the updates, I would love to go across arches, but we just don't have the machines — if someone wants to give us more machines, that's great. I'd also like to thank the Meta folks, because they're planning to hire a contractor to work on cloudifying OpenQA, which would be a great way to get more test resources, so I'm hopeful we'll be able to get somewhere with that soon. Scale and success — just a perspective on what we're actually achieving with this: we have two instances of OpenQA; in staging we test more arches — we have PowerPC in staging right now; we run over 100 tests at a time on each instance; I think we've run over 3 million tests since 2015; and we've discovered hundreds of bugs — 358 is just the bugs that are tagged OpenQA in Bugzilla, but that's a huge undercount, because I only started tagging after a while, and a lot of things we just fix — because I'm improving packages anyway, I just fix stuff, or we file issues upstream — so it's a lot more than that. On a typical day — just a typical day, not a busy one — there'll be one or two Fedora nightlies, depending on whether a branch exists, there'll be three CoreOS or Cloud composes, and we'll test about 20 updates; on a busy day you can double or triple those numbers. Some examples of recent bugs that OpenQA has caught: right now there are three failures we know about from the EFI system partition size increase thing, which I've filed;
in the past couple of months, Firefox just crashing on startup — which it does — so we caught that and got it untagged so that it didn't affect Rawhide users; the Arabic translation just disappearing from the installer — that was a fun one to catch; notifying about updates when running live — that started happening and we caught it. And that's just the first page of search results: I just pulled up the first page of most recent results from Bugzilla and that's what's on there. Another example that happened recently, from update testing: a new util-linux build was mounting the root partition read-only, which obviously makes the whole thing not work. In the past that would just have landed in Rawhide and the next day people would have been saying "hey, my Rawhide system isn't working", but instead we caught it, we got the util-linux build untagged, and nobody saw it except the people who were running the test. OpenQA resources — yeah, this is the thing I mentioned earlier, but it's important: we want the testing to be a service we provide to Fedora. We don't necessarily want packagers to be trying to debug failures themselves, so we don't just run the tests and leave it at that: we investigate the failures and we make them actionable — ideally we fix them; if not, we at least turn them into a useful bug report, so you don't have to look at the OpenQA results and try to fix it yourself. That's the philosophy around OpenQA. If you need to contact us, we have a mailing list and a Fedora chat room. I should mention that OpenQA originated with the SUSE folks — it's a great system, we're really happy to have it, we collaborate with them, and thanks to them for all the work they put into it; that's the upstream site for it. There's a wiki page which explains the Fedora deployment, and this last point I added after the keynote — I really loved the keynote: OpenQA in Fedora is an open source service, everything is open, not just the code but all of the stuff that we've written around it. The tests and the scheduler are deployed via Fedora infrastructure Ansible scripts which are in a Git repo you can contribute to, and you can even do a pet OpenQA deployment using those Ansible scripts and it should mostly work. So you don't have to contribute to OpenQA, but if you want to, you can. And I'm now going to hand off to Miro to talk about Fedora CI. So hey — if you onboard your tests to Fedora CI, Adam has less work: basically, you do your part to make sure that your software is stable, and then he will have less work and he will be glad. Fedora CI is here for you, so that you as a contributor to Fedora can do your part: you can stop a build which breaks your software before it enters Fedora — if it enters Fedora, it's on Adam. That's the story. It works similarly in RHEL: there is a Jenkins instance which is actually calling Testing Farm, and my team takes care of that; it's fairly stable and it works well. You can see the documentation here at the docs — there is also a nice how-to guide to get you started, a quick-start guide, really just copy-pasting stuff — and you can add your test, whatever it is. In Fedora there are two places where you can run tests. First: the tests are always referenced in the dist-git repository — you just drop some files there, but the tests themselves don't need to live there, they can be linked from GitHub or wherever you want, thanks to TMT, because it can share tests with upstream repositories and so on. It's maintained by you, it's your responsibility, and you can of course ignore the results.
So, the first place: you open a dist-git pull request. If you have a test there it will be reported, but it's not gating — you can just merge it anyway if you want, and then maybe you break Fedora. It can be made gating — I think Zuul has capabilities for that. So that's the first place: open a dist-git pull request, nothing gets to Fedora yet, the tests will run, and you can validate the build that was built from that pull request and fix it before breaking Fedora. You can also run tests from other components: for example, I would love it if SELinux ran the Cockpit test suite, because Cockpit touches a lot of components and so on, so it would be great if that great Cockpit test suite ran for SELinux — maybe we can work with them on that later. So that's one thing. Then the tests also run for validation after merging: once the stuff gets to Rawhide there is no gating, but in Bodhi you can again see the results after the production build was built — that's the second place where Fedora CI runs. And the OSCI team, which is the second team running this stuff, runs some generic tests: installability, rpminspect and rpmdeplint. They really try to make sure that packages are installable without issues, and rpminspect makes sure that the sanity of your RPM is fine. Right — this is basically all I said before, sorry. I think if you want to do it the super smart way, do it via Packit on GitHub or GitLab, if you are lucky enough that your upstream is there; if not, you can do it on a dist-git pull request, which is the better place; and the worst place is after merging, when the stuff already gets in. We would love to start gating in Fedora, so your tests can be gating, but currently they are not by default — it's really on you to make them gating. Yeah, I do want to just quickly highlight the Packit workflow, if people aren't familiar with it. If you really buy in, it's a cool system, because you go from your upstream project all the way to Fedora in kind of one workflow: you have your Packit configuration upstream and you have your tests upstream, you can do all your development upstream, the packaging stuff kind of happens automatically, and the tests get run on your upstream pull requests and on your spec file pull requests. It's a really nice integrated workflow, with all the testing in the background happening via Testing Farm, which is Fedora CI's back end — so it's worth checking out, have a look at Packit. I'm just going to blow through this really fast: there are other things which kind of are automated test systems even if they weren't intended to be. Koschei — when a package's build dependencies change, it tries rebuilding it in Copr and tells you if it's broken, which is effectively a QA thing, so that's kind of cool. Fedora release auto test is written by our colleague Lili Nie, another person on my team, and that runs tests on weird hardware that we can only really do in Red Hat Beaker, because it's a Red Hat internal test farm thing, so it can test weird enterprise storage things like Fibre Channel over Ethernet, iSCSI — things we can't really test in OpenQA. That's really neat: it automates tests which were really hard to get done before. relval, which is a silly thing for reporting results manually to the wiki, also actually, when a new candidate release comes out, runs the size checks automatically and files those results into the wiki, so that's kind of automated testing too. Fedora CoreOS has its own entire CI and release workflow, which is really cool
and really integrated and very modern — you know, we're bolting all this stuff on afterwards, while Fedora CoreOS was invented much later, so they have a really cool cycle and they do all their own CI. Zuul-based CI gets tests run in Pagure for your project, and Packit I just mentioned. Result delivery and reaction: testing is one thing, but you have to do something with the results, so this goes back to that earlier slide about the four places testing can happen. dist-git is the earliest place you can get your test results: your package is in dist-git, we have Pagure as a user interface on the front of that — the GitHub-style forge thing — and if you manage your package with pull requests (which not all package maintainers do, but if you do), then the tests that are in your repository will automatically get run and you'll get the results in your pull request, as Miro mentioned. So this is the earliest point you can get results, and if you're disciplined about using the pull request workflow, it's really useful to have all your test results at this point. Bodhi, the web UI of the update management system, is probably the main integration point for results — this is where you're most likely to actually see your results, I think, for a normal packager working in a normal way. On the Automated Tests tab in Bodhi you get a list of all the test results, and you get your Fedora CI tests and your OpenQA tests all together: the ones that have fedora-ci at the front come from CI, the ones that have update at the front come from OpenQA, but because on the back end they're all compatible, you get all the results in Bodhi. We've improved this quite a lot recently — the first version of this talk had me apologizing a lot for the bugs in this view, and I'm hoping people may have noticed it's got better. We would have weird things where the sync between the back end and the front end of Bodhi was off, so one would say your update was gated and the other would say it wasn't; hopefully that's been fixed. You now also see a running state, so when a test is queued or running you will see that on this tab — before, it would just say the result was missing, which was confusing, but now it actually tells you that the test is running, wait a bit and you'll get the results. So yeah, new queued and running states, and hopefully more consistency. Right now, updates for stable releases and branched releases are gated on most of the OpenQA tests, which means that if one of those tests fails, your update cannot go stable. Right now Rawhide is not officially gated — I'm probably going to turn that on after this talk; I wanted to do it during the talk but we're running out of time — because FESCo has approved it, I believe, and that'll be a big thing. But anyhow, we've been shadow-gating Rawhide for a while, which means that when we find a bug in an update for Rawhide we untag it before it makes a compose, so effectively Rawhide has been gated for several months, we just weren't telling anyone about it — that's me and Kevin Fenzi — and it's been working great; it's made Rawhide way more stable. Packagers can configure additional gating requirements: if you want to gate on the tests that are in your repository, you can drop a gating.yaml file in your package repository and those tests will gate. There's a button for waiving bogus failures: if a test has failed and it's blocking the update, there's a button that says waive failures — please only click that if you're really sure
the failure is a bogus one. There's another button for re-running the tests: if you think the result might be bogus, hit the re-run tests button, and if it keeps failing then it's probably a real problem, or a bug we need to figure out. What do you do if you see a failure in Bodhi as a packager? If it's a test that you put in Fedora CI yourself, you should probably go ahead and debug it — that's your problem. If it's one of the generic tests — installability, rpminspect, rpmdeplint, something like that — those shouldn't gate the package by default, but maybe you've turned on gating for them: try to debug it, and if you can't, contact the Fedora CI team and they will help out. OpenQA failures — again, this is part of our full service model: if you're really keen and impatient you can try to diagnose the failure and figure out the details of OpenQA, but if not, just stay calm and wait. I refresh the OpenQA web interface about 12 times a day — I'm always looking — and any time I see an update failure I'm going to investigate it: I'm going to either fix it or file a bug for you, so you will get a report without needing to know how OpenQA works — you will just get told what's going wrong, and I will ask you to help fix it. If you need help, use Fedora chat or the mailing list, send me an email, carrier pigeon, whatever you like; and yeah, re-run the test if you're not sure whether the failure is genuine, waive the results if you're really sure it's not a genuine failure. For problems in Bodhi itself you can contact CPE, the Fedora infrastructure team, or you can contact me, because I still do work on Bodhi too, so I know how it works. And quickly: yes, we still use the wiki — twenty-something releases later we still have tables on the wiki and we still need to put results in them. The reason we do this is just that it's the only way we have to integrate automated and human results: we still need some manual testing for composes, so people put their results on the wiki page, and the crazy thing I wrote lets OpenQA file its results into the wiki too. So when we're deciding whether to ship a Fedora release, we go and look at the wiki page and check that all the tests are covered and all the tests passed — that's why we still have the wiki. I'm going to skip over this because we're short on time: there's a lot of stuff behind the scenes here, and this is all shared between Fedora CI and OpenQA — on the front end there are different systems, but the way they file results and the way they all talk to each other is all integrated and shared. This — I really like this slide: if you've been looking at cat pictures for the last 30 minutes but you want to say you came to this talk, take a picture of this slide; this is everything, this is the whole talk. Updates and composes are tested by OpenQA — we don't gate composes right now, but we review the results — we gate updates, you can gate your pull requests in dist-git, and Koschei catches FTBFS. That's what we've been talking about the whole time. The future, again quickly — what are we doing with OpenQA: we want to do more tests, we're always writing more tests, so cover more GNOME applications; gate Rawhide updates — I would really like to turn that on and I'm planning to soon; we'd like to cover more arches — that just needs more hardware; a cool thing we're working on right now is doing bare metal testing in OpenQA, which uses a Raspberry Pi KVM — it's a really cool project; and more tailored update tests, because we now have this grouping thing.
What I kind of want to do is, for example, run all of the installer tests when we have an Anaconda update — that would be cool — and maybe move it to the cloud. Fedora CI plans — Miro, we are almost out of time: one thing — if you are using STI, we are going to get rid of it, so migrate to TMT, there is a nice guide; that's the important bit out of all of this. Otherwise I think Fedora CI is quite stable, and if not you can reach me on chat.fedoraproject.org or the Fedora CI channel — we are there to help you — and if you watch the recording of Miro's talk, he has a lot more about all this stuff in it. Other plans: yeah, one thing I skated over, the biggest challenge we have right now — lots of other things go into making a Fedora compose: the kickstarts, the comps, the Pungi configuration, the Workstation OSTree config, which is actually where all the immutable configuration lives. When these things change, there is no testing: if you make a mistake when you are editing comps, the compose fails and we just don't know about it in advance, so I would love to do some stuff to do more testing of these — but we've got to have plans. We made it through the slide deck — thank you everyone for sticking with us. By talking fast, I think... do we have five minutes for Q&A, or is that the slot? Five minutes, cool. Yeah, so the question was: can we also test Turkish? Yes, we could — loading a new language just requires doing quite a few screenshots, but yeah, we could. We do have an issue tracker — I'll talk to you after the talk — os-autoinst-distri-fedora is the project for the tests, and if you file an issue on there, or I will file one, yeah, we can definitely look at doing that. Cool. Okay, yes — let's do one at a time so I don't forget. The question is: can we test suspend? Theoretically we could test it, because you can suspend a VM and resume it, but it's not very useful, because all the problems with suspend and resume tend to be on real hardware, so yeah — that's one of the reasons we're looking into the real hardware testing. Sorry, I'm not sure what you mean — application crash files and debugging? Oh yes, good question, I kind of skated over it. Miro can talk about CI, but in OpenQA, if the test causes an application crash — there's a thing in OpenQA where, when a test fails, it runs a post-fail hook, so we upload a bunch of generic logs and stuff, but we also check for abrt crash dumps and coredumpctl crash dumps, and if we find one it gets uploaded. So there's a tab in the OpenQA web interface — logs and assets, I think — and if you click on that for a failed test you'll see a bunch of logs and any crash record that happened. So one of the things I do when I see a failure is: oh, there was a crash — I analyze it and then I file you a bug with a proper backtrace. I just want to say that maybe there could be a generic test for all packages for that — like, maybe, but then the test would have to exercise the application to see if it crashes anyway. Yeah — more questions?
I think you had your hand up first — okay, great question. So the question is: how long do the tests take to run? For Fedora CI it's very complicated, because it all depends — it's all in the test definition. For OpenQA, the compose tests I believe take about two hours to run completely; for an update it takes again about two hours in total, but most of that is the Silverblue test, which isn't gating, so for all the gating tests to finish takes a little over an hour, I think, mainly because of the live build and install test, which takes a while. And yes, there's a lot of parallelization: the way OpenQA really works is you have these things called worker hosts, which are systems that spin up virtual machines and run the tests in them, and it's configurable how many workers will be on each worker host. We run on fairly powerful machines — our main test boxes have got 30 workers — so at any one time they can each be running up to 30 tests; that's how that works. Alright, thank you so much — thank you, sorry I took a lot of time — good job.

Thank you for that introduction. Yes, so, welcome everyone, and welcome to my talk on developing modern eBPF applications. I'll spare you the introduction — I'm doing networking at Red Hat, mostly kernel work and a lot of eBPF work recently. I have a few notes up front. I'm super happy to answer questions, but please keep them until the end, and we can also have a lively discussion then. The second thing: I designed the talk for an audience that is kind of knowledgeable in eBPF already, so it helps if you have already written eBPF, but hopefully you can follow along if you haven't so far. The reason for that is that my goal for the talk is to show you new ways to improve your eBPF programs, so you can switch from writing small toy programs to writing normal eBPF applications — bigger ones where you can do more complex stuff. Like I said, I work on networking mostly, so my examples might be slightly skewed towards networking, but that doesn't mean they don't apply to tracing use cases; it's just that my examples usually come from networking. Alright, let's get started. I want to cover basically four topics today. The first is modern eBPF: what do I mean when I talk about modern eBPF. The second is that I want to show you how you can compose your eBPF programs — the small programs you have — into bigger applications; there are different ways to do that and I want to show you three of them. The third thing is testing: with growing complexity in the applications, how do we test them, how do we go ahead and make sure they do what they are supposed to do. And the last one is eBPF helpers and kernel functions — those are basically the APIs that you can use from your programs — and I want to show you how you can navigate that space more easily, so you know which functions and APIs are there and which ones you can use. So let's talk about modern eBPF: what do I mean by that? I want to illustrate it with a few examples — I brought some code snippets, all of them from the samples; it's not important what they do right now, but I want to illustrate a few things with them. The first thing I want to illustrate: eBPF, a while ago, didn't have any loops. You could not write the loops you are used to from any normal programming language — it was not possible in eBPF. The only exception was that you could ask your compiler to unroll the loop,
which basically means copying the body of your loop one iteration after another and running that instead. That of course increases the size of your code, and it's not applicable to every loop — the loop has to have an upper bound that you already know at compile time. The next thing that was not really possible was calling functions: you could not just call a function like you would in any other language; it was just not there. The only things you could do were, again, to inline all the functions — basically copy the contents of the function into your main program so you have one constant flow of execution — or to use so-called tail calls, which basically means that at the end of your eBPF program you jump to another eBPF program and execute that instead; but you never get back to your old or previous function, so it's not really a function call, it's a way to combine programs, not a way to call functions. So let's see what improved in terms of loops. eBPF nowadays can have loops — you can write loops in your eBPF programs. The first addition were bounded loops, basically loops that have a fixed upper bound, so the verifier can really check that the loop ends at some point and can be sure it doesn't prevent the kernel from continuing. With that, it was no longer necessary to unroll the loop — your compiler might still do it, but it's not necessary anymore — and it covered some use cases, but not all of them. So later on we got further additions to the eBPF program environment: the loop functions, as I call them. The first one I want to introduce is bpf_loop. That's a helper function that you can use in your programs, and what it does is take a number of iterations and run another function — a callback function — that number of times. The number doesn't need to be fixed or constant or anything like that; it's checked dynamically at run time, so at run time you can pass it a number that you got, I don't know, from a network packet for example, and run the loop that many times (there's a small sketch of this right below). With that you can probably support all the use cases you have for loops — you can do everything you want. And then there are other functions that might simplify your life if you want to do something specific: there is bpf_for_each_map_elem, and as the name already implies, it gives you a way to iterate over the contents of maps, with keys and values — all the eBPF maps, you can iterate over their contents. That's also something that was not really doable before, at least not easily, and now you have a very useful helper function and a much more convenient way to do it. Let's talk about function calls as well. eBPF now also supports function calls, something that was not possible before. If you want to write eBPF functions now, you write just a normal function like you would in normal C code and the compiler translates it to a real function call — no inlining anymore, no nothing; your compiler can still inline it, but it doesn't have to, and you can really have function calls. With that you get all the benefits of calling functions: you get better modularity, and your code size goes down — if you previously had an inlined function that you called from many other places, your code grew in size; with real function calls it is reduced in size again. And one nice thing about it is that every function is treated by the verifier as its own program.
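To make this a bit more concrete, here is a minimal sketch of what bpf_loop plus a real (non-inlined) BPF-to-BPF function call can look like. This is not from the talk's slides: it assumes a recent clang, libbpf's bpf_helpers.h and a kernel new enough to have bpf_loop (roughly 5.17 or later), and the XDP program, the names and the little summing logic are purely illustrative.

// SPDX-License-Identifier: GPL-2.0
/* Sketch: bpf_loop() with a callback, plus a real BPF-to-BPF call. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct loop_ctx {
	__u32 sum;
};

/* Callback invoked by the kernel once per iteration; returning 0 continues
 * the loop, returning 1 would stop it early. */
static int count_cb(__u32 index, void *data)
{
	struct loop_ctx *c = data;

	c->sum += index;
	return 0;
}

/* __noinline keeps clang from inlining this, so the verifier checks it as
 * its own subprogram. */
static __noinline __u32 sum_first_n(__u32 n)
{
	struct loop_ctx c = { .sum = 0 };

	/* n does not have to be a compile-time constant; it is checked at
	 * run time by the bpf_loop helper. */
	bpf_loop(n, count_cb, &c, 0);
	return c.sum;
}

SEC("xdp")
int xdp_demo(struct xdp_md *ctx)
{
	__u32 n = 16;	/* imagine this came out of the parsed packet */

	bpf_printk("sum of 0..%u = %u", n - 1, sum_first_n(n));
	return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";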
What does that mean? The verifier is that thing in the kernel that basically checks your programs, and it checks each function as its own program. That means that all the limits the verifier has apply to one program, or one function, at a time: the complexity limit, for example, applies to your main program first and then to the function you call, separately. So you can write more complex applications, because you don't run into the limits of the verifier that easily anymore if you break your application down into smaller functions. Those things I just mentioned — loops, function calls — are just a few examples of what improved in eBPF territory, things that you can start using today. Now that we can split our execution into different functions, let's talk about how we can compose all these functions back together into a bigger, larger application. I want to introduce three — or, like, two and a half — different ideas for how you can do that. The first thing I want to talk about is how you can combine programs at build time. We now have a fairly simple linker in bpftool — that's something that was added a while ago: you can call bpftool and link eBPF object files together, like you would have done with your normal C object files in the past. The linker is not as complex as a normal linker for C user-space programs, but you can really do the same basic thing with it: you can have your code in one file, declare the functions that you want to call in a header file, write the code for them in a different file, compile these things separately, and later on link them together. You ship one big binary, but you have your code split over different files, so you can organize it nicely and so on. Like I said, that makes it easier to structure your code into something that you can maintain more easily, which is very important as your application grows in size. And of course, with that you can build stuff like static libraries: you can, for example, have a team that maintains all the parsing functions — in networking, parsing network packets is something that we do a lot and need to do often, and the code is basically the same all the time because the network packets look the same all the time — so you can build your parsing functions into a library, basically make an object file of that, and link it into your applications where needed. That's one use case for these linkers. Another thing: imagine you don't have all the object files available at build time — imagine you want to link stuff at load time, or combine programs at load time. The bpftool functionality I just described is based on libbpf; libbpf is the user-space library that is there for basically all your BPF-related wishes, and it also exposes the linker functionality I just described programmatically, so you can link BPF objects at load time — just before you load the program into the kernel, you can link it together and then load the result into the kernel. One thing I could imagine you could build with it: you could, for example, have your application built already and allow users to provide you with their own code — imagine you have a big networking application with one place where you offer the users of your program a hook to collect statistics, for example.
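As a rough illustration of that load-time variant — again not the talk's own example — this is roughly what driving libbpf's linker programmatically might look like. It assumes a libbpf version that has the bpf_linker API (and libbpf 1.0 error-reporting semantics for the open call); the file names are made up.

// Sketch: statically link two BPF object files at load time with libbpf,
// then load the combined object like any other BPF object.
#include <stdio.h>
#include <bpf/libbpf.h>

int main(void)
{
	struct bpf_linker *linker;
	struct bpf_object *obj;
	int err;

	/* The combined object is written out when the linker is finalized. */
	linker = bpf_linker__new("combined.bpf.o", NULL);
	if (!linker)
		return 1;

	/* Your application's object plus, say, a user-supplied stats program. */
	err = bpf_linker__add_file(linker, "main.bpf.o", NULL);
	if (!err)
		err = bpf_linker__add_file(linker, "user_stats.bpf.o", NULL);
	if (!err)
		err = bpf_linker__finalize(linker);
	bpf_linker__free(linker);
	if (err) {
		fprintf(stderr, "linking failed: %d\n", err);
		return 1;
	}

	/* With libbpf >= 1.0, open returns NULL and sets errno on failure. */
	obj = bpf_object__open_file("combined.bpf.o", NULL);
	if (!obj || bpf_object__load(obj)) {
		fprintf(stderr, "loading combined object failed\n");
		return 1;
	}
	/* ... find and attach programs as usual ... */
	bpf_object__close(obj);
	return 0;
}

The build-time, command-line equivalent with bpftool would be along the lines of bpftool gen object combined.bpf.o main.bpf.o user_stats.bpf.o.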
They can then provide you with a BPF object containing the program that generates the statistics — and that they can later extract from your program — and you can link that object into your application at load time, so the user can still make use of BPF functionality even after you have built and shipped your application; think of it as something like a plugin system, in easy terms. Right, and the next thing I want to talk about is a bit more involved, to be honest. It's called freplace. Imagine you have a program that you have already loaded into the kernel and attached, and you want to change parts of that program. There is a functionality called freplace — it's available via a syscall, and it's available from libbpf as well — and it allows you to swap out functions, or sub-programs, of your application that are already there. So even if the thing is already attached, at run time you can just replace a function of it and basically put in your new code. It's a super powerful concept — you can do a lot of different things with it, and I've seen people do interesting things with it — but you have to be kind of careful, because it's not as usable as other eBPF features. The first, or the main, restriction is that you cannot use it recursively: in your whole call stack, your whole function stack, you can only have one function that got freplaced at any point in time, not multiple of them. So as soon as you have used it once in your stack, you should not use it somewhere else in the same stack — you can still replace that same function again and again, but you cannot also replace something higher up or lower down in the stack. And, combined with another point: this freplace functionality is often used in infrastructure. libxdp, for example, uses it to attach multiple XDP programs to one network interface — that's something the kernel doesn't provide, but libxdp allows you to do it, and that's based on the freplace functionality. So if you are writing XDP programs, you can probably never use freplace yourself, because libxdp already uses that trick. freplace is something you might want to use in infrastructure — if you're building infrastructure for the eBPF ecosystem it's an interesting thing to use and to know about — but it might not be that useful in your applications. I wanted to show it anyway because it's super powerful, and if you're reaching the limits of what you can do with the other methods I showed before, maybe freplace is the thing you need to really get forward. So be aware that it is there, but be careful when you're using it. Next up: when we combine programs at run time, load time, whatever — when we're building more complex applications out of our programs — we want to make sure that those applications do what they are supposed to do, so let's jump to testing. There is one way that probably a lot of you already know if you're developing eBPF: you basically run your full application in some environment to test it. In networking it's usually a combination of network namespaces, virtual ethernet pairs, shell scripts, bridges and so on — you basically set up a virtual network within your computer, with different namespaces, and you run your application inside those namespaces. That's a fine thing — like I've written here, it's used in self-tests, it's used in demos, I've seen it used for examples as well — and it's a pretty simple thing, actually: you can just set up your network and run your application within it.
What's difficult is: how do you observe whether your application is really doing the right thing? You cannot test your application directly; what you do in networking, for example, is usually just send traffic through the application and see if it does the thing you expect — and if, I don't know, the traffic is dropped somewhere in between, do you know why it dropped? You don't; it's just not there anymore. And then you start looking for where the issue is coming from: is it coming from my application, is it coming from the setup, if I run this same script on another system is everything in the place I expected it to be — all these different issues that you get if you're basically running the testing on your main machine. So it gets hard to observe what is going wrong. It's also kind of brittle sometimes: you have a lot of race conditions there — you set up network interfaces, and what does the system do with them, and so on. Another thing is that if you do stuff like that, you need to change the systems of your developers, and I can say for myself that I don't like people adding network interfaces to my system: they're using IP address spaces I might be using as well, and it always causes some issues. So I want to show you one other technique that you can use to test eBPF programs that doesn't require all these things, and that's called test run, or BPF test run. That's a way where you load your BPF objects — just your BPF objects, not the full application, really just the kernel or BPF parts of it — into the kernel, like your program would do as well, so you already know those programs pass the verifier, and then you basically run a program without attaching it to the real thing. You don't attach it to your network interface or to your syscall; you just run it with a defined context that you give it — you pass a buffer with the context to it and you run the program with that. You can think of it like unit testing for your BPF program: you take the BPF program, give it a defined input, run it, and observe the output. That's of course super useful for network programs, where the context is super simple — it's usually just the network packet — and it can be used for most network program types; it can be used for others as well, for syscalls, for tracepoints — the full list is behind the link; the slides are shared in the schedule, so you can get them from there and click the link if you want to. I want to quickly give you some hints on how to use it, because the documentation — as with some parts of the BPF documentation — is pretty sparse. My hint is: take a look at struct bpf_test_run_opts in libbpf to get started, so you can really see all the options that you have for running programs in that testing environment. The core idea is that you build a packet in user-space memory: you write your packet data and craft the network packet as you would otherwise — or your context, if we're speaking about syscalls — and then you pass that full buffer to the kernel via that BPF test run call, and the kernel then executes the BPF program, in the kernel, on your buffer. So it's not running in user space — it runs in the kernel, on your buffer. If you ask it to, it can also run the program repeatedly; that's interesting, for example, for benchmarking reasons, if you want to run it multiple times.
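A minimal user-space sketch of that, assuming libbpf's bpf_prog_test_run_opts() wrapper around the test-run facility; the packet buffer, its contents and the program handle here are placeholders, not a complete test harness.

// Sketch: run an already-loaded XDP program on a hand-crafted buffer
// instead of attaching it to a real network interface.
#include <stdio.h>
#include <string.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

int run_once(struct bpf_program *prog)
{
	unsigned char pkt_in[64];   /* craft your Ethernet/IP packet here */
	unsigned char pkt_out[64];  /* kernel writes the possibly modified packet back */
	int err;

	memset(pkt_in, 0, sizeof(pkt_in));

	LIBBPF_OPTS(bpf_test_run_opts, opts,
		.data_in = pkt_in,
		.data_size_in = sizeof(pkt_in),
		.data_out = pkt_out,
		.data_size_out = sizeof(pkt_out),
		.repeat = 1000,	/* >1 gives you an average runtime, handy for benchmarks */
	);

	err = bpf_prog_test_run_opts(bpf_program__fd(prog), &opts);
	if (err)
		return err;

	/* retval is what the program would have returned at a real hook,
	 * e.g. XDP_PASS or XDP_DROP; duration is the average runtime in ns. */
	printf("retval=%u duration=%uns out_len=%u\n",
	       opts.retval, opts.duration, opts.data_size_out);
	return 0;
}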
Next I want to talk about the eBPF helpers and kernel functions, kfuncs. Those are basically all the APIs that you can use from your application: a set of functions that you can call. For you as a developer it doesn't make a difference whether you're calling a BPF helper or a kernel function, a kfunc; it looks the same, it's just a function call. The difference is that the BPF helpers are part of the stable user-space API of the Linux kernel, so there is a man page documenting them, and being part of the stable API means they will not change in a backward-incompatible way: programs using BPF helpers that run on a current kernel will run on a newer kernel as well. On the other hand we have kfuncs, and they are not necessarily stable. There are some life-cycle guarantees attached to them, some rules they should follow, and those rules roughly boil down to: we don't want to change them without reasonable justification, we don't want to remove them without reasonable justification, and there should be a deprecation period if we want to remove them. But it's not guaranteed; if there is an issue with one of those functions and the issue is reasonably bad, the function, the kfunc, can just be removed. So if you want to use kfuncs, there's a bit more testing you should put in place: when a new kernel version is released, check that the functions are still there and still work the same way. Sometimes these kfuncs are also explicitly unstable; one current example is conntrack access, you can access the conntrack table from your BPF program, and that is currently explicitly labelled as unstable, so be careful when using it. That's the main takeaway here. The list of helpers and the list of kernel functions is basically growing with every release, so it's hard to take a snapshot in time and say this is what you can do, because it changes from release to release, and new functions, new functionality that you can call from your BPF programs, keep getting added. I want to quickly show you how you can navigate that space, how you can find your way around and find out which functions you can call and which ones are available. For BPF helpers it's kind of easy: there is a dedicated man page called bpf-helpers, you can just type man bpf-helpers and you get a long list of functions. I invite you to step through that once and see what's available; the names are somewhat descriptive, so you can at least guess which areas of functionality are covered. I would not want to guarantee that everything is documented in that man page, so there is also the header file whose path I showed here; you can go to that header file and take a look, and all the BPF helpers must be in there. For kernel functions, for kfuncs, it's not that easy: there is no man page documenting all of them, so in the end it boils down to looking at the kernel source if you want to know which kfuncs you can use. Some of them are documented in the normal kernel documentation, but not all of them, so a lot of them are really only visible if you look at the kernel source. One nice thing, though, is that all of these kfuncs should be marked with __bpf_kfunc, so if you just grep the kernel source for that, you should get a long list of functions that you can use, and it should be almost complete; there shouldn't be any functions that you miss that way.
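Just to show how the two look from the BPF C side: a helper call comes from the standard headers, while a kfunc is declared as an extern __ksym symbol that you copy from the kernel source you target. The kfunc used below, bpf_task_release, is only an example of the declaration pattern and is not called here; check the kernel you actually run against for the exact name and signature.

```c
/* Hedged sketch (BPF C): how a stable helper call and a kfunc declaration
 * look in program source. The kfunc below is illustrative only; take the
 * real declaration from the kernel source you target. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

/* kfunc: found by grepping the kernel for __bpf_kfunc, then declared
 * as extern ... __ksym. Not part of the stable user-space API. */
extern void bpf_task_release(struct task_struct *p) __ksym;

SEC("xdp")
int count_packets(struct xdp_md *ctx)
{
    /* BPF helper: stable UAPI, documented in man bpf-helpers. */
    __u64 now = bpf_ktime_get_ns();

    bpf_printk("packet seen at %llu", now);

    /* Calling a kfunc looks exactly like calling a helper; the difference
     * is only where the declaration comes from and what stability it has. */
    return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";
```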
Right, so as a summary, what I want to show you with this talk is that the eBPF development environment got a lot better in the last years, a lot easier, a lot more comfortable, so you can really build more complex programs more easily. And one particularly nice thing about everything I showed today: these features are not from yesterday's kernel, they are not super bleeding edge, they have been present in the kernel for a while already. So if you have a reasonably recent Linux distribution, you can use these features in your programs today; if you target something somewhat current, you can use them now and you don't have to wait another two years for those features to arrive. And with that I want to thank you for your attention, and if you have any questions, please go ahead. So, the question: freplace is happening at run time, and previously you mentioned that in modern eBPF you can have loops that aren't fully bounded up front; is it correct to say that with freplace, since it happens at run time, you won't need an upper bound and can have any number of loop iterations? So the question is basically whether with freplace and the loops we can escape the limitation that a program has to stop at some point in time. No, not necessarily. Everything still passes through the verifier, and it's not really unbounded. The loop function, for example, is also a helper function, so that's code that is not within your BPF program and that you don't control; the code that iteratively calls your callback function is part of the kernel. You pass a number of iterations to that function, and the kernel makes sure to call your callback that number of times and only that number of times, so there is an upper bound; it's not unbounded in that way. And no matter whether you freplace anything after that, that doesn't change anything.
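For reference, a hedged sketch of the bounded-loop pattern the answer above refers to; the program type and the counting logic are made up just for illustration.

```c
/* Hedged sketch: the bpf_loop() pattern referred to in the answer above.
 * The kernel drives the iterations and enforces the upper bound. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

struct loop_ctx {
    __u64 sum;
};

/* Callback invoked by the kernel once per iteration; return 0 to
 * continue, 1 to stop early. */
static int add_one(__u32 index, void *data)
{
    struct loop_ctx *lc = data;

    lc->sum += index;
    return 0;
}

SEC("xdp")
int bounded_loop_example(struct xdp_md *ctx)
{
    struct loop_ctx lc = { .sum = 0 };

    /* At most 100 iterations; the bound lives in the helper call,
     * not in a loop unrolled in our own code. */
    bpf_loop(100, add_one, &lc, 0);

    return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";
```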
Yeah, go ahead. What I can add there: it's not that kfuncs are not documented at all, it's just that the documentation is not exposed; it's not in the normal kernel documentation, it's not rendered online. But if you search for the kfuncs in the kernel source, these functions usually have some kind of annotation on top of them saying what the function does and what the parameters are. So it's not completely undocumented, it's just not openly documented in an online, rendered way. Any other questions? What exactly do you need... okay, thank you. So we were asked for one sentence, basically: what can you do now that was not possible before, or an example of that. For me the most interesting things are these composability things: you can build more complex applications, and you're no longer bound to the strict rule of building your program first and then loading it into the kernel; in the most extreme case you can change at run time what is going on. An example of what that makes possible is what I mentioned before, libxdp, the library that's used for interacting with XDP programs. They found a way to attach multiple XDP programs to one interface using freplace; before, there was a one-to-one mapping, your network interface could have one program and that was it, and they made it possible for multiple programs to be attached to the same interface. That's something that is enabled by these new functionalities. Alright, thank you very much. So I'm going to give it two more minutes. Am I already unmuted? Do you know how I could point with this? Yes, there's a laser. Ah, great, the red one. Great, thank you. So, hi everyone, I'm here to talk about glibc. I've tried to put this together as a sort of call to action for a first-time contributor or a fairly new contributor, but I'm also hoping that maybe it's useful, or at least entertaining, to people who are experienced programmers and already know their stuff. Who am I? My name is Arjun, I'm an upstream glibc contributor, and I also co-maintain glibc in Fedora and Red Hat Enterprise Linux, which of course means that I work at Red Hat. This is the last talk, so I want to make it really quick, say whatever I have to say and leave some extra time for questions and answers. I hope I make it; it's really my first talk, so I'm not sure how I'll get the timing right, but I'll try my best to leave as much time as I can for questions, and for people who maybe want to leave, or leave to party. Right, so I'm going to go with an introduction; I talked about seven steps in my title, so I'll quickly jump into the seven steps, then I'm going to walk through a patch, a recent glibc patch, just to show what goes into writing one, then I'm going to talk about what you could do to contribute to and help with glibc, and then of course questions at the end. And we're off. So I was first introduced to the C language in high school. I knew a little bit about stdio.h and malloc, and to be honest, it was mostly magical incantations that I wrote in the middle of my program to print, to get input and to allocate some memory. I really believed it was something happening within the compiler and didn't think too much about it, but very soon I realized that these are just functions that can be written in C, so it's not really magic; there's something going on there, obviously. Eventually I got into this field, and now I know a little bit more about all of this, well, not all, but some of it, it's a pretty wide topic, and I know that a lot of it is actually written mostly in C. So what is glibc? It's the standard C library. We have all of those functions, and a lot in between: for example, one might think that main is the first thing that starts executing when you start a program, but there's quite a bit that goes on before that, the loading of the program, placing everything in the right places, fixing up function addresses, and then finally main is called and the execution goes on. So there's a lot more to glibc than just providing the functions; it is the runtime.
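One small way to see for yourself that main is not the first thing to run: a constructor function runs after the dynamic loader has mapped the program and set everything up, but before main is entered. This is just a toy illustration, not something from the talk's slides.

```c
/* Toy illustration: code that the startup path runs before main. */
#include <stdio.h>

__attribute__((constructor))
static void before_main(void)
{
    /* By the time this runs, the loader has already mapped the program,
     * resolved symbols and initialised the C runtime. */
    puts("constructor: running before main");
}

int main(void)
{
    puts("main: running now");
    return 0;
}
```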
Okay. So next we're going to talk about why it might be useful, or why you might want to contribute to glibc. I think the first reason, at least for me, is that it's very high impact: there are millions of installations, and it is the C library for the majority of non-Android Linux-based operating systems. If you make a change in malloc, for example, that shows up on the critical path, you're looking at trillions of executions of the code that you wrote in a week or something like that; I don't know the exact numbers, but you can think about it, every time malloc runs, your code is in there doing something. That's what actually makes me very happy about being able to work on this stuff. The other thing is that it's a fairly actively developed piece of software; it's not some arcane old thing that's never updated. We have over a thousand commits a year, which also means we regularly add bugs that need to be removed. My personal experience with the community is that it has been very welcoming to me, very kind; mistakes are accepted and understood, we all commit bugs once in a while and help each other out to fix them, it's like any other piece of software really. We recently, well, recently is not really accurate, I think it's been more than a year already, started weekly public video patch review meetings, so people who have recently submitted a patch can show up to that meeting, there's a link somewhere in our wiki, and talk about their patch or say that they want review, and someone will be assigned to look at it. A lot of the regular attendees are basically the regular contributors; I don't show up very often to that meeting, for example, but once in a while I do and try to find, or get assigned, maybe a beginner's patch, because I do care about this. First of all it's easy to review a patch by a beginner, and it makes me happy to see that we have another person who made a contribution and who might make more contributions; it has this kind of multiplying effect. So we have all of this, we do look out for patches from new contributors, and we also have a code of conduct that is a work in progress. So yes, we do care about being welcoming, about being kind to everyone, and about getting as many contributions as possible from people who are willing. I said seven steps; that was clickbait, to be honest, I didn't know how many steps it would be, and I don't think there's a fixed number of steps, but I did shoehorn seven steps in here, and you can see them. So you do a git checkout, and here's a bit of an idiosyncrasy: you need to be building in a separate directory, you can't really build in the same directory as the source tree. I don't know the reasons for it, it's something to do with the build system; I don't know most things about glibc, to be honest. So you build in a separate directory, and you need to make sure at configure time that you provide a prefix, which is where it's going to be installed. It's just a couple of idiosyncrasies to building glibc that don't exist in a lot of similarly packaged applications. So let's say that you're trying to fix a bug. A good place to start is by adding a new test that fails without the bug having been fixed; that's a potential step three. Then you implement the fix and you do some testing. We recently, and recent is always a bit of a longer time span here, I think a couple of years ago, maybe slightly more, added a couple of scripts
to help you run a program not against the system's installed C library but against the one you just built; there's a testrun script for that. And then, you know, you need to do a lot of special things to get gdb to pick up the freshly built in-tree glibc, sorry, not the sources, at this point of course the executables, and run it with a test program, so we have a script to help with that as well, and you can use it to check how your test and your fix are working. Then eventually you delete the build directory, you reconfigure, you run make check, you make sure everything's working fine, and perhaps you submit a patch to libc-alpha at sourceware.org; that is our mailing list, we submit patches there, we discuss patches there. Right, is that the end of the talk? Maybe not. So I will now go into the anatomy of a relatively simple patch. I tried to look for something that is, in order of magnitude, halfway between fixing a typo in a comment and an entirely new feature that changes dozens of files; somewhere in the middle of that is this, a fairly simple fix that went into glibc recently. Florian, who is an experienced developer and prolific in glibc sorcery, submitted it a couple of days ago and I reviewed it for him. It's basically a fix to a function called strerror, and apparently this function must not return NULL; we'll get into why. For some reason strerror was returning NULL in some cases, and it's not allowed to do so, and Florian goes on to explain that we made a recent change where strerror was implemented in terms of another function, strerror_l, and that's what caused this regression that needed to be fixed. What are strerror and strerror_l? strerror basically takes a number, which is potentially an error number, and returns a string corresponding to that number, describing what that error might mean. Obviously if you pass zero you get success. And I did not know this, and I really don't know if it was a joke or, I don't know, divine intervention, but strerror(42) returns "No message of desired type". I have not read the book, but 42 is apparently an interesting number, and I thought it was funny that this is the reply you get when you try to find out what 42 means. strerror_l is a very similar function that returns the string in a given locale, which might have a different language than the current locale, the one in which the program is running. So the program is running in locale x, and you want the error message in locale y for whatever reason; then you use this other function. You can now see why strerror can be implemented in terms of strerror_l: you just pass the current locale and you get it back.
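To make the two functions concrete, here is a tiny usage sketch; the locale name is an arbitrary example and has to be installed on the system for newlocale to succeed.

```c
/* Small usage sketch for strerror() and strerror_l(). */
#define _GNU_SOURCE
#include <locale.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* strerror: message for an error number in the current locale. */
    printf("42 -> %s\n", strerror(42));   /* "No message of desired type" */

    /* strerror_l: same, but for an explicitly chosen locale; the locale
     * name here is just an example and must exist on the system. */
    locale_t de = newlocale(LC_MESSAGES_MASK, "de_DE.UTF-8", (locale_t) 0);
    if (de != (locale_t) 0) {
        printf("42 (de) -> %s\n", strerror_l(42, de));
        freelocale(de);
    }
    return 0;
}
```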
So someone made that change, which was definitely an improvement, we don't want to duplicate code, but it caused a regression. And why was it a regression? strerror is not allowed to return NULL, but strerror_l actually is allowed to return NULL, for some reason, and the details of that are in this little bit of the POSIX documentation for these functions. POSIX tells you what these functions are allowed to do in which circumstances, and basically the POSIX manual says that, whether successful or not, strerror must return a pointer to a generated message string, but strerror_l only needs to return one upon successful completion; on failure it can return NULL. So the moment we implemented strerror in terms of strerror_l, strerror started having the same behavior, sometimes returning NULL, and that was the bug. So now we are actually looking at the patch that Florian wrote. It was a test and a small change to the code, and my idea here is really to show that a patch is not so complicated; you can look at this and see that it's not arcane magic. The test includes some usual headers, and you see a few headers called support/something: those are part of the glibc test rig. When you run a test through it, you can use helper functions to do a lot of things, like implement checks and do error checking for the functions you're not testing, and so on. So the patch has a test, and these are the headers the test had. First we're going to look at the test itself; I think I should be using this at this point, is that visible? It's a function that tests strerror, and what it does is set a variable called fail_malloc, which, when you turn it on, makes every malloc from that point on fail. Then you call strerror with a weird number, which is obviously not a regular error number, and you get the result. Then you stop causing malloc to fail, because you don't want the rest of the test to stop working; you only want malloc to fail for the call to strerror, because we know that that is the point at which strerror was returning NULL, it would fail and just return nothing. So we made malloc fail, we got a result, and then we check that this result is the same as the string "Unknown error", which is the default string for any kind of error that you don't really know anything about; that's what we expect. This TEST_COMPARE_STRING is also from the test rig; like I said, we have the support directory with all of these helpers, and this is one of them, it just compares strings and logs an error if they're not equal. I forgot I can use this to change slides. So now we're looking at the rest of the test. The test rig actually has this bit where it makes sure a test will not run forever: if the test were a main program that never returned, you'd run make check and it would be stuck running one of these tests forever. So for tests in glibc we require you to define a function called do_test and write all of your testing inside it, and then you just include the rest of the test rig, which has a main function, and that will make sure that do_test doesn't run for more than a couple of seconds; either the test returns, or if it times out, it errors out and we know the test hangs for some reason. So that's pretty much all of the test. Florian did include another test for strerror_l, but that's not important; I just want to talk about this one piece.
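Roughly what such a test looks like in glibc's test rig. This is my reconstruction from the talk, not the literal patch: the fail_malloc switch is the idea described above (it pairs with the interposed malloc discussed next), and the exact error number and expected string may differ from the real test.

```c
/* Hedged reconstruction of the test shape described above; not the
 * literal patch. Needs the interposed malloc shown later to be useful. */
#include <errno.h>
#include <stdbool.h>
#include <string.h>
#include <support/check.h>

/* Flips the interposed malloc (defined elsewhere in the test) into
 * always-failing mode. */
static bool fail_malloc;

static int
do_test (void)
{
  fail_malloc = true;
  const char *result = strerror (999);   /* a number with no known message */
  fail_malloc = false;

  /* Must never be NULL; with no memory we expect the static fallback. */
  TEST_COMPARE_STRING (result, "Unknown error");
  return 0;
}

/* The test rig supplies main() and the timeout handling. */
#include <support/test-driver.c>
```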
So that's what I bolded here, and now we move on to really the end of the test itself, which is the malloc; we said there could be a malloc that fails, and this is the malloc that was in the patch. It's a malloc which will take precedence over the malloc in the C library: if you define this function in your test, it gets picked up before the glibc malloc. And what it does is, if this fail_malloc is true, it returns NULL, which means it won't allocate; otherwise it goes into the glibc .so, picks out the actual malloc and asks it to do the allocation, because we don't want to re-implement the whole of malloc, we just use the one that actually works.
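And a hedged sketch of that interposed malloc. Reaching the original allocator through glibc's exported __libc_malloc is my assumption, made only to keep the sketch self-contained; the actual patch may get at the real malloc differently.

```c
/* Hedged sketch of the failing-malloc interposition described above.
 * Using __libc_malloc to reach the real allocator is an assumption for
 * illustration; the actual patch may do this differently. */
#include <stdbool.h>
#include <stddef.h>

extern void *__libc_malloc (size_t size);

static bool fail_malloc;   /* the same switch the test flips */

/* Defining malloc in the test overrides the libc definition for this
 * program, so every allocation in the process goes through here. */
void *
malloc (size_t size)
{
  if (fail_malloc)
    return NULL;            /* simulate out-of-memory */
  return __libc_malloc (size);
}
```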
Okay, so we finished writing the test; how do you add a test to the glibc source tree? This tests a string function, so we have a directory for that, and there's also a Makefile in there, and there's this little line called tests: you just add the name of the test there, without the .c. It's literally just that: you write the function, not as main but as do_test, you add the name to the Makefile, and you already have a test that will start to fail without the fix. And I hope this is visible, because it was really hard for me to split this across multiple slides; now we're looking at the fix itself, and I'll quickly go through it. We have strerror_l; like I said before, we implemented strerror in terms of this function, so this is the function where the problem lies. Now, I know it's allowed to return NULL, but there's nothing wrong with not doing that; we can do better than what the standard requires, so we'll now make both of these functions never return NULL. We have this error number that we get, and ignore that bit, it's not about errno, errno is something else, let's not think about that right now. So we have this error number from the user and we need to convert it to a string. We go to the error list, let's not care about the details, we look up the number and we get a string corresponding to it, and if we don't get a string for that number, then we know it's an unknown error, we don't know what this error is; if we knew what it was, we'd have it in that list. So we're in the branch where we don't know what it is; if we do know what it is, we just translate it to the requested locale and return it. So this is what the code looked like before, and here on the right side is what the code looks like after the fix. Everything else is the same; it's just this bit, where we don't know the error, that got fixed. And what got fixed? When we know there's an error number 999, which is what we had in the test, we try to return something like "Unknown error 999": we try to create, on the fly, a string that includes the number, so that when it eventually shows up somewhere in the application, that number is not completely lost. What we were doing was an asprintf, which allocates memory and prints into it whatever you want; it's basically like printf, but it creates its own buffer and prints into a string. So we call asprintf, and whenever asprintf returned minus one because it couldn't allocate memory, we were returning NULL. Okay, so what do we do now? We still try to return "Unknown error 999", but when asprintf fails, okay, Florian changed it a bit: we were looking for minus one for failure, now we are looking for greater than zero for success. Why is that? Because asprintf returns the number of bytes that were written, so if it wrote a few bytes, we know it succeeded; if it wrote zero bytes, we know something went a bit odd there, it didn't write anything; and if it returns minus one, or any negative number, then of course that's also an error condition. So that kind of got reversed: before we had the error condition first, now we have the success case first, we changed it around a bit. If we succeed, we set the return string to the one we got from asprintf; but if we fail, what we do is return simply "Unknown error", without the number, which is a static string, we don't need to allocate anything for it, it was part of the binary anyway, it's just going to get returned. So that's the fix, and now we look at this and we are sure we are never going to return NULL. Okay, I do want to pause here and ask if this was complete gibberish or if it kind of made sense. Kind of made sense. Are there any, let's say, C beginners here who see this and feel like, okay, that's not too hard, it's not arcane magic? Okay, I think that's what I was hoping for; I was just hoping to show that a glibc patch is not all that. So that's the fix, and that is actually the entirety of the patch; I know I formatted it and showed it in a slightly different way, but that was Florian's patch which fixed this bug, and that's really the point I wanted to make.
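The shape of the fix, as I read it from the talk; a sketch in ordinary application C, not the actual glibc code, with a made-up wrapper function just to show the control flow.

```c
/* Hedged sketch of the shape of the fix described above (not the actual
 * glibc code): fall back to a static string instead of returning NULL. */
#define _GNU_SOURCE
#include <stdio.h>

static const char *
unknown_error_string (int errnum, char **allocated)
{
    /* asprintf allocates a buffer and returns the number of bytes
     * written, or a negative value on failure (e.g. out of memory). */
    if (asprintf (allocated, "Unknown error %d", errnum) > 0)
        return *allocated;

    /* Allocation failed: return a static string so the caller never
     * sees NULL, at the cost of losing the number. */
    *allocated = NULL;
    return "Unknown error";
}
```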
Now I want to talk about what you could do to contribute to glibc, even if you're super experienced in something else, or super experienced in glibc but bored of doing what you usually do. There are a lot of things that require relatively little knowledge of the internals where you could make a difference. The first one is that you could write new tests and improve old ones. For example, I'm also a Fedora contributor, and we have the Fedora glibc package with some CI behind it, a lot of tests; some of those tests I actually wrote a few years back and didn't upstream, for whatever reason, don't hate me. So we run that CI, but a lot of those tests are not really upstream, and the reason is that some of them require setup, or alter the system's configuration in some way; those are not the sort of tests you want in a test suite for an application, where you're messing with the user's system, and it would probably just fail anyway because it would try to modify nsswitch.conf and can't. But since then we actually got containerized tests in glibc, so, going back here, instead of adding an entry to this tests line you add it to a different line called tests-container, and you can write a containerized test: you make a little directory containing all the files you want inside the container, and you can write a test that does a bit of setup that modifies the system, knowing it won't actually modify the system because it runs inside a container. So you could do tests like that: you could look at the CI tests for Fedora and upstream them as containerized tests, for example. I'll come to that, actually: I just attended a talk earlier today about how it feels to be a beginner in the open source community, and the speaker there said that we should have good documentation, so I feel a bit bad that I'm saying documentation is a beginner task, but okay. If you are a bit into gdb, there's this whole thing called pretty printers; I don't know much about it, but I think it's a Python thing where you can write pretty printers, and we have lots of these opaque glibc data structures, maybe bits of the glibc heap, maybe there's a way to walk through the heap, maybe some other opaque types. Like a lock: is a lock locked? Because if you try to look at a pthread mutex and see what its values are, it doesn't say whether it's locked or not, it just has some numbers in it. So you could write a pretty printer that just tells you, okay, this lock is currently locked or not. Occasionally when you're just reading code you'll notice that the code changed a bit, so the comment is a lie and nobody caught it at review time; it often feels like fixing that has no value, but honestly there is value in it, it's obviously not glamorous but it helps. We also have a bug tracker where you could sort by new and maybe confirm that a bug actually happens for you, or triage bugs, see why they happen, all sorts of stuff. Some relatively more specific ideas: you could optimize the integer-to-string conversion in printf, you could rewrite the base64 decoding and encoding, and we don't have info pages for a lot of pthread functions and some dlopen and related functions; you could write those, explaining the details of how the glibc implementation handles these things. And we have this mtrace, currently a Perl script, which you could convert to C. So these are concrete things we could use some help with. We also have awk and Perl scripts that we picked up at various points in time, and we could kind of standardize on Python, no hate for these, just reducing the number of things we need to build glibc. Yeah, a lot of things to do. I promised to leave a lot of time for questions and I see it's only five minutes, I'm sorry about that, but I hope some of you will stay and keep asking; I'm here to answer anything that I can. Final links: we have a wiki, it's very out of date to be honest but it's still useful, we have the bug tracker and we have the development mailing list, and then this one I would say is really nice, libc-help, for people who don't feel comfortable posting a patch from the get-go but need help with something: maybe you want to ask what you could work on, maybe you want to talk through something you're stuck on; everything libc related is sort of on topic there, you can write there and ask questions. You could also write to me; hey, I'm just sort of, kind of a beginner here myself, to be honest, it takes years working on this stuff and you still feel like you don't know much, but you could write to me, and if I can't answer I'll point you to someone who can. So, now questions. I'm just going to repeat the question, which is that for a lot of application-level stuff you have, I guess, a more modern repository and a contributing.md, possibly on GitHub, and glibc doesn't have that; and also a lot of people take it for granted, it probably works well enough, I guess that's what you want to say; so how do we hope to get more contributors? To be honest, this is a question I do not
have an answer to. It is quite an old piece of software, it is well established. The truth is that there are, as I said, over a thousand commits a year; I actually checked, maybe not every release has 500-plus commits, but the average is over a thousand a year, so it's happening. I think a lot of the contributors do tend to be full-time employees of software companies in this field, that is quite true. It's a hard problem, really, I must say it is a hard problem; it's also just not as glamorous as the kernel. I don't know the solution to this, but you're right, I think it is sort of harder to get contributors into this than into some other stuff, so I guess that's my answer. Sounds good to me. Okay, you can go first. Yes, so there are a lot of things you could do. First of all, if you found the bug, obviously you could file a bug report; while you're working on the patch, file a bug report and assign it to yourself, you don't have to, but it's good to do that. But literally, sending it to this mailing list is all you need to do. You could be a drive-by contributor who sends a patch that sort of fixes it but has some issues; you could just send the patch here and walk away and never come back if you don't want to, and we might still actually work on it, write ourselves down as a co-author and finish the patch. I've seen that happen at least once fairly recently: somebody fixed a bug in one of the glibc utilities that ships as an executable, I don't know which it was, and I think someone fixed up the patch a bit and then committed it on their behalf. So that's it; it's a bit old-fashioned, I will admit. I use git send-email; I also remember that my company changed our email provider, and there were a few months in between where I was nervous about whether I would be able to send patches the same way, and I was using my private email address to send patches, but git send-email works for me even now and that's what I use. But you could also attach the patch to an email and just send it there and it should be fine. We don't have pull requests, sorry. Aha, okay, good question, I will come to you. So, license agreement, I guess you mean copyright assignment: yes, you may assign copyright to the Free Software Foundation if you wish to, but it is not required anymore; you can use a Developer Certificate of Origin, you don't need to assign copyright, that requirement is gone. Yes, it is fairly new. Yes, I have the same issue. So the question is whether there are any free tools to help understand all of the function calls that happen there, where you don't really know what's going on. The answer is, I actually don't; I have the same problem myself, and I try to avoid looking at a lot of context and just look at this particular patch, this particular bit of code. Okay, I'm out of time; I can continue answering questions, but we will do it off this platform. Thank you.