Okay, I'm starting. Okay, let's start then. Thank you very much for being here this afternoon, at the end of the summit. Be prepared to hear a lot of numbers and charts: this is a number-intensive talk. I suggest that, if you want, you go to the bit.ly address on the slide and get a copy of the slides, because there you have the clickable links to all of the dashboards that I'm going to present. The most interesting thing about this talk is probably that you can play with it yourself, because most of the numbers that I'm going to show can be found in dashboards, and you can go there, dig deeper, and find other numbers that could be of interest to you.

Okay, so this is the idea of what I'm going to try to tell you. There are a lot of slides; I'm not sure I will be able to go through all of them, but they are there for your reference if you want more details. I'm starting with a bit of context about myself and the company where I work. Then I'm going to talk a bit about the methodology, and about the projects that we are going to analyze. This is about analyzing different projects in the cloud area; I'll say in a moment which ones, and we'll compare them with OpenStack, not look only at OpenStack. Then I'll show some specific explorations: the aging charts, which basically show the experience of people in the project and how people are entering and leaving it; the geographical regions, where I'll talk about time zones and some other stuff; the hourly patterns, which are quite interesting for understanding some characteristics of the developers; and also how to measure corporate diversity in a project, not only as a whole but also in specific parts of it. Then I'll talk about companies in particular. As a bonus track, I'm going to show you a proof of concept that we are developing right now with Kibana-based dashboards, which are very interesting because you can drill down to any level of detail. And then I'll draw some conclusions.

So, a bit of context. This talk was prepared by me and Dani. Dani is not here, but he will probably come in a while; he was in another meeting right now. Both of us are founders of Bitergia. Bitergia is a small company doing analytics, and the kind of analytics that we do is software development analytics. We go to the software repositories, like Git, like Bugzilla, like Launchpad, like Gerrit, like mailing lists, like Stack Overflow, any kind of place where developers are either having a conversation or doing something; we get that stuff into a database and then query the database to get interesting numbers, trends, etc. We produce dashboards, we produce reports, we provide consultancy, that kind of stuff. I'm also working at the university; there has been research in this area for more than 10 years, and part of what the company is doing is, in fact, based on results of our research group. So in some sense we are trying to bring to industry some practices that have been common in the academic community for a while.

Well, let's go. This is about the quantitative state of the clouds. There are several previous editions of this talk: the first one I delivered at OSCON two years ago, and another I delivered at OSCON again this year.
And the idea has always been the same: go to the main cloud systems, OpenStack, CloudStack, Eucalyptus and OpenNebula, and analyze them. Not exactly in a comparative way, because they are very different, but precisely to focus on how different they are, while also finding some things that all of them have in common. And then there is going to be this bonus track: since we are at the OpenStack Summit, I'm going to focus about half of the talk on OpenStack, trying to explain some findings about OpenStack development, and also how you can find things by yourself by looking at the dashboard that we have prepared for this talk.

Some words about the methodology, so that you really understand it. First of all, we run a transparency analysis on the projects that we are going to analyze. Transparency means how transparent they are when you try to get data about how they develop. Consider that a company may develop open source software by developing everything inside the company, so that they only release the software; all the information about how the software is developed may stay inside the company, and you never learn about it. This is not what usually happens with open source software, but it can happen. So the first thing is to go there and see whether the project, in addition to producing open source software, is really an open development project, one that really provides information about how it develops. That's the transparency analysis.

Then we go to the tooling. Once we know what the repositories are, we extract information from them. For that, we use MetricsGrimoire. MetricsGrimoire is a set of open source tools that you can use to mine every kind of repository that open source projects usually use. Of course you can get information out of Git, or Bugzilla, or Launchpad, or mailing lists, or Gerrit, but you can also go to Stack Overflow, for instance, and get information about how people are talking about your project, or to a Slack, and get conversations from there. Then we use GrimoireLib, which is a Python library that basically produces the information you want from the database; mostly it produces JSON files that are later visualized with VizGrimoireJS, a JavaScript library that we use for producing the dashboards. We are also presenting GrimoireNG in this talk. Well, GrimoireNG was presented at OSCON, but the Kibana-based proof of concept is new, because we are trying new ways of showing the dashboards that are a bit more actionable, where you can play more, and Kibana is a good solution for this. Kibana is also open source software; in fact, the whole stack we use is open source software, so you can take it and reproduce all of this if you want. And Kibana is showing some capabilities that are very interesting to us, and I hope to you.

Okay, so the talk is not going to be about performance, or about how OpenStack is being used, or things like that. It's going to be about how it is being developed. We are going to focus on activity (basically, how many contributions there are), on processes (how the processes are performing, things like how long reviews take to close, for instance), and on community (who is contributing, numbers and actors). As I said, we are not going to analyze functionality, runtime performance, or even popularity. That's out of scope here.
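To make the extraction step a bit more concrete, here is a minimal sketch, in Python, of the kind of per-month numbers that end up in the dashboards: commits and distinct authors per month, read straight out of `git log`. This is an illustration under my own assumptions, not MetricsGrimoire's actual code.

```python
# Minimal sketch of the kind of extraction the tooling performs: parse
# `git log` output and count commits and distinct authors per month.
import subprocess
from collections import defaultdict

def monthly_activity(repo_path):
    # %H = commit hash, %aI = author date (ISO 8601), %ae = author email
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=%H|%aI|%ae"],
        capture_output=True, text=True, check=True).stdout
    commits = defaultdict(int)
    authors = defaultdict(set)
    for line in log.splitlines():
        _, date, email = line.split("|", 2)
        month = date[:7]                     # "2015-05-18..." -> "2015-05"
        commits[month] += 1
        authors[month].add(email.lower())
    return {m: (commits[m], len(authors[m])) for m in sorted(commits)}

for month, (ncommits, nauthors) in monthly_activity(".").items():
    print(month, ncommits, "commits,", nauthors, "authors")
```

In the real pipeline this kind of data goes into a database first and is queried from there; the point is only that everything shown later derives from fields that the repositories expose publicly.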
And basically what we do is produce a dashboard for each of these projects, I mean, for the four that we are analyzing. This is the OpenNebula dashboard, and I'm going to use it to explain very briefly what we show in a dashboard. Basically, you have one row per data source. For instance, this is Git, this is the ticketing system, and then the mailing lists and the code review system. And you have the main trends: this is about people, and this is about activity. For Git, for instance, this is the number of commits and this is the number of authors, per month in both cases; for this dashboard, the data is per month. On the left, you have some numbers, things like the number of active people in the community. For all of the projects, the information is pretty similar; some have more data sources than others, but basically the information is the same. You have the URL for each of the dashboards at the bottom of the slide, so you can go to the internet and check the real thing.

This is OpenStack. Well, OpenStack is so massive in development (I will be talking about that later) that the dashboard is by week instead of by month, so you have to consider that when comparing the numbers. And this is GrimoireNG, in case you want to look at the same data from another point of view. These are a bit more actionable: you can click on the different areas in the charts, in the pie charts and so on, and you get the information filtered, so if you are interested in what happens in a specific subproject of OpenStack, for instance, you can get that information for a specific time period. Again, there is one of these for each of the four projects; you can go there and analyze the real thing, and you have the URL at the bottom of the slide. The information below is the same; these are just two different ways of looking at it.

So let's start with the transparency analysis. We did get data for all of them, and this is the first thing to note: they are all really doing open development. There are some issues with some of them, but they are minor. First of all, all of them have Git repositories, so you can get all the Git information. And it's not just a dump of code (some projects just dump code there from time to time); this is the real thing, so you can see what the patterns of collaboration between people are in the Git repository, and we can really understand how they are adding more and more changes to the repository over time. All the code seems to be in Git at some point: you can compare with the distribution, and you find everything in Git at some point. For the cases of OpenStack, CloudStack and Eucalyptus, it seems that all the tickets are in the ticketing system. That's not so clear for OpenNebula, because of the traffic you see there; it's very likely that they have a different tracking system for their customers or something like that. And that's it; that's the only minor issue. It means that maybe OpenNebula is not showing all the information about tickets.

So let's look at the numbers. This is activity, and here you start to see the main differences between the projects, and they are very, very obvious. This is the number of commits, and you can see how OpenStack stands out by almost an order of magnitude.
If you look at the developers, you basically see the same thing. From pretty close to 4,000 active developers at some point in OpenStack, the next one is CloudStack, with somewhat more than 300. So you can see that there is an order of magnitude, in activity and in community, between OpenStack and the others. And between the others there are some differences too. The clearest one is this: core developers. How do we define core developers? Those that together wrote more than 85% of the code. That means that in the case of OpenNebula, the team of people really writing the code, I mean most of the code, is seven people. Small team, small company. Eucalyptus is a bit more, CloudStack is a bit more, and in OpenStack you need to add up the contributions of 337 people to reach 85% of the contributions. So it's clearly a more active community with a bigger core.

If you look at the ticketing system, you find similar results. Probably the only outlier is OpenNebula, which has far fewer tickets than the others. And if you compare these, which are sort of in the order of 10,000, with OpenNebula, which is in the order of 7,000, you can see that there is a difference too. I'm not sure if there is a question over there. [Audience question] Yeah, I don't remember exactly; you can see that in the dashboards, but it's basically almost all of the repositories that are considered by OpenNebula as OpenNebula. I mean, it doesn't include... Right, this is up to July. Yes, you are right, Stefano, thanks for the clarification.

Well, I was with tickets. If you look at the people submitting tickets, that's very important, because that's the number of people who bothered to go to the ticketing system to report an error, ask for a feature, or things like that. And again you can see an order of magnitude of difference between OpenStack and the next one, which is CloudStack; and then these two are even smaller.

Then we can look specifically at the last month. The previous numbers were for the whole history; you can look specifically at the last month, and you can see how all the numbers, which are approximate, first of all, are consistent with that ordering. This is a fairer comparison, because it doesn't take the history into account: over a whole history, a project that has existed for longer is going to have more commits, for instance, while this compares the same time period. I'm not going to enter into the details of the numbers, but you can see how, again, this order of magnitude of difference shows up.

So let's now move to the more specific, but I hope interesting, stuff. First of all, the aging charts. For the aging charts, the idea is quite simple: it's like looking at the age structure of a community. If you do that for a country, for instance, you know how many people are old, how many people are young, how many people were born during the last year. This is exactly the same, with age meaning time in the project. And we talk about time active in the project: if we are talking about developers, I mean committers or authors, we are talking about people being active as authors. If you stay for six months without a commit, for instance, you become dead, let's say, in the project, and you disappear from the aging chart. So the idea is that with this you can find out how many oldtimers we have.
You can look at the old generations: how many of them are we retaining? But you can also see how much new blood we have: how many people are being born into the project, let's say, how many people are coming in, are being attracted. The aging chart looks like this; it's like half of the population pyramid, which you probably know from demographics. For instance, this is for CloudStack, and this is the last generation, the people entering in the last six months. By the way, I guess this one starts last September, so the generations are counted back from September. You have the blue and the yellow lines. The yellow lines are the number of people that entered the project in each generation: this is the number of people, like 40, that entered during the last six months; the previous six months, that many people entered, like 80. The blue line shows those of them that are still active, in the sense that they are still committing. That means that if you look at that second generation, like 80 entered and like 35 are retained; the others left. Consider that this is quite normal, because if you make a commit you become part of the population, but it's very likely that many people just make some commit and then leave, because it was something casual. It depends a lot on the committing policies of the project: in some projects the barrier to committing is very high, in some others it's not that high. In this one it's not that high, so people can enter and leave very quickly. But in any case you can see that, for instance, here you basically don't have people with more than four years of experience still in the project, and of the old people there are very few left, right?

Now compare that with OpenStack. This is OpenStack like one year ago, and this is OpenStack now. You can compare: like one year ago, the last generation, this is summer 2014, and in the six months before that, OpenStack attracted like 600 people; in the six months before that, like 500 people. Look at the yellow lines and you can see how this was expanding, semester after semester. If you look at the yellow lines here, in the current chart, you see a different story. Well, the periods are not exactly the same, and that's why there is this difference here, but basically OpenStack had this growing trend up to the last semester, and the last semester is the first one in its history where it is not attracting more contributors than the one before. Well, you had another one here, but that is because of the way we calculated the months. So we can say that the population of OpenStack, with respect to attracting people, is starting to become stable: it seems that OpenStack is attracting between 600 and 800 new people every six months.

And then you can look at the blue lines and learn how many of them are retained. You can see an interesting thing here: if you are retained after the first six months, it's very likely that you stay retained. So basically, if you stay for six months, you are in the place for a while. That's quite interesting, because it means there is some kind of entry barrier at the beginning; maybe some people commit something, but it's too difficult for them, or they move on to other things, whatever. But if you stay for six months, you have some commitment to the project, and you stay for a longer time.
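To make the aging-chart numbers concrete, here is a minimal sketch under my reading of the talk: contributors are bucketed into six-month generations by their first commit, and "retained" means having committed within the six months before the snapshot date. This is illustrative, with made-up data; it is not Bitergia's actual implementation.

```python
# Sketch of an aging chart: per six-month generation, how many contributors
# entered (yellow line) and how many are still active (blue line).
from datetime import datetime, timedelta

def aging_chart(commits, snapshot, months=6):
    """commits: iterable of (author, datetime) pairs."""
    first, last = {}, {}
    for author, date in commits:
        first[author] = min(date, first.get(author, date))
        last[author] = max(date, last.get(author, date))
    window = timedelta(days=months * 30)
    generations = {}   # generation 0 = entered in the last six months
    for author in first:
        gen = int((snapshot - first[author]) / window)
        entered_retained = generations.setdefault(gen, [0, 0])
        entered_retained[0] += 1
        if snapshot - last[author] <= window:   # active recently enough
            entered_retained[1] += 1
    return generations

demo = [("alice", datetime(2013, 2, 1)), ("alice", datetime(2015, 4, 2)),
        ("bob", datetime(2014, 1, 10))]
print(aging_chart(demo, snapshot=datetime(2015, 5, 1)))
# -> {4: [1, 1], 2: [1, 0]}: alice entered ~2 years ago and is retained;
#    bob entered ~1.5 years ago and has left.
```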
And that's quite interesting, and you can see that there are people, a few of them, but there are people, with like five years of development experience in OpenStack, which, given how young the OpenStack community is, is astounding, because those guys are still committing to the OpenStack repositories after five years.

Well, geographical origin. This is a different story. Knowing where the developers come from is really, really difficult, because basically we don't have that kind of information. If you are running a web server that all the developers have to visit or something, you can look at IPs and IP geolocation and so on, but that's really about the only way of doing it, and usually you don't have that information; it's also very difficult to know from the visits to a website whether the people are really developers or not. So we need another kind of analysis. Of course, you can run surveys, and you can ask developers to register and say where they come from, whatever, but in the end you are relying on the answers of people, and well, maybe they don't want to answer, or maybe they just don't bother, which is what happens in most projects. So what we do is just look at the Git repository. In the Git repository, you have the time zone of every contribution, and we just analyze that. Of course, that's not perfect, in the sense that we can only track big geographical areas. But fortunately, if you look at how the time zones lie over the earth, you can tell interesting things: people working on the East Coast or the West Coast of the States, or in most of Latin America, or in Western Europe or Eastern Europe, or Asia. You can tell China from India, from Japan and Korea, for instance, because they are in different time zones too, and even Australia and New Zealand. So you can get a lot of information.

So, this is OpenNebula, and this is basically Europe: time zones one and two. Remember the summer time zones that we have in Europe and in the States, right? This is reasonable, because the company is in Madrid, Spain. That's it. You can look at Eucalyptus: West Coast. Okay. CloudStack is a different story. So what is this? India. So, not half, but a sizable part of CloudStack has been developed in India. And then you have Western Europe, and then you have the West Coast and a bit of the East Coast. And then you have OpenStack. OpenStack is, by a distance, much more diverse in geographical terms. An interesting thing is to look at how this is evolving over time, because you can look at this four years ago and you can look at it now, and you can see the main differences are in Europe and in Asia. At the very beginning, this looked a lot like the States. But, for instance, in this period, which is the last year, you can see how there is in fact more development in Europe, including Russia and, I'd say, everything up to the Urals, than in North America. Which is interesting if you look at where the companies, or at least the headquarters of the companies, are located. And you have the participation of Asia: you can see how India is not that represented, but China is; China is time zone eight. And Japan and Korea, and this is basically Australia and New Zealand. Right. And as I said, looking at the evolution of this is quite interesting.
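Both the time-zone distribution we just saw and the hourly patterns coming next can be read from the same field: the author timestamp that git stores with every commit, which keeps the author's local time plus its UTC offset. Here is a minimal illustrative sketch, not the real tooling; note that half-hour offsets such as India's +0530 get truncated to the whole hour in it.

```python
# Sketch: count commits per UTC offset (geography) and per local hour of
# day (work habits), using git author timestamps like
# "2015-05-18 14:23:01 +0200".
import subprocess
from collections import Counter

def commit_patterns(repo_path):
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--date=iso", "--pretty=%ad"],
        capture_output=True, text=True, check=True).stdout
    zones, hours = Counter(), Counter()
    for line in log.splitlines():
        _, time, offset = line.split()
        zones[int(offset[:3])] += 1   # "+0200" -> UTC offset +2
        hours[int(time[:2])] += 1     # "14:23:01" -> local hour 14
    return zones, hours

zones, hours = commit_patterns(".")
for tz in sorted(zones):
    print(f"UTC{tz:+03d}: {zones[tz]} commits")
for hour in range(24):
    print(f"{hour:02d}h {'#' * hours[hour]}")
```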
This is another thing: hourly commit patterns. Hourly commit patterns say a lot about developers. Because if you're a developer working for a company from, say, nine to five, that's your commit time. If you are a volunteer working on Saturdays and Sundays and at night, that's your commit time. So just by looking at this you can learn, for instance, that the people in OpenNebula, which is this one, are mainly working for a company; you can see this is the usual office pattern. By the way, do you know what the time for having lunch in Madrid is? Exactly. You can compare that with the time for having lunch in the case of Eucalyptus, which is in California. There's a difference, right? And you can see how the people in Eucalyptus, some of them, tend to work late. That means that either they are volunteers, or they are working at a very flexible company, because they are not working office hours. That's not important by itself, but it is important from the point of view of: if I make a contribution, when are they going to answer me? So this one is much more continuous than the other, right?

And this is CloudStack on the top and OpenStack on the bottom. You can see that OpenStackers are working even at, say, four o'clock in the morning, a sizable part of them, which is just interesting. And you can see most of them are having lunch at about 12, so if you need to set up a meeting, don't schedule it at 12 o'clock. But anyway, you can see how this is much more spread out. That is because, first of all, the community is more varied in terms of cultural areas; people have lunch at different times, and some stay longer at night while others start early in the morning, depending on many other things. There are volunteers, there are many people from companies, but there are also a lot of people in companies working flexible hours. So this is quite interesting, again, from the point of view of how I have to collaborate with these guys.

Another topic: corporate diversity. Corporate diversity is very important for projects, because it tells you whether I'm depending on a single company or on a bunch of companies. Remember that when you're adopting open source software, in fact you're adopting a community, in some sense; you are relying on that community. If that community is focused around one company, in the end what is happening is that you are relying on that company. That may be very good or very bad, depending on the policy of the company. At some point, they may decide to invest a lot, and the product is going to grow very quickly. Or at some point, they may want to pull out and remove all their developers from the project, and the project is going to have some trouble.

Okay, so let's look at that. If you look at the companies, this is basically OpenNebula. OpenNebula is the Universidad Complutense and OpenNebula, the company. This is all the history, and it looks like this because the project started in the university and then moved to the company; you can see most of the development is those two things. This is Eucalyptus: again, a single-company project. This is CloudStack: it started as a single-company project, but they are getting more contributions; basically, this area is enlarging year over year. This one is again for the last year, from July 2014 to June 2015. And then you have OpenStack. OpenStack is a different story again: you have many different companies. You can see how some of them are very big, but there are a lot of them, and a lot more here in the others. In the bottom, you have the number of companies active per month, with contributions.
And you can see how, since more than one year ago, it has been steadily over 50. That means there are over 50 companies contributing every month to OpenStack, so they are seriously committed. So again, the diversity here is quite different from the other projects.

At some point, we felt we needed a single number for talking about diversity. So we went back to basics, and we remembered that Apache is doing something like this. The idea in Apache is: how much do we depend on developers, how many developers do we rely on when we are talking about the project? And the idea was to define the pony factor. They consider Apache developers to be like ponies, and the question is: how many ponies do we need to account for 50% of the contributions to the code? That's the pony factor. We extended that to companies, and we defined the elephant factor. Companies are not ponies, they are more like elephants, you know? So the idea is: how many companies do you need to account for 50% of the contributions to the code? Right? It's a single number and, of course, single numbers always have a lot of trouble, but with that number you can capture how much you are depending on different companies.

And this is the table for some projects in the cloud area, not only these four, but some others. You can see how the pony factor for OpenNebula, for instance, is four: there are four people contributing more than 50% of the contributions. For Eucalyptus it's five; for OpenStack it's more than 100. But you can go deeper: for Cloud Foundry it's a bit more than 40, for OpenShift it's 10, for Docker it's like 15, and for Kubernetes it's 12. If you look at the elephant factor, it is one for all of them, except for OpenStack. But OpenStack is big, and if you look at OpenStack at a different granularity, project by project within OpenStack, the story is a bit different; I'm going to tell you about that later. Right now you can see there is a different picture here. Well, this is just the number of commits, excluding bots, which is what we are talking about: here six companies are doing half of 126,000 commits, which is a bit more than 60,000 commits for all six companies together, and that is a sizable amount of commits, by the way.

So now let's go to the final part of the talk, which is these Kibana-based dashboards. Here I'm going to analyze in a bit more detail two specific aspects of OpenStack: one of them is the elephant factor, and the other one is code reviews. For this, it's very important that at some point you go to the dashboards and look at the real thing. We have prepared two dashboards for this. One is for companies: it's basically commits, and you can drill down by company, by person, by project, by time, and by some other things. The other one is for code reviews, where basically you can learn how long code reviews take and what happens in terms of time to merge. By the way, these dashboards are not a product; they are still a proof of concept, but the data is real, and I guess they work well enough to give an idea of what's happening. So, this is the one on contributions, which I'm going to talk about now, and this is the one about code reviews.

So let's start with contributions, with the elephant factor. I said that for OpenStack the elephant factor is six, but what happens if we go project by project?
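Before drilling down, a note on how these factors can be computed: the pony factor, the elephant factor, and the 85% "core team" figure from earlier are all the same calculation, the minimum number of contributors (people or companies) whose commits add up to a given share of the total. A minimal sketch, with made-up numbers for illustration:

```python
# Sketch of the pony / elephant factor: sort contributors by commits and
# count how many are needed to cover a given share of all commits.
# Run over people with share=0.5 it gives the pony factor; over companies,
# the elephant factor; over people with share=0.85, the "core team" size.
def coverage_factor(commit_counts, share=0.5):
    """commit_counts: mapping of author (or company) -> number of commits."""
    total = sum(commit_counts.values())
    accumulated = 0
    for factor, count in enumerate(
            sorted(commit_counts.values(), reverse=True), start=1):
        accumulated += count
        if accumulated >= share * total:
            return factor
    return len(commit_counts)

# Hypothetical company -> commits mapping, not real OpenStack data.
companies = {"company_a": 500, "company_b": 300, "company_c": 250,
             "company_d": 100, "others": 90}
print(coverage_factor(companies))        # elephant factor (50%): 2
print(coverage_factor(companies, 0.85))  # 85% coverage: 4
```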
So let's start with Nova. Nova is the largest one in terms of number of contributions, and we can see the elephant factor here: these are the top companies, and it's a bit more than two; it could be two or three, depending on how you count it. That's quite important, because if you think of OpenStack as a whole project, you say: that's nice, we have a factor of six, we are relying on a lot of companies, that's okay. But if you go component by component, you can see that for some of the components the diversity is not that high. For instance, you can see that basically two companies are doing, or at least in the past did, most of Nova. By the way, this is Rackspace, and this is IBM. Of course, time passes by, and Nova over its whole history is not the same as Nova over the last year; and the story for the last year is probably more interesting, because it really doesn't matter a lot who was working on Nova three years ago. What matters is who is working on Nova right now. And here, again, you can see that the factor is a bit bigger, so you have like three companies, and the companies are different: the first one is, I'm sorry, but it's IBM; the second one is Red Hat; the third one is NEC. And this is still an interesting story, because you can see how companies are changing over time. Rackspace is no longer among the top three, or even the top four, companies there; you can find them down in the list, and they are pretty low compared to where they were like three years ago. That means that companies are, you know, switching in activity, while the project is still active. In this chart, I'm not going to enter into details, but you can basically see the same thing over time: the activity during the last two years, well, basically the last year, of all of these companies. This is Red Hat, for instance, and you can see how the contribution of Red Hat is changing over time compared to the other companies. From that, you can also infer trends: you can see whether a company is rising or not in its contributions to the project.

Just to compare, this is Neutron. Neutron is, by number of commits, the second most active project in OpenStack, and you can see that for Neutron the diversity is a bit higher: you can count four companies, five, it's around that. And you can see again that the companies are different: the first one is the same, then you have HP, then you have Mirantis, and the other one is, sorry, Bit Treats. You can again look at this in more detail, go to the dashboard and drill down; you can click on everything and learn, for instance, the specific story of a certain company in the project. But the interesting thing here is that this project is a bit more diverse than Nova.

And you can go to Heat. Heat is like two companies. But you can see that not only is it like two companies; it's like four companies for 75% of the code, and like five companies in total for 85% of the code. So it's diverse, but not as diverse as the others. And you can also look at the trends, and the trends are interesting by themselves too: you can think about how this project is going to look next year in terms of diversity, if the current trends don't change. Of course, a company can change this at any moment, because they can basically push more resources into the project.
But it doesn't happen from one day to the next, because in OpenStack all of these are approved commits; some other reviewers were looking at them. So it's not a matter of "I add more committers, I start contributing more"; it's a bit more complex than that. So the trends here are really interesting. And just to compare, Cinder. Cinder is one of the most diverse projects in OpenStack right now, and you can see how it has a factor of around six or seven, with a lot of companies in the long tail, and the evolution is also, let's say, very healthy: companies aren't really leaving, and the project seems to be working pretty well. By the way, remember that this is companies, not people. In some cases things change just because people move from one company to another, but the people are the same. So if you look at the same thing for people, in many cases it's surprising how different it is, because some of the projects have a stable team of people, and that team has moved from company to company over time.

And now let's move to the last thing I'm going to tell you about, which is trying to understand the review process in OpenStack. Again, please go to the dashboard; I only have five minutes to try to show you this, and it's a bit complex, but quite interesting from my point of view. Remember that code review is very important, because for all the companies doing continuous deployment, code review sits basically in the path to continuous deployment. I mean: every time you have something to do, you propose it as a feature request or a bug report; at some point it gets implemented, and when it gets implemented, it's submitted to the code review process. And there the timer starts to count, because the company or the person is usually very interested in getting that patch into the real thing, in deploying it. So the shorter the code review process, the better.

In addition, we also have a measure of effort, which is whether you have to submit a lot of versions of the same patch. That's interesting because the patch is probably getting better and better, but in the end it's a lot of effort, both for the developer, who has to produce new versions of the patch, and for the reviewers, who have to review a lot of new versions of the patch too. The ideal situation would be that all the patches are so good that they can just be approved and go directly into the code base. That would be nice, but obviously it doesn't happen, and that's why we have code review. But in the end, the closer to one, the better; if that number grows, it means a lot of effort went into making the patch better, effort by the developer and effort by the code reviewers.

So this is the first chart, for all the history of OpenStack, and there are basically three states. I don't have my glasses now, but this is "new"; new means still in process, so these are active reviews. This is "merged", reviews that ended up in the code base. And this is "abandoned". And this is the first interesting result: 20% of all the reviews end up abandoned. So in some sense, that's wasted effort. Of course, not always: many of them are abandoned and then there is a new one that basically continues them, and so on. But basically, some people invested a lot of effort in this area here, and that code is never going to be merged. This is the evolution over time of all of these parameters: this is time open, and this is the number of patches.
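The charts that follow are, underneath, simple percentile computations over the set of reviews, split by their final state. Here is a minimal sketch; the record fields and the sample values are assumptions for illustration, not real OpenStack data and not the dashboard's actual code.

```python
# Sketch of the review metrics: percentiles of days open and of patchset
# iterations, per review state (new / merged / abandoned).
def percentile(values, pct):
    ordered = sorted(values)                 # nearest-rank, rounded down
    return ordered[int(pct / 100 * (len(ordered) - 1))]

reviews = [   # made-up sample records
    {"state": "merged",    "days_open": 1,   "iterations": 2},
    {"state": "merged",    "days_open": 4,   "iterations": 5},
    {"state": "abandoned", "days_open": 40,  "iterations": 12},
    {"state": "new",       "days_open": 200, "iterations": 25},
]

for state in ("merged", "abandoned", "new"):
    days = [r["days_open"] for r in reviews if r["state"] == state]
    iters = [r["iterations"] for r in reviews if r["state"] == state]
    for pct in (50, 95, 99):
        print(f"{state}: p{pct} = {percentile(days, pct)} days, "
              f"{percentile(iters, pct)} iterations")
```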
In these charts, the top row is the number of patches and the bottom row is time open. Time open is in days. So, first very interesting result: the median, I mean, for 50% of the contributions, is one day or less. We are rounding down to the floor here, so in fact it can be up to two days. But that's very interesting, because it means that half of the contributions to OpenStack get landed into the code, or get abandoned, in two days, which is pretty good by the standards of most projects. Well, you can see how this grows if you go to the 75%, to the 95%, or to the 99%: there are some patches that take like 200 days or more, but those are 1% of the patches. I mean, 1% of the proposed changes.

And this is the number of iterations, the number of patch versions that you have to submit. Again, half of them, half of all the reviews, needed only two patch versions, which is pretty good, because it means that for half of the code reviews the reviewers only had to review twice, and the second version was a good one. So that's nice. Again, if you look at the 75% you have four; if you look at the 95% it's 12; and you have 1% which are more than 25 versions, or patches, which is, well, interesting. You can look at the history over time, and this downward trend doesn't mean that the project is getting better and better; it's just that the recent reviews haven't had time to get longer yet. Okay. And something important: these are the numbers for the 99% and the 95%, and the 50% is so low that it's down here. So here we are basically looking at the outliers, right? And the same here: this is basically the outliers, beyond the 95%, and the real thing, most of the interesting stuff, happens here, which is pretty low.

We can do the same specifically for abandoned reviews. I'm not going to enter into all the details, because I'm running short of time, but basically you can run the same analysis, and for abandoned reviews an interesting thing is that in some cases they got abandoned after 22 iterations. So it would be nice if both the developer and the reviewers noticed before that point that the review should be killed, because it's not going anywhere. With time, in some cases this is 280 days. And you can do the same analysis for those merged, and here you can see that some are merged after 25 iterations. So if your review is getting like 20 iterations, you still have some chance of being approved and getting in. And you can look at the days: for the 99%, I believe, for 1% of the contributions it took more than 43 days to merge the contribution.

Well, then you can look at the backlog. This is the current backlog, as of yesterday or so, and you can see how old the reviews are. This is per month: all of these reviews are one month old or less, this is the next one, two months, three months, and this is the whole history. So you have some reviews that are like 15 months old and still being considered; I mean, code review hasn't ended for them. And you can look again at the times, though I'm not going to enter into the details. This one is specific to Nova, because all of the projects have quite specific numbers, and yes, we are going to focus on this one: the number of days. Nova is a bit higher than the mean. Remember that the overall number was one, which means basically two days or less, and here it is three, which means four or less. So it's a bit more than the rest of the projects. And you can look at Neutron: Neutron is around two days or less. And you can look at others; this one is like three, for instance.
You can look at all the numbers, by the way, but I'm just focusing on this one. And in the rest of the charts you can also see things like how many of them are abandoned. For instance, Heat is very good at having quick reviews, but a lot of them are abandoned, which is not necessarily good. You can compare that with Nova, for instance, which is pretty similar, but a bit lower on that. Okay. This is for the last year, by the way. And you have part of this information in the reports that we prepare for the OpenStack Foundation, if you want to look there; they are more of a summary of what's happening, so if you have less time, maybe it's just a matter of reading those.

And this is the end, so I'm going to finish; I'm out of time. Final considerations. There are huge differences between the different projects. That doesn't mean that some of them are better than others; it depends on what you need, and we are only talking about how the development process is working. We are saying nothing about the quality of the result, for instance. But in many cases, knowing the community means knowing who I am relying on when I adopt a project. For instance, I can see here that if I have a patch for OpenStack, and the patch is good, it's very likely that in like two days it's going to enter the code base. If I'm doing continuous integration, that's a pretty good number to know. Picture for a moment that it's an important security fix; those are the numbers that we have here. In the end, look at the details, that's very important, and you can use the dashboards to drill down to the level that you may need.

And for the specific case of OpenStack: OpenStack is large and complex, and there is no such thing as "the" OpenStack development. You have a lot of things in common, but you also have differences; some of the projects are doing reviews in twice the time of others, for instance. So there are also a lot of opportunities for learning good practices. Some of the OpenStack projects are doing better, from some point of view, than others, and you can drill down, look at the numbers, and ask: what are these guys doing to be so good at this parameter, and try to learn from them. Of course, every project is different, sorry, has different peculiarities, but they are sort of homogeneous too. So you can learn a lot from a similar project, try to understand what's happening, and try to extend the good practices to the rest of the projects.

And well, a short disclaimer, and that's all. You have the links for all the things, and that's it. We are short of time, but since we are the last talk, if you want, we can go outside and talk if you have more questions. Thank you very much.