Hello, I'm Nicolas Jullien, a researcher at the IMT Atlantique graduate school of engineering in France, and I'm going to present a talk on the artificial regulation of algorithm error. The goal of the presentation is to discuss the problem of managing algorithm error. Most of the time, when people speak about algorithm error, they speak about implementation problems: the data are not good enough, and so on. But in fact it is very hard to correct an implementation, and errors often appear after implementation, because people evolve and there are new things to do. The question we are looking at here is the day-to-day management of an algorithm's activity, and the regulation of the problems that may occur in this day-to-day management. The idea of looking at Wikipedia is that, since the managers and the contributors are also the users, it may be easier for them to recognize that an algorithm is doing something unexpected and to correct it. That is what we want to look at: how Wikipedia implements algorithms and manages them in its day-to-day activity. What we needed to look at, of course, is contribution management, because the core of Wikipedia is managing what people contribute and how they contribute. With the success of Wikipedia, there are too many contributions for the managers and patrollers to control them all, hence the need for algorithms to do so. And with the exposure to vandalism, it was even more necessary to have technical tools that automatically control these contributions. So what we looked at is how Salebot works. Salebot is one of the oldest vandal-fighting bots; it is more than ten, even fifteen years old.
And what we looked at is how people signal problems, or discuss the fact that Salebot did something to them that was not expected or not what they wanted, and how this is handled by Salebot's developer, by the patrollers, and by all the Wikipedians who discuss how it works. So what we did is look at all the discourse about and from Salebot: on Salebot's page, on the Salebot developer's page, and on the Bistro, which is the place within the French Wikipedia where people discuss the rules, the impact of the rules, and how the tools work. The idea was to examine how an IT artifact, Salebot, implements rules to control practices, while the practices may evolve and remake the rules, or people may do things which are allowed by the rules but not by the artifact, and how that works out. The first result is that, yes, Salebot makes errors, because you cannot always rely on the past to forecast the future: people change, rules change, and Salebot has a hard time coping with these changes. So there are problems. And the fact that there are problems is a problem in itself, because when moderation is automated it is very hard for users to understand that they have done something the algorithm does not accept, even when what they did is fair with regard to the rules of the project. And even when users have actually done something wrong, it is very hard for them to understand what was wrong, because the process is too automated and the restrictions of the tools are hard to grasp. And where do you go to complain? That is also hard. Even when everything is handled correctly and explained, the explanation is very hard to understand, and it takes a lot of time and effort.
And even if you know where to go, you have to make an extra effort to make your case and to defend it against Salebot, its creator, and the community, which trusts the algorithm and the tools more than newcomers, who can have a hard time explaining what they have done. So there is an over-confidence in Salebot, which impedes the capacity of the project to evolve. There are two consequences to that. The first one is quite classic in private platforms and algorithm management: automation raises a boundary between the simple contributors, who are just regulated by these tools, and the policy makers, who make the tools and control the others through them, here the community. It makes the boundary much stronger and higher, and it makes it harder for contributors to enter the community. That is one of the consequences, and it comes from the difficulty of detecting, qualifying, and correcting the discrepancies of the tools; it is very hard for people to do so. But at the same time, because it is automated, it is possible to do so: you can learn, you can try and retry without provoking fatigue in the tools, as you would with human users. And automation is needed for the platform to survive; as I explained, there is a lot of vandalism. So far this is not very different from a private platform. What is really different within the Wikipedia framework is that, even if it is hard, you have these spaces of discussion. You have places where you can make your case, look at what happened, explain it, and where the problem is discussed. You have an open feedback loop. So what is very new with Wikipedia is that you have two spaces created or reinforced by the algorithm. The space of production is very controlled, very automated, and very hard for people to understand how it works.
But it protects the space of project management, where you can discuss, and which is still open to everybody. So the first space is quite classic; the second is what is different. Thank you very much for your attention.