Okay, I should really plug it in. Hello. For those who don't know me, my name is Miroslav Suchý, and together with my colleague I will give you an overview of what happened in Copr during the past year. One year is a long period, so you are probably familiar with some of the improvements and changes we made in Copr, but some may be surprises for you. So, let's go.

Before we start, some statistics. In Copr we now have 50,000 projects, whatever that means. One project is really one yum repository; sometimes it's a very good project, sometimes not. Usually not. And we have served 30 TB of packages from our repositories as outgoing traffic; previously we reported 13 TB.

What have we done? We added ppc64le chroots during autumn. It is still somewhat experimental, because the builder does not live in Fedora infrastructure; it runs on a single machine at a technical university, so it's not optimal. But that will change soon, because we now have ppc64le machines in OpenStack, and they will be available very soon, so it will be better.

DNF. We started using DNF in Copr before it was deployed in Koji; the base work was done by Michael Šimáček, who implemented the support in Mock. Nobody noticed any problems when DNF was deployed in Koji, because there were none left: we hit a few problems in Copr and fixed them before DNF was deployed to Koji. I think that is a good role for Copr: to find issues before they hit Koji, where a lot of people would complain. When we have problems in Koji, people usually email us or ping us: "it doesn't work." But in the end DNF didn't break that much.

Dist-git. You may remember that previously you had to give Copr just some address on the internet where you had put your source RPM, and Copr downloaded the source RPM from there.
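The URL-based workflow just described can be sketched with copr-cli. This is only a sketch: the project name and URL are placeholders, and it assumes you already have a Copr API token saved in ~/.config/copr.

```shell
# Submit a build by pointing Copr at a source RPM hosted somewhere public;
# Copr downloads the SRPM itself (the pre-dist-git workflow described above).
copr-cli build myproject https://example.com/downloads/hello-1.0-1.fc24.src.rpm

# --nowait returns immediately instead of blocking until the build finishes.
copr-cli build --nowait myproject https://example.com/downloads/hello-1.0-1.fc24.src.rpm
```

This is not runnable without a Copr account and project; it only illustrates the shape of the command.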
For several months now we have had dist-git, so you can upload a source RPM from your workstation, it is stored in dist-git, and the build is done from that dist-git. It's not completely open for writing to anyone yet. Uploads go through our frontend, which is configured with a single user that everyone effectively uses anonymously. It's not open to more users for writing because we are still working off some performance problems: we have 150,000 git repositories, and some projects cannot cope with such huge numbers. For example, we had a performance problem with cgit, the web interface for dist-git, which was not able to handle that number of repositories; just rendering the index took something like 30 minutes, and even the cache couldn't prevent it from causing us a lot of problems.

From the start of Copr there were access controls, so you could add somebody else to your project and give them permission to build in it, or even administer it. But there was a problem: the owner's name was still visible in the name of the project. So if you had a project like owner/project, the owner stayed in the project name, and people didn't like that and asked us for proper groups. So we did that; you can now have groups. They are tied to FAS groups, because we said: let's not write another account system when we already have one, the Fedora Account System. If you have a group in FAS, you can enable that group in Copr and give it some other name there. So if the group in FAS is named, say, gitmock, you can name it just mock in Copr, which looks better in the output. If you want to use that feature, there is one catch: FAS cannot create a group on the fly. There is no self-service "create a group"; it is still ticket-driven, so you have to file a ticket with Fedora Infrastructure and ask them: please create a group, I need it for Copr. They will do it.
It takes several hours to a day to create it, so it's not a real-time thing. But I expect that once more people use this, somebody in Infrastructure will create such a button and automate it.

Webhooks. We support webhooks right now only against GitHub. You go to the settings: there is a Settings tab in the Copr project, and under it a Webhooks tab which gives you a URL on Copr. You copy and paste that into the GitHub settings. GitHub then notifies the Copr server after each commit, and Copr can trigger a build of the package. Yeah, question?

Q: I saw this feature but didn't try it yet. Does it require the sources and the spec file to be in the same repository?

Right now it only builds from GitHub if your spec file is in the upstream repository, i.e., if the package is buildable using mock-scm. And with mock-scm, I have to tell you, there is right now no way to automatically bump your release or version. If you go to the Mock homepage, there is a page on SCM integration which tells you how to run mock-scm against a git repository. If you succeed in building your package with mock-scm, then it will work in Copr. It's not optimal; I see a lot of space for improvement, but that has to happen in mock-scm, which will eventually end up on me, since I'm currently maintaining Mock. But no, it's not in Copr itself.

And the biggest work we are doing right now: we are building language-native modules, starting with Python modules. Look at it this way: there exist 70,000 Python modules on PyPI, but we have only 2,000 to 3,000 Python modules in Fedora. So people install those modules using "pip install <name>", and that has problems: you cannot verify a signature, you cannot cleanly remove it — all the problems that packages already solve. So: let's rebuild the whole of PyPI as RPM packages. There exists the tool pyp2rpm, and we created all those source RPMs using pyp2rpm. Yes, there were a lot of problems.
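For a feel of the pyp2rpm step, a hedged sketch follows; the module name is just an example, and the exact flags may differ between pyp2rpm versions, so check pyp2rpm --help.

```shell
# Generate a spec file for a PyPI module; pyp2rpm downloads the sdist
# from PyPI and prints the generated spec to stdout.
pyp2rpm six > python-six.spec

# Some versions can also produce a source RPM directly (flag is an
# assumption about your version; verify with pyp2rpm --help):
pyp2rpm six --srpm
```

This needs network access to PyPI, so it is shown here only as a usage sketch.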
But we are reporting them to pyp2rpm upstream, and the maintainer is working on fixing them. The biggest recent improvement was that previously pyp2rpm tried to parse setup.py as an AST, which often cannot work, because a lot of maintainers write things like: version equals open some file, read the contents, and take the first line, or something like that. That is not resolvable from the AST; it must be evaluated. So a contributor added code which evaluates setup.py, and we get the variables as Python itself evaluates them.

So, there are over 70,000 modules. Right now we build only for Python 2, with Python 3 as separate modules, and we have about a 22% success rate, which means roughly 15,000 Python packages. That's quite a lot. But honestly, we don't know whether any given one actually works. It builds; it may work or not. You might try it, and if it works for you, good. Or somebody from the community can step in and try to test all those 15,000 packages. It's a playground for you, and you can do something with that — I don't know what. I'm just providing the packages. Question?

Q: What is the target release of those Python packages?

The Python packages were built just for Rawhide, because I was trying to get statistics on which packages we are able to build for Python 2 and for Python 3. Then I want to build the mixture in a final project, because there will be a combination of the two; for that I'm waiting on a few improvements in pyp2rpm, which I have reported.

Q: How do you handle interdependencies between those Python packages?

I don't handle them at all. If a build fails because of a dependency, it fails. I'm just submitting the packages in alphabetical order, which is of course not optimal. And yes — I'll talk about it later — there is a project in the DNF team, and they plan to provide an API in DNF for ordering packages by their build requirements.
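To make the ordering idea concrete: this is not the planned DNF API, just a toy illustration with made-up package names and edges, using coreutils' tsort to turn "A must be built before B" pairs into one valid build order.

```shell
# Each line is an edge "prerequisite dependent": the left package has to be
# built before the right one. tsort prints one valid topological build order.
tsort <<'EOF'
python-setuptools python-six
python-six python-dateutil
python-setuptools python-pytz
EOF
# python-setuptools is guaranteed to come first: nothing precedes it.
```

Alphabetical submission ignores these edges, which is exactly why dependent builds currently fail and have to be resubmitted.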
So, there are some APIs and command-line tools coming; we'll talk about that later. And yes — the setup.py parsing I mentioned, and the Python 2 / Python 3 mixture, were the biggest problems. Sometimes a module supports Python 3 but the setup.py doesn't say so cleanly; that was the most common problem.

In the meantime, we started with RubyGems. This time it was much easier; I didn't anticipate that. There are not too many problems: we have even a 60% success rate, which right now gives us 28,000 packages. That was a week ago; by now it will be even more. And this time it's for Fedora Rawhide, 24, and 23. So that's a lot of packages. The builds are submitted, there is a queue, and still some 80,000 builds are waiting to be done. The whole rebuild has taken about one month so far, so it's quite a lot of time. We are trying to focus on speed improvements, but even if the build of one gem lasts one minute — which is super fast, counting the buildroot preparation — if you multiply one minute by more than 100,000 gems and then by the number of Fedora chroots, it's a huge number.

We had some funny moments along the way. My favorite one: there is one gem which never finished building and just hung. I don't know what it actually does — it looks like somebody's test case, so presumably it serves some purpose — but it blocked our queue for a few days.

The biggest issue, though, was licensing, because a lot of gem maintainers don't provide any license information for their gem.
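A license audit of that kind could be sketched like this. The gem data below is entirely made up for illustration; the real metadata would have to come from rubygems.org, and the tab-separated format is just an assumption.

```shell
# Build a "gem<TAB>declared-license" table (toy data) and list the gems
# with no declared license, i.e. the ones that cannot be redistributed.
printf '%s\t%s\n' \
  rails MIT \
  nokogiri MIT \
  mystery-gem '' \
  rake MIT > gems.tsv

awk -F'\t' '$2 == "" { print $1 }' gems.tsv
# -> mystery-gem
```

The same filter could be combined with a reverse-dependency check to keep only unlicensed gems that something important actually depends on.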
RubyGems.org's policy is: if a maintainer doesn't provide license information, it defaults to all rights reserved, which means no redistribution, no free use. I know that in most cases the author just didn't care about the license and would happily license it under whatever; but formally we have no license, and it's a huge number of gems. We have roughly a 60% build success rate, but approximately half of those gems don't declare a license, so we can actually ship far fewer than we can build. It's so many gems that I can't do anything about it by hand — I can't contact tens of thousands of maintainers. But I thought about it, and maybe RubyGems.org could send a mass mail to all of those authors asking them to please declare a license; we'll see if we can arrange something. Another heuristic could be to check whether a gem is a dependency of any important gem — or of any other gem at all — so you filter out the gems whose authors didn't even care enough to add a license.

Q: You didn't have license issues with Python?

No. There may be some packages with such problems, but the problem is not as huge as with RubyGems — probably just a few of them. Python maintainers are pretty good with licenses.

Okay, and the last slide for me: forks. This is a recent screenshot. There is a button at the bottom, Fork, which enables you to fork your project, or some other project, into your namespace or a group namespace. The intended use is this: you have a project with nightly builds, after each commit say — call it myproject-nightly. Then suddenly you decide that you want to do a release. So you fork myproject-nightly into myproject-stable; it will copy the latest successful builds into the new project, myproject-stable, and re-sign them with the new project's GPG key.
From that moment on, you continue with myproject-nightly as before, while you try to stabilize myproject-stable: you put only fixes there. At some point you say: from this point it's stable, and I'm releasing it to the community — use this project, this DNF repository, as it contains the stable version of my project.

And now I will hand over to Michal, who will show you how the fork actually works.

[Demo] You go to the project page; the button is at the bottom of the page, so it has to be loaded, otherwise it's not shown. So you open the project and click Fork. It asks where to fork — into your own namespace, or into a group you are a member of — and under what name. You confirm, and it says the fork is being prepared. It is asynchronous now, which may take some time, even a few minutes or dozens of minutes, but then it's ready for use.

Q: In this case it looks like the destination project already existed. Does that work in production?

So the question was: what happens if the target project already exists? If it already exists, the fork will just add the copied RPMs to it. If the project doesn't exist, it is created and the packages are copied into it. And the copied builds in the destination behave like real builds, so they can be deleted and removed individually if only some of them are wanted there.
So all the builds, like this one, are copied there and re-signed.

Q: Okay, so I can still fork into an existing project?

Yes. Exactly.

Q: I mean, right now I have a stable repo, but I'd like to be able to do a test build, test against it, and if it looks good, move it over.

Yes, that should work. For example, you can have myproject-nightly; myproject-stable, which is what users consume; and myproject-staging. You fork from nightly to staging just those few packages where you fixed some problems, and test there until it's stable.

Q: That's great, thank you. Just to be certain: it's not going to rebuild them again, just re-sign?

Yes.

Q: Are you just copying the RPMs, or also the build records?

clime tells me that right now we even copy the build records, so you get the entries in the Builds tab.

Q: And what happens with dist-git? What happens with the git commits and so on?

Those entries are not recreated; the history written earlier in dist-git is not copied into the dist-git of the new project. Since it's not very visible to the user, we didn't worry about it too much.

Q: Maybe the name "Fork" is a bit misleading. Maybe you could come up with a better one?

Yeah, I don't have one; people are used to "fork" from GitHub and the like. So yes, we are open to ideas. Somebody came and requested this feature, and we want it to be easy to use, so we didn't worry too much about the exact relation between dist-git and the visible builds. If it works for the user, it's enough for us. We are aiming for easy to use.

Q: Are you actually aiming for your dist-git, at some point in the future, to be usable as, I don't know, something comparable to Fedora dist-git?

You can use it right now.
But you can only read from the repositories.

Q: And do you think at some point in the future we'll be able to write?

We'll get back to that. First we have to check whether the dist-git stack is capable of handling so many users and so many git repositories, and I'm not sure it will work for us. Maybe we can try it here and treat it as a stress test.

Q: That would be nice. We already run dist-git in our infra, so we are using it for packages as well. You probably have more packages, but it's still a lot of packages, I guess.

We have a lot of ideas and a lot of things to do — what we lack is human power.

Q: One more question, out of curiosity: if I fork the PyPI project, will Copr spend months re-signing all those packages?

No, no. Nothing is rebuilt; it's just a copy — we'll just copy a few gigabytes. But it would keep the signing busy for a day or two or three, not months.

[clime] So, my name is Clime. Hello. And I would like to show you some cool things that you can do with the Copr CLI interface. The first thing I will do is create a new project. Okay. Now, recently we had many projects on the home page that were automatically generated — RPM software collections or something like that — and they were crowding out the home page. So we decided to implement a feature that lets you hide such projects from the home page. It was actually a feature request — thank you for that — and we implemented it. It works like this: right now this Copr project, clime/mycopr, is visible. I will hide it; now it should be gone from the home page. And now I can make it visible again. Right now this option is not used by an overwhelming number of people, but I hope it will find its users. So, where would you use this?
Say you are using your project for continuous integration. For example, one team using Copr creates a new project for every pull request, and so the home page ended up full of projects that nobody outside that team cared about. Now those can be hidden from the home page. This is one of the features we implemented recently.

Another issue we had was that when we started these massive rebuilds of PyPI and all the gems, they completely starved our other builds: the normal builds in the queue didn't get their turn at all. So we needed to figure something out to fix this, and what we did was introduce priorities in a very simple fashion. We added a copr-cli flag: --background. This marks the build as a background job, which means it will be preempted by a normal job whenever one arrives in the queue. Normal jobs have precedence in Copr; a background job is built only when there is no normal job waiting. I'll make a small demonstration; I hope it won't take too much time.

Q: What's the difference between copr and copr-cli?

No difference. It's one tool; one is just an alias.

Okay, so I will make a gem build: submit a gem into my Copr as a background job, and I don't want my shell to be blocked, so it runs with no-wait. And at the same time, but after the gem job, I will send a normal job.

Q: Is there any user limiting — for example, so a user can't spam Copr?

Yes. For the build machines there is a limit: only a certain number of machines, for example eight, can be allocated to one user. Beyond that, the builds still go into the queue.

Q: But can one user still block other users?
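The --background flag from the demo looks like this on the command line; the project name and URLs are placeholders, and a Copr account is assumed, so this is a usage sketch rather than something runnable here.

```shell
# Submit a low-priority build; --background parks it behind all normal jobs,
# --nowait returns immediately instead of blocking the shell.
copr-cli build --background --nowait myproject \
    https://example.com/mass-rebuild-1.0-1.src.rpm

# Submit a normal-priority build afterwards; it is picked up first, because
# normal jobs always preempt background jobs in the queue.
copr-cli build --nowait myproject https://example.com/urgent-1.0-1.src.rpm
```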
Well, we have about 25 builders in use right now, and only six builders can be allocated to one user. So even if you ask for many builds, the remaining builders take builds from other users.

So, what happens now? First the gem build was created, and after it the normal job — the build of this demo package. Build ID 33 is the normal build, and it was submitted after the gem build. So the gem build was first into the queue, but it will be built second. And that's the rule: you can see that the normal build is running while the gem build is still waiting; it will be built afterwards.

All right, and the very last thing I want to show you is our package interface. You can create and edit package definitions directly from the command line. For example, I will create a little package; the name will be "example". (Microphone, sorry.) Now, why is this good? Because you can then run the build-package command, and it will use the definition of the package that you stored when you created it, so you don't need to specify it again and again. If your package needs many arguments and parameters to define, this is useful: you don't have to repeat them every time you submit a new build with this command. Okay, so, any questions?

Q: Is this stored on the server or in a file?

The package definitions are created on the server, yes.

Q: Do you expect users to use the background priority voluntarily? That would be nice.

I don't know. Right now we use it ourselves for the PyPI and RubyGems rebuilds — you can submit a package with background priority and get the result within, say, a month. But later, when we finish those huge rebuilds, we expect it to be used for continuous integration. If you have continuous integration and you don't care whether your build finishes in half an hour or an hour, use background priority, and normal users will thank you.
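The stored package definitions from the demo can be sketched as follows; the project name, package name, and git URL are placeholders, and the exact subcommand names may differ between copr-cli versions, so treat this as a shape, not gospel.

```shell
# Store a package definition on the server once, including how to build it
# (here: a tito-managed git repository).
copr-cli add-package-tito myproject --name example \
    --git-url https://github.com/user/example.git

# Later builds reuse the stored definition, so none of the parameters
# have to be repeated.
copr-cli build-package myproject --name example
```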
Q: So, background is just a boolean, or can a build be more or less background?

It's just a boolean. We don't have finer-grained priorities.

Q: Any demo of the background thing? Or does the server react immediately — no, it waits a couple of seconds?

Yeah. Our backend polls for new builds, and it does it every 30 seconds. I was hoping the background build wouldn't be picked first. Could that actually happen? Yes, it could. Okay. And you can see that the packages were created on our server. So, that's all for me. Thank you.

Q: How do I find out that the build is done?

There is a build status. And if you submit builds from the command line, you can use "copr-cli watch-build" with the ID of the build; it will wait for the build to finish and display its status at the end. It's similar to the Koji command, so you get that message. You can also use notifications and set up emails or whatever. And there is the fedwatch command, which can execute anything when you receive the message for a successful build.

So, what can happen in the future? We have rebuilt PyPI and we are somewhere in the middle of RubyGems — so why not other language ecosystems as well? Very likely to happen. As mentioned in the intro, we are in the process of redesigning the queue logic to better utilize our builders and build faster. Igor Gnatenko is initiating a project for common RPM packaging and creation tooling, because right now there is tito and a number of other tools which somehow build a package from a git repository, and every project creates the RPM in some different way. We have had the rpmdevtools package for a long time — rpmdev-bumpspec, changelog-entry helpers, and so on — but Igor is aiming at a common tool better suited to how we work these days.
So tools can be consolidated, and maybe we will converge on one tool, or just a few of them, that are commonly used. I mentioned that the DNF team is working on package ordering, which will help with bootstrapping: if you bootstrap something big like KDE on a new chroot, there are hundreds of packages and you have to build them in a specific order, otherwise builds fail and you have to resubmit. That will help us there as well.

And there is, of course, the big topic of modules, which was mentioned today; there will be dedicated talks later this week. I encourage you to visit them so you know what modules are, because Copr will likely be able to build them as well — it is already somewhat able to, in our development environment.

I'm also trying to focus more on Mock with systemd-nspawn. As part of a Google Summer of Code project we have had, for more than a year now, work that enables Mock to build inside a systemd-nspawn container. There are still some bugs which prevent us from deploying it, but once they are fixed we can probably quite easily switch to other container technologies, like Docker, so your builds can run in containers.

And generally, I'm open to any idea related to continuous integration or building from upstream. If you have an idea which may help you or somebody else, we are open to it. That is mostly the easiest way: Copr development is driven by you guys submitting feature requests and interesting questions. We hang out in #fedora-buildsys on IRC, so if you want to chat with us, IRC is probably best; if you have some longer idea, email on the mailing list is better. You may open a feature request in the Copr component in Bugzilla and we are ready to work on it, or you can discuss it with us here and we will talk about your ideas.
Q: Another thing — this is a bit involved. Copr is great, I love Copr, it's awesome. But I primarily build kernels, which takes a couple of hours, and I've noticed a kind of disturbing trend in the past few months where it's been hard to rely on things: builds time out, builders run out of disk space, there are problems setting up the mock environment, and sometimes I have to resubmit three or four times. It seems like Copr tries to build something, the build gets kicked out on a timeout, and it restarts again; with a build that takes six or seven hours, I basically lose half a day to a day when I have to resubmit. Is that something you guys are aware of, and can we hope it will get better?

The question was that Copr sucks recently and has a lot of issues. Yes, we are aware of that, and we are trying to fix it as hard as we can. At the same time, we were the ones breaking it, because we were putting those new features into production, and then: okay, this sucks, and that got broken. Right now we don't have any disruptive features planned, because most of those problems came from the new features, which caused some services to time out. We are going to change the queue logic, which may bring some regressions, but really only some delay, nothing worse. We are not breaking it just for fun. I hope that from now on there will be a season of a more stable Copr; that's my wish as well.

Q: It's still — I still love it. So thank you guys very much. I have a question: do you have an example project that uses Copr to build RPMs automatically? For example, push a tag to a git repo and it builds?

Look at the webhooks of the copr/copr project, or copr/copr-dev. There are packages configured against GitHub, and any time we push anything to GitHub, it triggers a new build in our Copr project. In other words, we build Copr in Copr, and install
Copr from itself.

As another example, our RPM team has set it up so that they have their projects on GitHub as well, and any time they push, a job runs which creates a project in Copr, submits the package into Copr, fetches the result, and submits it onward for testing — so they verify whether the build works or not.

Q: Okay, so there is no built-in support for that — for commit statuses — so you have to do it yourself?

We are working on it. The main problem is how to build your source RPM, because right now the best way is to use tito for building; that's the only reasonable way which is able to bump a release really quickly. But I have persuaded only a few projects to use tito, and not everyone is a fan of that project. So if you help me find a way to build the source RPM from GitHub that is reasonable not just for you but for at least two or three other people, then we may put that chain into Copr. But right now everyone does it differently: it's not even "make rpm" — it's, okay, I have a shell script; I have "make rpm"; I don't have a Makefile but I have I-don't-know-what. Everyone has a different way. Okay, thanks. So: can you figure out how to build an RPM from any GitHub repository whatsoever?

Q: Is it possible to only build on tags, or report back to GitHub, or something like that? Is that something you plan to integrate more? Because now you have a webhook, you just paste it somewhere, and then every commit triggers a build, and that's it, right?

Yeah. It actually builds on every commit; if you just push a tag without new commits, I guess it will rebuild the package and nothing changes. So you can build the latest tag, but not only-on-tags. And I am working on a feature where you have badges which you can paste into GitHub
— into your README: a nice "build succeeded" or "build unknown" badge.

Q: And I was asking whether that is in production or not.

I don't know offhand, but we have it ready.

Q: Is there any progress with regard to moving Copr into Fedora infrastructure — making it at least somewhat more official?

No, unfortunately, no progress. I have had other priorities recently, which come from where I get my wages. I will have to focus on it more; it will probably happen in the near future. It will mean that we replace the current Copr with a new service that runs inside Fedora infrastructure under the old name — the Fedora Copr project — which will look the same, but on different hardware.

It seems we are running out of time, so that's everything. If you have any questions, ask any guy with this t-shirt and we will try to answer. And now let's stop the recording.