Okay, thank you. Hello, my name is Michael, nice to meet you, and I would like to talk about a project I'm working on. It's called Copr. Some of my colleagues are here as well, so I might invite them in the middle of the talk to speak about some of the things we will see. In particular, I would like to cover what's new in Copr — the features we have recently introduced. Some of them might be interesting for you, so I hope that will be the case; maybe not.

Who doesn't know Copr here? Okay, so just to give basic context for this talk, I will briefly introduce it, even though most of you probably know something about it or have heard of it. Copr is an RPM build system. It's basically a tool that takes the source code you are working on and turns it into a yum repository. So if you are a developer making some new cool application, this is something you can use to easily get your work distributed to your end users. That is the main idea: to make yum repositories out of people's source code.

Okay, so we have this platform, this tool that is able to do this task, but what is actually our mission? What do we want to do with this platform? Where do we want to go? It can be summarized like this — these are the developers on the platform in the diagram. Our goal is to make this platform as stable as possible so that people can actually rely on it. If you are a software developer, you want your end users to be happy, and you want them to always get the latest software you are producing, because it may contain some bugfix — and if it is a security fix, or any fix that should reach users fast, the platform needs to be reliable.

We also want to make it easy to use at the same time. Ideally, the middle piece in this diagram should not be visible to users at all. The ideal case would be that you are just editing lines of code and, maybe without you even knowing, RPMs are produced that end users install and use. That is the ideal scenario: you don't need to set up anything. It's a long way to go, and maybe we won't be able to achieve it completely, but I think it's a good goal to have in mind.

And finally, we would also like to be attractive. We would like to attract new people to Copr, because Copr actually stands for "community projects", and we would like to make the community around the RPM ecosystem bigger and maybe happier. So we need to make sure that all the possible use cases people might have — and there are tons of them — are supported by Copr, so that they can use our project rather than some other project that has more features or is more stable and so on.

I would also like to mention that by developers I don't mean just upstream developers. We are also focusing on Fedora developers, namely package maintainers. And I think we can bring something into the game with this, because it's not obvious that one system can support both of these groups — their use cases are usually quite different. A package maintainer usually works with dist-git, which means a git repository with patches, spec files and tarballs, whereas upstream developers usually work with raw upstream sources: .c files, .py files, .rb files and so on.
So it's interesting that Copr actually aims to support both of these groups. Now, I mentioned that we would like to be attractive to new users, so to step back a little bit: for that, we have introduced some new build methods into Copr. Traditionally — and even nowadays it is the most used method — you upload SRPMs into Copr. You have some sources checked out locally, you build an SRPM, and you upload it to Copr to build RPMs from it. This is fine, and it is still the most used workflow, as I can see every day in Copr, but it has some disadvantages and it doesn't allow certain things. Namely, you need to build the SRPM yourself — some manual work you have to do in addition — and you need to upload it, and an SRPM can be huge, like 200 megabytes. It also doesn't allow continuous integration, because developers very often want the results of their builds reported back to their source forge, and if you need to manually build an SRPM in the middle, that use case is not possible.

So to really make sure we can support as wide a range of use cases as possible, we have introduced the following three methods: make_srpm, custom, and rpkg. I would like to introduce them briefly. Actually, there is a cool Copr project maintained by the upstream Project Atomic folks: it contains the latest Project Atomic packages, built straight from their GitHub project pages. You can check it out, and I will demonstrate the make_srpm and custom methods on this project, because they have a really nice setup — I don't think I could make a better example than this.

I will start with make_srpm. If you want to use this method in Copr, you basically just need to specify the clone URL — here it points to the Project Atomic GitHub page. You may want to specify a branch; we call the field "committish" because it can be any reference, a tag for example, but usually it is a branch. And if you want automatic builds on new pushes, you just check the auto-rebuild option and choose make_srpm as the source RPM build method.

But that is not everything you need to do. You also need to provide a Makefile in the git repository that gets built by the make_srpm method. This Makefile is expected to be located in the hidden .copr directory at the top level of your git repository, and it should contain an "srpm" target. When you make a new build with this method, Copr clones the remote repository and invokes this srpm target in the Makefile, so that an SRPM is produced just as it would be if you ran it locally. From that point on, once the SRPM exists, Copr can do what it usually does with, say, an uploaded source RPM: the SRPM gets built into RPMs, and createrepo_c is invoked to create the resulting yum repository. So the srpm target should simply produce a source RPM into the output directory.
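To make this concrete, here is a minimal sketch of what such a .copr/Makefile could look like. This is an assumption-laden illustration, not the Project Atomic Makefile: the package name "myapp" and its spec file are made up, and it assumes Copr passes the expected SRPM location in the outdir variable.

```makefile
# A minimal sketch of a .copr/Makefile (recipe lines are tab-indented).
# "myapp" and myapp.spec are illustrative; "outdir" is assumed to be
# the variable in which Copr passes the expected SRPM location.
srpm:
	dnf -y install git rpm-build                      # we are root in the mock chroot
	git archive --format=tar.gz --prefix=myapp-1.0/ \
		--output=myapp-1.0.tar.gz HEAD            # pack the sources into a tarball
	rpmbuild -bs myapp.spec \
		--define "_sourcedir $(CURDIR)" \
		--define "_srcrpmdir $(outdir)"           # put the .src.rpm where Copr looks for it
```

Because the same target runs remotely and locally, you can test it with plain make before pushing.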
In the Project Atomic Makefile, we can see that before the SRPM is built, there is an invocation of a prepare.sh script, so let's look at what it does — it's quite interesting. You can see that from this prepare.sh script you can actually install stuff, which many people find unexpected. You have root privileges there: the script runs in a mock chroot, as a root user with stripped-down capabilities. So it's safe — we stripped the capabilities down to the bare minimum you need to actually build something — but you can still install packages. Here, if git is not present, it gets installed. This is because somebody also wants to run the script locally, where git is already installed, so it doesn't need to call DNF again.

Then there are some substitutions. You can notice that those substitutions are done on podman.spec.in, which is presumably a spec file template. Basically, tags like #commit, #shortcommit, and #commitdate are placed in that spec file template, and they get substituted here with values computed from the git history. So why would anyone want to do this — these strange substitutions that modify the spec file just before the final SRPM is built? The reason is that with this kind of substitution, you can make your RPMs follow the git history of your project. The names of the produced RPMs will contain, for example, the git hash of the commit that the source archive comes from, so from the name of the final RPM you can immediately recognize which commit it was built from. This is very useful for debugging: if there is a bug in the produced RPM, you immediately know where to look in the code base. After the substitutions, git archive is called to pack the content of the repository into a tarball, and the tarball and the spec file are then used to build the SRPM.
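The shape of such a prepare script might be roughly the following — a hedged sketch, assuming a template called myapp.spec.in and #...#-style placeholder tags; the real Project Atomic script differs in its details:

```sh
#!/bin/sh
# A hedged sketch of the kind of prepare script described above.
# The template name (myapp.spec.in) and the #...# tag spelling are
# illustrative, not copied from the actual repository.
set -e

# Install git only if it is missing, so the script also runs locally.
command -v git >/dev/null 2>&1 || dnf -y install git

COMMIT=$(git rev-parse HEAD)
SHORTCOMMIT=$(git rev-parse --short HEAD)
COMMITDATE=$(git log -1 --format=%cd --date=format:%Y%m%d)

# Fill the spec file template with values computed from git history,
# so the resulting RPM names carry the commit they were built from.
sed -e "s/#COMMIT#/$COMMIT/g" \
    -e "s/#SHORTCOMMIT#/$SHORTCOMMIT/g" \
    -e "s/#COMMITDATE#/$COMMITDATE/g" \
    myapp.spec.in > myapp.spec

# Pack the repository content into the tarball referenced by the spec.
git archive --format=tar.gz --prefix="myapp-$SHORTCOMMIT/" \
    --output="myapp-$SHORTCOMMIT.tar.gz" HEAD
```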
So that was the make_srpm method. Now the custom method. The custom method is very similar to make_srpm; the main difference is that the script is not placed in the remote git repository but is stored directly in the Copr database. This has some advantages: if you are, for example, a package maintainer and you care about upstream not breaking your package, you can set up a custom package in Copr and then just ask upstream to add a webhook to their GitHub settings that will trigger builds of this package in Copr. So you basically just ask upstream, "please add this webhook", and that's it — you take care of all the other stuff needed to actually build a working package from the upstream sources. You don't need to ask them to put some strange .copr Makefile somewhere; you just need the webhook, which is quite nice.

It also has some extra attributes compared to make_srpm. With make_srpm you install stuff into the chroot manually, but here you can even specify which chroot it should be — whether the latest Fedora branch or Fedora Rawhide. You can also provide a list of packages that should be installed before the SRPM is built — that is, the build dependencies of the source RPM — and a result directory where the script is expected to put the SRPM so that Copr can find it afterwards and build RPMs from it. So custom — a nice method.

And rpkg. rpkg again allows you to build source RPMs from remote git or even SVN repositories, but it is much easier to set up, because the only thing the rpkg method needs is a spec file, or a spec file template, in the remote repository — you don't need a script that builds the source RPM. And remember the substitutions in the previous script? rpkg has a built-in solution for this. It has a library of tags that are supported in the spec file template and that it can recognize — for example a "git version" tag, which automatically generates a version string containing the number of commits since the latest tag plus the git hash. So it does those substitutions for you, and you can even define your own macros if you want. You could, for instance, generate build dependencies at source RPM build time, which is a pretty interesting option, I think. Another cool thing is that by default it works with upstream repositories, with unpacked sources, but it can also work with dist-git repositories, with packed sources. You just specify the clone URL, and it doesn't care whether the content is packed or unpacked — it will produce an SRPM either way. It's quite cool that both inputs are supported by the rpkg method.

The settings you can see here are quite numerous. There is the committish parameter, specifying the master branch, but also a subdirectory and a spec file: the name of the spec file and the subdirectory where the rpkg command should be invoked. You actually don't need to specify these parameters; they are optional. If you had a flat git repository with the spec file placed in the top-level directory, you could omit them, because rpkg auto-locates the spec file when it produces the SRPM and can work from there. Also, rpkg is a tool you can install — for example from the Fedora repositories — so if there is something off with the setup, if there is some problem, you can just debug it locally. The previous two methods are a bit more difficult to reproduce locally.
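For illustration, the header of a spec file template for rpkg could look like this — a hedged sketch: the {{{ ... }}} tags are the rpkg template syntax, while the package metadata here is made up:

```spec
# A hedged sketch of an rpkg spec file template header; the metadata
# is illustrative. The {{{ ... }}} tags are rendered by rpkg at
# source RPM build time.
Name:     {{{ git_name }}}
Version:  {{{ git_version }}}
Release:  1%{?dist}
Summary:  Example package built with the rpkg method
License:  MIT
Source0:  {{{ git_pack }}}

%description
Example package whose Version is derived from git history (commits
since the latest tag plus the short commit hash).
```

rpkg renders the template, packs the sources, and builds the SRPM in one step, which is why no extra script is needed here.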
Okay, so we have some new methods that we hope will support a wide range of use cases, but that alone is not enough to be attractive to newcomers. It's cool that we can build in a thousand ways, but it's not the only thing developers want. Another thing they usually want is to get build results reported back to their source forge. For example, when a new pull request comes into their project, they want to see whether the changes are valid and whether the project still builds with them. This can be very useful, because you can fix problems before they are actually merged. So we have focused on this problem. It is one part of CI, of continuous integration — only a part, because ideally we would also like to run some tests afterwards, some integration tests, but we focused on this part first.

We have implemented a CI integration with Pagure. This might seem poor — just Pagure, while there are also GitHub, GitLab, and Bitbucket, and we don't have an integration with those sites. But it's not that bad, because Fedora dist-git uses Pagure, which means you can use this feature with Fedora dist-git. Pagure.io, of course, uses Pagure; Upstream First uses Pagure; and there may be more Pagure instances. Any source forge that uses Pagure is supported just by adding this one feature, which is a great advantage of Pagure — it really covers a wide range of use cases too.

So I would like to show you a demo of this. I will show it on production Copr and the production Fedora dist-git — I'm curious whether it will work. Yes. And I will actually start from scratch; I already have some projects created here. I have this package on Fedora dist-git, and I would like to get it automatically built when new changes arrive, and also to get the build results reported back to the pull requests.

So I will create the project. The name can be arbitrary, but I chose the same name as the package. Then I create a package definition for this dist-git package, which basically describes how the package gets built. I will just copy this, specify the package name, and — okay, I need to check this auto-rebuild option, and rpkg is okay — so I can submit it. At this point, if a new change arrives in the master branch of the package, it will get automatically built in Copr, in this particular project, but it will not be reported back. For that I need to go to the settings, the integrations tab, and enter the URL of the project and an API key for it. This is basically the API setup, so that Copr knows the credentials. Just a moment, please. I will create a new key with just the permission to flag pull requests — so I don't mind if you try to use it for something else, or if you flag my pull requests during the presentation — and I will copy the API key here. All right, we should be set up now.

So let's try a pull request. I will actually use this pull request that already exists and just push a new change to it — it would work the same with a newly filed pull request. I will go to my fork and make some modification. Okay, let's see if the change is visible in the pull request of the main repo. Okay, it's here. Now let's see if something is happening in this Copr. Yeah, maybe — yeah, I didn't notice. It's here, it's building; it's in the importing phase. Here you can see the forked repo, the origin repo for the build, and the reference and hash of the git commit that is being built. And it's nice that from here I can easily get back to the pull request, so you have nice linking between those two things. The Copr build flag is here, simple-koji-ci also got triggered, and from this link I can get back to the build — so you can jump between the two.

There is also this directory thing here: each pull request gets its own repository directory in the project, and this is something I can use to enable the repository for this pull request and install packages from it to test them locally. So I can invoke this command — I think it's pr:2, maybe I'm not correct; PR stands for pull request. Okay, so that doesn't exist. module-macros, okay. All right, I have it enabled, which means I can install packages from this pull request. Ideally I would do this inside a container rather than in my normal environment — it depends on how much you trust the pull request. So I will just look at which packages are there, using the dnf repoquery plugin — it has this nice new switch. Okay, well, I'm not sure what's going on; I probably made some mistake, but the packages should be there — you can see that the module-macros pr:2 yum repository got synchronized successfully and that I can install stuff from it. All right, so this is something you can use while developing.
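Reconstructed from the demo, the local side of that workflow would look roughly like this — a hedged sketch: the owner name is a placeholder, and the exact spelling of the per-pull-request repo identifier is an assumption:

```sh
# Enable the per-pull-request repository ("someowner" is a placeholder).
dnf copr enable someowner/module-macros:pr:2

# List the packages that repo provides; the copr repo id is matched
# with a glob because its exact spelling varies.
dnf repoquery --disablerepo='*' --enablerepo='*module-macros*pr*2*'

# Install and try them out, ideally inside a disposable container.
dnf -y install module-macros
```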
And we have also finally introduced a new API, after many years, and I would like to invite Jakub here to tell you more about it.

Hello everyone, can you hear me? Okay, nice. In the following five minutes I'm going to talk about the new API. I bet the first thought that comes to your head is: why? Why do we need another API version? There are already two of them. Well, neither of them is complete — they each provide some features but also lack some. We could have picked one of them and finished it, but it's not that easy: there are several issues that are impossible to solve without breaking backward compatibility a lot. So we decided it would probably be better to create a new API version and give you enough time to learn it and migrate to it.

So what did we want to achieve? The first API version has been here for years; you seem to like it, and we really like using it. So we wanted to take the good things from the first API version and do the things that don't work differently. Mainly, we wanted to have JSON everywhere, for both GET requests and POST requests, which allows us to easily fix many, many data type issues that the first API version had. There were a lot of other goals, but I just want to show you a demo. I apologize — I don't have a live demo, because I don't like to live my life as dangerously as Michael here, but it will be awesome, I promise.

Okay, nice. Here we have a terminal window with IPython in it, and we'll type some commands. First, we need to import the version 3 client and create a client object from it; we will use the default ~/.config/copr configuration. Let's, for example, try to create a new project. We define some chroots variable — not important — and we create a project: it will be called "flock" and owned by the copr group. Now we have the project created and the result stored in the project variable. What do we know about it? It has attributes, and more attributes, and many, many more attributes — try it and see. So what can we do next? Let's try, for example, submitting a build. Here we have a source RPM package for testing purposes, and we submit it to our project, using the attributes from the project variable and so on. Bam — the build is submitted, and we can see what is going on: it has this ID, and it is importing right now. Awesome, right? If you want to know more, please read my blog post about the new API; there are links to the documentation, many explanations, and everything. Thank you.
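Written out as code, the session Jakub described could look roughly like this — a hedged reconstruction using the copr.v3 Python client; the chroot list and the SRPM path are illustrative, not the values from his demo:

```python
from copr.v3 import Client

# Create a client from the default ~/.config/copr configuration file.
client = Client.create_from_config_file()

# Create a project named "flock", owned by the @copr group.
chroots = ["fedora-rawhide-x86_64"]          # illustrative chroot list
project = client.project_proxy.add(
    ownername="@copr",
    projectname="flock",
    chroots=chroots,
)
print(project.name, project.ownername)       # the result object has many attributes

# Submit a build of a local source RPM into the new project.
build = client.build_proxy.create_from_file(
    ownername=project.ownername,
    projectname=project.name,
    path="/tmp/example-1.0-1.src.rpm",       # illustrative testing SRPM
)
print(build.id, build.state)                 # e.g. "importing" right after submit
```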
Okay, so we have some new build methods, we have the start of continuous integration that we want to keep working on, and the new API. So what's next? Right now we are working on support for multiple Copr instances in the DNF copr plugin, which is interesting because it lets people deploy their own Copr instance anywhere in the world if they want to. Some people might find that useful because, for example, if you want to create your own distribution, Copr might be a tool very suitable for that task. We don't have composes — we just have separate yum repositories for each user and each project — so that part is missing, but otherwise this is something that can already be implemented with a custom solution. And actually, a university in New York started to use copr-rpmbuild, which is our builder package, in their custom solution to make a Fedora remix. It's quite interesting that it's getting used like this, even though it's just copr-rpmbuild and not the whole Copr stack — we need more work to make the whole Copr stack easily deployable.

What we would also like to have are application test suites run automatically after a build, so that a pull request gets not just a build result but also test results. This is nice because you can run some tests in the %check section of a spec file, but if you want integration tests that exercise your package in a larger context — in a larger group of packages that are supposed to work together — then you need tests that do the whole deployment of your packages and check how they work together. This is something we would like to achieve — very soon, I would say.

Right now we have Copr dist-git, but it only serves as a build log of sorts: you can find there the builds that were done in the past, but it's read-only, not writable. People have wanted to interact with it, to actually make changes and do some development there, so we would like to make that possible and open Copr dist-git for public writing. We are also considering building container images, but we are unsure about the implementation at the moment. We are considering some options using Project Atomic tooling, for example Podman or Buildah, but this is still under consideration, I would say. The question also is whether we can handle the storage requirements related to keeping a large number of container images. And that's it. I would like to thank you, and I'll take questions now.

Okay. Yes. Yes, this is something we would like to implement — yes, thank you. The question was whether it is possible for us to implement automatic task triggering after builds, so that users can set up some custom task they want to run after a successful build — whether this is possible to implement in Copr and whether we plan it. Yes, we would like to have this feature, of course, and we are actually thinking about integration with Taskotron. There are more possibilities than just that, but Taskotron is certainly one of the options. Any other questions? Yes. Well, it actually is possible even today, because we emit fedmsg notifications. So you can set up fedwatch — that's the name of the project — which lets you run your own script when some fedmsg arrives. You can trigger it on a Copr notification, so your script gets executed; you can do that today as well. All right, thanks — I didn't realize that, so this is also possible right now.

Any other questions? Neil? All right. Cool. So the question was how difficult it actually is to deploy your own Copr instance, is that right? Right now it is quite difficult, I would say — it is difficult to get the whole stack deployed and working. We are actually thinking about writing Ansible playbooks that would do it for you: you would basically just provide the IPs of the target machines, run the Ansible playbook, the setup would run, and you would have your own Copr instance. That's what we would like to have. And once we have it, we will also use it for our own test suites — instead of virtual machines somewhere, we would deploy local containers and run the test suites against them. That would make us pretty sure the stack actually works if you run the playbooks. So right now it is quite difficult. It is not that difficult, though, to take just some parts of our stack, of our infrastructure.
For example, you can take copr-backend and copr-rpmbuild and use them without the frontend. Maybe it is even easier to take just copr-rpmbuild, as that university in New York has done. But this is something we would like to improve, and we would like to have those kinds of scripts where, no matter what the target machine is — a container, a virtual machine, bare metal — the playbook stays the same. That would be cool.

Yes, Neil? What about building images — live media, disk images and stuff like that? A lot of times people want to produce something that shows off their code, not just the code itself. Right. At the moment, I would say we are not exactly thinking about this, even though we have already been asked whether it is possible with Copr — by the TigerOS team from that university in New York, and from other places too. Right now we are focusing on the development side of things and on quicker distribution of individual projects; we haven't got far enough to think about making an ISO out of it, or something a user can install as a complete set of packages. Does that answer your question? Okay — so maybe in the future.

I will add something to that. Actually, a lot of people are asking this kind of question — whether we can build images, container images, etc. In fact, even some people in Red Hat are pushing for that. But when it comes to the money, suddenly everyone loses interest. So yes, we can do that, but it needs storage, and if you are willing to pay for that storage, contact me and we can do something about it. As of today, everything has stopped at the question of storage and money. All right, so we have no money for that. Yeah, that's something we could try.

Okay. The current system is not very big, I would say; we try to trim it down as much as possible. It consists of copr-frontend, copr-backend, copr-dist-git, copr-rpmbuild, and copr-keygen, which generates the signing keys. Those five packages basically take care of everything. I would say it's a pretty small system, and we would like to keep it that way and even make it more minimal, but we will see about it.

All right, okay. I should repeat the question: the question was how big the current deployment is, what our requirements for the deployment are. Right now, on copr-backend we have around six terabytes of disk space allocated, I would say, and copr-dist-git takes around four terabytes, which is basically the dist-git repos — so around ten terabytes of disk space in total. We are using OpenStack as the building platform, so we are using virtual machines there, approximately 30 to 40 builders. Usually not every builder is working — usually it's only a subset of them — but we hope this will change for the better in the future. Did that answer the question?

Any other question? Well, we are not rushing that. We have basically made API v1 and API v2 obsolete with the new API, but they might stay for another year, I would say, or even longer if people still actually use them. The new API makes for much more pleasant development than the previous ones, I would say, so we will try to promote it. But we are not rushing to break things and remove the old ones — they can stay there; it's not a big issue for us.
We are just happy that we finally have a new API that is actually usable — that somebody can use and be quite happy about using. Perhaps it might be better if you declare now the period in which you want to get rid of the old APIs, rather than waiting until you have to — because that way people can move on, and then you can kill it with fire. All right. So the remark was that we should actually state when we want to remove the previous APIs. We need to talk about it in the team and decide on the date, and then we should be able to announce a definite date. Any other question? Okay. So that's it. Thank you very much for your attention — did you have fun?