Okay, so everyone, we are going to start the last part of this first day of the conference: the lightning talks. Each speaker has 10 minutes; we will continue until the end, and then we are done for today. Thanks for attending, and I hope you will enjoy the next day.

Okay, so let's make this quick so we can all call it a day. This is my yearly "what are we doing in RPM" talk. As you may have noticed, we've pushed RPM 4.13 into Fedora. It comes with a pretty long list of changes, so I'm only going over the most notable ones. One of them is file triggers. Who here has heard of file triggers? Okay, that's pretty good. They're actually already used in Fedora. The next thing, which is not used so much yet, is boolean dependencies, which have also just made it into RPM. They're basically an extension of the weak dependencies we pushed in there last year. There are people working on them, but it's not there yet. The third bigger new feature, which as far as I know is not used anywhere yet, is support for IMA security file attributes. That's basically support for trusted computing at the kernel and file system level. So if you have actually managed to boot into a trusted kernel, you can then make sure the kernel only reads files which are also trusted. A guy from IBM has been working on that. I don't know if anybody is actually interested in putting this into the real world; maybe IBM is. So if anyone wants to have trusted computing in Fedora, we support it now. Go for it.

In addition to these three big things, we have a huge number of smaller items. I've only listed the ones that are maybe more interesting for packagers or users. We've added RemovePathPostfixes, which is basically a setting that allows you to cut postfixes off file names. This is useful for creating subpackages which have conflicting files. The problem is that RPM has basically one tree of files which it splits into subpackages, so it's not possible to have the same file in multiple subpackages with different content, for example different config files in subpackages that are preconfigured for different use cases. That's pretty annoying, and I think Harald Hoyer and a couple of other guys have been nagging about this. So this is basically a band-aid to allow it: you give a postfix that is cut off the file names, and then you can use the remaining name in the subpackages. Then we have new checks for the encoding of the spec file, something a lot of people have wanted for quite a while, because there are still spec files out there which are not properly UTF-8 encoded. I think it's still disabled, or it just gives a warning right now; at some point we are going to switch it to enforcing, and it will prevent rpmbuild from building packages which have improper encodings in them. We've also enabled brace expansion for the globbing, a minor thing, but you can use it in the file list, for example. And we've extended rpmbuild with a new parameter so you can do all build steps directly from an SRPM: -r, as in rebuild. You could already rebuild a source RPM directly, but you could only do a full rebuild, not run the individual stages; that is supported now. We also added --whatrecommends and all the other queries for the weak dependencies, which we somehow missed last year when we added weak dependencies. I don't know, I must have looked elsewhere.
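For reference, and as a sketch only rather than anything shown in the talk, the spec-level features above look roughly like this (package names and paths are made up):

    # A file trigger: run ldconfig whenever any package installs files under /usr/lib
    %filetriggerin -- /usr/lib
    /sbin/ldconfig

    # A boolean dependency, extending the plain weak dependencies
    Requires: (libfoo >= 2.0 or libfoo-compat)

    # RemovePathPostfixes in a subpackage: this subpackage ships app.conf.server,
    # and the ".server" postfix is cut off at install time, so the file lands on
    # disk as app.conf without clashing with the same path in another subpackage
    %package server
    RemovePathPostfixes: .server

And on the command line:

    # The new weak-dependency queries
    rpm -q --whatrecommends libfoo

    # Run only the %prep stage directly from an SRPM with the new -r option
    rpmbuild -rp somepkg.src.rpm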
We also switched the signing over to use pinentry, more or less because GPG changed to no longer allow passing the password on the command line, so we had to switch this over. That's done now; I hope there are not too many problems with it. I know that a couple of other distributions had issues with not having the proper versions of GPG and pinentry lying around. You now get warnings for unused macros, and there's a longer list of other stuff we fixed; there are release notes if you're interested. Okay, five minutes.

So then, we've continued with development. The big feature we are working on right now is getting the new database format stabilized. Some of you might have seen the huge discussion on the mailing list. The problem is that the Berkeley DB we are using now has been bought up by Oracle, and they changed the license for the new version; there's all this politics involved, so we basically have to move away from it. Another thing that might be interesting, maybe not so much for Fedora but for other people building packages, is multi-threaded xz compression. We have had customers with a well set up continuous integration pipeline which worked well, compiling on like 64 processors, which was quick and fast until you have to compress the RPM, which is single threaded and takes a while if the package is big enough. We now have a solution for this, and it's coming up in the next release. I think it's only compression: decompression is fast enough for one thing, and on the other hand you need to read the payload sequentially anyway. This might actually create problems with delta RPMs, but I hope I have an intern that I can put on this task to check and make sure delta RPMs continue to work. There's all kinds of tricky stuff there, like being able to reproduce the deltas, or recreate the package, in a bit-by-bit compatible way.

I need to hurry up a bit. There's a new dependency generator written in Python, which comes in from the Mandriva crowd. There's been some discussion; it's not 100% finalized how it's going to look, but that's coming up. And there are a lot of smaller fixes, so the next release is not going to have that many big changes, but more small bug fixes and all kinds of cleanup. We have basically started doing a bug-fixing Friday to get the number of tickets down, both in Fedora and in our own bug tracker. Another interesting thing that we added recently and backported to Fedora: we promoted missingok, which used to be tied to config (%config(missingok), so you could have config files that you are allowed to delete without getting an error from verify), to a full-featured file attribute. The glibc people currently need this to get their language packages done, as they for some reason have files they want to merge into their language databases somehow; for the details, ask the glibc people.
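As an illustration, not something shown in the talk, and assuming I have the syntax right, the two items above would look something like this in a spec file (the macro value and path are made up, and the threaded-xz payload string follows the wNTM.xzdio form that later RPM releases document):

    # Multi-threaded xz payload compression: compression level 6,
    # T0 asks for the thread count to be autodetected
    %define _binary_payload w6T0.xzdio

    # missingok as a standalone file attribute: verification will not
    # complain if this file has been deleted from the system
    %files langpack
    %missingok /usr/lib/locale/some-langpack-db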
So that's what we're doing right now. What else? We've moved... we've actually not moved, but we have cloned our repository to GitHub. This has worked out very well for us, and we are getting a lot of contributions right now; we even have to set aside a day a week to go over all the pull requests and get them reviewed and merged. For now, a lot of these contributions come from outside the Fedora world, so I want to urge you to have a look and make sure that it's not only the other distributions pushing stuff in there without anyone else noticing or caring. So that's what's happening. How many minutes do we have left? One. Then I'm done.

Can you hear me? OK. Hello, I'm Honza Šilhán, and now we are moving one layer up, from RPM to DNF. In my talk I will cover what we have done from the previous year up until now, and I will also reveal some future plans for DNF. DNF's main goal is still to maintain compatibility with Yum. So far we have yum-utils almost fully covered by compatible DNF plugins, so you can look up each tool's counterpart in the yum2dnf manual page. We have also added documentation about DNF to the Fedora Project and Fedora wiki pages, so almost every use case is covered with a DNF snippet as well. For those of you who are still using Yum, I have good news: you can switch to DNF immediately. You can just type dnf migrate, and it will transfer all your history and other metadata from Yum to DNF. As for the status of packages relying on Yum, there are only 13 left; the rest have been ported to DNF, and in the Fedora base image there is no package which requires Yum. The FedUp tool, which takes care of upgrading Fedora to the next version, now uses a DNF backend. That means it no longer ignores package conflicts, and you should still be able to boot up afterwards.

Okay, let's look at the new features in DNF. We introduced the new mark command. Let's say you install package A, which requires package B. You actually use both packages, and then you decide you want to remove package A, and you notice that package B is also getting uninstalled. To prevent this you can type dnf mark install packageB, and it will stay installed until you decide to remove it explicitly. DNF has full weak dependency support, so you can query for weak dependencies with DNF repoquery; it has --enhances, --supplements, --whatenhances and similar switches. DNF also has two modes for how it treats weak dependencies: by default it installs all recommended packages, but you can turn that behavior off if you want to keep your system minimal; you just have to set the install_weak_deps option in the DNF configuration file. I must also say that DNF is more user friendly with regard to the user experience: it provides you with hints, like showing you that some packages were skipped and how to resolve certain conflicts. For package maintainers we have also described the rules (actually, written a draft) for how they can prefer one package over another by using weak dependencies or versioned provides.

Okay, now let's look at the plans for next year. DNF so far uses four C libraries, while Yum was written entirely in Python. Recently we merged the DNF C library with the PackageKit library, and we are planning to move more parts of DNF into the C code base. For package maintainers we would also like to describe some rules and provide solutions with examples for how they can solve packaging problems using the cool stuff from RPM, like weak and rich dependencies. And last but not least, we would like DNF to provide more verbose output when you are dealing with package conflicts and want to know what happened inside the dependency solver. That's it for the plans.
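A quick sketch of the commands mentioned above, with a made-up package name; the last line is not a command but a configuration option that belongs in DNF's configuration file (/etc/dnf/dnf.conf):

    # Transfer Yum history and metadata over to DNF
    dnf migrate

    # Keep package B installed even after the package that pulled it in is removed
    dnf mark install packageB

    # Query weak dependencies via repoquery
    dnf repoquery --whatenhances packageB

    # In /etc/dnf/dnf.conf: do not install recommended packages by default
    install_weak_deps=False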
Now I would like to invite you to a talk titled "Developers, QEs of themselves". As the title says, it's mainly targeted at upstream developers who have no QEs and would like to deliver stable releases of their applications with minimal effort. So you are free to go there; we will explain our continuous integration workflow and which components we used in it. It will be on Sunday at 13:10. Do you have any questions? "So when something breaks and I do a dnf upgrade, does it output that?" No, no, only when there's a conflict and the transaction cannot be resolved. "Because sometimes no packages are chosen." Yeah, then you can switch on verbose mode and it will be shown. "It didn't give me this information." Yeah, that's the future plan. Any other questions? It will probably be some draft, a packaging draft. You can actually use the version from April 7, I think. Yeah, but there's only version 0.6.4, I think. Any other questions? "There's one reason why people still need to use Yum: the update support, where you give a specific update ID and..." Excuse me, that's being covered by the work on a security plugin. So yeah, it's being worked on. Okay, any other questions? Okay, thank you for your attention.

Is he in extended mode? Hello? Okay. I'm not going to talk one layer further up from DNF; this is a completely different thing. I'm going to talk about release engineering. Before I start my talk, I would like to show you this short clip, which always kind of makes me smile. One second, sorry. So this is something I remember when I hear "release": the crazy baby, who has trained hard and is the one to catch it. So let me move on.

So, why am I a release engineer? Thank you. I come from a development background, I also have experience as a release and configuration manager, and I have worked in customer support as well. What I realized while handling all these roles is that I like solving customer problems, I like to dive into a problem and see technically what is going wrong, and I was also passionate about delivery and release processes. That's when I discovered that release engineer is a role which lets me do all of these things. So that's why I am here. And what is in store for you when you become a release engineer? What opportunities does it open? From my experience, being a release engineer at Red Hat has given me the opportunity to develop tools which can be leveraged for a faster release process. You get to work with a bunch of inspired people, and you collaborate with teams, which is going to expand your knowledge about the product or the project. You get a broader idea of how the product is going to be delivered to the customer, how to make it accessible to the customer, and of other aspects like the quality and security of the product. So I think it opens up a lot of opportunities, and you can grow further in any capacity as a release engineer. You will work with multiple products and multiple teams, and it will make you think about the processes we have and how we can leverage them across products. It's a constant process of reaching maturity, and you will think of innovative options and different tools: how we can make installation easy, packaging easy, and how to make the delivery process really streamlined, thinking in different ways.
So it opens up different opportunities for you. So, what does release engineering do? I would like to talk about what we deliver and what that looks like. As a release engineer, broadly categorized, your responsibilities are build and packaging, content management, and delivery into customer environments and into the test environments where the developers or QE want to test. Being a release engineer also means you will be engaged starting from the planning phase of a product: when any product or project is initiated, you will be the go-to person for understanding what content is being delivered, how it should be done, what different methods are available, which distribution platforms to look at, and things like that. You also ensure the content is accessible to the customer: how can I deliver it to the customer, and if they see an issue, what is the process they should follow? As a release engineer you will be evangelizing the software delivery processes that are used, and you will provide consultancy to the products on how they can go forward: once they deliver the products, what will that look like in the market, and how will the customer see it? For example, when you distribute it as a Docker image, ISOs, RPMs or anything else, how are we going to package it and make it reachable to the customer? That's what we work on. On another note, we also develop tools: we keep track of what different products are coming in, what methods they are using, whether our tools support their build environments, test environments and test frameworks, and whether we need to come up with something new to support their processes. We think in all these different ways and try to make the release as smooth as possible for them. I think that's all I had to say; if anyone is interested in release engineering and wants to know more about it, you can contact me here. Any questions? Yes: actually, we have a choice of which product we would like to work on, and it's not only one product; you might be interested in working on many, so it is up to you to choose. And it's not only RPMs; you can explore other options too. Any others?

So that's it. Hello everyone, my name is Nikolai Kondrashov. I work in the identity management and security group at Red Hat, and I'm here to present a project we have been working on for a while now: what is going to happen and what's exciting about it. I don't know, is it better this way? That's better. The idea is that many companies, especially big companies, have contractors and outside sysadmins whom they don't really trust very much, who come in just for a while but need access to privileged systems, privileged access to some critical stuff. Also, some government organizations, at least in the US, are required by law to present recordings of user sessions for review. So we are trying to build an open source system to handle that. There are plenty of commercial systems; one of the bigger ones is Centrify, and just looking on the internet I found eight products which more or less fulfill that role. Most of them are pretty good: they have centralized servers to store the recordings, to search them, to play them back at various speeds with rewinding, and to look for the commands which the user entered at a given point
and just rewind to that point in the recording; pretty good. But on the open source side there's really nothing, nothing good, no real product for that. There is sudo, which allows recording user sessions (the privileged access) into local files, but there is no built-in way to deliver them to servers, and there is no integration with user management or anything. There are some tools that allow central recording, like tmate.io or asciinema or showterm, where you can upload your session somewhere to a single place, but that's obviously not suitable for security purposes. So there really is nothing that fulfills that role in the open source world, and that's where we come in.

We are working on exactly the stuff that the commercial solutions provide, but we are doing it the open source way: we are building everything from open source components, we are open source, and so is everything extra that we are doing. Naturally, we are using the stuff we have in our identity management team: we are planning on using FreeIPA as the central management system on the servers, and on the client side we are using SSSD, which is going to control session recording itself, configure everything, and control who is recorded, how he or she is recorded, and so on. Everyone probably knows what auditd is; for those who don't, it is a system that gets messages from the kernel and records what users access, what syscalls were involved, etc. We can use it to record what commands the user executed and exactly which files were accessed, and to apply filters for which files we want to track. That's good, but there is really no good terminal recording solution, no program that records what the user did on the terminal: what he saw, what he typed in, and what was displayed to him. There are, again, script and sudo, but they don't integrate well with central delivery. That's why we are implementing this thing, and I am the one implementing it right now. It's a tool that basically gets in front of the user's shell, records whatever passes between the shell and the user's terminal, and logs it in JSON to wherever we want. So, for example, there is a user session where the user is trying to execute a command with sudo, and below it is what gets logged to syslog at that moment. As shown in the previous picture, from the log server it eventually gets to Elasticsearch, which allows us to search everything and visualize everything, and eventually to display it in Kibana. Our current idea is to get a playback visualization, a nice playback with rewinding and everything, into Kibana, as good as possible. We have no idea if that's going to work, but we are going to try it. So, back to this: this is how it would look in Kibana, just viewing the JSON; that's the same log message we saw on the previous slide. The current plan is to get this into Fedora this year, later to release it as a tech preview in RHEL, and then to grow it from there. That's it, any questions?

The idea is to get tlog to run as a special user: it will be setuid, and when the user logs in, it becomes a different user, starts logging, then forks, then drops back to the original user and starts the shell. So it runs as a different user. Of course, that doesn't help with root, but nothing helps against root. If we want to log root, we will probably be using jump servers, where the user has to log into one server on which he doesn't have root privileges, and then he logs into the target server, and the
intermediate session is logged. The thing is that you can stream it: with tlog you can deliver these log messages as they go (there's a time limit before they get logged), and they can be delivered immediately to Elasticsearch. The link below has a video demo of how that happens, a demo of how it works: basically, exactly what you see on the screen is what gets logged. Yeah, sure, that's perfectly fine, it's all preserved. Yes, that's a problem; there will be an option to not log the user's input, only the user's output, but we'll still log the commands executed as part of the auditd log, so we can just skip the input. I'm sorry, I'm out of time; we can talk in person.

That looks good. So it's me between you and Tobias; I'll try to do this quickly. This is a motivational talk and a short how-to for what I'm trying to motivate you to do, and that is testing the kernel. So, why help testing the kernel? There really is a lot of computer hardware out there: every year there's a new notebook and a successor, new printers come out every year, and that's something the kernel needs to support. It gets even more complicated because hardware components can be combined in any number of ways, and that influences how Linux deals with them. What makes the situation even worse is that firmware sometimes influences Linux compatibility too, and the config you use in your kernel, or the config your distro uses in its kernel, influences it as well. So in the end most systems are quite unique, and maybe just in the stack that you have there is something where a bug shows up. Then that bug will show up and annoy you and others, and that will happen even more quickly if you have really unique hardware, say a five or six year old graphics card that nobody else uses anymore, or one that was kind of special even back when it was new. So if you don't test those new kernels on your hardware, nobody might, and hardware-specific bugs might only be found when they are really old: maybe just two or three months old, but depending on your distribution it can even be one or two years. The problem with that is that finding and fixing the root cause of such bugs gets harder and harder the older the bug gets; sometimes it can even become nearly impossible. Say you have a six year old graphics card and there was a bug in the driver so it didn't compile, and the kernel developers said, okay, this code hasn't compiled for two years, it seems nobody uses it anymore, and threw it out. Then if you switch from Ubuntu 14.04 to the next one coming out in April, you might suddenly notice that the driver you used until now is gone; bringing it back is really hard, so you are annoyed and have to live with it and find a different solution. That's why you need to test: it's in your own interest. As for the other risks: old kernels normally stay installed when you're installing a new kernel, so you can always go back to the old kernel and boot that instead, and new kernels are normally backwards compatible, so there shouldn't be any problems with a new kernel. There is a risk of data loss, but it's quite unlikely; that's the short version. The longer version is: yes, of course it can happen, but it doesn't happen that often, so it's something you should keep in mind but not a reason not to test, because otherwise nobody will test, and in the end your hardware won't work anymore. So, how to test? If you're a Fedora
user, run Fedora Rawhide. The kernels used in Fedora Rawhide are pretty close to what upstream is developing right now in the mainline kernel, maybe one or two days behind mainline most of the time. There are a few Fedora-specific patches in the Rawhide RPMs, but compared to other distributions those are just a few, so if you report problems to the Fedora developers or upstream, it's normally not a problem that your kernel has extra patches in it; there are a few situations where it matters, but then the developers will tell you how to test without them. If you're not running Rawhide because you think it's too unstable or something like that (it is a little bit unstable), you can run the Rawhide kernel on the latest Fedora release, and that works most of the time. I've heard that right now it doesn't work due to some dependency, but there are always ways to get around that. If it works, you can grab it with dnf on the command line or from Koji directly and install the RPMs; it's quite easy.

As I indicated already, there are different ways. I'm maintaining a Fedora repository where you can get vanilla kernel RPMs and run them on Fedora without compiling them yourself. There's actually a page in the Fedora wiki that explains how to use it; the two important commands are in the screenshot, and I've put them here as well, but you can find them in the wiki, and if you google for kernel vanilla repositories for Fedora, Google will get you there in case you forget to write down the URL. And yes, I'm running those kernels myself on my notebook to help test the kernel. I've never run into any bad bugs so far, but I did find a few bugs that I had to debug and track down, and some things got fixed before they hit you. Similar repositories are available for other distributions, at least for the big ones, so if you want to go down that road, just search for your distribution and vanilla kernels.

The third way is simply doing it manually, and that's quite easy. You download and extract the latest kernel sources and run these two commands (that's actually a backslash there), which create a configuration that's based on your old distribution kernel configuration and then throw out everything which seems to not be needed, basically all the modules that are not loaded. Then you compile; on my ThinkPad T420, which is like four years old now, that takes just 12 minutes. Then the kernel is ready: run the install command, reboot, and profit. It's there, you can boot it and check if everything works, and if it doesn't, you can report bugs.
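The exact commands were on the slide, so here is only a rough sketch of that manual route; the tarball name is illustrative, and make localmodconfig is my reading of the step that bases the config on the running distribution kernel and throws out the modules that are not loaded:

    tar xf linux-4.x.tar.xz && cd linux-4.x   # download and extract the sources first
    yes '' | make localmodconfig              # start from the running kernel's config,
                                              # disable modules that aren't loaded,
                                              # accept defaults for any new options
    make -j"$(nproc)"                         # build using all CPU cores
    sudo make modules_install install         # install the modules and the kernel
    # then reboot and pick the new kernel in the boot menu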
That's the main part, I guess; time runs out soon. I actually have more, for example how to get rid of those kernels again, which didn't fit into this talk. Normally it's not a problem: the RPMs you can just uninstall, and the kernels you installed manually are quickly deleted from the file system; it's not that hard. I'm putting this up here, but maybe you have questions already. For the kernels I compile myself: that's actually on my to-do list, but I do boot each of those kernels once in QEMU and check that it comes up; normally I only publish them if they boot fine. As for where and how to report bugs, that depends on how specific the bug is. As I said, there are billions of hardware combinations out there, and if it's something like the audio codec on your specific machine, which likely nobody else has, it's not at the top of the kernel developers' to-do list, so you might be better off going upstream, because the developer of the driver knows more about how to fix it and what might be wrong. But not all developers work that way; that's also the short version. I could do this talk in one hour, I guess, and there would still be things to talk about.