Hello, I'm going to give a short talk about a new feature that has landed in the audit system. I haven't really spoken about it anywhere yet, but it's on everybody's computer, assuming you're up to date on Fedora, and I think it's going to take the audit system in a whole different direction from where it's been for a long time. Does everybody remember what audit logs look like? They are ugly. Go over here and take a look, and a mess comes out; it's really hard to tell what's happening on the system. But there's a new feature, which I'm putting into ausearch (it's not there just yet), that outputs your log as an English sentence. It took a pretty good piece of magic to turn those logs into something like that.

Okay, so that was a quick little demo. You probably want to know how it works. There is a hamster in the box, and he is really good at translating. Seriously, there's actually a smart hamster in this box. Through Common Criteria, there are a certain number of things we are required to record in the logs: the time and date, who did it, what they did, what was being acted upon, and what the results were. So my thinking was that if we formed a sentence like the one at the top, "on node, at time, subject, acting as, results, action, what, using", it would map onto those required items. That actually turned out to work pretty well. Because of the way this works, we get a normalized view of the log for the very first time ever. There are about 120 different record formats and about 80 different orderings of fields, so it's a daunting job for anybody to write a program and know that when a path record comes out the object is in the second path record, but when it's a network call it's over in some other field. That's a mess. This straightens it all out.
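The mapping the speaker describes, from required Common Criteria items to an English sentence, can be sketched in a few lines. This is an illustration only, not the real ausearch code; all field names and the example values are invented stand-ins for real audit event fields.

```python
# Illustrative sketch (not the real ausearch code): mapping the items that
# Common Criteria requires in an audit event onto the normalized sentence
# "at time, subject, acting as, results, action, object, using".
# Every key and value below is a hypothetical stand-in.

def to_sentence(event):
    return ("At {time}, {subject} acting as {acting_as} {results} "
            "{action} {object} using {using}".format(**event))

event = {
    "time": "14:30:02",
    "subject": "sgrubb",
    "acting_as": "root",
    "results": "successfully",
    "action": "deleted",
    "object": "/tmp/scratch.txt",
    "using": "rm",          # hypothetical program name
}

print(to_sentence(event))
```

The point of the fixed sentence template is exactly the normalization benefit the talk mentions: however the ~120 record formats order their fields, every event is rendered into the same seven slots.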
So once you've got a CSV file, you can pull it into OpenOffice, you can pull it into Excel, you can stuff it into a database, or you can do what I'm doing and use R scripts to start looking at it. What I'm interested in is applying data-science techniques to analyzing the logs. You can do charts and things like that, visualize the data, and turn data into knowledge; but you can also create and train models and then check the logs to see whether you find something outside of what the model recognizes. From there it's a short step into machine learning. And since we've got just a few more seconds: this is what a typical log looks like. This is just one event; I've separated the records a little to make it clearer to see. The way that sentence I showed you works is that the tool goes through and creates it out of this event. It says: at this time, sgrubb, acting as root, successfully deleted this file using Xoff. What the classifier does is this: it knows that the subject is right here, and it knows that an alias for the subject is right there. It then looks at the success field. Then it looks at what's actually happening; in this case, unlink means deleting a file. It also has knowledge built in to know that, between these two path records, the one that's circled is the one we really want. And finally, it knows how to find what program was used to do this. That's how it strings it all together. I don't know how much more time I have, so I wanted to keep it brief, and I think this is a good cutoff. Are there any questions? Yes, I think at that point you extracted the file and got it deleted. Okay, let me go one or two slides more. This is the API for it, and the way it gets used is that you call that very top function, which is classify.
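The analysis idea above, training a model on normal logs and flagging what falls outside it, can be shown with a toy example. The speaker uses R; this sketch uses only the Python standard library, and the CSV columns and values are invented, so treat it as a shape of the approach, not the speaker's actual scripts.

```python
# Toy anomaly detection over normalized audit events. "Training" just
# memorizes which (subject, action) pairs occurred in a known-good window;
# checking flags pairs never seen before. Columns and data are invented.
import csv, io

TRAIN = """subject,action,object
sgrubb,opened,/etc/hosts
sgrubb,deleted,/tmp/a
daemon,opened,/var/log/x
"""

CHECK = """subject,action,object
sgrubb,opened,/etc/hosts
daemon,deleted,/etc/shadow
"""

def pairs(text):
    return [(r["subject"], r["action"]) for r in csv.DictReader(io.StringIO(text))]

known = set(pairs(TRAIN))                        # "train" the model
anomalies = [p for p in pairs(CHECK) if p not in known]
print(anomalies)                                 # -> [('daemon', 'deleted')]
```

A real model (in R or anything else) would of course use richer features than bare pairs, but the workflow is the same: fit on normal behavior, then score new events against it.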
So in other words, if you're using auparse, you move to the next event and then you call classify. What it does is build a bunch of pointers in memory. Then, if you want to find the object, which is that file, you call the auparse classify-object-primary function. When it does that, it positions the cursor, and then you can use any of the functions auparse provides to work with that field. Any other questions? I think my time is up, but I will be blogging about this sometime soon. I've got about 24 blog posts already lined up, so there's a lot to say on this topic. Thank you.

Okay, so hello. My name is Sjiwaniek, and I'm going to tell you about one really bad night on a bus, where I did not have access to Google and I needed to program something. So yes: Java, Python, Perl, every language has its documentation mechanism. Javadocs and PyDocs hold a huge amount of information, and luckily it is useful information. But it's quite hard to search in them. This is how programming works today. Luckily, I really tried that; it is missing a lot. Better. I have tried this on approximately 20 students, and yes, they all work the same way: Google right when I need it, StackOverflow, copy-paste. They are very effective with this, but it has some disadvantages. In the older days, before StackOverflow was invented, normal people programmed by searching the docs. Even some time before that, they read the source code; but those ways, much as I agree with them, have been abandoned by the younger generation. Still, all these approaches have one disadvantage: you search for something. You search for the documentation, or you search for the whole solution. The younger you are, the more you are here; the older you are, the more you are here; and if you are old enough, you are somewhere over here.
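The cursor pattern described for the C auparse API, where classify() pre-computes pointers into the event and classify-object-primary merely repositions the cursor so the ordinary field accessors can be reused, can be illustrated like this. This is a Python mock with invented names, not the real C interface.

```python
# Illustration of the cursor pattern the speaker describes (invented names,
# not the real auparse C API): classify() analyzes the event once and
# remembers where interesting fields live; the "positioning" calls then just
# move the cursor, so the normal field accessors keep working unchanged.

class EventCursor:
    def __init__(self, fields):
        self.fields = fields          # list of (name, value) pairs
        self.pos = 0
        self.object_index = None      # filled in by classify()

    def classify(self):
        # pretend analysis: remember which record holds the object
        for i, (name, _) in enumerate(self.fields):
            if name == "path":
                self.object_index = i

    def classify_object_primary(self):
        self.pos = self.object_index  # just repositions the cursor

    def get_field_str(self):
        return self.fields[self.pos][1]

ev = EventCursor([("syscall", "unlink"), ("auid", "sgrubb"), ("path", "/tmp/a")])
ev.classify()
ev.classify_object_primary()
print(ev.get_field_str())             # -> /tmp/a
```

The design win is that callers never need to know which of the ~120 record formats they are looking at; after classification, "the object" is wherever the cursor lands.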
So I was riding on the night bus from DevConf to FOSDEM, and I didn't have network, I didn't have battery, and I really needed to fix a bug and post it. I had plenty of Javadocs around, because I was debugging the JVM itself; I needed to fix that. So I knew that the answer was in the Javadocs, but I couldn't grep it well; there was a lot of auto-generated markup and so on. So I got mad, and on that night bus I wrote a new tool for myself to search the docs. Seriously, most upstreams provide separate archives for their documentation, and these archives are useful. Your hard drive is probably full of docs which you have never used and never will use, unless you somehow look into them. If you are programming with some library, you probably manage to link its docs into your IDE. Yes, that works really nicely in IDEs, but searching in the docs still sucks. Even through the IDE's viewer: it's not a real browser, and you want to browse through the docs. And that's the point: you are browsing. You need to browse the docs. It's HTML; a browser is made for that, and you have probably already adapted your browser to browse HTML in a better way. So on that bus, the tool was created. It's a small server presenting a small input form where you write your query, and it just lists your answers like Google does. It supports archives, it supports HTML, whatever. It even supports typo correction, because I make a lot of typos when searching. It has a Lucene index, a page index, and so on. A few weeks later I added libraries, because when I was searching in that internal Google of mine for the Java String, it returned me a String, but the second result was the Python string, so it was not working exactly well. So I created libraries: one library for Java, one for Python, one for Perl, and so on. Well, it's packaged for Fedora; go on and use it. And a simple demo at the end of how it actually works. Five minutes? Good, a longer demo.
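The core of a tool like this, index the local HTML docs once, then answer queries with a ranked page list, can be sketched in a few lines. The real tool uses a Lucene index and serves results over HTTP; this Python sketch drops both and uses invented in-memory "pages" instead of real /usr/share/javadoc files.

```python
# Toy version of the tool's core idea: strip HTML docs down to text, build
# an inverted index (word -> set of pages), and answer multi-word queries
# with the pages containing every word. DOCS is invented sample data.
import re
from collections import defaultdict

DOCS = {
    "String.html": "<h1>String</h1> <p>Immutable sequence of chars.</p>",
    "FileChannel.html": "<h1>FileChannel</h1> <p>A channel for file I/O.</p>",
}

def words(html):
    text = re.sub(r"<[^>]+>", " ", html)        # crude tag stripping
    return re.findall(r"[a-z]+", text.lower())

index = defaultdict(set)
for page, html in DOCS.items():
    for w in words(html):
        index[w].add(page)

def search(query):
    hits = [index.get(w, set()) for w in query.lower().split()]
    return sorted(set.intersection(*hits)) if hits else []

print(search("file channel"))   # -> ['FileChannel.html']
```

Serving this from a tiny HTTP form, as the speaker does, is what lets the results stay browsable in a real browser rather than in an IDE viewer.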
So let's search the docs. I haven't used this computer for some time, and I actually don't know what docs are on it. Quite a lot of docs. Here is the Java file search which will be used; otherwise, there are a lot of strange docs. Let's say I want to index... let's say this; I'm not even sure what this is. Sorry, not like that. Does anybody know where your distribution saves the Javadocs or the Python docs? No? Yeah: /usr/share. /usr/share/javadoc, and /usr/share/doc for the PyDocs, it's here, yes. But you do not want to type that path every time, so let's type it only once. What's here? A language detector is here, okay. Language detector, cool. And I'm indexing it as a library: lang... ld. So yeah, now it's indexing. Well, it's indexed; that's more or less all I need to search in this library. Now I will start the server, and here you go, the server is running. So yeah, that's cool. And I will search for the main method in it. And here it is, Google-like. It's a small library. Okay, it says I'm searching in Java; that's because Java is my default library. So here we are, in the language detector and the main class. It was easy: only two results for you. And here it is, the Javadoc as I am used to it. Safe, really good browsing; I can do whatever I am used to doing with my browser. I have the back button and so on, because this is where most indexes fail: the URL of the search query is useless for browsing. Not here. Anyway, that was really simple. Now let's see how it works with the Java library itself. This is the whole Javadoc for the JDK. Scanning it takes a minute or so, so you can really quickly index anything. So let's search for "chanel". There's a typo, with one n. What does it say? "Did you mean channel?" Okay, yeah, I meant channel. So yeah, it really fixed my query. Cool, cool. There's not much in the first results, so what about "file channel"? This is better.
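The "did you mean channel?" behavior in the demo can be approximated with character n-grams, which is also the technique an audience member brings up later. This is a from-scratch sketch, not the tool's actual code, and the vocabulary is invented.

```python
# Toy "did you mean" correction via character trigrams: rank the known
# vocabulary by Jaccard similarity between the query's trigram set and each
# word's trigram set, and suggest the best match.

def trigrams(word):
    w = f"  {word} "                      # pad so word edges count too
    return {w[i:i + 3] for i in range(len(w) - 2)}

def suggest(query, vocabulary):
    q = trigrams(query)
    def score(word):
        t = trigrams(word)
        return len(q & t) / len(q | t)    # Jaccard similarity
    return max(vocabulary, key=score)

vocab = ["channel", "string", "socket", "filechannel"]
print(suggest("chanel", vocab))           # -> channel
```

N-grams tolerate exactly the kind of dropped-letter typo in the demo, because "chanel" and "channel" still share most of their trigrams.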
Still, it's a little low. And here I found that the Lucene index is actually somehow better in some ways and not in others: there are two indexes, the page index, which is basically the old Google page-index approach, and the Lucene one, and the results are somehow a merge of both. So where are we now? FileChannel: it's here pretty quickly. And again I can browse it; switching through frames still works. You can completely safely browse all your Javadocs as if you came from your Google search, whatever. So that's it. And this approach has the ability to scan the archives, which is absolutely comfortable because you don't need to unpack them. So that's more or less it; I have something more for you. [Question about headless search.] Yeah, it has a headless mode, but that's not exactly it. Exactly. How much time do I have? Two minutes. Okay, then I will show you. Well, this is it: if you do not have even a network card, you may fail with that, so it can search for you even like this. Oh, okay. Well, you could grep for it, but that's not exactly the best, because you would need to open the result in something; that's not the best thing ever. Hmm, I think that's all. Any questions, or actually any hints, if you are familiar with such a tool that I overlooked? Sorry, what? I used n-grams? N-grams, okay; I overlooked that, I don't know it. Is it really such that you can browse the docs? If you're looking for a reference, it's on the list. Mm-hmm. Okay. If you don't like my Javadoc offline search, you can use that. I'm doomed. Any more questions? Okay. No? Thank you.

Okay. Can you hear me? Mm-hmm. Okay. So it looks like it works. So I am Kevin Kofler, and I'm going to talk about Kannolo, which is a pure KDE Fedora remix without GTK+. And yeah, first of all, I would like to thank my employer, DarkOptopterization Technologies, GameBear, who paid my travel expenses to come here.
Just to introduce myself to those who don't know me yet: I've been a Fedora packager since 2007, so 10 years now, mostly for KDE stuff; I already mentioned my employer; and I'm also a PhD student in mathematics. So what is Kannolo? Kannolo is a Fedora remix. It's remixed from the Fedora KDE spin, but unlike the official Fedora KDE spin, it has only KDE or Qt software in it. There is no GTK+ on the image. The upstream Fedora repositories are enabled by default, so you can of course get GTK+ and GTK+ applications from there, but they're not on the image, and the GTK+ applications are replaced with KDE or Qt applications; I'll come to that later. It's graphical thanks to the KDE Plasma Desktop workspace, and it's installable thanks to the Calamares installer. If you're wondering where the strange name comes from: there is a Sicilian cake called a Torta Fedora, so this was a pun, because the cannolo is a famous Sicilian sweet, just with a different shape, and the K stands for KDE. The main feature is that it's an installable live image. It's 64-bit native, and it includes the latest updates as of when the image was composed; currently I have images with updates up to January, and I plan to re-spin it frequently, unlike Fedora, where there's just the GA release and then you get the updates afterwards. There's also a nice feature, the net install of optional packages, which is powered by the netinstall module of the Calamares installer, so it's a hybrid live and net install: it first installs the live image and then adds extra packages on top of it. But it's also possible to install offline, without the net install; then of course you get only the packages on the live image. And it's significantly smaller than the Fedora KDE spin: currently we're at one gigabyte for Kannolo versus 1.4 for the official spin.
That's both because there is no GTK+ and no Anaconda stack, since we replace it, and also because, thanks to the netinstall, you can omit some optional components from the image. The main browser on Kannolo is QupZilla, because Firefox uses GTK+ on GNU/Linux, has poor system integration, and comes with some freedom and privacy concerns; for example, it sends things to Google by default and things like that. QupZilla is a modern browser for end users, and since version 2.0 it is based on Qt WebEngine, which is based on Google Chromium, so the Blink engine. It doesn't use any Google services; Qt WebEngine doesn't even have a Google API key. It's up to date with current web standards, and it has the security updates backported to the stable Qt branches. And QupZilla has very nice desktop integration: you have native icons, native file dialogs, native notifications and so on, and there is an optional plugin that adds support for KWallet. There is also one for GNOME, but I don't care about that in Kannolo, of course. Konqueror, the KDE browser, is also available in Kannolo currently, but it's still an old kdelibs 4 version, and that's why it's not the default yet; but it will become the default. Okay. Then the installer: instead of Anaconda, which is GTK+, I use Calamares, which is a distribution-independent installer framework; the goal is to be one upstream project instead of every distribution reinventing the wheel. It was started by Blue Systems, it now gets contributions from several distributions as well, and there's still a developer from Blue Systems paid to work on it. It's written in C++; it uses Qt 5, and Python 3 through Boost::Python. Major version 3 was just recently released, and yesterday they released the latest bug-fix release, 3.0.1. In the stable releases of Kannolo I still have Calamares 2.4.6; I currently have 3.0, and I'll have the bug-fix release shortly.
Calamares has an advanced partitioning module which uses KPMcore, the core of the KDE Partition Manager, and it has some unique features that Anaconda doesn't have, such as the already discussed netinstall module. For the firewall: since firewall-config is GTK+, I was looking for an alternative, and I picked UFW, the Uncomplicated Firewall, which was developed by Canonical for Ubuntu and which is also a frontend to iptables, like firewalld. It has a KDE frontend, an old kdelibs 4 frontend, but I got it to work in Plasma 5, so I'm adding just a standalone menu entry that brings it up outside of the KDE System Settings. For the crash handler: ABRT requires GTK+, so I decided to just not ship ABRT at all, because its reports would not be visible to the user. KDE application crashes were actually already caught by KCrash, which launches the DrKonqi crash handler, and DrKonqi is also an answer because it reports the crashes upstream and not downstream, so they go directly to the right people. This one some people here will probably not like: SELinux is disabled by default, because the troubleshooter requires GTK+, and so there would be no denial feedback whatsoever with only Qt applications, and I think that's not acceptable as a default user experience. I also don't think it is very useful to have SELinux enabled here, because, you see, the targeted policy protects mainly server applications, and the application you really want to protect on a desktop is the browser. So if you have a policy for Firefox, it's not going to apply, because I don't use Firefox; the default browser in Kannolo is QupZilla, which uses the Chromium sandbox, so you already get some sandboxing, mostly comparable to what SELinux sets up for Firefox, but it just works with no SELinux. So in the end the application that I need to protect is protected; SELinux is disabled in Kannolo, and some of its support packages are actually omitted to save space.
To create the image I use the old livecd-creator from livecd-tools, but I actually ported it to DNF, and somebody else ported it to Python 3; that's Neal Gompa, who is now the maintainer of the package in Fedora. The reason I picked livecd-creator rather than livemedia-creator is that it's easier to use: one simple command line, and there is no netinstall image with Anaconda that I'd need. It's also less picky about the contents of the kickstart. But the main deciding factor for me is that it supports a persistent package cache without complicated hacks. It also has better error reporting, according to the Korora folks; that's another Fedora remix, and its developers said they tried both and think livecd-creator is much nicer. It's also a simpler code flow, which I think is much more maintainable. It could also use non-flattened kickstarts, but I don't use that, because my kickstart is actually flattened; it's easier for me to maintain it that way. So here are some links where you can get more information. The main one is the top one; the others are upstream projects, and the last is the announcement for the livecd-tools fork. This is the project webpage, this is the front page here, and there are actually some screenshots. I don't think there's time to do a live demo, but I can show you the screenshots. This is the installer with the about box. This is QupZilla displaying the Calamares webpage. This is the UFW KDE firewall configuration frontend. This one is the KDE Partition Manager, which is one of the applications. This menu shows some office applications; Calligra and the Kontact suite are present. And this is the last screenshot: here you can see the netinstall module. These are the packages that are recommended; they are actually on the Fedora KDE image and are KDE applications, but I just had to omit them to save space. Okay, I'm done. Should
we do questions, or should we just stop here?

So, yeah, this year we have just a short talk, a brief talk: any news in LVM? Well, no big news, and I think that's good news, because we keep working. But there are a few things worth noting. First: we are shipping more and more daemons. You may have noticed that we keep adding more of them, but not everyone needs all of them, so it's worth reading the documentation and the manual pages and checking whether you need a particular one, whether it saves you time or brings you problems. I would specifically mention this one, because it is not really useful in all cases; so if you are missing some devices, it's always a good idea to check Google and eventually disable it. This is actually, at this moment, the best advice we can give you, because we cannot solve all the problems with the devices we have. So if you experience trouble with missing devices or duplicate devices and things like that, please switch this daemon off. What we do constantly is try to improve performance: we analyze how long it takes to process commands and how long it takes to work with larger sets of devices, and we try to improve it continuously. We fixed a couple of regressions, which may speed up the processing of volume groups with thousands of LVs drastically, so if you work at that scale, please enjoy. The other topic we spend a lot of time on is command-line validation. We have heavily extended the processing of command options, so we should no longer accept some nonsense command lines, and we continue to improve it; we would probably even allow accepting fancier interfaces. We often hear from other people that some command lines could be made better, so you are welcome to propose a better interface if you think there is one. I would also like to point out the tool presented by Bryn: a great enhancement at the DM level, part of the LVM package, where
you can now monitor, in great detail, the performance of individual pieces of your devices. I will skip it. We try to do enhancements for caching: we just extended the support to be able to uncache volumes if devices go missing, so if you corrupt something and your cache device has gone missing, you can now actually fix your LVs, drop the caches, and things like that. I will keep the thin pool for last. For the cache I will mention that there is an upcoming project by Joe Thornber, who is the cache driver maintainer: we should be able to deliver, in rather short order, a new version, cache version 2, which should improve performance and give you a better experience in terms of the speed of activation, deactivation, and other cases. Basically, we have learned lessons from cache version 1, and cache version 2 will improve on the inefficiencies we have found. RAID is going to get enhancements; I think this talk was here last year, but the enhancements were not so easy to get merged, so we are working on it. We are now much closer to getting them upstream, I think: the whole essence of those reshape patches is there, so there is a much higher chance that they will be upstream shortly. And for the thin pool, not much in terms of new features for creating thin volumes and things like that; that mostly works. But what is worth mentioning is that we now support running an external command when the thin pool crosses thresholds. Previously, if you had a thin pool, we had only one reaction: you could set a threshold, and when the thin pool reached that size, auto-extension was called, i.e. lvextend with the configured policies. This was quite limited, because that's the only thing you could do, and if the thin pool could not be resized, in the end we actually did an unmount, which was not always seen as a good idea. So, to make it more configurable and more usable, we now support a new thin command
via the lvm.conf option: you get this thin command called for every 5% increase, so at 50%, 55%, 60%, up to 100%, and your script gets, in some variables, how full the thin pool is, so you can react according to your needs. If you want to run lvextend at, say, 70%, you can call "lvextend --use-policies" and it will do it; but you can also call fstrim, or do an lvremove of snapshots, if you think that will solve the problem. Basically, we give the user much more power over how to solve the problem when the thin pool is running out of space. And this is probably the end of this story, because I want to give you a chance to ask me a few questions, eventually outside the room, but okay, go ahead. [Question about documentation.] For LVM, the always-valid answer is the man pages. We now provide specific man pages which are probably not that well known: we have man lvmthin, man lvmcache, man lvmraid, so we have specific pages for the targets now, and the daemons have their own pages too. And there should probably be some more documentation for LVM, because this is something we should probably admit: we have not solved everything; it's too hard. So yeah, documentation. I think Google, at least for my account, or I don't know if Google is just too smart in my case, usually gives me good links when I ask something about LVM, so use Google as well. Any more questions? No? So I think we are done. Thank you.
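The thin-command hook described in the LVM talk can be sketched as a small script: dmeventd calls the configured command at each 5% step, and the script decides how to react (lvextend with policies, fstrim, dropping snapshots, and so on). The environment variable name, the LV names, and the exact invocations below are assumptions recalled from lvm.conf documentation, not verified against a running system; check lvm.conf(5) on your machine before using anything like this.

```python
#!/usr/bin/env python3
# Sketch of a thin_command hook. ASSUMPTIONS: the fill level arrives in the
# DMEVENTD_THIN_POOL_DATA environment variable (verify in lvm.conf(5)), and
# the volume names "vg/thinpool" and "vg/old-snapshot" are hypothetical.
import os
import subprocess
import sys

def react(data_percent):
    """Pick a reaction for a given data fill percentage (or None)."""
    if data_percent >= 95:
        # last resort: reclaim space by dropping an old snapshot
        return ["lvremove", "-y", "vg/old-snapshot"]
    if data_percent >= 70:
        # let LVM apply its configured autoextend policy
        return ["lvextend", "--use-policies", "vg/thinpool"]
    return None                      # below 70%: nothing to do yet

if __name__ == "__main__":
    percent = int(float(os.environ.get("DMEVENTD_THIN_POOL_DATA", "0")))
    cmd = react(percent)
    if cmd:
        sys.exit(subprocess.call(cmd))
```

The point the talk makes is exactly this flexibility: instead of the single hard-coded autoextend-then-unmount behavior, the policy becomes whatever the administrator's script decides at each threshold.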