Okay, it is Thursday, we have lightning talks, and we have what looks like nine marvelous speakers doing eight things. First up we have Kareem Khazem — is that right? — talking about effortless reliability.

So here I have a list of software which is critical for the health of the free software ecosystem, and which the Core Infrastructure Initiative has said is vulnerable, or at risk of Heartbleed-style bugs. What I figured out is that, from this list of important and vulnerable software, these ones don't have any kind of test suite at all. So there is some very important software here; we need it to be reliable, but it's very difficult to do quality assurance on it because there's no test suite, and in many cases the upstream maintainers are unresponsive. So what do we do about software like this? I'm not going to write test suites for it, and neither are you. So I'm going to propose a new approach to quality assurance, complementary to existing efforts at Debian, which will be good for software like this but also beneficial for all software packaged in Debian, and which addresses the technical challenges — and also the social and cultural challenges — associated with quality assurance in Debian.

What I'm proposing is an infrastructure with the following three properties. The first is that you give the infrastructure a Debian package — thanks, that's right — and it generates a whole bunch of genuine bug reports, so no false positives. The second property is that the infrastructure must be able to generate bug reports without any kind of per-package individual setup. And the third property is that this must be a service: I don't want the obligation to be on maintainers to download this thing; I want it to be a centralized service. So what's the point of these three properties? If we have a centralized service that generates genuine bug reports for Debian packages, and which does not require any knowledge or setup per package, then we can turn it on by default for every single Debian package and get bug reports for every single package without the maintainers having to do a single thing. What I'm hoping is that one day maintainers will look at their package web page and there will be a kind of health report for their package, with results from all kinds of bug-finding tools telling them about any problems. There will be line numbers and descriptions of bugs; contributors will be able to click on a line number, go to sources.debian.net, fix the bug, and then easily send a patch to the maintainer.

Does this work in practice? My PhD supervisor already has something like this at the university: he runs a static analysis tool called CBMC on the entire Debian code base, and he has used this to file hundreds of bugs, many of which have been fixed. So we're proposing integrating this into Debian: running freely available static analysis tools like CBMC and Infer on the entire Debian code base in order to find bugs. We'd also like to run dynamic analysis on the entire Debian code base, with Valgrind for example. To help with this, I'm developing a tool called SMID which automatically interacts with interactive programs using sensible user inputs. We're hoping this will benefit two groups of people. First of all, people who are already familiar with a code base will get the results of these high-quality static analysis tools that don't generate any false positives and only tell you about genuine bugs, and they won't have to do any of the boring setup, because we will do all of that centrally.
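For illustration only — nothing here is the actual infrastructure, and every command and name is hypothetical — the heart of such a service could be as small as: fetch a package's source, run an off-the-shelf analyser over it, and collect the findings for the package's health report.

    # Hypothetical sketch of the proposed central QA loop; "some-analyser"
    # stands in for a real tool such as CBMC or Infer, whose actual
    # command-line interfaces differ.
    import subprocess
    import tempfile

    def analyse_package(package):
        """Fetch a source package and run a static analyser over it."""
        with tempfile.TemporaryDirectory() as workdir:
            # Unpack the package source (requires deb-src apt sources).
            subprocess.run(["apt-get", "source", package],
                           cwd=workdir, check=True)
            # Run the analyser over the unpacked tree; its report would
            # feed the per-package health page described above.
            result = subprocess.run(["some-analyser", "--report=json", workdir],
                                    capture_output=True, text=True)
            return result.stdout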
More importantly, I'd also like to encourage new contributors to Debian — ad hoc contributors — to start fixing bugs rather than just adding features to Debian packages. For example, I'd love to see students looking for a fun project over the summer be able to easily see a list of bugs, jump to where a bug is in the source code, fix it, and send a patch upstream, without necessarily having knowledge of the whole code base — doing all the easy stuff, and leaving the more difficult or intricate issues to the upstream maintainers. So in summary, I'm proposing to implement this thing: an infrastructure that finds genuine bugs, with no per-package setup, offered as a service, so that we can run it by default for everything — to help both people who understand the code base and also to encourage new ad hoc contributors to Debian, to help them get into free software, into Debian, and into quality assurance, and to make this as easy as possible. I'd love to hear ideas about this; a lot of this work is already based on things that are already in Debian, and we're doing a lot of this together, so if you've got ideas, either about the big picture or the details, I'd love to hear them. My name is Kareem, my PhD supervisor is Debian developer Michael Tautschnig, and I'd love to speak to you afterwards. Thank you.

Thank you very much. Next up are Axel Beckert and Frank Hofmann, and they're going to talk about this book that they've just come up with.

Hello, my name is Frank Hofmann, and this is Axel Beckert. Both of us are writing a book; this book is named the Debian package management book, and we would like to present what we did so far. We've been writing this book for two and a half years already, and it features dpkg, apt, aptitude, and the whole APT ecosystem around them. It will be in German initially, but an English translation is planned. And the big news: it has been under a free license since about a week ago — you can get the sources on GitHub. It will be available as an ebook; actually, you can already build your own ebook from the sources now, in PDF or EPUB format. There's a printed book planned at the Onyx Neon publisher, whom I have the pleasure of saying are here — Allison is sitting somewhere over there; thanks for coming. It will also be available as a Debian package in the future; the packaging is already there. You can get the source code on GitHub under "dpmb": DPMB is "Debian package management book", or in German "Debian-Paketmanagement-Buch" — same abbreviation, hence we have chosen it. So if you want to contribute, or found a typo, or even if you have complaints about the content: fork it, fix it, and file a pull request. And thanks to Mechtilde over there, we already had our first pull request today. Here's how you can build the book yourself — how many minutes do we have? Oh, okay, well, we can still show this page a little bit. It's quite easy: you need AsciiDoc, DocBook, and LaTeX; you clone the repository, change into the freshly created directory, and type make (see the sketch below), and about ten minutes later, depending on your machine, you'll have an EPUB, PDF, and HTML file. Or you can even run dpkg-buildpackage on it and install it as a .deb; that works the same, more or less. And here are the links: debian-paketmanagement.de for those who can easily remember German words, or dpmb.org for those who don't want to type that much.
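The build steps, spelled out — the repository location is assumed from the DPMB name given above:

    git clone https://github.com/dpmb/dpmb.git
    cd dpmb
    make    # ~10 minutes later: EPUB, PDF and HTML output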
The slides are also online; they're also AsciiDoc, and you can find them in the organization's account under "talks". We also have an email address to contact us both if you have questions. And that was actually it — we're looking forward to getting pull requests, typo fixes, and maybe, later, people who want to buy the book when it's printed. If you have questions, if you want to contribute, if you have questions regarding which parts are included, if you would like certain topics added, or if you think something is missing: just contact us, talk to us — we are happy to hear from you. Thank you. One FAQ: the book will not be about packaging; it will just be about handling the binary packages — the former would be far too much. Thanks.

Okay, we'll just have some singing and dancing while we wait for the setup — but actually, here it comes: Ondřej, whose name I still haven't pronounced properly once, and he is talking about how SO_REUSEPORT can improve UDP performance.

Hi. So, who thinks they know what SO_REUSEPORT is? Okay. And who confuses it with SO_REUSEADDR? Okay. SO_REUSEPORT is a feature which was introduced in Linux 3.9, and it was originally meant for TCP, because there was a bottleneck for TCP as well. It's a feature of the network socket API, and it gives a big performance boost on multicore systems. It allows you to bind the same port from multiple threads — which normally you could do as well, but the threads would compete, which you can see here. There are several queues — well, it depends on your network card, but there are network cards that have separate queues — so if you have multiple incoming queues on the network card, then there are several receiving threads in the kernel, but it all comes down to a single socket and a single buffer: there's a lock which distributes the incoming packets between the threads in, for example, a DNS server — because we are writing a DNS server, and we want to have the fastest DNS server in the world. So this is the situation without SO_REUSEPORT: the threads compete for the socket, and the incoming packets are not distributed evenly. With SO_REUSEPORT — and these diagrams are all simplified, because it's much more complicated — the incoming packets are distributed between the threads evenly. There's a hashing algorithm, so the packets are pinned to each thread according to the source address and things like that, but basically it looks like this after you use SO_REUSEPORT. And it can get even better: there's an interface in /sys where you can pin individual queues on the network card to individual threads, but we are not there yet, because it's more complicated — because of the hash function I told you about, which distributes things a little differently. But the difference we've seen: the blue lines at the top are different versions of Knot DNS, and the top line peaks at 800,000 queries per second, where before it was something like 400,000 packets per second that we were able to process. So it almost doubled the performance on the same hardware, just by using a single kernel option — well, you also need to optimize the number of threads so it matches the number of queues on the network card — but yes, almost a double performance boost. That's it, and it can of course be used for any TCP or UDP server out there. Thank you.
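A minimal sketch of the mechanism Ondřej describes — each worker creates its own socket bound to the same UDP port, and the kernel spreads incoming packets across them. This is illustrative Python, not Knot DNS code:

    import socket

    def make_worker_socket(port):
        # One socket per worker thread or process, all bound to the same
        # UDP port; needs Linux >= 3.9, and all sockets must be created
        # by the same effective UID.
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
        s.bind(("0.0.0.0", port))
        return s

    # The kernel hashes each packet's source address and port to pick one
    # of the bound sockets, so workers stop contending on a single buffer.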
Okay, right — next, Tim Ansell will inform us how, and why, and in which fashion he has too many projects.

Hi, I'm Tim, I'm from Australia, and I have too many projects. I also have too many slides, so if you could tell me every 30 seconds, that would be great. I need help, because I really need to sleep, and since this is a Debian conference, I need help with packaging things, because I'm upstream — and being in Debian is awesome. If you're a Perl hacker, tune out for a second; if you're not: python-datetime-tz is a Python module you should be using if you use Python and dates. However, "apt-get install python-datetime-tz": the computer says no. So I would love help there. Also, if you're using Python and you're tired and it's 2 a.m. and you want to debug your server, use the q module. It's printf debugging on steroids, basically: you q anything and it ends up in a file; you can q a function, a class — it does everything. Again, also not in Debian; I would love help there.
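A hedged taste of the two modules Tim mentions, going by their documented interfaces (the log path is q's default):

    import datetime_tz   # timezone-aware drop-in for the stdlib datetime
    import q             # "printf debugging on steroids"

    @q                   # trace this function's arguments and return value
    def when_is_it():
        # An aware datetime in the local timezone, with no tzinfo fuss.
        return datetime_tz.datetime_tz.now()

    q(when_is_it())      # log any value; output lands in $TMPDIR/q
    # ...then watch the log live with: tail -f /tmp/q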
I also don't like small projects. I have a big project, the TimVideos project, which is a bunch of projects trying to do recording and live streaming — basically to replace the DVswitch pipeline that's currently being used at DebConf. Now, I can't make your content interesting, but I can help make it readable. I have a tool to do that called slidelint. Slidelint is basically a presentation proofreader: it checks that your slides are good and don't have the common pitfalls. There's a command line version that needs packaging, and a website version which you don't need to package, because it's a service.

So the question is: how do you record and live stream a conference? This is how. First you need some capture hardware. We're working on a project called HDMI2USB to do capture; it's firmware that sits here and captures from the computer or the camera. We have two firmwares which do this: one written in VHDL and Verilog, and one written in Python. They need to be packaged so that people don't need to build them from source — they can just apt-get install the HDMI2USB firmware. The new firmware is written in Python; you still need to understand hardware, but it's much easier than Verilog, and I would love help packaging the dependencies it needs. We also have open hardware: this is the new Numato Opsis board, which you can sign up to get. It has two HDMI inputs, two HDMI outputs, DisplayPort in, DisplayPort out, two USB ports, gigabit Ethernet, and an expansion port — you could probably use it for other things, such as video DJing and the like. It's all open: the schematics are up on GitHub right now, and it's done in KiCad, so you can even use free tools to look at your free hardware.

Once you've captured the video, you need to do mixing, so I have a project called gst-switch, which replaces DVswitch: a software video mixer that does HD. It's written in C and has a Python API; that needs packaging as well. I wrote it in C because I thought C was better for this, but the CCC decided they like Python better, so they rewrote a version in Python called Voctomix. They would love help with that, and it probably needs packaging too. Once you've done that, you need to stream it: there's a website and a Flumotion-based thing that needs better packaging, because upstream for Flumotion is dead and we have our own custom version of it — and we need help there: how do you package that? I have no idea. But you still need people to know about your event and the live streaming, so I also have Events Everywhere, which is a bunch of command line tools for publishing your event to Facebook, Google+, Eventbrite, and Meetup.

So yes, that is a summary of some of my projects; I have more. Just in summary: TimVideos, live event recording; slidelint, make your slides better; HDMI2USB, capture; gst-switch, software mixing; the streaming system; Events Everywhere, publish your events everywhere. Support open hardware and come and get a board. I apparently have misjudged it and have 30 seconds, so: I have one of these boards here; if you're interested in looking at it, come and find me later. I also have business cards so you can find me later. Everything is open. I like open source software. Go!

Thank you. Tim made it with five seconds remaining — well done. Okay, up next is Martin Pitt, finally there when I expect him, talking about autopkgtest for CI. — Do I have a microphone? Thank you. — So, I'm Martin Pitt, the autopkgtest maintainer in Debian, and I think it's safe to say that, so far, writing autopkgtests for packages has been quite a great success: we have over 4,000 packages with tests in the Ubuntu and Debian archives, and because we do forward and reverse dependency coverage with those, they actually cover a lot more — I think every other upload in Debian and in Ubuntu triggers one or more tests. Nevertheless, the motivation for maintaining and creating more could be even greater: we have about 300 tests in Debian which have never succeeded — which are just broken — and we often get test regressions which nobody really cares about. I believe this is because we don't use those tests to their full potential: we don't use them for continuous integration, only as a convenience for the maintainer. Antonio Terceiro set up this wonderful infrastructure on ci.debian.net, which I hope you're all familiar with, and which dutifully executes all those tests — and then we completely ignore them for testing migration, and this needs to change. This is how an excuse looks in Debian right now: britney is the machinery which decides when to propagate a package from unstable to testing, and it currently takes into account things like age and whether the package is built and installable — but there's a big thing missing, which is tests. And britney would be the right place to actually trigger the tests and evaluate the results. This is how excuses look in Ubuntu: we have a britney branch which actually does that. Whenever we upload a package, it triggers the tests for that package and all of its reverse dependencies, and as soon as there is a regression, the package gets blocked, period. This helps us to essentially ratchet towards a situation where we never regress the development release. And this — no offence to the ownCloud maintainers here; this is just a random example I fished out of the excuses yesterday — you see that even though the package's own tests are perfectly fine, it detected that the new vobject module somehow broke the dav module. And this is just a trivial case: consider if someone uploads a new Perl, or, as we currently do, Python 3.5 — that literally triggers hundreds of tests, and only when all the regressions have been sorted out does the whole thing migrate to testing at once, or to the development release in Ubuntu's case. So my plea to us all is: please, let's put these tests into action and integrate them into Debian, so that we stop having the weird situation that we can detect regressions but we don't act on them. Thank you.
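For context on what those tests look like: a package opts in by shipping a debian/tests/control file naming executable tests. A minimal, hypothetical entry:

    Tests: smoke
    Depends: @

Here "smoke" names an executable at debian/tests/smoke that exercises the installed package, and "Depends: @" pulls in the binary packages built from this source; ci.debian.net then runs it on every relevant upload.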
Next is DKG, talking about debian/upstream/signing-key.asc — sorry, signing-hyphen-key dot asc — documenting your upstream signatures. He can do some dancing before he starts... no, he's ready. Excellent.

I was trying to make it full screen, but it doesn't seem to go full screen, I don't know why — F5 and F11 don't do anything — sorry, whoever's laptop this is. Anyway, I'm going to begin, because my time has started. You can't read this because the browser doesn't work properly here, but: if you're a Debian package maintainer, you want to verify the upstream signatures on your package. Who here has an upstream that signs their tarballs? And who here verifies the signatures every single time they fetch a new tarball? Those numbers are not identical — let's fix that. So: upstream releases a tarball, and they also release a signature file, right? What is that signature file? It is an OpenPGP signature. This is a very common pattern we see in most of our upstreams. So why do you want to check these signatures? Because you want to make sure that you only distribute what upstream intends to distribute — I mean, we may patch things, but we want our patches to be on top of what upstream published. And you want to detect any sort of network attack against you. Why would someone mount a network attack against you, modifying the file that you're fetching? Because they're actually attacking all Debian users through you, the packager. We owe our users more than that, so let's protect them. We're also closing a gap here: if we get archive support for this — if the fonts were right, you'd see this slide says "wider distribution" — then we could actually start distributing upstream signatures with our packages, so that upstream signatures are more widely known around the network and more available to everyone.

So how do you do this? These are things that go in your debian/ directory if you're a packager. debian/watch is how you check for new versions of upstream, and it has a mechanism where you can say "opts=pgpsigurlmangle=...", which says: look for the same file name as the tarball, but ending in .asc. So just put that opts=pgpsigurlmangle in there — this is in the uscan man page if you're interested — and uscan will look for an upstream signature. And then you make this file, debian/upstream/signing-key.asc, which is an ASCII-armored version of the upstream signing key. Anyone know whose signing key this one is? I'm kidding. Okay, so this all works now — right now — and you can put this in your packages. The archive side is below the red line here, and uscan is above it: uscan already handles this. uscan needs a way to find the signature — that's the watch file — and uscan needs a way to know the upstream signing key — that's debian/upstream/signing-key.asc. dpkg-dev already accepts .asc files in the .dsc. And I was informed by Ansgar — though I have not had a chance to test it — that I actually owe him one more check mark here, which is that dak now also accepts the .asc files, so we can distribute them alongside the original tar.gz. And hopefully soon, once that's in place and we all know about it, dpkg-dev will find that .asc lying around and automatically put it in your .dsc, so we can distribute them.
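Concretely — with a hypothetical upstream URL, and the mangle rule as documented in the uscan man page — the watch file entry looks something like:

    version=3
    opts=pgpsigurlmangle=s/$/.asc/ \
      http://example.org/releases/foo-([\d.]+)\.tar\.gz

The key itself can be captured with gpg --armor --export <upstream-fingerprint> > debian/upstream/signing-key.asc.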
So there are a few concerns here. What if upstream doesn't sign their software — what do you do? Okay, let's not kick our upstreams; let's go to them in a friendly way and say: it looks like you're not signing your software; let me help you figure out how to do it. We're actually in a pretty good position to do that. We can say: look, Debian is doing this; we're trying to distribute signatures for all of our code, including upstream signatures, and we would like to distribute your signature too — if you don't have an OpenPGP key yet, I'll help you figure out how to make one. Help them; help them change their minds. If their signatures are not ASCII-armored — if they're in the binary .sig form instead of the .asc form — we might have to do a little bit of translation; that's a bit of tooling work we can do, converting from binary to ASCII-armored signatures. And if their signatures are not in OpenPGP form at all, that's a totally separate question; that's additional work, and we could distribute those as well if we want to. My point is: you may not have known about this, but please be aware of it, and check your upstream signatures — the tooling is designed to check them for you. So that's it; I've got one minute left, so I can take one very, very short question with a very short answer. Shout it out and I'll repeat it. — The kernel.org signing style? Okay, that's not currently handled yet; talk to me, let's figure out how to handle it. Signed upstream git tags would be awesome; I don't know how to handle those, because of the relationship between the git tag and the tarball, but talk to me — I'm happy to help you figure out how to make it work if you've got something like that. Thanks.

Next is Aaron Ucko, talking about spotting and fixing build failures — and here he is.

Hello everyone, my name is Aaron and I'm a serial bug filer. I have filed a fair number of failure-to-build-from-source bugs over the last few years, including some just this week, despite not being an official porter or maintainer — just a regular DD who cares about portability and rebuildability. As such, I decided to take the opportunity to explain what I've been up to. All I really needed was a foreign chroot and a script I threw together, which I've now posted on people.debian.org. I specifically focused on new binary packages, because regressions of existing packages already block migration to testing, so they'll presumably get developers' attention sooner or later. I found a few common sources of trouble, which I've outlined here; I'll hold off for now and elaborate on most of these later, but I'd like to point out one subtlety that has sometimes been an issue: namely, that i386 systems use a processor type of "i386" in multiarch paths but "i586" in toolchain tuples, so the GNU type and multiarch variables are not in fact synonymous everywhere, and people should not assume that they are.
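The distinction is easy to see with dpkg-architecture; on a system of that era, the output would be along these lines:

    $ dpkg-architecture -ai386 -qDEB_HOST_MULTIARCH
    i386-linux-gnu
    $ dpkg-architecture -ai386 -qDEB_HOST_GNU_TYPE
    i586-linux-gnu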
At any rate, Debian has some nice web applications that are useful for checking individual packages, or all of your packages, or for subscribing to build failure notifications, and I encourage everyone to keep tabs on their packages one way or another. Meanwhile, my broader approach has been to use a custom script that piggybacks on aptitude's tracking of new packages and identifies packages that are new on amd64 but unavailable altogether on i386, or vice versa. It yields a link to buildd.debian.org, which has some very helpful features these days. Once I open the link, I often look at the list of failing architectures and report bugs as appropriate; in some cases the jury is still out, so I check back later. When I do file bugs, I generally include my take on what appears to have gone wrong and how the maintainer might address it.

I guess I've still got time, so I will go back to some of the common failure modes. Build-dependencies can be undeclared altogether, or misclassified as relevant only for architecture-independent packages. There can be other cases where debian/rules assumes that the -doc package is going to be built when it isn't necessarily. There can be architecture variation in generated symbols files, especially for C++ libraries; the KDE team has a tool for tracking this, and in general they are pretty good about using it. Somebody's upstream forgets that other operating systems, such as kFreeBSD or the Hurd, exist; likewise, they may assume that everything is a 64-bit system, even though there are still plenty of 32-bit systems out there. Again, because I'm targeting new packages specifically: sometimes the build environment has changed from when the package was first uploaded, so it no longer builds in current unstable. And the autobuilders have some quirks — they have no useful home directory, they don't have any networking to rely on, and so forth — and you need to account for those possibilities. Okay, that's all.

Well, we may have today's land speed record. Next up is Didier Raboud, talking about going from Debian printing to printing Debian. Here's Mister Didier.

So, from Debian printing to printing Debian. Why me? You might know that I'm the maintainer of the Debian printing stack — CUPS and other things — and sometimes I do have weird ideas. So, I have a dream. What we do is great; we have to acknowledge that. Debian is the largest coordinated free software collection ever built in history, and it will stay that way for quite a while. We have 175 gigabytes of source — a number of lines of source code so long I can't read it out in English — for jessie. That's just insane. But source code is immaterial: it just lives on our computers, and you can't actually touch it. So what about crystallizing this heritage into the physical world? Let's print the whole Debian source code. Easy, right? Think of this as an art project: digital heritage for humanity. Put it in a museum, with everything that was ever published as free software in a coordinated way. This could be useful in future ages, when all the computers have disappeared but someone finds this collection of paper with things printed on it, and might even understand what it is and how to put it back to use.

Of course, there are challenges. Typesetting 175 gigabytes of source isn't exactly easy — you can't really do it by hand, right? I mean, unless someone in the room volunteers... no, right. We have many non-text-plain files: images, sound, etc. And it's not exactly feasible to generate one single PDF out of all that; I don't know if you've ever tried to generate a PDF hundreds of pages long — it's not exactly easy, and it's probably memory-bound, so we need to find ways to do that. But we have precedent: Wikipedia has been typeset. The English version is 15 gigabytes of text — we have three times that — five million pages, seven thousand volumes. They actually printed, I think, 25 or 30 volumes, and the rest are available as PDFs; they uploaded all the PDFs to Lulu.com, where you can actually buy the whole seven thousand volumes for half a million, if you want. Well, among the challenges: what is the amount of paper you actually need? We have to decide about the format — would that be a set of books, or just A4 pages? We have to decide whether to use recycled paper — recycled paper wears out, so you might want to buy quality paper and print using Chinese ink, something like that. And then there's the ecology question: of course it could eat a small forest, but we don't need to print this multiple times; we just need one instance. And then you have to actually print it; current prices are around two cents per A4 page.
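A back-of-envelope estimate from the figures quoted in the talk — the ~700-pages-per-volume ratio is inferred from the Wikipedia numbers; everything else is as stated:

    # Rough scale estimate; all inputs are from the talk.
    SOURCE_BYTES = 175e9            # jessie source
    BYTES_PER_PAGE = 15e9 / 5e6     # ~3 kB/page, per the Wikipedia precedent

    pages = SOURCE_BYTES / BYTES_PER_PAGE   # ~58 million pages
    volumes = pages / 700                   # ~83,000 volumes
    cost = pages * 0.02                     # ~1.2 million at 2 cents/page
    print(f"{pages:.2e} pages, {volumes:,.0f} volumes, cost ~{cost:,.0f}")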
So then you have to decide how you actually want to bind the books, and into how many volumes. Does anyone have ideas for printing sponsors? I might need to talk to HP, right? Another idea we had yesterday, very late, was that we could crowd-print it: you could pick one volume and decide, yes, I will print that at home and send it to the museum exhibition, for example — we could try that. And then there is the question of where we put this one copy. A museum? An art exhibition? Ideally not a temporary one: if this is supposed to be a heritage thing, it has to be kept there forever — and no, my place is not a good place for this, right? Finances: we also need to finance this somehow, but I've heard Debian has some money. Perspectives and ideas: one idea we also had is "bring me anywhere in the Debian source code" — like, you go into your library, you take one book, you open it, and you land in, I don't know, the Ghostscript source code. We could start with a web version: bring me anywhere in Debian, and then you can infinitely scroll up and down. This is not too hard to do with web technologies, especially now that we have sources.debian.net. And the exhibition could be about the printing process: instead of actually having all the books, we could have a printer that just continuously prints the Debian source code, probably over decades. I need help, so I created an Alioth project page — join the Alioth project and we can discuss this. Last year everyone said I was crazy, but this year — maybe it's the price of the beer, I don't know — people are actually starting to consider this. So join me, and we'll have fun. Thank you.

Thank you very much, all of you. Just as a reminder, we have another session of lightning talks on Saturday, same time, three o'clock. We also have a session of live demos tomorrow at two. There are still a few slots available for the Saturday lightning talks session, so send any proposals — for talks, not marriage — to the lightning talks address at debconf.org. Thank you very much, and good night.