Obey Arthur Liu is going to give a talk about Google Summer of Code. He was himself a Google Summer of Code student a few years ago; that's actually more or less how he got involved in Debian, as I remember. I'll leave the talk to Obey Arthur. If you have a question, it's best if you can wait until the end of the talk. Thank you.

Hello, everyone. I can hear myself. So this is the same talk as last year, except it's totally different, because we have different students. It's been a few years since the program started. I don't know if you remember: it all started in 2005, when it was a fairly small affair, with a few people and a few organizations and everything done by email and so on. Then in 2006 and 2007 it got larger and larger. Debian started participating in 2006, and we've participated in every edition since. So this is the sixth time the program has existed, and the fifth time Debian has participated, and it has worked very well for us so far.

Just to remind you what the program is for: the idea is that it's around open source. It's about code, of course, because of the name — it's about getting open source code created. But more importantly, not by just anyone: it's mostly about young developers who haven't had much contact with the free software movement. The idea is for all these projects to identify these people and bring them into the movement, so that they can be the next generation of developers. It's also a way for the students to do something during the summer that's at least remotely related to what they've been studying — as opposed to, say, working at McDonald's. And it's a way for people outside of the open source movement to work on real software projects on a large scale, with real people working in large distributed teams, which is not something they often get to do at school. In some ways, developing in the free software movement is quite special, with the mailing list etiquette that is even more important there,
the software licensing questions that we actually do care about, and the fact that most developers have often never seen each other in person.

So in the program, there are three roles: Google, who finances it; the organizations, such as Debian; and the students. Google selects the organizations — this year it's about 150 different ones, anything related to free software, from operating systems like Debian to desktops like GNOME or software like Blender; it's a very long list. What Google does is assign a certain number of slots to each organization, and each organization can then rank the students who applied to it for particular projects. Google says "you have 10 slots", and Debian chooses which 10 students we want to have. That's the way it works. Then Google gives a stipend of $500 to the organization, which the organization is free to use as it sees fit, and $5,000 to each student as a stipend for the summer. There's also a variable amount of money that Google gives to organizations that organize events, such as DebConf, so that they can sponsor students to come. This year we've been able to get six Summer of Code students to come thanks to special sponsoring from Google, outside of the usual money, and we are very thankful for this. We've been able to get many more students here than last year, and we hope to get more next year.

So what's new this year? It's the same admin as last year, which is me. We still have about 30 project ideas coming from developers, and we would always like to have more project proposals, because in some way each student goes through the list of organizations that participate in the program and clicks on the thing that says "ideas". This list of ideas is really what students look at, and we have a list of about 30 projects; compared to some other organizations, that's not that many. So I hope we'll get some more next year.
We got about 80 student applications, which is slightly more than last year, and is a very good score for an obscure organization like Debian compared to more popular distributions, or very popular projects such as Firefox or OpenOffice that are known outside of the Linux world. So Debian did quite well. We got 11 slots, the same as last year; even though there were more organizations participating this year, so each organization usually got fewer slots than last year, we did quite well in that we got the same number as last year. After the community bonding period, the first month, we kept nine students: one retracted and another left, for various reasons. A week ago, right at the middle of the summer, we had the midterm review, and we kept eight of these nine students. One was eliminated for insufficient results, but I'm quite happy that all the others passed, which is the usual ratio.

So, is anyone here not a Debian developer? Are any of you students? Okay, for the few of you: you might have come because you want to participate in the Summer of Code. A few very important tips. Get in touch with the organization as early as possible. The official application period is sometime in March or April, but if you get in touch with the organization before that — and I hope you'll choose Debian — we will know you, you will know us, you'll know how everything works, and it will be much easier for us to know what you're worth, which greatly increases your chances of being selected as a Debian student. So I highly encourage you to talk to me, or to any of the students who are present today, about what you'd like to do at Debian, or as part of the Summer of Code in general.

As for developers: each year I see many people in this room, but I don't see that many on the mailing list or on IRC, and I really wonder why.
It's really easy to help the program, because I'm sure each one of you is doing something in Debian that you'd certainly like to have someone else help you with. Maybe you have some ideas that you just don't have the time to realize, and you would like to have someone have fun for a summer doing them, with your help. The Summer of Code is really a great program for getting some maybe slightly crazy stuff done during the summer. So I highly encourage you to get in touch with me about your idea, to propose it when the idea proposal period comes, and even better, to be a mentor for it. It's really a great experience, and anyone who's been a mentor in the program at Debian can tell you how great it is. But it's not really about me, so I'm done talking. Right now we're going to get every student to come up and present what they've been doing during the summer so far.

Hello, my name is Piotr Galiszewski. I'm from Poland, and I'm working this summer on an aptitude project. My mentors are Sune Vuorela and Daniel Burrows. The main goal of my project is to create a Qt-based frontend for aptitude. This frontend should have, at the end of the summer or probably later, all the features of the GTK frontend and the ncurses frontend. Probably some of you will ask why I am working on a project involving yet another package manager, when there are plenty of package managers currently available. I think this table shows why: for GTK-based desktop environments, there are two good package managers with advanced features, Synaptic and aptitude-gtk. But for KDE, there is only Adept, which is currently an abandoned project, and because of that there is no good working package manager for KDE. So what will there be after the summer, when my project is completed? There will be aptitude-qt available, and also Kubuntu is creating the Muon project.
So the situation will be quite a bit better. And what works now, after two months of my work? It's possible to browse packages and search them. By default, all packages are shown in the list, and then it's possible to search them in two ways. One is fast search, where you type the text you want to search for and choose the field to search in. The second is support for aptitude search patterns. It's also possible to show more advanced information about packages, like the list of available versions, installed files, or changelogs. It's also possible to perform some package management jobs, like updating the package cache, or the clean and autoclean jobs. Other package actions are also implemented, so you can choose whether a package should be installed or removed, or maybe kept back, or have its version changed, and it's also possible to apply these changes. So basic package work is possible, but there is still a lot of work required. I'm currently working on the other missing features, and I'm also still updating and rebasing patches according to Daniel's comments; all comments are made on the aptitude mailing list.

Here is a small screenshot which shows the current state of the project. There is a list of filters — currently it's empty; they will be added later — and a list of packages, with one package expanded. For it there is some more advanced information, like the description, versions, and homepage, and there are hotlinks to more information which open in new tabs. Because I'm trying to avoid using many windows in this project, there is only one window at a time — probably there will be one or two more, for example for configuration — but it will also be possible to have multiple main views in the main window at the same time. At the bottom of the window, a global progress bar of the currently running jobs is shown.
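The two search modes just described — a fast substring search over a chosen field, and full aptitude search patterns — can be sketched roughly like this. This is an illustrative Python sketch, not aptitude's actual code (aptitude is C++ against the apt cache); the `Package` type and the sample data are invented, and only the `~i` and `~n` pattern terms from aptitude's real syntax are mimicked:

```python
# Illustrative sketch of the two search modes described in the talk.
# The Package type and the data are invented for the example.
import re
from dataclasses import dataclass

@dataclass
class Package:
    name: str
    description: str
    installed: bool

PACKAGES = [
    Package("aptitude", "terminal-based package manager", True),
    Package("synaptic", "graphical package manager", False),
    Package("bash", "GNU Bourne Again SHell", True),
]

def fast_search(packages, text, field="name"):
    """Fast search: substring match on one chosen field."""
    return [p for p in packages if text in getattr(p, field)]

def pattern_search(packages, pattern):
    """Tiny subset of aptitude search patterns:
    ~i matches installed packages, ~n<regex> matches the name."""
    if pattern == "~i":
        return [p for p in packages if p.installed]
    if pattern.startswith("~n"):
        rx = re.compile(pattern[2:])
        return [p for p in packages if rx.search(p.name)]
    raise ValueError("unsupported pattern in this sketch")

print([p.name for p in fast_search(PACKAGES, "package", field="description")])
# ['aptitude', 'synaptic']
print([p.name for p in pattern_search(PACKAGES, "~i")])
# ['aptitude', 'bash']
```

The real pattern language is much richer (boolean combinators, dependency terms, and so on); this only shows why the two modes are exposed separately in the UI.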
After clicking on the "show details" button, a new tab opens which shows the progress of each currently running job. So what still needs to be done on this project? There is showing the summary of changes, and integrating a terminal into the "perform changes" tab, because the terminal will be used to show the progress of the running dpkg. There are also some missing tabs which have to be added, and polishing and fixing the remaining bugs. There is a lot of work to do, so this project will probably be continued after the Summer of Code, but I hope that by the end of the summer most of these features will be implemented in some way, and that later, towards the end of the year, all of this will be merged into the aptitude master branch.

So how can you help me? I will be very glad to hear any feedback on the design decisions I have made. I posted mock-ups on my blog, which is aggregated on Planet Debian, about two months ago; I received some comments and I very much appreciated them. You can also test the frontend. Some information about installing it is at this link; there is a new configure option to enable Qt which is required. The project's repositories are also linked there, and snapshots of the current state are on my page. That's because there is no single branch which you could pull once and later update by pulling again: every feature is added in a new branch, and every branch is often rebased and rewritten according to the comments on the aptitude mailing list. So it's better to test this project using the snapshots. Here is also my contact information. If you have any questions, I will answer them here at DebConf, or you can write me an email or ask on the aptitude mailing list. I will also be on the DebConf IRC channel in a couple of minutes. Thank you very much.

Hi, my name is Krzysztof Tyszecki.
I am working on the content-aware config file upgrades project. My mentor is Dominique Dumont. The goal of this project is to improve package upgrades with semantic configuration merging. This may sound a little bit cryptic, but a simple example that I will show you later will give you an idea of what it's all about.

So, what already works? When I was applying for the Summer of Code, I was thinking of creating my own library and creating all the tools by myself. But it turned out there's already a great tool for parsing and manipulating configuration files, and it's called Config::Model. It's a Perl library created by Dominique Dumont. If you want to use that library, each configuration file needs to have a model. A model is a simple Perl data structure which describes all the elements of a configuration file and what can be put into those elements. As you may imagine, complete models are great, but they tend to be long and complicated, because the developer needs to describe each config file element separately, and this may take a long time in the case of web servers, for example. And some config files still cannot be parsed, because some read/write backends in Config::Model are missing key features.

After a few weeks of my work, creating models is now easier, because in the case of upgrading files we do not need to have each element of the config file described — we just want to load it and do some semantic comparison. So instead of describing each element separately, Dominique suggested that I make it possible to give a regular expression that describes a whole set of elements and how they should behave, and that is how it works now. I've also improved the existing backend for INI files, and I'm working on migrating configuration files now. I can't work here, so I'll continue my work after I come back to Poland.

So, what have we done so far? As I said before,
models can now be declared in a simplified way; such models do not describe the structure of the file exactly, but they are suitable for upgrades. I also improved the INI file backend. And as an added value, Config::Model can now also handle more complex upgrades, by describing the parameters that need to be changed for an upgrade. I will show you an example of such an action in a demonstration now.

Let's suppose you have a very, very simple configuration file that consists of only one parameter — some kind of command that is executed by some program. In the first version of the config file it is bash that is executed, and the user didn't like bash, so he changed the default shell to zsh. Then the author of the package decided to extend his application and modify the default config file: the role of the old command parameter is now played by a new parameter, but command is still there, working in a different way and serving a different function. The default behavior to preserve user configuration in Debian is to use the UCF tool, and UCF in this case would show an error and wouldn't do a three-way merge. If you compared and merged this file as text, you couldn't preserve both the user settings and the new default settings, because textually that's impossible. But it is possible using Config::Model and a semantic merge. Using the features of this library, we can define that the contents of the old command parameter should be transferred to the new parameter, and that the new command setting should be added to the config file. Using the results of my work, the merged config file will look like this: the user's setting is preserved, and the new setting is there as well. That's how it works.

As for what's still to be done: it's not very well tested yet and some things have to be improved, and there's also been no testing in actual packages yet — we need to test it more.
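The bash/zsh demonstration boils down to a rename rule plus a default fill-in: a purely textual merge cannot keep the user's zsh and pick up the new default at the same time, but a semantic rule can. Here is a minimal Python illustration of that idea — Config::Model itself is a Perl library working on declared models, so the helpers and the parameter names below are only my reading of the example, not the real API:

```python
# A minimal sketch of the semantic merge from the demonstration.
# The parse/semantic_merge helpers and the parameter names are
# invented for illustration; Config::Model does this via models.
def parse(text):
    """Parse a trivial key=value config file into a dict."""
    return dict(line.split("=", 1) for line in text.splitlines() if line.strip())

def semantic_merge(user_cfg, new_default_cfg, renames):
    """Apply rename rules (old key -> new key) to the user's settings,
    then fill in new defaults for keys the user never touched."""
    user = dict(user_cfg)
    merged = {}
    for old_key, new_key in renames.items():
        if old_key in user:
            merged[new_key] = user.pop(old_key)   # preserve the user's value
    merged.update(user)
    for key, value in new_default_cfg.items():
        merged.setdefault(key, value)             # add the new defaults
    return merged

# The user changed 'command' from bash to zsh; the new package version
# moves that role to 'actual_command' and reuses 'command' for new purposes.
user = parse("command=zsh")
new_defaults = parse("actual_command=bash\ncommand=do-something-new")
merged = semantic_merge(user, new_defaults, {"command": "actual_command"})
print(merged)
# -> {'actual_command': 'zsh', 'command': 'do-something-new'}
```

Both the user's choice and the new default survive, which is exactly what the text-level three-way merge cannot achieve here.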
We need to write some additional backends, most notably a backend for XML configuration files, since this format is pretty common now, and we need to write some documentation. And how can you help? Of course, test this software by performing upgrades of your configuration files — you can check whether it works in real life. Integrate Config::Model into the postinst script of a package you maintain; that would be very helpful. And hunt for bugs, of course. All the code is in the Mercurial repository listed here. So that's all, thank you.

Hello, I'm David Wendt, and this is my project. I'm working on extending the SOAP API for Debbugs, which is the software that powers our BTS, so that you can modify bug reports and file new ones in the system without using email. This is a pretty cheap diagram I rigged up of how the whole system works, basically. As you know, we have the SOAP API, and I'm going to add new functions so that you can report bugs, follow up, close, forward, et cetera, from pretty much any language — heck, we might even have some sort of nice user interface for that. What does that mean for end users? Well, for programmers, it means that you can now interface with Debbugs quite easily: it's much simpler and less error-prone than firing off an email. It also makes things easier for users, because now we could write something like a bug reporting program — something like reportbug, but that does not require using email — so they don't have to send the email themselves, which is good for new users. And there are all sorts of example usages you could put the API to. You could write an automated crash reporter that says "hey, the program crashed, would you like to send a bug report?" and makes it really easy. You could even write an automated package tester which says "your test case failed, do you want to send a bug report?", et cetera.
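The existing Debbugs SOAP interface is read-only (methods like `get_status`); the write operations are precisely what this project adds. So, as a sketch of the kind of request such a client would send, here is a SOAP envelope built with Python's standard library — the `report_bug` operation name and its parameters are hypothetical stand-ins for the new methods, not a documented Debbugs call:

```python
# Sketch of a SOAP request a non-email client might send to Debbugs.
# The 'report_bug' operation and its parameters are hypothetical --
# they stand in for the write methods being added to the API.
import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

def make_soap_request(operation, params):
    """Build a minimal SOAP 1.1 envelope for one operation call."""
    ET.register_namespace("soapenv", SOAP_ENV)
    envelope = ET.Element(f"{{{SOAP_ENV}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_ENV}}}Body")
    op = ET.SubElement(body, operation)
    for name, value in params.items():
        ET.SubElement(op, name).text = str(value)
    return ET.tostring(envelope, encoding="unicode")

request = make_soap_request("report_bug", {
    "package": "hello",
    "severity": "normal",
    "subject": "hello: crashes on startup",
    "body": "Steps to reproduce: ...",
})
print(request)
```

The point is the shape of the thing: a crash reporter or package tester would just fill in the parameters and POST this to the SOAP endpoint, instead of composing and sending a correctly formatted email.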
And this is how to reach me, if you want to send an email or bug me on AIM — yes, that's my actual face. A shout-out to these people, without whom I probably would not be working on this: Bastian Venthur, my mentor; Don Armstrong, the Debbugs maintainer; Obey Arthur Liu; and Google. Sorry if that was a little short.

Hi. I'm working on porting the Debian Installer to the Neo FreeRunner. The Neo FreeRunner is a phone with only free software, on almost-only open hardware. You have, for instance, a GSM chip that is completely closed, but the driver for it is free software. There is a Glamo chip too; its documentation is not available, but the driver itself is free software. Unfortunately, the phone is not produced anymore. The current situation of the Neo FreeRunner in Debian is this: we have pkg-fso, where work on the phone applications and even the kernel — everything they make — stays in that repository before going to main. There is an install.sh script that is meant to be run on an already installed system: it debootstraps Debian and does a few things so that it can run on the FreeRunner. We have a kernel in the pkg-fso repository, but this kernel doesn't follow the kernel packaging like every other package in main does. So one of my goals is to provide a new kernel flavor. I've made a new config for this flavor; it supports only the FreeRunner for now, but it could support other similar Samsung-based devices. However, we're missing many drivers, like the Glamo driver: it's not upstream, so I had to clean it up and fix some things, and then I submitted it upstream, but I haven't got any review yet. I've done some Debian Installer work too, like adding a new submenu — that's three lines of code here and here, but you have to know where. And I'm working on u-boot-installer. That's a tool that makes it possible to set up the bootloader configuration: the bootloader stays in the FreeRunner and we don't modify it, but its default configuration doesn't let us boot Debian.
So we have to modify that configuration a bit, and that's what u-boot-installer is for. I've also made an interaction script that runs on the host. It's made to be easy to use and to avoid some U-Boot bugs, like having to paste lines not too fast, or else U-Boot stops reading from the TTY and you have to reboot the system. It should be easily extensible to other similar devices, but so far I only have a FreeRunner. So, I've made images with Debian Installer. It runs on the FreeRunner; you can install, reboot, and it works — but not all the changes I have made have been integrated yet, and as I said before, the patches I sent to the Linux kernel haven't been reviewed yet. Other things to do: there is something in the installer to exclude some paths from the installation, which can save space when installing the whole system — for example, you can skip /usr/share/doc. There is flash support for partman, which doesn't make much sense on the FreeRunner since it has only a really small flash memory. And I want to support other devices, but I've not finished with the FreeRunner yet. So, what you can do is try my work, if you have a FreeRunner.

I'm Jérémie Koenig, and I'm working together with Samuel Thibault on the Debian Installer for the Hurd. You probably know the Hurd is an alternative kernel for Debian, like kFreeBSD, for instance. It's based on a microkernel, and most of the system is implemented in user space. For instance, the whole file system code is in user space, as a collection of daemons which interact with each other to provide a file system. Samuel had already done some work on generating the boot images, and I've been porting more packages. Since d-i is a collection of packages, porting d-i is mainly porting those packages to the Hurd, and I've been fixing bugs and adding functionality to the Hurd in order for the installer to function. There are a lot of small parts involved, so I've just picked a few examples of the work I have been doing on the Hurd to make d-i work.
One cool thing on the Hurd is user-space partition stores. The device drivers are in Mach, the microkernel, and currently there are specific partition devices in Mach: Mach will interpret the partition table and show user space a series of partition devices. But the newer, parted-based stores run in user space: they access the whole device through Mach and provide the partitions themselves. On the Hurd there's not really such a thing as a device node. Since the file system is in user space, you have a special daemon which is attached to a given file and which interacts with the kernel for the input/output, in the case of block devices. Currently these daemons — they're called translators — are attached to the block device files in /dev and interact both with the file system and the kernel, binding the two. The new structure would be to have a whole-device translator, which accesses the device through the kernel, and separate translators for each partition, which access the first translator and give access to a given partition. This would be more flexible: you could, for instance, have a disk image file, and any user could create pseudo device nodes for the partitions inside the disk image; or you could have, say, a disk image within a partition, and access the partitions inside that image. On Linux you would have to play with loopback devices and give offsets that you have to compute manually; here you could just put partition translators on top of your disk image file, or whatever device you want. The work I have been doing on this is fixing the default pager, which handles swapping, to work with these stores, and I've also fixed GRUB: on a user-space partition store, GRUB would see that the underlying device is a whole disk, so it would fail to detect the partition on which the files are stored. What could still be done is a translator which you can put on top of a whole device and which provides a directory with all the partitions inside.
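The essence of a user-space partition store — reading the partition table yourself instead of asking the kernel for pre-cut partition devices — can be illustrated by parsing an MBR by hand. This is a self-contained Python sketch over a synthetic disk image; the Hurd's parted-based stores do the real work in C via libparted, so only the on-disk MBR layout here is factual:

```python
# Parse the four primary MBR partition entries from a raw disk image,
# the way a user-space partition store reads the table itself.
import struct

SECTOR = 512

def read_mbr_partitions(image):
    """Return (type, byte_offset, byte_length) for each used entry.
    Each 16-byte entry at offset 446+16*i holds the partition type at
    byte 4, and the LBA start and sector count as little-endian
    32-bit values at bytes 8 and 12."""
    assert image[510:512] == b"\x55\xaa", "missing MBR signature"
    parts = []
    for i in range(4):
        entry = image[446 + 16 * i : 446 + 16 * (i + 1)]
        ptype = entry[4]
        lba_start, num_sectors = struct.unpack("<II", entry[8:16])
        if ptype != 0:  # type 0 means the slot is unused
            parts.append((ptype, lba_start * SECTOR, num_sectors * SECTOR))
    return parts

# Build a synthetic one-partition image: a Linux (0x83) partition
# starting at sector 2048, 8192 sectors long.
image = bytearray(SECTOR)
entry = bytes([0, 0, 0, 0, 0x83, 0, 0, 0]) + struct.pack("<II", 2048, 8192)
image[446:462] = entry
image[510:512] = b"\x55\xaa"

print(read_mbr_partitions(bytes(image)))
# [(131, 1048576, 4194304)]
```

With the offsets in hand, a partition translator just forwards reads and writes to the whole-device store shifted by `byte_offset` — which is exactly the loopback-with-manual-offsets dance that Linux makes you do by hand, done once, in user space, stackably.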
That way you don't have to create individual partition nodes for each partition on the device.

Another part of the Hurd that I've been working on is the Hurd console, which runs in user space too. It's based on VGA text mode, but it has a very cool feature: you can have a font larger than the 512 glyphs which VGA text mode supports, and the glyphs are dynamically allocated. At any given time you can only show 512 of them, but your font can be much larger and include the glyphs for many languages. So even though the installer on the Hurd uses VGA text mode, it's pixel-for-pixel identical to the Linux installer under bterm: depending on which language you choose, the right glyphs will be loaded into memory, and it's okay. I've worked on that, and I've added double-width glyph support for Chinese and Japanese — and Korean, I think. One drawback of the console is that we still can't load keymaps, so for internationalization it's not that good. Another cool feature of the console is that it's split into a server process, which stores the character matrix and handles the terminal I/O, and a front end which connects to the server — one front end runs the VGA driver, for instance, but there is also a curses front end, so you could log into your installation over SSH and take over the console from there. Okay, so the installer works. You can grab it there, and I'd be glad if you could have a look and tell me how it goes. So, basically that's it; I think I'm going to pass the mic to the next student.

And here — okay, which one? Okay, so the next student isn't here, so I'm going to do it really fast for him. His project is to implement multiarch correctly in APT. About multiarch: you might have heard the latest fun story about Flash on Linux, which is that they decided that if you're running 64 bits, there's no Flash for you, because they decided there were security issues and they didn't feel like fixing them.
The problem with Flash currently is that the way to get it to work is either to first remove the old version and install something that doesn't quite work, or, the other way around, to download something that downloads the 32-bit version, unpacks it somewhere strange, repacks it, and installs it in a strange way. The problem with this is that it's not very secure, because you don't have any support from any maintainers, so security issues are just going to sit there without any supervision. One solution is to create ia32-* packages which reproduce the 32-bit packages inside the amd64 repository, which is quite ridiculous, because you basically end up with a copy of the 32-bit archive inside the 64-bit one. The real solution would be to be able to reuse the i386 packages themselves on an amd64 installation, and that is multiarch: being able to install a package from another architecture, provided that yours is able to run it. Even if you don't really care about Flash, it will still help you, because there's lots of software that just can't be compiled for 64-bit, or can't run natively in 64-bit, so you still have to use 32-bit software; and there are all kinds of funny things with cross-compilation that you don't really want to do on your desktop. And if we have proper multiarch, then we can remove all these ia32 packages and so on. So: multiarch is almost somewhat ready. It's waiting in experimental and you can play with it, and it's up to you to help by using it. It's not quite ready for the end user, but it's already mostly supported in aptitude and many of the APT libraries. You can already start having fun with it by getting APT from experimental and testing it, reading the multiarch spec at this URL, and reading the blog of the student, who mentions a few things you can look at. Next student.

Okay, so, how are you all?
I'm Petr from the Czech Republic, and I'm working on a smart upload server. My mentor is Joerg Jaspert — have you guys seen him? Because I have not. So, maybe next time. The current state: if you upload a package to Debian — have you ever done this, some of you? I guess so — you use an FTP-based solution, where a cron-driven queue daemon will then do the work for you. This solution is good because it works, but nothing on the upload side knows what is coming in until the last file is uploaded, because that last file is the changes file, which describes the upload itself. So what can happen? You upload all the files in the package to the server, and only then does the queue daemon realize that there is actually a newer version of the package, so it rejects all the files you uploaded. My solution is based on HTTP and Python, and it starts with the changes file: it runs the tests early and reports early, checking every file and rejecting it instantly. As you upload, it tells you whether things are okay or not. So the next time you try to upload an old package, you send the changes file first, the server responds that it's old, and you don't have to upload all the other files. What is done right now: the protocol for uploading is settled and works, and the checks are performed — not just the version check; there are checksums, lintian, and some other stuff. What's coming is dak integration, so you can really use it for uploads, and a client will be provided — but the client side is really open: it just uses HTTP, so you can write your own client; someone wrote one in, I think, 15 seconds after seeing this. Here you can see the repo; you can comment on it and hack it — there is a wiki and stuff. So that's it, thanks for your attention.
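The early-rejection idea Petr describes — send the .changes metadata first, let the server compare versions and refuse before any large files move — can be sketched in a few lines. This is a pure-Python illustration, not the project's actual protocol: the method names are invented, the real server also runs checksum and lintian checks, and it would use proper Debian version ordering rather than the crude numeric comparison here:

```python
# Sketch of the "changes file first" early-rejection protocol.
# The class and method names are invented for illustration.
def version_key(v):
    """Crude version ordering for the sketch only -- a real server
    would use Debian version comparison (as in python-apt/python-debian)."""
    return tuple(int(x) for x in v.replace("-", ".").split("."))

class UploadServer:
    def __init__(self):
        self.archive = {"hello": "2.4-1"}   # package -> newest known version

    def offer_changes(self, package, version):
        """The client sends the .changes metadata before any file;
        the server accepts or rejects the whole upload up front."""
        current = self.archive.get(package)
        if current and version_key(version) <= version_key(current):
            return {"status": "rejected",
                    "reason": f"archive already has {current}"}
        return {"status": "ok", "upload": "proceed"}

server = UploadServer()
print(server.offer_changes("hello", "2.3-1"))   # rejected before any upload
print(server.offer_changes("hello", "2.5-1"))   # ok, files may now follow
```

Contrast this with the FTP queue: there, the version check can only happen after every file has already been transferred, which is exactly the wasted round trip this design removes.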