be in private, and I hand over the mic to the next speaker. Hello. OK, thank you. The next talk is from Yuri. One, one, one. OK, very good. I just want to present a short talk about what's happening with open source at Google. I'll start by presenting a number of projects which were initiated by Google, eventually open sourced, and got into Debian, and then I'll present some other open-source-related resources provided by Google. This is a somewhat random selection; these are mostly projects which I heard about when they were announced within Google and then open sourced. For example, Ganeti is a tool for managing Xen clusters. You can find more information at code.google.com/p/ganeti. If you have a lot of Xen clusters to manage, it can help with automatically starting up, shutting down, and failing over instances, and with managing their disks. There is OS installation support. It doesn't offer live instance migration at this time. It is already in Debian; you can just apt-get install ganeti. Another project is Zumastor. This is a storage system which provides file system snapshotting and replication. Reportedly it is better than LVM, because if you create multiple snapshots with LVM, then a changed block in your original file system will correspond to a changed block in each of the LVM snapshots. This system allows you to create multiple snapshots and still store only one block per changed block in your original file system, so it scales a little better. It also provides replication: you can periodically create snapshots of your file system, and only the changed blocks will be sent to a remote location. It is GPL version 2, not yet in Debian, but pre-built Debian packages are available from zumastor.org. Finally, one of the latest large open sourcings at Google is Protocol Buffers, at code.google.com/p/protobuf.
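The snapshot-sharing property claimed here for Zumastor over LVM can be pictured with a toy copy-on-write model (hypothetical sketch, not Zumastor's actual implementation): each write to the origin saves the old block once into a shared chunk store, and every live snapshot just references that one copy.

```python
class SharedSnapshotStore:
    """Toy CoW store: many snapshots share one saved copy per changed block."""

    def __init__(self, nblocks):
        self.origin = [b"0"] * nblocks  # current origin data
        self.chunks = []                # shared store of saved old blocks
        self.snaps = []                 # per snapshot: {block index: chunk index}

    def snapshot(self):
        self.snaps.append({})

    def write(self, block, data):
        old = self.origin[block]
        saved_at = None
        for snap in self.snaps:
            if block not in snap:       # this snapshot still sees the old data
                if saved_at is None:
                    self.chunks.append(old)          # one shared copy only
                    saved_at = len(self.chunks) - 1
                snap[block] = saved_at
        self.origin[block] = data

    def read_snapshot(self, snap_index, block):
        snap = self.snaps[snap_index]
        if block in snap:
            return self.chunks[snap[block]]
        return self.origin[block]
```

With three snapshots and one write, this stores one saved block; the per-snapshot scheme described for LVM would store three.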
The fancy description is that it's a flexible, efficient mechanism for serializing structured data. Basically, what that means is that you can easily obtain a binary representation of complicated data structures. There is a Protocol Buffers meta-language; you run the proto compiler on it, and it generates C++ code for setting values, getting the binary output, decoding, and so on. So if you're working on any kind of distributed system, you will most probably be interested in it, because it's very good for message passing and RPC calls, and you can even store data in this form with very low overhead. The libraries, with Java and Python bindings, are already available in Debian, and it's under the Apache License 2.0. Next, code.google.com is a general open source project hosting service which provides web space for documentation and files. It provides SVN and an SVN repository browser; I believe you can also browse SVN repositories which are not hosted on the service. It provides a wiki and an issue tracking tool for each project. So if you're looking for hosting for your next project, this is one more thing to consider. Finally, while this is not directly related to open source, it's still very cool. This is one of the services Google announced recently, called Google App Engine. It basically allows you to run your application on Google infrastructure. Guido van Rossum was one of the people behind it, a major driving force. It provides a Python API to Google-specific data store, authentication, URL access, and email access, so that once you write your application using this API, you can run it on Google infrastructure. Most Python frameworks work on it, and Django in particular is included in the SDK. It provides you with a fully integrated application environment. It is free to get started at this point, and you get a fair amount of resources: something like 500 megabytes of storage space and something like five million page views per month.
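As an aside on the serialization mechanism described above: part of what makes Protocol Buffers output so compact is variable-length integer encoding on the wire. A minimal sketch of that varint scheme (toy code, not the official library):

```python
def encode_varint(n):
    """Encode a non-negative int, 7 bits per byte, MSB = continuation flag."""
    out = bytearray()
    while True:
        bits = n & 0x7F
        n >>= 7
        if n:
            out.append(bits | 0x80)  # more bytes follow
        else:
            out.append(bits)
            return bytes(out)

def decode_varint(data):
    """Decode a varint from the front of `data`; return (value, bytes_used)."""
    value = shift = 0
    for i, byte in enumerate(data):
        value |= (byte & 0x7F) << shift
        if not byte & 0x80:
            return value, i + 1
        shift += 7
    raise ValueError("truncated varint")
```

For example, `encode_varint(300)` yields the two bytes `b"\xac\x02"`, so small numbers cost one byte instead of a fixed-width four or eight.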
However, there is a catch: at this point, you're only allowed to create three applications per person. This page, code.google.com/appengine, has some nice videos, both a tutorial on how to use it and a talk by Guido on App Engine security, I believe. And finally, while I was preparing this, I remembered that Rietveld, at code.google.com/p/rietveld, is an open source code review application by Guido which runs on App Engine and which different open source projects can use. Thank you. Thank you. The next talk is from Jacob. OK. Here you are. They are turning it on. Is it ready? Can you hear me? Does it work? OK. So I started a research project about a year ago when I realized it was possible to freeze the memory of a computer in order to extract its contents. We wrote a paper about this that we submitted to USENIX Security, and just last week we found out it won the Best Paper award, which is pretty cool. Basically, there are a couple of different methods for doing these types of attacks. All of them require physical access in most people's minds, though actually that's not the case; it's possible to execute a couple of them over the network. To show this, we released the source code, which I'm probably going to package for Debian in the next couple of days or next week or so. The basic idea is that memory retains state even after power off, which means that if you quickly reboot a machine, or if you cool the memory and power it on later, you can extract the memory contents. To a lot of hardware engineers this was sort of obvious; to a lot of software people this was pretty devastating, because a machine might hold cryptographic keys. Does anybody here encrypt their hard drive with Debian's dm-crypt, maybe? All right, so we broke dm-crypt on Debian specifically. The way we did that was basically by taking a ThinkPad. Anybody here using a ThinkPad and dm-crypt?
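The remanence idea, bits surviving power-off with only a small decay rate, can be pictured with a toy simulation (hypothetical model and parameters, not the paper's measurements): each bit independently flips toward its ground state with some probability, and what you recover is the original image plus a small error rate.

```python
import random

def decay(bits, p, ground=0, rng=None):
    """Toy DRAM decay model: each non-ground bit flips to `ground`
    with probability p; with p near zero the contents survive."""
    rng = rng or random.Random(0)
    return [ground if (b != ground and rng.random() < p) else b
            for b in bits]

def error_rate(original, decayed):
    """Fraction of bits that differ between the two images."""
    return sum(a != b for a, b in zip(original, decayed)) / len(original)
```

With `p = 0.001` this produces roughly the 0.1% error rates the talk mentions observing; a reconstruction algorithm only has to tolerate that much noise.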
All right, cool. So what we did was we took the ThinkPad, turned it off, turned it back on, and booted a small program. We have a couple of payloads. One of them is a PXE boot payload: basically, we serve it out with a DHCP server, and it spits out the memory of the computer over the network. We have another program, a key finder, which extracts the AES key. And then, basically, using that, we were able to mount the file system of the computer. So it's not very difficult. We also have the ability to reconstruct keys. The basic idea is that there might be some decay in memory, because bits decay in a predictable way. Our reconstruction algorithm works with up to 10% or 15% bit error rates, and we actually never saw anything more than, I think, 0.1% bit decay, so our algorithms are pretty much overkill for that. But it might be interesting to come up with countermeasures. I talked with Theo de Raadt of OpenBSD about this. It might be interesting to come up with things you can do to detect when someone might be cooling your system, or to have a panic button. Theo and I came up with the idea of a patch to madvise: basically, you could say, there's a whole bunch of software on this computer, and it would be really useful if, in the event of some sort of security panic, you killed these bits first. We don't care about having a kernel panic; we don't care about pieces of software crashing; those bits have to die no matter what in the event of a panic. And you might have just a couple of milliseconds if you can detect an event. This might be a case intrusion detection sensor, or it could be a temperature sensor. New DDR3 chips have temperature sensors on the memory, and you can set interrupts: you can say, I guess, if it drops below a certain temperature, that's a specific event you want to catch, and then you want to take a specific action.
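The key finder mentioned above works by scanning a memory image for data that satisfies the AES key-schedule structure, rather than searching for the key bytes themselves. A toy sketch of that scanning idea, with a made-up linear recurrence standing in for the real AES schedule (which additionally applies S-box and round-constant steps):

```python
def toy_schedule(key_words, rounds=4):
    """Expand a 4-word key with a toy recurrence: w[i] = w[i-1] ^ w[i-4].
    (Stand-in for the real AES key schedule.)"""
    w = list(key_words)
    for i in range(4, 4 + 4 * rounds):
        w.append(w[i - 1] ^ w[i - 4])
    return w

def find_key(memory_words, rounds=4):
    """Return offsets where a complete toy schedule appears in memory:
    i.e. where the window equals the expansion of its own first 4 words."""
    n = 4 + 4 * rounds
    hits = []
    for off in range(len(memory_words) - n + 1):
        window = memory_words[off:off + n]
        if window == toy_schedule(window[:4], rounds):
            hits.append(off)
    return hits
```

The point of the structural check is that random memory almost never satisfies the recurrence, so a hit pinpoints the key even in a large dump, and the redundancy in a real schedule is also what makes reconstruction of partially decayed keys possible.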
So it would be kind of interesting to talk with some VM people about making this sort of catch-and-erase situation possible. Because as it stands right now, even if you were to detect an event, it's pretty difficult to actually ensure that you killed all of the bits properly. Just removing dm-crypt should, in theory, get rid of the keys in memory, because dm-crypt does the right thing. But if you can never hit that entry point, you can never actually destroy those keys, so you won't really be able to erase them, and those keys will continue to be there. We have an iPod-based payload, too: if you have an iPod, we can put this memory dumper on it, and it'll dump the memory onto the iPod. And we also have EFI, so if anyone here is running Mac OS X on Apple hardware, or has Debian running on Apple hardware, it could probably also dump the memory, but we haven't tried that yet. Anyway, if anyone has any questions, come talk to me about it afterwards. Thanks. The next one is Frank, about source-centric views. No? Do you hear me? OK. So one problem that we have is that we have so many packages in our archive, and it makes it difficult for our users, or for ourselves, to find software and to find the right combinations of software to install. One solution that was proposed for this is, for example, debtags, which makes it easier to search for software by features, implementation details, and things like that. And I think one thing we should consider is also grouping software better. For example, if you install a binary package, you have a certain set of Suggests and Recommends, which should tell you which packages go together with this software. But I have the feeling that this is often redundant information, and what you really want is a view where you see: OK, this package comes from this source, and this source has the following binary packages.
And there's one binary package that is like the main package of the source, which we often have, and then there are the translation packages, the data packages, and things like that. So my idea was that we should give the user more information about which packages belong together in a group, without necessarily having to specify all the right Suggests and Recommends, but just give them a somewhat higher-level view of the archive. Because with time we tend to split up packages at increasing speed: for example, we have an increasing number of packages that put their architecture-independent data into a data package so that mirror space isn't wasted on it; we have packages that move their translations out into separate packages; and we have packages that refine their packaging so that all the bits that don't need any GUI interface, like X or GTK or GNOME, go into one package, and all the stuff that does need it gets its own extra packages, so that the dependencies are more granular, finer. And my feeling is that in this process we get lost in this huge number of packages, which normally don't interest anyone unless they get installed via Recommends or dependencies. So I think we could profit from exposing to the user more clearly which packages come from the same source and what options they have to combine these packages. One thing I would like to add is descriptions for source packages. Currently we only have descriptions for binary packages, but it would probably be useful to have descriptions for source packages, and maybe the possibility to specify in more detail what the task of a package inside its source group is, so that a source could specify which of its binary packages is the most important, the one the user should be presented with first, and which ones are just dependency packages and so on.
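The grouping described here is largely derivable from data the archive already has: every binary package stanza in a Packages file carries a Source field (defaulting to the package's own name). A small sketch of building that source-centric view (toy parser, sample stanzas made up for illustration):

```python
def group_by_source(packages_text):
    """Group binary package names by their source package,
    from Packages-file-style stanzas separated by blank lines."""
    groups = {}
    for stanza in packages_text.strip().split("\n\n"):
        fields = dict(line.split(": ", 1) for line in stanza.splitlines())
        name = fields["Package"]
        # Source defaults to the package name; drop any "(version)" suffix.
        source = fields.get("Source", name).split(" ")[0]
        groups.setdefault(source, []).append(name)
    return groups

# Hypothetical sample data in Packages-file style.
SAMPLE = """\
Package: openssh-client
Source: openssh

Package: openssh-server
Source: openssh

Package: bash"""
```

Running it over the sample groups openssh-client and openssh-server under one source entry; a real tool would parse the full archive index the same way and could then present the "main" binary of each group first.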
So that's just an idea I had during DebConf, and I want to work on it after DebConf to see whether it's feasible and whether it actually brings good results. If someone is interested, just talk with me. Thanks. Thanks, Frank. The next one is Asheesh, about Terminator: making the GUI terminal fun again. Is this on? Can you all hear me? Am I on? Great. OK, so I didn't write Terminator; it's written by someone else. But I've been using it a lot lately, and I kept getting questions about it, so I thought I would give a lightning talk about it and why I like it. Let me say a little bit about myself first. I've tried using tiling window managers before, and they were never really fun, at least beyond the first couple of days. I felt that structuring my windows into non-overlapping segments lost me a lot of freedom that I wanted. And also, maybe it's heretical to say, but I actually like NetworkManager. Some of you who aren't laughing are hopefully thinking: yeah, it works for me too. I hope there's at least one other person like that in the room. So having the GNOME panel there is actually a nice thing, and with most of the tiling window managers I've seen, there's one whose author, in effect, says: hey, my GNOME program doesn't work with this window manager, and the answer is: GNOME is terrible, don't use it. But I have applications that I want to use; I don't use my computer solely to run his window manager. But I do like terminals, obviously, and I like thinking about solving problems in terms of stacks, like call stacks. So with that, I'm going to quickly run a demo of how I like to use Terminator. First of all, Ctrl-Shift-P and Ctrl-Shift-N is how I'm switching back and forth. I might run a mail client and find an email, and it tells me that I have to do something.
It tells me I have to set some configuration options for these Git repositories. So let's split the window horizontally, and this last one here is looking good. I'm going to just clone the Git configuration for where these repositories are. I like this because I still have the first window visible. If I were using GNOME Terminal, or xterm, or something simpler, I would have opened a new window, and it may or may not have overlapped with the one I actually wanted to see. So if I look in here, I can edit gitosis.conf. I merely need to add some configuration to this, but I can't remember what the configuration line is supposed to look like. So I'll actually, in a new terminal, find another example: I'll look at my personal Git configuration. The nice thing about this is that as I open a new terminal, the old contents are still visible and still modifiable. While I wait for my own configuration to download: I guess I also like the fact that when I open a new terminal, it creates pressure for me to close it eventually. Otherwise I end up with 50 terminals on one screen and never remember why I created them. So in this one... oh, it's called description. OK, great. And it's in a repo block. So now I know that he needs a repo block for live-validator.git, with this description. What I like about this, just to be clear, is that I got to create new terminals, and this is the whole call stack for actually executing the action that I was emailed to do. The first window told me what I needed to do, so I opened up the gitosis configuration, and when I got there, I learned I needed to open up another one. And I can just use the keyboard to hop between them: close this one now that I know how to do it, exit, save this, git add it, git commit it, and when I push it, it'll actually be live on the website. And then I exit, the window collapses, and I can reply and say I've done it.
So that's how I like to use Terminator: as sort of function call stacks for executing actions. Thanks. OK, thanks, Asheesh. The next one is Paul, about Synfig. Oh, you have it already. So, OK, yeah. This is Synfig Animation Studio, a set of tools that puts in the hands of artists the ability to produce realistic compositions and, at the same time, complex animation. It combines high dynamic color range and spatial and temporal independence with an interface built especially for artists. It produces 2D animations and utilizes tweening to reduce the effort you need to put into an animation. It was written by Voria Studios and was turned over to the free software community after the company went out of business. It's written in C++ and released under the GPL. There is a vibrant user community: every month we have artist challenges, and the latest one involves kinetic typography. We use IRC and web forums to communicate. Some of the tools it provides are geometric shapes, blurs, distortions, filters, fractals, gradients, text, a simple particle system, stylization, and other effects. We need developers, so if anyone knows C++ and wants to get involved, please see me afterwards. And I'll leave the rest of the lightning talk to the shiny things on the screen. OK, it's four and a half minutes long, so enjoy the last minutes. Thanks, Paul. The next one is Jacob, about Tor. That's a really tough act to follow, man. So I spent some time while at DebConf learning about the Google Maps API, which is not really that interesting, but it made it really easy to display some data that I was interested in. If you could zoom in on South America, that would be cool. Since I only have a short amount of time here, I wanted to get a sense of who is familiar with Tor. Is everybody here? Great. All right, so I don't need to tell you what Tor is. That's excellent. For those following along at home, please read our website at torproject.org.
But basically, I wanted to do some data visualization to understand the nodes that exist within the Tor network. And could you drag the map a little bit so we can see New Zealand too? Just a little bit? A little out? So one thing you'll notice is that we have very few nodes outside of North America and Europe. This plot has about 1,000 nodes; this is from the v3 consensus protocol, so it doesn't include all of the Tor nodes around the world. But South America is very poorly represented: we have about, say, seven nodes in total in South America. And if you zoom in on any other part of the map at all, like if you could zoom in on Europe, that would be fabulous. Yeah. I mean, the CCC has done kind of an incredible job here of getting Tor adopted: pretty much every town in Germany has a Tor node, which is incredible. Could you zoom in here a little more? Just click on the map? Double click there a couple of times? It should zoom in. Well, all right, it's too dense to do it here, but there are basically hundreds and hundreds of nodes per small section of Germany. So one thing I was looking for is people who are running nodes that are not in Europe or North America, or who have servers that are not in North America or Europe and who would be interested in running a Tor node. I'd like to talk with you about how you could share some of your network bandwidth in order to help people who may need to use Tor. This includes journalists; it includes people who want to do research on topics where they require personal privacy. Maybe they want to read Wikipedia articles without letting people know which topics they're interested in, topics that are sensitive in their particular area. And doing this visualization made me realize that we have a bunch of users from these places.
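The imbalance the map shows boils down to counting node positions per rough region. A toy sketch of that aggregation, with made-up coordinates and deliberately coarse bounding boxes (the real data came from the v3 consensus and GeoIP lookups):

```python
# Rough bounding boxes: (min_lat, max_lat, min_lon, max_lon).
# Coarse and hypothetical, purely for illustration.
REGIONS = {
    "Europe": (35, 72, -11, 40),
    "North America": (15, 72, -170, -50),
    "South America": (-56, 13, -82, -34),
}

def count_by_region(nodes):
    """Count (lat, lon) node positions falling inside each region box."""
    counts = {name: 0 for name in REGIONS}
    for lat, lon in nodes:
        for name, (lat0, lat1, lon0, lon1) in REGIONS.items():
            if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
                counts[name] += 1
                break  # first matching region wins
    return counts
```

Feeding in a handful of sample points (Berlin, Paris, New York, Buenos Aires) makes the skew obvious even without a map.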
Obviously in Germany, but we also have a lot of users in China, we have users in South America, and we have users in Africa. According to the maps that I made, which might be incorrect, we have basically hardly any nodes in Africa at all; it looked like one. So if anyone here would be interested, I would love to help you set one up. We have Debian packages. And I'd love to talk about the risks and rewards of doing this; I think it would be pretty fabulous. All right, thanks. Thank you. The next one is Aurélien, about the GNU/kFreeBSD project in Debian. Can you hear me? Yeah. OK, so I'll just give you a brief introduction to GNU/kFreeBSD. GNU/kFreeBSD is basically a FreeBSD kernel with a GNU libc. Currently we are using kFreeBSD, the kernel of FreeBSD, version 7.0 in our unstable port, and version 7.1 in experimental. It uses a GNU userland, and it comes with the usual Debian tools like dpkg and APT. It actually consists of two Debian ports, the amd64 one and the i386 one. So why would you want to choose Debian with a FreeBSD kernel? The FreeBSD kernel has some cool features, like jails, and also ZFS support, something which seems difficult to get into the Linux kernel, and also the OpenBSD packet filter. Depending on your devices, the device support can be better or worse; it depends. One of the advantages is also that you can add diversity among all your machines while keeping the same userland, so you can administer them all the same way. And it is able to run GNU/kFreeBSD binaries, FreeBSD binaries, and also Linux binaries, so if you really need to run all these binaries, it can be very useful. We also chose to do it because it's something we can do, and Debian is a universal OS. But one can ask: why not just use FreeBSD? Well, the FreeBSD ports system is very different from the Debian packaging system, so you may prefer the Debian way of installing packages, in the same way that you may prefer the GNU userland over the BSD one.
Also, we are providing a GPL-"contaminated" kernel, which is why ext2fs, ReiserFS, and XFS are enabled, so you don't have to rebuild your kernel to get these features. And we are making sure it is 100% free according to the DFSG, so we don't have the same license policy as FreeBSD. As for the status of the GNU/kFreeBSD port: unfortunately, it's not yet an official port, because we are lacking a few requirements, mainly the Debian installer. But we have an up-to-date toolchain, including Java and Ada on kFreeBSD, and we also have Mono. We have a lot of packages built on GNU/kFreeBSD, and the buildds are able to keep them built, with an up-to-date rate of 95%. We have a lot of packages available: GNOME for the desktop environment, but also Apache, MySQL, or PostgreSQL for the server environment. As we can't release with Lenny, we will make a snapshot of the archive at the time of the release. Some packages will have a higher version, but we consider that it should be stable enough for a port. And I want to remind you that there are developer-accessible machines. So if you want to try it, it's really easy if you are a developer; if not, you can send a mail and we can create an account for you. And to show you that it works, there is a nice screenshot where you can see that we can play games, edit images, surf the web, or watch a video. If you need more information, there is a page on the Debian wiki. That's all. Thank you. The next one is Cascardo, about marking people in group photos. He's doing this work on our group photo. Can you hear me? Yes? Yeah. Well, we're just doing the setup. We are going to show you what we've been working on over the last few days. We've been trying to create a program that can point out people's faces in a picture. We had a previous script that could do that, but it required that you fed it the location of every person's face, and we had to do that manually. So we have created a program to do that.
We don't do that using artificial intelligence; we do it using human intelligence: a person gets to use it and click on each face. So, well, we created a program to show it. It does not generate a video; it programmatically shows every frame. We've been working with different frameworks to do that: we tried SDL and Cairo, and finally we reached the conclusion that GDK, from GTK, would be fine, and it could generate a nice video. It's not finished yet; the latest version does not show people's names yet. Well, so Lincoln is going to show what he has done, which is the interface, written in Python and GDK, for marking people's faces. He is able to select a picture, and then we have our group photo there. The video dimensions field doesn't do anything for now. And, well, perhaps you could help us with the names. This is Holger, right? You can mark his face and write his name there, and then you do that with every other person's face. That's why we need your human intelligence to help us. You can see that when you select the name there's a green mark, and if you made a mistake you can move it: you can change the place, and you can change the names and everything else. And then you save this file. I'm not sure it's going to work, but... no, no, please move the file to... where is it? Yeah. Right, so please run it. Oh, this is another file; we have four faces in it. And then we are going to try it: it is going to zoom out and zoom in from face to face. It will stop for a few seconds on each face, and it should show people's names right there. So we are going to try to provide you with this software so you can help us, and then we can finally produce a final data set, which we can use to render the final video. There is a Git repository, so you can get the software at http://git.cascardo.info/movie.git. That's it.
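The zoom-from-face-to-face effect described above reduces to interpolating a viewport rectangle between the marked face rectangles, one frame at a time. A toy sketch of that (hypothetical helpers, not the actual code from the repository):

```python
def lerp_rect(a, b, t):
    """Linearly interpolate two (x, y, w, h) rectangles, t in [0, 1]."""
    return tuple(av + (bv - av) * t for av, bv in zip(a, b))

def zoom_path(faces, steps):
    """Viewport rectangles panning from each face rect to the next;
    `steps` frames per transition (steps >= 2)."""
    frames = []
    for a, b in zip(faces, faces[1:]):
        for i in range(steps):
            frames.append(lerp_rect(a, b, i / (steps - 1)))
    return frames
```

Each returned rectangle would then be cropped out of the group photo and scaled to the output size, with a pause inserted on each face to show the name.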
Thank you very much. Thank you. And you will stay here and do the next talk. OK. This talk is about Guake, another terminal emulator. This is Guake. Have you ever played FPS games like Quake or OpenArena? That was the idea behind Guake: provide easy keyboard access to the terminal. When you press a simple access key, you show Guake or hide it. We can create another new tab here. I think we can compare it with Terminator, with the simple difference that we just don't split the screen; we just create another tab. But the main cool feature in Guake is the easy way to show it and hide it. It is based on some standards: we are using Python and the VTE widget to make the terminal. Some options, like the window size, can be controlled from the properties window, or you can just click here and drag. That's because Guake aims to be compliant with the GNOME Human Interface Guidelines. We are thinking about implementing new features in Guake, but we don't want to bloat the user interface and the options, so we are hiding some options that make Guake better. For example, when I am using the full screen feature, this toolbar just clutters things, so I like to hide it; and now when I go full screen, I can use my Guake. Guake is actually not a full-featured terminal; it's just a terminal that gives you easy access. There are programs like it for KDE, so there is Yakuake there, and Guake for GNOME/GTK. So I think that's all. Oh, yeah, we are very glad because during this conference Guake got into Debian, so you can just apt-get install it. Thank you. The next talk is from Jonathan, about keyring maintenance. Can you all hear me? I don't really have very much to say, hence a lightning talk; I probably won't even take two minutes. We previously had issues with keyring updates taking a while to process. There have been various flame wars about that, various people unhappy about it.
And I just thought I'd stand up and say that I believe that's not the case anymore. I was added by James to keyring-maint around April or May, and there are still both of us with full access to it. Currently, I believe there are about seven outstanding tickets on the keyring, all of which I've either followed up on or am waiting for responses from the requesters. And, unfortunately, I discovered that we actually have the oldest RT ticket: number five, the only single-digit RT ticket, is still open. Can you show the next slide? If you need to update your key, and you have a good reason, not just because you feel like you want a different ID or whatever: if your key gets compromised, if your key expires, then these are the ways you can talk to us. There's a website, which has very basic HTML but does tell you what you need to do. You can send a key to keyring.debian.org. If you are only updating subkeys or an expiration date, just send the key to keyring.debian.org; I'm processing those at least once a month. If you need it done faster, raise a ticket and it'll get done faster, but a month seems to work at the moment. If you need to raise a ticket, mail keyring@rt.debian.org. You need to include "Debian RT" in the subject, or you don't get past the spam filters. You also need to include something descriptive: that helps me look at it and go, right, that needs doing immediately, or that can wait until I've got a bit more time. You can log in. Key requests go to the keyring incoming queue, which very few people have read access to; that means if there's some security issue with your key, it's not public knowledge. Once I've looked at them, they go into the public keyring queue. That's visible with the default Debian read-only RT username and password, which I can't remember, but it went out to developers some time ago when the RT instance was set up. So you can track it.
If you think that something isn't happening that should be, you can check RT and see if someone's followed up; if they haven't, feel free to ping me. If you have a problem with your key, this is what you use. I will read it; I will follow up. If you think there's an outstanding problem that isn't being chased, follow up to the RT ticket and it will get sorted out. That's basically all I have to say. The system is now working fine. I haven't had any complaints, but then maybe they didn't put "Debian RT" in the subject. So thank you. The next one is Asheesh, brainstorming about online services. On. On is much better than off. Yeah, okay. So, is there a timer I can see? Okay, great. Thanks. Okay, so I have a lot more to say about all these things than I'm going to tell you right now. You know there are four freedoms that the FSF described for free software: the freedom to run the program for any purpose, the freedom to study and adapt the program, the freedom to share copies, and the freedom to improve it and release your improvements. But think about me sitting at a web application. I'm sitting at the Google search box, and I'm thinking: can I run it myself? No. I can't find out how it works, because they won't give it to me. I can't share it, because they didn't give me a copy to share. And I can't improve the Google search engine that I'm typing things into, because I don't have a copy to improve. But it's not even just that I'm not allowed to; it's that, structurally, it's far away from me. It's not like proprietary software, where you're not allowed to exercise the four freedoms but you can, you just might go to jail; here you actually can't. So that's a real difference. This is one perspective on running non-free software as a service, it seems to me.
So this is a bit extreme, obviously, but if you choose to have someone else run lots of non-free software that you rely on, then obviously you're still using non-free software; you might choose to do that because it provides features you like, or because they maintain it so you don't have to install it. I did a quick survey of my free software mailing list archive from the past six months, just my lists folder, and more than a quarter of the email addresses are at gmail.com. So there are a lot of Gmail users. And okay, so in addition to all these features, there are network effects. For sites like LinkedIn or Orkut or Facebook, you want to use those websites because the people you want to talk to are there. It's not like email, where you can use a different client and the messages will get routed. If you don't use that website, if you don't use their software running on their computers, you can't talk to those people. So I've been brainstorming about this idea of mine called rogue interoperability, where you're not supposed to interoperate, but you do. Facebook has an internal messaging service you can use to send messages between users, which is strictly worse than email. So you could just make a gateway to IMAP and SMTP, and then you can use mutt or pine or whatever for Facebook messaging, for the people who send you Facebook messages. There's a website, Twitter, where you can send small messages to other people. It's a silo, so it's like sending messages on a BBS, in that you have to be on that domain to send and receive the messages. There's another network called Identi.ca, running Laconica, which provides the same service as Twitter but gives you the software and can federate between domains. So Twitter is like BBS mail and Laconica is like internet email, but there's no gateway between the two.
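Such a gateway is conceptually just a message pump between a silo's API and a standard protocol. A toy sketch with in-memory stand-ins (entirely hypothetical classes; a real gateway would speak the silo's interface on one side and IMAP/SMTP on the other):

```python
class Silo:
    """Stand-in for a walled-garden inbox (e.g. Facebook messages)."""
    def __init__(self):
        self.inbox = []

    def fetch_new(self):
        """Drain and return any new (sender, body) messages."""
        msgs, self.inbox = self.inbox, []
        return msgs

class Mailbox:
    """Stand-in for a standard IMAP mailbox."""
    def __init__(self):
        self.messages = []

    def deliver(self, sender, body):
        # Tag the sender so the origin silo stays visible in a normal MUA.
        self.messages.append(("%s (via gateway)" % sender, body))

def pump(silo, mailbox):
    """Move everything new from the silo into the standard mailbox."""
    for sender, body in silo.fetch_new():
        mailbox.deliver(sender, body)
```

Run periodically, this is all "rogue interoperability" needs: the silo never has to cooperate, and on the mailbox side any client (mutt, pine, whatever) works unchanged.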
So if you want to use the distributed, federated system, you would need something like what I'm prototyping now, called intertweet, which routes messages between the two systems.

And you know, one of the things you might enjoy about free software is the ability to add new features. Again, back to Facebook: when you go to Facebook and look at someone's profile, if they list their email address, it's shown as an image. So I wrote a tool that OCRs it, and a Firefox add-on to do that inline when you view the page. You go to the user profile, you see the image, you wait about six seconds for it to get sent to my server and processed, and then it gets shown inline as text, which you can copy and paste.

And one of the freedoms you might want to exercise is the freedom to use a program for any purpose. Gmail is two things: a server-side daemon that stores your mail and talks to your web browser, and a set of JavaScript code that runs in your web browser and implements a nice-looking interface. So Fmail is something that I'm halfway through writing, again, that lets you take the JavaScript and retarget it, so that the Gmail interface can be used against your server, not the Gmail server. So, the relevance, the impact: it's kind of like running software on a mainframe. But, bye. Okay. Thank you.

The next one is Marcos, about PISNK and another project. Yes. Well, I'm here just to present you a couple of projects of mine. The first one is PISNK. Well, I'm the operator of the mirroring system at my university. Basically we mirror Debian, Mandrake, SUSE, et cetera. And we had times when, well, we couldn't finish downloading the whole mirrors in just one night; we just didn't have enough time for that. In those times we had the problem that our mirror simply broke, because most of the time half of the packages hadn't had time to download. So, well, the mirror is not useful.
So, mirrors can fail updating from an upstream server. I realized that basically a mirror is a database that describes all the packages, plus the whole set of packages, and what you really want to do is keep them in sync. That is, whenever you have a package in the database description, you have the package in the mirror. There's this tool, debmirror, that I think does almost the same, but it only works for Debian. As I said, I'm mirroring some other free software projects like Zip and Slackware, and I have support for RPM and Yum repos.

So I built this program that basically does one simple thing. It downloads the new databases, tries to download all the packages described in those databases, and only once it has finished with all the packages does it actually update the databases. So the databases and the packages are really in sync. If some packages fail to download, well, it doesn't update the databases, so you still have the old databases, the old packages, and the new packages. That's the way I keep them in sync. And the next time I try, I don't have to download all the packages again, just the ones that got updated that day.

And this week I saw a talk from Zack, who presented the EDOS debcheck tool. It basically does a check on the Packages file that figures out which packages are installable from that package description and which are not. So this week I just added debcheck support. So I can not only make sure that the databases and the packages are in sync, but also that the whole mirror is installable. You have the URL of the project at the bottom.

The next one is just a brainstorm I had. Well, you see, the mail problem is rather common, and it has been solved several times; in most cases you just install postfix, answer all the debconf questions, and you are ready to run. I mean, the mail server is working then.
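The update ordering of the mirror tool described above is the whole trick, and it can be sketched in a few lines. The helper functions here are invented stand-ins, not the tool's actual code:

```python
# Sketch of the mirror-update ordering: fetch the new package index,
# fetch every package it references, and only then replace the published
# index.  If any download fails, the old index stays in place and still
# describes a complete, consistent mirror; the next run only fetches
# whatever is still missing.  (All helper names are hypothetical.)

def sync_mirror(fetch_index, fetch_package, have_package, publish_index):
    new_index = fetch_index()              # 1. get the new database
    for pkg in new_index:                  # 2. get every package it lists
        if not have_package(pkg):
            fetch_package(pkg)             # may raise on network failure
    publish_index(new_index)               # 3. only now switch databases

# Tiny demonstration with in-memory stand-ins for the network and disk.
index = ["a_1.0.deb", "b_2.0.deb"]
pool, published = set(), []
sync_mirror(lambda: index,
            pool.add,               # "download" a package into the pool
            pool.__contains__,      # is it already mirrored?
            published.append)       # atomically publish the new index
print(published)    # the index appears only after both packages are there
```

If `fetch_package` raised partway through, `publish_index` would never run, so clients would keep seeing the old, still-consistent database.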
But in certain situations that doesn't really work, because you have situations like a laptop that travels all around the world. I think most of you have this problem, and sometimes it even goes disconnected. There are projects that try to attack these problems, like masqmail for roaming and msmtp for multiple accounts, but they don't work together very well.

So I figured out that in certain circumstances it's better not to try to force the configuration of some mail server to do whatever you want to do; the best way is to really program it. I mean, you only have to program two pieces of code for a mail server. The first one is the mail routing decision, where you just say whether the mail is local, the mail should be sent outside, or the mail cannot be delivered, yes? And the other one is the local delivery itself, where most people use procmail and things like that; you can make better decisions when you actually program whatever you need. Lighttpd has something like this: it has Lua support in its config file, so you can actually program a little in the config file. I took the idea from there. And... what? Ah, the time. So, this code is not published yet. I am trying to use it on my laptop. It works, but it needs some more work. So, that's all. Thank you. Thank you.
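The two hooks described in that last talk, the routing decision and the local delivery, might look something like this when written as ordinary code. The names and signatures are invented for illustration; the actual (unpublished) tool's interface may differ:

```python
# Illustrative sketch of a programmable mail server's two hooks.
# Instead of configuring postfix or procmail, you write the decisions
# directly as code.  All names here are hypothetical.

def route(recipient, local_domains, online):
    """Routing decision: deliver locally, relay outside, or defer."""
    domain = recipient.rsplit("@", 1)[-1]
    if domain in local_domains:
        return "local"
    if online:                  # connected: hand it to the outside world
        return "relay"
    return "defer"              # roaming laptop, offline: hold for later

def deliver(sender, subject):
    """Local delivery decision, procmail-style but in real code."""
    if "debian" in subject.lower():
        return "~/Mail/debian"
    if sender.endswith("@example.org"):
        return "~/Mail/work"
    return "~/Mail/inbox"

print(route("me@laptop.local", {"laptop.local"}, online=False))  # local
print(route("you@remote.net", {"laptop.local"}, online=True))    # relay
print(deliver("boss@example.org", "meeting"))                    # ~/Mail/work
```

Because the hooks are plain functions, arbitrary conditions (which network you are on, battery state, time of day) can feed into the routing decision, which is exactly what a fixed configuration file makes hard.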