So, welcome to the Lightning Talks, the day two session. Who of you has never been to the Lightning Talks before? Okay, that's quite a lot. So I'll just briefly explain what goes on here today. We'll have short talks called Lightning Talks. They are basically five-minute talks that any congress participant may give. You can sign up for them using the Congress Wiki, and it's open to everyone. We don't filter based on content and we don't do any kind of review process, so basically it's great training for people who are thinking about giving a talk on a larger stage. Where's my slide? Why is my slide gone? Slide please? No, I can do it myself. So, a short introduction on how to make the Lightning Talks work for you and for us and for them. It's quite easy. For the speakers, for all of you who are going to give a talk up here: please sit in the front row so you can quickly get on stage when your talk comes up. And can I get my slides back? Yeah, so the first one was: be close to the stage. Then, when your talk comes up, get on stage quickly. Talk into the microphone; don't turn around, because if you turn around, you can't be heard anymore. You can see your slides down here on the monitor on the screen. So be careful with this microphone. You can advance the slides using this clicker; it's always available here. Just stay calm and deliver your talk. Stay in time, because these are the Lightning Talks. Then get some applause after your talk and leave the stage. Also, please leave the clicker here, because the next speaker will need it. For the audience: of course, be excellent to each other, and watch the timekeeper. The timekeeper is this device here. It makes sure that the talks run on time. Alex, would you like to explain what we see here? Yeah, hello, everybody. I will give a quick explanation. This device helps you manage your time on stage. As in life, your time on stage is limited.
At the start of your talk, it will look like this. You have five minutes for your talk, so you see your time slowly running out, at least your time on stage. And don't panic. Don't panic. It will not go that fast, depending on your personal frame of reference, of course. You will have four minutes of green. So if the column is up here, don't panic, you still have one minute left. The last minute will be shown to you in yellow. And when it looks like this, it will go red and you have 30 seconds left. This is the time when you should come to an ending with your talk, at least. And if it's like this, maybe, then it's over. Yeah, in the last five seconds, I will give you the signal. We will have to grab your attention more closely. And if you're still there and talking, we need all of you, because when I give the signal, you know what to do. Four, three, two, one, marvelous. It was OK. Let's practice that again. Very good. Thanks.

So what's left for the audience? Of course, there are translations available. If you use your DECT phone and call 8014, you can listen to a translation of the talks here. We have a couple of German talks; those will be translated to English. And you can also choose a translated stream on whatever device you use to watch the streams. Please give a warm round of applause for the translation team, who do really awesome work this year, and every year, of course. A final note for potential speakers: the day two and day three sessions are already full. So if you would like to give a talk, we still have some slots available in the day four session, I don't know, maybe 12 slots or so. Just see the Congress Wiki for instructions on how to sign up. Enough said, let's go.

OK, so five minutes. I'm Nicola. I will talk about technology and education reform. I work for Education International, and we represent about 32 million teachers and education workers from across the world. And yes, yes, my clicker.
And actually, just to give you some background on what's going on in the global policy sphere: the UN adopted a Sustainable Development Goal with a focus on education, which is great, for the first time. But obviously there's also a lot of for-profit interest from companies, and I will focus now on technology and how this takes shape. For instance, when it's discussed how we are going to invest and what the most important thing is, the word that always jumps out at us, like 'cyber' does for you, is 'innovation'. So there's innovation, innovation, innovation. It's not really clear what it means. Usually they say, oh, innovation equals technology, and technology equals quality in education. We say: really, is that always the case? We would like to engage in discussion and see what we actually need in education to have quality with technology and to have just and inclusive societies. The second thing is that there's obviously also a big interest in data mining in this field. At the moment, the focus of the global indicator that measures whether the Sustainable Development Goals will be achieved is learning assessment. So there's a lot of interest from companies in implementing standardized tests across the world. Private companies then have access to a lot more data, and the question for us is also what happens with the data protection of children, schools, and education systems, and as well whether this data on assessment actually gives us the information we need to transform education and to see what we have to invest in to make this whole system work. So there are some innovation traps. There are a lot of technology giants and venture capitalist interests, familiar names like Google, Facebook, Pearson, Apple, and Microsoft. Just an example with Pearson: they are very active in the US and across the world. They sometimes own the entire education cycle.
They have a lot of money to lobby governments to get the education policy they need to implement their services. They offer the teacher training, they design the curriculum with them, they offer the tests. They have everything under control. So it's really scary what's happening at the moment. And they have laboratories in India where they test their little tools in poor quarters, and then they scale up eventually. And as well, you have the low-fee for-profit private schools, such as Bridge International Academies. They charge children every day to go to school. It's based on this idea that you can actually make a profit off the poorest kids, which is supposedly a good way to invest and make money. And the innovative and technology part in this is that they say: we are very innovative, we use technology. What they do is give teachers a tablet, and these teachers read off a scripted curriculum: please sit down, open page two, do this and that. And it's a curriculum that's developed in the US and then taught in countries like Kenya, Uganda, and so on. Then there are other things that we discuss a lot, and I have more information in the sources section that you can look at later, when you can actually pause the presentation and take a deep breath. Obviously, with copyright, there's a lot going on. Five academic publishing houses already own 50% of the market. There's a new Digital Single Market directive being passed in the EU, and unfortunately, as a leaked document showed, there will be no exception for education. So, yeah, we have to be loud.
So just to maybe sum it up, I would like to invite you all, because we think it's important to have good quality in education, and technology that makes sense, that helps to transform society and has a progressive idea of how to move forward. I would like to invite you to my talk, to my workshop, which is called Punished by the Robot Teacher, where I would like to invite everyone working in education or interested in advancing education policy with us. So what we want is that you come to this workshop, that we work together and maybe find a better way to use technology in education, and that we can then take it to the global level and advocate for those things. And I'm really fast, sorry. I'm done already, I practiced this so often. But yes, thank you very much. And sorry, translators, I'm really sorry. It was probably horrible to translate this talk. So thank you, translators, as well. Thank you.

Next up is the Iridium talk. You may start. Hi, I'm just giving a small overview of plotting some data, mostly using Google Earth, about the Iridium satellites. Some of you may have seen previous talks where we reverse engineered most of the Iridium stuff. Which one is the button? Ah, this one. So, what happens if you listen to what Iridium sends down for a year? You have lots of data. One of the things that they send down is the position of the satellites, and additionally the position where they think the spot beam of the satellite hits the earth. So this is a lot of geocoded data that we have. It's about one to two such packets per second, which over a year is a lot. And I wanted to make some pretty pictures out of it. As a base to plot the data, I thought Google Earth might be the thing to use. It actually works on Linux. It's a little bit uncooperative; I mean, it likes to crash on start, and then you develop some tricks, like click in the window one second after it appears and then it does not crash. Well, it works for me.
So the input I have is basically what the iridium-toolkit outputs, which is the position of the satellite and the altitude. In the second packet you see the altitude is one; this is the point where they think the spot beam hits the earth, plus the satellite number. And then I have a little Perl script that I wrote to put that into a different format, and I'll show the format later. Then you get these tracks from the satellites, annotated with numbers. The output format is like this. I mean, the KML XML format is not the most terse one; it's pretty verbose. You can also get a live view: Google Earth supports reloading the KML file every four seconds. You can do that over HTTP, and it's really, really low bandwidth. You can see which satellites we are currently seeing from Munich. This is yesterday's view at some point. But the tracks, as you can see, look crowded. So I decided to make a heat map out of it: divide the whole earth into squares and sum the number of hits you have in a single square. Then you can make a nice heat map and color it according to signal strength, which looks like this. This is where they think the spot beams hit the earth. You can definitely see the outer diameter of the circle; it is, I think, 700 kilometers, which is about as far as you can technically view the Iridium satellites from one point on earth, and you can more or less guess where the receiver is based on the circle in the middle. You can also plot the satellite position instead of the spot beam position, and that looks like this. The artifacts (red is a stronger signal) are a mixture of the antenna characteristics of our receiver; you can clearly see that in the northeast there is a building obscuring the antenna a little bit, so we get less signal from there.
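The gridding step just described, dividing the earth into squares and aggregating the hits per square, can be sketched roughly like this. This is a toy illustration, not the actual iridium-toolkit code; the input layout and the one-degree cell size are assumptions:

```python
from collections import defaultdict

def heatmap(points, cell_deg=1.0):
    """Bin geocoded observations into a lat/lon grid and average
    the signal strength per cell."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for lat, lon, strength in points:
        # Integer cell index: which grid square the point falls into.
        cell = (int(lat // cell_deg), int(lon // cell_deg))
        sums[cell] += strength
        counts[cell] += 1
    return {cell: sums[cell] / counts[cell] for cell in sums}

# Three toy observations; the first two fall into the same 1-degree cell.
obs = [(48.1, 11.5, -90.0), (48.2, 11.7, -80.0), (50.0, 8.0, -70.0)]
grid = heatmap(obs)
```

Each resulting cell can then be emitted as one colored KML polygon, which is where the large file sizes mentioned later come from.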
And the circle in the middle is, we think, an effect of the Iridium spot beam antenna, because the satellites do not have an antenna that points directly down. They are all tilted at least a little angle, which accounts for the stronger signal slightly off center. And the output format looks like this; it's very verbose: for one square, it's this amount of XML. So you get like 100 megabytes of KML file, which by the way makes Google Earth really, really slow. The source for it is part of the iridium-toolkit. The script is called mkkml, really creative name, and you can get it on GitHub. And that's basically everything. Thank you. Thanks a lot.

So the next one is Opening Patents for Makers, 16:9 ratio. Hi, hello everyone. I have a big idea for how to bring legal security and confidence to individual innovators like makers in the field of patents. This idea, called the green light, involves dealing with challenging legal patent issues, but it's doable. The patent environment, as you may know, is highly saturated with patented solutions, and nothing seems to change this tendency. In the times of the Internet of Things and Industry 4.0, we can expect even more incremental solutions, improvements, rather than huge disruptive ones. You may ask yourself about the quality of such solutions, but that's another issue. With the idea called green light, I want to improve the comfort and legal safety of makers who tinker and tweak with devices with patented parts, because the problem is that a patent holder has the right to assert the patent against the maker even if this maker doesn't earn money with the solution, but only shares instructions and information on how to improve a device and how to add new functions to it. There is a loophole. Why so? Very briefly: a patent creates a market monopoly, exclusivity. This monopoly is limited by patent exceptions, so some uses are allowed without the permission of the patent holder.
However, none of them can really support makers. One exception, for private and non-commercial uses, is okay, but it's very narrow. With my proposal, the green light solution, which is very simple, I want to make a huge change in the patent environment, because I want to enlarge this sphere to the public sharing of information and instructions. I propose this in my PhD, which will soon be available in open access. So that's the idea. Another aspect in the whole story is that companies have been taking advantage of the maker community: poaching their ideas, using them, earning money with them, and, the worst, patenting them as their own. So what do we need to do today? We need to create a database, something similar to GitHub or defensive publications, but different, with a focus on hardware and software-based hardware solutions. As you can see in this visualization, the database would centralize makers' ideas and serve as a research tool for patent offices, who would then search the database during patent examinations. The concept is that all ideas published in the database, in the green base, will also be protected with patent exceptions for makers. Deadlines for this project: in the first quarter of 2017, we would like to create a steering community, and I would like to publish my PhD in open access so everyone can read about this. The next step will be creating a contributing community, and by the end of 2017, we would like to have the database standing on its feet and working. In the future, we will think about other IPs and opening other fields of intellectual property. With this talk, I want you to participate. If you want to contribute to this project, and if the freedom of making is important to you, please contact me. Thank you very much. Thank you.

Next up is Decoding AFSK Data, 16:9. Hi, everyone. I'm Nonic, and I really like to play around with old-school hardware like a Commodore 64.
For today, I want to focus on one specific old-school device, this one. This is called a Datasette, and it is a storage medium. It was quite popular back in the 80s because it was way cheaper than the floppy disk drives, and as a storage medium it uses audio cassette tapes like this. So you could either use empty tapes to store your own data on them, or you could buy tapes with prerecorded data, games, and applications. Since it's audio data, you can take such a tape, put it into an audio tape player and listen to it, and it would sound like this. I'll give you a very short sample, okay? Let me do it once again. So this may have reminded you of other devices from the 80s, like modems or fax machines, and these devices sound similar because all of them use the same modulation technique to transform binary data into an audio signal. The modulation technique is called Audio Frequency Shift Keying, AFSK, and let's look at how to decode this back into binary data. Starting from the audio signal, how do we get back our bits and bytes? This is the waveform, and if we zoom into it at a very high zoom level, we start to see individual pulses in the signal. A pulse is a very short segment starting at the center line, going down to the bottom, up to the top, and back to the center line. If you look closely, you may notice that there are several identical pulses in this signal. And there is another group of slightly longer identical pulses, and a third group of two even longer pulses. So we have three kinds of pulses: long, medium, and short. To decode this back into binary data, we have to look at these pulses in pairs. Each combination of two pulses has a specific meaning: long-medium stands for start of byte, medium-short is a one, and short-medium is a zero. So this looks already quite familiar. You may have noticed that we got nine bits instead of the eight we would expect for a single byte. This is because the last bit is used for error detection.
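The pulse-pair scheme described here can be sketched in a few lines. This is my own illustration, not the actual Commodore tape decoder: the pulse classification is assumed to have been done already, and the parity convention is not checked, matching the talk's "ignore it for now":

```python
# Each pulse has been classified as 'L' (long), 'M' (medium) or 'S' (short).
# Pairs of pulses carry the symbols: (L,M) marks the start of a byte,
# (M,S) is a 1, and (S,M) is a 0.
PAIR_TO_BIT = {('M', 'S'): 1, ('S', 'M'): 0}

def decode_byte(pulses):
    """Decode 20 classified pulses (start marker + 8 data bits, least
    significant first, + 1 parity bit) into (value, parity_bit)."""
    assert tuple(pulses[0:2]) == ('L', 'M'), "missing start-of-byte marker"
    symbols = [PAIR_TO_BIT[(pulses[i], pulses[i + 1])]
               for i in range(2, 20, 2)]
    bits, parity = symbols[:8], symbols[8]
    # The stream is LSB-first, so data bit number i is worth 2**i.
    value = sum(bit << i for i, bit in enumerate(bits))
    return value, parity

# The byte from the talk, 145 = 0b10010001, recorded LSB-first as
# 1,0,0,0,1,0,0,1, followed by one parity pulse pair (ignored here).
one, zero = ('M', 'S'), ('S', 'M')
pairs = [('L', 'M'), one, zero, zero, zero, one, zero, zero, one, zero]
flat = [p for pair in pairs for p in pair]
value, parity = decode_byte(flat)
```

Summing bits LSB-first is equivalent to the "reverse, then read MSB-first" step the talk walks through next.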
It's a parity bit. For now, we just ignore it and assume that the data is correct. Now we're almost finished. For the remaining eight bits, there's one last step we have to do, because the order of the bits is such that the most significant bit comes last. To read this as a byte in standard notation, we have to reverse it, to read it from the back. Then we get 1, 0, 0, 1, 0, 0, 0, 1 with the most significant bit first, and this is a simple byte, decimal 145. And so, yeah, success. We finally, successfully decoded a very short segment of AFSK audio data back into a single byte. That's it. If you're interested, I wrote a very short Ruby program, on GitHub, which automates this decoding process. You can look at it, and if you have any questions, just feel free to ask me after my talk. Thank you. Thank you.

So the next talk is called Sipsa. Morning, everyone. The last Lightning Talk I gave at CCC was at 27C3, I think. We have a clicker now. That's great. So let's talk about Sipsa. That's a small thing that I created. It deals with anonymity on the internet. And it is a divisive topic, meaning there are two different opposing views, actually. One part of you maybe thinks that, hey, we can be really anonymous, right? We can do stuff and no one will find us. And then there's that other part that actually thinks that whatever you do, there are tools to track you down. So what I want to talk about is my proposal to maybe fix that. Okay, so a short recap on the OSI model. On layer three, as you all know, we have those things that are called IP addresses. And the sad part about that is that IP addresses have to be correct in order for stuff to get routed, right? If you include a wrong destination IP address, it will not get anywhere. So you can't really encrypt it in a classical sense, and you can't remove it and replace it with zeros or whatever. So that may be a problem, right? Well, UDP is the basic component of my idea and it will help us here.
UDP, as you know, is stateless and transaction-oriented, meaning that we do not have a three-way handshake and we do not need to establish a connection per se. We can simply send data, and on layer three below UDP, we can just choose a source IP address and a destination IP address. We still have the problem of having to put a real destination address there in order for the data to reach our destination, but we can at least change the source IP address, right? So my solution involves using more bandwidth, because I took a look at some research, and during the past 10 years, network speeds globally have on average increased very, very much. Well, let's use that. I think we are wasting the bandwidth right now, because what we are doing is creating terrible HTML with huge CSS: you open a news web page and you have one or two megabytes instantly loaded, just like that. Ten years ago, we had what, like 50 kilobytes at most, right? So I have a better proposal for how to use all that bandwidth. With Sipsa, instead of sending a single UDP datagram, we send many. We can send, for example, 64 datagrams: we create a randomized list of source and a randomized list of destination IP addresses, for example eight of each, and then we send all 64 combinations. The Sipsa protocol lives above layer four, and in the current standard it includes layer three again on top, so you can encapsulate anything you want in there, including TCP connections. It supports versions, so it allows for expansion. So here's a short example. You can see some of the 64 packets. Can anyone in the audience tell me which is the source IP address and which is the destination IP address? As you can see, they are completely identical except for these fields. Therefore, in this case of 64 packets, we actually get a probability of below 2% of guessing both source and destination IP address correctly. If you want more privacy, we can increase the parameters and get even better statistical deniability.
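The cross-product idea can be sketched like this. This is a toy model of what the talk describes, not the actual Sipsa implementation; the function name and the example addresses are made up:

```python
import itertools
import random

def sipsa_batch(real_src, real_dst, n=8):
    """Hide the real (source, destination) pair among randomized decoys
    by producing every combination of n sources x n destinations."""
    def rand_ip():
        return ".".join(str(random.randint(1, 254)) for _ in range(4))
    sources = [real_src] + [rand_ip() for _ in range(n - 1)]
    dests = [real_dst] + [rand_ip() for _ in range(n - 1)]
    random.shuffle(sources)
    random.shuffle(dests)
    # One UDP datagram would be sent for each of the n*n combinations.
    return list(itertools.product(sources, dests))

batch = sipsa_batch("192.0.2.1", "198.51.100.2")
# An observer picking one (src, dst) pair at random is right with
# probability 1/64, roughly 1.6%, which is the "below 2%" in the talk.
```

Increasing `n` shrinks the guessing probability quadratically, which is the "increase the parameters" knob the speaker mentions.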
So therefore, Sipsa should be able to provide deniability, even if only a statistical one. I will talk about this a lot longer, like four times longer, in a session today at 17:45 in Hall B, that is one floor down from here. You can check out the current demo in Python on GitHub right now, it's there. And you can follow me on Twitter, I tweet some stuff. So thank you very much. Thank you.

So now, efficient Wi-Fi phishing. No one here to give this talk? Anyone who would like to try themselves on some... No, no, no talk, okay. Then I'll just... Yeah, then we'll have to skip this talk, sadly, and go on with open care. Open care. Oh, no, people, you can't be serious. What did you do yesterday? Yeah, yeah. Anybody willing to give this talk without knowing what it's about? Okay, yeah, then we'll just skip it. I'm really sorry for the people who wanted the time slot and didn't get one this way, it's not very nice. Then we'll continue with Evolving Internet Transport, the NEAT project. I already saw your shirt, yeah. It's easy to tell I was here. All right, start. Hello. There are lots of really cool things happening in the internet and there's continually lots of development. But almost everything that happens in the IETF is not deployable and is never gonna be used. It's very hard. We're building a system to make the network API easier to write against and then give us a way so we can change the things that happen underneath it. I think you pushed the blackout button. Why would you have this? I don't know. Ask the producer. So, the API: almost all the network code we write now is against the socket API. The socket API is great, and it was built in the 1980s, but it was built at a time when networking was quite new and we were unsure of the best paradigms for interacting with it. We had pretty well figured out how to deal with file systems, so we just reproduced the file system APIs for dealing with the network.
The TCP streams we can create do quite a good job of mirroring a file where the other end is another machine, and this has worked really well for FTP. Gopher was great, but it's quite an old system, and it's done well until now. The view of the network that is required for this is quite passive. We always see in diagrams that the whole internet is abstracted away as a big cloud with nothing inside it, and the socket API works great when you can assume that the network is gonna take my packets and be nice to them and they'll get there and everything's hunky-dory. Instead, we've built a network full of middleboxes that meddle with your packets and cause as much trouble as they can. There are people doing honest things with firewalls to try and protect traffic, but naive firewall rules mean we can't deploy SCTP on the internet, because the firewall didn't know what the magic number was, and it hides lots of the details of the network. There are so many things in the network that break your traffic and make it hard to roll out options and changes to TCP that it is almost impossible to suggest a change without a complicated fallback mechanism. The way we get onto the network as well is very different. In the past, we could assume that a 56K serial link looks like another 56K serial link, but now: 2.5G is really slow; on 3G, you might have really nice low latency for a while, but it's gonna shoot up; a Wi-Fi link isn't a Wi-Fi link, I mean, behind it there could be a fiber connection, but there could also be a 2.5G hotspot; 4G is helping a lot, but 4G sees massive spikes in available bandwidth over time and you can shift from a really high bandwidth bearer to nothing. And now we have requirements from the applications that are very different from the past. We want to do real-time immersive video. We want VR to be possible for as many people as we can, and the API for the network we have doesn't really make that doable.
The middleboxes in the internet make it very difficult for peers to speak to each other. So applications have to internalize loads of different mechanisms: finding out who they are with STUN, figuring out how to get around NATs with ICE. We can't really use QoS at all on the network, because nobody's sure if your packets will get through. We get to the point where every web browser you have carries a full ICE/STUN/TURN library just to support WebRTC, and we're just building up loads of duplicated code. For an application developer, the network should just work: we should have a nice friendly API, and the libraries we're writing against should solve the hard details and make it possible for you to use cool new things. So we have the NEAT library, it's open source. It provides a deployable API for writing applications. It does this by offering mechanisms for discovering what's possible, fallback between protocol choices, configuration, and lots of cool things. It's an event-driven library. It's good. The project is being developed completely in the open, so you can see everything we're doing on GitHub. You can submit code to us, we'd love that. We have support for building applications that use core transports: TCP, SCTP, TLS, DTLS, UDP, UDP-Lite, and selection between these protocols so we can find things that will work. And we're a European research project, and you can have all of our research as soon as it's published, no barriers in the way. Thank you. Thank you.

So, have the speakers for efficient Wi-Fi phishing or open care arrived yet? No. Then we'll just continue with the other talks. If anyone here has a talk they would like to give, on a flash drive or something, we could do this at the end of the session, basically, because we have 10 minutes to spare right now. We'll just continue. So the next talk is going to be about Feinstaub, fine dust. Okay, go ahead. Once again in German. I'll try to speak Hochdeutsch. I'm coming from the other side.
I'm coming from the other side of Stuttgart. Stuttgart is the dirtiest city in Germany. We have a project; we wanted to make a cool project out of a dirty topic. We just want to collect data. The city has only four measuring stations; we want to have 300. That's why we founded this project. Building one costs about 30 euros. That's what the parts look like: ordered for 30 euros directly from China via AliExpress, delivered, assembled, seven cables plugged in, and that's as far as it goes. This is what it looks like in Stuttgart: there's a poster on the internet, or on the bridges. You should leave your car at home, trucks should slow down a bit, industry probably works a little less. You can do it, you don't have to. Why is there a Feinstaub alarm? There is, unfortunately, already a European law: the daily limit of 50 micrograms per cubic meter may be exceeded on at most 35 days a year. Stuttgart exceeds it regularly, very clearly, very heavily. We just want better data, because we don't have a correlation to traffic, to weather, to other things: the official station only reports one value per day, and we measure one value every minute, across the whole area. That's what the dirty station looks like. That's the famous Neckartor, a small measuring station, a small gray box. People always argue about whether it really is the dirtiest corner. At the moment that's where the data is declared, and there it is measured accordingly. That's what it looked like in the past year. The problem has been known for over 10 years. We started in 2005 with about 187 exceedance days. We are now down to 72 last year, and we already have 41 this year. So if the European law takes effect in 2020, there will be penalty payments to the EU, and the city and the state will have to see who pays. That's what a normal day looks like: once exceeded, once below. At Easter it looked like this, when there were fewer people on the road, less traffic, and probably less work was done.
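The EU limit logic mentioned above, a daily-mean PM10 above 50 micrograms per cubic meter counting as an exceedance day, with at most 35 such days allowed per year, can be checked against minute-level measurements roughly like this. This is a sketch with made-up data, not the project's actual code:

```python
from collections import defaultdict

LIMIT = 50.0    # daily-mean PM10 limit in micrograms per cubic meter
MAX_DAYS = 35   # exceedance days allowed per year under the EU rule

def exceedance_days(samples):
    """samples: (day, pm10) pairs, e.g. minute-level sensor readings.
    Counts the days whose mean PM10 is above the limit."""
    per_day = defaultdict(list)
    for day, value in samples:
        per_day[day].append(value)
    return sum(1 for values in per_day.values()
               if sum(values) / len(values) > LIMIT)

# Toy data: day one averages 70 (over the limit), day two averages 25.
data = [("2016-01-01", 80.0), ("2016-01-01", 60.0),
        ("2016-01-02", 20.0), ("2016-01-02", 30.0)]
days = exceedance_days(data)
```

Minute-level data is exactly what makes this kind of aggregation, and correlation with traffic or weather, possible in the first place.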
And after Easter it started again, and you could see the first traffic, commuter lanes and so on. We joined Code for Germany, Open Knowledge Foundation, and we just wanted to collect data; open data should be available. There was no real measurement data, so we said we'd do it ourselves. Our goal is 300 sensors in Stuttgart, and we are almost there. We want to open it up to the whole world. It's like this with the components: we don't measure gravimetrically, but optically. That's why we're not 100% exact, but we got a calibration reference from the materials testing institution, so we are almost as good as the really expensive measuring devices, and our device costs only 30 euros. So we did something there. That's what the map looks like at the moment. It is anonymized, so you only know to within about 100 to 150 meters where the people live, where the sensors hang. But you can see the tendency, and when the city of Stuttgart is completely covered, you'll see correspondingly more. This is what it looked like last year, and this year again, at Silvester, New Year's Eve. My neighbors shoot up the first rockets at 8 o'clock, and then the first fine dust comes through. If you know it: after 20 minutes you can't see anything in the city anymore. That's real fine dust, the Silvester theme. Then you wait; at midnight everyone looks. Half an hour of delay while the smoke, the fog, the fine dust drifts around. Then my neighbors start shooting again at half past, and all the fine dust comes in. It takes about five hours until everything is back to normal. So that's really a topic: it's not just traffic, the people do a lot of it themselves. That's what the construction looks like: there are two tubes, the components and the electronics are inside, a USB cable, a micro-USB power supply. And then the data is sent to our API accordingly. The materials testing institution, as I said, gave us the fine dust reference.
They measured it, because they have the same problem on the island of Reichenau with the monastery, and they took our measuring devices there. That's how we want it to be. It's not about the Feinstaub alarm, although that's a bit difficult again. We just want to measure it all. Everyone can join in. We want the whole world to take part. One of them is hanging in the sink, if you take a look at the map. We have one in the Dominican Republic right now. Hamburg currently has a fine dust problem, probably from the wind and the oil tankers that pass through there, because they blow everything out. And that's also being measured. Great. Thank you very much. Thank you very much.

So, yeah. It seems the speakers have arrived now that we're a little bit late. So, have you just arrived and want to take a rest? Maybe five minutes? I've seen the speaker for open care already, right? So, if you would like to come up on stage, please. Let me just load your presentation. It's a 16 to 9 ratio for the video, folks. All right, you may start. Okay. This is how you can reach me if you have any questions afterwards. So, my name is Nadia El-Imam, one of the founders of Edgeryders. It's a community that lives online, offline, and in physical events and settlements. Members are doctors, lawyers, hackers, engineers, people working on alternative responses to systemic crises, usually very far outside the mainstream. On a Drupal social-commons-based platform, people share first-hand experiences from trying to do these difficult things and offer each other mutual support. Edgeryders is also a not-for-profit company. The vision is to see every human being live up to her full potential as a creative, responsible actor in the life of her community and the planet at large. So, this is supposed to be a graph. You're supposed to see GDP, the Bruttoinlandsprodukt, growing at one rate, and then you're supposed to see the costs of healthcare rising faster.
And the reason why I'm interested in this is because we're seeing that the cost of healthcare is rising really, really fast. And this is part of what's driving the rise of right-wing authoritarian fascist political movements, because they say that every new person in the system means one person will be pushed out. And the populists say: if you are you, if you have white skin, if you're born in the right place, you don't need to worry about this — we will protect you. Now, this framing of a zero-sum system is not necessarily true. It depends on how you look at it. I'm going to give you some examples. So, this is a handheld ultrasound scanning device based on free software and hardware principles. You can use it for early diagnosis in emergency care, and it costs a fifteenth of what's currently available on the market. This is a biohacking lab in Oakland. They are working to decentralize the science, engineering, and production of vital medication like insulin. One of the founders got involved because he himself has type 1 diabetes and has first-hand experience of how expensive and exhausting this is. This is a system of peer-to-peer alternatives to emergency response services like 112 or 911 — an application that enables a set of mutual agreements: if I have a problem and I hit this button, you or you or you will respond to me, and I will do the same for others. It works for high-risk groups for whom calling the police or the ambulance service is not a good idea. And so, we've collected hundreds of these kinds of initiatives — there are thousands more. They're promising, they're beautiful, they're hopeful. They show a path forward that is non-zero-sum, but they have their problems, the main one being that each is working at a micro level of the problem or in a very specific place.
Now, if we want to challenge the rise of these authoritarian movements that tell us we have to be against each other, we have to find ways of joining them together into systems-level responses. And this is where I think the tech community has a lot to offer, because many of these initiatives may be very efficient in how they deal with people and solve their problem, but they don't talk to each other. And so this is what I wanted to invite you to be involved in. Imagine what a system would look like if it consisted entirely of these different initiatives actually working together. So we're building an open-care pop-up village where we try to bring them together in one place and build a living demo that we then live in as a showcase — because the crisis, whether it's political or economic, is a crisis of the imagination. It's a crisis of the ability to see ourselves working, living, and being with each other differently. And if you take it from the realm of the unimaginable and put it into the realm of the inevitable, I think that's how we win. That's it. Thank you. Yeah. So, we'll get back on track with the efficient Wi-Fi phishing talk, which is in a 4:3 ratio, I think. Yeah. Okay. Sorry for the delay. My name is George. I am the author of Wifiphisher, if you guys have ever used it. Wifiphisher is a security tool. So, what's Wi-Fi phishing? In Wi-Fi phishing, we want to do two things. First, we want to put the victim on a rogue access point that we have created. There are many ways to do that; I'm not going to go into much detail. The second thing we want to do, as soon as the victim is on our fake access point, is to present a fake page — like a fake Facebook page — and capture credentials and stuff. So, everybody knows that a challenge today in security is breaking into WPA networks. And this is very hard because of the hash function it's using. An alternative to this is phishing. So, what can we do, for example?
First, for the phishing part — actually, let me go to that. We can do something like this: we can create a fake router page as soon as the victim is connected to us, and ask him to provide his WPA passphrase because of a firmware upgrade or something. The thing is, to make this very realistic, we can do some tricks. For example, we can determine things based on the beacon frames. Beacon frames are in the air — they're what the access points send to the clients around them. And we can get some more information from the HTTP User-Agent header. So, based on that information, we can craft a fake page that looks very realistic. For example, on this page, if you see, we use the BSSID, which is the MAC address of the router, and from it we know that we are targeting a NETGEAR. We also know the encryption type, of course, and this way we can make the fake page much more realistic. And this is a better example. In this scenario, what you see there on the bottom right looks like the Windows network manager. It's not — it's actually a web page. Yeah. So, as you see here, the beacon frames in the air told us exactly what networks are around. Also, based on the User-Agent, we know that the victim is using the Chrome browser, so we copied the exact same page. This is what an advanced phishing technique looks like, and a lot of people fall for this. I want to thank Dionysis Zindros, somewhere in the crowd, who made this page for me — it's super realistic. This, for example, is the macOS one. You see, it's very, very similar. And the one on the top... honestly, I don't remember which of those two is the real one. I think the one on the top is the real one. I'm not sure, but they look exactly the same. And we can know which operating system the victim is running based on the User-Agent header. So if, for example, it's macOS, we can show this; if it's Windows, we can show the bottom-right one.
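The User-Agent trick described above can be sketched in a few lines. This is a toy illustration, not Wifiphisher's actual template engine — the template names and the matching rules are invented here:

```python
# Toy sketch: choose which fake page to serve based on the victim's
# HTTP User-Agent header. Template names are made up for illustration;
# Wifiphisher's real scenario/template logic works differently.

def pick_template(user_agent: str) -> str:
    ua = user_agent.lower()
    # Guess the operating system to mimic the right network manager.
    if "windows" in ua:
        os_page = "windows_network_manager"
    elif "mac os" in ua or "macintosh" in ua:
        os_page = "macos_wifi_dialog"
    else:
        os_page = "generic_router_page"
    # Guess the browser to mimic the right "no internet" error page.
    if "chrome" in ua:
        browser_page = "chrome_offline"
    elif "firefox" in ua:
        browser_page = "firefox_offline"
    else:
        browser_page = "generic_offline"
    return f"{os_page}+{browser_page}"

print(pick_template(
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12) AppleWebKit/537.36 Chrome/55.0"
))  # → macos_wifi_dialog+chrome_offline
```

The same idea extends to the beacon-frame side: the BSSID's OUI prefix identifies the router vendor, which selects the matching fake firmware-upgrade page.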
And we can accordingly show the right "no internet connection" page: if it's Chrome, we can show this; if it's Firefox, we can show the Firefox page, et cetera. So, you can do all these kinds of attacks during a penetration test with Wifiphisher version 1.2, which automates the whole process. It has a template engine, and it has scenarios like these. If you are a coder and want to help us with that, you are very welcome to do so. And that's all for me. Thank you very much. Thank you. And we are even on time now. So, we'll continue with law hacking — radio device regulation — four by three, I think. Okay. Hi, everyone. I'm Max and this is Jiska, and we are working at TU Darmstadt in the Secure Mobile Networking group. We will be talking about why the new EU radio device regulation does not work for us. And in case you are not interested in the talk, we have some cat pictures for you. Anyway, there has been a European directive recently, and it regulates anything with transmitting devices and transmitting software. It's implemented in Germany as the so-called Funkanlagengesetz. There is a draft, it will be in the Bundestag soon, and we hope we can still fix something there. So, what is the problem with this Funkanlagengesetz? The one you have probably already heard about is that it will prevent the installation of third-party firmware — or, in general, non-certified firmware — on certain devices like, for example, Wi-Fi routers, but potentially also smartphones or laptops or whatever. In addition to that, we as researchers have the problem that there is no proper exception for research and teaching. And that's combined with the fact that the German law is actually stricter than the EU directive requires, by also outlawing the operation of non-certified devices, which is not the case in the EU regulation. So we can no longer use devices that we have changed and that have thereby lost their certification. And this really impacts our research.
And finally, it may also have an adverse impact on software-defined radios. So here are some things we fear are going to happen. First of all, we can no longer modify off-the-shelf hardware for our projects — and we work on things like emergency communication networks, where it would be really important to modify hardware. Second, it might be illegal to operate modified systems, and a modification might already be putting a different antenna on it. And the last, very big, important point is that software-defined radios might become impossible: because they are software-defined, the software needs to be certified for the hardware, and you can no longer program your own research software. So what can we do about this? We have drafted a joint statement of different research groups, and now we are actually looking for other research groups to support this statement. We are asking for a number of specific changes that would fix the German implementation, as far as that is possible. Sadly, we are bound by the EU directive, so we can't fix all the problems; but if we can at least open up the German law a little, it will be a good stepping stone to then go on to the European Commission and try to get them to make sensible decisions about which devices have to be locked down and which don't. So if you are a research group, or you know a big research group, please contact us. We have an email address for this. We also have our draft statement online under the URL you can see here, and we even have some printed copies at the Chaoswelle assembly, which is on the same floor, just in this direction towards hall two — and you can of course also talk to us directly. If you happen not to be a researcher, there may still be ways in which you can help.
If you're a German national, you can write to your representative in the Bundestag, especially if he or she is in the Ausschuss für Wirtschaft und Energie — the committee for economic affairs and energy — and there is a tiny URL here that will give you a list of all the people who are actually members of it. If you're from a different EU country, your country also has to implement this EU directive, number 2014/53/EU, and you could try to figure out whether your country is doing a better job at it — and if not, fix it. If you're from the US, there's a very similar regulation from the FCC, which you may have already heard about, so try to fix that one. And if you happen to be from somewhere entirely different, you can just laugh at us silly Europeans with our silly laws. Thank you. Thank you. Okay, then we'll have the last talk for today — before the break, of course. That's the Brownian motion talk, it's a 16:9 ratio, and we want to show a video first. I hope that works. Yeah, hi, I'm Herman. I'm from the Netherlands, and I'm going to show you something about this. Can I start it? Yes, please. This is something I created last summer. It shows the movement of random particles, and when the green and the red particles come together, they form white particles. So this is a simulator for the random movement of particles, and for reactions and diffusion. As a bionanoscience student, this is something I'm very interested in — this is something we were working on, and I thought I should implement something like it, but web-based. So this is entirely running in JavaScript. Can I go to the slides? Okay.
Right, so what you just saw is something called Brownian motion. It basically means that if you have particles in a liquid — which is what your body is, for example; there's water everywhere — then these particles move randomly, because water molecules everywhere are colliding with them, pushing them in random directions at random speeds. As a result, you get something called a random walk, and you can very easily simulate this by picking a position and updating that position in a random direction at every step of your simulation. So that's very easy, but it gets a little more difficult when you want to do reactions, which is what I'm going to talk about briefly: having two particles collide and transform into a third particle. What you can do is compute the distance between all the individual particles and then just look at which particles are close enough to collide. But that scales quadratically — if you say "I want to simulate 100,000 particles", it's extremely slow. So instead, what I'm doing is compute a voxel position for each particle and only check collisions within the voxels, and the algorithm scales much better. In two dimensions that looks something like this: you have all these particles randomly distributed, you put them in voxels, and then you can find collisions — you can, for example, say that particles in the same voxel are close enough to react with each other. One interesting aspect is the data structure for doing this. Basically, you first have to find the voxel for every particle at every step of your simulation, and then you have to do the collisions. If you're very naive, you can build a three-level hash table — and it will be extremely slow, because you have to build a huge hash table every cycle.
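The voxel bucketing described above can be sketched in a few lines (Python here rather than the talk's JavaScript, and with a simplified collision rule — particles sharing a voxel are considered colliding):

```python
from collections import defaultdict
from itertools import combinations

def find_collisions(particles, voxel_size=1.0):
    """Bucket particles by voxel; only particles sharing a voxel can collide.
    particles: list of (x, y, z) tuples. Returns a list of index pairs."""
    buckets = defaultdict(list)
    for i, (x, y, z) in enumerate(particles):
        # Integer voxel coordinates serve as the hash key.
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        buckets[key].append(i)
    pairs = []
    for members in buckets.values():
        # Pairwise checks are now O(k^2) only within each voxel,
        # instead of O(n^2) over all particles.
        pairs.extend(combinations(members, 2))
    return pairs

particles = [(0.1, 0.2, 0.3), (0.4, 0.5, 0.6), (5.0, 5.0, 5.0)]
print(find_collisions(particles))  # → [(0, 1)]: only the first two share a voxel
```

A real simulator would also check neighboring voxels so that close particles on either side of a voxel boundary are not missed; the sorted packed-key list the speaker describes next is an optimization of this same lookup.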
So instead, you can pack all three dimensions — x, y, and z — into one integer and build a one-level hash table, or you can build a list of tuples, which is what I'm doing in this program: you have the position, which is this 30-bit integer, and the particle number. Sorting this list by position gives you something like this — the yellow square is actually positioned wrongly — and then you can iterate through the list, find the particles that are close together, and do computations based on that. It's much more efficient, also from a storage point of view — memory caching and all those kinds of things. Apart from that, I'm also doing multithreading, kind of: I'm using web workers to separate the rendering of the simulation from the computation of the frames. So when you have a very large simulation, where every frame takes maybe a second, you can still work smoothly with the web page. To efficiently communicate this data from the web worker to the render thread, and through to WebGL, I'm using a byte buffer, because it copies much more quickly — if you use JavaScript objects, they have to be serialized every time you move them towards WebGL. So I'm creating one large byte buffer and using small views into it to store the positions of the particles. And finally, I use some WebGL shaders to draw a nice sphere at the position of each particle — you can have a million of these spheres and it will still render very quickly, so the simulation will not be blocked by the WebGL rendering. You can find the source code online, it's open source, and you can actually try it at the URL up there. You can build your own simulation by putting together membranes and particles, et cetera. Right, that's it, thank you. Thank you. Next, we will continue with a four-by-three talk on the Open Steno Project. Hello everybody. It seems not everybody is back yet, but I'll start anyway.
Well, I will briefly introduce the Open Steno Project, which is currently — hopefully — up and coming. I'll talk about why it is needed and how it works, and we'll look at what is going on with it and what can be done in the future. Don't panic, I'm not talking about the drawing in the lower right corner — I will talk about the two guys here in the middle. Those are doing steno the old-fashioned way: with a sheet of paper and a pencil. Then we have subtitles, for TV shows for example, and those are live — so obviously that cannot be done with a sheet of paper and a pencil, because somehow it needs to get to the TV, and that's where hardware and software come into play. We also have subtitles here at the C3 — you can see them in hall one and hall two, and the guys there are hacking their fingers off to get you the subtitles. So if you are fast enough, join them and have fun. Now, let's have a short look at writing speeds. As you can see, handwriting is obviously not sufficient to capture a speaker at roughly 160-plus words per minute. Even the fastest typists on QWERTY and Dvorak aren't there yet — I know, there are exceptions to everything. And as you can see, stenographers really do reach the speed of more or less what you can speak. With speech recognition — I think everybody would bring that up — there are issues, at least for the time being, and it is not sufficient, specifically for technical talks and so on. So that is one of the areas where we need some improvement. Steno is needed for those who are not as fortunate as we are — the hearing-impaired — and you can use it for more or less word-by-word protocols, taking lecture notes, and so on. Speed is everything in steno. The point is, until now there has only been very expensive equipment for this, which is proprietary and not available to everybody.
The other thing is, if you attend such a steno school, it is usually very expensive and the dropout rate is very high — as you see here, it's a four-digit figure. But steno is now coming to a point where it is available to everyone. Some background: steno is a chording system. You press multiple keys at a time, the computer translates this into text, and that can then be output to anything you want. It's a little bit like a piano, so you can roughly compare it to that. And as you can see, the keyboard looks a little different — that's because you don't move your hands around to press the keys the usual way. So, coming now to the Open Steno Project: this is where we introduce open-source hardware and software, and we also have freely available resources for learning, and documentation. That enables everybody to get started. Say hello to Dolores — that's the mascot of steno. These are Mirabai Knight and the people who are programming this in Python. As mentioned, we have everything available — we even have games, Steno Arcade, documentation, wikis. To start with, you need only a keyboard. Here are a few hand-made or 3D-printed ones; if you want, I also have some of them with me, and we also have a steno machine, if you want to have a look. Thanks. Thank you. Next talk. The next lecture is 3D-printed medication, in widescreen — quite a wide picture. Okay. Very nice. Right, this is about 3D-printed medication. Briefly, the motivation: on the left you see that with normal intake and excretion, we stay in the green area. That is the therapeutic window — this is the blood level of the medication. And on the right you see what happens if you forget a dose: we drop too low, meaning the dose is too small — either no effect, or just placebo. And then it takes six days until we get back into this therapeutic window.
And it would be really nice if we had 3D printing and could print the right dose. Also on pharmacodynamics — that is about what the medication does to the body. We see here that the effect, i.e. the inhibition by these muscle relaxants, which are used for anesthesia, depends on the dose. In short: the more dose, the more effect, whatever the drug. And you can see it's a relatively sensitive system. It also depends on body weight, which you see down there. So it would be good if we could adjust the dose precisely. And the whole thing additionally depends on age, sex, genetics, and so on. Being able to combine drugs is another advantage: there are people who swallow 20–30 pills per day, and it would be nice if they only had to take one in the morning and one in the evening. It's also easier to swallow, because we naturally get a higher density of the active ingredient — which is good for children or elderly patients. Updates are easy to obtain, and excipients or the dose, as I said, can be adjusted quite well. Here we see briefly how a medication gets approved. It takes forever; there are many candidates at the beginning and very few at the end, and people want to be compensated for that. It costs a lot of money — about 100 million euros and roughly 10 years, they say. There are also preclinical studies, but those don't cost as much, because there are underpaid doctoral students for that. All the more astonishing that last year the FDA — the American approval authority — approved this epilepsy medication, Spritam. Its active ingredient, levetiracetam, is fairly well established. It was approved as a product made via a 3D-printing process. And they do it as follows: they have a powder mixture.
And they print one layer of powder, then a layer of water as binder, a layer of powder, a layer of water, until bit by bit a real pill emerges. They have also announced that they will bring further psychopharmaceuticals to market using this technique, which was developed at MIT. There are other approaches, too — for example at University College London, where they use a hot-melt extrusion process: the material is first heated, then melted, then pressed. That even makes it possible to produce medication in pyramid or donut shapes. And there are actually scientific papers — we have seen them — that investigate the pharmacological advantages of donuts, because they exhibit a special surface-to-volume ratio. A quick look into the future — which of course nobody can predict — but there are people, for example in Glasgow, working on "reactionware". That would be even more: a real chemical synthesis lab, all in one, which you would operate as an app or a program. It would be really nice if that existed. And prostheses and implants: there is this quite well-known Palestinian doctor who, for example, printed a first stethoscope for 30 cents, with the help of the community, on a 3D printer. By now they are also building dialysis machines, ECGs, and so on. So this can get really sophisticated, and it is all freely available. We think this is a good development. This is the future, and medicine and pharmacy need more 3D printers. Thank you. Thank you very much. Now we are going to continue in English: malware analysis and storage system, in a 4:3 ratio. Yeah, hello, my name is Fabian. I will talk about the Malware Analysis and Storage System. A quick intro — who we are, what we do: we are a group of IT security researchers from the University of Bonn and Fraunhofer FKIE.
Well, we are involved in analyzing threats and in identifying newly emerging malware campaigns, for example — and of course in providing meaningful information about those attacks and threats to companies, to law enforcement, et cetera. All of those tasks involve creating very specific tools: to analyze specific file types, to analyze malware samples, URLs, et cetera. And you can imagine that there is a lot of data and a lot of analysis results coming in, so we need some system to accommodate all that data — and unfortunately, up to now we didn't have such a system. We want a system that can execute those analyses with different tools in a distributed and scalable way, gather the reports, and store them in a way that is easily accessible and queryable by other researchers who want to gain intelligence from the system. And of course, malware is changing a lot. New threats come up, and new tools are needed to analyze them, so the system should be easily extendable with new functionality. Also, there are a lot of tools available, and some — even most — of them are not free and not open source. So we want a system that is really open to other researchers. Let me quickly give you an introduction to how the system is supposed to work. If you are a researcher, or just an interested person, you can submit, for example, a malware file to the MASS server, and it will automatically be distributed to matching systems that can analyze that file. These systems will then contribute their results back to the server, where they will be stored in a database, and you have a very sophisticated way to query those results and gain information about the malware samples.
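To make the submission flow concrete, here is a hedged sketch of the client side: hash a sample locally and build a submission record. The JSON field names here are purely hypothetical — consult the MASS repository for the real API:

```python
import hashlib
import json

def build_submission(sample_bytes: bytes, filename: str) -> str:
    """Build a hypothetical submission record for an analysis server.
    The field names ("filename", "sha256", "size") are invented for
    illustration; MASS's actual submission format may differ."""
    record = {
        "filename": filename,
        # The hash lets the server deduplicate samples and lets clients
        # query results for a sample without re-uploading it.
        "sha256": hashlib.sha256(sample_bytes).hexdigest(),
        "size": len(sample_bytes),
    }
    return json.dumps(record, sort_keys=True)

print(build_submission(b"MZ\x90\x00...not-really-a-PE...", "sample.exe"))
```

A real client would POST this record together with the file to the server, which then dispatches the sample to the matching analysis systems.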
Querying, or getting at that information, is possible interactively via a web interface, where you can browse through the available data, or via a REST API, where you can programmatically download information from the system. We are in the process of putting the code for the system on GitHub. The code for the server is already available, some other code will follow in the near future, and it will all be under the MIT license — so it's very free to use. Obviously it is still growing; it is not finished. Currently the project is in a state where we can say "it works for me, but probably not for you". It needs some polishing, some features are not there yet, and unfortunately we can only take one step at a time. So if we get additional contributors who are interested in such a system and want to bring in their own ideas, they are very welcome — right now the project is at a stage where you can really help shape it according to your needs. So if you are interested in such a system and want to help, you are really very welcome. And that is already the end of my talk. I invite you, if you are interested in using this system or even developing it, to contact us and get in touch for additional information and further details. You see the contact details here. Thank you for your attention. Thank you. Coming up: Coala IP, in widescreen. Hi. Just give me a second — it's on the wrong screen now. Yeah. Hi. My name is Tim, and in the next five minutes I want to give you a quick introduction to Coala IP, which is an open intellectual-property licensing protocol. You might have seen this comic already. It touches on several of the many problems creators have on the internet when they publish their creations: middlemen extract a lot of the value, content licensing is incredibly difficult for laymen, consumers aren't given a chance to pay the artists they love, and on the web we also have a big problem with attribution.
You can see it applied in this picture. So, to address these problems, half a year ago the Coala IP working group set out to create a protocol that brings transparency into content licensing and visibility into a work's usage on the web. Imagine Coala IP applied to this picture: it would mean that in Google Images you would be able to tell, for each image, the applied license, the original creator, and some way to compensate the creator — for example a Bitcoin address, or even just an email address or some other contact information. Coala IP is modeled using JSON-LD, and it allows one to describe the life of a digital asset from the author's idea, through the creation process, to the distribution of the artwork on the web, or wherever, and beyond. Its data is organized in a graph. On the right you can see a very basic graph of how Coala IP could be used to license a book: the authors timestamp their idea of the creation on a public ledger, they attach the produced work as a manifestation, they supply a distribution license, and maybe they add a digital fingerprint to identify the work. Coala IP uses RDF as a serialization format, which, as a base layer for the data models, allows them to be extended quite easily and developed iteratively, and it facilitates a community-driven definition of the models. Additionally, it allows branch-specific extension of the models to fit other industry use cases — for example the music-licensing use case you can see here. In the Coala IP protocol, links can point anywhere, as long as they are resolvable within the world wide web — meaning that, applied to the web of today, this has an interesting characteristic: the networks this data lives in can be both private and public, yet still interconnected.
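As a toy illustration of the book-licensing graph described above, here is what such a record might look like as JSON-LD-style data. The field names are modeled loosely on schema.org and are my own invention — this is not the normative Coala IP schema:

```python
import json

# Hypothetical, simplified record linking a work, its manifestation,
# a license, and a way to compensate the creator. All field names
# are illustrative only, not Coala IP's actual vocabulary.
work = {
    "@type": "CreativeWork",
    "name": "My Book",
    "author": {
        "@type": "Person",
        "name": "Jane Doe",
        "contact": "jane@example.org",  # invented compensation contact
    },
    "manifestation": {
        "@type": "Manifestation",
        "fingerprint": "sha256:<digest-of-the-file>",  # placeholder
        "license": "https://creativecommons.org/licenses/by/4.0/",
    },
}
print(json.dumps(work, indent=2))
```

The point of the graph structure is that each node (work, manifestation, license) can be timestamped and referenced on a ledger independently, so rights transfers only need to append new edges rather than rewrite the record.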
From the beginning it was designed to be ledger-agnostic, meaning every ledger that supports JSON-LD and the ILP protocol — the Interledger Protocol by Ripple — can be used to persist and transact data. Hereby, linked data facilitates the establishment of a global ontology, and Interledger's crypto-conditions enable cross-ledger transfers of digital assets inside transactions. Coala IP's goal is to make atomic transfers of money and licenses possible. It encourages IPLD — InterPlanetary Linked Data, basically a Merkle tree implemented in a JSON data structure — which allows retrieved data to be cryptographically verified, the provenance of an artifact to be tracked, and the ontology to be asserted. Since all data is supposed to live on tamper-resistant ledgers, signed claims cannot be deleted or censored; they can only be refuted, for example. A ledger that is already supported by Coala IP is IPDB, the Interplanetary Database. It's a network running a decentralized public database, but it's also a foundation acting as a non-profit that governs the database — it's run by IPFS, Coala, and the Internet Archive. We have all the source code licensed under free and open licenses — Apache — and we also have a white paper on GitHub. It would be really nice if you would check it out, or, if you're interested, come and talk to me. There's also my email address, because we're looking for contributors. Thank you. Thanks, right on time. So now it's time for the annual Void Linux talk, in stunning 4:3 ratio. Hello, good morning. I'm talking again about Void Linux. Void Linux is an open-source Linux distribution, as I said. It was founded in 2008 as a testbed for package management on NetBSD. Then Juan, the founder and project leader of Void Linux, moved from NetBSD to Linux, and that's where Void Linux comes from. What are the core features of Void Linux? As I said, it's a Linux distribution, and it's completely rolling release.
So as soon as a new commit hits our GitHub repository, our build server picks it up, builds the packages, and releases them fully automatically — so no release cycle is needed. We are quite different on some points: we are using LibreSSL instead of OpenSSL, and instead of systemd — we were an early adopter of systemd and dropped it around 2012 — it is replaced with runit. Our package management system is called XBPS, and it is the base of our whole distribution. XBPS itself is a completely new package manager. It supports signed packages, and it has very nice features like support for downgrading packages from the maintainer's side — so we can say "hey, we screwed up a package, let's release an older version again", and it gets deployed automatically with the next update. We also have a very neat tool called xbps-dgraph, with which you can generate dependency graphs of your favorite packages. This graph is actually outdated, because we are starting to link Bash statically, as some readline updates screwed something up. So why use Void Linux? Development is quite fast. For easy updates, you can expect around 48 hours from the release by the upstream maintainer to the package; for harder updates it's a little longer, but the mean is around that — you get quick updates. It's very easy to get involved: we have a very active IRC channel, and we are on GitHub, where you can easily fork, update your packages yourself, and contribute back to us. So, our Arch Linux users here, please raise your hands. A few? Oh, more. Okay? I do this every year. Debian, Ubuntu users? Mac? Okay, a few. And in the 2013 talk there was one active Void user; in 2015 there was another active user. So please raise your hands — today, how many? Two? Three? Three? Wonderful. Whoo! Great. Very nice. We are growing. Okay, the last year in commits: 14,000 commits in the last year.
Our package repository is at around 60,000 commits in total. We've got 7,000 packages on x86, 6,000 packages on ARM, and we have grown by about 1,000 packages since last year's talk. Okay, congratulations to the new kid on the block. Awesome build system. Thank you.

Okay, now the next talk. It also uses a 4x3 ratio.

Hello, everyone. I want to tell you about esPass. Some of you might have already seen it when downloading your pass for this year's Congress. This year, there is a new format you can download: before, there was only PDF and the wallet format, and this year, for the first time, you could download the esPass format. And I want to tell you what this is all about. To give you some context: I really love trees, and I really hate when they end up like this. If at all, a tree should end up like that. At the 29C3, I stumbled upon the Passbook format for the first time. The problem was there was no Android app that was working for me, so I had to make one from scratch, because the apps that were there were not free software, so I could not fix my problems. So I started an app from scratch. It's called PassAndroid, and I have used it for all chaos events ever since, and a lot of other users came on board. But I had some problems with the Passbook format. It's really Apple-driven, and I'm not a big Apple fanboy, and I couldn't really change things in there. And I had some ideas how to change things about pass formats, and so I needed to create a format. At the moment, on the client side, it's supported by PassAndroid, and on the server side, it's supported by pretix. pretix is used for this Congress; big shout-out to the pretix team. It was really easy, even for a non-Python web guy, to write a plug-in to offer it as an output format. And this format is even registered now at the IANA; as a small side note, the Apple format is not. Shout-out to cketti, the maintainer of the K-9 project, who pushed me to do that.
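At heart, a digital pass of this kind is a small container bundling structured metadata with a barcode payload. The sketch below builds such a container as a ZIP with a JSON file inside; the file name `main.json` and the field names are assumptions for illustration, not the authoritative esPass specification.

```python
import io
import json
import zipfile

def build_pass(description, barcode_message):
    """Bundle pass metadata into an in-memory ZIP.

    Illustrative only: the entry name `main.json` and the JSON
    field names are assumed here, not taken from the esPass spec.
    """
    meta = {
        "description": description,
        "barCode": {"message": barcode_message, "format": "QR_CODE"},
    }
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("main.json", json.dumps(meta))
    return buf.getvalue()

# A client app would unzip the pass and render the barcode.
data = build_pass("33c3 ticket", "SECRET-TOKEN")
with zipfile.ZipFile(io.BytesIO(data)) as zf:
    meta = json.loads(zf.read("main.json"))
```

A ZIP-plus-JSON layout keeps the format easy to generate server-side (as the pretix plug-in does) and easy to parse on any client.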
What are these things all about? The most important thing about these passes are the barcodes, and that is the main thing I want to change. Currently, for this Congress, it's still a password-style barcode, but now we can change that: the barcode will look the same, but we will use private keys behind it. Why should we do that? You have seen: perhaps all of you have a ticket, but some were unfortunate and didn't get one. And currently this barcode just represents a password. If someone passes it on, you don't know if the seller sold it like 100 times, and if you're not the first one at the door, you will not get in. So that is prone to fraud. You shouldn't do crypto yourself, so one option is to use an existing crypto system called colored coins, but I don't like the aspect of Bitcoin that it burns so much energy. There's another option: Ethereum has an upgrade path to proof of stake, so it won't burn that much energy and won't harm the environment that much. There, we can use a smart contract, in this case a token contract, basically like a coin contract: see the pass as a coin you pass on to someone else. As long as you keep your private keys secure, you know you're the only one able to access the event with that pass. And then it gets easy to resell a ticket, and there are legitimate reasons for that: if you get sick, you want to pass it on, and the buyer wants to be sure that it's a real ticket. You can even avoid problems like this. We can solve that technologically, but there will still be an ethical problem of who should get the tickets, and you don't want people like this to get them; it could really ruin the Congress if scalpers snatch up the tickets. The other thing is, we can use that to get anonymous tickets, because the other way to make passes transferable is to bind them to a name. But I don't like that.
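The token-contract idea can be modeled very compactly: each ticket belongs to exactly one address, and only the current owner may transfer it, so a barcode sold twice is worthless the second time. This is a toy in-memory model of the semantics the talk describes; a real Ethereum contract would authenticate the caller by verifying a signature over the transaction rather than trusting a `caller` argument.

```python
class TicketContract:
    """Toy model of a token contract for event tickets: one owner
    per ticket, transfers only by the current owner. A real smart
    contract would verify the caller's signature on-chain."""

    def __init__(self):
        self.owner_of = {}  # ticket_id -> owner address

    def mint(self, ticket_id, owner):
        if ticket_id in self.owner_of:
            raise ValueError("ticket already exists")
        self.owner_of[ticket_id] = owner

    def transfer(self, caller, ticket_id, new_owner):
        if self.owner_of.get(ticket_id) != caller:
            raise PermissionError("only the current owner can transfer")
        self.owner_of[ticket_id] = new_owner

c = TicketContract()
c.mint("ticket-1", "alice")
c.transfer("alice", "ticket-1", "bob")

# Selling the same ticket a second time now fails:
# alice no longer owns it, so the fraudulent transfer is rejected.
try:
    c.transfer("alice", "ticket-1", "carol")
except PermissionError:
    pass
```

The buyer only needs to check on-chain ownership to know the ticket is real and unsold, which is exactly the fraud-resistance argument made above.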
I don't want to give my name to every event I go to. You can make member-only events, or have just the first tickets go to members only. You can have really nice early bird sales with small progression curves in the beginning. I have to skip that a little bit, but come talk to me about that afterwards; I have some extra material for after the talk. I'm really happy about feedback. Just grab me here at the Congress, talk to me here or online, or just drop me an email. Use esPass when possible. I could use some iOS, web, frontend and backend help; spread the word. These are some links: espass.it is the link for the project, ligi.de is mine. Have a nice day and a nice Congress. Thanks. Thank you very much.

Next up, Wo ist Markt. It's a German title, but I think it's an English talk. And it uses a 16x9 ratio.

Okay. Hi. I'm Tobi. I want to introduce you to Wo ist Markt: where's the market? A quick show of hands, who's visiting markets regularly? Like one, two, three, oh, it gets more. Okay, some. So who knows Wo ist Markt already? One. There are some. Yeah, okay, I brought some people. This is an open data project. It started in the city of Karlsruhe, then it went to Berlin, and yeah, it grew. So this is about markets, any kind of markets: farmers' markets, flea markets, anything you could think of. The problem is, it didn't work for me. There are cities that publish information about where and when markets are happening, but mostly those websites are really bad. And we tried to build something that is simple. And it's a map. So it's basically just showing where the markets are, and then you can click on those items and find out when they are happening. There are some features like what you would expect: you can filter by time, you can select different cities, and you can find out where the data came from. It also works on your mobile phone. Nothing more to say to that. And currently there are 36 cities taking part in the project.
We started in February last year, and thanks to all those people, those cities came in. The idea of the project is that we provide the infrastructure as a website, but regarding the data itself, we want the cities, or someone from the city, to maintain that. So if you are from Hamburg, you would put that data in and keep it up to date. So this is a shout-out to get you people involved: when you go home to your cities, maybe you want to get them onto Wo ist Markt. It's very easy if you want to add one market, and we are not discriminating against anyone: we have cities with a single market and we merge them in. It's quite easy. You need the location, the opening hours and a title, more or less. The opening hours format is taken from what OpenStreetMap is using; it's quite readable. And in case you cannot fit the opening hours into that, there's another property called opening_hours_unclassified, where you can add any text. There are some markets which do not take place regularly, and then you get that string. You see it on the map: there are different icons, and those indicate the unclassified ones. The code is on GitHub. We have an issue tracker which is full of items, so if you don't want to add data but want to get into code or anything else on the project, it's all there. We try to update that regularly and add those tags. And it's not only code. We're also trying to make it easy for beginners, so we have tags like "for beginners" and "help wanted" on some issues that are quite easy to step into. Further, there's a contribution guide. It's quite extensive: it tells you in three steps how you add a data file for your city and how you add code, and gives some help on Git itself. The whole project is MIT-licensed, and the pictures and other assets are Creative Commons. And yeah, I hope I see you. Thanks to all the code contributors. And last but not least, if you want stickers, which are brand new, nobody has them yet: grab me, I will be there. Thank you.
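A market entry of the kind described above, location plus title plus OSM-style opening hours, with the unclassified free-text fallback, might be sketched like this. The structure is modeled on what the talk describes; the actual wo-ist-markt data files may differ in detail.

```python
def market_feature(title, lon, lat, opening_hours=None, unclassified=None):
    """Build a GeoJSON-style feature for one market.

    Modeled on the talk's description: location, title, an
    OpenStreetMap-style opening_hours string, and a free-text
    opening_hours_unclassified fallback for irregular markets.
    Field names are assumptions, not the project's exact schema.
    """
    props = {"title": title}
    if opening_hours is not None:
        props["opening_hours"] = opening_hours
    else:
        # Irregular markets get the free-text string instead,
        # which the map shows with a different icon.
        props["opening_hours"] = None
        props["opening_hours_unclassified"] = unclassified
    return {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": props,
    }

weekly = market_feature("Wochenmarkt", 13.40, 52.52,
                        opening_hours="Sa 08:00-13:00")
irregular = market_feature("Flohmarkt", 8.40, 49.01,
                           unclassified="first Sunday in summer")
```

Reusing the OSM opening-hours syntax means contributors can copy values straight from OpenStreetMap and existing parsers can evaluate them.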
Next up, mailbox 3000, in 4x3 ratio.

Yeah, so for years I've been complaining about how Jabber doesn't really work for mobile platforms and how mobile messengers are not really working the way I want, mainly being federated or being able to run your own infrastructure. So that's why I decided to make my own, to make a new messenger, where the main idea was to have something easy: to make a clear-text protocol like IRC, and to have everything, well, not central. My idea was federated, because peer-to-peer introduces some complexity you always have to deal with. It should be easy to deploy a server, to not have the current situation like you have with email, where you are dependent on somebody who knows how to run a server. I also didn't want it to be easy to split the federation, like Facebook did, for example, or like Google did when they split off and said, yeah, now our XMPP servers don't communicate with others anymore. Furthermore, it should be available everywhere, so not dependent on any special protocol, reusing HTTP in this case; and also, in other cases, just for keeping it simple, reusing as much software as possible. So the basic principle is that the server is only acting as a message store. Essentially my idea was: why couldn't you use RSS together with HTTP basic auth, plus some method to push messages up to that RSS server, and make the feed available only to certain users? So in the end, that's what it is. You push a message up to a server, which stores it. Each message has multiple owners, for example when it's group chat. If you already are a user, you can create a new user which you want to send messages to. So it means you create a user, and then this user is the only one who can read the messages you send to your server. So you don't even need a second server to get the messages, or to proxy messages. A contact, in the end, is the URL of the CGI or the server where it is running, the username, and the password used for HTTP basic auth.
So the protocol uses HTTPS. HTTP is of course also possible, but there's no reason for using it. HTTP basic auth, as said, is used for authentication. For encryption, the Signal protocol shall be used; it's not implemented yet. It was formerly called Axolotl; it's the encryption protocol behind WhatsApp and behind Signal, where it's known from. The whole protocol is designed in such a way that everything can be implemented as a CGI, which also makes the protocol not as nice as it could be, because a CGI, for example, doesn't have access to all of the HTTP header fields. And another idea I stole from Matrix, if you know it, the Matrix messenger, is to somehow have ID servers where you can query for certain IDs. So you query for an external network ID, for example an email address, and it returns you the real, the mailbox 3000 ID for that one. Well, how far have I implemented it so far? Everything is written in C. The CGI is written in C, and I also have a client library and a small basic client which is in development right now. So far the only client I'm working on is, let's say, a proof of concept for Unix, and I'm working on making it work on other platforms, also making it work for Android, where I use the Telegram messenger, or to be more exact the data messenger, which is also a new messenger but something else, well, for Android. For the desktop I'd probably want to use Telepathy in the future. In the back end I'm using SQLite, and mbed TLS for encryption, which was formerly called PolarSSL, and I think they changed the name again to something similar to mbed TLS.
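The "server as a message store behind HTTP basic auth" idea needs nothing beyond stock HTTP tooling on the client side. The sketch below builds (but does not send) an authenticated fetch for a user's message feed; the URL layout and the `/messages` path are hypothetical, since the talk only specifies a CGI URL plus username and password.

```python
import base64
import urllib.request

def build_fetch_request(base_url, username, password):
    """Build an authenticated GET for a user's message feed.

    The `/messages` path is a hypothetical example; the protocol
    as described only fixes 'CGI URL + HTTP basic auth'.
    """
    # HTTP basic auth: base64("user:password") in the header.
    credentials = base64.b64encode(
        f"{username}:{password}".encode()).decode()
    req = urllib.request.Request(f"{base_url}/messages")
    req.add_header("Authorization", f"Basic {credentials}")
    return req

req = build_fetch_request("https://example.org/mbox.cgi",
                          "alice", "secret")
```

Because a contact is just URL + username + password, this one request shape covers the whole read side; pushing a message would be the same with a POST body.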
I'm still working on the basic client, so that everything is parsed correctly and stored in the database back end, and the protocol is still under change, because I modify it when I see something is not really working. The contact handling is not implemented yet; I know how I want to make it work, but it's not working that way yet, it's just not implemented. And how ID handling and spam protection are supposed to work is still an open issue. I mean, it's solvable, but I didn't specify it yet. Well, if you want to know anything about it, just go to the website. I think I didn't publish the source code; I mean, the source code is public, but not linked. Or just talk to me, or write me an email. Thank you.

Everybody's basically finishing on time. That's great. Next up, Le Reset.

Hi everyone. My name is Erdir. I'm from Paris. This is Dora and Arjen. We are from the Le Reset hackerspace. So maybe you're thinking right now: what, another talk about a hackerspace? We know what that is, there are plenty. But we are not like every hackerspace: we are a feminist hackerspace, and the meaning of Le Reset is a reset towards feminism. Before, I was in another hackerspace, and there were a lot of people passing by, sorry, and nothing got out, no project got out. Nothing went outside of the hackerspace. Why? Because people were just there for chatting, having beers together and stuff like this. So when the hackerspace closed, I said, okay, what did we build in a year? And the result was nothing. None of the projects were finished; nothing really worked at the end of the year. So I was thinking: why didn't it work? Because people didn't work together. No workshops organized. No sober people, because we had beer on tap for free, which could be great, but not if you want projects. Also, no one was excellent to each other, and no one was inclusive.
So I will now let Dora speak about what we wanted to do and how we did it.

Hi. So, we welcome people who usually do not feel safe or included in many hackerspaces. We thought first of people like me, who do not have much technical or infosec knowledge, who are too often asked "what are you coding these days?" and do not find people who can transmit knowledge properly. We thought of women who are sick of being asked where their boyfriend is to justify why they're here, and queer people who are sick of hearing jokes about them, and so on. So we heard about feminist hackerspaces and wanted to build something like that. It takes place once a week, on Sundays, in a queer bar, which is relevant because it is already a welcoming place for women and queers. It had to be a safer space, so we wrote a code of conduct, which includes respect of people's boundaries, respect of pronouns, no transphobic, homophobic or racist behavior, et cetera. And every Sunday since our opening in September, we have organized a workshop. We have one-to-one data encryption learning, queer video game making, lock picking, knitting. We wrote letters to Chelsea Manning. We organized a crypto party dedicated to activists, and we drank tea and played board games too. We wanted women and queers to be the ones who share knowledge, so we invited them to take the lead on the workshops. We connected with other feminist projects, especially against cyberbullying. And we used Twitter a lot, and received many questions from women who did not dare to come to a hackerspace before, and of course a lot of mansplaining. We are not here to teach things to our fellow mansplainers, so we keep our time and energy for the people we want to come to our hackerspace. We apply our code of conduct online too. So we let women and queer people know, before they come, that we will do whatever it takes to keep the place as safe as we can, and it works pretty well.
So, as a result, we have between 20 and 30 people coming every Sunday, and there has never been a majority of cisgender straight males, which means that they are always a minority in the hackerspace, which is cool. There are also a lot of queer people who feel welcome here, and a great neurodiversity, so people with diverse mental health issues are coming and feel safe, and it allows exchanging tips about, like, how to deal with an anxiety crisis and stuff like that, which is really nice. But there are also things that can be improved. There are not many physically handicapped people, especially because the bar is not accessible to people in wheelchairs, but we haven't seen a lot of people with other handicaps either. There is a great majority of white people, which is one of the biggest issues, and we would really like to change that in the months to come. And also, we are quite homogeneous in age. Thank you.

Okay. Hacking medical tech development is next.

Okay. Hello, I'm Christoph. I'm an associate of the Copernicus Science Center in Warsaw, Poland, which is the biggest Polish science center, and I would like to say a few words about a project we ran some time ago and the overall idea behind it. I'm into responsible research and innovation, which is a European Union scheme. It might sound like a horrible abuse of buzzwords, but it's actually a very interesting idea about how to get citizens and the general public involved in science and development projects, which is best exemplified, as the topic of my talk, in medical tech development. Medical tech development appears to most of us to be a job for huge corporations, but it's surprisingly friendly as a field to hacking, to making, to influence by random, or maybe not random but independent, makers.
Every year at the CCC we have at least one talk about medical tech, and it's pretty interesting all the time, and what's surprising is that you can actually get people outside of the corporate or academic fields of medical research involved. This is what we did at the Copernicus Science Center. A group of professional academic researchers came to the Copernicus Science Center willing to discuss a very particular medical problem, which is, as you can see in this simplified picture, a surgical procedure performed in the womb, on the fetus, and they had a very particular small mechanical difficulty. They were unable to realize a certain sort of valve for their medical equipment; everything was extremely small, and their team had difficulty achieving satisfactory results when developing this particular valve. This was just one example of a team of professionals who had a problem developing something for their project which could save lives, could help people. And this fits well into the responsible research and innovation scheme, because what we did at the Copernicus Science Center was invite people from a broad spectrum of sciences and interests, including makers, students, designers, also lots of professionals, but not from the medical field, nor from the typically related fields you would expect. We defined the problem for them, and we managed to convey it in layman's terms, describing precisely what we wanted to do. So even if somebody was a product design student with only theoretical knowledge about how to make nice, good-looking presentations about products, but still had some interest in medical tech and could somehow help develop, they were invited, and they were not taken aback by the difficulty of understanding the problem. We did an event: we made a hackathon. It ran in November this year. It was a huge success.
The photos I showed come from the hackathon, and we are very happy that it worked like this, and we would like to continue. We would like to make more events, run more projects, and create more open source hardware and software, because everything that came out of the hackathon is open source. The teams that submitted the best ideas during the hackathon now have the chance to develop them with the professional teams, and everything will then be open-sourced for the general public to reproduce. We are aware of other projects like these happening, mostly in developing countries and in places of civil unrest, where there are difficulties with accessing the traditional, conventional corporate solutions for supplying basic medical items, and we know that there is interest in what we are doing. We are seeking new people to get involved, to help us one way or another. If you're interested, drop me an email. I'll be very willing to discuss with you what we want to do next, because we will certainly attempt more hackathons and more events like these, and also engage in more projects dealing with new problems, which may be general or very specific. Thank you.

Thank you. Where are the slides? What did I do? No, the slides are here. Oh, there they are. Okay, thank you.

Hi, all. Now something completely different. I'm Jelena. I want to present to you our small event that we are organizing: BalCCon, the Balkan Computer Congress, which has been happening every year for five years now. Sorry. Okay. So, the next BalCCon will be next September: 15, 16 and 17 September in Novi Sad, Serbia. It's only a one-hour drive from Belgrade Airport, so it's very easy to come to Novi Sad, and this will be our fifth congress. We have been organizing it, thanks to the support of the Wau Holland Foundation, since the first year.
We came to this idea of organizing an international hacker conference in Serbia because in Serbia there are a lot of students, and the economic situation is poor: they don't have money to travel around Europe to hacker conferences like this one. So at the 28C3 we decided: let's make something like the CCC, but smaller, in Serbia, to present hacker culture to the Serbian students, tech guys and tech nerds, because at the moment there are only two active hackerspaces in Serbia, one in Novi Sad and one in Belgrade. So we decided to make an event, to start a small event, thanks to the support of the European hacker community. Every year we have a CFP open; it starts on the 1st of February, as it will this year too, and here you have the deadlines for this year: the 1st of July will be the submission deadline, on the 15th we send out acceptances, and in September is the conference. If you want more information about BalCCon, you can go to our website. We also have video material online from the last four conferences, from the first to the last year; you can find it on FTP or YouTube. Or you can find us on Twitter, and if you have questions about how to reach Novi Sad, how to get there, or anything else, you can drop us an email. We'll be happy to answer you, and I hope we'll see you at BalCCon. Thank you.

Thank you. Now, the last talk is not really a talk, it's more of a performance. I guess some slides always open on the second screen and I never see them. That's weird. There it is.

Moin moin, everyone. I'm Marius from Hamburg. So, I'm a local here, and I'm glad to see that so many people came to see the free software song presentation.
So, since this is an interactive presentation, I would invite you to first stand up and relax your body a little: if you want, just get up, if you feel like it, and perhaps roll your shoulders back a little so that your body is prepared for the full experience of the free software song. So, perhaps stretch a little, if this is okay for you. So, that's already enough. Let's prepare our voices a little bit. Join me in some notes, like: Join us now. Join us now. Again: Join us now. Okay, next one: And share the, and share the, and share the software. You'll be free. Okay. So, positions. Who knows? Thank you very much. That was the... Next slide. Next slide, okay. Well, please get the slides back. You wanted to show us something, I think. Thank you. Okay: merry hacking and happy holidays.

Yeah, thanks. Great, so this concludes today's Lightning Talk session. You can experience the whole thing again tomorrow, but with different talks, of course. So, yeah, maybe see you tomorrow, or see you anywhere else at the Congress.