Am I incognito? All right. Hi, I'm Tanoi Bose; my Twitter is Tanoi Bose. This is a small project that I was doing on privacy, and it's titled "Am I Incognito?". A quick shout-out for this project: there was a lightning talk given at BalCCon by Brian, which was a nice starting point for me to get interested in privacy on apps, and also to a guy named Smith, who basically spoon-fed me the idea when I was stuck at a point. The talk by Brian covered how you could use Tinder and Tinder's APIs to do mapping as well as trilateration to identify a user's location. It was a pretty interesting talk; you should definitely catch up with him if you want to know more about this.

So when I was doing my privacy exercise, of course, a lot of times I opened Google Maps, and I kept checking how you could plot data over Google Maps and how you can actually figure out information about people via Google Maps. One thing that always caught my eye is that when I opened Google Maps and stayed there for a few seconds, it automatically resolved my approximate location, and you could see some coordinates being appended in Google Maps itself. That was a bit confusing for me, so I approached Google and asked: how are you storing my data, and what is giving my coordinates away? And Google replied with the definition of an IP address. For people who do not know what an IP address is, you can definitely look at this definition, beautifully explained by Google. And then I asked Google: hey, can you stop sharing my approximate location coordinates? And they were like: we are not sharing your approximate location, we're just deriving it from your IP address.
Now, location via IP address: this is my location, where the red mark is basically the block where I stay. I am from the Emirates, and I looked at my IP address. The IP address location of the ISP is about four blocks away from my home, and this is generally the case for our entire city most of the time. However, in this case, in Abu Dhabi, if you have an Etisalat connection, you would find your location to be this point marked on the map, which is significantly farther away from my location.

Now, in Chrome there is a setting called Location Services, which was blocked for my website, because all my tests via Chrome were done via my website over there, a free hosted website. And there is something called Google Location Services, via which you can do API calls and identify your approximate location, which you generally opt to block so the browser is not able to identify your location anymore. So launching this API call via the browser, I got my approximate location to be somewhere close to the area. What was more interesting was that when I found these things out, I was kind of curious about what was going on. I approached this guy called Smith, and he was like: hey, why don't you look at Google's geolocate API? And he mentioned a few websites using this and collecting information from users. So it was interesting, and I went to the geolocation documentation. I found out that you need to supply it with your Wi-Fi or cell tower data, and it will give you a location based on that. And if you are not able to supply a Wi-Fi address or a cell tower address, it gives you your location based on your IP address. This was interesting, because it responds with the latitude, longitude, and the accuracy of the location coordinates that have been provided.
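As a rough sketch of what such a geolocate call looks like: the endpoint and JSON field names below follow Google's public Geolocation API documentation, but the API key is a placeholder and the sample coordinates are made up for illustration.

```python
import json
import urllib.request

# Google's geolocate endpoint (per the public Geolocation API docs).
# The key below is a placeholder; replace it with your own API key.
GEOLOCATE_URL = "https://www.googleapis.com/geolocation/v1/geolocate?key={key}"

def build_geolocate_request(api_key, wifi_access_points=None):
    """Build the POST request. With no Wi-Fi or cell data supplied,
    considerIp tells the API to fall back to IP-based location."""
    body = {"considerIp": True}
    if wifi_access_points:
        body["wifiAccessPoints"] = wifi_access_points
    return urllib.request.Request(
        GEOLOCATE_URL.format(key=api_key),
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def parse_geolocate_response(raw_json):
    """Extract (lat, lng, accuracy_in_meters) from the API's JSON reply."""
    reply = json.loads(raw_json)
    loc = reply["location"]
    return loc["lat"], loc["lng"], reply["accuracy"]

# A response shaped like the one described in the talk
# (an IP-only lookup with roughly 1,400 m accuracy):
sample = '{"location": {"lat": 24.45, "lng": 54.38}, "accuracy": 1400.0}'
lat, lng, accuracy = parse_geolocate_response(sample)
```

Sending the built request with `urllib.request.urlopen` returns exactly this kind of JSON, which is what gets plotted on the map in the next slide.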
So of course, when this kind of an API exists, you can script an easy XHR request querying the Google API services and host it on any website, and what you actually get is nothing but the location coordinates. Even when you have blocked your location — as you can see in the location services at the top, I have blocked it for the website — you still get the coordinates on execution. On the left side is the code that executed on the website on my domain, and on the right side is the plot for those coordinates. You can see the accuracy is also mentioned: around 1,400 meters, that's 1.4 kilometers. It's decently close, though quite a wide range; you can see it's still away from my block. However, you can turn on a GPS spoofer and things like that, so I don't allow Google or any services to geolocate me into my block — I spoof my GPS. I'm very sorry — five, four, three, two, one. All right. Thank you.

Next up is lernOS. Good morning, Congress. My name is Simon. I'm located at the Sendezentrum podcasting assembly, and I want to make a short pitch for systematic lifelong learning. "Lernos" is a term — a verb coming from Esperanto, the constructed language — and it's the future tense of "to learn". So it means "we will learn" or "I will learn". And I will talk a little bit about how I think we can hack our own lifelong learning system. The problem that I see is that in more and more knowledge domains, the half-life of knowledge gets shorter and shorter. It's not so much about the knowledge that you acquire at school, about history, things like that. But if you think about technology and IT knowledge especially, the half-life gets shorter and shorter. This means that we have to learn on an ongoing basis, and also in a systematic way.
The second problem that I see is that our education systems are not prepared at all to teach us these lifelong learning mechanisms. Think about school: we send our children to school, a very formal approach with a fixed curriculum and fixed teaching methods. We don't really teach them how to learn in a self-organized way. I think the same goes for higher education: with the bachelor and master processes, we apply more and more of the methods we have in elementary schools to higher education as well. And I think it gets even worse if we look at working environments, where a lot of people think that when you start working, learning ends — because learning was in school, and now you have to work, and every day spent on learning and trying out new things is a waste of money and time.

So the idea of this lernOS learning hack, so to say, is to put four ingredients together — methods that are well known in the business domain and also in the IT domain. One is Scrum, the agile project management approach. The idea is to have so-called lernOS learning sprints of 13 weeks, to give yourself a cadence for your learning process. Think of it like having school years or half-years in university or school, where the education system or a teacher provides you with material, learning goals, and a curriculum. When this formal education ends, you have to do that on your own. So you would do four sprints a year, each with a classical structure: planning of the goals, a learning process, and a retro at the end. Of course, you can adapt it to your needs. In terms of goals, we try to use a method called OKR. It was developed at Intel in the 80s and made famous at the end of the 90s at Google.
It's sort of the strategic management system at Google, where you try to manage goals over the levels of the whole corporation, the individual teams, and the individuals. You set a moonshot objective — a very ambitious goal — for one sprint, and define three so-called key results, so that you can measure what you achieved at the end of the sprint. For example, here at the Congress, we tried to develop, over one sprint, a guide where you can learn how to podcast and use podcasting as a knowledge-sharing tool.

To manage the learning process, we use a rather old self-organization method called Getting Things Done by David Allen, which kind of replaces the job of the teacher: you organize your learning tasks on your own and put them on a Kanban board. We are also working on pre-prepared boards that you can use to manage the to-dos of the learning process. And of course, if you did something, you should share what you learned and what you did. There we use an approach called Working Out Loud, defined as making your work observable. A lot of you, I think, do that by putting stuff on GitHub or publishing presentations like we do here — but also by narrating your work: talking about your work, about lessons learned, about what didn't work. That's where podcasts come in, for example, where you can talk about what worked and what did not.

I put more food for thought on all four approaches into the presentation. There are a lot of sources, like the YouTube video by Rick Klau from Google Ventures, for example, or the podcast with David Allen talking about Getting Things Done. And the idea in the end is that you learn lifelong, from now onwards until the end of your life, so to say. lernOS is a project that lasts for six years; we are in the middle of it, so there are three years to go.
If you want to, there are some addresses where you can join the community, and also a Twitter account. No matter whether you learn with lernOS or take another approach, I would like to motivate you to keep calm and learn on. Thanks. Thank you.

Next up is sms4you. Hey, good morning. I'm Felix, and I want to bring you sms4you. I think it's a valid question: why do we still need SMS in 2020? Because not everybody wants to have a smartphone; because certain services — think about banks — use SMS for verification, in the form of mobile TANs; because sometimes only GSM is available, and so SMS is the only thing we can send; and because, in the end, with the whole mass of messengers in gated communities, it is still the least common denominator for text messaging.

Why not just receive SMS on the phone itself? I think there are a lot of reasons. One is that you're progressive and want to use other means of communication. Maybe you're traveling, and in some countries you use a different SIM card, and you would still like to be able to receive the messages. Or, in my case, I don't want to carry a registered SIM card with me and get into the whole tracking and worldwide movement profiles and so on. The opposite use case is also valid: maybe you only have a dumb phone, you're somewhere, and you want to send a message to somebody who is not reachable by SMS — so you want to use email or XMPP instead.

So, sms4you to the rescue. It started as a little script; we are now two persons. This is a talk because I'm looking for more people who are interested in it and maybe want to jump on and use it. It's a gateway between short messages and other means of communication. Currently we support email — yes, it's a hack, but it works — and XMPP, which is the more solid approach. You basically need a modem — GSM, LTE, whatever — you connect it to a Raspberry Pi or another computer, and it will receive the SMS and send out an email to you.
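A minimal sketch of that receive-and-forward step might look like the following — assuming a GSM modem in text mode (AT+CMGF=1) whose AT+CMGL reply follows the standard listing format; the project's actual implementation differs, and the e-mail addresses are placeholders.

```python
import re

# Header line of a text-mode AT+CMGL listing, e.g.:
# +CMGL: 1,"REC UNREAD","+4915112345678",,"20/12/28,10:00:00+04"
CMGL_HEADER = re.compile(
    r'\+CMGL: (?P<index>\d+),"(?P<status>[^"]*)","(?P<sender>[^"]*)"'
)

def parse_cmgl(listing):
    """Turn a raw AT+CMGL reply into a list of (sender, text) tuples."""
    messages = []
    lines = listing.splitlines()
    i = 0
    while i < len(lines):
        m = CMGL_HEADER.match(lines[i])
        if m:
            # the message body follows on the next line
            messages.append((m.group("sender"), lines[i + 1]))
            i += 2
        else:
            i += 1
    return messages

def to_email(sender, text, forward_to="me@example.org"):
    """Wrap one SMS as a simple RFC 2822 message (addresses are made up)."""
    return (f"From: sms-gateway <gateway@example.org>\r\n"
            f"To: {forward_to}\r\n"
            f"Subject: SMS from {sender}\r\n"
            f"\r\n{text}\r\n")
```

In a real gateway, the listing would come from the modem's serial port and the wrapped message would be handed to an SMTP server; the sketch only shows the translation in between.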
You can respond to this email, and it will send the SMS back out. And the same thing with XMPP. So no matter where you are, no matter whether you have the SIM card with you, you will still receive those kinds of old messages. You can find it on GitLab. It's GPL-licensed, so we are here on the free side of the nice things. Thank you, and check it out. Thank you.

Next up is Verifpal. Let me just open it. There you go. Hello. I'm going to talk about this cool project called Verifpal. So, you use Signal, right? You use TLS, you use WhatsApp. All of these things are called cryptographic protocols, and cryptographic protocols are the systems tasked with assuring certain security guarantees, like confidentiality for the communications, or authentication, and so on. People design these security protocols, and they tend to be really complicated. For example, a relatively sophisticated protocol like Signal has to ensure certain cryptographic properties, like forward secrecy, and so it does this thing where it generates new encryption keys all the time, between every message. Other protocols, like ZRTP, have other considerations, because they're dealing with voice chat — encrypted phone calls, et cetera. So designing these protocols is really hard. For example, TLS went through many revisions — 1.1, 1.2, 1.3 — and 1.3 was the very first revision of TLS that was designed while actually working together with people who were formally verifying the design of the protocol.

So what does formal verification mean? With formal verification, you can basically prove certain things, or get assurances about the security guarantees of protocols: are they resistant to an active attacker? Do they really achieve their security guarantees? Generally speaking, formal verification is kind of an academic thing, and you can see people use, for example, the Z3 theorem prover.
There are interesting high-assurance programming frameworks like F* that allow you to write formally verified cryptographic primitives, and recently protocols as well. There are also modeling frameworks like ProVerif and Tamarin that allow you to write a model of a protocol — for example, a model of Alice and Bob speaking over Signal. And then you can ask questions like: OK, this is a model of Signal; can an active attacker decrypt Alice's first message to Bob? Can an active attacker impersonate Bob to Alice? So you can get a lot of interesting analysis based on the questions that you ask and the models that you make. Now, many papers have been published on this, but it's not really used a lot. Why is that? Well, because it's complicated: unless you're a specialist in cryptography, it's unlikely that you will be able to really delve into how Tamarin and ProVerif work.

So I am working on Verifpal. Verifpal also allows you to model, analyze, and reason about protocols, but it's really friendly. It has an intuitive new language for easily describing what Alice and Bob are doing. It has a modeling framework and engine that avoids user error and is easier to use. It even has a user manual that comes with a manga about formal verification, and it's really nice. So please check it out. It can reason about advanced protocols, even though it's really easy to use, and it has some advanced features as well. So try it out — you don't have to be a professional or a super advanced person. Everyone can learn how these systems work and reason about them. Definitely look at the user manual as well; it's really friendly and accessible, and I strongly recommend that you read it. Verifpal is free, open-source software. It's very new — I only released it about two months ago — and it's still under development, but it's really interesting to use.
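To give a flavor of the kind of question these symbolic tools answer, here is a toy deduction engine — not how ProVerif, Tamarin, or Verifpal actually work internally, just an illustration of the underlying idea: saturate an attacker's knowledge under simple deduction rules, then ask whether a secret becomes derivable.

```python
# Terms are plain Python values: ("enc", msg, key) models symmetric
# encryption of msg under key, and ("pair", a, b) models concatenation.
# Anything else (a string) is an atomic name like a key or a secret.

def saturate(knowledge):
    """Close the knowledge set under two Dolev-Yao-style rules:
    split known pairs, and decrypt when the key is already known."""
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for term in list(known):
            derived = []
            if isinstance(term, tuple) and term[0] == "pair":
                derived = [term[1], term[2]]           # projection
            elif isinstance(term, tuple) and term[0] == "enc" and term[2] in known:
                derived = [term[1]]                    # decryption
            for d in derived:
                if d not in known:
                    known.add(d)
                    changed = True
    return known

# The attacker observed an encrypted message on the wire, and
# separately the key leaked:
observed = {("enc", "secret", "k1"), "k1"}
# The query, in the spirit of "can the attacker obtain 'secret'?":
leaked = "secret" in saturate(observed)
```

Real verifiers add many more rules (signatures, Diffie-Hellman, hashing) and reason over all protocol interleavings, but the shape of the question — "is this term derivable by the attacker?" — is the same.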
It's free and open-source software under the GPL version 3, so please check it out at verifpal.com. You can download it for Windows, Linux, and macOS today and try it out. Thank you very much. Thank you.

Next up is Crazy Sequential Representations. Oh, hello, everyone. Today I'm going to tell you something about Crazy Sequential Representations, or CSR for short. CSRs are basically mathematical expressions in which all digits occur in order. This can either be in decreasing order from 9 to 1, or in increasing order from 1 to 9. Digits may be used as separate numbers, but digits may also be concatenated into larger numbers. There are basically five operations that you are allowed to use: addition, subtraction, multiplication, division, and exponentiation. In addition, parentheses may be used. And finally, numbers may also be negated; in other words, numbers may be used in their positive form, but also in their negative form. On the internet, there's a large list which gives an increasing CSR and a decreasing CSR for all numbers from 0 up to 11,111. And for all these numbers a CSR has been found, except for the number 10,958. So I thought: maybe I can find a representation for this number myself by doing some kind of brute-force search.

So let's say we want to iterate over all Crazy Sequential Representations which have three operations in them. First, we need to go over all the operations, which would look somewhat like this. Then, in the next step, we need to go over the different ways numbers can be concatenated — and we need to do this for the increasing order, but also for the decreasing order. After this, we need to go over the different parenthesizations, or at least the meaningful combinations of parentheses. And finally, the different ways in which negations can be applied.
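The search just outlined can be sketched in miniature. This toy version handles concatenation and the four basic operations over a digit string used in order, leaving out exponentiation, negation, and redundant parentheses (every split is parenthesized); it is an illustration of the recursion, not the optimized search from the talk.

```python
from fractions import Fraction

def representations(digits, _memo={}):
    """Map every exact value (as a Fraction) reachable from `digits`,
    used left to right, to one expression that produces it."""
    if digits in _memo:
        return _memo[digits]
    # the fully concatenated number itself, e.g. "1234" -> 1234
    found = {Fraction(int(digits)): digits}
    # split the digit string at every position and combine both halves
    for i in range(1, len(digits)):
        left = representations(digits[:i])
        right = representations(digits[i:])
        for lv, le in left.items():
            for rv, re_ in right.items():
                found.setdefault(lv + rv, f"({le}+{re_})")
                found.setdefault(lv - rv, f"({le}-{re_})")
                found.setdefault(lv * rv, f"({le}*{re_})")
                if rv != 0:
                    found.setdefault(lv / rv, f"({le}/{re_})")
    _memo[digits] = found
    return found

# e.g. all values reachable from the digits 1, 2, 3, 4 in order:
reps = representations("1234")
```

Exact `Fraction` arithmetic matters here: floating point would silently merge distinct values produced by division. Scaling this to nine digits with all five operations, negation, and parenthesization is what blows the count up to the hundreds of billions of expressions mentioned below.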
And instead of doing this only for CSRs with three operations, we actually need to do it for CSRs with one operation up to CSRs with eight operations, because nine digits have eight operations in between. This gives us about 725 billion different expressions to be evaluated. However, there are quite some optimizations one can do. For example, in many cases parentheses make no difference, so you can just skip them. And in many cases negations cancel each other out, so you don't need to evaluate these either. So we already had our list from 0 up to 11,111, which has now been extended up to about 2 billion, the upper limit of a 32-bit signed integer. In the increasing series, we have found CSRs for about 930,000 integers, and in the decreasing series, we have found CSRs for about 1.3 million different integers. However, for the number 10,958, no CSR was found — only CSRs that approximate the value. Some come really close, but none of them evaluate to the exact integer 10,958.

We have found many CSRs of the same length, so all these expressions evaluate to the same number and have the same length. For many numbers, we have found expressions without using specific operations — for example, CSRs without subtraction, without division, without exponentiation, or without concatenation of numbers. And for many numbers, we have found expressions in which specific operations occur at specific indexes. I'd like to conclude with the observation that CSRs are basically a proof of work: given a list of numbers, it is really hard to find their CSRs, but once you have them, it is really easy to confirm that they are correct and evaluate to the specific numbers. All this work is available online, and if you have any questions, please send me an email. Thank you.

Next up is how to become an Estonian e-resident. So, good morning, everybody.
My name is Markus, normally working for a great lighting company, and today I want to share my experience of becoming an e-resident. Two questions to the audience: who has been to Estonia before? Yeah, some hands, maybe ten. And who is an e-resident already? Nobody. Great. Okay, Estonia is one of the Baltic countries in Northern Europe, has only 1.3 million inhabitants, and is roughly the size of the Netherlands. I want to share with you why to think about e-residency, how to become an e-resident, some numbers and facts about e-residency, and at the end, how to sign digitally with it.

So one reason could be to escape from "Neuland", because Estonia is far ahead in the digital world — but the real reason is to be part of a state-of-the-art online community. Since 2000, Estonians have had a right to internet access — not the possibility, but the right. Since 2002, they have had digital ID cards; you heard from Switzerland that they are thinking, 20 years later, about making their ID cards electronic, so Estonia is far ahead. Estonians can vote online since 2007, and e-residency started in 2014. With it, you can establish and manage an EU company online. And by the way, the tax declaration is fairly simple and done in a few minutes, which is quite a good advantage.

Now some numbers and facts: there are about 60,000 e-residents worldwide, in 160 countries, and they have founded roughly 10,000 companies already, which have brought a revenue of 30 million euros to Estonia. So how to become an e-resident? First, you apply online: you leave your ID information, your address, and a kind of motivation, which can be fairly simple. The next step is to pay 100 euros — do it today or tomorrow, because it will rise to 120 next year. And for a win-win situation, you can use my referral code at the bottom, too; then we can both win, because it's possible to win a trip to Estonia. The third step is to identify yourself at the embassy.
This is the one in Berlin, for me. You have to go there to pick up your card: you show your identity, leave your fingerprints, and finally receive your ID card at the embassy, with your name and your digital ID. With the ID card you also get a card reader — it's in this small envelope — and you can now install the card-reader software, which of course works on every system; they are far ahead, I told you. You plug the card into your computer, and then you are able to authenticate yourself with PIN 1, a four-digit PIN, and when you sign documents, you use PIN 2, which has five digits. With this signature and authentication, you can sign documents, get a domain in Estonia, or establish your company if you want to.

So, is there anyone I have convinced to become an e-resident now? Oh, one, two, yeah, three? That's good, thanks — mission completed. And for the rest: if you don't want to become an Estonian e-resident, I have another idea. Visit Estonia. It's quite an interesting country, you can learn a lot, and the basic words you need are "tere", "aitäh", "nägemist", and "terviseks". If you have further questions, feel free to ask me via Twitter, email, LinkedIn, or whatever you like, or later on here at the conference. Thanks for your attention. Thank you.

Next up is the infrastructure village. Hello, everybody. I hope you're having a fantastic experience in the chaos this year. Let me give you my briefing for an idea that I'm having for next year: to have an assembly, an infrastructure village. Let's see if there is interest in that, and whether you're interested in helping me run this assembly. So let me start. Fifty years ago, hacking practically required just a few buttons with the correct tones, or some kind of basic electronics, in order to start hacking into telephone systems.
As the years went on, with computers, what you really needed was just a few hours on a machine — and then we got computers at home. Nowadays we can even have CPUs with 64 cores and stuff like that. But actually, the reality is that we use a ton of cores in our day-to-day life, even if not directly: graphics cards have a couple of thousand cores, and routers have a ton of ASICs and other very fast processing units. So this is what gave me the idea to create this assembly about whatever you can technically stack — be it Raspberry Pis, smart devices for whatever reason, Banana Pis, why not, FPGAs, or even just old-school x86-64 computers. In line with the CCC spirit: all architectures are beautiful. It doesn't matter what — you can do a ton of crazy stuff no matter what.

So what are the use cases? Who may be interested in something like that? Well, of course, self-hosting is one very easy example, where you do not need clustering per se, but you just delegate different tasks to different systems. NSA-proofing can be something very inspiring as well and can create a nice forum around the topic. And of course, red teams and blue teams can be very into this kind of stuff. As examples: systems for OSINT gathering and processing, scanning for vulnerabilities, orchestrating your non-consensual clouds, analyzing some malware, computing rainbow tables, man-in-the-middle setups, deploying honeypots, detecting intrusions, or even some DevSecOps for fancy business-oriented people. So if this is something that you're interested in, reach out to me. I'm going to have this website, infrastructurevillage.com, up very soon. You can also find me on IRC on hackint. And there is my phone number for the Congress, for a few more hours. Thank you very much. Thank you.

Next up is how to run a bad awareness campaign.
Hello, my name is Christian Kloos, and I want to tell you how to run a bad awareness campaign — and how not to. First of all, what is awareness? Many of you may know it; for all others: in the security sense it is mainly the knowledge and behavior of employees regarding the protection of information within an organization — that is, the employees knowing how to behave and how to protect their information. Since social engineering is often the first step of an attack, it is relatively important to have not only technical protections, but also prepared people. Now we want to turn the whole thing around and run a bad awareness campaign. That is, our goals are: make the employees unhappy — maybe they should even hate IT directly — and make sure the employees don't learn anything from it.

So how do we go about it? There are various ways to run an awareness campaign; I'll mainly focus on a phishing campaign here. The whole thing starts with the mindset that employees are a risk: if we didn't have any employees, there would be no danger to the company. Accordingly, it might be good to set the whole thing up so that we can fire employees if they do badly. We start by not announcing the campaign — we surprise the employees. Then they have no idea what happened, we are more successful, and they don't like us. So goal number one is complete. We continue with mass phishing: we send everyone the same e-mails at the same time. Then they can warn each other. That might not make us very successful, because they talk to each other and stop falling for it — so it becomes more difficult to find the weaker people — but they definitely don't learn anything, because they were warned about these specific e-mails.
Then, of course, we can use our admin powers: we can simply use the internal mail server, or run the perfectly crafted phishing campaign where the employees have no chance at all. That just conveys to them that IT wants to trick them at all costs — and whom do they turn against? IT. In addition, they have no chance of spotting the phishing, and accordingly they learn nothing about how to deal with a real attacker. We can get personal: especially when we send e-mails that appear to come from colleagues, we can reference their private life — "I'm moving and selling my furniture", say. This can have the nice side effect that other people suddenly ask: hey, you're moving, what's going on with you? Makes the employee unhappy in any case — not bad either. And we don't clarify any missteps: if you do something wrong, you should just land on some 404 page or the like. Then you don't know that you did something wrong, you don't know how to improve in the future, how to behave during a real attack. Accordingly, the goal that employees learn nothing is achieved.

Now we know how to run a bad campaign — and from all these points you can learn, in turn, how to run a good one. So let me turn the whole thing around. Ideally, if you really want to teach people something, go there and treat the employees and IT as a team: you want to work together to make your company safer. Announce the whole thing, and wait a moment before you start. By then the employees will have forgotten about it again, so your statistics won't be skewed. But when they do something wrong, they remember: ah, there was an announcement — they want to do this with us, they want to help us protect the company — and they are not angry with you. And don't do mass phishing; do spear phishing.
Real attackers, yes, sometimes send the exact same e-mail to everyone, but ideally you send individual e-mails to individual people at individual points in time. That way you have a higher chance of reaching and catching everyone, everyone can learn, and people can't simply warn each other. Then you should limit yourself to the capabilities of an attacker: before he has compromised you technically, an attacker can only gather information from the outside. So don't use your full admin powers, because then you are doing more than a real attacker could. Stay general enough — don't say "I'm moving and selling my furniture" as if it came from a specific colleague — so your employees won't be angry with you, or at least hopefully none of the selected targets is angry with you. And accordingly, clear up wrong behavior: show people how to behave properly, and address the wrong behavior at the exact moment it happens. There are also various offerings out there that can help you with this, so people know what to do and what not to do. If you have any questions or want to contact me, reach me on Twitter or send an e-mail. Thank you for your attention. Thank you very much.

So, next up is the Work Quantum — or the Life Quantum, I don't really know. Work Quantum, are you here? Are you in the room? Who wants to give this talk? No? It's actually okay. I think we have some people from the waiting list here, but we'll just continue with the next talk now. And yeah, we'll see — I'll call him up eventually, maybe. Oh, and actually that's the last talk before the break, right? Does anyone have a schedule? Is that right? Okay, then — where are the people from the waiting list? Are you here? Nobody showed up? Okay. That's a bit sad, because we have so much time now. No, he left actually, I saw him leave.
All right, then we're going to have a break until 12:30 now. Hmm, it's a bit — huh? Actually, I don't have the slides for the people who... Which slides do you mean? Yeah, he was here yesterday, but you can... Yeah. Yeah, it's a lot. No, I mean, I told the waiting-list people to come here 15 minutes before the break, and it's still not 15 minutes before the break. So, I don't know, maybe we wait a little and whistle the Jeopardy melody or something. Oh — you're the first one after the break? Okay, I mean, we can just take your talk and do it right now. All right, then thank you. We will just continue with the first talk after the break, because the break is important to align all the other talks in the other halls. Then let's go.

Hi everyone, my name is Juli Lotzko, and I'm an artist and researcher focused on subversion and critical stances on the technological and media landscape. In this endeavor, I'm writing a doctoral thesis on the intersections between hacker culture — especially hacktivism — and the arts, both in a historical view and in contemporary art. This presentation is a little part of my upcoming book, and it's really hard for me to condense it into such a short time, so please feel free to find me after this talk or after the lightning talk session in Pomona — and if you don't, then probably fish me out of the bubble bath. In order to examine the intersections between the historical avant-garde — which includes all the isms from after the First World War, including Dadaism, Surrealism, and so on — and hacking, I looked at different definitions of hacking. And this is where I change the slides, no? I probably don't have to define hacking for you. In my research, I use a definition from Tim Jordan, wherein hacking produces new materialities that define new ways of interacting with technology. I also probably don't have to define what a zero-day or social engineering is.
So I'll just move on to the second slide, where I started to examine the similarities between the avant-garde art movements and hacker culture, especially hacktivism. Within the avant-garde, I mostly looked at Dadaism. And it's very apparent from the first moment that there are a lot of similarities: border-violation practices; manifestos, which aim for a future utopia, a new composition of society, a kind of aim to recompose the societal fabric. There's a need in both of these paradigms for an existing canon to build on and to interfere with in a revolutionary way. And in order to better understand what is really going on in these similarities, I looked at some traditions of interpreting avant-garde artworks and some traditions of interpreting hacking gestures, and I tried to crossbreed them. So on the next slide, we see some avant-garde artworks contextualized by Jordan's hacking typology. You see Duchamp's Fountain as a zero-day: a zero-day exhibits the biggest amount of creativity and innovation, something that has never been done before, exploiting a yet unknown vulnerability. Every other readymade after Fountain would then be a zero-plus-one-day: the vulnerability is still present, but there's already a smaller amount of innovation and appreciation from the community. Social engineering is really, really big in the avant-garde, especially when it comes to performance. There's a picture from the 1919 Zurich performance in the Saal zur Kaufleuten. There was a reading of the Letzte Lockerung, where the idea was to provoke the audience into a chaotic mess, which the Dadaists happily achieved. And script kiddies, in terms of Dada artwork recombination, would be the commercially aimed reconstructions of Dada artworks. 
You also see the motivational basis of hacking by Tim Jordan there: most of the original and appreciated avant-garde artworks aim for societal change, whereas, for instance, a T-shirt that you buy in a Swiss souvenir shop is aimed at personal gain for whoever released it. What might be a bit new for you is where I try to examine hacker culture in the context of the already available interpretational framework of avant-garde artworks. Andres Kropanyosh has a really interesting book which looks at avant-garde artworks as processes instead of artworks in the classical object definition — not an object as a thing; he looks at the process, right? In terms of this analysis, Kropanyosh points out that the avant-garde artwork tries to deconstruct the work of art, and in order to understand it, we have to focus on the process, how it's made. So he refers to abstraction in avant-garde artwork as getting rid of representation; action as activism aimed at a change in society; and anti-art as creating novelty, which, funnily, as the paradox of success, gets canonized quite fast afterwards. And he defines six characteristics of this process-based analysis. Ephemerality in this regard means that a lot of avant-garde artworks are just there for a very short time. You don't really have an object that you could buy and sell, which is sort of hacking the art market. We also see that a lot of hacking gestures are very short-lived in nature, but that doesn't make their achievement less. Combinatorial, as an avant-garde process, can mostly be mapped onto free and open-source software and git repositories, whereas... Five, four, three, two, one. Yeah, I'm sorry, we have to take the break. Do you have a contact slide or something, a last slide you would like to show? 
Yeah, as a finishing sentence, I'd like to say that one parallel is that neither the avant-garde nor hacktivism or hacker culture really destroyed the institutions that they wanted to reform or hack or revolutionize, but the interventions they created changed those institutions forever. All right, thank you. Blockchain, Ethereum: cringe, but cool. Yes, there's the clicker. Right, so I subtitled my talk "often, but not only, a vehicle for fraud", since I don't think there's any real getting away from the fact that fraud is the first thing you'll see if you Google for it. But it's worth having a Google; there are some really funny ones. So I would say that when people talk about blockchain — and I do also cringe when I say that word — they are generally talking about a system that does three things. A system where everything is signed, so potentially you can know where everything has come from; it can have a known origin. There's some sort of common logic, usually referred to as smart contracts, although that's a bad name really — they should be dumb scripts — but at least you have a way of saying: A and B happen, and they always mean C. You also have a fair and reliable ordering of events, so you can say that, for example, a house moved from A to B before B tried to sell it to C, and that's very important. That last bit, the ordering, is the only bit that actually uses a blockchain, and if your threat model is different, you can just use an append-only cryptographic log. There was a talk on that yesterday. So looking at this in a little more detail: the signing bit tends to be done with what we would call wallets. Some of them are done with hardware; most of them are mobile phones. The common logic is interpreted using some sort of deterministic program. The important thing is that it's deterministic: it always has to give the same results, otherwise, being a networked data structure, the whole thing breaks down. 
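As a hedged illustration of the append-only cryptographic log mentioned above (my own minimal sketch, not from the talk): each entry's hash commits to the previous entry, so rewriting any historical event invalidates every hash after it — which is exactly the ordering guarantee the speaker describes.

```python
import hashlib
import json

class AppendOnlyLog:
    """A minimal hash-chained log: each entry commits to the one before it,
    so any tampering with history changes all later hashes."""

    def __init__(self):
        self.entries = []      # list of (payload, hash) tuples
        self.head = "0" * 64   # hash of the empty log

    def append(self, payload):
        # The new entry's hash covers the previous head and the payload.
        data = json.dumps({"prev": self.head, "payload": payload}, sort_keys=True)
        self.head = hashlib.sha256(data.encode()).hexdigest()
        self.entries.append((payload, self.head))
        return self.head

    def verify(self):
        # Recompute the chain from the start and compare with stored hashes.
        prev = "0" * 64
        for payload, stored in self.entries:
            data = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
            prev = hashlib.sha256(data.encode()).hexdigest()
            if prev != stored:
                return False
        return True
```

With the house example from the talk: append "A sells to B" before "B sells to C", and anyone replaying the log can check that the order was not tampered with afterwards.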
And then the last bit is some sort of robust consensus mechanism. Generally, the one people talk about is Nakamoto consensus, where you run a lot of computers wasting electricity, but that's far from the only way of doing it. So in summary, the real general purpose of having some sort of blockchain is that you get a very slow but honest and transparent computer that nobody owns, and everyone can accept the result as being fair. Obviously, that's not all that you need. So blockchain projects generally tend to have what in the Ethereum world — which, as you can see, is my preferred platform — we call the holy trinity: some sort of messaging, some sort of distributed storage that relies on nodes across the internet rather than just one central server, and then the blockchain itself, which is just for consensus. It's essentially a very slow, reliable computer, but you don't want to use a computer that takes 15 seconds to respond very much in your system if you can avoid it. But you very often have to. The holy trinity phrase is very much an Ethereum thing. If you talk to Bitcoin maximalists, they would say the only purpose of this is currency. I disagree, but you have to admit their system is very successful. And systems like this are great for finance. There are a lot of interesting products around borrowing and lending — mostly cryptocurrencies, but hopefully real things as well. There are a lot of interesting projects around registering property, assets, et cetera, and decentralized naming systems, trust games around finding the truth of various statements, and Ponzi schemes, which are sort of an inevitable side effect. It's a vehicle for social coordination. So here are some interesting products. Uniswap is for trading things. uPort is a great system for proving things about your identity, so you can assemble an identity out of different statements that different people made about you. 
There's Kickback, which is great event-organizing software that's used very often for Ethereum events, and I would recommend it. And Mattereum, who are doing very ambitious stuff on coding legal contracts and ownership onto the blockchain, which is very useful for transnational trade, where a lot of parties simply don't trust each other. There's also quite a lot of fraud. Some people might recognize the BitConnect guy — I think he's been convicted, so hopefully I'll get away with the fact that that picture isn't Creative Commons. It's worth a Google. I'd say don't let it define your notion of the space, but it's really worth a Google: the BitConnect guy is hilarious, and some of the SEC stuff is really thought-provoking. So, in summary, the next time you try to kill Facebook, do remember us. Most of the fraud didn't involve any developers — it's a very separate community — and there's a great conference in Vienna. I'd like to emphasize the unicorns. Thank you. All right. And next up is owning our own medical data. There you go. Okay, I'm Reza, you can find me on GitHub as Fishman. I saw some interesting talks the other day here about the electronic health record in Germany. I spent three years in the healthcare system, in the government, in Germany, and most recently I built an earthquake detection system for a big mine. Basically the summary of the talk is: if we want to own our data, the only way we can do it is if we build the infrastructure ourselves. 
So, we do have a need for medical data stored somewhere: we can improve care, we can improve preventive care, we can improve the speed of medical advances, we can replace radiologists to some extent. And right now the access model is: you go to the doctor, you fill out a form — at least in Germany — and then you give wildcard access to everything, and there's no real way to revoke it, especially since you don't really remember who you gave it to. One of the good ideas of the ePA is that you can give fine-grained access, except they already rolled back on that, so that's probably not going to happen. The bad parts: all of your data is stored in one central location, all of the decryption keys — because the encryption is symmetric — are stored in another central location, and if you have a breach, everyone's data is gone, and there's nothing you can really do about it. So, what can we do about it? We store our own data, we build a federated API that gives third-party EHRs access to our data, and the encryption keys are not stored on the mobile device or whatever. This is just one of the ideas, so I welcome people to suggest maybe better ideas on how we could do it. It would allow us to actually share the data we want with the people we want. I mean, a lot of the threat claims they make about the ePA we know are not true. We know that the moment the data goes onto an end device, you can store it, unless you control the entire ecosystem, which is unrealistic, because most of the health record management systems at the doctors' offices are running on Windows. So the moment it goes on there, the guarantee of expiry is kind of not there, so I would keep that out of the threat model. But at least if we leak data, it leaks from some devices, not all the medical data at once. 
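To make the fine-grained, revocable access idea concrete, here is a minimal sketch — my own illustration, not the speaker's design: a grant names a third party, a set of record categories, and an expiry, and can be revoked at any time, in contrast to the wildcard-access model described above.

```python
import secrets
import time

class AccessGrants:
    """Toy model of fine-grained, revocable access to a personal health
    record: each grant names a party, a set of record categories, and
    an expiry time."""

    def __init__(self):
        self.grants = {}  # token -> (party, categories, expires_at)

    def grant(self, party, categories, ttl_seconds):
        # Hand out an unguessable token scoped to specific categories.
        token = secrets.token_urlsafe(16)
        self.grants[token] = (party, set(categories), time.time() + ttl_seconds)
        return token

    def revoke(self, token):
        # Unlike the paper form at the doctor's office, revocation is trivial.
        self.grants.pop(token, None)

    def allowed(self, token, category):
        entry = self.grants.get(token)
        if entry is None:
            return False
        _, categories, expires_at = entry
        return category in categories and time.time() < expires_at
```

A radiologist granted access to imaging data would get a token that works for "imaging" but nothing else, and stops working the moment you revoke it.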
So, basically it's more of a call to action. We would have to build a POC, we would have to think about the cryptographic solution to this, and then find the real places to use it. And that's the thing: we probably cannot expect the government to use this, but we can expect third parties to — if our APIs are better than what the government does, which is actually really easy, then there is a chance that people would use that instead, or at least we'd have the choice of also using it. So, I set up a GitHub account, which doesn't have anything in it yet because I was injured and sleeping a little. I might fill it in over the next couple of weeks, but yeah, feel free to help out. All right, thank you. I think the previously missing speaker just showed up. There he is, all right. Then we will take this talk over here, all right. Over here, all right. Sorry about that. That's perfect, because you're invited to disagree with these statements. So, a little audience participation. Let's look at the top three statements: values is not price, values is not violence, values is not greed. So without defining values, I invite you to put your hands, palms face up, somewhere where you can remember them, like your lap. So you keep them like that. And this side over here, if you would put your hands like this on your legs, like rest them on your lap. Yeah, because you're going to turn them around, maybe. Let's take "values is not price". That could say values are not for sale. The question I ask you is: if you think that is a value, turn over your right hand. Okay, on this side, you can put your hands just like this, yeah, and if you think that is not a value, turn over your left hand. And right here, the same thing, you can put your hands just on your lap, and if you're not sure, don't do anything. And if you think it's both, raise both your hands. And now, everyone up with your hands? What have you got? Okay, we've got confusion and we've got two hands. 
There shouldn't be any two hands here. You're only invited to... okay, so here's the problem. Everyone wants to state their own opinion, and they're like, huh, but I didn't agree that this is a value. The point is that some people will think that's a value, and some people will think that's not a value. And the real question isn't, can I represent my own values? It's: can we bring people together to talk about them? So that's the question. So, proof of human-corrupt collaboration. Whenever humans have a moment together, they have an opportunity to give what's called a compliment, which comes from the Latin for "with" — let's see, what was it? I had it there somewhere. But it essentially means that you give someone a value that you yourself hold dear. You wouldn't tell someone they're kind if you hadn't yourself heard that you had been kind. And so that's the basis for this general discussion. So yeah, we tried representing our values and kind of got nowhere, which I also have trouble with. Like, I'm just discovering values for myself. The main difference is that we're coming from a culture where it's really easy to have one value, right? Price. It's really easy to know what is more valuable in terms of price. It's really easy to know, if someone is a fascist, what their main values are: small lists of values held dearly, without the ability to consider them in the context of others. So that algorithm there is essentially one way to see how well a set of values accommodates otherness. There's a lot of depth to this subject. When I talk about work quanta and healing quanta, everything that this project imagines as successful comes in the context of healing. If we design our work to make up for what it takes from the human and from the environment, then we're on the right track. 
And so by sharing, by federating over values — a federation in this system is any two objects with shared values. Whether that's a human or an institution: if you have two values in common, you're federated. This is what we have — we don't have a word for this yet, but it's a way of describing human values in a context. Those are consensus rings: they've shared compliments over a group of values, and those red lines are the values they have in common. Essentially, because we live in an owned-power situation rather than a shared-power situation, this helps to keep people in their own value structures rather than in a structure of discourse, development and exchange of what could be the next money. Two, one. Thank you. All right. So next up is DNS query filtering, and with that we are back on track, two times S in the schedule again. There you go. Hello, I'm Peter, and I'm going to talk about DNS query filtering, or how to increase your performance and accidentally block users. So the problem we had was that we have an authoritative name server, which is actually two name servers: one is a recursing name server that's not our solution, and one is ours, written by us, but it has to be complemented with a real and feature-complete recursing DNS server. But because we are running a recursing server, that means we serve everything, and because we serve everything, that's a great UDP amplification vector. And the problem is that we had no time to fix it nicely: we were notified by our cloud provider that we either fix this or we are going to be blocked, and we don't like being blocked. So, possible solutions. Fixing it nicely takes long — it takes a lot of development time on our custom solution — or we could use a different recursing DNS server that allows us to filter which zones we serve, for example ours and nothing else. 
Or we could use iptables rate limiting, which means we serve less junk — but we still serve it, and we also serve fewer of our valid users. Or we could do content filtering, which takes some development; it would be nice, but we don't have time, so: nope, iptables. So we are going to use string matching — to be specific, hex string matching — but this sounds very, very expensive in kernel space, so we had to perf-test it. We did hex string filtering like this: if you want to filter events.ccc.de on UDP port 53, you can do it like this, and this blocks it — or on TCP, almost the same. But this is blocklisting, not whitelisting. We tried out hex string matching, and the overhead is very low. Our original setup could serve 60,000 queries per second from our zone — from one node, obviously — and less than 5,000 recursive queries per second. With hex string matching, we could drop 240,000 recursive queries per second, which is very nice, and it took only one percent of CPU time, which is a great, low-overhead solution. And we could still serve nearly the original 60,000 queries per second from our zone. But the problem is that we wrongly filtered all TCP traffic, which is less than 0.1% of our traffic, and wrongly dropped all 0x20 queries, which is around 2%, a bit less. TCP filtering is not that easy if you think about it, because the streams can be fragmented, so you can't just string-match packet by packet — it's quite obvious. Although every guide tells you to do it like you do on UDP, that only works because they blocklist, not whitelist, so it doesn't work for us. And on TCP we don't have a UDP amplification vector, so why do it anyway? 0x20 is a security feature: if you encode random bits as lowercase and uppercase letters, then you can be reasonably sure that you got a valid answer. events.ccc becomes something like eVeNtS.cCc, and so on. We like memes, but not in our DNS queries. 
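The 0x20 problem can be illustrated with a short sketch — an assumption-laden toy of my own, not the actual iptables rules: a byte-exact match on the query name, like a hex-string filter, rejects randomized-case variants of the zone name, while a case-insensitive match accepts them.

```python
import random

def encode_0x20(name):
    # Randomize the case of each letter, as DNS 0x20 encoding does;
    # the resolver checks that the same case pattern comes back.
    rng = random.Random()
    return "".join(c.upper() if rng.random() < 0.5 else c.lower() for c in name)

def matches_allowlist(qname, zone="events.ccc.de"):
    exact = zone in qname          # byte-exact, like a hex-string filter
    icase = zone in qname.lower()  # case-insensitive comparison
    return exact, icase
```

A query for eVeNtS.cCc.dE fails the byte-exact test but passes the case-insensitive one — which is why the exact filter silently dropped roughly 2% of legitimate queries.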
However, this is quite an easy problem, because you can solve it with case-insensitive matching, and then it's not a problem anymore. In conclusion, it was fun to try out, and I blocked two to three percent of our users, which was less fun. The takeaway is that you must test more thoroughly if you introduce stricter iptables rules. You can do that by inserting the new rule before the existing one, doing some logging, whatever you want. Thank you for your attention, and major props to Max, my colleague, who recommended the hex filter, and thanks to Vista from the NOC who helped me prepare. Thank you. And next up is writing drivers in high-level languages. Hi, I'm Paul, and I'm going to talk about writing drivers in high-level languages — again. This is a talk that I've given quite a few times now, and a lot of people have contributed to it, and I've also brought a lot of slides, so we'll just skip over a lot of things here. The good news is there's a long version of this talk available on media.ccc.de if you just search for something with drivers in high-level languages. Okay, of course drivers, operating systems and so on are usually written in C, because C is such an awesome language. It's nice and low-level, you can poke at memory with pointers and do weird stuff, everyone can read and write C, and if you try really, really hard, then you can even write safe and secure code in C — at least some people think they can. I actually don't think they can, but well. So if you look at security bugs — this is CVEs in the Linux kernel over the years — there are a lot of security issues, and of course not all of them can be attributed to C being a shitty language, but some of them can. 
There have been studies — for example, in 2017, 61% of the code-execution-type vulnerabilities in the Linux kernel would have been prevented if a memory-safe language had been in use for the kernel: these would be use-after-free bugs, missing bounds checks and so on. We took the data from the study that's linked down below and looked at where these bugs actually occurred, and well, out of the 40 bugs that could have been prevented with a better language, 39 were in drivers. And then grouping by vendor — which vendor had the most bugs? Well, Qualcomm drivers. That was really, really surprising; I thought they had really high-quality drivers, but yeah. So the question is: can you write drivers in a better language? Yeah, it's a little bit complicated to get a Haskell driver upstreamed in Linux, and even to get anything other than C running inside the kernel. But the good news is that for many devices you don't actually need a kernel driver — you can write user-space drivers, in any language. The question is then, of course: are all languages an equally good choice? Are some languages better suited for writing drivers? What are the trade-offs? What about having a JIT compiler or a garbage collector in a driver — is that even a good idea? We looked at network drivers in particular, because we happen to know a lot about network drivers, and also because user-space network drivers like DPDK or Snabb are really common in the high-speed, high-performance world. So what I did two years ago is I wrote a user-space network driver in C — that was a talk here two years ago — and it's a kind of simple driver, easy to understand, because it only does the very basic things. It's only a thousand lines of code. The next idea was then to write that in a better language. Of course I wanted to write it in all the languages, but it turns out that's a lot of work, and I don't speak all the languages. 
The good thing is I work at a university, so I can just grab a bunch of students and tell them to write drivers in their favorite languages. In the end we had nine driver implementations in these languages: C, Rust, Go, C#, Swift, Haskell, OCaml, Python. The table is not up to date — there's also a Java driver nowadays. We compared them by various criteria, like which safety properties are offered by the language under which scenarios and constraints — and yeah, I'm just going to skip over these results because there's not much time here. Then implementation size: you might think C is very nice because it's just some terse code full of pointer magic, but other languages can be short as well. Sure, C is still the shortest counted by lines of code, but other languages can be even shorter when measuring size as the number of bytes in the driver's source code, because some languages just like lots of short lines, like Haskell or Rust. Yeah, next question: is it fast? Is it a good idea? Well, it turns out C is still the fastest language for this kind of job, but Rust comes pretty close. This is a simple benchmark where we just accept packets on a dual-port 10G link, minimum-size packets, and forward them back out on the other link, like a bidirectional forwarder — the simplest case you can imagine for a network driver. Surprisingly fast are Go and C#; for Rust it's not surprising that it's fast at this low-level driver stuff. Performance is always two things, and the next one is latency, but this graph is too complicated to explain here. Basically: garbage collector means high tail latency; no garbage collector is fast. The lines for C and Rust are directly on top of each other — there's no latency penalty for using Rust over C — but there are latency penalties for languages with JIT compilers and garbage collectors. However, the Go and Java garbage collectors are surprisingly well done; at least Go's is well done by default. 
With Java, when you use the new Shenandoah garbage collector, it's relatively fast: we can get tail latencies below 50 microseconds, which is acceptable for most applications. Final slide: there is a GitHub repository with links to these slides, to recordings of all versions of the talk, and to all the code. Thank you. Thank you. Next up is Tour de Rebel. Hello people, everybody, welcome. Thank you for being here. Now, you see a logo over there, and I suggest we start to play a game, because the logo has been transformed over the past few days. My cousin helped me transform the logo, because Tour de Rebel is related to Extinction Rebellion, but it's not an Extinction Rebellion tour on bikes. So let's play a game — because I didn't know which room I would be in, so I didn't know that I would be speaking in front of such a huge audience over here. I thought I would get something small like a nutshell, and here I am with my PowerPoint presentation, anti-technology. But there are a lot of those flyers hanging around, and there's a QR code through which you can find much more information, and hopefully some of you will be enthusiastic about the project after my short five-minute presentation and would like to join an introduction presentation online in the next few weeks. But what is Tour de Rebel? That's what I'm going to explain now — but yeah, if anybody can see this, maybe zoom in with your good cameras on the QR code, I don't know whether that's possible. So, Tour de Rebel: the world's largest moving climate camp, on bikes, around the world. So everybody now sees a pink elephant in front of their eyes, like, okay, a moving climate camp. Now I'm trying to wipe away those clouds in front of the Tour de Rebel and explain the vision of the people I met over the past two months, during my cycling tour. Well, I was also cycling around the globe — I mean, I got no further than the Netherlands and Germany, but I started at least. 
So Tour de Rebel, what I'm presenting to you now, is not my idea but a collection of the ideas of many people I met over the past two months. It started with an idea: okay, I want to slow-travel, I want to make sustainable traveling much nicer, because it's such an experience to cycle the stage between Hamburg and Bremen, for example, which for the past few years has been one hour by train or one hour by car for me — and now I did it within one week with a couple of other people, and it was just such an amazing adventure to get in touch with nature again and to slow-travel through the world. However, slow traveling and experiencing an adventure are only one aim of the Tour de Rebel, because it's far more than just cycling around the globe — everybody is apparently doing that currently; if you look on Instagram you find Pet of the World and all the other people, it's nothing really special. I want to do something with the other people I'm cycling with: connect movements that are trying to change the system with each other. So the second aim of the Tour de Rebel is to try to form a network of, and a platform for, people to meet each other. Imagine a few hundred people cycling together from Bremen to, let's say, Berlin, and on the way from Bremen to Berlin they meet a lot of people from different organizations; they network with each other, they exchange experiences, and they exchange skills and knowledge. And that is something which is lacking, from my point of view, within organizations but also between organizations. So the Tour de Rebel tries to be a platform for networking. And I already mentioned the last of the three aims, which is skill sharing and information spreading. 
So if you imagine a climate camp cycling around the globe: when I arrived with five people in a small village of 200 people, nobody noticed. But if you arrive with 200 people in a 200-person village, it will be noticed — it will be the event of the year. So a climate camp, or social justice camp, is an attention point if you cycle around the globe by bicycle. Everybody wants to be there — at least hopefully everybody from Fridays for Future, and not only those people; the rest of the population is interested as well. So, still: what is it now? How can I join or be part of it? Scan the QR code. Because it's not only about cycling: I started cycling over the past two months and many people joined — we were about 50 people cycling in total from point to point — and in the end many people also took on background organizational stuff, like filming and so on. So what is needed now is an organizational team and building a platform, and if you want to join, clap in. One. Woo! All right, thank you. Thank you very much, and yeah, play the game and search for the QR codes of the Tour de Rebel, thank you. You can put it in front of the stage. Sorry? You can put the QR code in front of the stage so everybody who wants to can scan it. Yeah, just somewhere, somewhere there. So the next talk is going to be about Listling, an open source web app. So, hi everyone, my name is Sven, and today I want to talk to you shortly about the need for low-threshold collaboration in self-organized groups. So let's start by picturing ourselves in a self-organized group — a group that's based on voluntary work, like an activist group or a civic group. In those groups we have people with different backgrounds, and that also means people with different IT skills. Often those groups feature an open participation model, so you can easily join — but that also means you can easily leave, and you often have fluctuating membership. 
So for now, let's imagine we have our little group with Alice and Bob, long-standing members, and then there's this newcomer eager to join. What are the challenges they face for online collaboration? If the group uses multiple online tools for collaboration, you will have this scenario where they tell the newcomer: okay, look, we have a shared task list and we use this app, please install it. And then we have a poll, please use this website and register. And also we have a wiki, and we will make you an account. And the newcomer is like: oh, okay, I installed five apps, and what was my password for the first one? Okay — this can get quite overwhelming. So some groups then say: okay, let's use just one software, a groupware. And this is cool, but it also has a steeper learning curve. That's not a problem in an enterprise environment where you have a week of onboarding time, but in a voluntary setting, this is really off-putting for newcomers, and many groups also don't have the resources to set up a groupware. What I often see when I'm active in activist groups is that we use Etherpad or spreadsheets — collaborative documents — and they are fine if you want to do collaborative text work. But people often use them for other use cases too, like, let's say, a to-do list — and in a text document it's already a pain to move items around, and in a spreadsheet it's like: okay, I have all those cells and buttons, and then I have formulas and formatting options — okay, no. So all in all, I would say this creates something like a collaboration barrier, and what I see very often in those groups is that only a small minority of people use those online collaboration tools. And then it's like: okay, then let's just do everything over email, or let's do everything in a Telegram channel — and that can get quite messy. So what can we do about that? 
One night I had this idea: what if we had a collaborative document, but with a bit more structure, better fitting the typical use cases of self-organized groups? And what about lists? It turns out that groups often need lists. A to-do list is obvious, but a wiki could also be a small list of notes, a poll is just a list of options you can vote on, a meeting agenda, again, is a list, and so on and so on. So in 2018 I sat down and said: okay, I'm going to make Listling. Listling is a service to make and edit collaborative lists. It's online, you can use it. It has no registration whatsoever — you just create a list and share the link. It's free to use, and it has a focus on a simple UI. Of course it's open source, and you can hack on it and contribute if you want. So now this would be the time for a demo — but actually, what is a presentation? A presentation is just a list of slides. So I thought, well, then I'll do it in Listling, and that's what you saw just now. Nevertheless, I have some screenshots for you. You can not only do presentations: lists have different features, and in the middle you see a task list where you can assign people or check off items, and on the right side you see an example poll, where you can add options and vote for them. Actually, I was quite fast, nice. So that's it. You can try it online at listling.org. If you have any ideas or feature requests, I would love to hear them, and you can contribute on GitHub. If you want to get in touch or have any questions, we have a GitHub community, you can find me on Twitter, and you can also talk to me right over there after all the other lightning talks are over. So, in the name of Alice, Bob and the newcomer: thank you. Thank you. Next up is Unary, yet another tally sheet for your hackerspace. Yeah. Good morning. 
I'm Johannes and I'm going to talk about something that's at the heart of every hacker- or makerspace today: consuming beverages, and keeping working thanks to the beverages you consume. Everybody needs a tally sheet for that, to keep a record of who's consuming what. So the use case here is a very simple version of this, just a dead-tree tally sheet, but as an electronic system. It's just a system that helps the users keep track of their balance, and the security model here is trust: if you have physical access to the fridge, you can compromise the system anyway. There are lots of existing solutions, obviously, because every hackerspace needs something like this, and all of these solutions typically are sexy hacking projects, because it was so much fun to develop some custom hardware, to make it run on some vintage stuff, you know, have a barcode scanner, things like that. That's very sexy for hacking, but often it's not so sexy for maintaining, and also the usability is typically not the greatest you can have. So that's why I've developed yet another of these systems, and my system is boring. The idea is to have a very boring, very simple solution that still has nice usability, and for this I just use off-the-shelf components: modern web frameworks and an old Android tablet. You know, one of these old tablets that are not fun to use anymore; you can just use it for this system, because each tablet comes with a high-resolution touchscreen, and that's really great for usability. So here's what it looks like. You can see we have a screen where you can pick your account, based on the color or icon you picked, easily identifiable; you can filter for users, and after you've picked your account you can pick the beverages you would like to consume. 
You get visual feedback when you buy something, and you have features like adding cash deposits, or looking at your recent transactions and reverting wrong transactions, things like that. There are many more features, but the point here is not the features, it's how simple the system really is. Looking at the software side of things, on the server we just deliver one single HTML page, which is the web application, and then we continue handling requests and managing the database. For this we need less than 300 lines of code on the server side, in Python. On the client side we use the Vue.js framework, which is very nice because we can embed the variables and logic into the HTML code, and the reactive nature of this framework makes it easy to just keep the state in a JavaScript object. This JavaScript object is only 150 lines of code, and the rest is just how UI elements are supposed to look and behave. It's a very simple system, and it's also very simple because we use websockets for communication. Websockets allow us to send simple messages, and the Socket.IO library also persists the connection between the server and the client, so it's also very low latency and very robust. And we also take advantage of the reactive component of Vue.js: when, for example, you buy a product, the client doesn't update its own state. The client just gets visual feedback for successfully buying the product; the balance is updated by the server, and the server pushes the state to the client constantly. That also saves us a lot of logic on both sides. The deployment is still boring. We have it running in Freiburg for a year now, on just an old Sony tablet, and the whole system is actually contained in the tablet. There's no other server or any other system or hardware needed to consume beverages in the Freilab. 
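As a minimal sketch of that server-authoritative pattern, the client only reports an action like "buy", while the server owns all balances and pushes the complete new state back for clients to render. The names and message format below are my own illustration, not the project's real code:

```python
# Minimal sketch of the server-authoritative state pattern described above:
# the client sends a "buy" or "deposit" message, the server updates the
# balances, and the full new state is pushed back to all clients.
# Message format and names are illustrative, not the real project's code.

state = {
    "users": {"alice": {"balance": 500}},      # balances in cents
    "products": {"mate": {"price": 150}},
}

def handle_message(msg):
    """Handle one incoming message and return the full state that the
    server would push back to every connected client."""
    if msg["type"] == "buy":
        user = state["users"][msg["user"]]
        product = state["products"][msg["product"]]
        user["balance"] -= product["price"]
    elif msg["type"] == "deposit":
        state["users"][msg["user"]]["balance"] += msg["amount"]
    return state  # clients just render this; they hold no logic of their own

new_state = handle_message({"type": "buy", "user": "alice", "product": "mate"})
```

Because the server is the single source of truth, a reactive client framework only needs to re-render whenever a new state object arrives.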
And this is made possible by the Termux environment. Many people think Termux is just a terminal application, but it actually provides a full-blown distro: you can install all these packages and actually run the server components with a startup script on the tablet itself. So it doesn't even need an internet connection or anything to work. The user just sees the browser, and the browser is put into full-screen mode, so the user doesn't actually see the browser, only our interface, as I showed you in the screenshots. But this is obviously not the only way you can do it: if you want to, you can still put the server component on some other machine and have multiple clients, and websocket and JSON libraries can be found in every language or environment. So in the Freilab this system has already been running for almost a year with no complaints. It's only a thousand lines of code, so there's not much that can go wrong, but it's still in a works-for-me state. So I'm very happy to take this to the next level, and I'm very happy to get feature requests, if people would like to deploy this too, or change it and make it more sustainable. And there are also some alternatives you might want to look at. Thank you. Thank you. Next up is natural language processing is harder than you think. All right. Hi, my name is Ingle and I'm going to talk about why NLP is, despite things like BERT and GPT-2, not solved yet, and why it's really harder than most people think. So, have you ever been disappointed by an NLP system, like your Alexa or your car or some other thing you use? I am, daily, and I work on these types of things, which is a sad state. But why is that? That is because language is hard: language is ambiguous, language is complex, and language is fluid. It changes, people usually use more than one language, and generally speaking, we don't really know how a language works. So let's exemplify that. This is a fairly easy sentence: they saw a vet with a telescope. 
Now, first of all, "they" could be singular or plural, we don't really know. A vet could be a veterinarian, a doctor, or it could be a veteran, a soldier for example, or a pirate, I don't know. So what could we do here? Well, it could be that they saw a vet with a telescope, where the vet owns the telescope. It could also be that they saw that vet with their own telescope. And finally, it could be that they saw a vet in the sense that they went to a doctor's office, and apparently that doctor had a telescope. Okay, try to parse that. It's pretty tricky. For humans, it's fairly easy if we have enough context. So let's look at some more challenges that we have to face in doing NLP. Languages matter: most people speak more than one language, and not many people speak only their standard variety. People mix and match languages on an everyday basis, and we have to consider that. Context matters: if we see that vet, context tells us whether it's a soldier, whether it's a doctor, or whether it's something completely different. Data matters, both in terms of privacy and in terms of the data and the corpora that we use for NLP. We need to be very aware of the fact that the data we use to train our systems has an impact on what we are able to do and also on the results that we get. And finally, hidden biases matter. That could be a translation system that guesses gender based on job titles, or a sentiment analysis system that judges sentiment based on names and maybe has a racial stereotype built in. These are all things that we see in the systems currently available. So if we have all of these issues, what are the potential consequences? Well, first of all, it could be just bad user experience: you talk to your Alexa, Alexa doesn't understand you because you are not an old white man like the ones whose data it has been trained on. That could be the case. But it could also be that these systems generate false and potentially dangerous results and conclusions. 
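To make the ambiguity above concrete: even before a parser does any work, the candidate readings multiply combinatorially. A toy sketch, purely illustrative and not a real parser, enumerating the readings of the example sentence:

```python
# Toy illustration of combinatorial ambiguity in
# "They saw a vet with a telescope": word senses and attachment
# choices multiply, and a real NLP system must pick one reading.
# This is not a parser, just an enumeration of the readings from the talk.

vet_senses = ["veterinarian", "veteran"]
readings_of_with = [
    "the vet owns the telescope",            # PP attaches to 'a vet'
    "they used the telescope to see",        # PP attaches to 'saw'
    "they visited a vet who had a telescope" # 'saw' = visited (doctor sense)
]

readings = [(sense, reading)
            for sense in vet_senses
            for reading in readings_of_with]
# 2 senses x 3 structural readings = 6 candidates before any context is used
```

A human resolves this instantly with context; a statistical model has to rank all six candidates, which is one reason the problem is harder than it looks.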
And that could be an actual problem, maybe just for business cases, but also from an ethical standpoint this could be a really big issue. We also have a marginalization of languages and speakers, because for some reason we still equate natural language with English. Most models that we have are English, some are German, and depending not on how many speakers a language has, but rather on how much money the speakers of that language have, the models are better or worse. That's a sad state we are in. And lastly, many of these models are reproducing and reinforcing social norms and stereotypes, and we have to be extremely aware that this is happening and that this could be, or is, an actual issue that we are facing on an everyday basis. So what can we do? Well, we should at least consider these things, and we should try to build language models that are aware of these issues. We should try to include context in our models. And we should be aware of the fact that there is not just English but many languages, that most people speak more than one language, and that it's maybe unfair to force someone to use just one language instead of all the languages they have available. And that's basically it: a call to action. Solutions are very hard, but language is very hard, and we have to embrace that complexity if we really want to do natural language processing in a way that is not just future-proof, but that is also fair in terms of stereotypes and fair in terms of treating people as who they are, in terms of the languages that they speak and the languages that they want to speak. Thank you. Thank you. Next up is Rebuild: hack a better programming language. So, hi. 
Yeah, we've heard a lot about languages now. Whether we want to build drivers or process natural languages, I think programming languages are serving us very well, and they have more and more become a tool with which we not only instruct computers what to do, but also express our ideas and understandings of the world. So I think, even though languages are good, we can do much better. That's the Rebuild language project: we want to hack a better programming language. And what's our goal? It has to be at least as fast as C. We want to have fun, so we skip all the legacy. And one of the major goals is that we want to make it more accessible: we want to include everybody, make everybody able to hack. And what are the concepts? I cannot express or convince you in five minutes what the programming language or the project is about. We have a lot of ideas, we have very high motivation, and the one good thing is we have persistence, so we keep on. What I want to do is set a hackable programming language in contrast to commercial programming languages and also academic languages. They have certain valid concepts, but I think getting a hacker perspective into that language realm is really important. So we have to use our weaknesses as a strength and keep it stupid simple; that's the main hacker-culture thing. And we also keep it hackable. Hackable really means, for example, that we want to have translatable error messages, or diagnostics that can be processed not only by humans but also by tools. That is very easy to do, but almost no programming language we use today does it. Now, to one of the more involved concepts I am experimenting with today: the main concept is to use compile-time code execution as the main driver for the language. If we have that, we can basically replace everything else. For example, we can skip all the keywords. 
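The compile-time idea can be sketched roughly like this: instead of fixed keywords, code running at compile time talks to the compiler through an API. The sketch below is hypothetical, written in Python only for illustration; every name in it is invented, not part of the actual Rebuild project:

```python
# Hypothetical illustration of the "interactive compiler API" idea:
# instead of keywords like `var` or `fn`, compile-time code calls an API,
# and the compiler builds the program from those calls. All names invented.

class Compiler:
    def __init__(self):
        self.declarations = []

    def declare_variable(self, name, type_, value=None):
        self.declarations.append(("var", name, type_, value))

    def create_function(self, name, params, body):
        self.declarations.append(("fn", name, params, body))

compiler = Compiler()

# What a keyword-free source file might do at compile time. Because these
# are just API calls, a speaker of another language can alias them freely:
erklaere_variable = compiler.declare_variable      # a German alias
erklaere_variable("zaehler", "int", 0)
compiler.create_function("inc", ["x"], "x + 1")
```

The point of the sketch: "declare a variable" becomes an ordinary call, so localized or child-friendly vocabularies need no changes to the compiler itself.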
So we can make the programming language more accessible, because everybody can define their own keywords however they want, in the language they need, the language they know. So it's more accessible for people who don't speak English, or kids who cannot speak English yet but want to learn programming and learn the concepts. The other question is: if we have no keywords, how do we do anything in the language? The main idea right now is to have an interactive compiler API. What you do basically is, when the program compiles, you talk to your compiler: please declare a variable, create a function, make a class, whatever needs to be communicated to the compiler. So instead of using a keyword for that, you just call an API at compile time, and the compiler does what you request it to do. These are the main ideas I'm experimenting with right now, but there are a lot more involved. There are more ideas I want to explore, and that's basically the call to action here: help me help hackers create a more accessible programming language. Thank you for your attention. You can find the experiments on GitHub, and I also created an RFC repository, request for comments, where I try to write down ideas and then explore them in the real compiler. And if you don't want to contribute code or ideas, then you can at least follow our GitHub or Twitter account. Thank you for your attention. Thank you. All right, then next up is open cultural data is out there. Hello. I am here today to share my excitement, my enthusiasm about open cultural data with you, and to infect as many of you as possible with this enthusiasm. Many of you probably know that museums, libraries, and archives worldwide are digitizing their collections systematically, in high quality and at large scale. And many do it in such a way that, both legally in terms of licensing and technically, this data can be reused. And this is not only about paintings. 
Maps, of course, are also available, as are drawings, three-dimensional objects such as sculptures and coins, and so on. Manuscripts, things that are centuries or millennia old, with many different contents that can be analyzed in very different ways, including by people from entirely different fields, all the way to audio and video material in the natural sciences, dances, films, whatever. A huge treasure of data is being built up and made available by these cultural institutions. Until some time ago, many of them presented this data in digital showcases that you could only look at. But increasingly, these institutions provide APIs through which you can retrieve the data, search it, and process it in large quantities. Here, for example, the Metropolitan Museum in New York or the Rijksmuseum in Amsterdam provide their own APIs through which you can explore their complete collections and process them with your own software. Many other institutions, however, now use generic APIs. Somewhat older and a bit dated are the OAI-PMH interfaces or SRU, which are still based on XML over HTTP. More modern is, for example, the International Image Interoperability Framework, IIIF, which is based on JSON and JSON-LD. So it is linked data, which means you can connect it very well with other data sources and network these datasets with each other. In addition, IIIF offers the possibility of querying the image data directly and dynamically: instead of always pulling the complete, large images, you can, for example in your own web applications, fetch only sections or scaled-down variants directly from the image servers. 
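That dynamic querying works because the IIIF Image API encodes region, size, rotation, quality and format directly in the URL path, so a client can request exactly the crop or scale it needs. A small sketch building such URLs; the base URL below is a placeholder, not a real image server:

```python
# Sketch of the IIIF Image API URL scheme mentioned above:
#   {server}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
# The base URL here is a placeholder; real IIIF servers publish their own.

def iiif_url(base, identifier, region="full", size="max",
             rotation="0", quality="default", fmt="jpg"):
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

base = "https://images.example.org/iiif"   # placeholder server

# The full image, scaled to 400 pixels wide (height follows aspect ratio):
thumb = iiif_url(base, "painting-42", size="400,")

# Only a 1000x800 pixel detail starting at pixel (2048, 1024), full scale:
detail = iiif_url(base, "painting-42", region="2048,1024,1000,800")
```

This is exactly what makes IIIF attractive for web applications: thumbnails and zoomed details come straight from the institution's image server with no local preprocessing.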
There are an incredible number of sources on the internet, and I have structured my presentation a bit so that it is also a small link collection leading into this whole world of cultural data. Here, for example, are a few starting links for getting into IIIF, among them the Internet Archive; I believe it is not public at the moment, but their test server can also be queried via IIIF. Beyond that, there are of course also aggregators, such as Europeana or the Deutsche Digitale Bibliothek, which make the metadata of the collections searchable across institutions and also accessible via APIs. What can you do with such data, you might ask. The obvious thing would of course be to simply display the data or make it searchable somehow. But you can also create new tools to produce art based on old art, or to create designs working from these graphics, which today are often in the public domain or are at least provided as Creative Commons licensed data. Just as well, you can of course use artificial intelligence and machine learning methods to generate new art, create funny applications, or seriously inform about the past. All of that you can do with this data, with these treasures. And a good opportunity for that are, for example, the Coding da Vinci hackathons, which have been taking place for a few years now: cultural institutions provide data specifically for these hackathons, but then permanently, and interested people engage with this data and create new applications, art applications, useful things, games, whatever you can do with it. With that, I thank you for your attention. I hope the links will be looked up, and otherwise you are also welcome to approach me directly. Thank you. 
Thank you very much. So, speakers, if you want your slides to be available, please upload them in the submission system as a resource; then everybody can see them, they are public and can be downloaded. So we're going to have our last talk for this session. I'm sorry for the waiting-list people; we are back on time and don't really have space for another talk, so I'm sorry, but maybe see you next year. So this is the last talk: Kaboom, a cruel but fair minesweeper. Hello everyone, I want to talk about a really cool project that I did recently and really enjoyed. This is a minesweeper game. You've probably all played minesweeper, but just to remind you: this is a game where each number is the number of adjacent cells with mines; you have to uncover the cells without mines, and if you hit a mine, you die. And of course, as you know, you can play this game using logic, or you can guess, and sometimes you are even forced to guess, because there are two different possibilities and you cannot reason about which one is correct. So this sucks a bit, I would say. Recently I had an idea: what if the computer cheated? You might not know this, but the default Windows minesweeper already cheats. You know how the first square is never a mine? If you play somewhere and it would be a mine, then the computer moves the mine around and basically invents a new placement for you. So what if there was never any placement in the first place? Nothing is predefined, and when you play, we just invent a maximally inconvenient placement for you. Basically, if a square can contain a mine, it will contain a mine. So you have to be really careful, use logic, reason, and basically prove that a square doesn't contain a mine before playing it. In a sense, you could say that this minesweeper is a full-information game that you play against the computer, like chess, for instance. So this is what it looks like. 
On the left you see the cells, and you can see that some cells are safe, those are the dots; some are dangerous, basically guaranteed to contain a mine, those are the exclamation marks; and some are question marks, so there could be a mine or it could be empty, and you have to play a safe cell. If you play a question mark, then magically a mine will appear there, just because it can. The one exception is that sometimes you are forced to guess because nothing is safe, and then we allow you a guess, and you can basically influence your future: whichever question mark you play, you will uncover an empty square. So the implementation looks like this. Basically, we only have to consider the boundary of the revealed cells; the outside is not important, other than that the total number of mines must match. And at this boundary, we just compute all the possibilities using a backtracking algorithm and combine them. You can see on the right-hand side that some of the squares are guaranteed to have a mine, some are guaranteed empty, and some are neither. So this was my first implementation, but unfortunately it was too slow. This way you can basically fill up 12 gigabytes of memory even though the arrangement is supposed to be pretty simple, so probably we need something better, because as you can see, the situation on the board is actually not so complicated: if you were a human, you could probably say a lot about it. So I decided to use a SAT solver, which is basically a tool for mathematically checking whether a formula can be satisfied. On the right, you can see such a formula: you have three squares, each can be zero or one, and the sums have to match. Basically, our whole board is a set of formulas saying that exactly N of some surrounding fields have to be mines, or that in total there have to be exactly M mines. 
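The brute-force classification described above can be sketched in a few lines: enumerate every mine assignment of the unknown boundary cells that satisfies the revealed numbers, then mark cells that are mines in every valid assignment (guaranteed), in none (safe), or only in some (the question marks). The constraint format here is a toy of my own, not Kaboom's actual code:

```python
# Sketch of the brute-force boundary classification: enumerate all mine
# assignments of the unknown cells consistent with the revealed numbers,
# then classify each cell. Toy constraint format, not Kaboom's real code.
from itertools import product

def classify(cells, constraints):
    """cells: unknown cell ids on the boundary.
    constraints: list of (subset_of_cells, required_mine_count) pairs,
    one per revealed number (and one for the global mine total)."""
    valid = []
    for bits in product([0, 1], repeat=len(cells)):
        assignment = dict(zip(cells, bits))
        if all(sum(assignment[c] for c in subset) == n
               for subset, n in constraints):
            valid.append(assignment)
    result = {}
    for c in cells:
        values = {a[c] for a in valid}
        result[c] = ("mine" if values == {1}
                     else "safe" if values == {0}
                     else "?")
    return result

# A revealed "1" touching cells a,b and a revealed "2" touching a,b,c:
res = classify(["a", "b", "c"],
               [(["a", "b"], 1), (["a", "b", "c"], 2)])
# c is a mine in every valid assignment; a and b remain uncertain.
```

This enumeration is exponential in the boundary size, which matches the talk's experience of it blowing up in memory and motivates replacing it with a SAT solver and cardinality constraints.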
And basically now we can prove things mathematically about the game. I still need to do some tricks to cache the results, but overall it's pretty fast, it's pretty playable, it will not hang up on you. And that's basically it. Here you can see the game; it should work on a computer and also on a mobile device. You can go through this link or just Google for it, the name is Kaboom, and you can also read a blog post, because this is a pretty short talk but I actually had a ton of different adventures developing this game, and it was a pretty deep rabbit hole. So thank you very much, happy playing, and I would appreciate any feedback about this. Thank you. All right, so this concludes this year's lightning talk sessions, thank you all for being here. Please give a big round of applause for all of the speakers who participated.