I think everyone here should by now know how it works. For the first four minutes of your five-minute talk, the timekeeper shows a green signal, a rising green LED bar, and then the four minutes are nearly up; it will look like this. Then you have 30 seconds of yellow light, something like this, and the last 30 seconds of your talk are shown in red. And when it's about this height, I need your help, and you have to make up for all the people still in bed. So please give me a countdown: five, four, three, two, one. Marvelous. Very nice. But I think we can do better. Let's do it again. Do we want to try it again? I don't know. No, we don't have to. That's good. I always have it in the slides. That's... okay, great. We also have translations available, which is very important. Since some of the talks are German, you might need a German-to-English translation, and most of the talks are English, so you might need an English-to-German translation. And we also have French translations, from English to French and from German to French. So just look up streaming.c3lingo.org, where you can see the translated streams. All right, then let's go with the first talk.

First up is "Where Trust Ends: Certificate Pinning for the Rest of Us". Yes, hello. Good morning, everyone. My name is Harikus, and I have brought a question, an answer, a problem, and a solution with me today. The question is why I usually trust the web today, and why that is sometimes not good enough. The easy answer is: I usually trust the web today, and most of you do as well, because when you go to HTTPS-encrypted web pages, they're encrypted, and you implicitly trust certificate authorities to only hand out certificate signatures to the owner of the domain and not to anyone else who wants to do a man-in-the-middle attack. So that's what you implicitly trust today.
The problem with that, for some things, is: whom do you actually trust? When you go to the certificate manager in Firefox, for example, you can see there are well over 100 root certificates that can be used to sign certificates, so you implicitly trust many, many different entities. For example, the well-known Hong Kong Post Office. I always thought it was a running gag that you have to trust the Hong Kong Post Office, but I actually had a look, and it's in there, in the certificate manager: the Hong Kong Post Office. So I have to trust them for everything. Well, for most things, I think that's good enough for me, but for other things, I don't want to trust 100 root certificates: for example, for my own websites at home, or for my banking. Until a couple of years ago, there was a nice Firefox add-on that let one do certificate pinning, which means you pin the certificate that you have once seen, and then only this one is trusted. And if there is suddenly a new certificate, you get a notification that there is a new one, and then you can decide whether the change was valid or not. Unfortunately, Mozilla removed this API as part of a big cleanup a couple of years ago, and it wasn't possible to inspect the certificate anymore, so that add-on went away. And there was nothing for a couple of years, until back in September 2018 they added a new API to inspect certificates before the web page is loaded. So I thought, okay, cool, then I'll write an add-on for that to actually inspect the certificate and pin it, and if there is a new one, to block it and let people inspect it. This is what the API looks like, and this is what the add-on looks like. If you install it, for example via the Mozilla add-ons site, you get a new icon in the browser's toolbar.
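The trust-on-first-use logic just described — pin the certificate once seen, then alert on any change — can be sketched in a few lines. This is an illustrative sketch, not the add-on's actual code; the class and method names are made up for this example:

```python
import hashlib

class PinStore:
    """Trust-on-first-use certificate pinning (sketch, not the add-on's real code)."""

    def __init__(self):
        self.pins = {}  # hostname -> SHA-256 fingerprint of the pinned certificate

    @staticmethod
    def fingerprint(der_cert: bytes) -> str:
        # Pin the hash of the DER-encoded certificate, not the certificate itself.
        return hashlib.sha256(der_cert).hexdigest()

    def check(self, host: str, der_cert: bytes) -> str:
        fp = self.fingerprint(der_cert)
        if host not in self.pins:
            self.pins[host] = fp      # first visit: remember this certificate
            return "pinned"
        if self.pins[host] == fp:
            return "ok"               # same certificate as before, proceed
        return "alert"                # certificate changed: let the user decide
```

On "alert", the user inspects the chain and either accepts the new certificate (updating the pin) or blocks the request before any credentials are sent — which is exactly the decision point the add-on surfaces.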
And when you go to a web page that you want to pin, because you don't trust a hundred root certificates, you get a little green P there, meaning that the page is protected. Whenever you go to that page again, or to any other page that you have pinned, the add-on looks at the certificate that has been delivered, and if it's the same, everything goes ahead. But if it's a new and different one, you get an alert, and then you can choose whether the change was fine, for example because you have changed your own certificate for your own services. Or, if it was the banking website, you can look at the certificate chain and decide: well, that was probably okay, or maybe there is something fishy, and then you can either go on, or you can stop the process before any of your passwords or private information has been transferred to the other side. All right, and that's pretty much it. If you want to have a look, you can use the barcode here or the URLs down here. Have a look at the add-on, and you can have a look at the source code as well, obviously: the Certificate Pinner add-on for Firefox. Thank you.

Next up is distri. Hello, good morning. My name is Michael, and I think that Linux distributions are too slow. I've measured the time it takes to install a small Perl script on the major Linux distributions, and this holds true for both smaller and larger packages. I think it's really unacceptable that on, for example, Fedora, you have to wait 25 seconds to install a couple of kilobytes of program code. So why is it that these package managers are so slow? Well, all of the widely used package formats are actually archives: in Debian you have tar archives, in Red Hat it's cpio archives. And traditionally, what a package manager on Linux does is: it needs to download some global metadata, use it to resolve dependencies, download package archives, extract these archives, and then actually configure the software that was just unpacked onto your computer.
On top of that, these package managers need to carefully use the fsync() system call to make all of this I/O as safe as possible, so that, just in case your laptop battery dies in the middle of a package installation, your system still works once you power it up again. Now, in distri, we have removed all of these stages. We no longer need to resolve any global dependencies, we only need to download image files, we don't need to extract anything, we don't need to configure anything, and due to our design, we can do all of this using unsafe I/O. This approach scales to 12+ gigabytes per second on a 100 Gbit link using just the standard Go net/http package, so more optimization might be possible. If you compare this with the data rates from the previous slide, which were like 1-point-something megabytes per second, this is a really big contrast. So how can we do this? The key idea is that we're using an append-only package store of immutable images. So we're using an image format, SquashFS in our case, instead of an archive format. Then each of these images is mounted under its own path; this is a concept we call separate hierarchies. For example, if you were to install the nginx web server on your system, you would have a path such as /ro (for read-only), then nginx, followed by the fully qualified version number. The same is true for zsh and all of the other components on your system, but the rest of the system is laid out as usual: you have your typical /etc directory for configuration, /var, and so on. Now, with these separate hierarchies, you might be wondering: okay, but if you have all of these programs installed separately from each other, how can they still communicate? Because programs do use exchange directories that have a well-known path. For example, if you use your man-page viewer to look at the nginx documentation, it looks up a file within /usr/share/man.
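The mapping between well-known exchange paths and the versioned, read-only store can be sketched as a simple lookup. The paths and package versions below are illustrative only, not distri's actual implementation:

```python
# Sketch of distri's "separate hierarchies" idea: each immutable image is
# mounted under a fully qualified, versioned path below /ro, and well-known
# exchange paths are emulated with symlinks into those images.
# (Illustrative package names and paths, not distri's real code.)

store = {
    "nginx-1.17.6": ["usr/share/man/man8/nginx.8"],
    "zsh-5.6.2":    ["usr/share/man/man1/zsh.1"],
}

def exchange_symlinks(store):
    """Map each well-known path to a symlink target inside the versioned store."""
    links = {}
    for pkg, files in store.items():
        for f in files:
            links["/" + f] = f"/ro/{pkg}/{f}"
    return links

links = exchange_symlinks(store)
# A man-page viewer looking up the well-known path simply follows the
# symlink into the immutable image:
print(links["/usr/share/man/man8/nginx.8"])  # /ro/nginx-1.17.6/usr/share/man/man8/nginx.8
```

Because each version lives under its own /ro path, installing a new version just adds a new image and repoints the symlink, with no archive extraction and no mutation of existing files.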
And if you're using your C compiler to compile against libusb, it looks into /usr/include, et cetera. In distri, we just emulate these well-known paths: we have a symlink, for example within /usr/include, which points to the fully qualified file. The advantages of using separate hierarchies are that all packages are always co-installable. For example, if you upgrade from zsh 5.6.2 to a newer version and it breaks your config file, you can easily just use the older shell, or remove the new one, without breaking the rest of your system. But more importantly, this means that the package manager can be entirely version-agnostic: we no longer need to fetch global metadata from the internet and resolve all of these dependencies, so a large source of slowness in installation and upgrades is just entirely eliminated. Furthermore, we don't have any hooks or triggers in distri. A hook, sometimes also called a maintainer script or a post-installation script, is essentially a program that is run after a package was installed. A trigger is the same thing, except it's a program that's run after some other package is installed. For example, the man-db package in Debian builds a full-text search database of all of your man pages whenever you install any package that has a man page, so almost all the time. I personally never use this, and I bet most of you didn't even know it existed. So the work that is done at package installation time is entirely unnecessary for most of us. More importantly, having hooks in your architecture precludes concurrent package installation, because these hooks were not implemented with concurrency in mind, and they can also be slow, because nobody checks what the package maintainers are actually shipping in these programs. The claim that I'm making is that we can build a fully functioning Linux system without having any hooks or triggers in it, and the approach we're taking to get there is twofold.
The first idea is that packages just declare what they need. For example, if you have a daemon, such as the nginx web server, it might say: I need a new system user so that I can safely run the program as this user. And if you have one of the cases where it really doesn't make sense to implement a facility with which packages can declaratively say what they need, then you can still move the work from package installation time to program execution time. For example, for the SSH server, where you need to generate a host key, you can just create it in the sshd wrapper script instead of creating it at package installation time, which is also good for read-only images. So the conclusion is that an append-only package store is more elegant than a mutable system, and it results in a simpler design and a faster implementation, so it's a win-win. Using the exchange directories, where, as I mentioned, we have the symlinks for compatibility, makes things seem normal enough to third-party software, so we can compile and package software and run closed-source binaries, no problem. All of the ideas presented are practical; live CDs have paved the way with their read-only environments and cross-compilation. I'm not trying to build a community or a user base here; distri is a research project. I want to encourage you all to not accept slow Linux distributions, and I just want to raise the bar and say: it can be much, much faster. So thank you, and check out distri.org. Thank you.

Next up is hacking neural networks. Hi all, my name is Michael, and I'm working on a small open source course on how to hack into neural networks. Why should IT security care about such stuff?
For example, there are a lot of deep learning applications now on the blue-team side, such as anti-virus applications and intrusion detection systems, but obviously red teams also need to know how to hack into those systems, and they also need to create their own systems, such as automated penetration testing or phishing email generators. And there is, of course, all this questionable stuff you might think is a valid target, such as mass surveillance and crime forecasting. First, I'll give a short review of the terminology I'm going to use and of what neural networks do. Here's a typical neural network. It takes an image on the left side, performs some math on it, and produces an output such as: is this image a cat or a dog? It does this by computing a simple mathematical function on each of its neurons: weights are multiplied by the input and a bias is added, and all of this is fed into an activation function, for example, here, the ReLU activation function. As an example, let's take an access control system based on an iris scanner. It takes an image of an iris on the left, computes what we just saw, and outputs whether access is denied or granted, depending on the output value. This output neuron performs the same computation as all the other neurons do, usually with something called a softmax, but we'll just stick with ReLU. So the question now is: how can we modify this so that the neural network will always grant us access? Well, it's quite obvious: we can simply replace all the weights with zero and set the bias to one; then, no matter what iris we feed the neural network, it will always give us access granted. How can we do this in real life? The neural network is usually stored in something called a model file, which you can simply edit. Is this realistic?
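The weight-zeroing trick just described can be made concrete with a tiny NumPy sketch of a single output neuron. This is an illustration, not any real iris-scanner system; the input sizes and parameter values are made up:

```python
import numpy as np

def relu(x):
    # ReLU activation: max(0, x)
    return np.maximum(0.0, x)

def output_neuron(iris, weights, bias):
    # A single neuron: relu(w . x + b); access granted if the output is positive.
    return relu(weights @ iris + bias)

rng = np.random.default_rng(0)
iris = rng.random(8)  # some flattened iris scan (illustrative size)

honest_w, honest_b = rng.standard_normal(8), -2.0
tampered_w, tampered_b = np.zeros(8), 1.0  # edit the model file: w := 0, b := 1

# With the tampered parameters, the input no longer matters:
# relu(0 * iris + 1) == 1.0 for every possible iris.
print(output_neuron(iris, tampered_w, tampered_b))  # 1.0
```

Since the output is a constant 1.0 regardless of the input, every scan is classified as "access granted" — which is why edit access to the model file is equivalent to a backdoor.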
Yeah, it's actually quite realistic, because most blue teams don't know how to secure these model files: they are neither code, nor do they seem to be a database, nor a configuration file. They're often huge, gigabytes in size, and the dev team needs constant write access to them, so it seems kind of hard to secure them in a reasonable manner, and you will often find that they are quite easily accessible. Of course, there are other methods we can use. As a second example, we can perform a GPU buffer overflow. For image processing, we often find that the pre-processor for the image also lives on the GPU where the neural network is calculated, so you might find a GPU memory layout as shown here. And if there simply aren't any bounds checks on the image, we can, of course, overwrite the buffer: we can overwrite the whole model file and simply set all the weights to zero and only the last bias to one. So is this all realistic? Can we actually do this? Yes, you can. If you want some details, you can just follow this link, where I have a whole article explaining over 10 methods, and you will be able to try them all out in different exercises, such as backdooring neural networks or how to do a neural malware injection, and, of course, all the stuff I just showed you here. Thank you. Thank you.

So next up is cross-site request forgery, a side channel. Oh. Hi. Hello, good morning. This is not about presenting a shiny new fancy bug, but more about talking about the non-interesting bugs, so don't get your hopes up. Basically, I want to get some feedback from the community on how to deal with them. I guess most of you know cross-site request forgeries: a user is logged into a web page, so he has a valid session, and another web page tries to lure him into clicking a button or something that then sends a POST request to the original web page, which would then execute something the user does not want.
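A minimal server-side sketch shows why a session-bound token defeats such a forged POST: the attacker's page cannot read the token, so its request fails the check. This is a hypothetical minimal example, not any particular framework's API:

```python
import hmac
import secrets

def new_session():
    # Issue a random anti-CSRF token together with the user's session.
    return {"user": "alice", "csrf_token": secrets.token_hex(16)}

def handle_post(session, form):
    # A state-changing request is only executed when it carries the token.
    token = form.get("csrf_token", "")
    # Constant-time comparison, to avoid a timing side channel on the token itself.
    if not hmac.compare_digest(token, session["csrf_token"]):
        return 403, "request rejected"
    return 200, "action executed"

session = new_session()
# The attacker's page cannot read the token, so its forged POST fails:
print(handle_post(session, {"amount": "1000"}))                      # (403, 'request rejected')
# The legitimate page includes the token and succeeds:
print(handle_post(session, {"csrf_token": session["csrf_token"]}))   # (200, 'action executed')
```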
Usually, you protect against this by using cross-site request forgery tokens, and all the standard frameworks like Angular or Laravel send them along with DELETE or POST requests. But the RFC does not define them for GET requests, and this opens a side channel: if you can monitor network traffic, you can see which resources the user is able to access, by seeing whether the user gets a big response with data or a small response with an access denied. So this allows you to map the permissions of different users if you have access to the same Wi-Fi, for example. This is not highly classified information, but a side channel that might be interesting in some corner cases. I talked to the Angular people, and for them it's a non-issue: the standard says they don't have to do it, so they don't. But the question I want to ask here is: how should we deal with this as a community? Should we just ignore those things? Should we carry third-party patches in our own source code to fix this stuff? Or are there other ways we should handle this? For this special case, you have several options for how to deal with it, but what I'm interested in hearing from you is how we should deal with this politically, right? For the fancier issues, you can always pressure the vendor into fixing stuff, but for those minor issues, you can't, and I don't think we should. So yeah, please give me your feedback on what you think about this. Thank you. Thank you.

All right, next up is Emissions API. Good morning. I'm Lars. I want to talk about an open source project I'm involved in, which is called Emissions API, and which is about making emissions data from satellites easier to access than just getting binary data blobs from ESA. So this doesn't work. I think you've got it. Now it works. So this is what we are talking about here: the Sentinel-5P satellite by the European Space Agency.
It's part of their Copernicus programme, and it's orbiting Earth, gathering data about several emissions, like, for example, methane, carbon monoxide, sulfur dioxide, and so on and so forth. The cool thing about this is that all of these data are open. However, the problem with open data, as so often, is that open data does not necessarily mean the data is easy to access. It's a little bit preprocessed, which is nice, but you still get large binary blobs from ESA. So, for example, if you want data for Leipzig, you would also get data for Antarctica, which you don't really want, or at least don't necessarily need. And the other problem is that what you get is large blobs of binary data, so it's nothing you can easily process, at least not as part of a web application. This, for example, is a simple scan of basically the whole Earth, one flyover of Sentinel-5P, and this is the data you would get in one downloaded file: basically all of the world's data, with no way to get just a single spot. The actual scan looks a little bit different; this is a representation of a single scan line, flying here over Germany. It would be really nice to get just some parts of this, and that is what we strive to do with the project. Looking at the architecture, we want to make this as easy to access as possible: just a simple REST API, where you can say, okay, I want data for this point, this geolocation, or for this area, and you get back either GeoJSON or some statistical information in JSON format, which you can then use in any web application you want. We've built some example applications already.
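A query against such a REST API might be built as below. Note that the host, path, and parameter names here are assumptions for illustration only; the real endpoints are documented at emissions-api.org:

```python
from urllib.parse import urlencode

# Hypothetical base URL, for illustration; see emissions-api.org for the real API.
BASE = "https://api.example-emissions-api.org/api/v2"

def build_query(product, lon, lat, begin, end):
    """Build a statistics query URL for one gas product at one geolocation."""
    params = {"point": f"{lon},{lat}", "begin": begin, "end": end}
    return f"{BASE}/{product}/statistics.json?{urlencode(params)}"

# Carbon monoxide statistics for a point near Leipzig over one month:
url = build_query("carbonmonoxide", 12.37, 51.34, "2019-11-01", "2019-12-01")
print(url)
# The JSON response could then be fed straight into a web application,
# e.g. with requests.get(url).json().
```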
So if you go to our website, emissions-api.org, you can, for example, see this example, which shows the carbon monoxide emissions of Germany over one month, including a little guide on how to build this. A little bit cooler would be something like this: this uses WebGL to show a 3D representation of the carbon monoxide emissions of Germany. We are still limited to carbon monoxide so far. We started this project about three months ago, and we hope to get it more or less done in about three more months, hoping to increase the number of supported products a little bit, and the overall time coverage a little bit. But there's already an API out there to talk to, and some examples. So if you're interested, go to emissions-api.org, or find me or some of the other developers here at the Congress and talk to us about this. All right, thank you.

Next up is EXWM, the Emacs X window manager. Okay, hello. My name is Elis. I'm a long-time Linux and free and open source software user overall. I've been using Emacs for more than 10 years and NixOS for a couple of years; I'm also a ham radio operator. I'm here today to talk about what EXWM is anyway, my experience with EXWM, and the future of EXWM. So, EXWM: it's just Emacs, running in full screen, managing your graphical applications. That's it. And this has benefits and disadvantages, both good and bad. My experience with EXWM began with it being unimpressive and boring. I'm going to show you a screenshot: this is EXWM when it's freshly started, with my Emacs theme, and it's just Emacs without window decorations or borders, in full screen. But after many years of using Emacs, I have grown very used to the key bindings and the way you manage buffers, and very comfortable using it, which makes it exciting to have it as a window manager. So here I'm running a bunch of graphical applications, and all the key bindings used for managing Emacs translate to how you manage graphical applications within EXWM.
So you use the same key bindings when you manage graphical applications as you do when you manage projects or code, do normal editing, or run your terminal within Emacs. Overall, it's been quite a good experience. Most things work. It's not the best window manager I've ever used; it has bugs, plenty of them, but then, everything does, more or less. Then we come to the future of EXWM. It doesn't really exist, because the future of the Linux desktop is probably Wayland, as most of us probably think, but we're not there yet. And EXWM has a few problems with that future. One of the problems is that Emacs doesn't even run on Wayland yet. There is a branch for it, so we might get there someday. But then we have the other big problem: Wayland requires you to build a compositor, and doing that in Emacs Lisp... I don't know; just getting Emacs to run on Wayland is probably a good start, and then we'll see what happens. The year of the Linux desktop might not be this year, and might not happen any time soon, but at least I know many people who are humble about it. For me, though, 2019 was the year of the Emacs desktop as a window manager. To conclude: EXWM is a perfectly decent window manager for long-time Emacs users. It can be combined with evil-mode if you want Vim key bindings. The experience is that it most often works well enough for me, so I'm not switching away from it. And yes, the future is dark. You can reach out to me later if you want. Thank you.

So next up is "Uninstall $PRODUCT Now". Well, hello. Good morning. I'm one of the VLC developers. I'm not a security guy, but I know there are some security guys around. What I would like to talk about is that some morning you wake up and your mail is full of questions, and on the internet the world is burning, because everyone is being told to uninstall your product, and you don't know why. So, it was last July. You probably noticed it.
There was this CVE, highly critical, maximum score, remote: a buffer over-read in libEBML, the library used for MKV. It was filed by CERT-Bund, which you may know, and it was sourced from one of our ticketing system entries. We had, of course, no embargo on this, because it was on a public resource; we were also not contacted, and it was not verified. So what was it about? Well, when your ticketing system and the security researcher start pointing at the wrong commit as the cause, something might be fishy from the start. And when we asked him to post it on our security mailing list, for whatever reason, he didn't do it. Also, nobody could reproduce it. So he posted it publicly, because we never replied on the security list, because nobody could reproduce it. And why was it not reproducible? His configuration was libFuzzer on our master source code, running on Ubuntu, but he was using a vulnerable, unfixed libEBML, and he did not follow our build instructions. We use a lot of libraries which are sometimes not maintained, so we have to maintain them ourselves, and we ship our own patches, mostly for the OSes we support, which are mostly Windows. So this highly critical CVE was totally bogus and already fixed. There's not much left of that CVE entry: it was totally downgraded, it was not reported by us, and it was not remote at all. But the most annoying effect was the web articles, with recommendations like "uninstall right now", which were not fact-checked, and all those clickbait articles spreading like fire on social media, which also played the game. The consequence was that the whole team had to fight for two days to put out the fire, monitoring social media and answering all the companies asking us for updates; we still have some people asking about this. We also had some people telling us that we were hiding things, because some people think we are hiding security issues. There were almost no article updates on the web, and no apologies.
And the reporter walked away with an "I'm sorry about that." That in itself is not the problem: he's a volunteer, so even if he's a beginner, it's okay. But another effect is that we lost one volunteer in the process. We also had an unexpected effect: we received some direct complaints about VLC being blocked from that day on, and we discovered that some anti-virus and proactive safety software base their blocking on CVEs. So with a bogus CVE, you have your product blocked everywhere by those products, and you need to update the version in the signature, so you have a DoS. This was really unexpected for us. Maybe someone could exploit this; I don't know, but this needs to be changed. So the lessons for us are: we now enforce no more public security tickets, and we will delete them immediately. And we are in the process of becoming a CNA for the multimedia projects we support, so we can manage that kind of issue in the future. Well, thank you. Thank you. Could you please put the clicker back? I think you still have the clicker, right? Do you still have the clicker device? Okay, sorry, it looked like you went away with it. No problem.

So next up is doing quantum computing with school kids. So, good morning, everybody. My name is Rene. I'm a teacher, and I'm here to talk about doing quantum computing with school kids, especially about the project that my three students have worked on this year. So let's start with a question. If the clicker works, then the next slide should come up. Maybe a bad time for the battery. Time is running. Okay, I can continue just talking a little bit. Let's start with a question: who of you has already worked with a real quantum computer? Can I see some hands? Okay, that's not as many as expected. Maybe it's because you are afraid of the mathematics. So here are some companies that provide you access to a quantum computer, for example, D-Wave with a quantum annealer.
But maybe you haven't worked with such machines because you're afraid of the quantum physics behind quantum computing, or you are afraid of the mathematics, of dealing with complex numbers and solving the time-dependent Schrödinger equation, or you think quantum computers are only for super-intelligent nerds and you don't have the right t-shirt for that yet. Okay, if we talk about a quantum annealer, all of this is, as Donald would say, fake news. It's definitely not that complicated. All you have to do is find a cost function for a given problem, because quantum annealers solve optimization problems, like the traveling salesman problem or scheduling problems. When you have this cost function that evaluates your problem, you create a matrix from it, feed it to the D-Wave quantum annealer, and the machine returns several solutions, from which you pick the ones with the lowest cost. Let me explain this with a concrete problem, the one my students worked on: the n-queens problem. Here you have to place n queens on an n-times-n chess board so that no two queens threaten each other. One of the constraints you have to meet is, for example, that in each row there has to be exactly one queen, and this can be translated into a cost function like this. Every square on the chess board is represented by a qubit that is either 1, if there is a queen, or 0, if the square is empty. Then you can set up this cost function for the row, where you add up the four qubits, subtract one, and square the result; you get the lowest cost, 0, exactly when there is one queen in the first row. Similarly, you can set up cost functions for the diagonals and columns. And when you put all these parts of the cost function together, you can write it as a matrix equation like this, where q represents the current configuration of the chess board, i.e. the qubits that are either 0 or 1, and H is the matrix of your problem that you get from the cost function.
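For a single row of four squares, the cost function just described can be expanded by hand: since each qubit satisfies q_i² = q_i, (q_1+q_2+q_3+q_4 − 1)² = −Σ q_i + 2·Σ_{i<j} q_i q_j + 1. That gives a matrix with −1 on the diagonal and +2 above it, plus a constant offset of 1. Here is a small NumPy sketch of that construction (illustrative, not the students' actual code):

```python
import numpy as np

def row_qubo(n=4):
    # (sum_i q_i - 1)^2 = -sum_i q_i + 2*sum_{i<j} q_i q_j + 1   (since q_i^2 = q_i)
    # encoded as an upper-triangular QUBO matrix, plus the constant offset 1.
    H = np.triu(np.full((n, n), 2.0), k=1)  # +2 for every pair of queens in the row
    np.fill_diagonal(H, -1.0)               # -1 for each queen
    return H

def cost(q, H, offset=1.0):
    q = np.asarray(q, dtype=float)
    return float(q @ H @ q) + offset

H = row_qubo()
print(cost([1, 0, 0, 0], H))  # 0.0 -> exactly one queen in the row: minimal cost
print(cost([0, 0, 0, 0], H))  # 1.0 -> no queen: penalized
print(cost([1, 1, 0, 0], H))  # 1.0 -> two queens: penalized
```

On the real hardware, one would hand a matrix like this (combined with the column and diagonal terms) to the D-Wave sampler; the diagonal and off-diagonal coefficients are all the machine needs.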
Then you simply feed it to the D-Wave system with a simple Python script like this, and the machine returns several solutions, and you just have to pick the ones with the lowest cost. And that's it, problem solved. So no big deal, nothing scary lurking around. School kids can do it if they are clever enough, and my students did. If you want to know more about this project, the students are here, live on stage, at 7 o'clock at the ChaosZone Bühne. At 7, they will give a talk with more details and will be there for questions and answers. We also have a website where you can look at the project documentation; it was a Jugend forscht project. And my students have also given a talk this year at the International Supercomputing Conference in Frankfurt; that talk was recorded, and you can find it under this link on YouTube. So thank you for your attention, and see you this evening, hopefully. Thank you.

So next up is Opencast. So hi, I'm Lars; you might remember me. I'm not only involved in Emissions API, I'm also a main developer of Opencast, which is a free and open source software for, basically, video recording, processing, and distribution. Its main focus is on the academic world, so universities recording their lectures, for example. The basic idea behind Opencast is that, while here in this lecture hall, for example, we have dedicated people recording stuff, as a university you couldn't do that on a large scale in your usual lectures. And you could also not force your lecturers to deal with the technical problems of video recording. That may work for the very few lecturers who are interested in this topic, but for most, it simply doesn't, and most are really not interested in the technology and in doing that stuff themselves. So having this automated, having equipment, and being able to just schedule a recording for a talk or a regular course is immensely helpful.
The recording then happens automatically, it is processed, and you can configure the processing: you can do video transcoding, and you can also do some media analysis. For example, Opencast supports slide detection; you can do things like text extraction from slides, to search through these slides later on, or speech-to-text, for example. All these steps are configurable, and you can run them or skip them, depending on what you want to achieve: do you want to push out your recordings as fast as possible, or do you want as many analysis steps as possible? If you want to test this out, you can go to develop.opencast.org, which is a test server that is reset on a daily basis and runs the latest development branch of Opencast. There are other test servers out there as well, but that one usually works quite well. It's also up most of the time; develop is pretty stable. If you happen to break it, please let me know, and I'll simply reset it, but usually it's up and working. Talking a little bit about open source projects also means it's interesting to talk about the community behind these projects. Opencast is quite an old project by now, about 10 years old, and it's used at universities worldwide. Looking at Germany specifically, it's actually one of the most used lecture recording solutions out there. There are some commercial competitors, but I think in Germany specifically we are above these commercial competitors. That's unfortunately not true worldwide, but at least in Germany it is. Looking a bit more at the community: this, for example, is the package repository maintained by Opencast, where you can register, and these are the registrations from this package repository, which looks quite nice, because you can see it's used worldwide.
It doesn't necessarily mean that all of the people who are registered here are actually using Opencast, and you also don't have to register to actually use it. So some dots will probably not be using Opencast, and others will use Opencast but are not on this map. But it at least gives you an impression of where Opencast is used. Looking at community events, you also see that a large part of this community shows up to them. These are two photos, taken in Valencia at the international Opencast summit, and at the Technische Universität Ilmenau here in Germany at the local German meeting of the project. If you want to know more about Opencast, you can contact me, talk to me, or talk to the larger German community or the international community, and just find out about the project. So thank you. Thank you. Next up is Cider, or Cidre, by Food Hacking Base. Oops. All right, there you go. Hello, everyone. I'll just try if it works. Yes. My name is František Apfelbeck, I go by Algoldor during computer work, and I will be presenting cider. Cider is an alcoholic beverage made from apples, done especially in France, England, and the north of Spain. Historically, we know it's around 2,000 years old; we have records from the regions I mentioned, since production began in Europe. I would say that at the moment England is the bigger producer and drinker, France is behind, and after it is Spain. Quality-wise, I would vote for France, because the traditional methods of production are still in place. And it's not even that the laws are stricter; somehow the whole industry follows good quality standards more. Not sure why, but definitely better than England at the moment for the mass scale. Now I will talk a little bit about how you do it. Harvesting: manual versus mechanical.
I do manual harvest, by hand, on the field, on my knees, into a panier, a basket, which I empty into my sacks, and after that I have to get it to the place where I process it. Which means 13 kilos in each hand, after that on the back, 25 kilos, 200 meters, for example, to my remorque, the trailer, and from the remorque to the place where I do it. And after that you have to make something with it. You crush it with a crusher or a râpe; there are different techniques and different machines, or you can do it also by hand, which you don't want to if you want to survive. You will press it later on. There are many different types of presses: pneumatic ones, the presse à paquets, or, in Germany, the water press; many ways. Between crushing and pressing, you may wait for two or three hours for special chemical reactions to take place; you would do cuvage, a special French technique, still done in most of the production. After pressing off the marc, sorry, after pressing the apple pulp, which is called marc, you can actually add water to get a second pressing, like in wine; it is called remiage. Once you have your must, your apple juice, you actually would like to do défécation, which in English means shitting, which basically means purification by the coagulation of the pectin. The solution is clearer, lower in nitrogen and lower in yeast cells, so it ferments more slowly. After that, you would like to do soutirage, which is also called racking, which means transfer of the cider from cask to cask, or your container, cuve to cuve, decreasing the dépôt on the bottom and decreasing also the amount of yeast cells and sediment, so you can slow down the fermentation. When everything goes well, you will end up with a cider which is actually decent enough to put in a bottle, so you will have to bottle it.
If you press in October, November or December, that's the cider season, you may, as a traditional maker, actually be bottling around February, March, April, maybe May; not so good after that. You put it in the bottle by gravity, and if all goes well you will do prise de mousse naturelle, which means you put a cork in, a muselet around it, and you hope and pray for a few months that the bottles will be fizzy, but not too fizzy, meaning they will not explode. There are many ways to screw it up, and most of the time you find one or two on the way. Now, products. You have apple juice, a bigger and bigger thing in Normandy actually, where I am now based. You have cider, which is generally between three to six percent; in Ireland, sorry, in France, around four to five. You have eau-de-vie, which means a young distillate, kept clear in the glass; it's not aged. Calvados, which is, you know, Appellation d'Origine, generally around 40 to 50 percent, 40 percent for the commercial one, aged in oak barrels for years, at least two or more. You have vinegar; many times, when the cider doesn't go well, it ends up like that. Or you have pommeau, which is a mélange of Calvados and jus de pomme, aged in oak barrels for around 12 months or more. These are the basic products which you can do. Also, you can make some concentrates and play around; you can make cidre de glace, ice cider, and many other things. Now, what you would like to check when you make your cider. The first thing: specific gravity. You press your juice, you check the specific gravity so you know the amount of sugars, and at the end you can say how much alcohol you will get. You check your pH, and if you can, nitrogen and calcium in the lab, so your bottles don't explode and your défécation happens. You would like to ferment at around eight degrees of temperature, if you can, and you check that you don't have too much bacterial contamination.
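The specific-gravity check he describes can be turned into a rough number with the common homebrewing approximation ABV ≈ (OG − FG) × 131.25. That rule of thumb is my addition, not a figure from the talk, and real cider makers rely on lab measurements; this is only a ballpark sketch.

```python
def estimate_abv(og, fg):
    """Approximate alcohol by volume from original and final specific gravity.

    Uses the common homebrewing rule of thumb ABV ~= (OG - FG) * 131.25.
    Only a ballpark estimate, not a lab-grade measurement.
    """
    return (og - fg) * 131.25

# Fresh apple juice around 1.050, fermented dry to roughly 1.000:
print(round(estimate_abv(1.050, 1.000), 1))  # prints 6.6
```

That matches the four-to-six-percent range he mentions for traditional cider, which is typically fermented with some residual sugar left.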
Because if you do, then you are in trouble and you may get explosions. Now, this was a very simple overview of cider making. If you want to know more, visit Food Hacking Base, which is in Hall 2, and we can talk and we can taste. We have a cider tasting, which is fully booked, but there are some ciders which we can offer, and you can have a bit of a taste of what I do and other people do in the field. Thank you very much, and I do hope to see you soon; enjoy cider in general in your local regions. Thank you. Bye. Thank you. So next up is Menschen beurteilen ("judging people"). No, that's, wait a minute. So this one, he's coming. Hi. So it's actually the opposite of Menschen beurteilen; this is supposed to be a non-judgmental talk. I use this, yeah. So we're going in the direction of healing, and for that reason I'll start by introducing the plan for how we can have healing on earth, and then I will go into discussing one of the obstacles. So first the plan. You begin by sharing values, and values are word harmonics whose true meaning requires human discourse and intention. So there are a lot of values, but what's a good example: like Gerechtigkeit (justice) or Ehrlichkeit (honesty), the classic example, like do you say that you're hiding fugitives or not? And there are a lot of word harmonics that we have yet to discover. So the first step is discovering our values and sharing them. A federation, by the way, in this context is defined as any two people who share values. So it's really easy to federate: just share your values. The second step is to develop a healing base, a healing basis upon which we develop our work. Humans have been doing it backwards for our entire history: we take a value like leadership or function or anything you want, we put it above us as a goal, and then we try to get to it, and we lose our values. So we inevitably don't have the social structures to support the goals, which means that we have to start with a healing basis.
So the second step, after sharing our values, is to develop a network of healing, and then the third step is to share that with the world. So, yeah, the big obstacle; this isn't a judgment talk. I had a dream the other day where there was a Mexican standoff and everyone had their guns pointed at each other, and they all grabbed each other's guns, and they went off and confetti went everywhere, and we were asked: are you a dancer or a judge? So this is trying to dance more. And the question is: is there a healthy attitude toward the system in which price is a value? So, there it is. We'll call it Mr. C, and it works for each of us, but what's the real issue here? The issue is that the rational actor has to sacrifice most other values in order to get to the value of price, because inevitably it's really clear what has more value, right? Price. The other problem with that is that you're also most willing to have one, and then, again, we lose our values. So we're talking about: you've got the price value and you've got all the other values, and this is a system in which we are asked to focus on price. So obviously there will be problems with things like externalities or thinking about the future. Anything that you can actually associate with human values is at risk. So, we need money, we need to think about people who are not going to be developing a healing system but whom we also need to heal, and we also need to do this culturally. The state is very unlikely to help us as well; this needs to come from the heart. So the real question is: is peace sexy? I don't know. But price really isn't. Price guides us, and it ends up being about developing a basis of healing. And the main idea is that we always develop our work in terms of a goal, like making money or developing a great product or something like that. But the goal of this is to invite you to think differently and develop every one of your work products in a context of healing.
And there are four ways of looking at that. I don't have a cool diagram for this, I'm sorry, but you can consider it in a box of four, where the two top categories are human and Umwelt, right, environment. So we're healing the worker and the environment that they work upon, and then we also have to heal the deficiencies that are caused by the work and the overabundance that is caused by the work: the trash, maybe the manual labor that hurts the body. But consider every work we do in the context of a healing basis that we all have to develop, and if we develop that healing basis and define all our work within it, then we won't really have any more problems. So I think it's pretty simple, but I'll be working on it, and I'll be at the Ecohackerfarm; you can find me there. And thank you for your time. Thank you. Next up is: why we need a supply chain... a supply chain law, sorry. This talk is in German. Imagine you're sitting in your hackerspace, you want to solder something, and you pick up a resistor. In this resistor there are various materials, various raw materials, among others iron. And this iron was mined somewhere in the world, for example in Brazil. For example here in Brumadinho, where the dam of an iron ore mine broke, a toxic mudslide swept over a village, and more than 200 people were buried under it. The dam had been certified as safe a short time before, namely by TÜV Süd, a German company. In the meantime there is evidence that there were a lot of safety issues during the inspections, and that the mine operator pushed to have the certification issued anyway. At the moment it is quite difficult for the victims and their relatives to take action against TÜV Süd, even though it obviously didn't work carefully here, because the relatives are in Brazil and TÜV Süd is in Germany.
A short aside: the federal government this year carried out a survey among German companies on how they deal with human rights. The question was whether the companies meet their human rights due diligence obligations. Only about 20% of the companies do, so human rights play a small role in the German economy. That's why there is the Initiative Lieferkettengesetz, the supply chain law initiative, that I would like to introduce to you today. This is a broad civil society alliance, carried by many large organizations, through to Greenpeace, and supported by even more people, and it keeps growing. And I would like to tell you the core demands of this initiative. If a company causes damage, it has to take responsibility. For example, if T-shirts are sewn for it in an unsafe factory, it has to be held responsible. Irresponsible companies must not enjoy an advantage. They enjoy one at the moment, because they can save money by looking away when human rights are violated; we cannot accept this situation. The responsibility should not be shifted onto consumers. The question of whether human rights are respected is not one that everyone has to answer 100,000 times individually; human rights should be respected for everyone, and that's why we, as a society, should firmly establish that human rights are to be respected. People affected by human rights violations need access to justice in Germany. If someone abroad is harmed by the actions of a German company, that person must be able to take action against the company in Germany. Voluntary commitments by companies are not enough: when companies commit to voluntary measures, these are too often small steps that don't actually eliminate the problems. If you want to know more about the initiative, look at lieferkettengesetz.de; you will find a lot of information there.
You will also find the exact demands. You can also sign the petition online, which gives the initiative additional political weight. Thank you. Thank you. Hello, I'm Felix Petersen, and I want to present some of my research: Pix2Vex, unsupervised 3D reconstruction. What did I do? What is 3D reconstruction? We have images, for example of this shoe, and we want to reconstruct the 3D model that underlies them, so that we can... the GIFs are not working. That's really bad. It should rotate and it should show the 3D model. I want to do this unsupervised, so the 3D model is not given. Here I have two examples: we reconstructed the 3D models on the right from the respective images on the left. But now the new thing is that the 3D model is not supervised, so we cannot say whether the reconstruction was correct and in which direction to train the neural network to give an appropriate reconstruction. And like a school class, if it's unsupervised it can get a bit chaotic, and I want to show you that it's still possible to do the reconstruction. You might wonder, it might be magic or something, but I want to show you that it's science which underlies this 3D reconstruction, and that it can really work.
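The dilemma just described, no ground-truth 3D and therefore no direct training signal, is usually resolved by analysis-by-synthesis: propose a shape, render it, and compare the rendering to the input image. Here is a toy one-dimensional sketch of that idea, entirely my own construction; the actual pipeline described next uses a neural network and a smooth differentiable renderer instead of this discrete search.

```python
# Toy analysis-by-synthesis: recover a bar's width from its "image"
# by rendering candidate widths and comparing pixels.
# No ground-truth width is ever used as a training target.

def render(width, size=32):
    """Render a centered bar of the given width as a row of 0/1 pixels."""
    start = (size - width) // 2
    return [1 if start <= i < start + width else 0 for i in range(size)]

def reconstruct(image, size=32):
    """Pick the width whose rendering best matches the observed image."""
    def loss(width):
        return sum((a - b) ** 2 for a, b in zip(render(width, size), image))
    return min(range(size + 1), key=loss)

observed = render(10)          # the "photo"; its width is unknown to us
print(reconstruct(observed))   # prints 10
```

The point of the smooth differentiable renderer in the real method is that this brute-force search over candidates gets replaced by gradient descent on the same render-and-compare loss.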
So we start off with an image, for example of this bunny, and then we do a reconstruction. What exactly happens in the reconstruction is not so important; what's really important is how we can supervise whether our reconstruction is correct. So we do a rendering of it, but there we need to be a bit cautious, because we still need to put it through the neural network, so we cannot just do rasterization; we need smooth rasterization, so the result will also be a smooth image. And there we have the problem: our round trip doesn't really work. If we try to train on that, it will not converge and it will crash. So we need to reconstruct the texture and the style of the image, and how do we do that? We use a Pix2Pix network. For example, when I drew this cat, I could just apply a Pix2Pix network to make a photorealistic image of a cat. Then I've drawn the city hall of Leipzig, and also the city hall of Leipzig looks quite cute as a cat. You might wonder why it is a cat: in the training data there were always these edges, and on the right side there was always a cat, so no matter what you put in on the left, a cat comes out. If you put in the CCC logo, then you will get it as a cat; if you input bread, you'll get cat bread; and if you input this, you'll get a multi-eyed cat alien. If you want to try that at home, you can just google for Pix2Pix and you'll find a website where you can try it out yourself. So with this component we have this round trip, and we come back to the original image, and then we train this, and unfortunately it still crashes. Why is that? We have here a lot of components, and if we just stack them on top of each other, there's no way that shouldn't be a collapsing building; the entire architecture crashes if there are too many steps after each other. So how do we stabilize that? We stabilize it using GAN, not the Garn (yarn) that we use on microchips, but the generative adversarial network, where we have a counterfeiter who forges money; then we have a bank who
has a hobby and wants to print money, so it gets money. But then we have the discriminator, or the detective, who discriminates real money from fake money. Over time he gets better at detecting the money from the counterfeiter, but then the counterfeiter gets better at forging the money, so in the end the counterfeiter can forge money better than the bank can print it, and at this point the texture can also be reconstructed. So if we apply this concept to this round trip, we can train it stably. And here are some results: on the left side are the input images, and on the right side are the renderings of the reconstructions. To show that this also works on real images and on single images, I went to a website and downloaded 50,000 images of shoes, and then I trained it only on these camera-captured images of shoes, and as you can see, it was still able to do a 3D model reconstruction from that. So I thank you for your attention, and if you have any questions, you can come up after the talk. Thank you. I still have to get that cat image out of my mind. All right, next up is BalCCon 2020. Good morning. Do you know how it looks when a guy turns a badge into a taser, or how bad hackers are at karaoke, or how it looks when you call an elevator in the US? If you don't know, find out what is going on at the next BalCCon. Hi, I'm Jelena, I'm a co-founder of BalCCon. BalCCon is the Balkan Computer Congress; it's now an annual event in Serbia, in Novi Sad. We are not living in Serbia anymore, but we want to contribute to our home country, so that's why we started organizing something there. Novi Sad is a lovely town, very beautiful, 80 km from Belgrade, very nice. Some key facts about the congress: we started in 2013 as a small conference; it was, let's say, less than 100 people, 20 speakers, something like that. Over 7 years we grew up to 500 visitors, from more than 20 countries. Some of the highlights from BalCCon were our guests, and maybe it's interesting
that they are supporting us now: Travis Goodspeed, Virus, Zos, Mitch Altman, Robert Simmonds; there are a lot of names, you can name it. We try to give them the best of what's good in the Balkans. You can also join us playing CTFs, so if you want, there is a CTF you can play with us. What's also interesting is that we also have a Habokon: we started it this year, we found it very interesting, and we will continue it. If you have any questions about how to get there, because I know Serbia is not such a popular place, you can visit us at the BalCCon assembly, or you can contact us or send us an email; we will gladly help you. Because we want more people to come from abroad to visit our small conference: we are building a community in Serbia and in Europe, so we want to show them how it looks, and I think that's important. There are a lot of students there who don't have so much money to travel, and that's one of the reasons why we started this: in Novi Sad you have a big technical university with a lot of students, young people who are interested in technology and hacking. So the key facts from this whole story: BalCCon 2020, 25th, 26th, 27th of September, that's for remembering, Novi Sad, Serbia. You have the web page; for everything else, please contact us. Thank you. Thank you. Next up is a concise introduction to double-entry accounting. All right, hello everyone. I'm going to skip the intro today because I've already done that yesterday. My name is Luis, and we're going to talk about double-entry accounting. What I'm going to talk about has been known for a few centuries. I'm going to talk about how I do my accounting today, and the way I'm going to frame it is heavily influenced by my use of GnuCash. Okay, so accounting lets you track money movements across accounts, and we have five different types of accounts. Type one: assets. Asset accounts hold the money that you have in your bank accounts, but it can also be a physical asset that you can evaluate the
value of. Type number two: expenses. Expense accounts hold the money that you have spent, okay. Type three: income. Income accounts hold the money that you have received, for example from a salary. Type number four: liabilities. Liability accounts hold the money that you owe to someone else; it can be any form of debt. And type number five is a bit more abstract; it's called equity. You have to imagine equity as the global wealth in the world, from which your own money is split off, and you use the equity accounts to set the initial value, the initial balance, of your asset accounts. You take the money that you have from the world, note that it's yours, and you use that to open the balances of your asset accounts. Accounts form a hierarchy, and you can see in this picture on the left that the top levels of the hierarchy correspond to the five different types that I've laid out before. Now let's take an example of what money movements look like in double-entry accounting. In this example I have four different accounts, right: an account that represents my wallet, that's Assets:Cash; an account that represents my checking bank account, that's Assets:Checking; an account for categorizing food expenses, that's Expenses:Food; and an account to categorize banking fees, that's Expenses:Fees. Let's add two transactions to those accounts. We can see that the first transaction at the top is a $20 withdrawal from the bank, and I've incurred a $3 fee, which is a very common thing in the US; and then with the money in my wallet I've bought some food. Within transactions we have a further concept called splits. A split is a debit, meaning you take money out of an account, or a credit, meaning you put money into an account. So, for example, in the first transaction we have three splits: on the left we have the $20 credit to my wallet, then we have the minus $23 out of my checking account, and then on the right we have the $3 fee. The same for the second
transaction. A fundamental property of double-entry accounting is that within each transaction, if you add all the splits, the debits and the credits, they sum up to zero. And that's super interesting, because it means that with this rule you cannot make money appear or disappear. Double-entry accounting is very much like chemistry: you cannot invent elements from nowhere, nothing appears, everything gets transformed. It's exactly the same concept. Likewise, if you add all the splits in the other direction, meaning all the splits in a single account, you actually get the balance of that account. Right, so in my wallet I had a $20 credit and then an $8 debit, and so the balance in my wallet is $12. You can do this for all the other accounts. That zero-sum trick is reflected in the accounting equation, which is how to calculate how much money you actually have; it's basically net worth. Net worth means how much money you actually have, and it's just a simple subtraction: you take all of your assets and you subtract all of your liabilities. So you take all the money you have and you subtract all the money that you owe, and that tells you how much money you actually have. GnuCash displays that at the bottom of the account hierarchy. Then I'm going to give you a few tips about GnuCash. The documentation is split in two: there is a help manual that's about the interface itself, and there is a concepts manual that dives deeper into what I've explained today and yesterday; it covers accounting and different accounting methodologies and so on. And I found that the concepts manual is much more helpful than the manual about the interface. Then I have some GnuCash tips. This is not exhaustive, and I'm not even going to go through it all, but I think my favorite one is: when you start using GnuCash, do not try to import all of your transactions, all the history that you have in your bank; just
start from today: take how much money you have in your accounts, set that as the opening balance, and go forward. Don't spend days trying to import all of your history; it doesn't really matter. Then, one of my favorite tips: use a mobile app to track your cash expenses. There is one on Android that's supported by the project, and there is another one on iOS that's a bit harder to use, but it's fine. Yeah, GnuCash is a very mature piece of software. The interface: some people tell me it gives them eye cancer; GnuCash didn't give me eye cancer. It's clunky, but it doesn't matter; accounting hasn't changed in centuries, it's fine, it works. I find it cool to be able to back up a version of your accounting books, so archive the file that GnuCash gives you; that's cool. What else? Yeah, unlike other things at Congress, accounting is something you cannot really hack: if you're hacking it, you're doing it wrong. You need to understand what you're doing. And that's about it. I can help you set up GnuCash anytime today or tomorrow; feel free to contact me. Thank you so much. Thank you. Next up is Your Crowd. Okay, hey guys, my name is Till. I'm part of a small group of legal professionals, and we want to present you a small idea of ours, which we call Your Crowd. So this talk is going to be about the legal tech market in Germany, a problem that we see there, a possible monopolization of legal tools, and a solution that we call Your Crowd. So we start with the legal tech market in Germany as such. It started in the 1970s with legal informatics: we just tried to bring informatics and legal professionals together, because legal language is basically pretty rule-based, so you can try to automate it very simply. But in the 1970s storage capacity was limited, and the computing capacity was limited too, so we had lots of ideas, but we just could not implement
it. That changed around 2011 till 2017, when lots of companies popped up and put some nice tools on the market. It's not rocket science, it's mostly about text analysis and text recognition, and you try to automate simple legal analysis. But that led to a lot of efficiency in the legal market, because some lawyers simply had a lot of repetitive work, and now they can try to automate it via legal tech and save their time. So some lawyers reduced their working time by 70% while keeping the same income, and now they spend that 70% on pro bono work, which they could not do before. Another thing is access to justice. An example is Flightright: no one cared about flight delays before Flightright, and now we even have new services like Advocado that get your money back for your flight delay, even for free. And there are a lot of companies that work in that market. Well, what's the problem that we see there? It's not necessarily the technology as such, even if we could discuss that; it's more where it comes from. Because at the moment it's mostly startups and a lot of big law firms, who outsource their development, that build this technology, and they have a lot of money to produce the products that they want. And they produce mainly software which is specialized for their own law firm and made perfect for their firm, either internally, where the workers in the law firm use the software and perfect it, or externally; and they invest mostly in software and their own workers to do so. But small law firms cannot really do that, because they have no money and no time, and they don't have the capacity of big law firms, which can work collaboratively with IT specialists and have very big teams to work on these problems. They only have individuals in the law firm who maybe have an interest in IT and try to implement these things. So some of them are a bit upset about that, because we might get new dependencies, because big law
firms try to implement their software and license it to small law firms, and in that way these products get pretty expensive. On the other hand, the big law firms also get into the market of the small law firms, because big law firms used to focus just on the big cases, not on the small ones the small law firms took care of, and now they are entering that market. And there may be the chance that we get something like Google on the legal market. It's not necessarily going to happen, but it might be the case. Which is not entirely new, that we have these dependencies, because in law we don't have a system of open knowledge as in other sectors; we have two privatized search engines for legal knowledge, Beck and Juris, and they're pretty expensive. But with legal tech we might now get new dependencies, by automating the processes in your law firm, or you might just disappear from the market if you don't use these techniques. So we thought about how you could change that. We looked at practice, and we saw: okay, in informatics you just use GitHub to share knowledge; you just talk about what you're doing, which lawyers at the moment don't do, they don't share knowledge. We have projects like GIAX, where they try to upload lots of documents into the cloud, and the idea of sharing documents is interesting. And we know that we can distribute work: even if we don't like captchas, they show that we can distribute work over a huge crowd and get things done. And we saw this guy, this is Joseph, who was able to predict court cases; well, it doesn't matter, I'll just skip that. But our idea is: we combine expert systems, the lawyers, let them form a crowd and a group to share their knowledge and their documents, maybe use algorithms like machine learning to get something out of their documents which we have not seen before, and let them distribute the work to develop the techniques they want, like we have it with GitHub in informatics. Bring the lawyers together and let them share
their knowledge, everything they have, and maybe let them form collaborative teams so they can work. And if you want to help us, contact us. Thanks. Thank you. All right, next up then is: There's JavaScript in My Power Plug. Hey, I'm Harry, welcome to my talk, There's JavaScript in My Power Plug. I will tell you the story of how I found my first CVE. You pressed the wrong button, I think; please don't press anything right now. So, now go ahead to the right one. One day I walked through a supermarket and found a smart IoT power plug, and it promised it can be controlled via an app from all around the world. And I thought: hey, maybe I'm able to control it without the app, or somebody else is able to control it, too. So I started reverse engineering it. First we dumped the flash image and saw some JavaScript flowing through our terminal, so one of my colleagues said: hey, there's JavaScript in my power plug. Then we used simple tools like strings to analyze the binary, and found that there are cleartext Wi-Fi credentials in the flash image, which is also a very interesting point. Then we reverse engineered it to search for bugs, especially in the configuration interface where the JavaScript comes from. So, details on the CVE: it is a bug in the USR WIFI232 module, which is kind of a shitty ESP, and it is in the configuration interface, where you can obtain status information and firmware updates. This configuration interface also renders a list of the nearby Wi-Fi networks you can connect to. What you see at the bottom is the source code; maybe you already have an idea about the bug. Because if you open a Wi-Fi network with a malicious SSID nearby, so if you put JavaScript in your SSID, and somebody opens the configuration interface and reloads the page, then this JavaScript gets executed, and we have a cross-site scripting attack. This can be utilized by loading a more complex script from an external server, and within this JavaScript we can then do an HTTP request to the web interface page where the Wi-Fi
credentials are inside, and append the result to a request to a domain we control, to requestcatcher.com. So we can leak the Wi-Fi credentials of the home network that this module is logged into. This looks like this: on the right side you can see the JavaScript exploit code. It is not very well written, but enough to show the concept. On the left side you can see the request catcher; the censored part is the password. Below this you can see that it is also possible to leak the username and password of the web configuration interface itself. Some words on the disclosure process: it is a Chinese company from Shenzhen that produces this module, and they didn't react to any of our emails. So we requested a CVE from cve.mitre.org, I'm currently doing this lightning talk, and I'm also writing a blog post at Tiller Home. A short conclusion: if you use this module in your Wi-Fi network, it is possible to steal the Wi-Fi credentials via a cross-site scripting exploit, by opening a Wi-Fi network with a malicious SSID nearby. A next step could be to gain code execution: in the code there are a lot of sprintfs without a length check, and it seems they wrote their own protocol parsers, for example their own DNS parser and custom protocol parsers. We expect some buffer overflows to be found there which can be exploited. If you want details and the proof-of-concept code, have a look at the blog post, or contact me via email or DECT. Thanks for your attention. Thank you. All right, next up is the Malware Research Telegram group. Thank you. Welcome, everybody. This will be a short lightning talk about an open research group we have on Telegram; there are currently nearly 3,500 members in there. First up I'll tell you really briefly something about myself, what the group is, and how you can join, and I think that should all fit within 5 minutes. My name is Max Kersten, I go by the nickname of Libra, and I'm one of the administrators of the group. I worked as a malware analyst before; I currently work as a threat intelligence
analyst. I write my own blogs and tools, which are also discussed in the group, together with other people who do similar things. So, what is it? It's a public group, meaning anybody can join; there's no vetting, no invite process, and the last slide will contain the join link for whoever wants to join. We're strictly white hat: we don't pwn anything, we just analyze the malware and help each other out when we have questions. The target platform or architecture doesn't matter too much; there will always be someone who can help you out. You can have a couple of goals when you join the group. You can stay up to date on the latest news and developments; new tools that are being released are also published in there. You can learn from the questions of others: maybe someone else is stuck with a problem you never encountered by chance, but which you might have use for in the future. You can collaborate with people; there are multiple small groups that split off and did small things. I created a small subgroup myself with a couple of people I know, and in there we have some projects running. You can ask any question you like, as long as it's related. We have a weekly item, the current week being week 2 of it running, where you can for example discuss things that you, from your experience, changed within a company or within a network, and see what benefit came out of that, so we can all learn from each other there. There's a pinned message with resources. These resources vary: some of them are about how to obtain new samples, some contain tips and tricks, some contain posters, but in general there's a list of resources that you can use. The rules are also described in there; the TL;DR of the rules is: be excellent to each other and don't do anything weird. The group consists of students and professionals alike; I don't believe there's much of a difference, aside from some experience in some cases, but it's open to anybody who wants to join.
As a last remark: at the end of all lightning talks, so they end at 13:45, we'll be having a first meetup at the left exit of Borg. From around 13:45 till 14:00 is the gathering, I guess, and then if there are enough people we'll just move somewhere, and if not we'll just stay there, since we don't want to block the exit for others. If you want to join, this is the join link; you can also search within Telegram for "malware research" and you should find this one. You can take pictures, you can write it down, do whatever you want with it, and I hope to see you either in the group or at the exit after all lightning talks. Thank you. Thank you. Next up is PrivacyMail. There we go. Okay, thanks a lot. I'm going to tell you something about PrivacyMail; it's an email privacy platform that we developed at TU Darmstadt. "We" is myself, Stefan Schwer, and my professor Matthias Hollick. So, your first question might be: wait, email tracking? Usually when we talk about email privacy, we talk about Alice and Bob wanting to exchange a message, and we don't want all of the evil people that are on the wire to know about it. That is not what we're talking about here. We're talking about the fact that the sender of the message wants to know whether the recipient has read the message. So it's not private communication between two people; it is, for example, Amazon sending you a newsletter and wanting you to click links. When you want to track emails, there are usually three different things you can do. You can track views, via tracking images or remote style sheets: when the email is opened with remote content enabled, a request will be sent to the server of the analytics company, and they will know that you opened the email. The second thing is you can track interactions. This usually happens by using personalized links: you have a link that is only used in this specific email to this specific person, and if that link is clicked, you know that this person
clicked this specific link in the email. Fairly straightforward. And finally, you can also link identities. Because if you think about it, you usually have your email both on your laptop and on your phone, and then there is all of the nasty web tracking that is going on, which will have a profile of you on your phone and a profile of you on your laptop. But if you click the same link, or a link with the same identifier, both on your laptop and on your mobile phone, you link the identities, so they can basically merge the profiles from your phone and from your laptop. So this is actually fairly interesting. Depending on who you ask, between 24 and 85% of emails contain tracking; the truth is probably somewhere in the middle. The person sending you the email knows if you opened the email, when you opened it, which device you used, which software you used, so Thunderbird, webmail, whatever, and where you were, via IP-based geolocation once you go to the website. All of this, of course, only if you have remote content enabled and click links and so on. So we built a piece of software that is intended to detect this kind of tracking and make sure that you basically know what you're getting into when you register for a newsletter. To do that, you go to our website, privacymail.info, and you tell us: look, I want to sign up for this specific service, like example.com. You sign it up with that service, the service sends us the opt-in message, we check that there are no shenanigans going on, and then we confirm the registration of the newsletter. Please don't try to register accounts, we will not click confirmation links, thank you very much. And then we receive the newsletters on our email server and get them into our crawler. Our crawler uses OpenWPM, which is basically a variant of Firefox that you can remote control and that is intended for online privacy research. With that software we then open the email,
also click a link, and track all of the interactions that are happening, so all of the requests that are being sent and so on. The results are written into a database, and then we have an analyzer that creates the results that we can display on our website. This is a subset of what you can see on our website. Here you can see, for example, that if you read the newsletter from Spiegel.de, there are four external parties that are contacted when you open the email, including Spiegel themselves, but also Newsletter2Go and IOAM, which is some sort of German tracking company. Similarly, when you click a link, there are additional third parties that are included in basically a long redirect chain until you reach your final destination. I actually gave a longer talk on this topic at GPN this year, so if you are interested in more details, take a look at that talk. You can find the link down at the bottom: at this mars.xyz slash talk slash GPN 2019 you find the slides, the paper, everything, and of course a link to the platform. You can play with the platform at privacymail.info. It is of course open source, so you can also send pull requests or take a look at what it looks like. Right now I have a student who is working on redesigning the interface, and for that we would be really interested in finding out what your priorities and concerns are when it comes to email tracking: is it worse if they track which links you click, is it worse if they track whether you open the email, and so on. So if you have three minutes of time and are willing to fill out a survey, you would help us a lot. Thank you very much. I am running around here at Congress, feel free to talk to me. Thank you. Next up is openage. So hi, we are the openage maintainers, and this is our yearly update talk. openage is a free engine clone of Age of Empires 2. We are trying to provide the original look and feel of the game.
For that we use the original assets, since we are no artists, but we plan to support free replacement asset packs. The project was started six years ago, when we didn't really know what we were doing, with the main goal of providing unlimited extension possibilities: think about support for more than eight players, an actually sane networking stack, or really crazy map mods like a zombie survival pack or whatever. There are similar projects for other classic games, which you can see conveniently listed here. The engine is based on C++17, with Python 3 used for scripting and a Cython glue layer. Age of Empires is still very active, even in 2019, because Microsoft just released the Definitive Edition with vastly enhanced graphics and a fancy new UI. Yeah, we're not quite there yet, but we're getting there. This year we mainly had a documentation overhaul with fancy Sphinx stuff, defined our modding API, started the conversion of the original game to that new API, have continuous integration for macOS and Windows, nightly builds for those and Linux, and we have the world's first SMX/SMP conversion for the Definitive Edition graphics, so with that we can extract, for example, the Asia graphics assets. Then we have nightly builds hosted for convenient download, and continuous integration is now based on AppVeyor. The documentation also looks way fancier now and is easier to browse and read. Maybe you remember from our previous talks that our configuration system is this domain-specific language, nyan. We now need to use it to create asset packs, and in order to create those we have to convert the original Genie Engine data to our engine. This is the structure of the converter: we convert the data to intermediate objects, which we then basically export to our format, and in parallel we can export the media files, the images and sounds.
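The converter structure just described (original data parsed into intermediate objects, which are then exported to the engine's own format) might be sketched roughly as follows. This is only an illustration of the two-stage idea; the class, field, and format names here are invented for the sketch and are not openage's actual API:

```python
from dataclasses import dataclass

@dataclass
class IntermediateUnit:
    """Engine-agnostic representation parsed from the original game data."""
    unit_id: int
    name: str
    hit_points: int

def parse_original(records):
    """Stage 1: read raw records from the original data into intermediate objects."""
    return [IntermediateUnit(r["id"], r["name"], r["hp"]) for r in records]

def export_engine_format(units):
    """Stage 2: serialize intermediate objects into the (hypothetical) engine format."""
    return [f"unit({u.unit_id}): name={u.name!r} hp={u.hit_points}" for u in units]

raw = [{"id": 4, "name": "Archer", "hp": 30}, {"id": 74, "name": "Militia", "hp": 40}]
print(export_engine_format(parse_original(raw)))
```

Media files (images, sounds) would go through a separate, parallel export path, since they don't need the intermediate object model.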
Our goal engine architecture looks roughly like this: we have a separation between the simulation of the game world and the presentation of that game world. The next steps to reach that are to actually implement the presenter and to extend the simulation engine with the actual gameplay features. Then we need scalable pathfinding, because we want to support many, many units, and we also have to think about how we do that over the network. For the scripting API we want to introduce Python stuff, so that artificial intelligence programming can also be done, with fancy machine learning or whatever you like. In the past years we had 140 contributors, so if you have some interest in joining, we'll always be happy about you, and you can crunch some issues. To reach us, you can visit us at our assembly, join one of our chat rooms, or write on whatever other platform we have. We also have a blog where we occasionally post status updates on whatever is going on in the project. In case you want to check us out, here are the GitHub links, and we'd be happy to see you contributing. Thanks. Thank you.
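The simulation/presentation split mentioned above can be illustrated with a toy sketch. This shows only the general pattern, not openage code: the simulation owns and deterministically advances the game state, while a presenter merely reads it, which is what makes headless or networked operation possible:

```python
class Simulation:
    """Owns the authoritative game state; knows nothing about rendering."""
    def __init__(self):
        self.tick = 0
        self.positions = {"villager": (0, 0)}

    def step(self):
        """Advance the world by one tick (here: move the unit one step right)."""
        self.tick += 1
        x, y = self.positions["villager"]
        self.positions["villager"] = (x + 1, y)

class Presenter:
    """Reads simulation state and produces output; never mutates the state."""
    def __init__(self, sim):
        self.sim = sim

    def render(self):
        return [f"tick {self.sim.tick}: {name} at {pos}"
                for name, pos in self.sim.positions.items()]

sim = Simulation()
sim.step()
print(Presenter(sim).render())
```

Because the presenter only observes, several presenters (a renderer, a network serializer, a replay recorder) could consume the same simulation without interfering with it.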
Next up is BSides Amsterdam. BSides is a community-led conference for people involved in security, and local organizers create their own instance in their city. Last year I discovered BSides, and there was an event that was going to be hosted in Amsterdam, they had been hosting it for the last two years, and I thought, that's where I want to go. But of course it didn't happen, so I decided to make sure that it is going to happen, and this year I'm actually going to be involved in organizing it. We have a call for papers this year; we're going to hold it around the end of April. So if you want to be involved, if you want to contribute, and if you want to get out there, whether you're in the Netherlands or in Germany or Belgium, then you're welcome to come and visit and see and talk. Send us your proposal before the end of February, and perhaps I can see your talk. Thank you. All right, thank you. Then next up is: let's invent futuristic sleep and dream technologies. Hello, I'm Chris Sofa, and welcome to my lightning talk, let's invent futuristic sleep and dream technologies. So, wouldn't it be cool to go to sleep and wake up with new knowledge or skills? For example, go to bed, and the next morning you suddenly speak Spanish, or a new programming language. Or to have quite high-quality, restful sleep no matter where you are. Or maybe to have some immune-system-boosting sleep: maybe you have a cold, like I have at the moment, and you just go to sleep, and instead of seven days it takes just one night until you are healthy again. Or maybe cure other diseases overnight. Or maybe even have interactive dreaming, so be able to talk from within your dreams to the waking world, or maybe to other dreamers. There are many other ideas that can be thought of about how the future of our sleep and dream experience might look. And actually, I'm looking for people who want to invent these kinds of futuristic sleep and dream technologies together with me. Not for the money, this is
strictly non-profit, but just to improve the sleep and dream experience of potentially billions of people, and of course also to have fun while developing these crazy things and making things possible that are seemingly impossible. It might take decades to invent this kind of stuff, so learning new knowledge, a new language for example, during sleep, but why not start today and move closer to this goal step by step? And actually I've started already with this project: I set up a small website and also connected with professional sleep and dream researchers. I conducted some first studies with them and started publishing the results in peer-reviewed journals. I also developed some stuff and recycled some old stuff from my PhD, which was actually about sleep research; I was a sleep laboratory manager at the university for some years, and now, after this time, I just do this as a hobby. So some things have been prepared, but the real fun is starting only now: actually really inventing these kinds of things, getting some ideas of what could be done, and then finding ways to make this possible somehow. So if you find this interesting and want to get involved, because maybe you have some ideas about what could be done, what should be done, and also what maybe should not be done about futuristic sleep and dream experiences, or if you want to develop some hardware, software, printed circuit boards, whatever, or if you have some other skills that might be useful, even if you don't really know yet what, when, and how to do stuff that could contribute to this, then please get in touch with me, either via email, I've created this email address, futuristic sleep dream text at protonmail.com, or talk to me here at the conference. I'm really happy to hear from as many of you as possible. And if you first want to read more about it, there's this website, sd20.org, which stands for sleep and dreaming in 20 years, so you will find some more
information on that website. And yeah, let's invent futuristic sleep and dream technologies. Thank you. Then next up is: investigating organized crime. Hi everybody, my name is Friedrich. I work with a group of investigative journalists that is spread throughout the world, but mainly based in Europe, and I want to talk about the technology aspects of what we can do to facilitate the work of investigative reporters. Most people in this room will probably first think about security aspects, cryptography, but what I'm most interested in is the data analysis aspect of doing this. For example, a little while ago, in early December, we published a series of stories about a company called Formations House, based in the UK, that set up all kinds of weird fraud schemes around the world: they issued fake gambling and banking licenses, or they tried to cut down forests in Cameroon and grow weed there, all of which was very shady and illegal. But how do we get to these stories, how do we get to this evidence? The data that we got was basically a dump of that company's server, coming from a group called Distributed Denial of Secrets, and it comes as zip files where you have millions of emails, you've got documents, you've even got a dump of a MySQL database that they used internally. And how do you go from there to actually having journalism? How do you tell stories based on just random zip files? Reporters need access to structured and unstructured data: sometimes data is in documents, sometimes data is in a structured database, and people need to be able to see that. Then you need to be able to find overlap between datasets: if you have an email in one dataset that has a phone number in its signature, and that phone number was used in another dataset to set up a company, that might already be the crucial connection that you need in order to link things and tell a good story. And then this stuff gets really complicated; we look
into a lot of offshore companies, so we also need to keep track of what we're finding. For a couple of years now we've been working on this thing called the Aleph project, and at this point it is a toolkit of different components. There's a search engine called Aleph, which is basically a knowledge graph explorer that can give people access both to structured and unstructured data, can do forensics on a large number of file types and show the connectivity between them, and it can also do cross-referencing between different datasets: if you have a list of all the politicians in a country and a list of all the offshore companies in the Panama Papers, you can just go and compare them. We've also been working on this thing called VIS, which is for making these little network diagrams where people can keep track of what their stories are. Obviously all of this is free software, and I'd really like to find people to work on it and help us, especially if you are a pen tester, or if you want to work on more data import mechanisms or better data visualizations. One thing that's particularly fun about this is that underlying all of it are just streams of JSON entities. You can think about structured and unstructured data both as line-based feeds of JSON entities; they're all formatted according to an ontology that we've developed, a knowledge graph model that we've called Follow the Money. So a lot of what we do is just convert stuff to this Follow the Money format, and then you can do whatever kind of operations you need to do with it. In this case, for example, you can just pip install a thing that downloads all the people from the 29 Leaks, the Formations House data that I was showing, and puts them in a JSON file. Then you can take that JSON file, with all the people that have been sending emails in that data dump that I was
showing at the beginning, and say: okay, there are those, and then there's the Panama Papers data published by ICIJ, and you can say, find me possible matches between them. Then you've got yourself a list of candidates, and that's something a journalist might already want to look at: okay, if someone is both involved in dealing with Formations House and also mentioned in the Panama Papers, that's someone I might want to spend my time on, to give my attention to. And yeah, if anyone is interested in hacking on tools for investigative reporters, go to alephdata.org. Thank you. Thank you. And next up is the very small one-times-one of passwords. One by one? One times one. One times one, yeah, I have looked it up. It's the very small one-times-one of passwords; it's actually a stripped-down version of a bigger talk. There are some amazing passwords out there, and for me the interesting questions are: is my password actually working, and how do I manage all these passwords? So let's start with how we manage all these passwords. I can strongly recommend using password managers at some point, because with them you can set a different password for each account, you don't lose track of which password you used for which account, and so on. There are a few password managers around: for example password-store, aka pass, there's KeePass, and you can also use a notebook, if you store it safely in some place. There are lots of possibilities. Unfortunately, you still have to remember passwords to unlock the password managers. So if we have a password prompt, there's the next thing, called brute-force protection, some sort of mechanism that prevents the password from being tried out. But brute-force protection is actually harder than you might think, and that leads to the right password length, because long is good but not always practical, and that means I have to determine somehow how long my password has to be. And
maybe we can calculate the needed length. Maybe you remember the xkcd comic with the diceware passphrase, where they just string together a certain number of words and say, oh, it's much easier to remember some random words from a word list. The system behind that is diceware: you basically roll a die five times and then pick a word from a list using the number you have diced. How does that work? Let's go back to the air shield example from the first slide: one, two, three, four, five is a number, obviously. So can I view all my passwords as numbers, if I have a certain set of characters? We have different number systems with different numbers of digits or characters in them, so yes, those are all numbers. And there is a formula for that: you take the number of digits to the power of the length of your password, and you get the number of combinations. For a 256-bit key in the binary system that's a huge amount; for a four-digit numerical PIN it's obviously a bit fewer combinations. The idea is that we take those amounts and make an equation out of it; then we transform this equation and get a nice formula for how many digits we actually need for a certain strength of password. Let's assume we use alphanumerical characters, so we have 56 digits; for a 256-bit-equivalent key we need 44 characters. You can try to remember the example at the bottom of the slide, but it's very hard to remember. Now back to diceware: we have 7776 words that we choose randomly, by either rolling dice or using some software. We put that into the formula and we get a length of 19 words that we need, which you can remember much more easily, but the password is a bit longer. That's it for the math. There are some two-factor systems around if you want to secure your accounts even more: two-factor with SMS TANs, one-time pads, or some unique password lists. And always remember: biometric attributes are passwords that cannot be changed
if they have been lost, but on the bright side, you can use them as a second factor. And finally there are application tokens, so a second password, which does not have to be authorized with the user password. Application tokens are not used frequently enough, but they're worth a try. You can find the full 45-minute talk in the archives of the Privacy Week 2019, which was held in Vienna, and I hope I will be there at the next Privacy Week. Thank you very much. Thank you. So this concludes today's lightning talk session. Thank you very much for being here. Please give a big round of applause for all of the speakers who were here on stage today, and again, for having to deal with so many speakers in such a short time, a big round of applause for the translation team.
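As an editorial footnote to the passwords talk: the formula sketched there, combinations = digits^length, solved for the length, gives length = ceil(target_bits / log2(alphabet_size)). A few lines of Python check this; the alphabet sizes below are my own assumptions, and depending on rounding and the exact character set used, the talk's figures (44 characters, 19 words) may differ from these by one:

```python
import math

def needed_length(alphabet_size, target_bits=256):
    """Smallest length such that alphabet_size ** length >= 2 ** target_bits."""
    return math.ceil(target_bits / math.log2(alphabet_size))

print(needed_length(2))      # binary digits for a 256-bit key
print(needed_length(10))     # decimal digits (PIN characters)
print(needed_length(62))     # upper case + lower case + digits
print(needed_length(7776))   # diceware words (five die rolls per word)
```

With a 62-character alphanumeric set this gives 43 characters, and with the 7776-word diceware list it gives 20 words, for a 256-bit-equivalent password.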