So, hello people at the Easterhegg in Hamburg at the Kampnagel, hello people on the internet. Welcome to this presentation called Beyond Bug Bounty, Part 2: an update from the Dialogue on Cybersecurity, a project we started with the Innovation Council Public Health rather recently. Well, let's start. This talk is meant in the spirit of Cunningham's Law: the best way to get the right answer on the internet is not to ask a question, it's to post the wrong answer. In this spirit, I would kindly ask you to get involved: scan the QR code and access the pad. If I did not mess this up, the link should also be in the pretalx entry and the program. Because we're trying to learn things and make people talk to each other who usually don't talk to each other. And yeah, let's see where this goes. So please bear with me and imagine you live in a perfect world, because you will need this in the upcoming minutes, with puppies and rainbows and a certain interpretation of unicorns. What does this perfect world include? It includes security researchers not being prosecuted when they're doing their work. Everybody already has an SBOM, and their SOCs are ready to go at a moment's notice. People are well rested, caught enough sleep, no vacation ahead, no holidays. Then we look at the life cycle of a vulnerability, one of those nasty things hidden somewhere in some kind of code. The first thing that can happen, which is the best version of the life cycle of a vulnerability: the software it lives in gets decommissioned and it never gets used or discovered. It's a sad, lonely life, but also rather uneventful, which we might kind of like. The other way this goes is that someone discovers the vulnerability, and this could be either a white-hat or a black-hat hacker. And then two things could happen. Either people are like: yeah, you found this vulnerability, so what? You can now change the color of the things I write on my personal blog where I describe raising rose bushes in my backyard in Aachen.
Or something like a race starts, because things might get ugly quickly. The question is what happens if this race starts. You have very smart people, like Desiree, who actually write about how you decide which vulnerabilities to chase. And if you have one of those vulnerabilities reported by a white-hat hacker, they can report it to an organization, or they can report it to the git repository where they found it. And then it can take up to six years for the information to get there, if you look at Log4Shell. The JNDI injection behind Log4Shell was presented at Black Hat in 2016; Log4Shell itself was found in 2022. Well, that's six years in between, so perhaps people should have talked to each other a bit more, and some of us could have had more of a Christmas holiday. Because everybody has an SBOM, this organization would have an SBOM to look up where this vulnerability belongs and could then report to the git repository. And because this repository also has an SBOM, it could report to the repository that actually has the underlying core issue. With the Innovation Council Public Health, for example, we found out that we have a core piece of open-source technology, which we call IRIS connect, and it actually shares gRPC/protobuf with Sigstore, which is not something you might expect at first glance looking at both software frameworks. And then there are other frameworks and dependency graphs, and you can look at all of those. And if you look at this vulnerability, there are two questions. One question is: how do people actually report this? The other question is: how do people tell each other if it's somewhere in their dependency graph? Those two questions are, as we found out within this project, questions that might need some thinking, and eventually people innovating things. So this is the Cunningham's Law part of the second half, because now comes the part where I try to think for myself.
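To make the dependency-graph question concrete, here is a minimal sketch of how, given a reverse dependency graph, you could compute every project that needs to be notified about a vulnerable package. All package and project names here are made up for illustration; this is not how any existing tool works, just the underlying graph traversal.

```python
from collections import deque

def affected_projects(reverse_deps, vulnerable_pkg):
    """Return every project that directly or transitively depends
    on vulnerable_pkg, via breadth-first search over the reverse
    dependency graph."""
    affected = set()
    queue = deque([vulnerable_pkg])
    while queue:
        pkg = queue.popleft()
        for dependant in reverse_deps.get(pkg, []):
            if dependant not in affected:
                affected.add(dependant)
                queue.append(dependant)
    return affected

# Hypothetical graph: two unrelated-looking projects share one core library.
reverse_deps = {
    "grpc-protobuf": ["iris-connect", "sigstore-client"],
    "iris-connect": ["health-portal"],
}
print(sorted(affected_projects(reverse_deps, "grpc-protobuf")))
# ['health-portal', 'iris-connect', 'sigstore-client']
```

The point of the sketch: once every project publishes an SBOM, building this reverse graph becomes mechanical, and "whom do we have to tell?" becomes a query instead of six years of silence.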
Someone still wants to whip out their smartphone? Be my guest. So what did we learn so far? If you compare those two questions, the security researchers reporting, the green question mark, and the git repositories talking to each other, the amber question mark: there are standards. There's RFC 9116, a file format to aid in security vulnerability disclosure; it's called security.txt. So for systems in production, we actually know how to put up a standard for how to get contacted by security researchers. And we have a standard for that, so people could enforce it, laws could be written about it, and you could say people were not up to standard if something goes wrong. But there's a standard, and those tend to be useful if done well. There is nothing we're aware of in the field of open-source projects. Somewhere, somehow, in some git repository you might find some information on whom to call, whom to contact, whom to write an email to, because maybe, just maybe, you don't want to put the vulnerability you just found into an issue in the repository for everyone to see. Then there are procedures. There are procedures for reporting vulnerabilities. The Dutch have one; it's kind of nice. If you report a vulnerability, you can publish, they will not punish you, and you get a T-shirt. That's kind of cool. Then there's CERT-Bund, which published a new guideline for coordinated vulnerability disclosure on the 1st of December. People took it for a spin, and it took months to actually close a leak of public health information caused by an appointment scheduling system that had an open API somewhere on the internet, so you could see who would go where and see which doctor. So maybe there's work to do on the procedures as well. There are also procedures in place to contact open-source frameworks, like EU-FOSSA.
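For reference, a security.txt as defined by RFC 9116 is just a small plain-text file served under `/.well-known/security.txt`; the values below are placeholders, not real contacts:

```text
Contact: mailto:security@example.org
Expires: 2024-12-31T23:00:00.000Z
Encryption: https://example.org/pgp-key.txt
Preferred-Languages: en, de
Policy: https://example.org/security-policy.html
```

`Contact` and `Expires` are the essential fields; the rest tell a researcher how to encrypt their report and what rules apply.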
EU-FOSSA is a rather successful European Union project that allowed for 309 reported bugs, 130 valid vulnerabilities, and a number of critical and high-severity bugs to be discovered. Who in this room has used PuTTY? Ever? Okay. Who of you didn't use PuTTY in the last 20 years? Okay. All the rest of you actually used a highly vulnerable program that allowed for remote code execution on the service you accessed. So sometimes it's the tiny pieces that make the problem, and it seems to be rather widespread. What do we have next? We have legal frameworks, and now it gets funny. When it comes to reporting vulnerabilities, there's the legal situation on the German federal level, the so-called Hackerparagraph. The CCC has a rather valid position on this: can we please put it away? But it's still there, discussions are to be had, and unfortunately the Constitutional Court didn't help this time. So we might have to get more creative. Then there is a discussion on the European level: last November there was talk about how it would be helpful if the European Union tried to basically drain the market for trading in security vulnerabilities, because trading in vulnerabilities can go very wrong; well, it's not Bitcoin and stuff, right? Then there's the OECD, which has a surprisingly progressive view on this. They take more of an economic view of the damages done, and they broadly encourage vulnerability treatment. Then there is the United Nations, as khaleesi mentioned yesterday, and was also so kind to point out last month at the Netzpolitischer Abend: there is the UN Cybercrime Treaty in the making, and it seems to be a Hackerparagraph on steroids. So if anyone feels like doing international politics, well, please help her.
And things might look a bit better because the US actually seems to have understood on some level that this might be a bad thing, and there is now an executive order prohibiting the use of commercial spyware by the United States government. Somehow, people might eventually, somewhere, learn something from Pegasus. But this is going to be an evolving space over the course of the year, and we're trying to maybe put a dent in the discussion. Then there are legal frameworks when it comes to making git repositories talk to each other. The European Union is working on something they call the EU Cyber Resilience Act, which is a rather interesting way to put things forward. They basically imply that if you use open-source software and you have a commercial use for your software, meaning either someone gives you money, or you harvest their data with your programs and then sell those data to someone else for money, then you're liable if there's something wrong with the open-source software. So this at least establishes some kind of liability between the people running production systems that use open-source software and the open-source software itself. And this is explicitly relevant: they don't hold the people writing the code liable, they hold the people liable who are using it. So there is a first step forward. Then there's the question: are there institutions that do preventative work to keep vulnerabilities from happening? There is the OpenSSF, funded by companies from the US with 150 million for two years, trying to run bug resilience programs and building on the whole discussion. And in Germany we have the Sovereign Tech Fund, which not only has 10 million to foster the health of open-source software, but also, to the best of my understanding, a 1.5-million-funded Bug Resilience Program in its starting phase.
So Europe might be a bit late to the party, but apparently there is public funding in it, which the people in the US don't have. Things are slightly moving forward. And if you look at this, the last question that remains is: do we actually have institutions that do reactive work? Those are few and far between. There's the OpenSSF, which discusses building a FOSS SOC, a security operations center for open-source software. The Apache and Linux Foundations actually have reporting processes for software stacks that they maintain, take care of, and deem relevant. Well, these got used in the Log4Shell incident, and I'd say the processes leave things to be desired. Yeah, so that's what we learned so far in the first half year of this one-year dialogue. We're trying to look at all this and not go, well, insane. And we thought we would propose solutions that allow people to talk to each other, and that current state of the discussion is what comes next. So if you look at how we can get this stuff done, starting with the security researcher: they would like to have a single reporting entity. As an example, if you look at the CVD process of the BSI, we're in a situation where the BSI does not talk to the data protection agencies, because the data protection agencies are actually capable of putting a fine on a company. To keep the CVD process open, and to have companies voluntarily discuss security vulnerabilities with the BSI, the BSI tends not to report the companies to other agencies which might fine them, in order to actually make them play along. That's a discussion to be had if we want to go this way. This reporting entity could also give out a cryptographic token to the people who made the submission. Also, people tend to discuss whether there should be some kind of custodian involved.
This brings up lawyers, who by law hold certain privileges when it comes to communication with their clients, and who could act as intermediaries. I'm interested to see where this goes. And then this reporting entity also has to put the report somewhere, right? For example with the BSI, which can then give it to the institution that's actually in charge of taking care of the issue. This is an overview of Germany's cybersecurity architecture, which is kept regularly updated by the Stiftung Neue Verantwortung. As you can see, this might get a bit confusing if you're a security researcher trying to figure out whom the fuck to call, actually. Yeah, and then there are the data protection officers, who are also people you might want to talk to, because if you, well, look at an API and it smiles at you funny, and you say, let's crawl this thing, and then you sit there with two million data sets in your hands with health data in them, you might actually have to write two reports at the moment, which is perhaps more hassle than needed. And then there's the big open question of who's actually going to call the open-source people, because at the moment that's a goodwill gesture: they're not companies and they're not bound by any legal framework. In Germany, there's not much of a structured process for this, also because of the other question: who's going to pick up the phone, right? So what's the idea to fix the other issue? The idea is to make git repositories talk to each other. Because, well, there's this question of whom do I call in Europe, and also whom do I call in open source. And there are processes with the Apache Foundation where you can make those calls. And now the idea, we call it "git security", hoping that someone comes along who's better at naming than we are, is a mashup of git and security.txt.
And we suggest a solution where speed beats perfection, which would help more, because at this point, 20% of the Log4j versions downloaded to this day still have the Log4Shell vulnerability, but apparently nothing's burning at the moment. So what is the idea of "git security"? You take the concept of a security.txt and you try to build a standard, which might only take six to seven years, because then perhaps people might see why this is relevant. You combine this with git repositories. You take the idea of known-vulnerability RSS feeds to actually automate the process and make them talk to each other. There is a lot of work being done on automation; robots.txt files already show that this can be done, with your websites talking to Google's crawlers. And then the question: what are the things we miss, the X factor, to get to a successful concept? Is it OSPOs and foundations adopting open-source frameworks? What do you think would help open-source projects to either talk to each other or become reachable for security researchers? Yeah, it would be great if you could talk to us. We're kind of going on tour: you can meet us at several events like Hacking in Parallel, the GPN, the FOSS Backstage stage, the Easterhegg. Next month Bianca Kastl and Sabine Griebsch are going to be at the BSI IT-Sicherheitskongress. People submitted papers to TROOPERS. There's going to be a Camp this August. And if you have suggestions where we should go, please let us know; we might try to put in a CfP. We also put up an online series, and I missed translating this slide. Every Tuesday in the month there is an online meeting. Last month we had the Log4j team and the OpenSSF, represented by Christian and Brian, there to talk about how they saw Log4Shell and what consequences they draw from it. On the 18th of April, we will have the BSI CERT, and now comes the two-day announcement, which caused me to be late.
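As a sketch of the automation idea, advisories published as a machine-readable feed and matched against your own SBOM component list, consider this minimal example. The feed itself, the second CVE entry, and the component names are invented for illustration; no such standardized repository feed exists yet, which is exactly the gap the talk describes.

```python
import xml.etree.ElementTree as ET

# A tiny, made-up advisory feed in RSS form. CVE-2021-44228 is the real
# Log4Shell identifier; the second item is fictional.
FEED = """<rss><channel>
  <item><title>CVE-2021-44228 in log4j-core</title></item>
  <item><title>CVE-2023-0001 in examplelib</title></item>
</channel></rss>"""

def advisories_matching(feed_xml, sbom_components):
    """Return feed items whose title mentions any component in our SBOM."""
    root = ET.fromstring(feed_xml)
    hits = []
    for item in root.iter("item"):
        title = item.findtext("title", "")
        if any(component in title for component in sbom_components):
            hits.append(title)
    return hits

print(advisories_matching(FEED, ["log4j-core", "grpc-protobuf"]))
# ['CVE-2021-44228 in log4j-core']
```

A real implementation would obviously need structured advisory identifiers rather than substring matching on titles, but the robots.txt comparison in the talk holds: the mechanics are simple, what's missing is the agreed-upon format.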
We will have the data protection officer of Berlin take part in this discussion and talk about how they handle their processes for reports on vulnerabilities or data breaches. In May, Katie Moussouris from Luta Security will hopefully explain to us what not to do when setting up bug bounties. Because, as the title of this project, Beyond Bug Bounty, suggests, bug bounties were our first glimpse at an intermediate solution: if I put out a bug bounty and tell you to hack me, then perhaps the German government might be less inclined to come after you with the Hackerparagraph. Then in June, there will be a talk about the general legal situation of security researchers in Germany. We hope to get Dr. Vogelgesang, or someone else from the research team which put out a remarkable paper on this. For July, we're looking for community recommendations: is there someone we should invite? Is there someone who might have a contribution to make? And then we're looking to have a panel discussion in August, at a place and with contributors still in the works. Yeah, and that's basically it. This might have been a lot; I hope I didn't steamroll you too much. It's been half a year of long discussions, compressed into, I don't know, a couple of minutes. So please either write in the pad, and if someone can actually look at the pad and tell me what's written in there, because I can't open it right here. Or ask questions, I'd be happy to hear them. And address points we missed. Okay, those folks, yeah, they tend to have a lot of fun with URLs and, as of late, cookies, if you read their publication from yesterday. They're a group of independent hackers from Berlin, and they basically look at the systems they discover in their everyday life, and they publish when they find vulnerabilities, and they report them.
They gave a rather amazing talk at the 2021, well, remote Congress, together with Linus, about security vulnerabilities, what these mean for them, and how they get treated. They regularly have encounters with companies telling them about this Hackerparagraph and threatening to sue them for hacking their systems, when all they were trying to do was tell the companies that their API is openly accessible. Did anyone see any typos? Oh, we get those a lot. Did I butcher any pronunciation? I honestly have no idea. So, what I feel is that the idea of abolishing the Hackerparagraph has been spoken for several times by several institutions. We also have the coalition contract of the Ampel coalition, which actually said that they will try to allow for, how to put this, a legally secure framework that lets you do security research without criminalizing yourself. To this point, there seem to be ongoing discussions between the Ministry of the Interior and the BSI on whether and how this could be implemented. But as khaleesi pointed out in her talk yesterday, this space seems to have the same problems as the discussions around chat control. I think they haven't yet gotten used to the fact that in democracies, elections might have consequences, and that the course in certain areas of politics might actually change after an election. So this might be a long discussion to be had. But eventually, there's a fix to it: the state could go ahead and say, here's a bug bounty, and we will pay it to you if you report vulnerabilities in certain critical systems. And we can actually go ahead and ask our suppliers to put out bug bounties on their software, or otherwise we're not going to buy it, and see if the buying power of the German state still means a bit. But I honestly have no idea. Sorry. Yep. I hope this answered your question. So the question was: when will the Hackerparagraph disappear?
Now we have two people over there. I'm not sure which of the two of you, okay, the person in the front, of course. So that would be you. Pardon me, could you try to get a microphone or speak up a bit? I have slight trouble understanding you. You mentioned the Netherlands; are there any other good-practice countries you could mention? There are quite a few. We're trying to work up a comparative review of the implementation of, I think it's EU regulation 2013 something. I can put it in the slides when I upload the presentation, the one I showed for good practices. And I think there is a huge discussion to be had about learning from each other. What I find remarkable is that the Netherlands actually also have a hacker camp like us, with SHA. And there the government actually reaches out to the community and gives out prizes for the best reports being made. So I feel there's a lot to learn. Yeah, I'm not entirely sure I understood your question correctly. You're asking if I'm aware that the German government, yes, that's true. I completely agree, a worldwide solution might be nice. Getting a git security.txt as a standard, hopefully as an IETF RFC, might be a step towards that. I kind of have a similar stance on this as on climate change, or the climate catastrophe: I would kind of like to start at home. And we're in a situation where the US does have reporting processes which, to my personal understanding, do not meet the requirements I would like to see, because US intelligence agencies actually have the right to use vulnerabilities that are disclosed to them for their own purposes. I feel it might be a good idea, and I talked about this at Hacking in Parallel at greater length, to have an anonymous or pseudonymous, low-threshold, not-making-you-a-criminal, ethical way to report vulnerabilities.
And eventually Germany could step in there and say: if you report your vulnerabilities to us, we will set up a process that ensures those vulnerabilities are not used by intelligence agencies. And if that's done correctly, it could actually be used internationally. Pardon me, I seem to have something with my ears today, could you repeat that? So the comment was that I will not be able to take intelligence agencies out of the picture, which I completely agree with. I'm quite certain that they're closer to me at this moment than my laptop. Which is a lie, because my phone is over there, I'm sorry. They're as close to me as my laptop. Sorry for that. But I feel that if we make up a process, it might include some kind of custodian function that would also know about the reported vulnerabilities. Like, if I give this to my lawyer and then my lawyer gives it to people, then my lawyer can publish that I reported something. And then at least we can have the transparency that might be necessary in a democracy to actually talk about things. Because at this moment, with this Hackerparagraph out there, no one has ever gone to jail for it, but tons of people have been threatened with it. This includes high-profile security researchers. Well, Lilith talks about being threatened with the Hackerparagraph, which is like the wrong person to do that to. But what else can we do? Does that make sense? Okay. Are there more comments and points? Does anyone else feel inclined to talk about intelligence agencies and my way of butchering the English language? Does anyone have an idea how to do this better, other than writing code that doesn't have vulnerabilities in it? Now, this is a serious question that we have to ask ourselves.
How is it possible that for six years, the team behind one of the most widely used logging frameworks out there, Log4j, doesn't get to know about a vulnerability that's known in the security community? So the intersection between the people actually building stuff and those playing around with it and breaking it doesn't seem to be as good as it could be. And that's where we, I think, might start to talk to each other a bit more about how this can actually be fixed. Because of course we can ask the BSI to put in a state-run process to do all those things, but that might not solve problems that could be solved by a single email. The Log4j team could have gotten an email in 2016: hey, I found this vulnerability and I think it applies to you. They could just have fixed it. Then there are two questions. I think she, that person, asked before you. And you have the mic, so I would suggest you ask your question with the mic. And when you've asked your question, and this time I'll think about repeating it and answering it correctly, you take the mic with you and bring it to the people in the audience. Does that work? Great. So please let me hear your question. So what I want to know is: I think you mentioned something at the beginning about a bill of materials for software, so you know what software components are included. For how many open-source projects do we have something like that? Well, yeah. Not as many as you would like. Not as many as I would like. Unfortunately, also not as many as would let me sleep well at night. That's why I put this as an assumption up front, that everybody has an SBOM. Because basically no one has those, and we actually need them. And people are talking about this, and it's part of the Cyber Resilience Act of the European Commission that if you build software, you will have to include SBOMs, at least on the first level: you will have to list all the packages you include.
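To illustrate what "listing all the packages you include" looks like in practice, here is a minimal SBOM in the CycloneDX JSON format, one of the established SBOM formats. The single component is chosen as an example of a Log4Shell-affected library version:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "log4j-core",
      "version": "2.14.1",
      "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"
    }
  ]
}
```

The `purl` (package URL) is what makes such an inventory machine-matchable against vulnerability advisories: anyone holding this file could tell at a glance that this software ships a vulnerable Log4j.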
But setting up SBOMs is still an open question. I basically skipped that part and put it in as an assumption to point out that even if we have SBOMs, there's still work to do, and it doesn't fix all our problems. Okay, yeah. Does that kind of answer your question? Yes, that answered my question, thank you. I'm not sure if it's good to say it now, but I once read a paper about the processes of the BSI and the BKA and other intelligence institutions, and that they have a process for security and all that stuff. And that they have a process to decide which vulnerabilities the BSI can report and which ones the intelligence services and the BKA can keep for themselves. Isn't that right? Well, so this has been said. At DefensiveCon 2020, someone actually asked a BSI representative directly, on stage, with a running mic, and they said that they don't have such a process. Okay. But did you read that paper? I haven't read that paper. I would be happy to read it if you could send it to me. Yes. The point here is, as was pointed out earlier, intelligence agencies are involved, and we're in a situation where, at least to the understanding of some people who actually studied law, and something useless like mathematics, like I did, they wouldn't even be allowed to tell you if they had such a process. Yes. As soon as it involves intelligence agencies, the participation of the agency itself is privileged information to be kept secret. Thus people would, even if asked about it directly, have to lie to you. That being said, I know some people who work at the BSI who told me that if they became aware of such a process, it would lead to their resignation, and that seems believable. Yes. But, what is this institution called that made this graphic about these cyber institutions? The reporting entity? No. The BFDI? No. On the right, my English is really bad, I'm sorry. Don't be. That one. Yes. It was maybe from this bubble, this paper.
But I'm really sure I read it, though I'm not sure I understood it, okay? Maybe there is a slight confusion there. From what I have understood from talking to the people from the Stiftung Neue Verantwortung, we're in a situation where there are processes for how they would react if vulnerabilities were reported, like how the BSI would actually talk to the BKA, basically the FBI equivalent of Germany, to go after the people using the vulnerability. I think there are two questions to be asked. One question is: we have a cybersecurity incident, someone gets hacked, someone gets ransomware. Who will look at the vulnerability? Who will help the people? Who will help get the computer systems up and running again? And who will be the other people who go out there, kick down doors, and arrest people for doing that? As far as I know, there are processes in place to take care of this, where basically the BSI takes care of the cyber part and the BKA takes care of the kicking-in-doors part. I'm hoping I'm getting this right. If someone in the audience knows more about this than I do, and I'm quite certain there are some and that I messed this up, please let us know. Looking at you. Second row. Okay, concentrated on something else. Does that help? I have a question. Sure. Yeah. Okay. Hello. The BSI also does other things, for example publishing the so-called IT-Grundschutz-Kompendium. Yeah. And in 2023 there's a section which requires those who need to be compliant, which are mostly public institutions, to have a vulnerability scanner. So this at least partly sparks the hope that people will get vulnerability reports and some kind of idea of what's running on their premises. So it's a little bit like an SBOM, but not really. Well, it's basically looking at it from the outside, right? Say it again, please. It's like looking at the running program from the outside. No, with a vulnerability scanner you can basically do authenticated scans. Okay.
And this is basically supposed to do a software inventory. Great. So that is basically an SBOM through the back door, but not really. For everybody: I don't really know how good the project is, but there's an OWASP project called CycloneDX, which is supposed to provide SBOM information. Okay. It's open source. Thanks. So if anyone wants to use CycloneDX to scan for vulnerabilities, please do it on your own hardware. And thanks for the contribution. Good. Does anyone have a watch to look at and can tell me how we're doing on time? Still time left, or are we more or less done? Does anyone want to discuss things? Because there will be a workshop afterwards where we can perhaps discuss this more productively, without a camera and people from the internet watching us, and me standing in front of you, not seeing half of you because of the lights in my eyes. And... Okay. We have 15 minutes, so for another five minutes you will have me standing here begging you to ask questions so I don't feel embarrassed. Or not. Well then... Oh, there's... Thank you for... I'm quite certain they're doing that. There are laws on the books that enforce that, too. And at least the head of the, I think it's "division" in English, that is in charge of cybersecurity at the Ministry of the Interior is also under the impression that he can buy zero-day exploits on the open market, if the price is right, with an exclusive right to use them. So I'm quite certain there are not only those who are required by law, but also those who just do regular business with federal governments at this point, because in Germany we do have... where federal agencies are empowered to hack back and spy on people. So I'm quite certain that's the case. Am I correct in not seeing any hands? If so, please wave more vigorously so I can see the movement. Okay, if that's not the case, thanks for all your time.