And now it is my pleasure to present the speaker for this next session. He works in organized incident response, with all the IT security problems you might have heard of or might have faced. So, a warm applause for Gell. Thank you. "It was great working with you, but I hope never, ever to see you again." I've heard that so many times; it's one of the downsides of working in incident response. I've done it for over 10 years, investigated some of the largest breaches, but you always get that kind of response at the end: it was great working with you, I hope never to see you again. And it was with that ongoing struggle, I guess, that I became a little bit frustrated with our lack of success in our discipline, in security. We keep doing the same activities, taking the same general approach. I also became a little unhappy with the general tone of voice within our community. When this all started, a couple of years ago, this was still a niche topic. Nobody really knew what security was, what data breaches were, what security incidents were. That has completely changed. Right now, this is a mainstream topic. Everybody understands security, or at least understands that it's a big problem. As soon as you say that you work in this field, they mention two or three newspaper headlines they've read. So we have their ear now. People are listening, and they realize that this topic is important, that it's a serious problem. Nobody wants to have their money stolen, or their election rigged, or whatever other problem. So this is a good time, a good opportunity, now that everybody's listening. But my frustration lies with what we can offer them right now.
Because I think a lot of what we do right now is still not as effective as it could be, and at best it's a sort of compliance-driven paper tiger in many organizations, where you literally have teams of people filling out spreadsheets and sending them off to other departments. It's not reflecting the actual situation, but it keeps a lot of people awfully busy without really contributing to security. So I'm gonna suggest a couple of focus areas, or approaches, that I think will be beneficial in slightly changing our perspectives. And of course, none of this applies to you personally. But maybe there are a couple of things you recognize, and you can help a colleague or a client or somebody you know with them. And maybe, if you open your mind a little bit, there might be some bits and pieces that apply to some of you personally as well. So let's start. My first concern: people. We don't work with people very well. I mean, we all have these sayings, right? Like, people are the weakest link, or there's no patch for human stupidity. The problem sits between the chair and the keyboard. It's layer eight security. We've all said it, right? But it's incredibly pedantic. And it's incredibly condescending towards the people who are our customers, who are our colleagues, who are essentially paying our salaries, to suggest that they just go out there and make a mess of everything. I mean, nobody comes to work with the intention of, hey, this is a great day to get hacked, or this is a great day to do something stupid and get my company on the front page. That's just not how people work or what their intentions are, even though it might look like that in hindsight. But we don't acknowledge this. What we keep doing is put up more signs, more instructions, more awareness training, trying to steer people towards this imaginary world, this imaginary reality of how they should do things.
But what we fail to understand, what we don't acknowledge, is how real work works. How does actual work get done? How do real people do their work on a daily basis? And rather than trying to shift them away from that, can we build security around it, so that they don't have to adapt their behavior but can actually do it in a secure way? I always like the analogy of the desire path that you see on the screen, because I think it perfectly depicts what we fail to acknowledge within security. You see the bus stop in the middle at the end. And of course the council meant for you to go all the way to the left here, take a turn, and use the sidewalk. But people being people, smart, maybe a little bit lazy from time to time, of course they take the straight route across the grass. One person starts, a couple of others follow, and before you know it, you see the little trail form itself. But we shouldn't see this as something bad. This is a message. If you see something like this in your organization, people deviating from the intended route, that's the organization telling you something: the way this was designed is not the best way to use it. There's actually a more effective, a faster, another way of doing things. And we should really keep our eyes and ears open for these kinds of messages, instead of making fun of them. Because stuff like this shows up all the time. This is from LinkedIn last week. And we're like, oh my God, what's this guy doing? He's plugging his phone into the USB charger on the plane. What an idiot! Real people charge their phones when there's a charger available, and that's not a silly thing to do. That's very beneficial, and it's actually very useful that they have these chargers there now.
For over 99% of people, it's probably not in their threat model that somebody has tampered with the in-flight media system to do something bad with their phone. This is how real people use technology. What doesn't help is that we also keep being stuck in our old beliefs. We have lots of things that were maybe good practice 10 or 15 years ago, and we're not very effective at challenging those beliefs or at stopping doing them. One example is how we think about passwords and complexity. A number of organizations, the NCSC in the UK, NIST in the US, have put out good research on how we should change our thinking about passwords, complexity, and the frequency of changing them. But there are not a lot of organizations who are really opening up to this and changing their approach. We're in general not making a lot of progress either when it comes to the usability and user-friendliness of what we offer the people we work with. There are papers from 1999 about the not-so-great usability of PGP, and I don't think a lot has changed since then. People still accidentally send their private keys to the handful of people that are using PGP; our progress since then has been terrible. This is 2017, and if you wanna print an attachment right now, this is how Microsoft Office is trying to protect me. I get an attachment, a Word file or Excel file, I hit Ctrl+P, and I get this: I'm in protected view, and to leave protected view and enable printing, I have to click the well-named Enable Printing. But this is literally what the screen looks like. There's no explanation, no alternatives, no explanation of what the risk might be. So what do you think I'm gonna do? Of course I'm gonna click Enable Printing. Have we made my laptop more secure, or did we make printing one click harder? It's the latter, if you ask me. And there are a lot of these kinds of examples when it comes to how we deal with people.
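The newer password guidance mentioned above boils down to something quite simple: check length and a blocklist of known-breached passwords, and drop composition rules and forced rotation. A minimal sketch, assuming a tiny illustrative blocklist (a real deployment would check against a large breached-password corpus):

```python
# Sketch of a password check along the lines of NIST SP 800-63B:
# minimum length plus a breached-password blocklist, with no
# uppercase/digit/symbol composition requirements and no expiry.
# The blocklist below is a tiny illustrative stand-in.

BREACHED = {"password", "123456", "qwerty", "letmein"}

def password_acceptable(password: str) -> bool:
    if len(password) < 8:                 # length is the main knob
        return False
    if password.lower() in BREACHED:      # reject known-breached choices
        return False
    return True

print(password_acceptable("correct horse battery staple"))  # True: long passphrase
print(password_acceptable("123456"))                         # False: short and breached
```

The point is not the code but the shift in emphasis: a long memorable passphrase passes, while the rules people actually hate (mandatory symbols, 90-day resets) disappear.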
It's the same with phishing. We spent 20 years teaching people how to click on links. And I think by now most people get it. I mean, my dad still double-clicks on them, but most people by now get it. I actually had to apologize for making this joke earlier; sorry, dad. Most people now realize: if it's a different color, if it's underlined, you click and something useful, something funny, something interesting happens. And now all of a sudden we're trying to undo that, and we're like, yeah, well, the link thingy we told you about, it's actually a little bit more complicated, and you have to do a little thing with the mouse and look at the link and do all sorts of stuff that's actually really hard to do on your mobile phone, but it's important anyway. Think about giving this advice to somebody who works in recruitment or HR, whose job it is to receive emails from people they don't know yet, because they're not working for your organization yet, who send attachments, their CVs. And you're there like, oh yeah, don't open emails from people you don't know, and especially don't open their attachments. We don't know how actual work works, and we fail to acknowledge that in our advice. To make matters worse, a lot of organizations, especially large ones like the one I'm working for, send out tons of legitimate emails that look exactly like phishing: employee awareness surveys where you have to go to an external website and fill out all sorts of confidential or sensitive stuff about whether or not you're happy with your manager, sent from an external email address. Can we blame people for not understanding our instructions anymore if we keep mixing up the message like this? I mean, emails are meant to be read, links are meant to be clicked, and attachments are meant to be opened, and there's only so much awareness training we can throw at that to change that picture and that understanding.
And we have to think about language as well when we're talking about people. A lot of our language, the wording we use, is derived from military lingo. We have the cyber kill chain, we have threats, we go threat hunting, and it sounds like we're trying to play James Bond, and yet we complain that the business doesn't take us seriously. It's not simple, it's not understandable, and for some people it also doesn't really sound like a group you wanna be part of, and we need a lot of people in our discipline. So this might put off people whose help we could actually really use. One very simple example: my dad again, sorry, who at some point showed me the screenshot you see on the screen. His iPhone was connecting to the home Wi-Fi, which has a very old router, and it was showing this message. He was like, the only word I understand is "network", and I understand I have to do something, but I have no clue. There's WPA, there's AES, there's WPA2; he didn't have a clue. And I think we often fail to really take a step back and say, hey, we thought we did a good job here. Apple probably had good intentions putting this message here, and it's a good start, but it's still not understandable for the majority of normal people out there. What I think would help in general is if we broaden our perspective and look at other disciplines a lot more, and one example I always really like is to look at how online marketing tests their prototypes, because that's not something we do often.
We have our little drawer of good practices, and we just apply them without adapting, without seeing if they're relevant for our situation or our problem. We just take out the good practice without measuring if it works. But what people in online marketing or in app development do is something called A/B testing, where you put out one version of, let's say, a website or an application to one group of people, and another slightly different version to another group, and you test which one gives the best effect, whether it's sales conversion or less churn of people abandoning stuff in their web carts, whatever it is you're trying to achieve. The example you see on the screen is two icons that Facebook tested when they were developing a new iOS app. These were the icons shown whenever the app was busy waiting for something. I think most of you will recognize the icon on the right: that's the default iOS spinner. The one on the left, with the rectangles, was one that Facebook developed as a new icon for the app. So they put these out to two different groups and asked for feedback, and the interesting feedback they got was that the people with the new icon on the left said: well, yeah, great new app, but from time to time Facebook was a little bit slow. The people with the one on the right were like: yeah, great new app, but from time to time my phone was a little bit slow. Facebook slow, my phone slow, because they associated the iOS icon with their phone. This is of course a very simple, very silly change, but a great insight into how people perceive your application. And of course you can apply this to something as literal as an interface like we see on the screen, but we can also apply it to things like the wording of policies, the wording of instructions. Again, don't just take out the same good practice; why not test a couple of different things and ask for feedback, or even better, measure if it actually works or not?
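"Measure if it actually works" can be made concrete with very little machinery. A minimal sketch of evaluating an A/B test with a two-proportion z-test, using made-up numbers for two hypothetical versions of a security-awareness message:

```python
import math

def ab_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test: is variant B's rate significantly
    different from variant A's? Returns the rates and the z-score."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_a, p_b, z

# Hypothetical experiment: 200/5000 vs 275/5000 people completed
# the desired action on version A vs version B of a message.
p_a, p_b, z = ab_test(conv_a=200, n_a=5000, conv_b=275, n_b=5000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}")
# |z| > 1.96 means significant at the usual 5% level
```

The same harness works for wording of policies or phishing-report rates: split the audience, count outcomes, and let the numbers decide instead of the drawer of good practices.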
So I think with these kinds of examples we should, I guess, sort of subtly light the path that we want users, people, to take, instead of trying to steer them away with signs, with instructions, with awareness training: trying to understand how work works, and seeing if we can slowly and subtly guide them in the right direction. And with that, I think we should really be humble. Really focus on how people work, what their problems are, and how we can incorporate that into what we do. I heard a great quote last week from Mariss Jansons, one of the conductors of the Royal Concertgebouw Orchestra, who described his role as: don't disturb the orchestra. And that should be our role as well in our organizations. Don't disturb the orchestra, don't disturb how people are doing their work. That's the real way to scale up and work on things that really matter. This brings me to my second concern: are we actually working on what matters? Are we really effective in what we're trying to do? If you ask the Dutch banks, they say: yeah, we're doing really well. This is a graph from the Betaalvereniging, the organization in which all the banks are united. The blue line shows overall fraud; the light blue, which is a little hard to see on the screen, is losses due to internet banking fraud. As I think you can see, this looks great, right? From 2012, internet banking fraud went from millions and millions a year down to less than a million in 2016, so maybe we're actually not doing too bad a job here. But I don't think they can take all the credit, because I think this is a prime example of the waterbed effect, whack-a-mole, whatever you wanna call it. Yes, fraud is declining in internet banking, but that's mostly because criminals are changing their tactics. What we fail to acknowledge from time to time is that these people on the other side, let's call it that, really are running their operations as a business.
I mean, they've perfected outsourcing, they really look at which customers to focus on, which way to deliver their services. And what they realized in this example is that internet banking fraud doesn't scale very well, because you need a manual step at some point. You can steal bank account details, you can steal login details, you can put malware on machines, that's all more or less automatable, but at some point you need a bank account, and that part is hard, because you need a money mule: somebody who's willing to let you borrow or use his or her bank account for a little while. And that's an expensive part. You have to recruit them, and after you've used their account a couple of times, the bank probably blocks it, so you have to recruit new ones. If you look at it from a supply chain perspective, like these people do, then at some point you're like: wait a second, how can I optimize my cost structure? How can I deliver my services in a more effective and more scalable way to make a decent living? And that's why, yes, the banks can take a little bit of credit, because of course they've done smart and good stuff since 2012, but it's also because a lot of the attackers have now shifted to unsolicited backup awareness training, as in deploying ransomware. Because think of it: that really takes out the manual step. You can infect people automatically, the payment is done directly, and you really take out that expensive step of recruiting actual humans. And I always admire ransomware as well, because I think it's a great example that these are really smart business people working on this, because of the clever pricing. Yes, you could ask 5,000 euros to unlock a laptop, but who on earth can pay that? So they choose a great price point, and they look at different price points for different countries, even incorporating something like the Big Mac index to see what's the right price point for a given country, like real businesses do.
And they deliver great customer service, as ironic as it may sound, because their interest lies in keeping the business model afloat. If you pay to get your machine decrypted and you don't get it decrypted, you will start spreading that word at birthday parties and so on, and people will realize: hey, you actually cannot trust a criminal these days, you should not pay. So they have an incentive to make it easy to pay, and to deliver good customer service as well if you don't understand how this Bitcoin stuff actually works. So these are the people we are up against. They are fast, they are adaptive, they are really adapting to changing circumstances and optimizing their business model. And what do we put up against that? I think just a lot of adding and adding and adding stuff to what we've already been doing. We had our good practices from 10, 15 years ago, and we hardly ever stop doing anything. We just add more complexity, and more complexity, and more complexity. It's well intended, don't get me wrong, but with all these intended layers of defense, we only build layers of excuse. We build this attitude of: I don't have to check this as thoroughly as I probably should, because there's actually still another team looking into this, and if they don't catch it, we still have this tool, and if that doesn't catch it, it's still somebody else's responsibility. So really, with this added complexity, we get slower and less adaptive to changing circumstances, contrary to the people we are up against. What doesn't help is that we let our agenda be dictated by vendors. Many of us mock them, and I hear even a bit of snickering from the audience already, and yet we buy their products, and we often fail to put an alternative narrative out there to counter their stories and to present better alternatives.
And what I think also doesn't help is this massive focus on breaking instead of building. And of course, don't get me wrong, I love tearing stuff apart as well, but we keep proving other people's mistakes, sometimes even making fun of them, spending all our money on pentesting, which in a lot of organizations has become a compliance tick box where we already know most of the findings in advance anyway. We could have saved a lot of time and money if we had started helping with building from the beginning, instead of just proving the obvious and leaving it there. Every pentest report that's not acted upon is a waste of money, a missed opportunity, and a sign that we're nowhere near as effective as the people we are working against. And I think we also fail to incorporate impact. I mean, we do it all the time when we're reporting findings: we have likelihood, we have impact, we're great at that. But we don't do a great job at prioritizing the solutions. We often just blindly recommend the technically perfect solution, the 100% perfect solution, that maybe doesn't scale very well, that maybe is very hard to use. Why keep aiming for the 100% perfect solution that's only used by a small minority, when there might be an 80% perfect solution out there that gets far more widespread adoption? And with the breaking, we often focus on the exciting, newspaper-headline kind of stuff. Of course, I understand: the trivial fix doesn't land you a speaking slot at DEF CON or Black Hat. So it's James Bond-style hacks and zero-days, the newspaper-headline stuff. But the world is listening right now. We have their ears. They understand it's a serious problem. We have to give them more than that. We are like a bunch of doctors who only choose to investigate the rarest forms of disease, the rarest form of cancer out there. And while we're investigating, we're not even looking at how to fix it.
We're only looking at how to describe it, instead of fixing it and healing the patient. And this is not how real people experience harm. These are not the diseases that real people suffer from. So what can we do? Focus more on what's really happening out there. What really drives incidents? How are real organizations being hacked? Not by zero-days, not by the exciting stuff, as cool as it may be to watch. It's, of course, by the easiest path available. It's been reported time and time and time again. It's simple stuff like password reuse, it's leaked credentials, you name it. It's all the stuff that we find incredibly boring, but that's how real people and real organizations get harmed. So we should really learn from that and use good reports, in my biased mind, like the DBIR, Verizon's Data Breach Investigations Report, to look at the real underlying causes of incidents. Or we should create our own incidents, like Netflix does with the Chaos Monkey. Use software to break stuff. For those of you that don't know the Chaos Monkey: that's a piece of software that Netflix runs across their infrastructure to automatically and randomly disable servers, network segments, whole data centers from time to time, with the purpose of forcing their engineers to keep failure in the back of their minds when they develop stuff, to develop a resilient system. And this Chaos Monkey often gets credited for Netflix being incredibly resilient: whenever, for example, Amazon has one of its outages again, Netflix is often one of those companies that's hardly affected. And it's because they apply these kinds of practices that force them to build resilience into their systems from the ground up, instead of breaking them and looking for known vulnerabilities after the fact. And we should also look at incidents with a slightly different perspective.
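The Chaos Monkey idea described above fits in a few lines. A toy sketch, not Netflix's actual tool; the fleet, instance names, and kill probability are all hypothetical:

```python
import random

# Toy sketch of the Chaos Monkey pattern: on a schedule, randomly pick
# an instance per group and terminate it, so engineers are forced to
# design for failure. All instance names here are made up.

FLEET = {
    "api":     ["api-1", "api-2", "api-3"],
    "billing": ["billing-1", "billing-2"],
}

def terminate(instance: str) -> None:
    # Real chaos tooling would call the cloud provider's API here.
    print(f"terminating {instance}")

def chaos_monkey(fleet: dict, probability: float = 0.5, rng=random) -> list:
    killed = []
    for group, instances in fleet.items():
        if rng.random() < probability:      # only some groups, some days
            victim = rng.choice(instances)
            terminate(victim)
            killed.append(victim)
    return killed

chaos_monkey(FLEET, probability=1.0)  # force a termination in every group
```

The interesting part is not the randomness but the contract it creates: every service must survive losing any single instance at any time, which is exactly the resilience-by-default the talk is pointing at.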
A little bit of audience participation required here. No harm will be done. A little graph here, with the temperature on the horizontal axis and the number of incidents, the number of failures, on the vertical. So the red dots are the number of failures we had: let's say we had two failures at 75 degrees Fahrenheit, and so on. Now let's say the actual temperature is quite low, far below and to the left of the graph. What should we do? Should we go ahead with our experiment, yes or no? Can we expect a failure, yes or no? Can I see your hands if you wanna go ahead with the experiment? A couple of hands. I really have to disappoint you, because you actually just blew up the Space Shuttle Challenger, and seven people died. What we didn't realize is that we were looking at flawed, incomplete data. We only looked at the incidents. What we didn't plot, and it's a lot clearer once you see them here, are the instances where we had zero failures. We were only looking at the bad outcomes instead of the good outcomes. And I think that's another good lesson to keep in the back of our minds. Yes, the Data Breach Investigations Report, looking at badness, is a good approach, but we shouldn't fail to take into account as well what drives good performance. There's of course a lot of the time when stuff just goes right. We should also investigate what that looks like and what's driving that kind of performance. And I already mentioned aiming for perfect solutions: really, don't. Done is better than perfect. For example, PGP, I already mentioned it from a usability perspective. Of course it's a great tool, but not a lot of people use it, and the people that do use it often find it very, very hard to use. On the other end of the spectrum, you have WhatsApp, which rolled out end-to-end encryption pretty much overnight to over a billion people.
And we security people were on the sidelines, putting on our most pedantic hats: yeah, well, wait a second, encryption, how do you do the key exchange, and it's Facebook, do you actually trust them? We see tons and tons of objections. And I think in these kinds of examples we should really take a step back and say: yeah, of course, PGP, great. However, end-to-end encryption in the hands of over a billion people, that is actually something far more effective. And I think this is one of those examples where I would really prefer the maybe 80 or 70 percent, or whatever percentage you wanna put on it, perfect solution that scales massively, instead of the technically better one that doesn't scale that well. Another prime example, which Alex Stamos of Facebook actually pointed out, is our stance against using SMS as a second factor for two-factor authentication. Yes, of course, you have to trust the telcos. Yes, of course, there are ways in which you can intercept some of this, and there are indeed technical reasons why SMS might not be the perfect solution out there. However, for a lot of organizations and a lot of sites and tools, that's probably not in the threat model. If we had just implemented SMS two-factor in more and more of these places, we would have given the bad guys a lot more work to do to make a decent living. So really, we should keep looking for these maybe slightly less perfect solutions that do offer the scalability we often lack right now. And we should from time to time also scale down and look for elegant ways of doing that. One approach I really like is what Slack, for example, has been doing to monitor what their admins are doing whenever they run a sensitive command.
They were like, okay, we really should send this to our SOC guys, because they should keep an eye on what's going on, and we wanna know if it's actually, indeed, Ryan the DBA running this command, or, to take a hypothetical example, Gany from Russia running the same command. You can of course send all those commands to the SOC, but like every SOC, they'd get flooded with false positives, and that doesn't really make them more effective. So what they're doing now, whenever somebody runs one of these sensitive commands, is send them an acknowledgement message: hey, I see you're running this command, can you please acknowledge? So Ryan, in this case, types in: yeah, that's indeed me, all good, nothing to worry about. And of course, because you're like, yeah, but what if Gany took over his account? Of course, they ask people to acknowledge via a second factor as well. A great, and in my view very elegant, way of filtering down all the noise and really focusing on those suspicious cases where a sensitive command is being run and you don't get the confirmation. [Announcement from the venue:] Sorry for this short interruption. There is a car, FBI 5363, a German car with that number plate. It's in a dangerous position, blocking fire brigade access. If it's the car of anyone here, please remove it. It is FBI 5363. Is it really FBI? That's hilarious. That's what I was told. You don't make this stuff up. So really, we should keep our eyes open, trying to understand what's driving good and bad performance, and focus more on building relevant, actually working stuff. So why not indeed shift some of our budget, some of our capacity, away from the breaking, the required testing, the proving-the-obvious stuff? Why not be proactive and shift some of that budget to fixing and to defending?
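The acknowledgement pattern described above can be sketched in a few lines. This is my own illustrative reconstruction, not Slack's actual implementation; the command names, users, and in-memory alert list are all hypothetical:

```python
# Sketch of the acknowledgement pattern: instead of forwarding every
# sensitive command to the SOC, ask the admin to confirm it (ideally
# via a second factor) and escalate only on silence or denial.

SENSITIVE = {"drop_table", "dump_users", "disable_audit"}

alerts = []  # stand-in for a real SOC alerting pipeline

def soc_alert(user: str, command: str, reason: str) -> None:
    alerts.append((user, command, reason))

def handle_command(user: str, command: str,
                   acknowledged: bool, second_factor_ok: bool) -> str:
    if command not in SENSITIVE:
        return "allowed"            # routine commands generate no noise
    if acknowledged and second_factor_ok:
        return "confirmed"          # the admin vouched for their own action
    soc_alert(user, command, "unconfirmed sensitive command")
    return "escalated"              # only this reaches the SOC

print(handle_command("ryan", "drop_table", acknowledged=True,  second_factor_ok=True))   # confirmed
print(handle_command("gany", "drop_table", acknowledged=False, second_factor_ok=False))  # escalated
```

The design choice is the point: the expensive human attention of the SOC is spent only on the residue that the cheap self-confirmation step could not explain away.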
And this brings me to my third concern. The world is listening. They do understand that this is a serious topic, a serious area of concern. How can we really work on building trust? I first wanna draw a comparison to medicine. You all recognize Louis Pasteur, one of the great contributors to the discovery of modern vaccination. If the approach that's often taken in security, especially towards new developments and innovation, where we're often very fearful of new things and often say no to them, had ruled 100, 150 years ago, would we have the great healthcare we have right now? Would we have aviation? I mean, especially with statements like "assume breach" and "don't ask if, ask when", it sounds a little bit like we've already given up. And I think we have to be very critical, think back to the vendors, about the incentives underpinning these statements. We have to stay ambitious here. We don't wanna run a hospice; we wanna cure cancer. We have to keep that in mind whenever we're thinking about the solutions we develop, and whenever we hear statements like: yeah, 100% security is not possible. We want our organizations to speak up, right? Whenever they see something, whenever they hear something, whenever they get this weird phishing message: speak up, tell us, inform us so that we can react. But if they do, what do we then have to answer for? Well, then you get another nice aspect of the security industry. Assume breach, don't ask if, ask when. However, if you are actually going through a breach, then we're all the first to lean back, bring out the popcorn, and turn into Monday-morning-quarterback observers, commenting while essentially our friends in the security teams of these organizations are working their asses off trying to deal with the problems. And we're all the first to tell them all the stuff they've been doing wrong.
And it's an incredibly damaging characteristic. These people, these organizations, are victims of criminals. It's the same as saying she was wearing a bit of a short skirt, or, yeah, this newspaper shop got robbed, well, he saw it coming. It's a terrible characteristic of our industry. Ask the poor Mandiant analyst who had his laptop hacked earlier this week. And it's with these kinds of things that we position ourselves as the department of no. We lose involvement, we lose sympathy, with these characteristics. I already said: we have people's ears now. They understand it's a serious problem. However, whenever they come to us, we always say no. Whenever they pick up the phone to call us, they can already hear the person on the other end of the line sighing a little: oh yeah, it's one of those idiot users again. It's with this kind of behavior and with the victim blaming that we end up with people only asking us these gatekeeper, yes-or-no kind of questions. Can I put my data in the cloud, yes or no? Is it okay if we use USB sticks, yes or no? We have not positioned ourselves as the team that you turn to with a question along the lines of: hey, I have something that I wanna develop, or that I wanna fix, can you please help me? We're so far away from a blameless way of looking at things going wrong. If we really wanna move forward, we should focus more on investigating incidents in a way that doesn't put a person or a group of people at risk, sometimes even of losing their jobs. We should not just keep investigating an incident until we find a human having done an action, and then stop and say: ah, it was human error, we've solved the incident. We should really learn from industries who do continuous research into complex systems and who proclaim that human error is at best a symptom; it's hardly ever the underlying cause.
So even if you do conclude who was responsible, you have to keep digging deeper and find the underlying systemic failure. Maybe there were competing incentives; maybe there was a lack of clarity earlier on; whatever it was. And with that, we should also shift how we look at our problems, because I think we treat a lot of them as simple or merely complicated situations: situations where you can analyze all the individual parts and, based on that, predict how they will work together. A lot of businesses still apply this sort of Frederick Taylor approach, where you assume everything can be regulated, can be managed, if you only manage all the individual parts: tightly control them, write down rules, remove any ambiguity. And I think that's an approach we often take as well, testing individual parts in isolation without really understanding their broader interaction. With that approach, you get confidence at best: the confidence that with a certain input, you get a certain output. However, in the complex world we actually operate in, interactions are a lot more important. We really have to take that higher view and, in those environments, aim for something that goes beyond confidence, and that is trust. Now, we can only get there if we get rid of this tightly regulated approach, these three lines of defense, because, like I said earlier, it suggests security is someone else's responsibility. It lies with so-and-so's team. It leads to everybody dropping their guard a little bit, because hey, the security guys will take care of it. We really need to scale up here and deputize everybody in our organization: shift responsibility for security back to the people in the trenches, the people who write code, the people who train new hires, the people who keep old systems running.
And that doesn't mean the security role will go away, but it will change. It will become less of the gatekeeper, less of the policeman kind of team, and more of a supporting change agent: a source of knowledge, a team that soaks up a lot of security knowledge but then shares it with the whole organization. Really, I think security is too important to leave to just the security team. But that does change the way we have to behave. We have to move away from the department of no and become this smart, sharing, supporting team that's distributed throughout our organizations: really the go-to team if you want to get things done. And again, of course, this doesn't apply to any of you individually, but please consider whether, in your organizations, you are indeed the go-to team whenever someone wants something done. And if you want to move from the department of no to the team of "hell yeah," we need different people as well. We need a more diverse makeup of the team. We need more builders. Maybe we even need psychologists or marketers, and perhaps a different career path in security, where you actually progress more if you have these building, enabling kinds of capabilities, instead of just being the annoying kid who always knocks over the tower of blocks and then doesn't want to help rebuild it. So really: should we hire five more people with a background in psychology instead of five risk managers or pen testers? And with that, I think we should foster the right kind of curiosity in our teams. Phil Gilbert, the head of design at IBM, put it really nicely: curiosity is humility with ambition. The humility to listen to people, to acknowledge that we work in complex and unpredictable environments.
And pair that with the ambition to be the Louis Pasteur of our world, because our problems are highly relevant, and the world is listening to us now and looking to us for answers. So with that, I think we should aim for worthwhile security. I really like this word to describe it, because it has, of course, the literal meaning of bringing value, but it also conveys that this is something worth working on, which to me, and I think to a lot of people, is actually what attracts us to our field. These are problems worth working on. I also like it because it has a slightly larger-than-life sound to it, which to me echoes this move from mere confidence to the higher level of building trust in our organizations and our systems. So, summarizing: I think we should focus on real people, how they work, how real work works. Remember: don't disturb the orchestra, light their paths. We should work on real problems. Remember: done is better than perfect. And we should combine that with building trust, by understanding complexity and by applying the right kind of curiosity. I think that's what the world expects from all of us: to help them make sense of new technology and how it impacts society. So remember, we are the wizards of these modern times, so let's not disappoint, and let's create worthwhile security. Thank you very much. So, we do have time for questions. Queue up, that's great. If you still have your car somewhere, I'm not sure whether it has been moved, you know. So, first question, please. Hi. First, thanks for the talk. I really liked it, and I'm completely behind it. It totally reflects my experience. What I found is that the path to trusting, you know, developers and product managers is a really long one, because they often see security as a cost factor and don't understand that it helps them to have a secure product. So, what's your experience there?
How did you convince them that security is worthwhile? I think one approach that helps is to frame security a little bit more as just, quote-unquote, quality, because a lot of what we aim for actually ends up helping to create a product that's more reliable, more trustworthy, and does whatever it's meant to do better. That's the framing part. Another approach that has helped in general, especially in that scaling-up part, is to be smart about who you recruit to, depending a little bit on the terminology, what's often called something like a security guild, where you have these ambassadors in the different development teams. You can already get a lot of traction if you pick the right people, who might not be security experts right now but do have a keen interest in the topic, and you give them some training and a bit of a forum where they can raise their questions and interact with their peers. That already really helps to create almost a grassroots movement within those teams. Of course, they might still have a hard time making the case within their teams, but I think something like that is probably the best way to slowly have this approach spread within the teams, paired with indeed saying: yeah, well, we call it security, but it's actually just building a better product. Thank you. Thank you for your talk. It lines up pretty much with my experiences and my ideas, especially taking the user seriously and treating security as an integral part of doing your job well. Security is not just for security specialists.
However, in my experience, you run into some difficulties there. You've just gone a long way toward destroying this nice fear, uncertainty, and doubt model that we've been using to get management to do things a bit more securely. Security is bloody hard to quantify in itself when we're talking to the business. The main problem, in my view, is the connection to the business, because they are the ones who control the budget and have to approve new products. And now you've just taken that away from us and told us we have to start building stuff. Hell no, that's not gonna happen. And there's another thing. In my experience, when you're employed in an organization, security is pretty much used as the, what do you call it, maid of all work. Anyone who wants to get their way in an organization, especially in an IT project, sooner or later invents some security aspect to it, like availability or something like that. And hey, all of a sudden it's a security problem. What's your experience and your view on this? So, maybe starting with your first point. On the building part, and maybe I didn't bring this across clearly enough in my talk, I do think we should do that either with the business, or it should actually be the business doing it. Ideally, and I'm not sure I should go as far as saying security should cease to exist in five or ten years, but then again, in the end it is just quality. So maybe it can actually become a very small role, and we should distribute that knowledge more to what we now call the developers, or the business. So that's to your first point. The second question was: you get a lot of this, because security is essentially... The handmaiden to be employed for whatever the other person wants. Yeah: you're a little bit short on capacity, so let's frame this as a security problem so we can borrow a little bit of their time.
I'm not sure. On the one hand, I would probably be happy that they come at all. I'd rather have them come with something that is maybe not really a security problem, but at least we're talking. I'd rather have that than them saying, the day before go-live: hey, you do this pentesting thingy, right? You still have to do that, and by the way, this has to go live tomorrow, so better make it work. So I would probably applaud that. It's probably a little bit vague, but I would start the dialogue from there and try to see what the actual underlying problem is. It depends very much on the situation whether you can come together with them and find a workable solution. But the downside is that a lot of people in this field understand a lot of stuff pretty well, and indeed, especially if you have something like an IR team, it's also hard to plan what they're working on, so they don't always look busy. Yeah, let's get them involved. So we're maybe a little bit a victim of our own success in that way, but I would probably still applaud that they're coming, even if it's a silly question. Yeah, so as you noted, we tend to take some best or good practices out of a drawer and don't actually look at their effectiveness. To what extent have you looked at the effectiveness of the approach that you're describing? And how did you do that? What is the data, and what's the outcome? Well, definitely not for all aspects. This is definitely meant for people to pick and choose what they like and try to test in their organizations. What I've especially worked a lot with is the DBIR, because I used to work in the team at Verizon that wrote that report.
So I mostly have experience in applying insights from those statistics and those types of visualization to really focus: hey, this is actually a great talk that was presented at DEF CON; however, it doesn't apply to our industry anyway, or at least not to our organization or the type of data that we have. So it's mostly that. And for a lot of the other aspects, especially the elements around complexity and how to deal with it, I heavily rely on research that's been done mostly in fields like aviation and healthcare. So I have some case-by-case examples of applying those, but not enough yet to say: this was before, this was after, and this was in a population of 100 organizations. So there's definitely work to do there. Yes, it's a hard problem. Yeah, well, if it was easy, everybody would be doing it, right? Right, are there any more questions? No. So thank you again to our speaker, Yedda.