Hi everyone, and welcome along to this talk on informing cloud native vulnerability management with views from the past. We've got a lot of talks over the course of this conference looking at the future, at where we're going and how things are going to happen, and I thought it might be interesting, as a kind of adjunct to that, to take a look at the past and ask: what are some of the things we've seen in vulnerability management and security management that might help us inform and design those things in the future?

A key point before we start: the ideas in this presentation are my own, not necessarily the views of any employer past, present or future. This is something I've been looking at over many different jobs, so it's not necessarily the view of any one company.

To talk a bit about my history, and why I think I might have some interesting ideas on this: I've been in information and IT security for about 22 years now. I started in financial services as an analyst, and I then managed internal penetration testing services for a number of financial services companies in the UK. So I did a lot of vulnerability scanning, a lot of vulnerability analysis work, and a lot of working with the business to get things patched. I then spent quite a lot of time as a pentesting consultant, and I've done hundreds, probably actually thousands, of pentests of a variety of types: infrastructure work, wireless work, web application work, and recently a lot of container work. I've also done some work in the vulnerability disclosure space, helping find and disclose multiple CVEs. I'm a member of Humanistic Security and CNCF TAG Security.

Unfortunately I obviously wasn't able to be at the conference in person. I actually stay here, which is Lochgoilhead in the west Scottish Highlands, which as you can see looks very pretty when it's not raining. If you ever come to the west Scottish Highlands, I'm afraid you probably won't be able to see those hills, and there'll be a lot of rain in the way, but it is very nice when the sun's out.

So, lessons from the past: why would we want to think about the past when we're looking forward? Well, the first thing to get out of the way is that I'm not saying nothing should change. I think there's a concern that somebody bringing up the past is going to say, "hey, nothing should change, things worked well in the past, we shouldn't mess with it." I don't think that's the case. I think we do very much need to change how we do vulnerability management and how we do security for a cloud native world. But that doesn't mean we can't look at the past, look at some of the things that went well and badly, and use them to try and inform what it is we're going to do in the future. And when you do these things, it's a good idea to take a bit of time to step back and do that.

So the first theme I want to talk about is openness. I think openness is critically important, and obviously this is an open source conference, so it's going to be a key value anyway, but I think we can talk about why, and about some of the ways in which openness is very important. The first one I take from my time in security testing, when we would do assessments either as an internal team or an external team. You would often find that some other group would also do an assessment, right?
And they'd be using a different vulnerability scanner product, and when they did, you would get a different list of vulnerabilities. You might get different severities, you might get a different number of vulnerabilities, and this was a problem, because you've told a business group, "hey, you've got 50 vulnerabilities, and 10 of them are highs," and then someone comes along and says, "actually, you've got 75, and there are 15 highs," and the business asks you, "well, which is it?" The answer comes down to differences in how the scanners work. If those scanners aren't open, if they're closed source and proprietary, or even worse if they're SaaS so you really can't see what they're doing, you have a difficult time answering that question. In a lot of cases you can't actually say, "you know what, we can tell what's going on here, and it's this."

One of the things I see as a great positive of some of the work being done at the moment in container vulnerability management, with products like Grype and Trivy, is that they're open source. If you want to know why a different result was received, you can go and look, and they do produce different results in some cases, for sure. But if you need to find that out for a business reason, you can go and look at the source code, look at how they do their work, and come up with some kind of answer. So I think that's great, and I think we should be encouraging that openness, so that we can actually see what's going on and make those comparisons, because it helps us build confidence with business people that we have some basis for telling them that specific vulnerabilities are present.
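As an aside, because both tools emit machine-readable JSON, the "why do the two scanners disagree?" question can at least be scoped programmatically before anyone reads source code. Here's a minimal sketch in Python; the report dictionaries are simplified stand-ins modelled on the Grype (`matches` → `vulnerability.id`) and Trivy (`Results` → `Vulnerabilities` → `VulnerabilityID`) JSON layouts, which can of course change between tool versions:

```python
def grype_ids(report: dict) -> set[str]:
    # Grype-style reports list findings under "matches"
    return {m["vulnerability"]["id"] for m in report.get("matches", [])}

def trivy_ids(report: dict) -> set[str]:
    # Trivy-style reports group findings per scanned target under "Results";
    # "Vulnerabilities" may be absent or null for a clean target
    return {
        v["VulnerabilityID"]
        for result in report.get("Results", [])
        for v in result.get("Vulnerabilities") or []
    }

def compare(grype_report: dict, trivy_report: dict) -> dict:
    # Reduce both reports to CVE ID sets and show agreement/disagreement
    g, t = grype_ids(grype_report), trivy_ids(trivy_report)
    return {
        "both": sorted(g & t),
        "grype_only": sorted(g - t),
        "trivy_only": sorted(t - g),
    }

# Stub reports standing in for real scanner output on the same image
grype = {"matches": [{"vulnerability": {"id": "CVE-2023-0001"}},
                     {"vulnerability": {"id": "CVE-2023-0002"}}]}
trivy = {"Results": [{"Vulnerabilities": [{"VulnerabilityID": "CVE-2023-0002"},
                                          {"VulnerabilityID": "CVE-2023-0003"}]}]}
print(compare(grype, trivy))
```

The point isn't the diff itself; it's that with open tools, every ID in the "only" buckets is something you can chase back to a matching rule in public source code.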
I think the other aspect of openness that's very important is getting visibility of all the vulnerabilities. In the old enterprise world, one of the things that became increasingly popular over my time in financial services was appliances, and appliances were essentially treated as black boxes. You weren't able to get access to them to find out what packages were installed, and in most cases you weren't able to find out what vulnerabilities were present. And to an extent, honestly, businesses liked that, because it was a system they didn't have to manage; it was the vendor's responsibility to manage the packages and the patching. In reality this is a false economy, because the vulnerabilities are still present and the packages are still installed; it's just that you don't have any visibility of what's going on, so you can't assess the risk of them. As time's gone by, companies have realised there's a problem here: we need to have some idea of what is going on inside those black boxes, so that we can have an accurate, or at least a more accurate, picture of the risks and vulnerabilities we face.

But if we look at cloud native, and how cloud is developing, one thing that's very obvious is the popularity of SaaS services. A company will essentially just subscribe to a service and send their data to it, it'll get processed in whatever way and sent back, and that, to my mind, is similar to the way appliances worked. A SaaS service is a black box: in most cases you don't know what software is in use, and you don't know what packages are in use or what versions. So from that perspective, I think we need to think about how we can make that more open.
How can we make sure we're not essentially just hiding the vulnerabilities in another company's network or environment? Or, if we are, how are we assessing the risks of that? Because that's a challenge for us. So openness, to my mind anyway, is very important, and the more open we can be, the better it's going to be and the more confidence we'll build in what we're doing.

The next theme: ending the tyranny of the CVE. Now, I don't want this to come off as a slam on CVEs. I think CVEs are great. Having a unique identifier for a vulnerability is a great thing, because it helps people discuss that issue, and having something to do that with is great. But there are a couple of ways in which CVEs can essentially be a kind of problem, and I'll explain what those might be.

The first one is that there's some tendency to say that if there isn't a CVE present, then there's no problem: it's not tracked, and it's not necessarily prioritised, so end user companies don't look at things, or don't know about things, unless they have a CVE assigned to them. And that's a problem, because there are things that don't get CVEs that companies should definitely be aware of from a risk perspective. An example of that is the Kubernetes security audit. There were multiple high-rated security findings in the 2019 Kubernetes security audit that haven't been fixed. At the same time, if you look back over the last three years of the Kubernetes project, there have been a number of CVEs, and those at the very least get a bulletin with a workaround, and in the vast majority of cases get a patch as well. But the things in the audit didn't have a CVE, so I think there wasn't as much pressure for them to get fixed, whereas things with a CVE do get pressure to get fixed. So I think that aspect of CVEs is an important one, and something we need to look at.

The other one is that I think there's a problem where CVEs are seen as a bad thing.
Right, so if I have a CVE, that's somehow a black mark on my project, and we want to be careful of that, because you'll get people making false equivalences. They'll say things like, "oh well, this doesn't have any CVEs, and therefore it's a good, secure product," which is not correct. Look at, for example, IBM mainframes. IBM have a published policy that they do not issue CVEs for security issues in their mainframe products. They don't say that means they have no security problems. However, I have seen it said by some people in the industry that, "hey, there are no CVEs, it must be super secure." That's not true; it's just that they don't use that system.

Another place I've seen this is open source projects who don't want a CVE assigned because they're concerned it's going to be a black mark. And that sometimes leads to things being raised against inappropriate projects, or not being raised at all, and again, that reduces this tracking.

So when we're thinking about how we design the future, there are a couple of things we could look at. We could look at the idea of having something which is wider than just straight-up vulnerabilities: things like, for example, where security architecture decisions might be tracked. Because then you can say, "here is something which is relevant to your risk, Mr. End User Company, and you should know about it." Maybe there's not going to be a patch or fix for it, but you should still know about it. We also need to try, as much as possible, to get away from the idea that there's some downside to CVEs, or to having a vulnerability. Every product has got design decisions or coding bugs; there's no such thing as non-buggy code, so everyone's going to have security-relevant bugs. And I think it's important to try to normalise the idea that that's fine.
It's a good thing, even. The important part is that there's a process: these things are found quickly, tracked as well as possible, and fixed where people have got the time and resources. So I think that's one of the things to keep in mind there.

And then the last theme: not everyone is going to be as interested as us. An important point here is that I'm not saying end user companies aren't interested in security. What I am saying is that they've got a lot of things to think about, a lot of priorities pressing on them, things they have to do with their time, and vulnerability management and patch management isn't necessarily always going to be at the top. When we're designing systems for the future, having that in mind is important, because if we design something that requires an awful lot of effort from those companies to work well, it probably won't get used that well. There are a couple of examples from my time that I think are relevant.

The first one is that, as a pentester in the UK, there's a very common thing which is the annual pentest.
Um, there's a very common thing which is an annual pentest So a company will essentially do a review Every year and they'll look at the same scope to look at a network a system or the entire environment If you're doing a pentest for the second third fourth here You would bring along the report from the previous year Open up and run down the list of findings to check to see if they're still present I many times have had the experience of some or indeed even all of the fundamental of the findings with previous year Just not being fixed Tests have been done But they haven't actually fixed the findings because presumably they just didn't have the resources and time or prioritize it to actually get that done So there's a key thing there, which is you know People aren't as interested in that in that sense, uh as we would be um And I think that's that's you know, that that's something to think about the other thing is that we look at something like cvss temporal scores You know, there's this thing you can add on to cvss where I've seen You know people managing lots of vulnerabilities and you think about container security world where you might have you know thousands of tens of thousands of container images Something where you have to do it manually for every vulnerability and actually do an analysis Probably isn't going to happen, right? 
Realistically speaking, they don't have the person-resources available to do that kind of work. So trying to steer clear of anything which needs that kind of per-item manual effort is important, and something that would help us.

So, to conclude this talk: I think it's great to see the renewed effort that's going into improving this space, and I think it's vital that we take this time to consider and work out how we can best apply good security practices and good vulnerability management practices to the cloud native and cloud world. I think trends like improved openness, having things done in the open where we can actually analyse what's going on and everyone knows how things are done, are fantastic, because it enables us to demonstrate to business people that we're actually finding things, and why we found them; we're not just telling them something, we actually have a reason for it. I think that's very important. I think we do have work to do in terms of making things as open as possible, and reducing as much as possible those kinds of black boxes where you don't know what's going on, because you can't assess your risk well if you don't know what's inside the box.

As we think about how we identify vulnerabilities, anything we can do to widen the scope, to say it's not necessarily just bugs but other things as well, and to have those things tracked and assessed and scored and prioritised, will be good too, so that we're not too focused on things that come as CVEs. And I also think we need to be conscious of time and conscious of resources. Designing systems needs to take the user into account.
That's one of the first precepts of any system design. End user companies have a certain level of resources available for this kind of thing, and anything we can do with new systems to make them as easy as possible to adopt is going to be a win. You want the detail to be there, so that if someone does want to dive in, and has the time and resources, they can do that; but you also want a path for people who have a limited amount of resources available for this kind of work.

So, thanks very much for taking the time to listen to this talk. If you want to get in contact with me afterwards, I'm available on Twitter as @raesene, and my email address is there on the screen. I hope you enjoy the rest of the conference.