Hey everybody, thanks for tuning in today to Cloudsmith's monthly webinar. Today's topic is how do mature DevOps teams manage security. Before we get started, let's go through a few housekeeping notes. We have prizes to give out: two free lunches and two prize packs to give away at the end of the webinar, so be sure to watch till the end for a chance to win. We're also streaming on Twitter, on LinkedIn, on YouTube, as well as our webinar platform. Please post questions to whatever platform you're using; the wonderful Hillary will be monitoring our channels and feeding them back to me. We'll also be holding one or two polls, and again, post in your platform, or tweet, or chat, and we'll be looking for your answers.

So we have two really amazing guests today. Let's bring them on stage. Hey Nigel. Hey. So today we have Nigel Kersten and Jacques Chester. Nigel Kersten is the field CTO at Puppet by Perforce, and he's an author of Puppet's much-loved State of DevOps report. And we also have Jacques Chester. He's a senior staff software developer at Shopify, the author of the book Knative in Action, on building serverless applications, he chairs the OpenSSF's working group on securing software repositories, and he's heavily involved in Ruby's open source community. So thanks for coming today.

Nigel, you've released ten, I think that's the full ten State of DevOps reports. That's a lot. How was that? It's been a pretty massive effort over the years, and I have to say I'm sort of the last person standing, so to speak. If you're going to talk about the history of State of DevOps, there are people who had a bigger impact than me. Alanna Brown, who's since moved on and now works at Remote.com, it was her idea in the first place, and she really drove it for a number of years; I was a co-author. And then Dr. Nicole Forsgren came on for four years, I think it was, and she really brought a level of statistical rigor and research to the whole project. But there have been so many people: Gene Kim, James Turnbull, Jez Humble, Michael Stahnke. So many great authors over the years. But last year was a really big one for us because it was ten years, and it suddenly made me realise how long I've been messing around in this industry. I had to look up the term for this the other day: semantic satiation. You know, when you say a word over and over and over again and it loses all meaning. I think DevOps and DevSecOps are kind of like that. You say DevOps twenty times and it doesn't mean anything anymore. I'm hoping to get to ten years myself.

And so is number eleven going to be out, and are you still going to be as big a part of it? Yeah, I'm guiding it at the moment. We've got a fantastic researcher, Ronan Keenan, who's taken on the bulk of the work and is working with some research firms for us. We're trying to do something a bit different this year, and I could talk about this topic forever, but basically I think DevOps is now such a big field that it's very difficult in a single report to come up with interesting, useful findings. You have the folks at the beginning of their journey, the folks who are very much post-DevOps, the folks who've moved on, who've tried it, who it doesn't work for, who it does work for, and if you try to cover all of that in a single report, I think you just end up producing a book every year.
So this year we're just focused on platform engineering. I can't write a book every single year; it gets to you. Absolutely. And Jacques, tell us about yourself. I know you're focusing on open source security, and you chair the working group on software repositories. I'm just wondering how you got into that. What was your journey? Yeah, I've been co-chairing, or deputy chairing, I don't know how you want to describe it, with Dustin Ingram from PyPI, from the Python Software Foundation. How did I get into it? I used to work for a company called Pivotal, and I really enjoyed my time there. One of the things I worked on at Pivotal was what was called Pivotal Network. It was our distribution point for all our software products, which we needed to have legally. And some of the products got installed in cages, under armed guard, because they were fairly sensitive sorts of operations. Is this a euphemism, or real? No, this is really a thing that happened. The software, like, the USB stick, would be walked in under armed guard. It was that kind of a place. Okay. And I suddenly thought: there are people in the world who would be very interested in getting inside those cages through our software. And that was one of those oh-expletive moments. That was the lightning bolt that led me down the path to where I am today.

Oh, cool. So our topic today is how do mature DevOps teams manage software security. I thought my first question I'd pose to Nigel. I know we're saying DevOps means nothing anymore, but what does DevOps mean? Is it just automation and cloudy stuff? It's just tools, right? Yeah, I mean, this is a tough one. You talk to folks like Patrick Debois, who pretty much coined the term, and it was very deliberate that there wasn't a clear, reductive definition of exactly what we're trying to do here. Because in many ways, if you look back at the early days, it was basically a bunch of sysadmins going: how do we actually be agile? How do we take agile in spirit and apply it to operations? Oh look, we have all of these cultural problems, all of these accountability problems, ownership doesn't match authority, all of these things. And I think we had such a vibrant, interesting, exciting space emerge because it wasn't tightly defined. You could turn up to DevOps Days and get a talk about just about anything. But then we hit the enterprise, and I think the lack of a definition meant that, honestly, shyster vendors stepped in and started going, we do DevOps, here is DevOps in a box, or consultants came along, in a similar way to agile and SAFe and various permutations like that, which I don't think were particularly true to the original spirit. So as far as what it actually means to me, I take a pretty big-tent approach. It's a loose collection of practices, technical and cultural, for getting over organizational boundaries inside organizations so that we can ship software with less stress and better outcomes. That sounds really vague and could apply to just about anything, but every time I try to narrow it down much more than that, I end up cutting out something I think is important. Yeah, I worked on those teams where releasing every six months was the normal kind of thing.
You would release something and it would be very stressful, and something would go wrong, and then you'd have to roll back, and it was a high-stress moment in a big long journey. So I think that moving away from that is a good thing. Nothing like having 183 people on a bridge call over the weekend? Yes, no one ever wants to do those calls.

And so how do you see security? Is security being brought into it more? At the start we tried to merge development and operations, and now we're going, oh, security is kind of still a bit siloed, let's bring them into this tent. Is that how you see it, Jacques? Yes. And unfortunately, just because of the economics of the situation, it's going to be siloed for a while in a lot of ways. There just aren't that many cybersecurity folks to go around. So a lot of organizations, either deliberately, or without thinking about it, or out of regret, wind up with a central security team that acts as a gatekeeper, which we know from our DevOps days is an anti-pattern. The other thing I see as an anti-pattern, again very much like the evolution DevOps went through, is the idea that there's a box of software you can install and today you have security. That's not true at all. It's a pity that we have to go through this evolution, but I'm hopeful we'll come out the other side with something better.

And I know it's not a box, but are there some nice tools, a bit of a leg up? Yeah, it is important to think carefully about your tooling. Dan Lorenc, who's the CEO of a company called Chainguard, says this a lot, and I agree with him, I had a similar sort of model once upon a time, which is that build is production. The systems where you are building the software are as sensitive and as risk-dense as production itself. Because, as I said, if somebody gets into the bucket of bits, you are in a world of hurt, and a lot of the time people have historically underrated that risk. The bucket of bits has been the fastest path into production: attack the build system itself, or the artifact system itself. So you should think carefully about those systems, about securing them and hardening them and applying all the security practices you have now to them. But I think there are two great tributaries of security risk that you can think of as flowing into the river, as it were. One of them is that build system and upstream dependency risk, which comes from outside the organization. And then there are risks that come from the inside, and the really big one is making unintentional errors in your software that lead to a vulnerability. That one I think doesn't get as much time as it needs, because again, it's hard to install something for it; it's hard to have a checklist that says, I have now secured myself against security errors.

Yeah, actually, one of the times where you as a developer probably have the most power over security is when you're bringing in dependencies. What is it that you should consider when you're bringing in a brand new dependency? Should there be a checklist you can use? Well, yes and no. That's an emerging field right now, people producing these checklists. There's even a startup called Socket, socket.dev, who have automated the checklist, for npm at least. Broadly, I would say: take the things you're already doing. Is this project lively? Is it active, are people still contributing, do they respond quickly to problems? You also want to look at security practices, like do they have MFA enabled on the repository accounts they use. But you also want to make sure of little things, like: are you accidentally installing a different dependency from the one you thought? Are you making a typo? So double-check that you're getting the package you expect to get. Little things like those can add up to a lot. I think we're in the early days of having a strong story about how to pick dependencies from a security point of view.
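For readers who want a concrete feel for the kind of check Jacques describes, here is a minimal sketch in Python against PyPI's public JSON API (https://pypi.org/pypi/&lt;name&gt;/json). The specific checks and the one-year threshold are illustrative assumptions, not the Socket product or any official checklist; a real vetting pass would look at much more (maintainer count, MFA status, install scripts, and so on).

```python
# Sketch: two of the "little things" from the checklist above -- did I get
# the package I meant to get, and is the project still alive?
import json
import re
import urllib.request
from datetime import datetime, timezone

def vet_package(name: str, max_age_days: int = 365) -> list[str]:
    """Return human-readable warnings about a PyPI package before adopting it."""
    warnings = []
    with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json") as resp:
        data = json.load(resp)

    # Guard against typos and lookalike names by comparing normalized names
    # (PEP 503 treats runs of '-', '_' and '.' as equivalent).
    def normalize(n: str) -> str:
        return re.sub(r"[-_.]+", "-", n).lower()

    if normalize(data["info"]["name"]) != normalize(name):
        warnings.append(f"name mismatch: asked for {name!r}, got {data['info']['name']!r}")

    # "Is the project lively?" -- look at when files were last uploaded.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    if not uploads:
        warnings.append("no release files have ever been uploaded")
    else:
        age = (datetime.now(timezone.utc) - max(uploads)).days
        if age > max_age_days:
            warnings.append(f"latest upload was {age} days ago")

    return warnings

for w in vet_package("requests"):
    print("WARNING:", w)
```

None of these signals is decisive on its own; the point, as the discussion suggests, is to make a handful of cheap checks habitual before a dependency lands in your lockfile.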
It's funny, something you said there that I want to jump on, because I think one of the things underpinning all of this is how hard software development is as a team sport. The teams keep getting bigger and bigger, with different roles, and it's often really hard as an individual practitioner to actually make a good decision about whether you're locally or globally optimizing. I think that's what a lot of this comes down to. The job you're being measured on is to ship some software, some features, resolve some bugs or whatever, and if everyone just goes for the shortest possible path to get there, you end up in a situation where the environment they're operating in becomes more fragile, more error-prone, more insecure. And we're just not very good, as human beings working in large groups, at surfacing the right kinds of things to make a decision between local and global optimization. I don't have a solution here. No, if you have a solution, then I urge you to put your name in for a Nobel Prize in economics. Exactly. Because that would be a pretty big breakthrough.

There are so many points in the software lifecycle: the source code, the CI/CD system, the artifact repository, the dependencies, the external dependencies on public repos, and then all the tooling as well, your scripting, your environment variables. There's just a lot. There is, and that's one of the hard things about being a software developer: there's so much to know about so many topics that it's hard to be an expert in everything. Again, I wish I had the solution where I could do a sort of Isaac Asimov thing, where you play a tape and that puts a memory in your head. You can tell how dated that story is. Yeah, exactly, you put in the reel-to-reel and some blinking lights and there you go. But I think there's still a lot of value in creating a minimal level of awareness of the possible issues. You don't necessarily have to know the solutions; you just have to know, a, that there might be a problem here, and b, where you can get help.

And I know both of you have talked about how cultural change, and actually focusing on people, is a great way to get better security. Do you want to talk about cultural change in DevOps, and how to make your DevOps processes really nice and secure using culture? So I think there are a bunch of things to unpack there.
One is that DevOps, like a lot of the most significant tech movements in how we build software, is a grassroots movement. These movements weren't started by people at the top of the hierarchical pyramid inside organizations; they come from people down at the bottom. And so it's easy to go, we have a cultural problem. One of the things we found out from last year's State of DevOps report, when we did a bunch of qualitative and quantitative research, was that organizations with lots of what we would call cultural problems talk about culture all the time. But organizations that don't have many of those sorts of problems have stopped using the word culture, because it's not actionable. It actually encourages a weird form of helplessness inside organizations. If you're an individual developer and you go, well, our culture doesn't allow people to make those sorts of decisions, everyone goes, you know, it's like an earthquake: what are you going to do about it? You just sort of wait for it to move on. But organizations that actually implemented these sorts of changes and had fewer cultural problems, somewhat paradoxically, don't talk about culture. They talk about specific things. We have a problem with ownership. We have a problem with making decisions quickly. We have a problem with documenting tribal knowledge, or ancestral knowledge, around a code base. All of these things are quite actionable. And one of the things I found really interesting last year was talking with the Team Topologies authors, Manuel Pais and Matthew Skelton. If you haven't read Team Topologies, it's one of the best organizational design books ever written around tech. And the big definition they came down to was: stop talking about culture. Talk about what you need to do to be able to ship software quickly, with low cognitive load and stress on individuals. If you actually look at those things and identify them, then they start becoming things people feel they can do something about. So that was a really long-winded way of saying culture is massively important, but you've got to go at least one level below and ask, what is it we're actually trying to achieve here? Let's not just say culture and throw up our hands; let's ask what the problem is and how we're going to fix it. And then you can make incremental changes and get better and better, and people can tell whether they're making a difference. One of the things I found really frustrating when I worked at Google was this ineffable phrase, Googliness. People would go, well, that's not very googly, and you're like, I don't actually know exactly what you mean, and I'm pretty sure you're just using this as a weapon to get your way. We need to get the Googleometer. Exactly.

And so, on that, do you guys think that metrics are important to improving software security? Is it part of improving your DevOps? Sorry to cut you off. No, no. To answer the question: as I see it, metrics are essential, but they're not enough. As we all know, if you govern purely by the metrics, two things happen. One, anything that's not in the metrics, you will ignore. And two, if what you're doing is like a control loop, where you think of yourself as a little controller, you've got your sensors, which are the metrics coming in, and you've got the actuator, which is you doing stuff to the system.
It turns out that if you want to reduce the difference between the target and what's actually happening, the easiest thing to do is to fiddle with the sensor. It is much easier to game the metrics than to actually improve the system. You need to be aware of that, and the reason that's important is that if you tie punishment and reward to metrics, they will immediately be gamed. Those would be the two cautions I'd give about metrics. Yeah, it's very human to change the measuring system. There's a really good example of this: the bank that tried to incentivize all of its tellers to get everyone to open bank accounts. What they ended up finding was that the tellers, en masse, did the natural optimization, which is just going, okay, let's open up lots and lots of accounts for people, whether it was a good idea or not. Yeah. Oh yeah, I remember when I worked at Currys, the electronics store, as a cashier, and I had to get my metrics up on selling insurance on products, no matter what the product was. So this customer got my spiel, would you like insurance on your product, and she just looked at me and went, no. It was a vacuum cleaner bag. Very good. I know, and I asked the question anyway. Yeah, I would say use metrics to sense the environment, but as I said, beware tying punishment and reward to them. If it didn't work for the Soviet Union, who had unlimited authority and an unlimited supply of men and women with guns and dogs to try to make a metrics-governed system work, then it's not going to work for you. Right, so use with caution.

Yeah, there's a good example of this, not to cut you off. I get asked a lot about the big four metrics that came out of the work we did with the DORA folks, and that they ran with: mean time to recovery, change failure rate, deployment frequency, et cetera. And it is horrifying what people out there in the world have done with these metrics. They were a sane collection of four metrics that pull in different directions, so you can't optimize one too far at the cost of the others. But you literally get teams inside enterprises competing on how to improve all of these things, and, exactly as I was saying, you can improve deployment frequency and change failure rate by deploying more often and not being as good at measuring, at looking for errors. So you get these teams optimizing for one, two, three percent improvements in these metrics and losing sight of the bigger picture.
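As an aside, three of the four measures Nigel names are simple enough to compute that it's worth seeing how little machinery is involved, and therefore how easy they are to game. A rough sketch in Python, using a hypothetical Deploy record rather than any real pipeline's schema:

```python
# Sketch: DORA-style measures from a list of deploy records. The record
# format is made up; the numbers only mean anything if failures are
# honestly detected and logged.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Deploy:
    at: datetime                           # when the change hit production
    failed: bool                           # did it cause a production failure?
    recovered_at: datetime | None = None   # when service was restored, if it failed

def deployment_frequency(deploys: list[Deploy], window_days: int = 30) -> float:
    """Average deploys per day over the trailing window."""
    cutoff = max(d.at for d in deploys) - timedelta(days=window_days)
    return sum(d.at >= cutoff for d in deploys) / window_days

def change_failure_rate(deploys: list[Deploy]) -> float:
    """Fraction of deploys that caused a production failure."""
    return sum(d.failed for d in deploys) / len(deploys)

def mean_time_to_recovery(deploys: list[Deploy]) -> timedelta:
    """Average time from a failed deploy to restored service."""
    gaps = [d.recovered_at - d.at for d in deploys if d.failed and d.recovered_at]
    return sum(gaps, timedelta()) / len(gaps)
```

The gaming Nigel describes falls straight out of the arithmetic: shipping more trivial no-op deploys raises the frequency and dilutes the failure rate at the same time, without improving anything real.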
But to bring this back to a security lens: the thing I often say to folks trying to do DevSecOps inside organizations, at the start of this journey, is: how quickly can you push a change to production and know that it's actually gone out? Because if you can't do that quickly, if you can't respond to something, push out a fix or a change of any kind and know whether it worked or not, that is just the 101-level substrate. You can spend all this time optimizing all sorts of other policies and processes, but if you can't create change in your environment quickly and reliably and see the results of that change, stop caring about DevSecOps and all these things, and just fix that first. Yeah, the worst time to find out that you can't deploy to production quickly and safely is in the middle of a security incident or an outage. Absolutely. We all found that out recently with Log4Shell.

And on Log4Shell: do you see dealing with critical vulnerabilities, and updating your software and all its dependencies, having a process for that, as really important? Actually, on that, we have a poll. The question is: do you pin your builds, or do you update to the latest? This question mostly comes up around vulnerabilities. There are loads of good reasons to update, but with respect to security: if you update to latest you'll get all the fixes, but if you pin your builds you're not going to be tricked into updating to a bad version. And we see here it's roughly half and half, but most people prefer to update to the latest: 49% say update to latest, 40% say pin my builds, and the rest say it's not important to them. I don't really feel like this question is solved. At Cloudsmith we recommend pinning your builds, but if there's a critical vulnerability it'd be great if you updated as quickly as possible, so I totally see the other side. So we like to say: pin your builds, but then use tooling like Dependabot or Renovate to give you a prompt, a pull request with an update to the latest, and that will quicken the cycle. So what do you think on that topic?

That's a bit of a hornets' nest. I'll let Jacques answer in more detail, but I'd say at a high level, some of it depends on scale. If you're two developers who own the whole system you're in, in a very small startup, the answer is very different than if you're a multinational bank with regulations and hundreds and hundreds of teams interacting with each other. The big problem with auto-updating to latest all the time is: when are you creating that artifact? Are you creating something that's going to be tested in a test environment? Are you going to be able to reproduce that artifact again? I think there's some nuance here, and it probably involves doing a mixture of both, but choosing when in your software delivery lifecycle you do each of those activities. The most depressing answer from experts is: there's nuance, it depends, on the one hand and on the other hand. I'm broadly in the camp that you should pin your dependencies in source code and update them automatically. I don't like mystery dependencies showing up in production without warning and without record; that makes me deeply uncomfortable personally, but I recognize it's a hassle. We are, I don't know, not quite in the prehistory, but we're definitely no further than the Bronze Age in terms of dealing with this stuff. We have technology, but it goes blunt easily and causes a lot of hassle, and we just need to grow the muscle to do it, and that's going to take a lot of time and be sporadic and uneven. But I do agree with Nigel's point that there are minimum standards of hygiene you need to reach first. You need good testing in CI in place. You need a smooth road from source code changes to production. Those are the same capabilities you will need to automate upgrades.
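To make the "pin your builds, then automate the bumps" position concrete: the mechanical core of pinning is just recording a digest for each artifact and refusing anything that doesn't match, which is the property pip's --require-hashes mode enforces for Python dependencies. A minimal sketch, with a hypothetical artifact name and a placeholder digest (it's the SHA-256 of an empty file):

```python
# Sketch: verify a downloaded artifact against a pinned digest before use.
# Names and digests here are illustrative, not from any real lockfile.
import hashlib
from pathlib import Path

# The "lockfile": artifact name -> pinned SHA-256 digest.
PINNED = {
    "libwidget-1.4.2.tar.gz":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify(path: Path) -> None:
    """Refuse to use any artifact that isn't pinned or doesn't match its pin."""
    expected = PINNED.get(path.name)
    if expected is None:
        raise RuntimeError(f"{path.name}: not in the lockfile, refusing to use it")
    actual = hashlib.sha256(path.read_bytes()).hexdigest()
    if actual != expected:
        raise RuntimeError(f"{path.name}: expected {expected}, got {actual}")
```

A bot like Dependabot or Renovate then closes the loop by proposing a pull request that bumps the version and the digest together, so staying current doesn't have to mean accepting mystery bits.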
I will put an asterisk here, though, about the tradeoff and risks between waiting to upgrade and upgrading too soon. Sonatype released their eighth State of the Software Supply Chain report a few days ago; it's worth reading, they do fantastic research. Their position is that you should hang back a little, one or two versions behind the pace, or maybe some amount of time behind, which I think would be a better way to do it, on the theory that if you're right at the bleeding edge, you will get cut from time to time, and it's not worth the risk. I'm kind of on the fence about that. I think the incidence of a vulnerability existing is far higher than the incidence of a supply chain attack being successful, so that's how I weigh the balance of risks. What about general bugs, too? Because this is the one that always gets me. There's nothing I find more frustrating than developing something using a bunch of libraries or frameworks and beating your head against the wall going, why is this not working? It should be working. And then you upgrade a dependency and, ah, it was actually a bug all along. There's something to be said for staying on latest; it generally leads to a better experience. It's also because upgrading is not just a linear function of the number of things you have to upgrade. It's exponential, right? Because there are interactions between the dependencies, so the longer you leave it, the larger that sort of Cartesian join of doom gets. So you want to keep close to the edge if you can. At Shopify, for example, we have the monolith, the main application, which is probably the largest Rails app in the world, and we keep that on Rails edge. Once a week we upgrade to what is literally in the main repo of Rails; we're not waiting for point releases or anything like that, we're keeping up with it. Because we know the upgrade pain would just be too large otherwise. If we hung back for a year, it would be catastrophic. And I can look back at the earlier history of the company, through documents and git commits, and I can see that pain, and I can see why we did it. Yeah, I just got off a call with a customer who's still on Red Hat 4 and is unlikely to ever get off, because they left it too long and now they have this slice of history they have to work around. Yep.

You're muted. You're muted. The webinar gods are against us. No, still muted. This is how you know it's live, everyone. Yeah. So, one of the interesting things, while she sorts out her audio: a lot of this conversation around security issues and software supply chains often feels kind of one-sided, in terms of companies getting an awful lot of software for free from volunteer maintainers. Every time one of these vulnerabilities comes out, everyone has the pitchforks out for the maintainer, who's going, I was doing this out of the goodness of my heart, and I was maintaining that stupid backwards-compatible feature because you all protested against it. I think something has to change about the producer-consumer relationship in open source. There's a general assumption that it's software of a certain quality, and everyone should try to write good software, but something feels out of kilter in society about the promises and commitments that people expect. There's a really fascinating paper on this, currently in preprint on SSRN, the Social Science Research Network.
It's a paper called Tragedy of the Digital Commons, written for a law journal but going into the economics of it, the law-and-economics of the situation, and she makes exactly the same point: that large software companies in particular are free-riding off the community in a big way. Her argument is that the ambient costs of security risks should be pushed back onto those companies to bear, because they're the ones best able to bear them. Yeah, I totally agree. Big tech loves open source like sharks love fish.

Can you guys hear me now? Sorry about that; my laptop battery died. Anyway, I saw that legal letter that one of the Log4j maintainers received, and it was just like, oh, for the love of God. He's doing this for free, and you're sending him a legal letter telling him to update, from a company that's using his code for free. It definitely doesn't sit well. It doesn't seem morally right, or something. I mean, I'm in an interesting position here, because I've been one of the champions for introducing MFA requirements for software repositories, where authors need to have MFA enabled because their packages are so widely used. In a sense that's imposing a cost; it's imposing additional effort on the package maintainers, who didn't ask for it. And I do feel bad about that, but then I have to take the utilitarian stance that the end consumers are far more numerous, and for them the consequences are far more serious if there's a compromise. It's a tricky thing, but I think the difference there is that the end consumers include other random open source developers, who didn't expect something nasty to come down the pipe, as well as the companies who can bear the cost and should contribute back. Yeah. I saw that PyPI has some stats on who has converted to MFA. It's not super impressive; it's about 20% of the people who will eventually be required to turn on MFA. Maybe some of them don't know about it, or some of them don't want to do it and will just wait until they have to. It won't be a big deal. That's largely what happened in Ruby. I know some of those authors because they work at Shopify, and they said, yeah, we agree with the logic, we're just not going to do it until you make us do it, because it's work, right? It's an additional thing to do. Yeah.

And what about, I saw a list of things that open source maintainers can do to be more secure, but it was a lot of stuff for someone to do. It was like, add Scorecard to your repo, and so on; there was just a ton of stuff to do. I just can't imagine, if you're doing this in your spare time, that a lot of people are going to do it. Especially when people often got into this because it was fun: hey, I solved a problem in a fun, interesting way and I want to share that with the world. I don't know if software licenses are the way to do it, or some kind of opt-in system, but I feel like there's got to be a way to distinguish between, hey, here's something fun and cool, have at it, and, I am deliberately building something that I would like to be part of a bigger structure and a bigger ecosystem.
And I think that's the constant trade-off. You don't want to stifle people just sharing code that's there for fun, but there's got to be some declaration of intent somewhere. I think the coordination point, or choke point, depending on how you look at it, is probably going to be the software repositories, because they can set the terms under which they agree to distribute the software. And if you don't like those terms, you are within your rights to take the software, which is open source, and run it yourself, and within your rights to just distribute source from a website that you own. There are alternatives; they're not as convenient, right? They aren't, but that's the trade-off. Yeah. All of this reminds me of when I was a Debian maintainer back in the day, when you were sort of in one of two big Linux camps, and I was quite shocked, moving into that world, at suddenly having all of these security processes enforced on me. But it was the right thing to do, because that was the distribution point, you know, for software from all of these volunteers. Oh, so the Debian community already had a lot of these processes? Yeah. I mean, as much as I hate, particularly towards the end of the day, to proclaim the death of the operating system distribution, a lot of these problems, I think, have been solved in smaller communities before. We're just now dealing with them happening faster and at a bigger scale. And in tech, I feel like we love nothing more than to ignore the discoveries of the past. Yeah.

So what do you think are the biggest challenges in software security? Or is it, like we've been talking about, that there are just so many challenges and it's all of them together? If you had to pick a top one or two, what would they be? Well, that's tough. This goes back to that earlier discussion about culture versus practices. There's this vast amount of latent risk out there, and we've just got to chip away at it. We're pushing in every direction at once, and anything that gives, we push harder on, because we're getting some progress out of it, we're retiring some risk. Building up those layers of security. Building it up and reducing the net risk for everybody, which is the goal. There's that problem that open source is basically a commons, right? It's a kind of resource that you can't exclude people from using, but where, if lots of people use it, that puts pressure on the maintainers; it's rivalrous, as economists call it. And commons are difficult to govern, difficult to manage, because everybody's an individual with different incentives to be selfish, and the difficulty is finding well-positioned parties to get involved. So, to their credit, I know we've bashed big companies, but to their credit, a lot of them are coming to the table, or trying, through the Open Source Security Foundation, the OpenSSF, which I participate in. You've got your Googles and your Microsofts and your Amazons and a whole bunch of companies participating, contributing money, contributing folks' time, trying to attack this on all fronts.
Yeah, the trick is going to be, to your point, whether it just seems like a loud crescendo to open source maintainers: here's a massive list of things we can offer you, where do I start? Like the ten-point mobilization plan that's part of the OpenSSF's push to secure open source. And do you feel like open source is one of the most important things to secure, when we're talking about software in general? Oh yeah. Depending who you ask, it's like asking, is the sky blue? Only on sunny days. Yeah, it's everywhere now. It's in pacemakers, it's in nuclear power plants. There isn't a single critical or high-consequence piece of infrastructure, whether social or technical, that doesn't rely on it. It's the soft underbelly of the whole socio-economic system at the moment.

And what do you think about regulation? I know the US federal government is bringing in some new rules about SBOMs and even vulnerabilities. Do you see that as a way to improve the security of a product? Broadly, yes. This is a good example of that argument from the Tragedy of the Digital Commons article: the cost should be pushed onto the large companies that currently free-ride and have the resources not to free-ride. And the US government is in a great position, because it's the single largest purchaser of software in the world, to push those standards down and make them common. Once they become common, other consumers of software from those companies will say, well, you already have that capability, I demand it also, and that creates a sort of flywheel effect. But in terms of regulating open source software itself, regulating your regular maintainer at home on a weekend? Dear God, no. That would kill the golden goose, but not before the goose, you know, defecated all over the bed. That is a rather hairy thread, to mix metaphors.

And we don't value maintenance enough in society. I think this is part of the problem, and it's why I think right-to-repair movements and all of these things are so important. We have a culture in software development, which I think reflects society in general at the moment, in which it is considered better to launch new things than to iterate on existing things. And the job of maintainers is to iterate on the existing things. The healthiest software engineering environments I've ever worked in have been the ones where really senior folks are lauded for looking after systems, making small incremental changes to them over time, keeping them going in the right direction, and where that's recognized as valuable. I think this is the whole problem: we don't value maintainers anywhere near enough, and so they feel like they're at the far end of the supply chain, when we should be going, no, you're a critical part of this whole process. If I could wave a wand, it would be for us to value the act of maintenance more.
So that big companies want to participate in it, so that they reach out to maintainers of projects with respect. I think Google does a reasonably good job of this. We've had Google reach out: there is a security vulnerability in something you ship, we've seen some of our users affected by it. They basically wield a big stick and go, if you don't do something about this in 30 days, or 60 days, or whatever, we'll shout it from the rooftops. And they can, because they're Google. I think there are ways like that to encourage people to do the right thing, but fundamentally, we've got to value the act and process of maintenance more, everywhere.

And I wonder if government funding could help. Obviously the ten-point mobilization plan should improve security, and that's using money, and I know there was talk about putting funding towards rolling out 2FA, Jacques, on public repositories. Do you think that maintainers could get paid for improving the security of products, obviously selected products that are used in critical systems? Do you think that's a solution, or is it just not going to be sustainable in the long run? I'm concerned that it goes back to that problem of metrics: the incentive is just to do what the funder says, and that will attract the wrong behavior. You know, like the story of the British trying to get rid of cobras in India: they paid people to bring in cobra heads, and people just started breeding cobras. There's something similar with gun buybacks, where people are 3D-printing guns en masse and bringing in boxes of 3D-printed guns and making money that way. I'm concerned about that. I think where government has a role, in terms of funding at least, would be in what you might think of as sustainment activities. Things like subsidizing or fully funding training, making it freely available to as many people as possible, encouraging colleges and universities to pick it up as part of their curricula. Things like shared resources for software repositories, shared resources for open source projects that need a security review. A lot of the things the OpenSSF is already doing could definitely be scaled up with government funding.

What about punitive approaches? This is something I'm always curious about, because it feels like most of the huge companies that suffered data breaches were honestly pretty derelict. Oh yeah, I think they just haven't been punished, either by markets or by governments. And why would you invest in security when it doesn't actually matter? I consider myself a centrist, I used to be a libertarian, and I'm about to sound like a raving loony lefty, but I think there are far too many kinds of corporate malfeasance in which the punishment is a fine, whereas it should be criminal time for the executives who authorized some activity, or who failed to prevent it. Because that's the only thing that actually gets their attention. If you get fined, it doesn't fall on the people who made the decision; it falls on the shareholders. Exactly. Or on the people who didn't have control. Optus is a good example. Yes. A company that literally litigated and lobbied, pressured the government, to make sure companies weren't accountable in these sorts of situations.
And now all of these millions of people have had their data spread everywhere. Including me; my passport number got stolen. Actually, I was listening to the Security Weekly podcast, and at the end of it they talked about insurance as a way to drive companies to be better at security, and how it can be more effective than compliance: when you have a data breach and you realize you're not insured, and you have to pay a lot of money, maybe to the people suing you, or even just to get back to where you were if you've lost data, that is quite an effective incentive. Why not both? We have punishments for people who don't do fire safety in their factory, right? Not only do you mess up your insurance, and not only can you face fines, but the people who are responsible are criminally liable; they can go to jail for neglecting fire safety. The consequences of data breaches are dire, and the consequences of lackadaisical security are only going to grow worse as time goes on and everything somehow becomes programmable. This stuff really matters, and there's this argument that, oh, but the corporate veil is sacrosanct. The corporate veil is there to deal with questions of who owes debt to whom, who is liable for how much. It didn't give you a magical get-out-of-jail card; that was never the idea. So, as I said, I sound like a raving loony on this point, because I'm so frustrated by companies that walk away with a fine and the executives are still there. Right? They don't get sacked. They just go, oh well, that's the cost of doing business, and that to me is psychotic. Yeah. There's not enough accountability at the corporate level. Absolutely, we're agreed on that one. We'll rise up and smash the system together. We're going to do it. Right.

On that note, I think we're going to announce our prizes. Hillary, do you want to? Now that we've done the rally. Yeah, we've gotten to where we were meant to go. So the prizes are announced there in the chat. We have Jojo Dutta, who gets a free lunch; he's joining on the streaming platform. We have Arthur Courage, who also gets a free lunch. Jinsu, prize pack. Arjun, prize pack. I'm so sorry, I'm butchering these poor people's names. Caitlin, oh God, Caitlin, I'm so sorry, you get a prize pack. Hunter Cune, prize pack. And Hillary is going to be reaching out to everybody over email with the details to send your prizes on to you.

I hope everybody enjoyed our talk today. I loved it. I'm so sorry about my speaker issues. It happens. You guys were such pros, you continued the conversation. Yeah, the other way to put it is that we talk too much. And thank you for being such wonderful guests, Jacques and Nigel. It was really nice to talk to you. So it's bye from our guests; you guys can say bye. Bye, thanks for having us. Thanks, everyone. And it's bye from me. Thanks, everybody, for joining. We'll see you at the next monthly. Thank you. Bye.