Hello! All right, so before we start I'd like to remind you that there are some lightning talk slots open tomorrow. You can sign up at the board upstairs; there's a whiteboard where you can just write your name and the topic, and perhaps you'll get chosen for a lightning talk tomorrow. OK, and right now there's Paul Moore and the Core Infrastructure Initiative badge program. Enjoy.

Hello. Core Infrastructure Initiative. So, yes, first: who knows what the Core Infrastructure Initiative is? Oh, OK, all right, well, this is good then, you'll get to learn a little bit about it. For those of you that already know about it, I'm sorry; you can check your email or, I don't know, watch cat videos.

First of all, my name is Paul Moore. That's me on Twitter, and those are my email addresses; I check both, I respond to both. Not a whole lot of this is relevant to what we're talking about today, but I maintain the SELinux, audit, and labeled networking subsystems in the kernel, and I also created and maintain the libseccomp project. We're going to be talking a little bit about libseccomp today, but that's really only so we can talk about the badge program. I guess if you have any seccomp questions we can discuss them, but that's not what I'm here to talk about. What I'm here to talk about is the badge program. It sounds like we had two or three hands come up for the Core Infrastructure Initiative; of those two or three hands, how many of you actually know about the best practices badge program? Oh, OK, 100%, that's great, so you really can check your email then. For everybody else in the room, what we're going to be talking about is basically improving the quality and the security of open-source projects. Yeah, that's it.

So the Core Infrastructure Initiative started back in response to Heartbleed. Does everybody remember Heartbleed? It's funny, while putting these slides together I thought Heartbleed was much more recent than it was; I didn't realize it was back in 2014. I was thinking it was like last year, but it's almost three years now. Heartbleed was kind of the tipping point: that was when the Linux Foundation realized that we have a real problem. We have a lot of open-source projects that are critical pieces of the infrastructure worldwide, and some of these open-source projects were at very high risk. With OpenSSL, maybe a lot of developers knew that it was not a large team and that they had lots of issues, but this surprised most of the public. People realized that all of a sudden all their infrastructure, all their online banking, all their shopping on Amazon, really came down to a small handful of people who worked on this part time, and that wasn't good.

So anyway, the Core Infrastructure Initiative was designed to find a way to help support these critical projects that had a high amount of risk associated with them. They were trying to figure out the best way to help these projects, so we can reduce some of that risk and provide better assurance that our critical infrastructure isn't going to fall apart; basically, try to prevent the next Heartbleed. That's the website up there, coreinfrastructure.org. I'm going to show it to you at the end too, but if you want to find more information about it, that's a good place to start.

So, like I said, they were trying to figure out how to improve the quality and how to improve the security, and one of the first things they did was something called the census project. I wouldn't say this was new; other companies had done similar things in the past. But what was unique about the census project, at least as far as I'm aware, is that this was the first time it was done in such an open and transparent manner. The idea behind the census project was to look at a lot of the common distributions that people run, then look
at the open-source projects bundled with those distributions, and then measure each project against a set of metrics to try to judge the quality of that project and how much risk was associated with it. All of this is open source and available online; they wrote a very interesting report about it, and it's all on GitHub, so you can go and look at it. If you have a few minutes I would actually encourage you to do so if you're responsible for a project, because you can look at what they used to measure risk in your project. It might give you a good clue: are you at risk, and what things can you do better to help reduce some of that risk? We're going to talk a little bit about that today with the best practices, but we're not going to cover everything, so it's worthwhile to go take a look.

After they did the census project, they had this nice list of projects that were at risk, and they were trying to figure out: OK, how do we fix this? There was no one thing they could do; every project is different, every team is different. So they looked at it, and there were a number of things. Some of it was money: some projects just need a little bit of extra funding so the developers have more time to spend on them. Sometimes it was travel assistance: the team was spread out across the globe and they really just wanted to get together for a week so they could brainstorm and come up with solutions to some of the problems they had. But unfortunately that doesn't scale. Nobody has an infinite supply of money; we're not Scrooge McDuck with our big money rooms. You have to find a way to help these projects without handing them bags of cash.

So they took a look at what was available, they took a look at what some of the projects were facing, and they said: OK, maybe we can come up with some programs to help educate developers, come up with some better testing tools, some tools that will help promote secure development practices, and make those available to everyone, so that the projects at risk that had the time to spend improving their quality would have an easier time doing it. That's kind of what we're going to be talking about today with the best practices program; the goal of it really falls into that developer-education idea.

It's a voluntary program. No one's going to force you to go out and participate in the badge program, but if you're the maintainer of an open-source project, or a main developer of one, I would really encourage you to go check it out. We're going to talk about this a little bit more, but I did this with libseccomp about a week ago and it didn't take very long, about an hour, and it's largely just clicking some buttons and filling out some forms. If you're hosted on GitHub, a lot of the stuff is going to be automatic; it'll pre-populate it for you. The nice part is that GitHub, just due to its workflow, the issue tracking, the pull requests, promotes what are considered to be a lot of good best practices, so if you're following a lot of the GitHub procedures you'll be able to tick off a lot of those boxes automatically. So I would suggest you give it a shot: an hour of your day is not a whole lot to ask, and you might learn some things about your project, some areas where you could improve. I know I did; I completely forgot that I should be doing a few things, and they weren't big things, they were things I could easily resolve. So anyway, check it out. That's the URL for how to get there; I'll show it again at the end of the presentation, but feel free to take a picture.

Also, what's kind of neat about this is that the badge program itself is an open-source project, so if you have some
thoughts and ideas about how to help improve open-source development, I know they would love to welcome your input. And open-source development, as we all know, is pretty unique, especially if you've ever worked in a proprietary closed-source shop: a lot of the things around secure development, like architectural reviews, we don't necessarily have the same way in open source. There's a mailing list, and they have GitHub issues for comments, so once again I'm sure they would love to have you if you're interested in contributing.

All right, so before we go off too far, does anybody have any questions? OK. If you have questions during the presentation, just wave your hand. I think it's better to answer them while we go, as opposed to waiting until the end when you've forgotten the question and I'm probably not going to answer it well anyway.

All right, so for the rest of the presentation we're going to go through kind of a case study. I think that's probably the easiest way to explain some of the highlights of the best practices, and in this particular case study we're going to use libseccomp as an example; I'm not above shameless self-promotion.

Before we do that, just a quick slide to explain what libseccomp is. A couple of years ago, I think three, maybe four years now, the Linux kernel added support for a more flexible syscall filtering mechanism. We call that seccomp BPF, or seccomp mode 2; seccomp goes by a bunch of names, but people seem to be consolidating on the term "seccomp BPF", so if you hear that mentioned, that's what people are talking about. The basic idea behind seccomp and syscall filtering is that the application, when it starts up, constructs a filter that basically defines what syscalls it wants to be allowed to call. Sometimes it's just the syscall: "hey, I just want to call open and close and read and write." Sometimes you can be as specific as saying: "I want to be able to open a file, and then just read on this particular file descriptor, just write on this particular file descriptor." So you can filter on the syscall arguments as well as the syscalls themselves. Anyway, the application generates this filter and loads it into the kernel, and from that point on, whenever that application makes a system call, the kernel runs the filter against the system call and decides: should it be allowed, should it be disallowed, should I kill the program, should I allow it to be traced? There are a lot of different actions that can be taken.

The motivation for all of this was applications and programs that have to deal with high-risk environments. Say you have a network-facing server that's dealing with all sorts of garbage that people are throwing at it over the network. If that application is compromised in any way, the attack surface it has, the ways it can attack the kernel and potentially exploit the kernel, is much more limited if you have a very restrictive seccomp filter; the odds of exploiting a kernel vulnerability go down. So this is very nice. Containers use this quite a bit, systemd uses this, web browsers use this, a lot of network daemons use this. It's only been out for a few years, but it's been quite popular.

And libseccomp fits into all this by trying to make this easier for developers to use. We talked about this filter that the application constructs and then pushes into the kernel; well, unfortunately, the filter is in an assembly language that we call BPF, and to make things worse, it's architecture- and ABI-specific. When this was first coming out, I looked at it and realized that application developers probably aren't going to want to program in this weird, esoteric assembly language just so they can have this little bit of extra functionality. For the security people in the room, you might know: security is always a hard sell. If you go to an application developer and say "hey, we have this great new feature, but you just have to write this really weird, really long assembly language program, and by the way, it's going to be different for each architecture it runs on, but you'll love it"... yeah, it didn't go over real well. So the idea behind libseccomp is that we abstract away the fact that it's assembly language, we abstract away all the architecture and ABI specifics, and we provide a nice programmatic interface: you just call a few functions, and it generates the filter, optimizes the filter, loads it into the kernel, and off you go. Once again, it's proved pretty popular. I actually gave a presentation on it here at DevConf a couple of years back, in 2014, so if you're interested you can look that up. I'm not sure if DevConf archives previous presentations; if not, try my website, or you can bug me or send me email and I can point you to it.

All right, so now back to the Core Infrastructure Initiative. The best practices badge has six focus areas. The first one is just what I would consider open source basics: do you have a publicly available website, do you release your source code under an open-source license, that sort of thing. Then there's change control: what are you using for your source code repository, and so on. Then there's reporting: bug reporting, vulnerability reporting, stuff like that. There's project quality, which I think of more as build quality: reproducible builds, how hard it is to get your program up and running, that sort of thing. Then project security, and finally analysis: think static analysis of your source code, and runtime analysis with things like Valgrind, if you're familiar with that.
OK, so the open source basics, which I just touched on. The badge program requires you to have a FLOSS license, which really shouldn't be a problem for most projects, and it suggests you use an OSI-approved license; once again, my guess is that's probably not a major problem for most of the projects here. You need a public website. And probably the most important thing, and you'll see this come up a lot while I'm talking today, is documentation. If there's one thing I can convince you to do today, it's to document your project. I'm talking about documenting the project so that users know how to get your project, install it, and use it, and also making sure you have documentation for all your interfaces, things like that. It's going to be project-specific, but I think you get the idea.

The reason for this is that good documentation really is the starting point of any sort of secure development or secure use of your project. If people don't know how to use your project properly, they might misconfigure it, and I think we all know there are plenty of cases throughout history of vulnerabilities that came down to people configuring the applications wrong. So having good documentation out there, so people understand how your project works, how the program runs, how to use it properly, and if it's a library, what functions to call, maybe "don't use this combination of flags", that sort of thing: documentation is really important in that way. It's also nice because once you've documented how your program should be used, it's a good guide when it comes time to look at testing, if you haven't already. Now you have a starting point: "OK, this is how it's supposed to work, so I should write some tests to make sure it actually does what it's supposed to do, and all these things I warn people about, I can test to make sure those are handled appropriately." People don't think about documentation when they think about security, but I would argue it's probably the most important thing; if you haven't really started looking at the security of your project, start there. The nice part is your users will love it, because they have some documentation they can read, and people who want to help out and join your project will love it too, because it gives them an entry point into your project so they can understand how it works. It's a big benefit.

Probably the last thing on this slide is a public discussion mechanism: you have to have at least one public collaboration mechanism. Once again, that's a cornerstone of open-source development, so I'm guessing everyone here can meet that: mailing lists, GitHub issue trackers, a web forum, you name it.

All right, so this is pretty much the same slide I just showed, but this is the case study side of things. Libseccomp is under an LGPL license, so we pass. We're on GitHub, and I've tried to highlight the things you get for free by being on GitHub. That gives you a public website, and GitHub displays your README file, so if you've got a README that already documents a lot of these things, guess what: you can tick off that your public website has all these helpful instructions for people. Once again, I got this one for free; I got lucky. Project documentation: man pages. I personally love man pages. I know they've kind of fallen out of favor with a lot of projects, but that's still where I go, so I made sure libseccomp always had decent man pages with examples, and I was able to tick off that requirement.

And I kind of feel bad being the American here at a Czech technical conference telling you that you have to write everything in English. All I know how to say in Czech is "hello" and "a beer, please"; and "thank you", I can say thank you. Pivo, prosím. A quick note for those of you traveling: generally I've found that in
most languages, if you can figure out how to say "hello", "beer, please", and "thank you", you'd be amazed; that solves a lot of problems. If you upset someone, buy them a beer; that generally smooths things over. So, yeah, Paul's travel tips.

Anyway, I apologize about the English thing. It's not my requirement, it's the Core Infrastructure Initiative's, and I think what they're trying to do here is this: for better or worse, English is probably one of the more common languages as far as open-source development goes, and the idea is to make this as accessible as they can to the widest group of people, so they want it in English. That is a hard requirement for them. But here's the thing: even if you're not able to meet all of their requirements 100%, going through this best practices program just as an educational experience for you and your project is still a good thing. If you can't make that last 1% or so because of the language requirement, I'm not going to fault you for it.

OK, moving on. For the public discussion mechanisms, we have a mailing list; we use Google Groups. Mailing lists seem to be falling out of favor. I was surprised how difficult it is to find a public mailing list server unless you happen to know somebody who runs a Majordomo instance, but if you go to Google you can set up a Google Groups list. It has a normal email interface, and you don't actually have to have a Google account to use it; most people think you do, but you don't, you can sign up through the normal email ways. Something to keep in mind. Also, if you're on GitHub already, you have this mechanism through the issue tracker; if you're using it, that's considered a collaboration or discussion mechanism. So that's pretty much open source basics.

Then there's change control, and I think this should be another easy one for most people. The
first requirement is a public source repository; for open-source projects that should be a given for everybody here. And you don't have to use git. I think the majority, if not all, of the projects in this room probably use git, but you're not required to. The important thing is that whatever you're using for your source repository, it has to track the individual changes. Basically, you can't just have one huge mega-patch when you go from one release to the next: you have to have that split up into individual change sets, and each of those change sets has to be attributable to the author that made the change and the date they made it. If you're using git, git takes care of this for you; Subversion I believe does too, though it's been years since I've used it; CVS, I'm not sure. But anyway: use git if you're not already. Trust me, your life will be much better.

Versioning: releases have to have unique version numbers. I think it's obvious to most everyone in the room why this is a good idea; if you don't have unique version numbers, how do you know what software you're running? And the semantic versioning concept is the preferred way of doing versioning. If you're not familiar with that, I'll be honest, I didn't actually know what semantic versioning was until I went through this, and once I learned what it was, it was "oh yeah, I'm already doing this." Semantic versioning is the "major release dot minor release dot patch level" scheme. The concept is that when you make a change that breaks things, so that other projects that depend on you are going to need to update their source code, a non-backward-compatible change, you have to increase the major version number. If you're doing an update just because you want to add new functionality, but you're doing it in a backward-compatible way, you increase the minor version number. And if you're doing a small patch that doesn't really change the interface or add any new functionality, but fixes that annoying bug you found, you bump the patch-level version number. I think most projects already do this, whether or not they know it's called semantic versioning.

You also need to have release notes when you do releases. Most projects have a change log already, so it's pretty easy to get release notes out of that. If push comes to shove and you don't have a change log, hopefully you're using git or something else and you're writing nice, handy, very useful, descriptive commit descriptions for your git commits. I know we all do that; we don't just write one-liners like "fix that bug." If you have those, it's easy enough to do a git log and generate some release notes pretty quickly. It's a very reasonable requirement. And they especially want you to identify any vulnerabilities: any CVEs you may have fixed, make sure you call those out in the release notes. I think you'd want to do that anyway for the sake of your users; users don't like it when there's a CVE against your project and they have no idea whether you've fixed it or not.

Once again, for libseccomp this was pretty easy. The public source repository we get for free just for being on GitHub. On versioning, like I said, I was doing semantic versioning without knowing I was doing semantic versioning, and I suspect most other projects are too. I take it a little step further with libseccomp: I create new branches for each major.minor release, which just makes it a little easier for me when I'm maintaining things. But however you want to run your own project is fine; the badge doesn't actually mandate the branching. Release notes we talked about: libseccomp has a change log, so it's easy enough for me.
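Those semantic versioning rules are mechanical enough to write down. Here is a toy sketch of the bump logic; the struct and the bump() helper are purely illustrative, not from any real project:

```c
#include <string.h>

/* Semantic versioning in a nutshell: MAJOR.MINOR.PATCH.
 * Bump MAJOR for incompatible API changes, MINOR for
 * backward-compatible additions, PATCH for bug fixes. */
struct semver { int major, minor, patch; };

static void bump(struct semver *v, const char *kind)
{
    if (strcmp(kind, "major") == 0) {
        /* breaking change: everything below the major resets */
        v->major++; v->minor = 0; v->patch = 0;
    } else if (strcmp(kind, "minor") == 0) {
        /* new, backward-compatible functionality */
        v->minor++; v->patch = 0;
    } else {
        /* bug fix, no interface change */
        v->patch++;
    }
}
```

So a bug fix takes 2.4.1 to 2.4.2, a new compatible feature takes it to 2.5.0, and a breaking change takes it to 3.0.0.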
The change log entries that apply to a new release, I just take, cut, and paste into the GitHub release field; you have a little release-notes box there, so I put them in there and it looks nice. I don't know if users like it or not, but it's kind of handy for me: when I have to backport things I can look and see what the big changes were. And once again, you kind of get that for free with GitHub.

Reporting. Sorry, I blanked for a minute; I was looking at the clock and forgot what time we actually started. Bug reporting: you have to have a mechanism for users to report bugs, and you need a public archive of those bug reports as well as your responses. That can be a variety of things. It can be a mailing list: as long as that mailing list is archived, people can report via the mailing list and you discuss it there, and that's a public record. The GitHub issue tracker is also a public archive: people can file issues, you can respond, that's great. You also have to have the majority of your reports acknowledged, and that's important: you don't have to have the majority of reports fixed, you just have to have them acknowledged. If you look at the best practices, I think they actually give a window; the majority of the problem reports within, I think, the last 12 or 14 months need to be acknowledged. I read that to mean, basically, that when someone submits a problem report to your project, you look at it and say "OK, yes, this is an actual legitimate problem," or maybe it's something like they've misconfigured it, or they're using the API wrong. I'm sure we've all seen problems come in that aren't really problems and are easily resolved. I would encourage you, when you do get problem reports, to take them seriously. These are users, they're using your project, and they want to use your project; that's why they're giving you a problem report. So help them out. They like your project; help them continue liking your project.

It's very similar with vulnerability reporting, though they take vulnerability reporting a bit more seriously than normal bug reporting, and I think we can all understand why that's a good idea. They want to see responses within 14 days, and I think that's very reasonable, especially as we see how security vulnerabilities have changed within the past few years: now that they're a marketable item, with websites and logos and whatnot, having a timely response to these things is pretty important.

And unfortunately, with that I hit my first fail. The bug reporting was easy enough for me: there was the mailing list, there was GitHub, and thankfully libseccomp isn't horribly buggy yet, the benefit of a young project, I suppose. So it was easy enough to look back and say that yes, we've done a pretty good job acknowledging bugs and in most cases fixing them; we don't have any really super long-standing bugs. Unfortunately, for vulnerability reporting, I don't have a section in the README file, or anywhere in the documentation, that specifically explains how users are supposed to report vulnerabilities. Now, it's easily solved: all I have to do is simply say that you can report vulnerabilities via the issue tracker or the mailing list. And if you go to the best practices website, they even talk about private disclosures: if someone has found an issue and they're being responsible about disclosing it, there should be a mechanism so they can do that. My understanding from reading their requirements is that it can be as simple as providing your email address, so that reporters don't have to send it to the project's mailing list and basically advertise to the whole world that there's this nasty security vulnerability in your project. So anyway, the solution for me is basically to add some additional documentation to my README file, and
that would be enough to meet this requirement. All right, just to check that nobody's fallen asleep yet: do you have any questions? OK.

So, the quality side. As I was trying to say earlier, this is more about the quality of the project and ensuring that people can get your project and build it without too much difficulty. As far as the build system goes, the requirements are somewhat vague, because there can be a wide variety of build systems, and depending on your project's language that can change things dramatically. Also, if you have a project in an interpreted language, you might not have a build system; you might not need one. But the requirement is that you at least have a working build system that uses common free and open-source tools. Basically, you don't want an open-source project that requires your users to go out and get a proprietary piece of software to build your program; that's a non-starter for a lot of users.

The next thing is: take build warnings seriously. If you have a compiled project, one of the easiest things to do, if it's a C project, is turn on all the warnings: -Wall. You'll probably get some warnings if you haven't used that before; track those down. GCC keeps getting smarter and smarter with each release, so if it makes a recommendation, look at it. It's probably not a false positive. It might be, it's possible, but GCC is pretty good about that. So look into those warnings: one of them might be a vulnerability in disguise that you haven't caught yet. Also, if your language supports something like a safe mode (I think PHP has a safe mode; I can't remember if Python does), really consider making sure your project works in that safe mode. And a lot of languages have linters, or lint-like tools, that will help clean up the code.
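As an illustration of the kind of thing -Wall flags, here is the classic assignment-instead-of-comparison bug. This is a made-up example, not from any real project; with "gcc -Wall", the commented-out buggy version draws a warning along the lines of "suggest parentheses around assignment used as truth value":

```c
/* A hypothetical error classifier, illustrating a -Wall catch. */
static int classify(int err)
{
    /* BUG (commented out): '=' instead of '==' assigns 1 to err and
     * is always true, so every input would be treated as an error:
     *
     *     if (err = 1)
     *         return -1;
     */
    if (err == 1)   /* the intended comparison */
        return -1;
    return 0;
}
```

The buggy version compiles and "works", which is exactly why warnings like this are worth treating as errors during development.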
Try to make use of those if you can.

And testing. Right, I think everybody knows we need to do more testing. I don't think I've ever talked to anyone who sat back and said "yeah, I think we've got 100% code coverage with all our tests, it's great, it's awesome." So anyway: write tests. If documentation was my first ask today, writing tests is the second. Your tests should also be under a free, open-source license. They don't have to be bundled with your project; you can have the tests in a separate repository, that's fine. I'd just recommend that if you do that, you mention somewhere in your main project's documentation where to go find the tests. Tests should provide full code coverage. I don't know of anybody who does, but it's good to have goals, right? Good dev goals. And they really prefer that you use continuous integration methods; I think the idea behind this is that you're running the tests every time you make a change to the codebase. I think that's all pretty self-explanatory.

And here's another bad slide for me. I was real happy when I was doing this on their little web app: "hey, this is easy, this is easy." Then I hit the vulnerability reporting: "oh." Then I got to the testing: "oh." It was kind of starting to go downhill. Anyway, for libseccomp, the build system was easy enough: we use autotools, we use make, we use GCC, and we have some Python bindings so we use Python, but these are all commonly available on pretty much every Linux distribution I've seen. Build warnings: we enable -Wall, and I'm happy to say that on a modern Linux system there are no warnings that pop up. Maybe if you're using an old version of GCC; I don't know, you can't test on everything, right? And there are others. This is one of those areas, especially if you're dealing with C and GCC, where there are a lot of compiler flags you can specify, and they tend to change between GCC releases. -Wall is supported on pretty much every GCC release, but if you're using some of the newer GCCs there are even some additional compiler options you can use. They don't really require those, like I said, because of the versioning, but at least for development purposes, if you've got a modern version of GCC, I'd recommend you just Google around for "GCC hardening" and whatnot: there are plenty of pages out there that walk you through some pretty helpful compiler flags you can add. Even if you don't enable them by default, at least do it on your development machine so you can get a little better idea of some problems that might be there. Let GCC help you out as much as it can.

OK, and so the automated test suite. I've been on a crusade the past few years for more testing, for a lot of reasons, and so I was really kind of proud that we had a test suite for libseccomp from the very early stages. We've been pretty well disciplined: when we add new functionality, we make sure we add a test for it, and all the stuff everybody always says you should do. However, I kind of forgot one thing, and this is important: I've never actually done any code coverage analysis. So here I was, feeling pretty good about libseccomp's testing. We've got a test suite, it's bundled, you run "make check" and it runs, it's packaged up in Fedora, Fedora even runs "make check" as part of the build process for the RPM. "Hey, good job." But when it comes down to it, I really don't know. I think I feel good about these tests, but I can't tell you that we're doing a good job testing, because we've never actually looked at the code coverage. So: write your tests, and tests are better than no tests, but once you've got a decent number of tests written, I would really encourage you to look at the code coverage. There are a lot of tools for that; gcov is the one that I know of. So there it was: I'd failed this one too.
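If you want to try coverage measurement yourself, gcov is just a couple of extra compiler flags. Here is a hypothetical mini-example (the sign() function and the commands in the comment are illustrative only) showing how an untested branch surfaces in the report:

```c
/* Coverage sketch: compile with instrumentation, run the tests,
 * then ask gcov which lines were actually exercised:
 *
 *   gcc --coverage -O0 -o tests tests.c
 *   ./tests
 *   gcov tests.c        # writes tests.c.gcov with per-line counts
 *
 * If your tests never pass a negative value to sign(), the
 * "return -1" line shows up as unexecuted in the .gcov report,
 * even though "make check" passes with flying colors. */
static int sign(int x)
{
    if (x < 0)
        return -1;          /* uncovered if tests only use x >= 0 */
    return (x > 0) ? 1 : 0;
}
```

That gap between "all tests pass" and "all code is exercised" is exactly what a coverage report makes visible.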
bummed about it, and I googled around to see what else is out there. There are actually some really kind of cool tools that hook into GitHub and Travis CI and whatnot that'll do some really nice little pretty graphs and statistics, so I would encourage you to check those out. But you do have to have a fairly comprehensive test suite first; so if you don't have tests, write tests, and if you do have tests, then look at the code coverage.

The other thing: we don't do any continuous integration testing. Now, there is a policy that we enforce that says, whenever you're submitting code, I'm not going to accept your patch if the test suite fails, so we sort of have continuous integration testing by convention. But it's not the continuous integration you see from a lot of cool projects, where you do a git commit and that automatically spawns off a build and runs the tests. We don't actually have that yet, but it is on the to-do list and it is something I need to look into, and it shouldn't be too difficult. We did look at it a couple of years ago, but at the time we had a problem because a lot of the kernels used in some of the continuous integration systems, and a lot of the userspaces when we looked at it, didn't have the necessary support for libseccomp. But that was several years ago; they should now, so I really have no excuse.

All right, and security. This was kind of an interesting one. There are some obvious things here; kind of working backwards on this slide, there are some obvious things about defending against man-in-the-middle attacks: make sure that when you go to the project's website you're really talking to the project's website, and make sure that when you're downloading the release source code from that project you're actually getting it from that project, and not from some man-in-the-middle website that's going to send you the source code with all sorts of good Trojans and whatnot. The good news is it's relatively easy enough if you're going to a well-known website that supports HTTPS and SSH, so that's easy for you to do.

Cryptography: don't invent your own protocols, don't invent your own ciphers unless you're a crypto expert. Any crypto experts? Okay, we have one. If you are a crypto expert, make sure somebody else reviews your work, all right? Good, okay. The implementations have to be open source for the crypto libraries you use, because we don't want to trust some guy who just says, yeah, use my crypto library, it's great. And there are also requirements on key lengths and storing secrets and all that stuff; more than what we want to go into now, but it's all up on their website.

The last thing, secure development: it requires that at least one main developer have some background in secure development practices. This is sort of a vague concept, and I wish I could give you a more concrete example; I wish I could say, okay, go read this or follow these instructions. It's not always that simple. We're getting better as an industry about this, but it's still not to the point where I would say we have a definite, set-in-stone set of requirements. The best practices website, if you click on the little question mark bubble (if we have time I'll show you the website, but I don't think we're going to have time), has some recommendations in there for you to follow, so I would start there. There are plenty of resources available online if you Google around; a lot of smart people have written a lot of good things about it, so go from there. That's, I guess, my best recommendation at the moment. For libseccomp, things started looking up after the last couple of failures: in my previous life as a proprietary software developer, my employer was actually pretty good about secure development practices and had a bunch of programs for us, so I kind of lucked out in that I was able to check this box, but it might be
difficult for a lot of projects. Please do take the time; your project will thank you for it, your users will thank you for it. Cryptography, that was easy: there's no cryptography in libseccomp, so not applicable. Man-in-the-middle attacks: we use GitHub, so the repositories are accessible over SSH and HTTPS, and all the project materials are over HTTPS, so if you're on GitHub you're golden.

All right, analysis, and we're almost done here. Basically the requirements are that all your major releases have gone through both a static code analysis and some form of dynamic runtime analysis, and that you've resolved all the problems found; I think that's obvious. So, the static analysis: for libseccomp this was easy, I use Coverity. Now, I know there are a lot of people who don't like Coverity because it's not an open-source tool, and the badge program is very careful about this: they said that basically they're not going to require you to use closed-source tools, but if whatever language you're using has an open-source static analysis checker, they want you to use it. I'm mixed on this. I like open source a lot; it's benefited me tremendously personally, it's benefited me tremendously professionally, and so I like to encourage open source as much as possible. But when it comes to static code analysis, there are some open-source projects, and they're getting rapidly better, but I still think nobody does it better than Coverity. That's my opinion; others will probably have different opinions. And Coverity very kindly makes their tools available to open-source projects free of charge, so I understand if you have philosophical arguments against using Coverity, but if you don't have that philosophical argument, please use it. It will find problems in your code that you had no idea existed, and a lot of those problems could end up being vulnerabilities in the future, or perhaps already are and you haven't realized. So please, go use Coverity. It's very easy.
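For reference, the Coverity Scan submission flow looks roughly like the sketch below. This is hedged, not gospel: TOKEN, EMAIL, and the version string are placeholders, cov-build comes from the tool bundle you download after registering your project on scan.coverity.com, and their current instructions take precedence. Only the packaging step actually runs here; the capture and upload steps are commented out because they need the real tools and credentials.

```shell
# 1. Wrap your normal build so Coverity can capture it (requires the
#    Coverity Scan build tool; commented out so this sketch runs without it):
# cov-build --dir cov-int make

# 2. Package the capture directory. A dummy cov-int stands in here so
#    the archiving step itself is runnable:
mkdir -p cov-int
tar czf libseccomp-scan.tgz cov-int

# 3. Upload the archive for analysis (placeholders; needs real credentials):
# curl --form token="$TOKEN" --form email="$EMAIL" \
#      --form file=@libseccomp-scan.tgz --form version="2.x" \
#      "https://scan.coverity.com/builds?project=libseccomp"

tar tzf libseccomp-scan.tgz    # sanity check: the archive contains cov-int/
```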
you can just log in with your GitHub credentials, it'll take your GitHub info, and it runs quick; I mean, for the libseccomp analysis, by the time I hit submit and go check my email it's usually done, within like five minutes. The interface is pretty easy to use, there's a lot of information on how to use it, it's great.

Then there's dynamic analysis. Does anybody here know about Valgrind? Hey, great, okay, good. So most of you know what Valgrind does; for those of you that don't, it's a great little runtime checker. You basically type valgrind and then your program, and then you do whatever you want with your program, and at the end, if you've done everything well, Valgrind prints out a little message on your console saying good job, no problems. But if it's the first time you've run your program through Valgrind, it's probably going to identify a few problems: out-of-bounds memory accesses, double frees, the usual things that you can't always find with static code analysis but that are going to come up when you actually start exercising your program. An easy way to integrate Valgrind is, if you have a test suite (which you should at this point), just run your test suite through Valgrind. That's what I do; with libseccomp, when we added Valgrind support it was simple. We run the tests, and then I choose to run them again under Valgrind, though I suppose you could do it all in one: we run the tests to make sure they pass, and then, if the tests pass, we go ahead and do the exact same thing with Valgrind. It's pretty simple, it works really well, it's great. And it's supported on almost all architectures; I think we had a problem on PA-RISC where it wasn't available, and I think the new open-source RISC-V doesn't have Valgrind yet, but x86_64, ARM, all of those have Valgrind, so use it.

All right, and with that we're pretty much out of time, so just real quick: the best practices badge program, that's the website you can go to. I'll leave it up so you can at least see
what you're looking at. If you're interested in libseccomp, that's where you can find it, and that's my information; I put the website down too, so if you go to my website you can find that libseccomp presentation from 2014 I was telling you about, if you can't find it on the DevConf website. So anyway, we've got, I think, a couple more minutes if anybody has any questions. Thank you.

So the question was: am I familiar with the SWAMP, the secure software marketplace? Help me out; I know there are some other people that have been doing some analysis and quality assessment. Okay, yeah, no, I haven't heard of that; I was thinking it was something else when you first described it. But yeah, that would be a great tool. Like I said, when it comes to testing and analysis, if you've got multiple tools that you're willing to run, all the better. I said I like Coverity, but I'm sure there are things Coverity isn't going to catch that other tools will. Yeah, so that would actually be kind of interesting; do you have the website? Okay, thank you, I'd appreciate it.

Okay, so for the sake of the people watching this: the statement was that Coverity has a limit for open-source projects. You were saying that today there is a limit somewhere, so you probably realistically can't hook it into your CI infrastructure, especially if you're a very active project. And so he was asking about the SWAMP; I want to get the term right. Okay, thank you; we have some interactive presentation here. It's the SWAMP, the Software Assurance Marketplace, and the website is https://www.mir-swamp.org. Should I give him his phone back? Thank you.

All right, I think we're almost at the point where I should let you guys go to your next presentation, and these nice people probably want to watch their next one, but maybe one more question? No? Okay, well, thank you guys, I appreciate your attention.