Yeah, so as he mentioned, my name is Huzaifa. I'm going to talk a little bit about the security of open source — or rather, what most people call the insecurity of open source. Before I start, a few words about myself. I work as a principal product security engineer at Red Hat. I'm part of various open source projects out there: I work closely with Mozilla, LibreOffice, Python, PHP, Samba, X.Org and a couple of other projects as well — probably some of them I don't even remember right now. I have been a Fedora contributor for the past eight years, doing various things, and I maintain a couple of security packages. I've done a lot of speaking at conferences in the last two or three years, and at most of those conferences I've tried to speak about open source and why open source is secure compared with its closed-source counterparts. So this is a really short talk, and I want to convince all of the attendees that open source is really secure. In the last two or three years — how many people have heard of Heartbleed? If you have heard of Heartbleed, or of Shellshock, or of other flaws like that, you'll know that these security flaws have led people and enterprises to think that open source is where most of the flaws are found, and that we should probably go back to closed source. I will try to convince you that open source is really the way to go. So why are we interested in security bugs? Why are security bugs so important? All software has bugs. Whatever software you have used — Windows, any open source software, your mobile phone or tablet, iOS — all of this software has bugs. You may have encountered this yourself: you try to do something in WhatsApp, or you try to open an image in a browser, and it crashes or something like that.
That's probably when you hit a bug in the software. The bigger the software, the more bugs you will find. A very good example is the LibreOffice project, which, last I checked, has about 3.5 million lines of code — a combination of C, Python, Perl, Java and whatnot. Mozilla is a large code base as well. The kernel is the biggest code base, I think — around 15 million lines of code, mainly C and assembly; you need to be a real nerd to understand what is going on in there. If you have ever looked at the OpenSSL code, you will probably need a PhD in mathematics, because it's a combination of weird C programming, assembly language, different optimizations for different compilers, and a lot of things which only the author really understands. So it's really complicated, and we use this complicated software everywhere. I think about 60% of the internet uses OpenSSL — not only open source projects, but a lot of closed-source software is compiled against OpenSSL, or embeds it; probably more than 60%. A lot of people use Firefox and Google Chrome. Google Chrome is a big beast as well: last time I checked, it embedded 35 libraries, libraries which are openly available. So when you open a web page, say google.com, and you want to see an image, Google Chrome uses libpng or libjpeg or libtiff to render that image in your browser. Each library you embed inside your software increases the risk of a bug — and the risk of a security bug. So we are using software with extremely large code bases everywhere, from the kernel to web browsers to our web servers. The larger the code base, the more likely a security bug. And some of these bugs have security implications.
To give you an example: your friend sends you a file and tells you it's the latest photograph he took in Singapore. You open it, and if the application then copies data from your hard disk and sends it to somebody on the internet, that is probably a security bug. You open the file assuming you'll see one thing, but the application behaves maliciously: it copies your data and sends it to some random hacker on the internet, or it tries to install software on your machine, or it tries to steal your online banking password. That's a security bug. Some security bugs are bigger than others. I just mentioned Shellshock and Heartbleed; those are big security bugs, and the reason they are big is that they affect a lot of users on the internet. I'm sure you've heard of Heartbleed — this is when most people started to notice these big issues in open source software. Heartbleed was a bug in OpenSSL. The bug had been in the code for around two years before it was actually found, and the person who found it found it incidentally — he was not actively looking for it. It is said to have affected around 60% of all web servers on the internet, which is a very big number. And some of these web servers are embedded: we have this track on IoT and such, where the web server is actually embedded in a device and there is really no way to update it — the only update you can do is probably the firmware, and nobody really cares about updating the device. So, 60% of all web servers. It led to a lot of widespread attacks on the internet; scripts were available which you could simply download and use to attack some web server.
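The root cause of Heartbleed is easy to sketch: the TLS heartbeat handler trusted the length field the client sent and echoed back that many bytes, without checking it against the payload actually received. Here is a minimal Python model of that idea — this is not the real OpenSSL code; the buffer contents and function names are purely illustrative:

```python
# Simplified model of the Heartbleed bug (CVE-2014-0160).
# "memory" stands in for the process heap that sat next to the
# heartbeat payload inside the real OpenSSL process.

def heartbeat_buggy(memory: bytes, payload: bytes, claimed_len: int) -> bytes:
    # Buggy: trusts the attacker-controlled length field and reads
    # past the end of the payload into adjacent memory.
    buf = payload + memory
    return buf[:claimed_len]

def heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
    # Fixed: reject heartbeats whose claimed length exceeds what was
    # actually sent — the shape of the check the OpenSSL patch added.
    if claimed_len > len(payload):
        raise ValueError("heartbeat length exceeds payload; dropping")
    return payload[:claimed_len]

secret_heap = b"username=admin;password=hunter2"
leak = heartbeat_buggy(secret_heap, b"ping", claimed_len=20)
print(leak)  # echoes "ping" plus 16 bytes of adjacent secret data
```

The fix is that one extra comparison: an attacker could repeat the buggy exchange thousands of times, harvesting a different slice of heap memory — private keys, session cookies, passwords — on each request.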
I read about a Canadian student who was about 16 years old; he used Heartbleed to log into his university's web server, and he could download the credentials of the students — it took him like 10 minutes. Then there is Shellshock. Shellshock is a flaw in Bash, the most common shell, which everybody uses. When we were working on Shellshock, we figured out a couple of things which were really shocking. The first was that a lot of set-top boxes — the devices you use to get cable TV, connected to your television — use Bash. That is very strange, because you would assume a set-top box only needs to download the signal, unscramble it, and send it to your TV so you can watch a movie. It turned out Bash was used on set-top boxes so that the cable operator could send commands to the box, for example to subscribe you to channels. NAS devices — network-attached storage — use Bash as well. We even figured out that Android phones use Bash. So when you go to a conference and connect to the internet, and there is a malicious access point at the conference, it is possible that the malicious AP could execute code on your Android phone. Right now, when you come to FOSSASIA and connect to the access point — if I set up a malicious AP, I could actually run code on your phone. I could see what data is on your phone, what images you have, what transactions you are doing on your internet banking website — which is really scary, right? Then there is GHOST — I doubt you've heard of GHOST — a flaw in glibc's gethostbyname(). glibc is an open-source library which is used everywhere.
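The Shellshock check described above can be driven from a short script. This uses the widely circulated probe string for CVE-2014-6271, which is safe to run on a patched Bash; the sketch assumes `bash` is installed in a standard location:

```python
# Shellshock probe (CVE-2014-6271), driven from Python. Before the
# fix, bash imported shell functions from any environment variable
# whose value began with "() {" — and, due to the bug, also executed
# whatever commands followed the function body while importing it.
# A patched bash prints only the "probe-finished" line.
import subprocess

def shellshock_probe() -> str:
    env = {
        "x": "() { :;}; echo vulnerable",  # the probe payload
        "PATH": "/usr/bin:/bin",           # so the child can find tools
    }
    result = subprocess.run(
        ["bash", "-c", "echo probe-finished"],
        env=env, capture_output=True, text=True,
    )
    return result.stdout

out = shellshock_probe()
print("bash is vulnerable!" if "vulnerable" in out else "bash looks patched")
```

This is exactly why the set-top boxes and embedded devices mentioned above were exposed: anything that copied attacker-controlled data (an HTTP header, a DHCP option) into an environment variable and then spawned Bash handed the attacker code execution.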
All applications on your Linux and Unix boxes — and probably on your mobile as well — are compiled against this library called glibc. And the list goes on; I could spend hours talking about these. If you've heard of Logjam, or of SLOTH: all of these are big vulnerabilities which were found in open-source software. So why does this happen? There is a famous cryptographer called Bruce Schneier — I'm not sure if you've heard his name — who coined the term "security circus". The security circus basically means four things. The first is that whenever a big flaw is found in open source, the person who finds it wants to make himself as famous as he can. If I find a flaw in OpenSSL and I want instant fame, what I do is go to the media and tell them that next week I'm going to disclose a flaw in OpenSSL, that the flaw is going to affect everyone on the internet, that we are all doomed and should go and live in a cave. That is basically what happened in the case of GHOST. The person who found the vulnerability was a French guy; he went to the French media and told them that if they wanted a big story, next week he was going to disclose a flaw affecting all servers on the internet that run an open-source operating system. So they wrote a press release and were building up a lot of hype — and the only mistake they made was sending one of these emails over the internet. The email was in French; somebody from our team found it, we translated it, and that is how we came to know this guy was planning a big media event. So this is how it goes. What Bruce Schneier basically says is that in a security circus, the first thing that happens is media hype.
Because there is media hype, the companies which provide this open-source software are forced to put out emergency updates — and because they are emergency updates, there is really no time to test them properly. The customers panic a lot: if I have 10,000 or 100,000 machines on the internet serving an e-commerce website and my website is affected, I really want it patched as fast as possible. So who benefits from all this? The only person who really benefits is the person who created the hype — he found a flaw and wants to become famous overnight. So why open source? Why are all of these flaws targeted at open source? I am sure you have never heard of a Heartbleed in Windows, or a Shellshock in Android, or anything like that. Why is open source being targeted right now? All of these flaws target open source or open internet protocols — and here I group internet protocols with open source, because they are open too, written in plain text in RFCs and so on. This leads to a lot of interesting questions. The first question: is open source really secure? If you count on your fingers, in the last two years we have had something like 14 big flaws, and if you compare that with Windows, there are none. So is open source really secure? That is a big question. The second question: is open source an easy target for attackers and hackers? And last but not least: should we still continue to use open source? Should big enterprises — Red Hat and others — continue to use open source? I will try to give a few answers. Closed-source software has an equal number of bugs — that is the first thing I am going to say. My first statement: all software has bugs, and the bigger the software, the more the bugs.
So my first statement is: I am sure closed-source software has an equal number of bugs. In fact, there was a study of the security advisories which Microsoft issues every Tuesday. If you have heard of Patch Tuesday: Patch Tuesday is the Tuesday of the month on which Microsoft issues its security advisories, and it is the day on which no system administrator running a Microsoft operating system is allowed to take a day off — you have to be ready to patch as soon as Microsoft comes out with the updates. The study looked at which security fixes Microsoft ships on a Patch Tuesday, took all the open-source fixes for the entire month, and found that Microsoft actually fixes more security issues than open source — which does not necessarily mean Microsoft is less secure; it means more issues have been found. There is a really interesting story here, and I am not sure I have time, but I will make it quick. There was this Microsoft guy, and I don't want to name him. He used to go to conferences and to customers with two glass bowls and a bag of jelly beans. He would take the first jelly bean and say, "This jelly bean is for this particular security issue found in open-source software, say Red Hat Linux," and drop it in the first bowl. Then he would take another jelly bean and say, "This is for this particular security flaw found in Microsoft Windows 2008 or whatever," and drop it in the second bowl. And he would keep doing that, on and on.
At the end of it, the bowl representing open source was full, and the bowl representing Microsoft had maybe five or ten jelly beans. By doing that, he wanted to prove to people that open source is less secure. A lot of open-source people talked with him, and we tried to put it across to him that when Microsoft says that this Patch Tuesday it is going to fix MS12-something-or-other, they are not really telling us how many security flaws they have found. They are just telling us, "We are fixing security flaws in Internet Explorer." Does Microsoft care to tell you which flaws were found, in which part of the software, or how many CVE IDs — the numbers assigned to security flaws — have been addressed? Only if Microsoft told us how many security flaws they found and fixed with each of these security updates would his demonstration be fair. Because when a security flaw is found in, say, Mozilla Firefox, Mozilla will tell you: we have found this one security flaw, the flaw is in this part of the software, here is the open-source patch — you can go and verify the patch yourself — and this particular update is going to fix it. None of the closed-source companies have this kind of transparency. As I said, companies hide details about these flaws: one update can theoretically address 100 issues, it can even address 1,000 issues, and nobody really knows.
Open source makes it easy to find bugs. Why? Because the code is out there: if you know programming — C, or some other language — you are free to download the code and, with some amount of expertise and some amount of time, look into it. With closed source, you need to buy software like IDA Pro, which was about 5,000 USD last I checked; you run the binary through it, you get a disassembly, and — if you have studied assembly language for five years or so — you may be able to figure out what they are actually trying to do. On top of that, vendors intentionally obfuscate things so that you cannot disassemble them easily. And last but not least, it is illegal to disassemble their software: when you installed Windows 2000 or whatever and clicked "I agree", one of the clauses said that it is illegal to disassemble the software. So open source, having the source out there, has a disadvantage as well — it is easy for people to read the code and find bugs — but it is mostly good, because everybody generally benefits from the peer review. More peer review means more security, in most cases. So the quick answer is that the security circus actually shows that open-source projects are resilient to these kinds of big flaws: it is possible to quickly patch them and quickly ship the updates to the customers or consumers of the software. Compare that with the closed-source counterpart, who just keeps shoving things under the carpet, saying "we will fix it, we will fix it". I reported a bug in Windows 2000 — if I remember correctly, in 2001 or 2002 — and they still won't tell me when that particular bug is going to be fixed; and I think Windows 2000 is end-of-life now, last I checked. That was like 13 years ago. So this is the kind of
response they have: they keep shoving things under the carpet, they keep telling me they are working on it, and probably at some point they will tell me they won't fix this flaw because Windows 2000 is end-of-life — I have not checked. One more advantage of open source: users can react faster. The flaw is out there on the internet, there is media hype, everybody is talking about it — you need to react now, you need to make sure you install the latest version from your vendor, or download it from the internet. The vendors are under pressure as well: if I find a flaw in OpenSSL, I tell OpenSSL, "I found this big flaw; if you don't disclose it in one week, I am going to disclose it myself." So the vendor is under a lot of pressure to patch. Compared with that, if I go and tell Microsoft, Microsoft will probably tell me "go ahead" — it doesn't really care. I think I have reached my conclusion: open source is secure. And if you have some questions, I think we have some time.

Question from the audience: this is not so much about having a career in security, but about contributing to these teams — if someone is interested in some project and wants to learn how they can help you, how do they start?

I think the usual process applies, which is true for contributing to any open source project: get on a mailing list, see what part of the project you are interested in, and first try to understand what the codebase really is. Understanding the codebase is important, because if you want to contribute positively to the security of a product, you need to know at least a little bit about its code. Then look at the security flaws — a lot of them are open out there. Mozilla, for example, has a policy under which, once a security flaw
is fixed, they open up the security bug, so you can actually go to the bug, read about it, and see what analysis was done. That is probably a starting point. A lot of these security groups have a high threshold to get into, because there is trust involved: if you really want to get into the security group, you need to prove yourself worthy of it. But once you have started contributing really well, most of the people in the project will probably feel that you should be put into that group.

Yes — so before I answer your question: because I work for the Red Hat security team, we get a lot of queries from customers who come and tell us, "Our system has been backdoored; is this because there is a flaw in your operating system?" For 99% of backdoored Red Hat systems, we have figured out that they were probably running a web panel or something like that which allows people to log into FTP sites, and either the web panel had a simple password, or they had an open SSH port with a very simple password like "redhat123". So in most cases the backdooring happens because the system is not configured correctly. I am not sure exactly what the Linux Mint backdoor was. Another way of backdooring is that somebody builds a malicious package and manages to push it onto a repository, so that when a person downloads it from the repository some day, he gets a backdoored package. A very good example: if you have ever heard of vsftpd — the "very secure FTP daemon" — a long time ago somebody got onto the repository where vsftpd is hosted and backdoored it, and the backdoor was very interesting. When you log into FTP, you need to enter your
username — and the backdoor was triggered by the login: if your username contained a smiley face, it would immediately open a backdoor into the FTP server. That was the backdoor they put in, and the creator of vsftpd figured out what it was, because the hacker had made a mistake: he compiled object files and left them in the tarball, which increased the size of the tarball and gave it away. The lesson the creator drew was that all his updates needed to be signed. So if you can do something like that — sign all your packages, and, when you download packages from the main repository, verify that the signatures on the packages are correct — then that is one way to avoid a backdoor. But that again raises a very important question: if there is an insider inside, say, Linux Mint, and he has access to the signing process, then the whole thing is lost. That is what the previous speaker was talking about with the web of trust: at the end of this whole thing, you need to trust someone. Does that answer your question? Sorry if the answer was a bit long.

But someone signed the package, right? I think one thing you can do is figure out who put the backdoor in, and then ban him for life or something like that. These things are not about open source in and of itself: it's about the people, it's about the process, it's about the DevOps part of the story. If you are not clean in how you do things — if you just do random things on the systems you work with — that is where trust breaks down. The very basic advice is: never invent your own cryptography; trust the people who actually do that, and use whatever they suggest. Never think that the new algorithm you just invented will beat everyone. And just as a final thought: kernel.org went offline for, I think, three months — three months or however long
it was — to recover and bring the system back to a pristine, known-good state. So kernel.org went offline, fixed it, came back, and made sure the working process itself was fixed.

The Fedora project? Yes, the Fedora project was compromised as well, and most of that compromise happened because of human mistakes. A guy who had commit access on Fedora was trying to log in to the Fedora servers from one of his own servers: he was SSHing to the Fedora servers from the server where his private key was, and that server was compromised. Once your server is compromised, the attacker can use that private key to hop into the Fedora infrastructure and make a lot of mess. The bad part was that his private key was not protected by a passphrase — you can protect your private key with a passphrase, which he had not done. When the hacker hopped into the Fedora systems, we were actually on those systems and able to observe what he was doing. He managed to create a backdoored OpenSSH package, but he could not sign it — the problem with signing, for him, is that we have dedicated hardware on the back end which does the actual signing, and you need deeper credentials to sign. Of course there was a press release — it's on the internet if you look — and we even made and published a script saying: if you have an OpenSSH package, please download this script, and it will check whether you are using the backdoored package. I am not aware of anybody actually having installed the backdoored package. Once more, the signing process is quite important here, because it is what tells you that you can trust this particular thing. But I am sure this could happen with any infrastructure: there is always a weak link through which a hacker
can get into the infrastructure and do stuff like that. That risk is always there.

Yes — that's a very interesting point you made about HSMs. I don't know much about HSMs, but I know they can form an essential part. If you don't have the resources of Fedora, is there a poor man's version of an HSM?

There should be, I think. A lot of people use a disconnected machine to sign packages. Say I have a laptop with the private and public keys on it — the primary signing machine — and I never connect it to the internet. When I want to sign a package, I put the package on a USB stick, bring it over, sign it, transfer the signed package back onto the USB stick, and take it back. So the machine is disconnected from the internet, and kept in a secure location as well — if I keep roaming around with my laptop, it's not going to help; physical security is extremely important. This is a kind of poor man's version. It should not be a virtual machine: it needs to be a physical machine, at a secure location, and definitely not on the network.

Will Raspberry Pis do? I am not sure whether there is an issue with the random number generator of the Raspberry Pi or something to that effect, but you need to generate your keys only once. For the whole signing process, you generate your keys on a system which has a good source of entropy — that gives you your public and private keys — and then copy them onto the signing machine, which you use to sign your packages. So the keys need not be generated on the Raspberry Pi, and it has to be done only once: if you have a good source of entropy, you can generate a strong key, copy it to the Raspberry Pi, and use the Pi as your primary signing machine.

You were saying the Raspberry Pi could be an automatic signing device? Automatic as in: it sits in a safe, you bring it out, plug in the USB stick, and it signs — just make sure that when you plug in the USB stick, the Pi is not on the network. And
you need a backup copy of your private key, because if something happens to it, you are kind of gone. We keep backup copies of our private key at multiple locations — one in APAC, one in EMEA, one in the US — so if you get hit by an earthquake or something like that, you have a backup.

There is a shop called Cardinal Concepts that sells hardware tokens in the $50 range. You still have to have physical security — but can those hardware tokens sign? Yes, they can sign, and you can use them directly on Fedora boxes, on Linux systems. You keep your private key on the token and use the token to sign, and the key cannot be extracted: most HSMs generate the key themselves, and it is impossible to extract the private key from the HSM. And they are very cheap. The only problem is that you may have to buy more than one, in case anything happens to a key.

I think that's all the time we have for questions. Thank you very much for the great talk. Thank you.
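The sign-on-a-disconnected-machine, verify-everywhere workflow discussed in the Q&A can be sketched in a few lines. To be clear, this is only a sketch of the shape of the flow, not what Fedora actually runs: real repositories use asymmetric GPG signatures, so users only ever hold the public half of the key, whereas HMAC below is symmetric and is used purely because it is in the Python standard library; the key and package names are illustrative.

```python
# Sketch of the sign-then-verify flow a package repository uses:
# any change to the package bytes invalidates the published signature.
import hashlib
import hmac

# In the real workflow this secret never leaves the disconnected
# signing machine; users would instead hold a public key.
SIGNING_KEY = b"kept-on-the-disconnected-signing-machine"

def sign_package(package: bytes) -> str:
    """Run on the air-gapped signing machine; returns the signature."""
    return hmac.new(SIGNING_KEY, package, hashlib.sha256).hexdigest()

def verify_package(package: bytes, signature: str) -> bool:
    """Run by every user before installing the downloaded package."""
    expected = sign_package(package)
    # Constant-time comparison, to avoid leaking how many leading
    # characters of a forged signature were correct.
    return hmac.compare_digest(expected, signature)

pkg = b"vsftpd-2.3.4.tar.gz contents..."
sig = sign_package(pkg)
assert verify_package(pkg, sig)
# A tampered tarball no longer matches the published signature:
assert not verify_package(pkg + b"backdoor", sig)
```

This also shows why the insider problem raised above is so serious: the scheme only detects tampering that happens after signing, so whoever controls the signing key (or the HSM holding it) sits at the root of the whole web of trust.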