All right, then let's get started. I will briefly introduce myself, getting rid of this right away. I'm khorben; as you may have discovered from my accent, I'm French. I also have the ambition to write an operating system on my own, which is what drove me to NetBSD actually, because it's an excellent way to learn how to do it. I use it as a base for my current work, which I put under the umbrella of DeforaOS, like here, a project that I've been running since 2004. Most of my work happens offline, so I'm working mostly on NetBSD with Git instead of CVS, through the EdgeBSD project, where I publish all of my public repositories and also welcome other contributors to participate. We have had a small community there since August 2013, which even brought some developers to the NetBSD project itself. All right, but now to what's interesting to us right now. If you do not know about pkgsrc yet, this is the right place. It's a project managed by the NetBSD Foundation, as in here. It was initially the way to obtain third-party software for NetBSD, but since we insist on doing the right thing, we managed to get it to work across systems, so it's now portable, ported to over 17 platforms now. pkgsrc itself isn't the subject of this talk, but it is really my motivation to work on this. I know I'm going to say "cyber", but there is a cyber war going on right now. I'm working on security because it's a hostile world out there, and I feel like I have to protect my systems. I also do this as a job actually, but in the context of pkgsrc, NetBSD and distributing software, we have a responsibility towards our users, because they rely on us for their daily operations. They need updates, security updates in particular, and this is also true for pkgsrc. We can probably even go a bit farther and be a bit preemptive about it, and this is where I chipped in. So what can be done about this? 
In this talk, I will introduce the security management that's already in place in pkgsrc. We have different teams taking care of security. I will also speak about technical measures that we can apply, or are applying now, for hardening the code base of your system, and about future work, different perspectives for improvements we can work on, and of course at the end Q&A if we have enough time. All right, so now moving on to security management. We have two options. As Cherry just mentioned, we can either panic, or we can try to recover from critical situations. And in the case of NetBSD, we have two teams which help us recover. I will detail their roles. We have the security team and the release engineering group. Thankfully we also have some tools, and there are specific branches and releases that you can use as a user, which are expected to be stable and kept up to date regarding security. So the security team in particular has two duties. It handles security issues regarding pkgsrc; there is an email address for that, and a PGP key of course for emailing them. And this team in particular maintains the vulnerability database, which can actually be updated by any NetBSD or pkgsrc developer, but has to be signed and uploaded by the security team. This database is assembled from different sources. We use release notes from upstream packages, of course, when they report security issues. There are also external vendors which may report security issues through advisories. We used to depend on Secunia, but they cut the feed for us, so unfortunately we have to use something else now. Of course there are also public mailing lists like oss-security or Bugtraq, Full Disclosure and so on, where security issues may be announced, so we have to keep an eye on those. And we also get a lot of other advisories from other distributions, governmental agencies, different CERTs and so on. 
And as I just mentioned, there is a PGP key for you to report security issues to us safely and confidentially. All right, now as a user, how do we make sure that the system is actually up to date regarding security, using this database? You can have the system download it every day, if you use NetBSD in particular. This is done in /etc/daily.conf, with the fetch_pkg_vulnerabilities variable. For this you need the system to run 24/7, actually, because the cron job runs at some point during the day, or you have to adapt the cron job to run when the system is up. You can also fetch the database manually if you prefer. This is done with pkg_admin; it's pretty much what the cron job does. Do not forget the -s flag there, because this is what actually tells pkg_admin to check the signature on the vulnerability database. And then, if you want to use the vulnerability database to audit your system, there is a command for that, also through pkg_admin: pkg_admin audit. And there are a number of different options. You can do this per package; you can even check packages with versions which are not actually installed yet, to check if, for instance, an update would actually fix an issue that you have. All right, so in practice, how does this look? What happens if you install a vulnerable package from source, for instance? Here I picked the Xen kernel in version 4.5.5, installing it from source with make install. As you can see here, we get a warning after a check for vulnerabilities. More than a warning, it's an error. The system is telling you: please change your configuration files to allow this package specifically to be installed, or vulnerable packages in general to be installed. In the case of binaries, I picked Wireshark in this case, which on my system right now actually has an infinite loop vulnerability, for instance, in version 2.2.5. 
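To make this concrete, here is roughly what the configuration and the manual commands look like on NetBSD. This is a sketch from memory; wireshark is just an example package name, and exact subcommand spellings may vary between pkg_install versions:

```shell
# /etc/daily.conf: have the nightly cron job download the
# pkg-vulnerabilities database automatically.
fetch_pkg_vulnerabilities=YES

# Or fetch the database manually; -s tells pkg_admin to verify
# the signature on the downloaded database.
pkg_admin -s fetch-pkg-vulnerabilities

# Audit every installed package against the database.
pkg_admin audit

# Audit a single package, or look at the vulnerability history of a
# package, to see whether an update would fix a known issue.
pkg_admin audit-pkg wireshark
pkg_admin audit-history wireshark
```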
And in this case, you get a similar message, which also comes from the vulnerability database, and pkg_add will say it was not able to install the package because there is a known vulnerability. This can be tweaked, of course. Typically in pkg_install.conf you would have CHECK_VULNERABILITIES set to always, if you never want to install any package with known vulnerabilities. You can change it, however, to be a bit more convenient as a user, because there are still many vulnerabilities which are not fixed for a number of packages, unfortunately. But at least they are known, and you can agree or not to have them on your system. So in the case of Wireshark here, I could simply say yes or no, after a list of the known vulnerabilities, to agree or not to install this package. So this is how this is configured, and I would like to take this opportunity during this talk to thank the security team for maintaining this. Some of its members are here, actually. Thank you, Sevan, and everybody else. Petra? Petra is in the other team. I'm not in every group. Yeah, yeah. All right, I've seen you around anyway. The other group, as we get to this, is the release engineering group, which I will call releng here; it's easier to pronounce. It's also relevant for security, even if its job is actually to manage releases, sorry. It manages the stable branches, but it also processes the pull-up requests when we have changes we want to apply to stable releases. We have a process called pull-ups, and this is also relevant to security, since some of these changes are actually security fixes. There are freezing periods for pkgsrc in particular: every three months the tree enters a freeze, during which we still process pull-ups but mostly focus on security fixes and build fixes, for instance. All right, so this is how the pull-up request list looked as of yesterday. We have a couple of issues open for a few weeks, but the rest is just a few days behind. 
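The knob just described lives in pkg_install.conf; a minimal sketch, assuming the usual pkg_install values:

```shell
# /etc/pkg_install.conf
# Refuse to install any package with known vulnerabilities...
CHECK_VULNERABILITIES=always
# ...or list the known vulnerabilities and ask for confirmation
# instead, which is the behavior I described for Wireshark:
#CHECK_VULNERABILITIES=interactive
```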
I see one here with Roundcube, a regular offender unfortunately, but this is not the topic today. But coming to stable releases, we have right now a freeze on the 2017Q3 branch, meaning it's about to be released. We focus on security and build issues. Okay, there is no branch yet, but that's the head basically, yeah, that's right. Thanks, Benny. The latest stable is still 2017Q2, which accepts pull-up requests, and then 2017Q1 is no longer maintained. We do not offer long-term support in pkgsrc as a project itself. However, Joyent, a company using pkgsrc regularly, does it for its customers. They focus on SmartOS, which is the base of the service that they offer. I think they also have packages for macOS; I do not know if they maintain those for LTS as well, if they build them for LTS. But if you're interested in older stable releases, for stability on your systems, for instance if you do not want to upgrade every three months, you can use Joyent's source trees for pkgsrc instead, and they will usually be maintained for security. Then you can build your binaries from there. The releng team is composed of a few members, Benny and Petra here. So thank you also for your work. And with this done, I would like to move on to the technical aspect of security in pkgsrc, and how we are trying to improve the security level of the system as a whole, 17,000 packages at a time. So which tools do we have? We have a number of tools here already that I'm listing: package signatures, SSP, Fortify, stack-check, PIE, and RELRO. I will go through them one by one. Do not worry if you're not familiar with any one of them; I will try to clarify. So, starting with package signatures. I believe we in pkgsrc were one of the first, if not the first, projects to introduce this support in a distribution, initially in 2001, already supporting two different mechanisms to implement it. 
Either X.509 certificates, as managed by OpenSSL for instance, or with GnuPG. This ensures authenticity and integrity, meaning that package signatures do not actually help with finding or fixing or mitigating flaws in packages. It's more about when you download a package: you will know it has not been altered on its way. So it's safe to actually download it over HTTP or FTP, as long as the checksum algorithms are not broken. And then you can be sure that it was actually built by the person who owns the key, if the key was not leaked. Joyent has had signatures enabled in production since 2014. And in pkgsrc upstream, we still have kind of a difficult situation right now; we cannot easily do this, and I will mention why. On one hand, we could rely on X.509, but it's quite complex, including setting up the PKI. GnuPG is a lot easier, but you still need to determine how to ship the key, and which policy should apply to the key. But there are also a number of other technical issues with GnuPG and NetBSD. In particular, we have a chicken-and-egg problem, because GnuPG is not available in base: how do you install a package which is supposed to check its own signature? That doesn't work. So instead I've been trying to add support for netpgp, which is another OpenPGP implementation, available in base in NetBSD. I wrote a command-line wrapper which takes the GnuPG syntax and adapts it for netpgp instead, so I called it gpg2netpgp. This still requires a few patches. I imported a lot of them recently into netpgp, but there are still some issues; this is not enough yet. For instance, there is a security issue remaining with detached signatures in netpgp that we have to fix. So this is still a work in progress. However, if you want to test it out for yourself, you can first generate a key with GnuPG or with netpgp; this is how you would do it. Then in mk.conf, you should enable SIGN_PACKAGES and set it to gpg. 
And then, for the individual tools to know how to call GPG, you can set it and force it to a specific implementation. And if you want to try things with netpgp, you would use my wrapper, for instance, changing it here. You can specify, of course, a particular key to use if you have multiple ones on your system, and you can also specify keyrings if you want to separate things even further. Then you just use pkgsrc from source normally, to generate signed packages. I will show this in an instant, but when it comes to installing signed packages, you have, of course, to import the public key of the user who built the packages. You can use GnuPG for that if you already have it, or netpgp should also work. This is simply done with gpg --import; then you just pipe the key on the standard input, for instance. You can then, a lot like with vulnerabilities, enforce signatures on packages using the VERIFIED_INSTALLATION parameter in pkg_install.conf. And then you use pkgsrc normally, this time from binaries, and when you install packages it will tell you that the signature is there, and whether it's valid or not. It will even, at the moment, give you the fingerprint. You could probably make this silent, but right now this is how you can confirm that the package was actually signed and validated. All right, now moving on to a different mechanism: SSP, for stack smashing protection. This technology was initially introduced by IBM, then picked up by OpenBSD. It can actually find bugs in programs, since instead of silently corrupting memory, which is what attackers use to subvert your programs, the program itself will detect that it was altered when returning from functions, checking the canary value in the stack, and then crash. 
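Putting the signing setup together, this is roughly how it looks; the key ID and file name here are hypothetical placeholders, and the exact mk.conf variable names may differ between pkgsrc versions:

```shell
# mk.conf, on the build machine: sign packages as they are created.
SIGN_PACKAGES=gpg
GPG=/usr/pkg/bin/gpg           # or my gpg2netpgp wrapper instead
GNUPG_SIGN_AS=0x12345678       # hypothetical key ID to sign with

# On the installing machine: import the builder's public key...
gpg --import < builder-pubkey.asc

# ...and require valid signatures, in /etc/pkg_install.conf:
VERIFIED_INSTALLATION=always
```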
This implies a different memory layout if you want it to be very efficient, because you want your buffers, in case of overflow, to overwrite the canary value as soon as possible, as close as possible, so that even an off-by-one will actually trigger a crash instead of corrupting memory. There is a slight performance penalty associated with this measure, I would say between one and five percent, depending on the settings and the program that you use. And unfortunately this is not perfect; this is just a mitigation. It will not fix and magically block every bug and every exploit. It can be defeated, for instance, through memory leaks: if the attacker discovers the canary value because there is an arbitrary read somewhere, for instance, and your stack address is known, then this can be defeated. We support it on NetBSD on almost every architecture. In pkgsrc, I have also added the support for Linux x86 and FreeBSD x86, simply because I couldn't test it anywhere else, but it should be really easy to add support for further systems and architectures. It's just a matter of testing it and making sure, sorry, that GCC supports it effectively over there. In mk.conf, it can be enabled systematically for the whole system. Of course, it applies only when building packages, not when installing binaries. In this case, the variable is PKGSRC_USE_SSP, which you can set to yes, or to all, or to strong. The difference this makes is that if you set yes, it will set a compilation flag, in the case of GCC and Clang -fstack-protector, but this will only protect some functions: the ones where the compiler sees there is a buffer in the function and it's actually relevant to put a canary value. In some other cases, it's actually useless, or apparently useless, to put canary values. You can however enforce it, using all instead of yes, so that every function is actually covered with a canary and protected by SSP. 
Or you can use a patch from Google, which is what strong uses. The patch applies to GCC, so you have to rebuild GCC to get that. Logically, this requires the package that you are trying to build to honor CFLAGS, since the flag is added to CFLAGS automatically by pkgsrc. Thankfully, more and more packages now honor CFLAGS; we've been focusing on that in the past few months. And as a corollary, this only protects C and C++ programs, or interpreters written in C or C++. In particular, just-in-time compilation is not protected, since that is code generated and executed at runtime; it has nothing to do with the compiler, and if a JIT wants to protect its own stack, it has to implement that itself. So this is not perfectly covering every situation. You can make it stronger, as mentioned, with -fstack-protector-all, or with the patch from Google, which is strong. We can of course support more compilers and platforms, so feel free to check pkgsrc on your own platform, and whether you can add the bit of glue to enable your system to use SSP. If you want to validate and make sure that SSP was actually applied when building your packages, you can check the binaries built, for instance with nm, listing the symbols present inside the binary. If you see __stack_chk_fail or __stack_chk_guard, you can be fairly sure at least one part of the binary was built using SSP. It will not definitely say that all of the binary was built this way, but usually it should apply everywhere. This has been enabled by default in OpenBSD since 2003 in their own ports, also in Fedora and Ubuntu since 2006, in DragonFly since 2013, and now also in pkgsrc: as of the coming release, currently in freeze, we have enabled SSP by default where supported. So I'm quite excited about this; it's great. And as a companion technology to SSP, we also support Fortify. 
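Before moving on to Fortify, here is a sketch of the SSP knobs and the nm check just described; /usr/pkg/bin/foo stands in for whichever binary you built:

```shell
# mk.conf: enable stack smashing protection tree-wide when
# building from source.
PKGSRC_USE_SSP=yes      # -fstack-protector: only functions with buffers
#PKGSRC_USE_SSP=all     # -fstack-protector-all: every function
#PKGSRC_USE_SSP=strong  # -fstack-protector-strong, needs a patched GCC

# After the build, look for the SSP symbols in a binary:
nm /usr/pkg/bin/foo | grep -E '__stack_chk_(fail|guard)'
```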
It's a bit different in the sense that it will, this time, change calls to specific functions. So if the code uses notoriously unsafe functions from the libc, for instance sprintf or strcat or memmove, the compiler will automatically replace these calls, when it has knowledge of the size of the buffer, with safer versions. So in this case, this completely mitigates some buffer overflows, the ones which actually go through these functions. But it involves support from the libc, through the system headers in particular, which have to reflect the compiler you use. In this case, the performance impact is relatively negligible. The one difference it makes is that it will actually check the size of the buffer, so that's typically just one check. And again, the program will crash instead of silently corrupting your memory and allowing attackers to execute unfriendly code, to say the least. In pkgsrc, we support it using PKGSRC_USE_FORTIFY, which can be set to yes or to weak. We support it right now on Linux and NetBSD with GCC. In practice, this sets a preprocessor flag, flowing through the CFLAGS, which is _FORTIFY_SOURCE=2 in the case of yes. This requires, again, the package to honor CFLAGS, so we have the same limitation as with SSP. However, as of today, this is the case for most packages, maybe not absolutely all of them. And again, this protects only C and C++ programs and interpreters; just-in-time compilation will not be protected. Additionally, there is something else that's quite tricky with Fortify: it requires the compiler flags to be set to an optimization level of one or more. This is because the binary compiled in this case will be slightly different from the source code, so the user has to agree that this will happen, that something will be optimized. So it's entirely possible that, even though you're compiling with -D_FORTIFY_SOURCE=2, Fortify will not actually be applied. 
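Again as a sketch, with /usr/pkg/bin/foo as a placeholder binary:

```shell
# mk.conf: enable Fortify tree-wide.
PKGSRC_USE_FORTIFY=yes   # -D_FORTIFY_SOURCE=2 via CFLAGS
#PKGSRC_USE_FORTIFY=weak # -D_FORTIFY_SOURCE=1

# Remember: the package must honor CFLAGS, and optimization (-O1 or
# higher) must be on, otherwise Fortify is silently not applied.

# Check whether fortified variants such as __sprintf_chk were linked in:
nm /usr/pkg/bin/foo | grep _chk
```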
We can of course add support for more compilers and platforms, and to check that it works on your platform, there is again a way using symbols, very helpful. With nm again, from binutils, you can check if a symbol, for instance __sprintf_chk, is used by the program, in this case specific to GCC on NetBSD. But you can look for similar functions, and it will tell you that Fortify was used in at least one place. It has been enabled by default in Ubuntu for a good while, and in Android, maybe even from the beginning. And in pkgsrc, since the coming release, 2017Q3, this is also now enabled by default. Very similar to SSP, this time: stack-check, which generates code to verify the boundary of the stack. It's a lot less relevant in production, or globally for every package, since, at least according to the manual page for GCC, this is only really useful for multi-threaded code. It involves support from the compiler. This is not in pkgsrc yet, but I have an external patch in EdgeBSD which implements it. We could consider combining it with the buildlink definition for pthread, since that's a good hint that there is multi-threaded code inside the current package, and so we should not really apply it to every binary in the system. It sets the -fstack-check compilation flag in the case of GCC, and so again we need the package to honor CFLAGS to actually implement it. stack-check doesn't work with every compiler; I do not know at this point if Clang supports it. It apparently applies to multi-threaded applications only, as I mentioned. I do not know how to validate whether this mitigation is effectively in use, and we should also investigate whether it is relevant by default, if at all. All right, now moving on to PIE. I know we just had dessert, but this is not related. It actually means position-independent executable, and it is a companion to the PaX ASLR mechanism in the kernel. 
In more and more kernels nowadays, there is support for randomizing the address space of processes in userland, and a necessary mechanism on the binary side, to take the most advantage of that, is to build position-independent executables, so that they can actually be placed at arbitrary positions in memory and therefore make exploitation more difficult, because then the attacker needs to know at which offset which code will be. There are mechanisms for attackers to work around that, but they are usually more complex to use. So in this case, it's a bit different from what I just mentioned, because it involves not only the compilation phase but also the linking phase. So let's have a look at how this is enabled in pkgsrc, using PKGSRC_MKPIE set to yes; in this case, it's just yes or no. It will first set a compilation flag, -fPIC, and it also needs an LDFLAGS addition at link time. However, there is a caveat in this case, since the linking phase must be completed with -pie, but only for executables and not for libraries; otherwise, it will just not build. This is implemented right now in the GCC wrapper in pkgsrc, so we can easily work around limitations in LDFLAGS, because we can tell in the wrapper whether we're building a library, a shared object, or an executable. And as of the current release of pkgsrc, this is also supported in cwrappers, the new wrappers introduced by Jörg recently to gain performance, among other things, when building pkgsrc. There are actually advantages to PIE here too, since packages not compiled with the appropriate CFLAGS will fail to build: the wrapper will always enforce -pie to be used, and -pie will not link if the objects were not built with -fPIC. The package will not build if the mechanism is not actually enforced and implemented. So this reveals right away which packages are not honoring CFLAGS or LDFLAGS, which is great. 
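A sketch of the PIE knobs and checks, with placeholder binary paths:

```shell
# mk.conf: build position-independent executables tree-wide.
# Adds -fPIC to CFLAGS and -pie at link time (executables only).
PKGSRC_MKPIE=yes

# In an individual package Makefile, for binaries that still crash
# legitimately under ASLR or mprotect (paths are examples):
NOT_PAX_ASLR_SAFE+=     bin/foo
NOT_PAX_MPROTECT_SAFE+= bin/foo

# After the build, a PIE shows up as a shared object:
file /usr/pkg/bin/foo
```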
So if you want to test across the entire tree, you can simply enable MKPIE and see what breaks. This can be combined with paxctl for the few binaries which actually crash right now, legitimately, when using ASLR and mprotect; we're trying to fix that. But in the meantime, you can use, in a package's definitions, NOT_PAX_ASLR_SAFE and NOT_PAX_MPROTECT_SAFE. These two variables expect file names, the paths to the executable files. And if you're interested in fixing some packages in this regard, you can check this file from the pkgsrc framework for more details on how to use that. To validate whether PIE was actually in use, you can do it after building the package anyway, using file, which will tell you, even for executables, that you actually built a shared object. It's a bit weird, but it works anyway as an executable, and it tells you, okay, this is very likely to have been built with PIE enabled. Okay, now one of the last mechanisms readily available in pkgsrc, in my list at least: RELRO, which protects ELF executable programs. It's specific to this specification right now, and it prevents attackers from tampering with relocations at runtime. If attackers can arbitrarily write different values into the lookup table for procedures at runtime, they can actually use the jump tables to execute code outside of the normal perimeter. But this can be prevented, since a program can now be started with BIND_NOW and RELRO, so that before running any code, it will relocate everything as required, looking up every symbol that can potentially be used, and then make the pages with these jump tables read-only, so that they cannot be altered by attackers anymore. So there is a performance penalty when starting big programs, but only when the program is starting; after that, it can actually be faster, since every symbol is already looked up. This involves the linking phase only. 
In the case of GCC, this will be done using the -z relro and -z now flags, through LDFLAGS, so they are forwarded to the linker. In pkgsrc, this is enabled using PKGSRC_USE_RELRO set to yes. It can also be set to partial, in which case only relro will be set and not now, which means basically that the symbols are not all relocated when starting the program, but as they are used, if I'm correct. This requires the package to honor LDFLAGS; I just mentioned that. And there are still a few challenges with RELRO and BIND_NOW, right now specifically with Xorg, because of the way some programs implement plugins. They may load drivers as shared objects, which then in turn declare which dependencies they have for the symbols that they use, asking Xorg to load these subsequent drivers implementing them. And unfortunately, since symbols are looked up right away, also when loading shared objects, the program will crash: the symbol doesn't exist yet, it doesn't find it, and it breaks. So right now Xorg has to be built using partial RELRO, and doesn't support full RELRO. This could be adapted to more platforms, however, the issue with Xorg notwithstanding. The support I put in place currently works on NetBSD, I think; it should be easy to also adapt it for OpenBSD and Linux. But again, this requires support from packages, to actually honor LDFLAGS this time. We can confirm that a binary was built with RELRO and BIND_NOW; this time it's very definite. We use objdump to list the different sections of the ELF binary. If there is a dynamic entry called BIND_NOW, the dynamic linker will see that and apply the relocations immediately; and otherwise, the relocation region will also be announced, under the name RELRO, in the program headers. This verification is now automated in pkgsrc: I implemented a check using an awk script. This was my first awk program, actually. 
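The RELRO settings and the objdump check can be sketched like this, again with a placeholder binary path:

```shell
# mk.conf:
PKGSRC_USE_RELRO=yes       # -Wl,-z,relro -Wl,-z,now
#PKGSRC_USE_RELRO=partial  # relro only, no now (needed for Xorg)

# Verify on a built binary: look for the BIND_NOW dynamic entry
# and the RELRO program header.
objdump -p /usr/pkg/bin/foo | grep -E 'BIND_NOW|RELRO'
```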
And this is enabled if you enable PKG_DEVELOPER, which will then automatically enable a number of checks when you build packages, and then run the RELRO check. In EdgeBSD, I also created a small package just to verify that everything is fine. It's a program which does nothing but link to a library, and then it runs itself. It can tell you, for the library code, if it was built with -fPIC, and it can tell you, for the executable code using the library, if it was also built with -fPIC. It could be improved further by checking for -fPIE, but this is not strictly necessary. Programs can also tell you if they were built with FORTIFY_SOURCE, since the preprocessor will present the macro to them. And then I even have an additional check for mmap, where we still do not implement the security of mmap as well as it could be, but this is also being worked on. Now, let's pray to the demo gods for the demo. Well, the good news is that I don't have to, because this is the demo. My system is running right now on a binary build of pkgsrc with all of that enabled, except maybe SafeStack. So my userland has every feature mentioned so far, with Xorg on partial RELRO, and even LibreOffice was built on my system, here on this laptop, using PIE, Fortify, SSP, and it works fine. And I can slide, I can swap slides. Okay, it's not super fast right now, but you can blame LibreOffice for that; it's not me. All right, now for what's left to do. I haven't reviewed this part of my talk as much as I should have; I gave this talk already at AsiaBSDCon and BSDCan this year. And before that, there was also a Reproducible Builds summit, where I was introduced to this community. It's a very nice community; right now it's anchored a lot in Debian. What they actually try to achieve is a set of software development practices which create a verifiable path from human-readable source to the binary code generated. 
So what this means, in other words, is that if you have a given source and a given system to build it, you want to be sure that it will actually compile to the same binary, like bitwise: you can compare the binaries generated bit by bit, and they will be the same. This serves the purpose of verifying your compilation chain. If you want to be sure that everybody has the same compilation chain, and that no one fooled your compiler into making your code generate something else, then you can verify against binaries built by other people, checking whether you could generate the same package. This involves many changes in the toolchain, and also inside the source code of the packages. You have to make sure that, for instance, timestamps will not be present inside the final binaries, if they include the current date and time or a build number. This is also relevant for About dialogs, where people like to state which toolchain was used and at what time the package was built. This needs to be removed, because we need to have the same binary everywhere, or we need to use the same timestamps; I mean, there are a number of implications. So basically, we want to reach a deterministic build system. This doesn't bring security right away, but it allows you to check that your toolchain is actually correct, or that you share it with everybody else. OpenBSD does that, in a sense, in that they only support their own binaries; they do not support packages that you may have built on your side, because when they get bug reports, they want everybody to be using the same binary, to restrict the variance to what everybody can actually verify. All right, so then, with that, users can reproduce and verify what they build. This is already implemented in FreeBSD's ports. It doesn't work yet across every package of that tree, but we could also try to achieve that in pkgsrc. This is already the case for the base system in NetBSD, but we do not have it yet in pkgsrc. 
Some things are actually very easy to get built reproducibly, just by setting an environment variable, SOURCE_DATE_EPOCH. And some flags may also be relevant, for GCC for instance, when emitting debugging symbols: these typically include the absolute path to the source code where it was built, but this can be replaced with -fdebug-prefix-map, for instance, and so on. There are a number of ways to alter that, so that everybody has the same binaries eventually. All right, then there is also control flow integrity, CFI, which prevents exploits from redirecting the execution flow of programs. I'm not going to have it off the top of my head to explain it in other words, but basically, again, you get controlled crashes instead of undefined behavior that can be exploited. pkgsrc would be a great test bed for this feature, since we can take care of thousands and thousands of packages at once through the framework. This is available right now in Clang, which we support, and it involves CFLAGS, typically -flto and -fsanitize=cfi. There are a number of different schemes which can be selected; possibly you can add -fvisibility=hidden. I do not remember the details right now, but basically it has a negligible performance impact, which means that it can even be suitable for release builds, and bring an extra level of security when using Clang. With Clang, again, you can enable SafeStack, which is maybe even more interesting than CFI right now. This is the definition from the Clang website. It involves CFLAGS, -fsanitize=safe-stack, and if I remember right, it will divide the stack into two different stacks, actually yes, the safe and the unsafe. The safe stack holds the control flow, so it will store the return addresses and the local variables that are always accessed in a safe way, while the unsafe stack stores everything else, what can be corrupted without having an impact on, for instance, the execution flow. So then, if it's overwritten, it's less drama than if it were the return address. 
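As a sketch of the knobs mentioned in this section; the epoch value and build path are arbitrary examples:

```shell
# Reproducible builds: pin the timestamp the build embeds.
SOURCE_DATE_EPOCH=1500000000; export SOURCE_DATE_EPOCH

# Keep the build directory out of the debug info.
CFLAGS="-g -fdebug-prefix-map=/home/builder/work=."

# Clang-only hardening mentioned above:
CFLAGS="-flto -fsanitize=cfi -fvisibility=hidden"  # control flow integrity
CFLAGS="-fsanitize=safe-stack"                     # SafeStack
```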
Okay, then in GCC we also have some mechanisms that you can apply. So, in CFLAGS, the Address Sanitizer, -fsanitize=address: it will detect out-of-bounds use or use-after-free by instrumenting memory access instructions. It's documented on the official GCC website. However, it has a huge performance impact, and it's not always suitable for production binaries. It can be used, however, for fuzzers. If you fuzz code, this is very useful, and it will very easily detect invalid memory accesses, even without going through libc functions and so on, at the instruction level.

All right, since time is also slowly running out, I will conclude by saying that pkgsrc is a great project for testing security features. You can implement it once through the framework and then it will be applied across the entire tree. And since we support so many different packages, so many different packages are available, this can be applied to an entire distribution at once. Some mechanisms are enabled by default now, among the ones that I just mentioned, particularly SSP and FORTIFY. I hope we can also get PIE or RELRO soon. A lot more can still be done. I welcome you to have a look at the features I just mentioned, or even further ones if you feel like it. My own current focus is on testing full RELRO, getting PIE to work, working again on package signatures, and implementing additional checks during package development. Of course, you can beat me to it, please, so that my to-do list reduces magically; that's great, I love that.

And otherwise I can only thank you for listening so far, EuroBSDCon for having me, the pkgsrc project, Joyent for helping out, Skyline also really helpful with the EdgeBSD project, and the different communities which support me, providing servers and otherwise bits of knowledge. I'm available as khorben at netbsd.org. So thank you very much. If you have any questions, I will be happy to go ahead. Yeah, we have one, two, three. So, Mark.
Well, actually I have lots of questions, but I will limit myself to three of them. Probably. The first one is probably very simple, about FORTIFY. You're using it for calls like memmove, which is usually a built-in in compilers. How does it fare with respect to that? Is it always going to emit __memmove_chk, or is it still going to be able to do some built-in work?

So what happens is that the libc headers are automatically redirected, if _FORTIFY_SOURCE is set to one or two, to a different one which will, through a macro, change the call to memmove to something else, like __memmove_chk. And then this will only be done, for instance, if GCC is also in use, checking this macro. And then it will use attributes to talk to GCC and tell it which call to use.

Okay, so then you lose the built-in, actually. Sorry? You lose the built-in. You lose the built-in. It's also a built-in from GCC, it just has a different name. Yeah, but for instance the fact that it can be optimized to inline code, you lose that completely. Yeah, probably, possibly, yes. But I think GCC is smart enough to also inline the memmove version with the boundary check if it has to. GCC smart enough? That's a joke.

Yes, two more questions, I'm sorry. Second one: you're aware that we have a way to work around the BIND_NOW issue? We have a system call which is called, I think, kbind, which has the peculiarity that it can only be called from one address in a process's address space. And we use that in our dynamic linker, so we can still have lazy binding and still have almost as strong, as strong RELRO as you have, actually. You might want to look at that, probably, if you haven't already. I think we got RELRO mostly from the toolchain. Because this involves the dynamic loader, and the dynamic loader is from us, you need to port it from the OS, because there are parts in the kernel. But otherwise BIND_NOW is impractical for stuff like LibreOffice if you still want to have any kind of performance.
So that might be useful for you as well. Okay. Well, I didn't notice any specific issue with LibreOffice. I also have a modern laptop, quite recent, so this could also be the reason.

Last question is a bit controversial. I've seen that you've got lots of options to make things more secure, and I'm still wondering: why don't you turn them on by default? Like, for instance, why do you even have to ask for a signed vulnerability database package, instead of having it check the signature by default, and always have the most recent version unless you specifically ask otherwise, and stuff like that?

So actually, a lot of what I described is already enabled by default. Cool. So the package vulnerability database, I don't think it's fetched automatically every day, but the signatures are verified by default. And now in pkgsrc we have SSP and FORTIFY enabled by default. Okay. I actually did this talk to push for it, because a year ago it was not in place, and I did it, explained it, and I had to go around the world to convince everybody that this is okay. Right. But it was also a great experience, and I keep pushing.

And there's a small second part of that question: for signatures, you only showed us what you do when you add one single package, and you show the full signature. Yeah. If you add a list of packages and they all have the same signature, I assume that you're only going to show the information once, right? No, the information is displayed for each and every package that you install. That's something that you can fix, and that will quickly reduce the amount of information displayed. Okay. It should be easy. Yeah.

Benny, maybe? You said that the Address Sanitizer is too slow for a production build. Yes, you wouldn't want to use it for the thing your users install. But one way you could use it in a sensible way, perhaps, would be to provide a target that would build the package with ASan turned on and then run the regression tests. Yeah.
Like a make test-asan or something. Yeah, this would be great. I do not know if we have the notion of whether a package has a regression test in the pkgsrc framework yet, but if we could do that, that would be great, of course. Well, you don't really need that: what happens now, if you do make test and the package doesn't have tests, it just, you know, does nothing. Yeah. But it also needs to be specified for each package how to run the tests, and it's quite a bit of work. And you can probably also do TSan at the same time, the Thread Sanitizer. Yeah, yeah. Or MSan, the Memory Sanitizer, yes? Yes. Christos?

So I see that the security model you propose here is quite centralized, and it's strongly relying on the compiler. Yeah. If I look at security nowadays, it tends to be really decentralized; there are technologies like blockchain, for example, which tend to decentralize things. So my question is: do you think that there would be a better way to do it, instead of relying only on the compiler, a single piece of software, which is GCC in this case?

It's not really that we rely on the compiler. I'm just using features offered by the compiler in pkgsrc and enabling them for the entire distribution. What the packages do in themselves isn't relevant to us; we are only a framework to build a distribution. Security, then, is also a field which applies to everything. There is a security dimension to everything you do, even sitting on a chair: someone has actually tested the chair to be safe, so that it doesn't break in pieces and cut your hands if it fails. So we cannot cover everything centrally or in a decentralized manner. This is highly context-specific, and there is no magic wand to just secure everything, unfortunately.
Hi, my question has probably been asked, I'm not sure, because it was slightly technical and I'm not a security person. But the same way that we do testing with Anita on NetBSD, for example, where a bunch of test cases are run in an automated way, I'm wondering if something like a pen-testing tool, like Kali or something, could be run in real time to continuously probe our security. So again, instead of building the fort and just assuming it'll work, is there a way to kind of continually throw whatever challenges, whatever known threats are out there, at it? I'm just wondering if that's part of a broad design approach to this infrastructure, or if there's a possibility for that.

I think it should be possible to do a lot more in this direction. I do not have any specific idea off the top of my head, outside of what we just mentioned with fuzzing. Maybe, Christos?

So Google is offering a great project which is called OSS-Fuzz, and basically you can submit your open source project, and they have a library API for fuzzing, so they will run your code and they will actually open bug reports for you. They run your code against arbitrary inputs using all three sanitizers, I think: MSan, UBSan and ASan. And this is a great way to get cheap cycles to run your functions against whatever you want. So if packages want to get more secure, they should just subscribe to that, and actually there is also money back: Google will pay you money if you sign up, and bugs are found and fixed in your build. So that's a great plug for Google, although I don't work for them.

If you want to do it in pkgsrc, since we have the framework, we can easily, for each and every target, build the system up to the package you want to test, then build this one with ASan, TSan, UBSan and so on, then run its test suite, and easily script that through pkgsrc. Petra, maybe?
I just wanted to comment that it's probably not useful if pkgsrc has tests for Firefox, if Firefox already runs them themselves. So it depends on the package whether it makes sense to do this from pkgsrc, or whether the project we are using is doing it themselves. Yeah, I think we should disable test suites by default when building in bulk, but then tell pkgsrc how to actually run them if people want to have it. So as I said, you can do something like a test target equals something, and yeah, it's not run as part of the build normally, except I think you can set a variable that will run it after building. Absolutely.

The other thing, just as you mentioned Google's offering: I don't know if David Maxwell still does that, but there were the security scans, what was the name of the company? Coverity. Coverity, right. So their approach, last time I looked at it, was that the thing you want to test must be in pkgsrc, and they use that for building, and then they test it with their static analyzer, which is normally a commercial, very expensive product. And they'll send you a report, and you can look at all the bugs found; there are like dozens of them, typically. Yeah, thanks, Benny.

Is there anyone else who, yeah, over there? Thanks a lot, first. I'm actually not very familiar with NetBSD or pkgsrc, so this was a great, overwhelming introduction for me. Thank you. I just cloned the repository, and well, it was just about one gigabyte, so I'm very sorry for blocking the network here. And I was wondering why it's so huge: at a glance, it seems like all the ports are in there, and a lot of other stuff already. Have you thought about splitting that up and just offering, without the bootstrap tarball, the source system itself? Can someone else answer that? Because this is not where I'm the most familiar. So I'm going to give a half-cynical answer: one gigabyte is not a lot of disk space.
Well, downloading one gigabyte is not nice, but you could start from the tar.xz, which is less than 30 megabytes, and I think it's relatively reasonable to be downloading 30 megabytes. Thanks. Okay, so should I conclude here? Okay. Thank you.
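For reference, the hardening defaults discussed above map onto a pkgsrc configuration roughly like this. The variable names are as I recall them from pkgsrc's mk/defaults; treat this as a hypothetical sketch and check mk/defaults/mk.conf in your own tree for the exact names and accepted values:

```make
# Hypothetical /etc/mk.conf fragment for the pkgsrc hardening toggles
# mentioned in the talk.
PKGSRC_USE_SSP=		strong	# stack-smashing protection
PKGSRC_USE_FORTIFY=	strong	# _FORTIFY_SOURCE libc checks
PKGSRC_MKPIE=		yes	# position-independent executables
PKGSRC_USE_RELRO=	full	# read-only relocations (BIND_NOW when "full")
```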