Right, hello everybody. Sorry for those of you who were expecting the Cilium and Kubernetes talk — the gentleman is a new father, and so this is a talk I had pre-prepared for other conferences. I hope it will be of interest here today.

So I'm Andy. I've done various different things. You could call it DevSecOps, but it's a disgraceful term — I think that's just an engineering position. I work at Control Plane, a container and Kubernetes security engineering firm. Right now a lot of our work is educating people who've deployed containers. Containers may not be entirely secure by default but, like everything else, are as secure as they're configured to be.

So I want to talk about open source exploits, web server exploitation, and how containers have impacted the exploits we've seen over the last few years. Open source exploits, vulnerabilities in open source code, and insecure containers: do they contain? What can they contain? And how have they defended against major recent vulnerabilities?

Our users expect their data is safe when they hand it over to us. Surely our users deserve some level of privacy. But if we can't protect the code and data in the system, is it possible to protect any data at all? Well, if we travel down unspecified and untested code paths — as bugs are — then we can't be sure the system is secure. So is bug-free code possible? And if not, is anything completely secure? Or, to lower the bar, is anything completely secure from teenagers?

So this talk is about security: the anatomy of some recent major open source vulnerabilities, how containers affect security models, and defense from future vulnerabilities. Right, we'll go through some internet-melting open source vulnerabilities with a focus, as I said, on web servers and containers, and see what part containers had to play.

So: Heartbleed, a catastrophic flaw in OpenSSL, a commonly used encryption library referred to as a modest code base of outsized importance.
Considered theoretically unexploitable, Cloudflare opened a competition, and within nine hours people had retrieved the private keys from the shared memory of their servers. It affected 25 to 50% of major sites at disclosure, and impacted Apache and Nginx, email servers, XMPP, SSL VPNs.

It's a problem in SSL heartbeat handling. A heartbeat is a message used to keep a connection alive: a user sends a question with an expected reply word, plus untrusted data containing the length of the word that they're sending. As we know, that should always be validated server-side — never trust any client-supplied data. So in this case, when the client says "hat, this is 500 letters long", the server responds with "hat" and the next 500 bytes — well, the next 497 bytes — of private memory. Memory that should never be leaked, that may contain encryption keys, and that obviously breaks the security model of OpenSSL. This attack can't be seen in the logs because it occurs at handshake time, and the poor guy involved missed validating a variable containing a length. So it's a buffer over-read, and it was discovered by fuzz testing with American Fuzzy Lop. For those of you who don't know, that's a genetic-algorithm-based fuzzer: you provide it the source code, it instruments and recompiles it, and then significantly reduces the brute-force problem space by intelligently attempting to make your program crash. The mitigation: update OpenSSL.

So how do containers help with a Heartbleed scenario? Well, they don't really. The host's memory was protected, but this attack only leaks process memory anyway. Heartbleed is essentially indefensible with a container. But containers do help a little: having immutable deployment artifacts in a swift CI pipeline means that the time to patch is significantly reduced over traditional configuration-managed systems.
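To make that missing length check concrete, here's a toy shell sketch — an analogy only, not OpenSSL code; the function names and the fake "server memory" string are mine:

```shell
# Toy model: the "heap" holds the stored heartbeat payload ("hat")
# followed by adjacent private memory that must never leave the process.
SERVER_MEMORY='hat....PRIVATE_KEY_MATERIAL....'

# Vulnerable reply: trusts the client-claimed length, like the missing
# bounds check in the real heartbeat handler.
heartbeat_vulnerable() {
  claimed_len="$2"
  printf '%s' "$SERVER_MEMORY" | cut -c1-"$claimed_len"
}

# Fixed reply: never echo back more bytes than the client actually sent.
heartbeat_fixed() {
  word="$1"; claimed_len="$2"
  if [ "$claimed_len" -gt "${#word}" ]; then
    return 1   # silently drop the malformed heartbeat, as the patch does
  fi
  printf '%s' "$SERVER_MEMORY" | cut -c1-"$claimed_len"
}
```

`heartbeat_vulnerable hat 30` hands back the fake key material along with the word; `heartbeat_fixed hat 30` just refuses.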
So in kernel and container terms, Heartbleed is like building a castle, securing the walls, and then having the guards give away the secrets.

On to the next vulnerability: Shellshock, a bash vulnerability allowing local privilege escalation by adding code to a specially crafted environment variable containing a function declaration, and executing code out of the calling context. So Heartbleed leaks data; Shellshock can be used for privilege escalation. Some impact: CGI web apps, SSHD, DHCP clients, OpenVPN again, and all Linux boxes, all BSD boxes, all post-1991 Unix deploys, and Macs — basically the world, for most of us here. Although Debian and Ubuntu, with dash as the default shell, were actually not vulnerable.

So here's a Shellshock call. The payload is encoded — well, it's just sitting in the User-Agent field. Obviously that's just a bash function with a no-op, and then whatever you want on the end executes, as I say, out of context. That's executed server-side when called remotely over HTTP. The lower half of this Burp Suite output demonstrates the contents of the response — in fact the contents of the previous ping ID and password. You'd only see this attack if you were logging all HTTP headers in your web server logs.

The bug was in the initial implementation of function exporting and importing, which was written by Brian Fox on the 5th of August 1989. So this was before the web existed, before HTTP existed, before Linus had released Linux version 1. It would be incredible if this bug wasn't being exploited in the wild at some stage during those intervening 25 years. There are loads of CVEs all associated with this bug, but of note is that it was eventually fixed by Florian Weimer of Red Hat, and he said he found the bug after fuzzing with American Fuzzy Lop. The mitigation, again: upgrade.

How do containers help against Shellshock? Well, they provide process isolation.
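As an aside, the canonical one-line check for the bug described above — widely circulated at the time — exports a crafted function definition through the environment; on a vulnerable bash the trailing command runs as the new shell starts:

```shell
# On a vulnerable bash this prints "vulnerable" then "hello";
# on a patched bash it prints only "hello" (any warning goes to stderr).
env x='() { :;}; echo vulnerable' bash -c 'echo hello' 2>/dev/null
```

Over CGI, the same payload simply rides in on an HTTP header such as User-Agent, which the server copies into an environment variable before spawning bash.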
So the escalation of privilege, or the running of that untrusted code, is only within the container's PID namespace. As opposed to Heartbleed, we're able to cushion the impact on the host by segregating the process namespace — segregating the process space that they're able to run code in. Back to the castle: the perimeter is secure, but this is like allowing the Trojan horse in through the front door and locking it in the dungeons. It's safe, it's observable, and once it gets out and starts trying to do its thing, it can be effectively shut down and terminated.

So, the next vulnerability: DROWN, a TLS-based attack — Decrypting RSA with Obsolete and Weakened eNcryption. It's a protocol attack again, and similar to Heartbleed it can't be defended with containers. The problem here is old US export-grade cryptography and weakened symmetric ciphers; 33% of sites were vulnerable at disclosure. The mitigation: disable SSL version 2 if that was being run for some unusual reason, upgrade OpenSSL, and rotate any secrets that may have been leaked.

As a side note, there are a lot of TLS attacks that containers simply can't help with. Why so many, and why the cluster after May 2013? Our friend Edward Snowden — a renewed vigor was placed upon examining the protocols and algorithms used for encryption, searching for backdoors or intentionally introduced vulnerabilities. So, was this more secure for being open source? Well, given the cavalcade of academics and researchers required to find this particular attack, it was probably in the realm of nation-state exploitation.

Back to the castle: this is like securing the entrance to the castle with an obstacle, only allowing people over a narrow bridge, and then turning a blind eye as the moat is drained while continuing to trust that it provides security. Can containers help? Well, this is a protocol attack. No amount of containers is going to protect us from broken protocol specifications.
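The server-side fix for that class of attack is configuration: stop offering the obsolete protocols entirely. A hedged nginx sketch (the directives are real; the exact protocol list you allow is your call):

```nginx
# Offer only modern TLS; SSLv2 and SSLv3 are simply never negotiated.
ssl_protocols TLSv1.2;
ssl_prefer_server_ciphers on;
```

Remember that DROWN was a cross-server attack: any other daemon sharing the same RSA key — an old mail server, say — has to have SSLv2 disabled too.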
The only benefit we reap from containers here is being able, once again, to redeploy quickly.

On to the next vulnerability: Dirty COW was a Linux kernel bug which allowed privilege escalation on every Linux kernel since July 2007. The URL is dirtycow.ninja — potentially the worst of all the logo-vulnerability URLs. It's a race condition in the copy-on-write implementation in the kernel. Again, this is on every Linux device, and embedded devices probably have no route to upgrade. Exploitation of the bug doesn't leave anything unusual in the logs, and the mitigation, as with everything else, is: upgrade.

Of note, this bug was found in the wild by the researcher involved, by packet-capturing all traffic into his server. One of his sites was compromised, he was running this rolling packet capture, he extracted the vulnerable payload, reproduced it, and submitted it back to the Linux kernel mailing lists.

So how do containers help? Well, they couldn't contain this bug, because the security subsystems that containers rely on are all kernel-based subsystems. Containers rely on the kernel for protection: namespacing, cgroups, invocation of further extensions. So if the bug is in the kernel, there's almost nothing they can do. We'll look at exactly why that is, and what we can do to protect ourselves against unknown vulnerabilities, later. Back to the castle: it's like building a castle on the biggest rock you can find and then being surprised when someone burrows in underneath and steals the crown jewels. If the kernel lets the container's guard down, there's very little the container can do. We'll demonstrate this exploit in a moment.

So, a couple more vulnerabilities. Honorable mention to Cloudbleed, February 2017: an error in an HTML parser triggered the same error as Heartbleed — a buffer over-read — and private data leaked into HTML caches around the world. It probably could have been more secure if open source.
There was a huge number of reads actually required to trigger this particular bug, and — potentially because Cloudflare will never quite reveal the intricacies of their internal development processes — we'll never know. And, because it's been in slides already: Apache Struts and Equifax, a simple example of failing to patch systems. Vulnerabilities will always exist, but timely patch hygiene is imperative. Container networking policies may have helped here. We still don't know the full details, but the Aqua Security team suggests that the data exfiltrated from Equifax was all done on the basis of one RCE. No pivots — just straight through, all the way to the database. Obviously an application architected like that will be vulnerable almost no matter what. IDS, network policies — these sorts of things are required. Again, without more information it's difficult to say; it's still speculation at this point.

So what do these vulnerabilities have in common? Humans inevitably making mistakes, and that will never cease. The people reporting a lot of these bugs were not part of the core project teams — so opening a project up and open-sourcing it opens up a huge pool of resources, with demonstrable advantages. Fuzzing applications yields fantastic results; this should be performed whenever practical. And containers are not a panacea: there are plenty of security exploits, vulnerabilities, and application-level protocol problems that cannot be fixed by namespacing and sandboxing application runtimes.

Okay — a call to action for open source: review and fuzz other people's code. Donate to open source projects. OpenSSL is horrendously underfunded and, comparatively, overused. And as you've seen, people are migrating to other SSL libraries rather than trying to fix it — but we need some money in there. So finally: major vulnerabilities — kernel, TLS, remote execution — for the last few years have had no mitigation except upgrade.

Let's demo a container breakout with Docker.
This is potentially the most insane attack to demo, because it relies on a race condition, and I'll be demoing inside a VM while my CPU is frequency-stepping — I'm actually on battery, happily. But it's the most recent serious breakout, and potentially the most dangerous. Dirty COW, as I said, was in all Linux versions since 2007, and so all Docker versions: because those syscalls go straight from the calling application through to the kernel, there's nothing we can do inside the container itself. It's the container daemon and the kernel that we rely on.

So, what this will do — fingers crossed, okay — yeah, just wait for it to pop. Is that all right? Well, there's going to be a lot of data flowing, so we'll go through it once and then modify it if required.

What we've got here is just a bit of bash to build a container image, and we've also popped some other windows. In the other tmux tabs, we use Sysdig to trace deadbeef — which is the exploit name, hilariously — and the file descriptors on port 1234. Because it's a copy-on-write bug exploitation, the privilege of a piece of memory is changed, and then ptrace is used to try and write to the original piece of memory, exploiting a race condition: racing to write to a piece of memory that root owns, and one in every n attempts, we'll see the race won. A payload is then written into the virtual dynamic shared object (vDSO), which is mapped into all applications by the kernel. That bit of code then opens a TCP connection back on port 1234, which is also running inside this container, mapped to the host, and that will be the control channel. The next window will be dmesg, so we can see what the kernel thinks is happening as this goes through, and the final window at the bottom will be watching this file, temp.x, which is essentially used as a lock, because this is a massively parallel attack.
We're going from Debian, we're running the Dirty COW vDSO exploit, and at this point, if we want to proceed — yes please. This is just to show that Dirty COW is in fact matched to the kernel version. Do we want to run with AppArmor? Well, not in this case. I'm just going to make this a little bit smaller, because there is a lot of output.

Okay, so we can see the madvise call being fired off a lot of times — you'll notice that it barely increments in terms of time — and the ptrace call attempting to write to that piece of memory, to actually take control and write to it while it's still owned by root. This will carry on for quite some time. At the bottom you see that we've got temp.x, currently owned by root. And there we go. The Sysdig trace will continue for quite some time because of the volume of write syscalls there, but if we now just check who we are up here, we're root — and if we have a look at the process tree, we can see that somewhere up there will be the Docker daemon. We can see VirtualBox running, in fact. So at this point: we started off inside a container, we ran an exploit that affected the kernel and a piece of shared code — the virtual dynamic shared object — and used that to open a TCP connection, as root, back to our unprivileged container process, and we are now root inside that box. That's an example of the controls and measures the container puts around a process — the user namespaces and the cgroups — being broken, and we'll look at how to protect against that in a few minutes. In the meantime, let's shut all this down.

Let's start again. So: we know we can't secure everything. We know there are problems in software development that will always generate bugs. And we know that nothing is entirely secure. The cost of formally specifying everything is very high, and formal specifications can still contain bugs. And from an application perspective, security struggles to keep up with the cadence of delivery.
The DevOps revolution means shipping software fast sacrifices security, so unless we start to be penalized for insecure software — as Bruce Schneier suggested in front of Congress — there is no incentive for the bottom line of these companies to prioritize security. Speed of shipping features is a competitive advantage, and this is the case across IoT, phones, cars, wearables — everything. So we're not going to win by slowing down.

Back to software: is it possible to write bug-free code? Well, only with sufficient time, and conformance to specification does not guarantee the absence of bugs. NIST has published an analysis of the state of the art. The subject is quite broad, so we won't cover it now, but it's in the slides for posterity. Essentially: measure software quality and effort, be nice to people, make sure they're well trained — and if you're not measuring it, you can't improve it. There's also David Wheeler's Secure Programming HOWTO, which is a wonderful primer. So reducing bug count generally is one of the routes into this problem, but something we can impact without the ability to change the application code itself is the architecture. So let's examine vendor best practices with regard to deploying containers.

Where do containers excel? We know about these things — but is security one of them? Well, excluding the fact that integration with advanced kernel security features occurred rather late in the Docker lifecycle, are they bad at security? Well, not everything is namespaced: we still don't have namespaced devices, /sys, and bits of /proc and /dev. The daemon runs as root, so ultimately, when we have one of these breakouts like you saw, we're escalating from a root-owned process. Rootless runc is coming — that's an open-source effort; a guy called Aleksa Sarai is doing a lot of the work.
It's about a year's worth of PRs gone in, or waiting to merge, but a lot of changes, meaning that we can actually unshare namespaces without privilege. And while we understand the compromises of isolation mechanisms on shared kernels, cloud providers run stacked containers in VMs anyway. With the advent of things like Intel's Clear Containers — which is essentially a very thin KVM wrapper, or hypervisor wrapper, but able to plug into the OCI spec, sorry, the container runtime interface spec — and things like CRI-O, in fact, we can choose our execution environment for the level of trust we have in a specific container.

So, the good: Docker specifically is actually configured to prevent a large number of attacks by default configuration. The reduction of attack surface inside a container — the reduction of dependencies — does minimize the attack surface per container. It allows a more granular application of privilege or, in fact, capability to be assigned per process. Speed of deployment: running fast CI pipelines immutably means that we can ship patches quickly. Content trust signing — this was touched upon earlier by Lenard — but actually addressing things cryptographically means that we can take manifests and know what we have. It means we can do things like scan everything we know is in production, offline: with union filesystems we're able to scan the individual layers, so when a new CVE feed comes out we can re-scan everything without having to go and check exactly what we've got deployed. And native log drivers, so post-mortem analysis is easy.

This is a list of events that were prevented by default — not comprehensive, of course, but you'll notice that the ones that are not mitigated are all beneath the container, in the kernel. So, how does Docker provide this security hardening? I'll run over these briefly; I'm sure we're mostly aware of these.
Namespaces provide a different window into the kernel for whatever particular entity we're looking at, so processes running within the container cannot see processes without. They've been in the kernel since July 2008 — for example the PID namespace: any time a process starts, it's assigned a PID in both the master namespace and its own particular namespace. And these exist for a number of different things.

Now, user namespaces have been touched on already. User namespaces v1 are, as Lenard said yesterday, incomplete — I think maybe they're almost not fit for purpose, from the perspective of the way Docker uses them and preaches about them as a security model. There are very real dangers in using the root user inside a container, because you're mapped to root outside the container, and that affects everything — the Linux security model is based on users, so it affects lots of things all the way up and down the system. And by extension, this is why this is a hard problem: we want to be able to share bits of data between different containers, and we want to maintain the right permissions on files and processes. As it stands, there's probably still a few years' work to go until this is a complete story.

User namespaces are not on by default for Docker — they have to be activated with a command-line flag when you start the daemon. This is because they were only introduced around Docker 1.10, which means that if they were on by default, a whole raft of containers — all container images, rather — from before that would suddenly not function. So, mapping the root user to a non-root user outside the container really aids in mitigating the risk of container breakouts, but it's something that is incumbent upon each administrator to actually configure correctly on their own system.

So, control groups: another key feature of isolation, providing resource accounting so things don't run away.
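That user-namespace remapping is a daemon-level switch. A minimal sketch, assuming the standard /etc/docker/daemon.json location:

```json
{
  "userns-remap": "default"
}
```

With "default", the daemon creates a dockremap user and maps root inside the container onto an unprivileged subordinate UID range on the host — so a breakout lands as a nobody-like user rather than host root.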
Control groups are now mostly all implemented in Docker; the story is not quite the same for Kubernetes, which still doesn't have network IOPS or block IO properly accounted for, but the story, again, is converging. cgroups obviously give fair sharing of resources and prevention of denial of service.

Again, there are questions around how users actually operate here. At Docker runtime there are various things you can set when you run an image. You can create a read-only filesystem — an immutable container obviously becomes more difficult to write to — however, this doesn't mitigate all attacks, because if you can pull code — if you've got, say, bash and curl installed — then you can just pipe to bash, and you're still executing code despite the immutable filesystem. PID limits and fork-bomb proofing: of course, this is now dependent upon the user namespace story, which isn't complete, so without setting these things you can effectively starve the resources of the host from inside a container. And security options like no-new-privileges: again, it's a case of ensuring that your application doesn't require an escalation of privilege for a certain operation before you're able to activate that option — but these are things that should be on by default.

So, kernel capabilities and process restrictions. Capabilities give us a more fine-grained view into what root can do and avoid a blanket application of all privilege. But the kernel has hundreds of system calls, and obviously a bug in any one of those could lead to a privilege escalation, so only those necessary for a container to do its job should be allowed. This can also protect us from Dirty COW: the exploit demonstrated there relied on ptrace. Now, ptrace has a lot of valid applications, including just listing out processes — ps and strace both rely on that system call.
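As a sketch of that per-syscall restriction — a minimal Docker seccomp profile that permits everything except ptrace. Note this is inverted from Docker's real default profile, which is a whitelist; the blanket deny here is purely illustrative:

```json
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["ptrace"],
      "action": "SCMP_ACT_ERRNO"
    }
  ]
}
```

Applied with something like `docker run --security-opt seccomp=deny-ptrace.json …`, the Dirty COW PoC shown earlier would have its ptrace call fail with an errno instead of winning the race.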
So again, a blanket drop of ptrace would be inappropriate for some operations — but we'll move on to how you can actually run debugging tools from outside a container, by attaching to all the relevant namespaces, and then perhaps we can drop all these debugging tools from our production applications, again reducing the attack surface that we ship inside a container. Docker's default AppArmor profile does deliberately allow ptrace, specifically so that you can run ps inside the container.

Docker actually drops a huge number of capabilities — the container bounding set on the left there cuts off at around 40% — which is beneficial in some ways, but of course it's not a comprehensive story. Not many people really run their own hardened kernels in production, so that's more of a hat tip than a recommendation. But security policies and whitelisting: seccomp, AppArmor, and SELinux are all now integrated with Docker and Kubernetes. The problem there is one of utility. It's not generally easy to statically or dynamically analyze a program and see its entire range of system calls without theoretically exposing it to all the potential actions it could perform. So there's some benefit there, but again, the user experience is not quite what it should be for the average administrator. All that said, when surveyed and asked how many people use security policies — not very many; still less than 50% in production. This is the only way to really be sure that you've locked down what you expect your containers to do, and it should probably be higher.

So, kind of a wall of text here again, but I'll just go through some of it; obviously these are in the slides for posterity. A note here: dropping to an unprivileged user — well, that's just not running as root — and debugging by attaching to the relevant namespaces.
This means, for example, when you're running from scratch you've just got a statically compiled binary sitting in there — or, if you're running a dynamic language, you only put your interpreter in there and whatever libraries you need. Don't install bash, don't install your OpenSSL libraries if you don't need those things — and then run, in this case, BusyBox, or another container, and attach into the namespaces of the container that you're interested in. This gives you the observability that you require. You can also run this debug Docker image as privileged, to do whatever you need to do, but it's just spun up one time: it's only running while its namespaces are open for you to use, and when you close it, everything shuts down again. It's the most secure way to debug things in production, which invariably we need to do from time to time. Obviously, setuid binaries are the worst — shouldn't use them.

Privileged containers essentially act only as a bundling mechanism: they're just a way to ship files around and to run them, because running as privileged puts you in the same namespace as the host for almost all the operations that your container performs. It also means, fundamentally, that if you're in that privileged container as root, you can remount the host filesystem, and then you've got write access to the host. So there really is no protection at all, and privileged mode should only be used in the knowledge of that, as a way of shipping around binaries that may require that level of observability. For example, a lot of monitoring agents require privilege, sadly.

So, just good hygiene: drop all capabilities and add back in only the ones that you need. Running that through a CI pipeline — where maybe your test suite runs with and without those capabilities — is a good way of introducing it to a team who may not be kernel-competent, if you like, or aware of the range of capabilities, because there's a huge, confusing amount.
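Pulling that hygiene together in one place — a hypothetical Compose service applying cap-drop, a read-only root, and no-new-privileges. The service name, image, and the single capability added back are illustrative (a real nginx needs a few more, e.g. to setuid its workers); what your app needs back will differ:

```yaml
version: "3"
services:
  web:
    image: nginx:alpine
    read_only: true             # immutable root filesystem
    tmpfs:
      - /var/run                # writable scratch the process actually needs
      - /var/cache/nginx
    cap_drop:
      - ALL                     # start from nothing...
    cap_add:
      - NET_BIND_SERVICE        # ...add back only what's required
    security_opt:
      - no-new-privileges:true  # block setuid-style escalation
```

Running the test suite against this in CI, then loosening one capability at a time until it passes, is the team-friendly way to discover the real minimum set.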
Enable user namespaces — well, that's the daemon flag again — and syscall whitelists. All these things are relatively obvious, and securing a Docker host can be checked with Docker Bench; kube-bench is the Kubernetes equivalent. Signing images: mandatory.

So, Moby here has decided to have another pop at the way we interact with kernel capabilities by introducing entitlements, which are loosely modeled on the iOS notion of entitlements. That is now available, actually — you can run Moby from head — and essentially what it will do is bake in a set of required permissions, privileges, and capabilities at build time. Then, when you run the image, you have the option to give it a level of trust: if it attempts to, for example, request privilege and you don't trust it, the container just won't start. It looks like a nice way of bucketing up all the privileges that are normally just CAP_SYS_ADMIN. It's yet to fully land, but it's something to keep an eye on.

Kubernetes actually supports seccomp now, so we can be far more granular. fsGroup — sorry, runAsNonRoot — again, just mapping to a non-root user outside the pod that you're running in. And then there are some extra SELinux bits and bobs in there. Kubernetes has got some strange insecure defaults while we're on the subject: there was a default service account token mounted into every pod, up until 1.6 and the introduction of RBAC, and that service token was administrator access to the API server. A very bizarre default choice. That is now fixed, in that the service account has fewer privileges, but RBAC configuration is mandatory for Kubernetes. You've also got to keep the Kubelet ports locked down with TLS, otherwise you can write in there too, and there are some crazy cAdvisor bugs. Going back to the old adage that speed of delivery affects security.

Right — going back quickly to the demo, to demonstrate what it looks like to run the same attack.
But this time with the application of an AppArmor profile — and we'll actually have a look at what that profile looks like. So, again, just have a look; it's a bit noisy, I apologize. Run the AppArmor one — yes please.

So, what does this look like? Well, this is the default Docker AppArmor profile. It's relatively sensible in most places, but this rule here — ptrace peer=docker-default — means that if you're in the docker-default profile, you can run ptrace. In this case, a quick fix for our problem is to just disable that rule, and we'll have a look at how everything responds when we do that. So: we have just had a SIGKILL, and dmesg hasn't actually shown us what we want, but nevertheless that is AppArmor kicking in, terminating — anyway, killing the process off. This is how you would ultimately configure all your applications: minimum set of privilege, and then anything that attempts a system call it's not permitted to make gets faulted and shut down.

So how do we actually generate these security profiles? They are really quite painful — it requires intimate kernel-level knowledge to identify what your application needs. The reference example is probably the Sock Shop demo down here; it's got good defaults for databases, web servers, and microservices. There are various attempts to automate this: all the major container-based IDS solutions will also do some sort of dynamic observation, where they whitelist these calls and generate a seccomp profile for you under the hood.

I've got some quick slides here — they're probably less relevant to the interests of this group, but I've got five minutes, so I'll just whizz through them. CI is probably the escalation point that people want to go for, because it contains all your keys. So there are some nifty tools you can use to, for example, prevent key leakage into GitHub, in this case — git-secrets runs against your actual repo.
Between them, one searches for entropy in strings, and git-secrets for fixed-format strings — actual AWS keys — so they're probably better used as a pair.

So, container scanning — this is focused on the open-source stuff; there are some vendors listed later. Broadly: Clair is CoreOS's offering, integrated into their Quay Docker registry. It also runs open-source, and it will give you essentially the CVE numbers that you have, of which types, in which layers of the images that you post to it. Lynis gets a shout-out for its old-school gritty aesthetic. docker-scan is an interesting attempt to dynamically analyze who is running a container at runtime, which is a less obvious problem than it seems. It can do some nasty trojanning and backdooring as well — demonstrating the fact that you shouldn't trust anything you pull off the Docker Hub. Honorable mentions to OpenSCAP, which is folded into Red Hat right now, and Banyan's, which is older. Docker Bench for Security will essentially enumerate the configuration options of Docker on your host and make sure you haven't left any glaring, wide-open mistakes in there — things like default ulimits. It could actually be more brutal and fail a lot harder, but it just operates in more of an advisory capacity at the moment.

Obviously, your application dependencies are a likely point of escalation as well. There are some nice plugins — Snyk's really nice: it will actually raise pull requests for vulnerable dependencies, and transitive dependencies, on GitHub. It's only JS and Ruby right now. And obviously AFL is the mother of all fuzzers — go fuzz stuff there. Also, there's no point securing all these things if we allow our web applications to have RCEs, opening up an attack surface for people to get into our containers and actually do harm.

So how do we stay secure? Well, ultimately, intrusion detection systems are mandatory, and there's a whole host of container-native stuff available now which is namespace-aware.
As I said, these monitor system calls — in Sysdig's case, they've just launched their Sysdig Secure product, which requires a kernel module to be compiled into all your systems; your security guys might not like that — mine don't. And there are a number of other options which sit in front of the Docker socket and are able to proxy calls. Falco has now actually been folded into Sysdig Secure, but it's the open-source version, and Twistlock and Aqua come very highly recommended.

So, is anything inherently secure? Well, obviously not. Is open-source software more secure than proprietary? Well, no one can really tell us: open-source vendors disclose everything; closed-source vendors live through obscurity and tell us nothing. But fundamentally, open source is secure enough for our needs, as demonstrated by its adoption in major enterprises and governments around the world.

This talk has been about exploits that are scary and shiny — hopefully somewhat interesting — but they're not the only way that people get hacked. The OWASP Top 10 — actually becoming the OWASP Top 10 2017 right now, as soon as they can stop infighting — is the definitive list of the various application-level problems that we need to fix going forward: running consistent security scanning in the pipeline and ensuring that the base level we set does not slip. These are boring, dull, and omnipresent, and application security is just as important as network security and user security. OWASP now essentially recommend that point-in-time pentesting is dead and that we should be constantly testing the security of our systems.

So in conclusion: prepare for the unexpected, secure your networks, secure your application code, secure your users' browsers, and when all else fails, run intrusion detection systems. Thank you very much. We have about one minute left — any questions? Okay, well then, thanks again for your talk. Cheers.