I'm Matt Jarvis. I'm director of developer relations at a company called Snyk. I'm joined by my glamorous assistant and co-host, whom many of you will be familiar with: Andrew Martin, CEO of Control Plane.

So, at some point in the 2010s, I was sat in a semi-derelict data centre on my own, bootstrapping an OpenStack public cloud, and Andy was in the rather more salubrious environment of the UK government's offices in Marsham Street in London. But we were both working on similar things: how to build and operate on-prem private clouds, primarily using OpenStack. There was no Kubernetes, there was no Docker, but the practices were pretty good for the time: good DevOps principles, everything in code, orchestration using things like Puppet, though there were still lots of brittle bash scripts that needed regular tending. And the experience of managing security back then was very different. There was lots of manual work, scanning with tools like Nessus, custom vulnerability checks in Nagios, and pretty limited tooling for automation.

The point of all of this is that the cycle of technology change moves pretty fast, and it's only getting faster. The difference over the next decade is likely to be significantly bigger than over the last one. So today we're going to talk about a few things that could change the way we think about and manage security even more dramatically over the next ten years. At this point, a huge disclaimer: none of this is in any way a prediction or investment advice. History is littered with examples showing that most things in technology rarely go in the direction we think they will. It's over to you, sir.

And with that, let's start by looking at a history of cloud native: how we've developed, where we've come from, and where indeed we may go. We started with mainframes, moved through co-located rental servers, and then virtual machines opened up the cloud. So where did cloud native come from? Let's see if we can get refocused on this. There we go. We have Linux birthing LXC, the first container runtime, out of Ubuntu. Google contemporaneously builds Borg, its distributed container manager: containers, but not as we know them today. And the Linux kernel ships KVM, the kernel virtual machine, which runs software virtualisation based on the paradigms of Xen, which was itself built out into AWS and leveraged to instigate large-scale public cloud. The resilience and scaling lessons we learned from cloud usage inspired the working patterns and practices that have become cloud native today.

And from there the technology exploded: Docker, Kubernetes, OCI, and the birth of next-generation hybrid runtimes mixing containers and VMs, such as Kata Containers, gVisor and Firecracker. These foundational technologies birthed public cloud and brought us somebody else's computer. The lessons we learned, scalability, elasticity, resilience to underlying machine failure, helped us to commoditise distributed systems. Containers arrived to split the kernel into individual buckets of compute, each container a microcosm of Linux, running on a shared kernel for resource sharing. Containers do not even exist: there is no struct for them in the kernel, and this is why Docker grew to such rapid success. It was a user-friendly wrapper around the deep Linux complexity of container implementation, although it didn't support cross-host networking, data sharing, service discovery, or any of the other differentiators that brought us the good ship Kubernetes.
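Here's a minimal sketch of that last idea: a "container" is just an ordinary process started inside a few fresh kernel namespaces. This is Go, Linux-only, needs root, and is very much a toy; real runtimes add cgroups, a pivoted root filesystem, networking, capability dropping and much more.

```go
// container.go: a toy "container" that is nothing more than a process launched
// into fresh kernel namespaces. Linux only; run as root.
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	// New UTS (hostname), PID and mount namespaces: the kernel has no
	// "container" object, only these per-process isolation primitives.
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

The shell you get still shares the host kernel, but sees itself as PID 1 with its own hostname, which is essentially the trick that Docker wrapped in a usable workflow.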
So we rocked on with Kubernetes for a while, until people started to question whether containers were better than VMs after all. Surely we could take the hardware-assisted isolation of virtual machines and the lightweight container security barriers and merge them together. Welcome to the next generation of runtimes: micro-VMs, various different forms of isolation, with the container image formats remaining compatible. Containers on a rainbow-fired trajectory into the stratosphere.

But this isn't the classic high-level white-collar trends talk. This is what's coming through the CNCF's next zero to five years' worth of project and industry trends. We touch some of these technologies on a daily basis so you don't have to.

So, next-generation process isolation. Software abstractions require CPU time to execute, and so virtualisation is a balance of security and performance. We have server-side WebAssembly, where tooling is now starting to proliferate after Docker's alpha driver for containerd. WebAssembly is a binary format for executable code. As I say, Docker and containerd now support it. It already runs inside browser virtual machine implementations, and 40 or so languages, including C++, Golang and Rust, can be compiled straight down into the binary format. However, server-side Wasm is unlikely to replace containers entirely this year, due to its core and some of its file system APIs being under active development, and a lack of developer friendliness, which is actually starting to change thanks to companies such as Fermyon and their Spin platform.

Unikernels: a different view on the same story. We've tried this before. A lack of debuggability and introspection makes them very difficult to use. Approaches are still being investigated, but generally this involves repackaging, which is a developer-hostile step, so we don't expect this to proliferate any time soon.

And Kata Containers, arguably the most mature project in the hybrid container-VM space, will start to be put through more hardcore production use cases: things that require high I/O and challenge some of the layers of abstraction and indirection. Kata has pluggable runtimes, so it can be used with Firecracker, QEMU or Cloud Hypervisor, and it is working to build what will become the standard implementation of lightweight virtual machines.

So, whatever packaging and isolation technologies we end up using, that's really just part of the security puzzle we need to solve when we create modern software. What most of the emerging trends in security are doing is building chains of trust. And the reality of any chain of trust is that there has to be something at the bottom that we can actually trust. It's obviously very important that we trust our build systems, the things that produce our artifacts, and there are a couple of different dimensions here. The first is: can we trust our build to always produce the same thing? This might be non-obvious to some folks, but there's not necessarily any guarantee that the thing you're building is exactly the same every time you build it. And if it's not exactly the same, then you can't really guarantee its validity; even slight differences may introduce unexpected behaviour or potentially security vulnerabilities. This can happen if you're using timestamps, if your ordering is volatile, and for a whole host of other reasons. So how can we ensure our builds are completely deterministic? Well, the reproducible builds concept has been around for a long time, and it aims to do exactly that.
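As a trivial illustration of what "bit-for-bit identical" means in practice, here's a hedged sketch that builds nothing itself: it just hashes the artifacts from two independent runs of the same pipeline and checks the digests match. The file paths are placeholders; the real work of reproducible builds is removing the sources of nondeterminism so that this check can ever pass.

```go
// verify_reproducible.go: compare two independently built artifacts bit for bit.
// A minimal sketch; real reproducible-build setups also pin toolchains, set
// SOURCE_DATE_EPOCH, sort inputs and strip build paths so the digests can match.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func digest(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	a, err := digest("build-run-1/app.tar") // placeholder paths
	if err != nil {
		panic(err)
	}
	b, err := digest("build-run-2/app.tar")
	if err != nil {
		panic(err)
	}
	if a == b {
		fmt.Println("reproducible: digests match:", a)
	} else {
		fmt.Println("NOT reproducible:", a, "!=", b)
	}
}
```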
It's a set of software development practices aimed at creating bit-for-bit identical artifacts every time we run our build process, and lots of large open-source projects already practise this. But again, we hit this issue of what we can trust. If we use pre-built binaries in our build pipeline, can we know that those binaries aren't compromised? With that in mind, some folks are starting to talk not only about reproducible but also bootstrappable builds, where our entire build chain, our entire build pipeline, is also built and can be verified. So before we build, we build the thing we use to build. But where does that stop? Can we trust pre-built operating systems? There's a train of thought that even the smallest general-purpose operating system is now too big to be auditable and verifiable by a human. And there's lots of interesting work happening in this space, with projects trying to build the smallest thing that can boot hardware and build compilers, which can then be used to build other things, and so on. These are generally written in very low-level languages, with the aim of being human-auditable, so that at least some programmers (clearly low-level languages attract a very specific kind of programmer) are capable of reading and understanding the entire code base. And we're talking really, really, really tiny here, in the order of hundreds of bytes.

But now we're really off down the rabbit hole of finding something, somewhere, that we can ultimately trust. Even if we can boot our hardware with something really tiny that we can fully audit, can we actually trust the hardware? Now, you might say at this point that no one cares about hardware any more, we're all in the cloud, right? But as Andy pointed out earlier, the cloud is still, and always will be, just somebody else's computers. And the world of silicon is notoriously proprietary. There are lots of proprietary features in modern chipsets that you may never know about; in the hardware space we're operating almost entirely on trust. And it's not just the chips themselves: all the tooling to design and build them is also, in general, proprietary and unavailable for us to verify. This is one of the things driving the creation of open source silicon, and there are lots of interesting projects happening in this space, from things like OpenRISC, which has the aim of creating a fully fledged open source processor, to more specifically security-focused projects like OpenTitan, who are building an open source design for a root-of-trust chip for validating hardware and software when we boot machines. These projects are all about having those designs available for folks to verify and audit.

And there's an argument to be made that computer architectures have remained relatively unchanged for more than 30 years. A lot of the conceptual things, virtual memory, multitasking operating systems, paging: they've got smaller, they've got more powerful, but fundamentally a lot of how the CPU works is fairly similar. Because of that, we're still using a lot of the same paradigms in programming. And these architectures have features that could be considered contributory to certain classes of security vulnerability, particularly around memory safety. So are there changes in computer architecture which can help us to reduce the attack surface?
Well, there's a team at the University of Cambridge in the UK who've been developing a new instruction set: CHERI, standing for Capability Hardware Enhanced RISC Instructions. This is designed specifically to mitigate software security vulnerabilities. The way CHERI works is by introducing a new set of processor primitives which provide a mechanism for fine-grained memory safety and process isolation directly in the hardware. So it's a combination of toolchain support and hardware to reduce the number of vulnerabilities that attackers can exploit: the idea of least privilege, but highly efficient and highly scalable because it's done directly in the CPU. CHERI is a concept that's been around since about 2010, but there's now hardware that actually supports it, in the form of the Arm Morello board, and there's lots of development going on to widen the ecosystem of software support for this new architecture.

Right. So let's move up from the hardware into the runtime, look at the kernel, and see how we expect Linux to develop. First of all, we have Rust, a programming language that has been gaining in popularity due to its focus on memory safety and performance. Despite scant initial support for Rust in the Linux kernel, the inclusion of a recent device driver written in Rust is a significant step towards a new era for the kernel, starting to assuage the risk of memory safety bugs and bringing in memory management in a way that the kernel hasn't done natively before. Rust's place as a memory-safe, low-level systems language distinguishes it, as long as unsafe mode isn't used to bypass those compile-time guarantees, and its developer community expects to see more Rust modules turning up in the kernel in the future. Those sceptical of Rust's claims to kernel contribution have quietened, and because the kernel refuses to break backwards compatibility, now that we have a device driver in Rust, for it to be removed from the kernel it would have to be ported back into another language. This is the inertial start of a change that may take many years to come to effective fruition, but in terms of security benefits it heralds a new era for memory safety in the Linux kernel.

eBPF: so hot right now. A technology increasingly used in recent years to enhance the performance, security and observability of various Linux-based systems. It allows for the creation of in-kernel programs that can be used for a wide variety of functionality, such as packet filtering, system call tracing and security enforcement. One of the most notable features of eBPF is the eXpress Data Path (XDP) program type. This is high-speed packet processing which can be offloaded to a smart network card for acceleration: you can drop packets before they even hit the kernel. Additionally, ongoing work on kernel runtime security instrumentation, the eBPF Linux Security Module, closes the gap between asynchronous observability and synchronous enforcement by executing this code in a synchronous, blocking LSM. Providing a consistent interface for security enforcement makes it possible for some of the new CNCF security projects to leverage eBPF to enhance application security, obviating the need for consumers to write BPF code themselves. But while eBPF has many advantages, we're also seeing kernel TLS offload, which means that running code on a shared kernel can expose things like perfect forward secrecy tokens to a shared eBPF runtime.
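To give a feel for that XDP path, here's a hedged sketch using the cilium/ebpf Go library. It assumes a pre-compiled BPF object file, xdp_drop.o, containing a program named xdp_drop that returns XDP_DROP; eth0 is a placeholder interface, and it needs exactly the sort of elevated privileges we're about to discuss.

```go
// attach_xdp.go: load a pre-compiled XDP program and attach it to an interface,
// so unwanted packets are dropped before the kernel network stack ever sees them.
// A sketch only: "xdp_drop.o", the program name "xdp_drop" and "eth0" are
// placeholders, and this needs CAP_BPF / CAP_NET_ADMIN (or root) to run.
package main

import (
	"log"
	"net"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/link"
)

func main() {
	coll, err := ebpf.LoadCollection("xdp_drop.o")
	if err != nil {
		log.Fatalf("loading BPF object: %v", err)
	}
	defer coll.Close()

	iface, err := net.InterfaceByName("eth0")
	if err != nil {
		log.Fatalf("looking up interface: %v", err)
	}

	l, err := link.AttachXDP(link.XDPOptions{
		Program:   coll.Programs["xdp_drop"],
		Interface: iface.Index,
	})
	if err != nil {
		log.Fatalf("attaching XDP: %v", err)
	}
	defer l.Close()

	log.Println("XDP program attached; press Ctrl+C to detach")
	select {} // keep running so the link stays attached
}
```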
So, as with any technology, there are always compromises that affect how we deploy and architect our systems and our applications. There are two Linux capability requirements for BPF: CAP_BPF and CAP_PERFMON. Most of the container breakouts we've seen over the past few years have required these kinds of elevated privileges, so again there is a balance between commoditising and making things safer for adoption.

But hiding the contents of memory, execution and disk from a hostile user is difficult, and that is the premise of confidential computing. It requires, as we've been talking about, trust in the supply chain all the way back through the hardware and microcode used to run these platforms. But organisations have existing deep levels of trust with, for example, their cloud vendor, and we're seeing things like AWS Nitro able to support workloads we never thought we'd see in the cloud, such as quant and hedge fund secret-sauce algorithms being run on shared public compute, because of the isolation and secrecy guarantees of these platforms. Longer-term goals such as homomorphic encryption, where data is processed in an encrypted state, are also being researched. However, these technologies are nascent and at an early stage, which makes them reasonably impractical for now, due to limitations in the instruction sets and to slower execution and longer processing times from the overhead of encrypted processing. A concrete use case from the OG Kubernetes product owner David Aronchick: Web3 bringing the promise of compute with data. This is running trusted workloads on untrusted nodes. So if we have a shared data set, but we want to mutate parts of it on a distributed system like the internet, how do we achieve that? This is the most useful high-scale confidential computing use case at the moment.

So, you're going to hear a lot about keys, about signing and about certificates this year. It's a fairly safe prediction that artifact signing becomes pretty much standard in our software development lifecycle. Public key cryptography has served us well for thirty-plus years and has underpinned all the innovation in the web and internet spaces, but there might be problems approaching in cryptography. This is a quantum computer, and quantum computers not only look cool but work very differently from classical computers, replacing the concept of bits, zeros and ones, with quantum bits, or qubits, based on the properties of quantum mechanics. Unless you're a quantum physicist, most of quantum mechanics is completely mind-bending for normal humans, but most folks might be familiar with Schrödinger's cat, the thought experiment about a hypothetical cat that's both alive and dead at the same time whilst in a sealed box with a fatal radioactive element. This rather macabre example illustrates how particles can be zero or one or both, and it's that superposition phenomenon that quantum computers are based on. They are theoretically very good at certain classes of problems that are very hard for classical computers, and factorising large numbers into primes is one of these.
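To see why factoring matters for our keys, here's a toy, purely classical sketch of the number-theoretic trick behind Shor's algorithm, which comes up next: find the period r of a^x mod N, and gcd(a^(r/2) ± 1, N) usually reveals N's factors. The brute-force period search below is hopeless for real key sizes; the quantum part of Shor's algorithm is what makes that one step fast.

```go
// shor_toy.go: a classical illustration of the reduction behind Shor's
// algorithm: factoring N via the period of a^x mod N. For real RSA moduli the
// brute-force period search is infeasible; a quantum computer finds the period
// efficiently, which is the threat to public key cryptography.
package main

import "fmt"

// period returns the smallest r > 0 with a^r ≡ 1 (mod n), by brute force.
func period(a, n int64) int64 {
	v := a % n
	for r := int64(1); ; r++ {
		if v == 1 {
			return r
		}
		v = (v * a) % n
	}
}

// modPow computes b^e mod m by square-and-multiply.
func modPow(b, e, m int64) int64 {
	result := int64(1)
	b %= m
	for ; e > 0; e >>= 1 {
		if e&1 == 1 {
			result = (result * b) % m
		}
		b = (b * b) % m
	}
	return result
}

func gcd(x, y int64) int64 {
	for y != 0 {
		x, y = y, x%y
	}
	return x
}

func main() {
	const n = 15 // toy "RSA modulus": 3 * 5
	const a = 7  // a base coprime to n
	r := period(a, n) // the quantum speed-up is entirely in this step
	half := modPow(a, r/2, n)
	// If r is odd or the trick fails, Shor simply retries with another base a.
	fmt.Printf("period of %d^x mod %d is %d; factors of %d: %d and %d\n",
		a, n, r, n, gcd(half-1, n), gcd(half+1, n))
}
```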
This is a visualisation of Shor's algorithm, developed in the 1990s by the mathematician Peter Shor, and it's widely viewed as a proof that quantum computers could potentially break public key cryptography, including Diffie-Hellman key exchanges and algorithms like RSA. This might be theoretical, but a lot of researchers believe it could happen in the next decade, on a day known as Q-day, when most of our crypto would be broken, all our certificates become vulnerable to man-in-the-middle attacks and all our encryption is cracked. There are a lot of different actors involved in building quantum computers, including lots of three-letter agencies and nation states, so we might never even know that this has happened. And it's serious enough for governments to take action, with a move towards new kinds of algorithms more resistant to quantum computing. This is known as post-quantum cryptography, and in the US, NIST has been running a competition for the last few years; the algorithms were actually selected in 2022. They've all got pretty cool names, and any algorithm with a Star Trek reference in it has got to be good. At some point it's likely that we'll all need to switch our keys, our certificates and our infrastructure over to these new methods.

So we've talked about a bunch of stuff: crypto, hardware, build systems, the kernel. But the real elephant in the room is the rise of artificial intelligence, and more specifically large language models. Unless you've been living under a stone, you'll have seen a lot of traffic about ChatGPT since it was released back in November; it's hard to believe that's only a few short months ago. The growth in users has been unprecedented, with millions of people trying it out. GPT-4 has just been released, which is an order of magnitude more powerful, and if you've actually used it, it's clear pretty quickly that this technology is going to be truly transformational for how we interact with computers. Entire industries are going to be disrupted by these abilities to write complex text based on fairly limited human input. Pretty quickly after the release, people started to experiment with having ChatGPT write code, and again the results have been pretty extraordinary. Given relatively small inputs, ChatGPT is already capable of writing pretty much correct complex applications in multiple languages, manipulating data between formats, translating programs from one programming language to another, and even writing programs in fictional programming languages. It's clearly not about to displace human programmers just yet, but this field is moving incredibly quickly, and it's already clear that this is going to drive massive change, no matter what you might believe about the field in general. I actually gave a version of this talk in Seattle around the start of this year, and I had a note here that these models can't reach out to other systems on the internet; there are already systems in place now that are able to do exactly that, so things have already changed pretty dramatically. A lot of researchers have been talking about the idea of conversational programming in general: the idea that we would interact with models based on what output we need, not on how to achieve the result, and the model will get to that result by any means it deems appropriate. But this raises some pretty fundamental questions about the future of application software.
If that future involves large language models writing programs for us, where we only care about the output, then the question arises of why computers would use high-level programming languages at all to get to that result. Computers don't know anything about programming languages; all languages are basically abstractions to make it easier for human programmers to program computers, whether interpreted or compiled down to something the computer can actually run. And since such a huge proportion of our issues with the software supply chain at the minute come from how we assemble software using packages and libraries, perhaps this era of AI-generated programming will solve that problem for us.

So this gives birth to a new form of job: prompt engineering, attempting to trick the machine into giving us the answer that we want. One thing we've missed off the slide: if you haven't played with jailbreaking GPT yet, you can bypass all the controls, because everything is just a prompt on top of the language model itself. And this will lead us into an interesting new space where new jobs are created. Machines will provide us with an embellishment or an augmentation to our workflow rather than a full replacement, but then we will see classes of jobs at risk. How long will it be before AI eats our collective breakfasts? Ideally a little bit further out. So perhaps we'll see things like integrated application security platforms, supply-chain-to-runtime integration and attestation, and personal user security, which may require access to and indexing of all your data, whatever form that takes; you be the judge of whether it is sensible to cede control of those things to an unproven model. How long might it be? Well, let's look at DeepMind to get an idea of the timeline. Looking at the arc by which the organisation develops features, we see rapid exponential growth. We're at the leading edge of the curve for public AI use cases, and of course we'll see innovation and adoption dramatically accelerate.

So what will that look like? Automated attack simulation has been growing in popularity since DARPA's 2016 Cyber Grand Challenge, which was a competition for automated binary analysis and exploitation. We also see things like deepfakes infecting the information landscape, China passing laws requiring watermarking of AI-generated content, and universities and organisations banning the use of AI-assisted writing. But the greater threat is probably as yet unrealised: AI tooling that is able to effectively iterate through implants, or generate novel exploit code, and chain those things together in new and unusual ways may wreak havoc on our systems and our defensive models, and require greater defensive capabilities. Moving forward, the spiritual or literal successors to ChatGPT are expected to be able to generate weaponised payloads for existing and novel vulnerabilities, and eventually novel-to-human approaches to exploitation.

Therefore automated remediation is a potential foil to the harbinger of AI doom. Attackers need only find a single entry point to our system, while defenders must protect all aspects: attackers think in graphs, defenders think in lists. So defenders can proactively probe their own defences with the same automation that attackers are using, and automated remediation may do things such as isolating breached workloads, black-holing compromised routes and revoking compromised key material. My colleague Francesco is talking with Matt Turner from Tetrate on automated cloud native incident response with Kubernetes and service mesh, to explore some of these ideas further, in the security and identity room at 11:55.
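For a sense of what "isolating a breached workload" might look like when a machine rather than a human pulls the trigger, here's a hedged sketch using Kubernetes client-go: it labels a suspect pod so that a pre-existing deny-all NetworkPolicy selecting quarantine=true takes effect. The namespace, pod name, label and kubeconfig path are all placeholders.

```go
// quarantine.go: a sketch of one automated-remediation primitive: label a
// suspect pod so that an existing deny-all NetworkPolicy matching
// quarantine=true cuts it off from the rest of the cluster.
// Namespace, pod name and kubeconfig path are placeholders; a real system
// would be driven by detection signals, not hard-coded values.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatalf("loading kubeconfig: %v", err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("creating client: %v", err)
	}

	// Merge-patch the pod's labels; the NetworkPolicy does the actual blocking.
	patch := []byte(`{"metadata":{"labels":{"quarantine":"true"}}}`)
	_, err = client.CoreV1().Pods("production").Patch(
		context.TODO(), "suspicious-pod", types.StrategicMergePatchType,
		patch, metav1.PatchOptions{})
	if err != nil {
		log.Fatalf("quarantining pod: %v", err)
	}
	log.Println("pod labelled quarantine=true; deny-all policy now applies")
}
```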
There is one extra element here, which is false negatives in remediation. From a security perspective, remediative actions that take down the availability of a system and result in production downtime are a potential issue, and that will mean a slow and measured adoption of these technologies; but AI systems make this a potentially viable approach in future.

So I think what we're saying is that this is going to be massively impactful across the security landscape, and potentially for the way we write applications and the way we attack and defend them. And ultimately we're still back to our starting point: can we trust our AI model, and, more importantly, is it plotting a robot takeover of the human race? I think our general point in this talk is that the security landscape is likely to become much more complex than it's ever been in the past, and the technical challenges of some of these emerging threats are going to be much more difficult to solve. It will become imperative that together, as an industry, we rethink how we educate and train the next generation of developers, of systems folks and of security practitioners to meet these challenges, and Andy and I are both involved in some of these efforts, from bodies like the Linux Foundation to running things like CTFs.

There is obviously no such thing as secure software in most cases. The reality of the world we live in is the friction between velocity as the key driver for business success, how fast we can get our products to users and understand the data coming back from those users, versus the increasing complexity of security issues. It's a circle that can't easily be squared, and this is a fragile balance that may not survive contact with ever more complex vulnerabilities and automated adversaries.

So, to return to the disclaimer: some or all of this talk is very likely to not come true. Predictions are notoriously difficult in technology, and that was me 30 years ago, so what do I know? But we'd like to leave you with a quote from that eminent futurologist Dr Emmett Brown: your future is whatever you make it, so make it a good one. Thank you.