[inaudible] We had a team, and we had a platform. However, there was no Kubernetes, no Docker; everything was pretty good for its time, everything in version control and orchestrated using Puppet, but there were lots of brittle bash scripts that needed regular tending. And we took that platform through ISO 27001 security certification, and the experience of managing security back then was pretty difficult. There was lots of manual work, external scans using things like Nessus, custom vulnerability checks in Nagios, and very limited tooling for automation. The point of all of this is that the cycle of technology change moves pretty fast, and it's only getting faster. The difference in the next decade is likely to be significantly bigger than in the last one. [inaudible] Most things in technology very rarely go in the direction we think they're going to. So you're going to hear an awful lot about keys, about certificates and about signing this year, and it's a fairly safe prediction that code signing in our software development lifecycle is going to become pretty much standard. Public key cryptography has served us well for more than 30 years. [inaudible]
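To make the signing idea concrete, here's a toy sketch (not from the talk) of how a public key signature over a build artefact works, using textbook RSA with deliberately tiny numbers. Everything here is illustrative: real systems use 2048-bit-plus keys and padded signature schemes, never raw RSA like this.

```python
import hashlib

# Toy RSA signing with textbook-sized primes (p=61, q=53).
# A sketch of the mechanism only -- NOT secure at this key size.
p, q = 61, 53
n = p * q                            # public modulus: 3233
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (modular inverse)

def digest(msg: bytes) -> int:
    # Reduce a SHA-256 hash into the modulus (toy-sized, so collisions abound).
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    # Only the holder of the private key d can produce this value.
    return pow(digest(msg), d, n)

def verify(msg: bytes, sig: int) -> bool:
    # Anyone with the public key (n, e) can check it.
    return pow(sig, e, n) == digest(msg)

sig = sign(b"release-v1.0.tar.gz")
print(verify(b"release-v1.0.tar.gz", sig))   # True
print(verify(b"tampered.tar.gz", sig))       # almost certainly False
```

The asymmetry is the whole point: signing needs the private key, verification only the public one, which is what lets a chain of trust be checked by anyone.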
[inaudible] This is a visualisation of Shor's algorithm, developed in the 1990s by the mathematician Peter Shor, and it's widely viewed as proof that quantum computers could potentially break public key cryptography, including Diffie-Hellman key exchange and algorithms like RSA. This might be theoretical, but some researchers think it could happen in the next decade, on a day known as Q-day, when most of our cryptography would be broken, our certificates would be vulnerable to man-in-the-middle attacks, and our encryption would be cracked. Now, there are a lot of different actors involved in building quantum computers, including nation-states and three-letter agencies, so we may never actually know that this has happened.
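A small aside to show why period-finding matters: Shor's insight is that factoring reduces to finding the order r of some a modulo N. The sketch below (not from the talk) does the period-finding step classically by brute force, which is exactly the part a quantum computer speeds up exponentially.

```python
from math import gcd

def find_period(a, N):
    # Smallest r with a**r == 1 (mod N). Brute force here;
    # this is the step Shor's algorithm accelerates on quantum hardware.
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_factor(N, a):
    # Classical post-processing from Shor's algorithm: turn the period
    # of a mod N into non-trivial factors of N.
    assert gcd(a, N) == 1
    r = find_period(a, N)
    if r % 2:
        return None                  # odd period: retry with another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None                  # trivial square root: retry
    return sorted((gcd(y - 1, N), gcd(y + 1, N)))

print(shor_factor(15, 7))  # → [3, 5]
```

For N = 15 and a = 7 the period is 4, and gcd(7² ± 1, 15) yields the factors 3 and 5. RSA's security rests entirely on this factoring step being infeasible classically.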
And it's serious enough for governments to take action with a move towards new kinds of algorithms resistant to this threat of quantum computing, post-quantum cryptography, and in the US, NIST, the National Institute of Standards and Technology, has been running a competition for the last few years, and the algorithms were selected in 2022, and they've all got very cool names. Any algorithm with a Star Trek reference has got to be good, right? But at some point, it's likely we'll all need to switch our keys, our certificates, and our infrastructure over to these new methods. So, coming back to signing, once we start signing everything, what we're really doing is building chains of trust, and the reality of any chain of trust is that there has to be something at the bottom that we can actually trust. And it's obviously very important that we trust our build systems, the things that produce our artefacts, and there are a couple of different dimensions to this. The first being, can we trust our build to always produce the same thing? Now, this may be kind of non-obvious, but there's not necessarily any guarantee that the thing you're building is exactly the same every time you build it. And if it's not exactly the same, then you can't really guarantee its validity. Even slight differences may introduce unexpected behaviour or even new security vulnerabilities. And this can happen if you're using timestamps, if your file ordering is volatile, and for a whole host of other reasons. So how can we ensure that our builds are completely deterministic? Well, this reproducible builds concept has been around for a long time, and it aims to do exactly that. It's a set of software development practices aimed at creating bit-for-bit identical artefacts every time we run the process. And lots of large open source projects already practice this, and we again hit the issue of what can we trust? If we use pre-built binaries in our build pipeline, can we know that those aren't compromised?
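The timestamp and file-ordering problems mentioned above are easy to demonstrate. Here's a small sketch (not from the talk) of packing files into a tar archive reproducibly: ordering, timestamps and ownership are all pinned, so two runs produce bit-for-bit identical bytes.

```python
import hashlib, io, tarfile

def deterministic_tar(files):
    """Pack a {name: bytes} dict into a tar archive that is bit-for-bit reproducible."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name in sorted(files):          # fixed ordering, not filesystem order
            info = tarfile.TarInfo(name)
            info.size = len(files[name])
            info.mtime = 0                  # fixed timestamp, not time.time()
            info.uid = info.gid = 0         # fixed ownership
            tar.addfile(info, io.BytesIO(files[name]))
    return buf.getvalue()

files = {"b.txt": b"beta", "a.txt": b"alpha"}
h1 = hashlib.sha256(deterministic_tar(files)).hexdigest()
h2 = hashlib.sha256(deterministic_tar(files)).hexdigest()
print(h1 == h2)  # → True: identical hash every run
```

Leave `mtime` at the current time, or iterate the files in whatever order the filesystem returns them, and the hashes diverge between builds, which is exactly the non-determinism reproducible builds set out to eliminate.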
And with that in mind, some folks have started talking not only about reproducible, but also bootstrappable builds, where our entire build chain is also built and can be verified. So before we build, we build the thing we use to build, but where does that actually stop? Can we trust pre-built operating systems? Now, there's a train of thought that even the smallest general purpose operating system is now too big to be auditable and verifiable by a human. And there's lots of interesting work happening in this space, with projects trying to build the smallest possible thing that can boot hardware and build compilers, which can then build other things, and so on. And these are generally written in low-level languages with the aim of being human-auditable, so that at least some programmers are capable of reading and understanding the entire code base. And we're talking really, really tiny, in the order of a few hundred bytes. So now we're really off down the rabbit hole of trying to find something, somewhere, we can ultimately trust. Even if we can boot hardware with something tiny that we can fully audit, can we actually trust the hardware? Now, you might say at this point that no one cares about hardware anymore, right? We're all in the cloud, but the cloud is still, and always will be, just someone else's computers. And the world of silicon is notoriously proprietary. There are lots of proprietary features built into modern chipsets that you may never even know about. And in the hardware space, we're operating almost entirely on trust. Not just the chips themselves, but all the tooling to design and build them is also proprietary and unavailable for us to verify. And that's one of the things that's driving the creation of open source silicon.
And there are lots of interesting projects in this space, from OpenRISC, with the aim of creating a fully fledged open source processor, to more specifically security-focused projects like OpenTitan, who are building an open source design for root-of-trust chips for validating hardware and software. And these projects are all about having those designs available for folks to verify and audit. And there's an argument to be made that computer architectures have remained relatively unchanged for more than 30 years. Yes, they've got smaller, they've got more powerful, but fundamentally the way the CPU works, virtual memory, paging, multi-tasking operating systems are all quite similar. And we're still using the same paradigms in programming. And these architectures have features that could be considered contributory to certain classes of vulnerabilities, particularly around memory safety. So, are there changes in computer architecture which can reduce the attack surface? Well, there's a team at the University of Cambridge in the UK who have been developing a new instruction set, CHERI (Capability Hardware Enhanced RISC Instructions), which is designed specifically to mitigate software security vulnerabilities. And the way CHERI works is by introducing a new set of processor primitives that provide a mechanism for fine-grained memory safety and process isolation directly in the hardware. So it's a combination not just of hardware but of toolchain support to reduce the number of vulnerabilities that an attacker can exploit. So basically the idea of least privilege, but highly scalable and highly efficient, as it's done directly in the CPU. And CHERI has been around as a concept since around 2010, but there's now hardware that actually supports it in the form of the Arm Morello board. And there's lots of development going on to widen the ecosystem of software support for this new architecture.
So we've talked about cryptography, we've talked about build systems, we've talked about hardware and architecture, but the real elephant in the room is the rise of artificial intelligence, and specifically large language models. Now, unless you've been living under a stone, you will have seen a lot of traffic about ChatGPT since it was released back in November. And the growth in users has been unprecedented, and millions of people have been trying it out. And, you know, whatever your feelings about ChatGPT in general, if you've actually used it, it's clear pretty quickly that this technology is going to be truly transformational for how we interact with computers. Entire industries are going to be disrupted by its ability to write complex text based on limited human input. And pretty quickly after the release, people started to experiment with having ChatGPT write code. And again, the results here have been pretty extraordinary. Given relatively small inputs, ChatGPT is already capable of writing mostly correct complex applications in multiple languages, manipulating data between formats, translating programs between different programming languages, and even writing programs in fictional programming languages. And whilst it's not about to displace programmers just yet, the field is moving incredibly quickly. We're really only a few weeks into the AI revolution and it's already clear that it's going to drive massive change. These models can't reach out to other systems yet, or to the internet, but that will almost certainly change. And a lot of researchers have been talking about the idea of conversational programming, where we interact with models based on what output we need, and not on how to achieve the result. And the AI model will get to that result by any means it deems appropriate. But this raises some kind of fundamental questions about the future of application software.
If the future involves computers writing programs for us, where we only really care about the output, then the question arises of why computers would use high-level programming languages at all. Computers don't know anything about programming languages. Languages are basically abstractions to make it easier for human programmers to program computers. Everything's either interpreted or compiled down to something the computer can actually run. So, since such a huge proportion of our issues with the software supply chain at the minute come from how we assemble applications from packages and libraries, perhaps the era of AI-generated programs will solve that problem for us. Most likely, though, it just introduces a whole new raft of security issues. We're still back to our starting point about trust. Can we trust our AI model? And more importantly, is it plotting a robot takeover of the human race? So, to return to the disclaimer, some or all of this talk is highly likely to not come true. Predictions are notoriously difficult in technology. And yes, this was me 30 years ago, so what do I know? But I'd like to leave you with a quote from that famous futurologist, Dr. Emmett Brown: your future is whatever you make it, so make it a good one. Thank you.