Okay. Hi. Good afternoon. Who's enjoying the conference? About half of you. Who's learned something new? Well, that's great. Yep. That's the reason we're here. And I have one more opportunity to maybe try to share something else that's a little new. So, welcome to my session on taming attestation for the cloud-native world with Parsec, a title that was so long that it actually had to be truncated off the end of the digital display just outside of the room. It is a long title, but I'm going to try to justify all of the parts of it as we go through. So, a quick business card. My name is Paul Howard. I'm a principal system solutions architect with Arm, based out of Cambridge in the UK. I've been with Arm for a little over four years now, and I focus on solutions that span hardware and software and try to get the best out of both. So, if you're wondering what somebody from the CPU architecture company is doing at a cloud-native conference: hardware and software working well together is very important. I clearly need a higher resolution selfie, or maybe the picture's fine and I just look like that in real life. A lot of my work is investigations and prototyping, but also open source development, and I'm actually one of the founding maintainers of the Parsec project, which is a CNCF Sandbox project. Other facets of my online presence you can see there, and you're welcome to make use of them, so if you want to connect with me or follow up on anything you hear this afternoon, please do. So, here's what I'm going to cover. This is a session on Parsec, but it's a session on new directions in Parsec. There isn't going to be time today to cover Parsec in depth; it's not going to be a deep dive on Parsec architecture. I will introduce it and go through the motivations for it, because maybe not everyone here is familiar with it yet, but also because some of that context and introduction will help with the main topic of the afternoon, which is attestation.
So, I'm going to talk about some future work in Parsec to expand its feature set into the world of attestation, and I'm also going to bring in another open source project called Veraison. Now, Veraison lives and breathes attestation and is concerned with the verification of attestations. So, I'm going to be talking about Parsec for evidence gathering and Veraison for verification, and we're going to see how these two projects can be combined. And then we'll see to what effect, and I'm going to focus on two examples that I think are particularly interesting and hopefully relevant to us. The first is the use of attestations as part of establishing a secure channel between two entities, where we're working with the industry on an extension to TLS 1.3 that builds attestation into the handshake. And the second is going to bring in another CNCF project, namely SPIRE, and talk about how I think that SPIRE can consume the work that we're doing in Parsec and Veraison to do hardware-based node attestation in a new way. I will, I hope, leave some minutes at the end for Q&A. I don't mind being interrupted briefly if I need to clarify or just repeat something, but let's save more involved questions for the end, just in the interests of running to time. Okay, so let's begin here with the cloud-native edge. What does this mean? Well, edge computing nodes were traditionally gateways. In an IoT application, the role of a gateway is to sit between the endpoint devices, which are typically non-IP devices, and to talk over those local protocols. So think of your Bluetooth, your ZigBee, Modbus, those kinds of things. A gateway can gather data from endpoint devices, and it can deliver actions or commands over those same protocols. And then, of course, in turn, the gateway is an IP-capable device. It's on the internet. It can go and talk to services in the cloud.
As for the kinds of compute workloads that would run on gateways, those tend to be embedded, very tightly hardware-integrated, singular of purpose, flashed in the factory. And they would be doing things like protocol translation. The compute power of devices like that would generally be quite modest. So that, if you like, is the gateway model of edge computing. So what's a cloud-native model? As the number of connected devices increases, so too does the volume and the complexity of the data that they generate. And backhauling that kind of volume of data for processing centrally in the cloud starts to make less sense economically, or in terms of performance, or even sometimes for regulatory reasons. What this means is that we see a drive towards processing more and more of that data locally at the edge, on edge computing nodes that have more compute capability built into them, able to run more sophisticated workloads doing things like ML inference or sensor fusion. So if our workloads are more sophisticated, then they are looking more and more like the stuff that we're used to deploying in the cloud. The edge becomes more like an extension of the cloud rather than this very rigid, locked-down embedded gateway that we had over here. So the edge becomes a place where we want to roll out and orchestrate applications and services just like we can in the cloud. And for this reason, we can call this a cloud-native edge. So: gateway model, cloud-native edge model. And that's great. But we do hit a challenge when it comes to security, and it's because we encounter a collision of worlds. The edge sits precisely astride two very different engineering paradigms: the world of the cloud and the world of IoT devices. In the cloud, we have rich workloads in high-level languages. We have multi-tenancy. We have CI/CD. We have containers and orchestration. Everything is very fluid, very portable.
We don't even really see the hardware platform, much less care about it. We're just writing and deploying software. Now, in the world of IoT, it's very different. We're dealing with devices that are running outside of the cloud. They may be in tamper-prone environments. They're also far less uniform. There's lots of platform diversity, and suddenly you can't escape needing to know a lot about the platform and the hardware. A great example of where this is true is in the application of hardware-backed security. If we are in a tamper-prone environment, then secure assets like secrets and private keys are really going to need hardware-backed protection. And hardware-backed security is very platform-specific. Some edge nodes will have something like a Trusted Platform Module, the TPM. There could be some other kind of hardware security module or a secure element. And there's also the possibility that discrete hardware of that kind is not cost-effective, or it complicates the design of the device too much, and we might then choose to build secure services in software, but isolated from the rest of the system using isolation features that are built into the CPU. We'll see an example of that later on. So we have fragmentation. And this throws up roadblocks when we want to be cloud native. The APIs that these hardware security technologies offer are very low level. They're very device-specific. They mean that you're writing code in the C language with an embedded mindset, tightly coupled with the hardware and needing quite a bit of specialized knowledge. So it's a world away from that high-level, portable code that we aim to write if we really want a cloud-native edge. Now, up until recently, the best available solution to this fragmentation problem with hardware security was the PKCS#11 interface, also known as Cryptoki. And even today, you're going to find that that is the incumbent solution for cases where application code needs to be bridged to hardware security.
And we find that all manner of shims and bridges have sprung up, both proprietary and in open source, making it possible to use PKCS#11 to talk to TPMs, to HSMs, secure elements, and so on. And it's okay as far as it goes, but it doesn't go far enough for a cloud-native edge experience. This is for a few reasons. You'll find, as soon as you try to deploy in this way, that PKCS#11 implementations are notorious for following the standard imperfectly. And even with that aside, it's still a very low-level developer experience. We're either having to write code in C, or we're having to use something like a foreign function interface to bring all those crypto and key management functions up into another language. So it's still a very close-to-the-metal way of interfacing with hardware security. Improving on this status quo is the role of Parsec, the CNCF Sandbox project. This is why Parsec exists. Now, Parsec is more than just an API like PKCS#11. It's a microservice that handles all of that interop and all of the low-level details of hardware security on a whole variety of platforms, a set that is growing over time. Where possible, it cuts out all of those nasty PKCS#11 shims. It can talk natively to the TPM API. It can talk to secure elements and other hardware security solutions. And if you need a new one, it's open source: go contribute one. And if you have an interop bug, go fix it. Go contribute the fix, and everybody benefits. Now, the Parsec API is not defined in the C language. This one is language-neutral. Parsec defines it as a set of serialized contracts on a wire. And this means that it can be surfaced into programming languages in more creative ways, ways that are more fluent and idiomatic for consumption in that language. So at the lowest level, we're only transacting bytes on a transport with system calls. We're not bringing in a lot of C crypto functions. And the use of IPC and a microservice means that the client applications can be in containers.
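To make the "bytes on a transport" idea concrete, here is a minimal sketch in Python of a length-prefixed request frame sent over a local socket pair. The framing here (a 4-byte length, a 2-byte opcode, then the payload) is purely illustrative and is not Parsec's actual wire format, but it shows the shape of the idea: the client only needs to serialize and exchange bytes, not link against C crypto functions.

```python
import socket
import struct

def frame(opcode: int, payload: bytes) -> bytes:
    # Illustrative framing: 4-byte big-endian length, 2-byte opcode, payload.
    return struct.pack(">IH", len(payload), opcode) + payload

def unframe(sock: socket.socket):
    header = sock.recv(6)
    length, opcode = struct.unpack(">IH", header)
    return opcode, sock.recv(length)

# A socketpair stands in for the IPC transport between a client and the service.
client, service = socket.socketpair()
client.sendall(frame(1, b'{"key_name": "device-key"}'))
opcode, payload = unframe(service)
print(opcode, payload)  # 1 b'{"key_name": "device-key"}'
```

Because the contract lives at the byte level, each language binding is free to expose it in whatever way is most idiomatic for that language.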
We can even have multiple logical applications sharing that service in a multi-tenant fashion if we need to. And the vision for Parsec is that we just want it to become part of the platform, really. We want it to become a supported package in Linux distributions, in Yocto recipes. It becomes a pervasive, low-friction path: convenient, portable, high level, and evolving and improving over time through expert community contribution. Okay, let's look at a reason why all of that's useful. Suppose we have an edge computing device that's going to run some workloads that are rolled out to it from the cloud. These workloads could be containerized modules, for example; think of a technology like AWS Greengrass or Azure IoT Edge. And we want these devices to be provisioned and registered securely with the cloud. We want to be sure of their identity and provenance. We can do the provisioning step like this: we create a private key on the device and a corresponding public certificate, which is then chained to some CA and shared with the cloud, so that the cloud can trust the device. And then at run time, when the device is in the field, it can talk to the cloud securely by means of a mutually authenticated TLS connection. The cloud and the device share their certificates with each other. The device proves ownership of that private key by signing for the TLS handshake. And then finally, the secure channel is established, which is foundational to whatever else needs to happen in the product stack to deliver and execute those workloads. You'll see this pattern, or some variation of it, in systems that need to do secure onboarding. And it's a great fit for Parsec, because on the edge devices, those private keys are resources that you absolutely want to protect. Ideally, we would want to provision these keys directly within a hardware-enforced secure boundary and then access them later to do the signing.
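The provision-then-prove pattern just described can be modelled in miniature. In this toy sketch an HMAC over a fresh server challenge stands in for the TLS handshake signature, and a plain dictionary stands in for the CA-backed registry; these are stand-ins only. In a real deployment the device key is asymmetric, the cloud holds only the public certificate, and the private key never leaves the hardware boundary.

```python
import hashlib
import hmac
import os

# Provisioning time: the device creates key material, and the cloud records
# what it needs to verify the device later. A shared secret stands in here
# for the device's key pair (real TLS uses an asymmetric key, so the cloud
# would hold only the public certificate, never the key itself).
device_key = os.urandom(32)  # in reality: generated inside the secure boundary
cloud_registry = {"edge-node-01": device_key}

# Run time: the cloud sends a fresh challenge; the device signs it to prove
# possession, much as the TLS handshake signature proves key ownership.
challenge = os.urandom(16)
proof = hmac.new(device_key, challenge, hashlib.sha256).digest()

def cloud_accepts(node_id: str, challenge: bytes, proof: bytes) -> bool:
    expected = hmac.new(cloud_registry[node_id], challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)

print(cloud_accepts("edge-node-01", challenge, proof))  # True
```

The fresh challenge is what makes the proof unreplayable, which is the same role the handshake transcript plays in TLS.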
And if Parsec becomes the default low-friction path for achieving this, then I think there's a greater chance that customers will adopt hardware security, because right now the lowest-friction path is the out-of-box provisioning solution that's offered by the product stacks. That normally means generating your key pair in the cloud and then importing it onto the device. So it was never provisioned within the device, and you don't know where it's been in the meantime. And it could be stored just as a file on disk, protected with nothing more than file system permissions. So what Parsec aims to do is offer a comparable level of ease and simplicity, but with a stronger security stance as a result. And this is the kind of use case that it supports already. So now we take the next step. And this is one of the more exciting opportunities that we have with Parsec, and we have it precisely because Parsec is not locked against a fixed crypto API like PKCS#11. It's its own project, its own microservice. It has its own API that can grow organically. And the opportunity that we have is to grow Parsec beyond doing only key management and cryptographic operations, while still providing those crucial platform abstractions for portability. So we've seen that we can provision our private key with hardware protection, and we can use signatures in TLS to prove ownership of the key. But we are only proving ownership of that key. We aren't proving anything else. Now, TLS is going to work whether that private key is hardware-protected or not. So the signature proves ownership of the key, but it has nothing to say about the history or provenance of the key, and nor does it have anything to say about the overall state of the platform: its hardware, its firmware, or any of the other components that are running on it. And so the answer to that is to ensure that the private key is attached to a platform that we can vouch for.
We can vouch for it because we have a chain of measurements of all parts of the system, going right back to the device's hardware root of trust, and we can verify all of those measurements remotely. You may recognize that: that's remote attestation. It may be familiar to you. The remainder of my talk is going to be about how we can take the existing portable key management and signing capabilities of Parsec and grow those out to include the kinds of primitives that we need to support remote attestation flows, while maintaining that abstraction and that portability. And then I'm going to talk about how we can make use of those capabilities once we have them. Okay, let's look a little closer at the components of a remote attestation flow. There are essentially three aspects to it: endorse, attest, and verify. And this flow diagram indicates where and how those stages take place. Endorsement is the process by which a manufacturer or provisioning entity can make a statement about what is valid: if a device has these properties, this platform state, this configuration, then it's trustworthy. And the provisioning entity makes these statements to a separate entity, the verifier, or the verifier service. This verifier service then becomes the authoritative database of valid devices, measurements, and configurations. The manufacturer can then deploy a device to the field, and that device might then go and register with a cloud service, much like we saw a couple of slides ago. Only now it can do more than just prove ownership of the private key: it can also supply a package of signed evidence to attest to its own state and its own configuration. The cloud application or service then relies upon that, so it becomes the relying party. And that reliance is based on this critical final strut in the triangle, which is verification.
So the device's signed evidence is presented back to the verification service, which has previously received the manufacturer's endorsements for that device. The verifier can then match up the endorsed characteristics with the actual presented characteristics, and if they match, the verifier can tell the relying party: yes, we can trust this device; it's a known-good device in a known-good state. Now, verification services exist in the ecosystem. Some device vendors provide them; some product stacks come with them readily available. But I want to take the opportunity today to bring in this other Linux Foundation project called Veraison. Veraison is not a CNCF project. It's actually being nurtured within the Confidential Computing Consortium, which is a different branch of the Linux Foundation, but it is open source, and it is relevant to us here because Veraison is concerned with the verification part of the process. Veraison is in fact a set of reusable components, or building blocks, that can be used to accelerate the development of verification services that offer a high degree of ecosystem compatibility by adhering to open standards. It defines APIs for provisioning and for verification. Now, I can't go into detail on Veraison today, but I do want to bring it into the conversation, because the work that we're doing on attestation in Parsec is aimed at matching the capabilities of Parsec to those in Veraison. So in essence, it's making it possible for a Parsec-enabled attesting device to bring its evidence to a Veraison-based verification service. Okay, let's come back to Parsec. Why is Parsec in the picture? Well, we've already seen its role in key management, so we know that the role of Parsec is to provide simplicity, abstraction, and portability in the face of a diverse hardware ecosystem.
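The endorse, attest, and verify triangle just described can be sketched in a few lines of Python. This is a conceptual model only: an HMAC key stands in for the device's asymmetric attestation key, a dictionary stands in for the verifier's endorsement database, and the claims format is invented for illustration rather than taken from any real evidence format.

```python
import hashlib
import hmac
import json

# 1. Endorse: the manufacturer tells the verifier what "good" looks like for
#    this device: its attestation key and the expected firmware measurement.
attestation_key = b"factory-provisioned-attestation-key"  # stand-in for an asymmetric key
verifier_db = {
    "device-123": {
        "key": attestation_key,
        "endorsed_measurement": hashlib.sha256(b"firmware-v1.2").hexdigest(),
    }
}

# 2. Attest: in the field, the device measures its own state and signs the
#    result, producing a package of signed evidence.
claims = {"id": "device-123",
          "measurement": hashlib.sha256(b"firmware-v1.2").hexdigest()}
payload = json.dumps(claims, sort_keys=True).encode()
evidence = {"payload": payload,
            "signature": hmac.new(attestation_key, payload, hashlib.sha256).digest()}

# 3. Verify: the verifier checks the signature with the endorsed key, then
#    matches the reported measurement against the endorsed one.
def verify(evidence: dict) -> bool:
    claims = json.loads(evidence["payload"])
    record = verifier_db[claims["id"]]
    sig_ok = hmac.compare_digest(
        hmac.new(record["key"], evidence["payload"], hashlib.sha256).digest(),
        evidence["signature"])
    return sig_ok and claims["measurement"] == record["endorsed_measurement"]

print(verify(evidence))  # True
```

Note that both checks have to pass: a valid signature over the wrong measurement fails, and the right measurement with a bad signature fails too.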
And these things continue to matter once we're talking about attestation, because it turns out that attestation is also device-specific and causes us similar problems if we want our code to be portable. We're going to concentrate on two different root of trust technologies. They're not the only two that Parsec can work with, but we are focused on these two for attestation, at least to begin with. So we have the Trusted Platform Module, the TPM, here on the left, and then on the right we have an implementation of the PSA root of trust. Now, PSA stands for Platform Security Architecture, and PSA Certified is a widely adopted security standard for IoT. The PSA root of trust allows the device's foundational security to be built in a way that is cost-effective and not necessarily reliant on a discrete hardware module like a TPM. So it can be built out of software or firmware components, typically integrated by silicon vendors into their SDKs, and isolated from the rest of the system using an isolation technology that's built into the CPU. Arm TrustZone, for example, would provide that kind of isolation. And then we have these small packets of functionality we call trusted services that run within that architectural isolation boundary. Conceptually it's comparable to the TPM, but the implementation is different, and then of course the APIs are different and the data formats are different. So again it's a fragmentation problem, and we're back very much in Parsec territory, needing a common abstraction that allows client code to be portable and simple. So when I talk about taming attestation in the title of my session, that's what I mean. In those boxes we can see examples of the data formats used: a TPM will use the TPM 2.0 data structures, while a PSA platform uses something known as the Entity Attestation Token, or EAT.
And we're doing some work in Parsec right now to expose both key attestation and platform attestation APIs that can work across these distinct root of trust backends, and they're going to work in terms of a common encapsulation format that uses something called the Conceptual Message Wrapper, or CMW. Now, this wrapper is being nurtured by the RATS working group, which is one of the IETF working groups that focuses on attestation. These wrappers are self-describing, and they're keyed on a registered media type, which makes them an ideal format to convey between components that don't want to hardwire themselves to any single physical representation. So they can deal with multiple representations: they can dispatch to the correct kind of processor by peeking first at that media type and then assuming that the rest of the data is organized according to that type. The slide has a link to the IETF draft that goes into much more detail on how that works, because that was really just a lightning presentation of it, and there's a lot more to it. These common encapsulation formats can also be applied end to end throughout an attestation flow, for all stages: for endorsement, for verification, as well as for gathering the evidence. So Parsec can provide them for all platform types, beginning with the two examples shown here, and Veraison can consume them. So it's a nice story for bringing these two open source projects together. Okay, so for the last part of the presentation, I'd like to look at two areas where we are applying this work. Let's look at the attested TLS proposal first. This is another one that's being developed within the IETF. Some of my Arm colleagues are contributing to this, in collaboration with engineering teams across the industry. There's a link at the top to the proposal in the IETF Datatracker, where you can go and read in much more detail, and go and look at the lovely ASCII artwork diagrams in the proposal.
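Going back to the wrapper idea for a moment, the peek-at-the-media-type dispatch can be sketched like this. This is a simplified illustration in the spirit of the CMW draft, not its actual encoding (the draft defines precise JSON and CBOR serializations), and the media type strings below are made-up examples, not registered types.

```python
import base64
import json

# Each message is self-describing: a media type plus a base64-encoded value,
# so consumers can dispatch without hardwiring a single evidence format.
def wrap(media_type: str, value: bytes) -> str:
    return json.dumps({"type": media_type,
                       "value": base64.b64encode(value).decode()})

# Handlers for the two root of trust backends discussed above; the media
# type strings are illustrative placeholders.
handlers = {
    "application/vnd.example.tpm-evidence": lambda v: f"TPM evidence ({len(v)} bytes)",
    "application/vnd.example.psa-token":    lambda v: f"PSA token ({len(v)} bytes)",
}

def dispatch(message: str) -> str:
    msg = json.loads(message)           # peek at the media type first...
    value = base64.b64decode(msg["value"])
    return handlers[msg["type"]](value)  # ...then hand off to the right processor

print(dispatch(wrap("application/vnd.example.psa-token", b"\x01\x02\x03")))
```

Adding support for a new evidence format is then just a matter of registering another handler; nothing upstream of the dispatch point has to change.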
We looked at TLS before; it's obviously foundational to the connection between an edge computing node and the cloud. Until you have a TLS connection, you can't really do anything else. TLS, by its nature, excludes any peers that cannot prove ownership of the correct private key. Now, the attested TLS proposal goes beyond proof of key ownership and brings attestation into scope for the handshake phase. There's a real elegance to this, because it makes attestation a prerequisite to the establishment of the secure channel. It means that by the time you have a secure channel, the device has already been attested and verified, so it's a very neat way of excluding devices that can't prove themselves to that level. It closes a threat window that might otherwise exist if you set up the secure channel first and then do attestation afterwards. It doesn't mean it's necessarily going to suit every kind of workflow for secure channels, but the onboarding use case that I mentioned earlier is one example that might benefit from this kind of approach. The diagram is a simplification; the IETF proposal has all of the details, but I do want to stress what's essential about the proposal. You'll probably be familiar with TLS based on X.509 certificates, but what this proposal introduces is a whole new credential type. It means that attestation evidence can be used as an alternative to X.509, making use of some of the extension facilities that are present in the design of TLS. That then allows us to make use of the common abstractions and wrapper formats that I talked about on the previous slide. The secure channel can be established based on that same trust relationship between the manufacturing entity, the attesting device, and the relying party on the verification side, which, again as I've indicated, can be built using components from the Veraison project. Okay, other details I've had to gloss over; you can go and read about them. Follow the link.
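The ordering guarantee is the key point, and it can be modelled conceptually: if evidence does not verify, no channel ever comes up. Everything in this sketch is a stand-in: `verify_evidence` represents a round trip to a Veraison-style verification service, and `handshake` compresses the whole TLS 1.3 exchange into one function, purely to show where the attestation check sits.

```python
# Conceptual model of attested TLS ordering: the attestation credential is
# checked during the handshake, so a peer with bad evidence never gets a
# secure channel at all. Names and structures here are illustrative.
def verify_evidence(evidence: dict) -> bool:
    # Stand-in for presenting evidence to a verification service.
    return evidence.get("measurement") == "known-good"

class HandshakeError(Exception):
    pass

def handshake(credential: dict) -> str:
    # With the proposed new credential type, attestation evidence can be used
    # in place of an X.509 certificate; verification happens *before* the
    # channel is established, closing the post-connect threat window.
    if credential["type"] == "attestation-evidence":
        if not verify_evidence(credential["evidence"]):
            raise HandshakeError("attestation failed: no channel established")
    return "secure channel established"

print(handshake({"type": "attestation-evidence",
                 "evidence": {"measurement": "known-good"}}))
```

A peer presenting tampered evidence raises `HandshakeError` instead of completing the handshake, which is precisely the exclusion property described above.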
All of this work is of course being done out in the open. You can follow along with the POC effort for this, and that's going on in GitHub. I've got some links assembled on my final slide, which we'll look at in just a second. But before that, let's finish up and talk about another potential application of this work, which I was discussing with the SPIRE community towards the end of last year. Now, SPIRE and Parsec already share a little bit of history, and there is a YouTube video with myself and Andres Vega talking about the use of SPIFFE identities when calling Parsec, so that Parsec can work in a multi-tenant fashion. That was in 2021, I think, for the CNCF production identity day, and it's on YouTube. But for today, let's look at SPIRE node attestation. SPIRE has workload attestation and node attestation. Node attestation is to verify the provenance of the compute node on which a workload is running, and SPIRE has a plug-in architecture that allows nodes to be verified via different characteristics. One of those verifiable characteristics could be a hardware root of trust that is on the node. There are node attestation plug-ins available today that can already use a hardware root of trust if it's based on a TPM. But with the new attestation functions being added to Parsec, which is portable, it raises the possibility of doing node attestation based on an abstract root of trust: using a TPM when it's available, but using other technologies that might be on the platform, such as the PSA root of trust that I mentioned before. And you have that similar client-server separation that we saw in the case of TLS. SPIRE node attestation is separated into an agent plug-in and a server plug-in. So the proposal here is to create an agent plug-in that consumes the Parsec APIs and a server plug-in that consumes the Veraison APIs. And there are Go client libraries for both Parsec and Veraison.
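The proposed division of labour between the two plug-ins can be sketched as follows. Everything here is hypothetical: `ParsecClient` and `VeraisonClient` are invented stand-ins for the real client libraries (which are written in Go, as are SPIRE's actual gRPC-based plug-in interfaces), and the method names are illustrative only.

```python
class ParsecClient:
    """Hypothetical stand-in for a Parsec client running on the node."""
    def get_platform_evidence(self, nonce: str) -> dict:
        # The real service would gather signed evidence from whichever root
        # of trust the platform has (TPM, PSA root of trust, ...).
        return {"nonce": nonce, "measurement": "known-good"}

class VeraisonClient:
    """Hypothetical stand-in for a Veraison client reachable from the server."""
    def verify(self, evidence: dict, nonce: str) -> bool:
        return evidence["nonce"] == nonce and evidence["measurement"] == "known-good"

class AgentPlugin:
    """Runs alongside the SPIRE agent; gathers evidence from the local root of trust."""
    def __init__(self, parsec: ParsecClient):
        self.parsec = parsec
    def gather_evidence(self, nonce: str) -> dict:
        return self.parsec.get_platform_evidence(nonce)

class ServerPlugin:
    """Runs alongside the SPIRE server; hands evidence to the verification service."""
    def __init__(self, veraison: VeraisonClient):
        self.veraison = veraison
    def attest_node(self, evidence: dict, nonce: str) -> bool:
        return self.veraison.verify(evidence, nonce)

agent = AgentPlugin(ParsecClient())
server = ServerPlugin(VeraisonClient())
evidence = agent.gather_evidence(nonce="abc123")
print(server.attest_node(evidence, nonce="abc123"))  # True
```

The attraction of this split is that neither plug-in needs to know which root of trust technology is underneath: the agent side talks to the Parsec abstraction, and the server side talks to the Veraison abstraction.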
Now, there isn't a POC of this in the works just yet, but I am really hoping to get one started, and it would be great for the Parsec and SPIRE communities to come back together. So if you're interested, do please come and talk to me, because I'd love to engage. Okay, so that's it. Some useful resource links there; you can follow all of these if you download the deck. Thanks once again for coming. Thanks for listening. And if I've done this right, you should still have some minutes left for any questions or comments. Yes. Sorry, can you ask again? Okay, so the question is: is there anything for runtime attestation, as opposed to boot attestation? So I think anything that you can measure is potentially in scope with this work. Boot attestation is generally what you think of with a hardware root of trust and platforms like a TPM, for example. But what we're proposing with Parsec and Veraison is extremely flexible on both sides, so there definitely is scope for measuring components within the system that are not just the boot loader, not just the firmware, but potentially going beyond that to encompass other things as well. Okay, if there are no other questions, thanks very much. That QR code is my session feedback code. If you loved the session, please go and tell me. If you hated it, please remember how busy you are and that you don't have time to fill in feedback forms. And enjoy what remains of the conference, and have a safe trip home.