Hi, Paul. Hello, Jürg. You were able to connect successfully. Yes, a few minutes early this time. Yeah, I was early too. I actually had issues connecting through the browser client; I had to download the real client. Yes, I've always been using the native client. For one thing, performance seems to be better with the native client, especially on my MacBook, certainly. Yeah, for sure. I don't think I'm going to share video, though. I don't trust my Wi-Fi to deal with video, audio, and screen sharing at the same time; I think it would result in dropouts. Yeah, good point. Brandon said he would join early as well, so hopefully he will be on shortly. Nice. I'm going to go on mute until he shows up. Yeah, sure. I'll do the same. Hey, Paul. Hey, Jürg. Hi, Brandon. Hey. Hi. How are you doing? Great. Did you manage to test the screen sharing? Let's just do that now. Yeah, I'm going to get the meeting notes set up in a moment. Okay, let's just start. Looks great. Are we coming through? Yep. Okay, so Brandon, I'm going to keep my camera off because I don't entirely trust my bandwidth to deal with the camera as well as screen sharing and audio. All right, yeah, no worries. Could you just page forward a few slides and back, just to make sure that the latency is okay? Okay, looks good. All right, sounds like we're all set. Yeah, I'm going to start writing up the meeting notes. Yeah. Paul, do you have a copy of the link to the slides as well? Yes, hold on. It's actually... We don't need it now. Okay. Yeah, can I send it afterwards? It is actually in the pull request; if you look at the CNCF TOC, at the top of the pull request. All right, I'll grab it from there. Thank you. Greetings, everyone. Hello. Hi. Hello. Hi, Reed. All right, I put the link to the meeting notes in the chat. Please put your name under attendance. Thank you. I will give this another five minutes. Zoom has some connection issues.
So usually people can take it all together. Yes. I know a bit about that. Yep. Unfortunately, it's only this particular Zoom for some reason. Jess and Cap had some issues only with this Zoom, apparently. So there's something special about it. I know that when I was about to connect, it wanted me to be signed in to Zoom to be able to join, and I had a problem because of that. Whereas the other Zoom I was joining fine, because I could just join anonymously. Yeah, I think this meeting does actually require you to have an account. The problem I had last week is that I was signing in with an account, but it didn't accept that account for some reason. I think what happened, Paul, is Arm was resetting all the accounts because we're moving over to Zoom, and it caught me out too. It took me three attempts to get in. Yeah, I ended up creating a temporary account with my Arm email address, which I have a feeling might not have done me any favours for the future either. But even that didn't work. So eventually I signed in with Facebook, which is how I'm signed in now as well, which is why you get to see that lovely mugshot of me with the shades. That's my Facebook profile picture. When we record our attendance, are we supposed to put company names or just our names? Just names is fine. If you add a company name, that's fine as well. Okay. Yeah. And I've noticed quite a few new names lately. Please feel free to add yourself to the member list by opening a PR. Is membership purely elective, Brandon? If we attend a certain number of these, can we just declare ourselves members? Yep, basically. Okay. Yeah. Okay, I think we can get started. We seem to have quite a good number of people here, and we're already almost five minutes in.
So today is going to be a presentation: Paul and a few other colleagues from Arm are going to talk to us about Parsec, the Platform Abstraction for Security, dealing with TPMs, hardware security modules, stuff like that, and other good stuff. And I believe you're submitting this for Sandbox, is that right? That's correct. Yes. Okay. Yep. So yeah, take it away, please. Okay. Well, thanks very much, Brandon. Hello, everyone. Thanks for allocating the time on the agenda for me today. So I'm going to present Parsec, which, as we've said, has now been proposed for donation to CNCF as a Sandbox project. The pull request with our proposal document is currently open. And it is a security-focused project, so I'm excited to be presenting it to the security SIG. Quickly, who am I? My name is Paul Howard. I'm a solutions architect here at Arm, nominally based on the Arm campus in Cambridge, UK. A few of my vital signs are here. If you want to connect or get in touch, I'm paul.howard@arm.com. I am on the CNCF Slack as well as on the Docker community Slack, so do feel free to get hold of me there, or via any of the other ways shown here. My role in Parsec is that I am a maintainer and I provide some technical steering for the contributions that Arm is making. We're also very fortunate to have some of my Arm colleagues on the call today, including Yuke and Yonat, both of whom are regularly making code commits into Parsec, and those names will become familiar to anyone who has been involved or wants to get involved. So the agenda for the presentation is as follows. I'll talk about why Parsec was created, about the problem that it is trying to solve, and about its relevance to cloud native. I'll give a high-level technical overview of the architecture and the long-term vision for evolving this as a community project.
I'll talk about the status of the project today, provide some links and resources where you can dive deeper and learn more, and I'll open up to questions at the end. To be honest, though, I don't mind being interrupted, so if you have a question while I'm mid-flow, do feel free. But let's begin with why. Why does Parsec exist? Well, the Parsec story starts with the recognition that the edge in particular is evolving as a compute platform, becoming a focal point for rich compute workloads. This is being driven by a need to process data close to the source. The volume of data from IoT is growing, and with that growth comes a need to process it and gain insights and actions from that data locally, without a high-cost and high-latency backhaul. And it means that this edge layer, where we might traditionally have done nothing more than, say, protocol translation, is now a place where we're doing analytics and machine learning. It's a place where we're deploying complex workloads. It's becoming a more elastic and cloud-like layer. And in a more elastic and cloud-like layer, we want to have cloud-like development and deployment practices. We want to containerize. We want to orchestrate. We want to make use of that huge momentum that we've built up with cloud-native development practices, and we want those practices to succeed for us at the edge. But that's a challenge. One of the reasons it's a challenge is that we have fleets of edge devices that reside outside of the cloud. The threat landscape is different, and it's a threat landscape that we need to address in the security architectures that we deploy. It's also a challenge because there is diversity and fragmentation across those host platforms, especially around security. So there are different roots of trust, and different provisions for secure services like key storage and cryptography.
And with those different provisions, we get different APIs, and that creates tight coupling problems, because there's now a need to understand these platform hardware features in order to access them. And this just isn't what we want. What we want is agnostic solutions: solutions that are divorced and decoupled from all this detailed physical platform knowledge. And not only that, but we want to avoid any notion of a single workload being the sole resident of the hardware platform. So we want workloads to be decoupled from physical hosts. We need a solution that scales to multi-tenant execution environments where workloads are provisioned and orchestrated in a cloud-native way. And this platform-agnostic, multi-tenant access to secure services is what Parsec is providing. This is where we think it really plays into CNCF as well, because it's creating these new opportunities to decouple workloads from the physical platform, to enable cloud-native delivery and orchestration into this otherwise very diverse, very challenging environment. So here's a visual representation of where Parsec sits. It is creating this new abstraction layer: a common API over variable root-of-trust or cryptographic services that would otherwise be accessed using more specialized APIs, like TPM 2.0 or PKCS#11. This then becomes a uniform software platform in support of runtime or orchestration stacks, and ultimately in support of one or more applications. Parsec really took off when Arm and Docker got together in the spring of last year and collaborated on a solution to this platform-agnostic root-of-trust problem. At the time, Docker Enterprise was already integrated with TPM to bootstrap trust between nodes and Docker Trusted Registry, but they were also looking for a way to make this integration more portable. And this aligned well with the investigations that we were doing within Arm into these platform-agnostic security interfaces.
So we saw eye to eye on the problem and on roughly what was needed to address it, so we got together and decided to create something in the open source community. The executive summary of how those conversations went was: well, do we need to build anything at all? And really the answer to that was yes, because again, the existing APIs were fragmented and specialized. We needed platform-agnostic APIs, and furthermore we needed them to be conveniently consumable in the programming languages that we might be using, in a way that doesn't require screens full of code to achieve simple or common use cases. And of course, we needed multi-tenancy. We didn't see any existing solutions that met all of these requirements. Then the next question is: if we're going to have this common API, what should it look like? For this we mined the PSA, the Platform Security Architecture. This is something that Arm had previously defined as a holistic, end-to-end approach to security for IoT platforms. Inside this architecture there are API suites available, and one of those APIs is the PSA Crypto API. It ticked lots of boxes for us: it's modern, strongly specified, and platform agnostic; it provides all of the required primitives with good algorithm agility; and it suits hardware-backed implementations where the private key material is not exposed. This API choice was really only the start of the story, though; by itself the API isn't the solution. For one thing, PSA was designed originally for constrained platforms, and the PSA Crypto API is specifically a C-language API, where we'd already declared a need for a language-neutral and multi-tenant programming model. So we opted to take the PSA Crypto API as a set of contracts and use those contracts as the basis of an API that would map nicely into any language while also supporting multi-tenancy explicitly.
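To make the "API as a set of contracts" idea concrete, here is a minimal Rust sketch of how C-style PSA Crypto entry points can be recast as numbered, language-neutral operation contracts. The opcode names and numbers here are illustrative assumptions for the sketch, not the real Parsec operation codes, and the real project defines its contracts rather differently.

```rust
// Hypothetical sketch: recasting C-style PSA Crypto entry points as
// language-neutral operation contracts identified by numeric opcodes.
// Names and numbers are illustrative, not the real Parsec contracts.

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Opcode {
    Ping = 1,
    PsaGenerateKey = 2,
    PsaSignHash = 3,
    PsaExportPublicKey = 4,
}

impl Opcode {
    // Any client language only needs to agree on the numeric contract,
    // not on a C calling convention or ABI.
    pub fn from_u32(v: u32) -> Option<Opcode> {
        match v {
            1 => Some(Opcode::Ping),
            2 => Some(Opcode::PsaGenerateKey),
            3 => Some(Opcode::PsaSignHash),
            4 => Some(Opcode::PsaExportPublicKey),
            _ => None, // unknown operations are rejected, not guessed at
        }
    }
}

fn main() {
    assert_eq!(Opcode::from_u32(3), Some(Opcode::PsaSignHash));
    assert_eq!(Opcode::from_u32(99), None);
}
```

The point of the exercise is that once the operations are plain numbered contracts, a Go, Rust, or Python client can all speak the same protocol.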
So here are some use cases that we had in mind, and they certainly aren't the only possible use cases, but we were thinking initially about having a portable root of trust, where we can do things like bootstrapping mutual TLS from a node to a remote component and have that be backed not necessarily by a TPM, but maybe by a secure element, an HSM, or a firmware root of trust running in a TEE. And then thinking much more broadly, more ambitiously: if I'm an application developer, give me a simple and portable way to consume the best available secret storage or cryptographic services on my platform, in my preferred programming language. That is a use case that's loaded with all sorts of requirements, and we'll talk about how Parsec intends to address them. It's also worth remembering that these use cases are not necessarily edge specific. I have talked a lot about edge in the presentation, but data center and cloud are relevant here as well. It's just that at the edge we find these use cases are particularly poorly catered for, due to the amount of fragmentation. So really that, yes? Hi Paul, just a quick question if you don't mind. You talked about multi-tenancy. Do you mean virtualization of these devices, or something a little bit different? So it's deliberately agnostic: it refers to the need to run more than one workload. It doesn't necessarily imply full-stack virtualization or containerization. There is simply a need for multiple applications that are resident on a node, and distinct from each other, to be able to consume Parsec and the secure services behind it. Gotcha. Thank you. Cool. All right. So let's look now at the architecture, and we'll start with the conceptual model. It begins with the API, specifically the PSA Crypto API. We've said that's a C-language interface; it was designed for embedded programming on endpoints.
So what Parsec does is bring PSA Crypto into the application-class world, this richer compute world, by creating a service around the Crypto API. So this is a software service. It represents, and controls access to, the underlying platform hardware. And being a service, it needs a way to be called by applications. We do this by defining an inter-process communication (IPC) layer with a wire protocol that defines the inputs and outputs of each operation. The derivation of this protocol from PSA Crypto is a close one: there is pretty much a one-to-one mapping between contracts in this IPC interface and the operations of the PSA Crypto API. The only difference, really, is that this protocol is now language agnostic. The final piece is an ecosystem of client libraries in popular programming languages. These are designed to create a developer-friendly experience, putting the crypto services directly at the fingertips of application developers. And this is what Parsec really is, conceptually: an onion-skin set of layers decoupling the workload from the platform. So with the conceptual model in mind, let's have a look at how this service architecture actually looks. Imagine that everything you see here is running on an infrastructure or edge node supported by a rich OS, Linux for instance. Here you can see the Parsec service, which represents the underlying hardware and acts as a broker for access to it. The Parsec service is written in Rust. We felt that Rust occupies just the right space for us in terms of having predictable performance coupled with strong safety and security characteristics. The Parsec service itself is organized along the lines of a front-end/back-end architecture. The front end is the listener that provides the service endpoint and implements the IPC wire protocol that I talked about. Now, Parsec doesn't prescribe a transport technology for this wire protocol.
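To illustrate the wire-protocol idea, here is a hedged sketch of a fixed request header carrying a magic number, an opcode, and a body length, encoded and decoded symmetrically. The field layout, sizes, and magic value are assumptions made for the sketch; the real Parsec wire format differs.

```rust
// Hypothetical wire-protocol framing sketch: a fixed 12-byte header that
// precedes each request body on the IPC transport. The layout here is
// illustrative only, not the actual Parsec wire format.

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct Header {
    pub magic: u32,    // identifies the protocol on the socket
    pub opcode: u32,   // which PSA-style operation is requested
    pub body_len: u32, // how many body bytes follow the header
}

pub fn encode(h: &Header) -> [u8; 12] {
    let mut out = [0u8; 12];
    out[0..4].copy_from_slice(&h.magic.to_le_bytes());
    out[4..8].copy_from_slice(&h.opcode.to_le_bytes());
    out[8..12].copy_from_slice(&h.body_len.to_le_bytes());
    out
}

pub fn decode(buf: &[u8; 12]) -> Header {
    Header {
        magic: u32::from_le_bytes([buf[0], buf[1], buf[2], buf[3]]),
        opcode: u32::from_le_bytes([buf[4], buf[5], buf[6], buf[7]]),
        body_len: u32::from_le_bytes([buf[8], buf[9], buf[10], buf[11]]),
    }
}

fn main() {
    // Round trip: what the listener decodes must match what the client sent.
    let h = Header { magic: 0x5EC0_0A51, opcode: 3, body_len: 16 };
    assert_eq!(decode(&encode(&h)), h);
}
```

Because the framing is just bytes, the same header can travel over a Unix domain socket today or some other transport tomorrow, which is exactly the non-prescriptive point made above.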
What exists today is a listener that works with Unix domain sockets, but it would be equally valid to use another kind of transport. In fact, it's a general principle of Parsec that we try to be non-prescriptive where possible about these things. You can see from the block architecture model here that it's very Lego-brick: there are lots of pluggable and replaceable pieces that could have different implementations. Those back-end modules, we call them providers, and this is where you load in all of your platform-specific knowledge and isolate it. So the providers are where we would have code that specifically knows how to talk to, say, a TPM or an HSM. And then over on the left, on the application side of the wire, we have our client library, which understands how to talk the Parsec wire protocol. The application itself doesn't need to know those details. So, client libraries: let's focus a little more on these, because they're kind of existential to the project. We've talked about the wire protocol so far, and about how it presents the contracts of the PSA Crypto library in a language-independent way. But the wire protocol itself is not what applications would directly consume; it's a bit too low-level, too bits-and-bytes. What they consume instead is a client library. The idea behind Parsec is that we create a whole ecosystem of these client libraries, which grows over time to support popular languages. But we don't want these client libraries to be clunky language bindings on the wire protocol. This really is one of the defining characteristics of the consumption-side story with Parsec: we want each client library to be designed and developed to provide a fluent, natural, idiomatic programming experience that is tailored to that language, so it will be attractive and feel natural to developers in that language.
One of the ways in which we aim to achieve this is by structuring the client libraries into different layers of abstraction. The full PSA Crypto API will always be accessible, along with all of its different settings and controls for things like algorithm choice, key size, or key usage policy. So that full programming surface will always be there when you need it, but there are cases where there is such a thing as too much choice or too much complexity. It can be bewildering, and as such, prone to misuse. So part of the vision for Parsec is that it should have a strong consumer-side story, so that developers know they're using the best available crypto and key storage for their use case without needing reams of code, or necessarily a lot of crypto expertise or specialized knowledge. We want to be able to cater for common use cases with relatively few lines of code. And this means things like using smart defaults to make good choices on behalf of the programmer. If I just have some data and I want to hash it, or sign it with a private key, then Parsec can abstract away some of that complexity for us; it just gives us a nicer experience as a developer. And here's a visual representation of that layering concept. You can see that we can choose to code at a relatively low level of abstraction, quite close to the wire protocol primitives themselves, if we need that degree of sophistication and granular control. Or we can make use of this simplified experience, where the client library is making some choices for us, based perhaps on configuration data that is automatically picked up from the service. So that's the client library vision: it's multi-language, designed for each language, and designed in layers that are sensitive to use cases. Okay, so all of that was about Parsec as an API. We've talked about it as an abstraction layer.
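The layering idea above can be sketched in a few lines. This is a hypothetical API shape, not the real Parsec client library, and the "crypto" here is a deliberately toy stand-in (an XOR and a byte-fold) so the sketch stays self-contained; only the structure, a low-level call with every knob exposed versus a one-line high-level call with smart defaults, is the point.

```rust
// Layering sketch (hypothetical API, toy "crypto"): a low-level call that
// exposes PSA-style knobs, and a high-level wrapper with smart defaults.

#[derive(Clone, Copy)]
pub enum Alg {
    ToyXor, // stand-in for a real signing algorithm choice
}

pub struct CoreClient;

impl CoreClient {
    // Low level: the caller chooses the key, the algorithm, and supplies
    // a precomputed digest, mirroring the full PSA-style surface.
    pub fn sign_hash(&self, key: u8, alg: Alg, digest: &[u8]) -> Vec<u8> {
        match alg {
            Alg::ToyXor => digest.iter().map(|b| *b ^ key).collect(),
        }
    }
}

pub struct SimpleClient {
    pub core: CoreClient,
    pub default_key: u8,
}

impl SimpleClient {
    // High level: hash the data and sign it with defaults chosen for the
    // caller, so the common case is one line of application code.
    pub fn sign(&self, data: &[u8]) -> Vec<u8> {
        let digest = toy_digest(data);
        self.core.sign_hash(self.default_key, Alg::ToyXor, &digest)
    }
}

// Stand-in for a real hash function; just folds the bytes together.
pub fn toy_digest(data: &[u8]) -> Vec<u8> {
    vec![data.iter().fold(0u8, |a, b| a.wrapping_add(*b))]
}

fn main() {
    let simple = SimpleClient { core: CoreClient, default_key: 0x2A };
    let data = b"hello parsec";
    // The one-line path must equal composing the low-level calls by hand.
    let low = CoreClient.sign_hash(0x2A, Alg::ToyXor, &toy_digest(data));
    assert_eq!(simple.sign(data), low);
}
```

The design choice being illustrated is that the high layer is built strictly on top of the low layer, so nothing is lost by starting simple and dropping down later.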
So let's talk now about Parsec as a brokering layer, as a provider of secure services in this multi-tenant or multi-application environment. The challenge here is two-fold. The first part is contention for resources: we have multiple applications needing to share access to the secure hardware of the platform. The second is that there must be clear differentiation between those multiple workloads, because there's a need for separation of secure assets such as keys. Each workload needs some kind of unique and persistent identity, and that identity has to come from somewhere. There has to be a component of the overall system that understands where the workloads came from, that's able to vouch for their provenance and for their identity in a way that Parsec can trust. These identities have to be stable values; they have to be able to survive such minutiae as system restarts or upgrades to the application code. Now, the role of Parsec here is actually not to decide that identity or to assign it, but just to honor it. So it has to treat each incoming API call as coming from a workload with a known identity, and it must partition key stores and broker access to the hardware based on that identity. We don't want workload A to be able to operate with keys that were created by workload B, for instance. Now, where does the identity come from? The answer to that really depends on the deployment. Again, this is one of those areas where Parsec doesn't prescribe a single answer. It could be a container manager or an orchestrator, for example. We just refer to it opaquely as an identity provider. For Parsec to work in a multi-tenant environment, some kind of identity provider has to exist. So Parsec defines its role, but doesn't define its implementation. However, whatever the identity provider happens to be on any given system, we do know that Parsec has to be able to trust it.
So there has to be a trust relationship between the Parsec service and the identity provider on that box, which would be based on something like PKI or certificate sharing. Let's look at that whole concept a bit more visually. Here we have the logical applications or workloads, where each comprises, for instance, a number of containers. The Parsec service is then shared, and provides the abstraction over the root-of-trust and crypto services of the platform. We add this identity provider in the gray box. Again, it's a separate component; it resides outside of the Parsec service. So it's either a separate service, or functionality that is shared across the software stack supporting these applications. The identity provider assigns identities to the workloads. Note specifically that this is per logical workload, not necessarily per container; it is quite possible for a workload to be a composition of containers. The applications then make calls into the Parsec service via the client library and the wire protocol, and each of those calls is annotated on the wire with a token representing the identity. And what does Parsec do with the token? Well, it uses it as a partitioning primitive. Based on that identity, whatever it is, it makes decisions about how to grant access to keys, for instance. And it can trust those tokens because they will have been signed by the identity provider, and Parsec is able to validate that signature according to a shared trust bundle that resides between the two components. Okay, so with that, we've covered the dual roles of Parsec as a common abstraction layer and as a broker and mediator of services. These really are the two things that are critical to enabling a cloud-native style of workload delivery onto these otherwise very diverse and fragmented edge platforms. So we can summarize the value proposition of Parsec. It's these four things.
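The partitioning primitive described above can be sketched very simply: namespace every key by the authenticated workload identity. This is an illustrative in-memory model, not the real Parsec key store, and it assumes token validation has already happened before the identity string reaches the store.

```rust
use std::collections::HashMap;

// Sketch of identity-based key partitioning: every request arrives with an
// authenticated application identity, and the store namespaces keys by it.
// This is an illustrative model, not the actual Parsec key store code.
pub struct KeyStore {
    keys: HashMap<(String, String), Vec<u8>>, // (identity, key name) -> key
}

impl KeyStore {
    pub fn new() -> Self {
        KeyStore { keys: HashMap::new() }
    }

    pub fn create(&mut self, app: &str, name: &str, key: Vec<u8>) {
        self.keys.insert((app.to_string(), name.to_string()), key);
    }

    pub fn get(&self, app: &str, name: &str) -> Option<&Vec<u8>> {
        self.keys.get(&(app.to_string(), name.to_string()))
    }
}

fn main() {
    let mut store = KeyStore::new();
    // Workload A provisions a key under its own identity.
    store.create("workload-a", "tls-key", vec![1, 2, 3]);
    // Workload B asking for the same key name sees nothing: access is
    // partitioned by identity, so A's keys are invisible to B.
    assert!(store.get("workload-b", "tls-key").is_none());
    assert_eq!(store.get("workload-a", "tls-key"), Some(&vec![1, 2, 3]));
}
```

The important property is exactly the one stated in the talk: workload A cannot operate with keys created by workload B, because the identity is part of every lookup.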
It's abstraction: a common API, truly agnostic, based on modern cryptographic principles. Mediation: security as a microservice, brokering access to the hardware and providing isolated key stores in a multi-tenant environment. Ergonomics, which is how we refer to this client library ecosystem: it brings the API right to the fingertips of developers in any language. The mantra is easy to consume and hard to get wrong; with security, that's what we want. And lastly, openness: it's an open source project inviting contributions to enhance the ecosystem, both within the service and among those client libraries. And you might have realized just from hearing me talk that there are lots of degrees of freedom and axes on which to grow Parsec as an ecosystem. It really is an ecosystem project. There are these back-end provider modules, which can be enriched with support for, say, vendor-specific secure elements or crypto accelerators. There are identity systems for different deployments. We could support different styles of transport, not just the domain socket transport that exists today. And that's not to mention the potential wealth of client libraries that could be built. So running this as an open source project absolutely makes sense. And if we capture that value proposition in a single image, this is it, really: an agnostic layer supporting any platform, any chip architecture, any kind of secure hardware on that platform, with any kind of workload in any sort of runtime or packaging consuming those facilities. Okay, so that's the Parsec vision. Then how does Parsec contribute to the CNCF vision? Well, firstly, as we've said, Parsec is an enabler of decoupling, and it's this ability to decouple and create an agnostic platform that aligns Parsec to cloud native in general terms. Thinking more specifically, we can also look at the ecosystem of existing CNCF projects and identify some places where Parsec could potentially integrate.
There are some options here around projects that are concerned either with orchestration, with identity, or with workload trust and provenance, and some of these are being actively explored. For example, we spoke with some representatives of the SPIRE project just this week, because SPIFFE and SPIRE, and their notion of identity and provenance, could play quite neatly into Parsec's requirement for an identity provider. But I should say there aren't any existing dependencies currently on other CNCF projects or components or APIs, or versions of APIs, at this stage. So Parsec for now is relatively standalone. This picture shows how Parsec is positioned relative to some of those other projects in functional areas. We can visualize it as a triangle, where Parsec is providing the agnostic interface to the platform security, but then we have orchestration systems such as Kubernetes that are actively managing the execution of the workloads, and projects such as SPIFFE and SPIRE, as I mentioned, concerned with identity. It's also interesting to think of how Parsec and Notary could potentially fit together when we think about the general problem of running trusted workloads on secure platforms. Right, let me do a project status. Just a few quick slides on the status of the project as it exists today. It's been public on GitHub since October last year, all Apache 2.0 licensed. The available API so far is what's targeted at supporting the portable root-of-trust use cases: provisioning key pairs, exporting the public key, signing with the private key. The available back-end integrations are, first, via Mbed Crypto. That's a pure software back end, and it's a neat thing to have available because it means you can get up and running very quickly for experimentation. We also have integrations with PKCS#11 providers, including the Secure Object Library, which runs in a trusted app on the NXP Layerscape platform.
That Secure Object Library has a PKCS#11 wrapper around it, and because that's a standard, we can connect with it. We have a TPM 2.0 back end as well, supporting those same primitives. The main engineering focus right now is to look at ways of getting those existing pieces into production systems. So Arm is looking, for example, at using the Parsec technology internally. We're doing some product integrations, as are some other organizations. The client library story so far has really been about prototyping and sketching, to examine what the model should be for client libraries, but also to build those pieces that are vitally needed for short-term integration plans. So right now our team in Arm, for instance, is doing some client-side work with Rust and with a SQL wrapper. As of now, we don't have an implementation of multi-tenancy for Parsec. We have design documents around this, but it hasn't been built yet, because it isn't needed for this initial use case, where it's just a single runtime management piece needing access to the service. And lastly, of course, we're hoping that entry into the CNCF Sandbox would be a great next step in terms of growing the project into the future. So, quickly, on project maturity: this is still a relatively young, relatively new project. We have invested a lot in documentation. There is a well-populated book resource, covering aspects of the wire protocol design, the API spec, source code structure, and system architecture. We have a published threat model, which you can find in the repo alongside those docs as well. CI builds are there, along with unit tests and integration tests. There is a fuzzing framework, and we've been pushing the component crates out to the Rust crate registry at crates.io as well, with the documentation appearing on docs.rs. So there's been significant investment in making the project real and in making it attractive for adoption and contribution. A quick GitHub pulse:
This is now a couple of months out of date, but it does show we're starting to see some community engagement. We've got some PRs coming in, and the level of interest has grown since the start of the year. In terms of who is actively contributing: as I said at the start, the initial seeding of this project was a collaboration across Docker and Arm, and this was prior to the point where Docker was partly split out into Mirantis. So we now have those three organizations that have been responsible for the content of the project so far. Arm is very active currently. We're also expecting some contributions to start flowing in from Linaro at some point in the near future. We have potential industry partner adoptions in flight, actually more than are shown here, but some of them are in the very early stages and it's not possible to talk about them publicly just yet. Open governance for this project is really going to open more doors, we think, in terms of these partnerships and contributions. And indeed, in some cases we've found that open governance is effectively a gate to adoption for some organizations as well. Okay, I've been talking for about half an hour. So before I open up for questions, let me just leave the resources slide up so that people can see the relevant links to learn more. You can see the GitHub reference there. Don't be confused by the name Parallax Second: the project has multiple GitHub repos, and they're all collected together into an org. Ideally the org would have been called Parsec, but alas, that was taken already, so we've had to use the expanded astronomical term instead. And again, there is that book repo; there is a wealth of additional project documentation there.
There is a public Parsec Slack channel on the Docker community Slack, and also a Zoom call that takes place every alternate Tuesday, which anyone is free to join; you can find details for that in the GitHub repo, including the Zoom link and a calendar link. So I'll leave that slide up, and I guess we can end the presentation at that point. Thank you very much for listening. Brandon, can I hand back to you as chair? Perhaps we can do questions. Yeah, we already have questions in the chat, I think. Thank you, Paul. So I think there's a question from Vinay about the discoverability of Parsec capabilities. I'm not sure what... do you see the question? Wait a second, I've just got to open the chat. Okay: how do applications discover Parsec's capabilities, or what kind of facilities are available behind it, given that it is an abstraction layer? So I'm trying to draw the distinction between whether you're asking how you know Parsec is there at all as a library, or whether you know what kind of facilities are available behind it, given that it is an abstraction layer. Is the question more aimed at the first or the second of those two cases? I think maybe more the second. So I'm just trying to understand: let's say that I have some services that I'd like to deploy at the edge, and my use case is that I'd like to extract some secrets, given all the other capabilities, with the integration of the IDP. How do I discover that Parsec is available as a service for me to leverage to obtain keys, for example? Is there a handshake? How do I know, if that makes sense? Yeah, sure it does. So the first thing is that if you have an aspiration to use Parsec, then the first thing you would do is link with a client library.
Now, of course, that doesn't necessarily mean that Parsec is definitely going to be on your system. But there is a handshake stage, and the handshake stage is capable of doing a couple of things here. It's capable of determining whether Parsec is there at all, and it also has some capability negotiation designed into it, in terms of the kinds of operations that are supported: for instance, whether particular types of key are available, or whether the back end is hardware backed or, say, firmware backed. Now, I should say that all of these things are built into the design. Again, there is a difference between the design and what is available in the project today, so some of those negotiation pieces are at an early stage; they haven't all been implemented yet. But the vision with Parsec is that it is an abstraction, right? So you shouldn't have to care. And the client library is going to be responsible for making some of those handshake API calls to do things like smart defaulting. So if you want to store a key and you don't actually know where your platform is going to store it, then the client library can be relied upon to make the best per-platform choice based on information that it has gleaned from making API calls across the wire to the service. Of course, if the service is not there at all, then no response is going to come back, and then it just isn't possible to use Parsec through a client library; there would have to be another kind of non-Parsec fallback in that situation. Got it. Thank you. All right, there's another question, about fuzzing. I'm not quite sure of the line of questioning here, Krishna, could you elaborate a little bit? Yeah, sure, Brandon. So there was some talk about the fuzz testing framework being in place, and I was looking into the GitHub project as to what it does and how it is doing it.
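The handshake and smart-defaulting flow just described might look something like the following sketch from a client library's point of view. All the names here (`probe_service`, `Opcode`, `ServiceInfo`, `can_sign`) are illustrative assumptions, not the real Parsec client API, and a stub stands in for the actual IPC call over the domain socket.

```rust
use std::collections::HashSet;

// Hypothetical operations a service back end might or might not support.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
enum Opcode {
    GenerateKey,
    Sign,
    Verify,
}

// Hypothetical result of a successful handshake: what the back end offers.
struct ServiceInfo {
    supported: HashSet<Opcode>,
}

// Probe the service. In a real client this would be an IPC request over the
// domain socket; here the `service_present` flag simulates whether a response
// comes back at all. No response means Parsec is not on this system.
fn probe_service(service_present: bool) -> Option<ServiceInfo> {
    if !service_present {
        return None;
    }
    let mut supported = HashSet::new();
    supported.insert(Opcode::GenerateKey);
    supported.insert(Opcode::Sign);
    Some(ServiceInfo { supported })
}

// Capability check the application (or the client library, when doing smart
// defaulting) can make before committing to an operation.
fn can_sign(info: &ServiceInfo) -> bool {
    info.supported.contains(&Opcode::Sign)
}

fn main() {
    match probe_service(true) {
        Some(info) => println!("service present, sign supported: {}", can_sign(&info)),
        None => println!("Parsec absent: fall back to a non-Parsec path"),
    }
}
```

The key design point is the `Option` at the probe step: absence of the service is an expected outcome the caller must handle, not an error.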
I couldn't get a sense of what's happening just by looking at the code, so I was trying to grok it. I thought I could just ask the expert what that framework is doing. I understand what fuzz testing is, but how it is accomplished in this scenario is what I was actually trying to get out of that question. Right, okay. So the fuzz testing framework is really based on a soak test kind of principle. The idea is that you have a server on which you run the Parsec service, and you run the fuzz test suite, which then just soak tests against the service over a long period of time with various randomization techniques. If we have Yonat on the call, actually, and he's willing to talk about it. Yonat, yes. Do you want to go ahead, Yonat? Hi guys, I'm Yonat. I'm one of the developers on the Parsec project who actually works on the fuzz testing framework. So it's using the fuzzer, and it essentially generates byte streams that it feeds into the input of Parsec. It's essentially simulating input from the domain socket, in the form of bytes received. It uses a bunch of predefined examples, and then it checks if the service hangs or crashes during one of those tests, essentially. Okay, makes sense. Another clarifying question: is this framework written in Rust? Because I thought you said there's a Go client library that's already implemented and the Rust library is kind of in the works right now. Yeah, so actually the fuzz framework doesn't use any clients. It uses the service directly, so it essentially uses the service code as a library and pumps data straight into the service without having to go over a socket. I see. Okay, that makes sense. Thank you so much. No worries. Thanks, Yonat. I will just clarify on the point about the Golang client: there is a repo in the Parsec org for a Golang client.
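The byte-stream fuzzing approach Yonat describes can be illustrated with a toy Rust sketch: a made-up request parser stands in for the service's input handling, a tiny deterministic generator stands in for the fuzzer, and the property under test is that malformed input is rejected gracefully rather than causing a hang or a panic. The wire format and function names are invented for the example; the real framework drives Parsec's actual service code with a proper fuzzer.

```rust
// Toy request format for the sketch: [magic=0x10][opcode][len][body...].
// Returns (opcode, body) on success, or an error for malformed input.
fn parse_request(bytes: &[u8]) -> Result<(u8, &[u8]), &'static str> {
    if bytes.len() < 3 {
        return Err("too short");
    }
    if bytes[0] != 0x10 {
        return Err("bad magic");
    }
    let len = bytes[2] as usize;
    if bytes.len() < 3 + len {
        return Err("truncated body");
    }
    Ok((bytes[1], &bytes[3..3 + len]))
}

// Tiny deterministic pseudo-random generator so the sketch needs no
// external fuzzing crate (constants from a common LCG).
fn next_rand(state: &mut u64) -> u8 {
    *state = state
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    (*state >> 56) as u8
}

fn main() {
    let mut state = 42u64;
    for _ in 0..10_000 {
        let len = (next_rand(&mut state) % 32) as usize;
        let input: Vec<u8> = (0..len).map(|_| next_rand(&mut state)).collect();
        // The property under test: parsing arbitrary bytes never panics,
        // it only returns Ok or a descriptive Err.
        let _ = parse_request(&input);
    }
    println!("10000 random inputs parsed without a crash");
}
```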
It's effectively at a prototyping stage; it's the first client that we created. But we created it with the vision to play with code sketches, to imagine what it would be like to consume Parsec from Golang. It's not actually a fully functioning client at the moment. So, right, there's a question from Cameron. Thanks. Do we have any published use case documents, white papers, or guides? To answer that, I would say start with the book repo, because that's where everything that is specific to Parsec is published currently. There isn't a Parsec white paper currently; Arm's contribution to Parsec is under a project called Cassini, for which there is a white paper. That's to do with the general problem of cloud native practices at the edge. So I'd say go have a look at the book resource, and look at Cassini if you're interested. As I say, that's an Arm thing; it's not really specifically Parsec, although it does mention Parsec. And, yeah, that's Cassini: two S's, one N, I think. Cool. I actually had a question about Parsec kind of hiding the hardware behind an abstraction. Usually when I think about accessing services from hardware modules, there is a step in which I'm able to verify that what I'm talking to actually resides in hardware. Is there a contract between the client and Parsec? Where does Parsec stand: is it trusted as part of the TCB, or what's the trust model in that case? Yeah, so the ability to attest to key residence in hardware is not in the interface today, but it is on the design roadmap. We know there is going to be a need, in a wider deployment, for an application to be able to attest that a key is hardware protected and not exportable, but we don't have it in the API currently. Gotcha. A question from Mark about identity management federation: can you use it for distributed identity, for example a blockchain service? Excellent question.
So I would like to be able to say yes; any such system would just need to be designed. At the moment, what we've done is we've written system architecture documents that assume the presence of an identity provider on the system that is able to vouch for the workloads, their provenance and their identity. I don't think there is anything in that design that would preclude the notion of federating that. It's not something I've explored myself, but it's certainly something that sounds interesting, something that I would be interested in looking into. Fair enough. Any more questions? Yeah, there's time, and this is off topic so cut me off if we don't have the time, but I'm curious how it went with Rust. I've only read papers about it, never used it myself, but it's a very interesting compiler. Yeah, so I would actually love to open that up to Hugues and Yonat, if they'd like to talk about the experiences of building the service in Rust. Yeah, sure, I can start. I'm Hugues, I'm also a developer on Parsec, and actually I think Yonat is as well. We are really enthusiastic about the usage of Rust. I personally think it's really excellent. It might be a bit complicated and a bit hard; the learning curve in the beginning is a bit long. But the fact that it protects you from so many memory safety and thread safety mistakes is really great, and actually saves you time in the long run. One example that we had that was really useful: when we were building the service, at the moment we switched from a single-threaded to a multi-threaded application, everything just worked. We didn't have any safety issues, no data written at the same time, no concurrency problems. Everything just worked so easily, and it was there.
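The single-threaded to multi-threaded experience described above comes from Rust's ownership rules: shared mutable state has to be wrapped in something like `Arc<Mutex<..>>` before the compiler will let it cross thread boundaries, so code that compiles is already free of data races. A small self-contained sketch of that pattern (not Parsec code):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

// Several threads write into one shared map. The compiler only accepts this
// because the map is wrapped in Arc<Mutex<..>>; sharing a bare HashMap
// mutably across threads would be rejected at compile time, which is why
// flipping a service from single- to multi-threaded can "just work".
fn fill_store(threads: u32, per_thread: u32) -> usize {
    let store: Arc<Mutex<HashMap<String, u32>>> = Arc::new(Mutex::new(HashMap::new()));

    let handles: Vec<_> = (0..threads)
        .map(|t| {
            let store = Arc::clone(&store);
            thread::spawn(move || {
                for i in 0..per_thread {
                    // The lock guarantees no two threads mutate concurrently.
                    store.lock().unwrap().insert(format!("key-{}-{}", t, i), i);
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    let len = store.lock().unwrap().len();
    len
}

fn main() {
    println!("entries written across threads: {}", fill_store(4, 100));
}
```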
Really great to see that. The other side of the coin that really helped us is all the infrastructure that comes with Rust projects. The fact that Rust projects with the Cargo package manager are really standardized makes it really easy to build test infrastructure and to add integration tests and unit tests to the project; easy to publish your crates for everybody to use; easy to have free online documentation. So yeah, the two big parts that were really helpful to us were the safety features and the infrastructure, I think. How did it get on your radar? Well, I think a lot of people in Arm are actually looking at Rust. It's becoming more and more important, for both embedded projects and normal projects; there is definitely a need across the whole company to add more security to our projects. So whenever you think about secure development, Rust is automatically one of the possible options, and since we are building Parsec on top of an operating system, on a known target, there wasn't much risk in using a language that was quite new. It's a play on anticipating the future. So when we were looking around for the right language to write the Parsec service in, we knew that Parsec was going to be a systems programming piece, and we knew that security and safety, memory safety in particular, were going to be paramount. Rust just announced itself as being the right choice. You know, it's still a relatively young language, but playing the long game with it, it seems to be going in the right direction. The barrier to entry can be a little higher than with other programming languages, but I think you pay the pain at the right point: you pay when you're getting your code to compile, and you save when it matters, which is when your code is in production. Yeah, well said.
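As a small illustration of the standardized tooling being praised here: in any Cargo project, a doc comment and an inline test module work out of the box, with `cargo doc` rendering the comment as HTML documentation and `cargo test` running the test, no per-project setup required. The function itself is just a placeholder for illustration.

```rust
/// Doubles a value.
///
/// With `cargo doc` this comment becomes rendered documentation for free,
/// and with `cargo test` the module below runs automatically — the
/// standardized infrastructure the speakers describe.
fn double(x: i32) -> i32 {
    x * 2
}

// Unit tests live next to the code; the #[cfg(test)] attribute keeps them
// out of normal builds.
#[cfg(test)]
mod tests {
    use super::double;

    #[test]
    fn doubles() {
        assert_eq!(double(21), 42);
    }
}

fn main() {
    println!("double(21) = {}", double(21));
}
```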
I think there's a lot of industry adoption, or I would say open source adoption, of Rust in secure projects: a lot of the Linaro trusted apps, Trusted Firmware, and sections of some of the secure firmware. So, a question a little bit unrelated to the Rust discussion: the question is around trust bootstrapping. What does that kind of bootstrapping look like, and do you see this as part of what Parsec will do as well, in terms of getting the keys, getting the right secrets shared, and so on? I'm not sure I understand the question. Do you have a specific application in mind? Yeah, so I'm trying to see an example of something that would build on top of Parsec. I imagine it would be something to do with: okay, how do I attest all my nodes, take the secrets which I have in a key management system like Vault or something, and put them into the respective edge nodes, so that when I deploy my application, my application can then retrieve them? Right, I see, so you're thinking of a use case where Parsec is effectively protecting your master key to what is otherwise a software-managed key store. Yeah. Yeah, certainly that's a use case we're looking at. Integration specifically with Vault is something that I'd like to see investigated. But also, the use of Parsec to protect the master key for a system is something we're looking at as part of another potential integration as well. So yeah, that kind of bootstrapping use case, where your hardware is protecting just one specific vital key but otherwise you're using, say, a software service to manage the rest of your secret data, then yes, that's a use case that we're absolutely targeting. Oh, would you see this as part of Parsec itself, or would it be like an integration that uses Parsec? I see it more as the latter.
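The master-key bootstrapping pattern under discussion can be sketched as follows: hardware (via something like Parsec) protects a single master key, while the software keystore holds everything else encrypted under it and only ever stores the wrapped form. This is a hypothetical illustration only; the `MasterKeyProtector` trait and `MockHardware` type are invented for the example, and the XOR "cipher" is a stand-in where real code would call a hardware-backed API and use a real AEAD.

```rust
// The one operation the hardware is trusted with: wrapping and unwrapping
// the software keystore's master key with a non-exportable device key.
trait MasterKeyProtector {
    fn wrap(&self, master_key: &[u8]) -> Vec<u8>;
    fn unwrap(&self, wrapped: &[u8]) -> Vec<u8>;
}

// Toy stand-in for a hardware-backed device key. The XOR here is NOT real
// cryptography; it only makes the wrap/unwrap round trip observable.
struct MockHardware {
    device_key: u8,
}

impl MasterKeyProtector for MockHardware {
    fn wrap(&self, master_key: &[u8]) -> Vec<u8> {
        master_key.iter().map(|b| b ^ self.device_key).collect()
    }
    fn unwrap(&self, wrapped: &[u8]) -> Vec<u8> {
        wrapped.iter().map(|b| b ^ self.device_key).collect()
    }
}

fn main() {
    let hw = MockHardware { device_key: 0x5a };
    let master_key = vec![1u8, 2, 3, 4];

    // At provisioning time: only the wrapped form is ever stored on disk.
    let wrapped = hw.wrap(&master_key);
    assert_ne!(wrapped, master_key);

    // At boot: the software keystore asks the hardware to recover the master
    // key, then uses it to decrypt the rest of its secrets.
    let recovered = hw.unwrap(&wrapped);
    assert_eq!(recovered, master_key);
    println!("master key round-tripped through the wrap/unwrap sketch");
}
```

The design point matches the conversation: the hardware only ever handles the one vital key, and everything else stays in the software-managed store.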
So I think, in a situation like that, you would probably use Parsec to provision your key, or to store and access the key. Gotcha. Yeah, it could depend on the framework. I think Vault in particular is kind of interesting, right, because Vault has key management storage plugins into which we could potentially plumb Parsec, in which case Parsec would then be part of the Vault infrastructure, if you like. But all of these things are things that would have to be investigated. Right, yeah. It's a hard problem. I'm looking at that as well. Yeah. There's a question from Matt about being accepted into Sandbox. I'm not sure if Justin and Amy want to talk a little bit about this. The time frame for Sandbox? Well, we are currently reworking the Sandbox process, hopefully so that it will become quicker. In principle, it doesn't take very long at all, but in practice it's been very variable; I'd say between one and three months, but we're hoping to cut that down, so that there's a fixed schedule at which Sandbox projects get accepted. Thanks, Justin. Okay, we're almost out of time, just a little less than five minutes, if there are any more questions. All right, if not, thank you, Paul, so much. Thank you. All right. So next week we're going to go back to working sessions; we're going to look at a couple of issues that we'd like help on and see where we can engage. If not, again, thank you, Paul. Thank you, everyone. And Paul will post the slides in the notes as well as on the issue. Thank you. Okay, take care everyone. Thank you. Thank you. Bye.