 about me; sorry, it's the agenda. I'll be covering what IoT device management is, what attestation is, how it helps establish a secure foundation for IoT device management, and what the IETF RATS architecture is; it's a standard for attestation scenarios. We have built a PoC based on RATS, so I'll take you through that, the components we have used in the PoC, and a quick demo actually showing how everything works.

So yeah, this is about me. My name is Tushar Khandelwal. I'm a principal software engineer in Arm's Architecture and Technology Group in Cambridge. I have several years of experience in developing and designing software for embedded devices, and I'm now working on topics related to software and security standards.

So what is IoT device management? The services are listed here; I'll use my mouse, I don't have a laser pointer. Largely, these are the services a device management server provides to its devices, and these services help the device management server manage the remote devices. The main services are: device attestation, which will be my primary focus in this talk; key management, which covers how a device registers with the server when it comes up for the first time in the field and how credentials are provisioned during the initial handshake and registration; remote management, which you need for changing settings on the device; firmware upgrade, whose name tells you what it does; and fault management and reporting, to make sure the device's health in the field is good and you're aware of the device's status.
So this is a simple diagram of how attestation works today. There is a standard around it now, which I'll cover in the next slide, but what is attestation? Basically, it's a means to establish the trustworthiness of a trusted execution environment, and in some cases, like in confidential computing, it's the way the platform proves its trustworthiness to the workload and how they establish trust in each other.

As part of attestation, the device claiming an identity, i.e. producing the evidence, has to create what is typically called an attestation token. It contains a list of claims which are sent to the remote server, and the remote server has a list of things to compare them against, like endorsements or reference values. Based on that, it can say, OK, the device's health is good and it's a legitimate device: you can access my services. But the report alone is not sufficient; you need a verifier to verify the attestation token sent by the device.

On the right-hand side, you can see the attester, which is the IoT device. It has keys and certificates provisioned during the manufacturing stage, and later on this key is used to create the token and sign it. Once the token is created, again in a standard format, EAT (Entity Attestation Token), it is sent to the verifier. The verifier does the signature verification and goes through all the claims; it already has some of the data needed to verify them, and based on that it can take the decision on whether to trust the IoT device or not. The results are then posted back to the relying party, which is the device management server.

This is the IETF RATS architecture. As you can see, there are different blocks here; one is provisioning.
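The token-plus-verification flow just described can be sketched as follows. This is illustrative only: a real attester signs an EAT with an asymmetric attestation key provisioned at manufacture, whereas this sketch uses a shared-key HMAC purely to show the claims-plus-signature structure and the checks a verifier performs. All names and values are hypothetical.

```python
import hmac, hashlib, json

DEVICE_KEY = b"provisioned-at-manufacture"  # hypothetical key material

def make_token(claims, nonce):
    """Attester side: bundle the claims with the nonce and sign them."""
    claims = dict(claims, nonce=nonce)
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": sig}

def appraise_token(token, reference_values, nonce):
    """Verifier side: check the signature, freshness, and reference values."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["signature"]):
        return False  # signature does not verify
    if token["claims"].get("nonce") != nonce:
        return False  # stale or replayed evidence
    # every endorsed reference value must match the corresponding claim
    return all(token["claims"].get(k) == v for k, v in reference_values.items())
```

The nonce check is what makes the evidence fresh: without it, an attacker could replay an old token from a device that was healthy at the time.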
This is the stage during which you provision the verifier with the required endorsements and reference values against which the token will be compared later. Then comes the verification part. This is a three-way communication between a verifier, a relying party, and an attester. The attester wants to talk to the relying party, but it has to prove its identity: it has to go through the verifier, prove its identity, and only then does it get the passport to talk to the relying party. As part of this verification, the verifier takes into account all the schemes and policies you have provided to it, and these could be provided by a verifier owner, who owns the verifier service you are running.

This is what we have done in our PoC. As you can see, the verifier, attester, and relying party are now replaced by the components we have actually used in our PoC, which are Veraison, Wakaama, and Leshan. Wakaama is running on the device, the attester; Veraison is the service doing the verification; and Leshan is the device management server which provides different services to the attester.

Now come the RATS interaction patterns. Basically, there are a couple of ways an attester can get its token verified. This scenario is called the passport model, and this is what we have done in our PoC. The relying party asks the attester for its identity, and the attester generates the evidence with all the claims and sends it to the verifier. The verifier appraises the evidence based on the policies and the reference values it has. The attestation result comes back, again in a standard format, which is part of an IETF draft now, and then the result is posted back to the relying party by the attester. This is called the passport model because the attester has now got a passport to actually talk to the relying party.
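The four steps of the passport model above can be sketched as plain functions. Everything here is a stand-in: the string "verifier-signed" represents the verifier's real cryptographic signature, and the firmware claim is a hypothetical example; the point is only the order of the message flow.

```python
VERIFIER_SIG = "verifier-signed"  # stand-in for the verifier's signature
REFERENCE_FW = "1.2.0"            # reference value provisioned into the verifier

def attester_evidence(nonce):
    """Step 2: the attester produces evidence containing its claims."""
    return {"nonce": nonce, "fw_version": "1.2.0"}

def verifier_appraise(evidence):
    """Step 3: the verifier appraises the evidence against its reference
    values and returns a signed result -- the attester's 'passport'."""
    ok = evidence["fw_version"] == REFERENCE_FW
    return {"status": "affirming" if ok else "contraindicated",
            "nonce": evidence["nonce"],
            "sig": VERIFIER_SIG}

def relying_party_accept(result, expected_nonce):
    """Step 4: the relying party trusts the passport only after checking the
    verifier's signature, the nonce, and the appraisal status."""
    return (result["sig"] == VERIFIER_SIG
            and result["nonce"] == expected_nonce
            and result["status"] == "affirming")
```

A usage run, with step 1 being the relying party's challenge:

```python
nonce = "n-123"                                   # step 1: RP challenges
result = verifier_appraise(attester_evidence(nonce))
accepted = relying_party_accept(result, nonce)    # attester relays the passport
```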
Obviously, the relying party has to verify whether it can trust the result sent by the attester, because the attester could modify the result after getting it back from the verifier. So the relying party has to check the signature the verifier applied while appraising the evidence.

This is another interaction pattern, where again the communication is initiated by the relying party. The relying party sends the request to the attester, and the attester sends back the evidence; but now the relying party communicates with the verifier directly and gets the evidence verified. This is called the background-check model because the attester doesn't know that the verification is happening in the background.

So I'll go through the PoC components we have used. The most important one here is Veraison; the name is derived from "VERificAtIon of atteStatiON". Arm is a core contributor to this project, we have done a lot in it, and it's part of the Confidential Computing Consortium now. The architecture is RATS-compliant and it's quite a flexible model: there are different plug-in interfaces, and you can write your own clients and interfaces to talk to the service, so it doesn't depend on your client being in Java, C, or Python. All it wants is to communicate through these REST endpoints. One endpoint is for provisioning the reference values and endorsements, and the other endpoint is the one through which a device sends its evidence to the service. These two front-end services communicate with the back-end service, which is called Veraison Trusted Services. In a different scenario, like in confidential computing, that service could run in the secure world or maybe in the realm world, whereas the front-end services need not; there's no rule around that, you can run them in the normal world as well.
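The two front-end endpoints mentioned above can be sketched as simple request builders. The base URL is a placeholder, and the exact paths and media types below are my assumptions for illustration, not authoritative Veraison API documentation.

```python
BASE = "https://veraison.example"  # placeholder service address

def provisioning_request(corim_bytes):
    """Endpoint 1 (assumed path): submit a CoRIM bundle of endorsements
    and reference values to the provisioning front end."""
    return {
        "method": "POST",
        "url": f"{BASE}/endorsement-provisioning/v1/submit",
        "headers": {"Content-Type": "application/corim-unsigned+cbor"},
        "body": corim_bytes,
    }

def evidence_request(session_url, evidence_bytes, media_type):
    """Endpoint 2 (assumed shape): post the device's signed evidence for
    appraisal within a previously opened challenge-response session."""
    return {
        "method": "POST",
        "url": session_url,
        "headers": {"Content-Type": media_type},
        "body": evidence_bytes,
    }
```

Keeping the two concerns on separate endpoints is what lets the provisioning side (endorsers, manufacturers) and the verification side (devices) evolve independently.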
On the right-hand side, you can see the KV store. It is a key-value store which holds the claims and reference values the trusted service uses to verify the evidence.

So this is what happens during the provisioning stage. An endorser, which could be anyone such as the manufacturer who wants to endorse the device, sends a request to the Veraison service to submit the reference values. It's a bundle in CoRIM format; it's encoded in CBOR, and we call it a CoRIM. This is also an IETF draft specification, and all the claims are bundled in this CoRIM token. So this is how you provision the CoRIM into the Veraison service, and you get back an OK result if it all goes through well.

In the verification stage, the attester has to create a session with the Veraison service. It sends the nonce as part of the request and gets back a session ID, which it uses to communicate further with the Veraison service. Then it sends the evidence, the evidence is appraised and verified, and you get back the attestation result from the Veraison service. The attestation result is in this format: JWT, the JSON Web Token format. This is an IETF draft and soon it's going to be standardized and become an RFC.

Now, the other PoC components include Leshan and Wakaama. Leshan and Wakaama are Lightweight M2M implementations in Java and C respectively. Lightweight M2M (LwM2M) comes from the Open Mobile Alliance; it's a protocol for IoT devices to securely connect to one or more LwM2M servers. IoT devices and servers exchange all sorts of data depending on the device, and you need to make sure that the channel is secure and that you always trust the remote device. The Leshan server has a web interface, so it's very convenient to use it and view all the services and objects supported by the remote device. This is the high-level architecture of the Leshan and Wakaama communication.
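Since the attestation result comes back as a JWT, it helps to remember that a JWT is three base64url segments, header.payload.signature. This sketch decodes the claims of such a result without verifying the signature; a real relying party must of course also verify the signature with the verifier's public key. The claim names in the usage example are illustrative.

```python
import base64, json

def decode_jwt_claims(token):
    """Decode the payload (claims) segment of a JWT. Does NOT verify the
    signature -- that check is the relying party's job."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

For example, given a token whose payload segment encodes `{"status": "affirming"}`, the function returns that dictionary of claims.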
It's a typical client-server diagram. On the left-hand side you can see the IoT device. It has different objects, like an attestation object, a device object, and software management objects, and you can do all sorts of operations with the server. I just forgot to mention that Leshan and Wakaama do their initial handshake using TLS, so you make sure that the channel between Leshan and Wakaama is secure.

The prototyping has been done on a reference platform for rich IoT applications, Corstone-1000. It has application cores; on the left-hand side, in orange and blue, is the part of the software that runs on the application core. On the right-hand side there is the secure enclave, which is a security engine running Trusted Firmware-M. On the application core side, in the normal world, we are running Linux, and Wakaama is just an application running on top of Linux. It has support for Mbed TLS and PSA. I don't know how many of you know about PSA; it's the Platform Security Architecture from Arm, a set of guidelines which tells you how you can build firmware and make your platform more secure by following them.

So on the normal world side, Wakaama runs on top of Linux and talks to a kernel driver, which takes control to the highest exception level, and from there it transitions to the secure world. The secure world runs its own operating system, OP-TEE or Trusty, and that communicates with the security engine over a hardware channel, the Message Handling Unit (MHU). The normal world communicates with the server, the relying party, doing the initial handshake using TLS, and it communicates with the Veraison verifier as well, to send the token and get it verified.

This is how my setup looked. On the right-hand side is the FPGA board running Linux, and on the left-hand side we can see two blocks.
One is running the relying-party server, and the other one is running the verifier. Communication starts with a TLS handshake between the relying party and the attester. Then the relying party requests token verification; the attester takes the request and sends the evidence to the verifier. The verifier appraises the evidence and sends the appraised result back to the attester, and then the result is posted back to the relying party, so that the relying party can make sure the device is authentic and share its services with it.

So this is the demo. On the left-hand side, you can see the attester and relying party, and on the right-hand side, the verifier. The attester is the FPGA device; the relying party is Leshan. I've already booted the device running Linux; this shows you the Linux prompt. I'm pushing the assets to the device, like the certificates and the keys. Now we are running the relying party, the Leshan server. The service is now running, and the client is now communicating with the server using the TLS handshake. You can see the device is registered now; it shows state ready. On the right-hand side, this is the web interface of the relying party, the Leshan server, and you can see the token status there; there's nothing as of now. Now, on the right-hand side, I'm provisioning the endorsements. The provisioning part is OK; now it has the reference values and endorsements to compare with. Now the verification service is running, and now I request the token verification, and this is how I get the result after the token verification happens. On the right-hand side you can see the logs, and on the device side also you can easily... This is the server showing it has received the payload, and on the device side as well, you can see what token has been sent by the device. Yeah, these are the references. That's it. Any questions?

[Answering an audience question] The token and the client certificates.
Yeah, but the token has a lot of information about the platform, like the hardware configuration and the software running on top. Sorry, I didn't get your question. No, not the token, the certificates. And when the device creates the token, it uses the private keys it has to actually sign the token. OK, then I didn't get that, possibly. Sorry, maybe I... Yeah, it could be anything. Could be anything. OK, thank you.