So, this is a protocol framework designed for embedded systems. The idea is that if you have an embedded system, maybe an IoT device with some little Cortex processor in it, you want to be able to do simple key exchange and secure session protocols. Handshakes in particular can be tricky: you do a handshake and then you send encrypted messages back and forth with your symmetric key, but the handshake itself is the hard part. You also want to be able to do all sorts of basic symmetric and asymmetric crypto primitives with it, like signing or verifying your code. The goal of the framework is to be relatively simple. Performance is mostly a non-goal: I wanted the performance not to be terrible, but it's not designed for the highest possible performance. Standards compliance is a halfway goal. The framework is designed so that you can instantiate it as an instance of NIST's cSHAKE algorithm, or you can use a smaller sponge construction for efficiency if you don't have the memory for something like cSHAKE. The primary motivation is that, in my work as an embedded engineer, I've seen a lot of custom one-off protocols, designed both by cryptography researchers and by our clients. Of course you would ask, why aren't you using TLS? But often, as in the NTP talk for example, TLS or DTLS doesn't work for your scenario for one reason or another. You want the messages to flow a certain way, you want some party not to retain state, maybe you want to mix pre-quantum and post-quantum algorithms, or TLS simply doesn't fit because you don't have the code size to support it. The result is that everybody has their own favorite custom protocol, and these protocols are a pain to design and analyze. It's a pain to design them mathematically for academic purposes.
It's also a pain to avoid mistakes in the design, and sometimes in the end it's hard to know what properties your protocol actually has. For example, you'll have two stages of authentication, but maybe they're not quite perfectly bound together, and then you wonder: what happens if an adversary substitutes a message here? Maybe that's not really an attack; maybe it is. The result is that people who design their own protocols often end up with insecure ones. Now, of course, you could use an academic protocol, but there are issues with academic protocols as well. The issue is that you'll have some idea of what you want to do, morally speaking, when you design an academic protocol. Then you'll put it into mathematics in terms that might or might not directly express that moral intent. And then somebody has to implement it, which may require interpreting your mathematics. An example of this is the famous MQV protocol. If you look at FHMQV-C, Fully Hashed Menezes-Qu-Vanstone with key confirmation, both parties have a long-term and an ephemeral public key, and they want to hash these to get two exponents that they'll use to combine the public keys. Then they want to extract a session key and also some key authenticators from this. As you can see, this is done using a hash, and a hash of the same messages in a different order, and two different KDFs that have the same properties, and then some MAC functions, and all of these, except maybe the MAC functions, are assumed to be random oracles. And then when you go to implement this, you'll discover: wait a minute, I'm now hashing tuples of objects. How do I do that? What if the objects are not of a fixed size? I can't just concatenate them. And then: how do I separate KDF1 and KDF2?
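These two implementation questions can be made concrete. Here is a minimal sketch, assuming SHA-256 and a made-up length-prefix encoding (not the encoding any FHMQV implementation specifies), of why plain concatenation is ambiguous for tuples and how a domain-separation label keeps KDF1 and KDF2 apart:

```python
import hashlib

def hash_concat(*parts: bytes) -> bytes:
    # Naive: just concatenate the tuple elements before hashing.
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def hash_tuple(domain: bytes, *parts: bytes) -> bytes:
    # Unambiguous: a domain-separation label (e.g. to keep KDF1 and
    # KDF2 distinct) plus a length prefix on every element, so the
    # hash input parses back into exactly one tuple.
    h = hashlib.sha256()
    for p in (domain, *parts):
        h.update(len(p).to_bytes(4, "big"))
        h.update(p)
    return h.digest()

# Two different tuples collide under naive concatenation...
assert hash_concat(b"ab", b"c") == hash_concat(b"a", b"bc")
# ...but not under the length-prefixed encoding...
assert hash_tuple(b"KDF1", b"ab", b"c") != hash_tuple(b"KDF1", b"a", b"bc")
# ...and the label separates KDF1 from KDF2 on identical inputs.
assert hash_tuple(b"KDF1", b"x") != hash_tuple(b"KDF2", b"x")
```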
So even though this is a provably secure protocol, there are a lot of tricks both in how you design such a thing and how you implement it. Another motivation is TLS 1.2 and various similar protocols: if you look at how they actually use their hash functions, you get something that looks a little like a rat's nest. This is just the hash block calls in one of the subroutines of TLS 1.2. The modern solution to this, and by modern I mean maybe 20 years old, though it's taken a long time to percolate into real-world cryptography, is that you should just hash everything. If you know some data about the other party and they know it too, you should hash it. If you send them a message, you should hash it, and you should also hash who sent the message and the boundaries of the message and so on. When you get a public key, you should hash it. There are some technical concerns, for instance whether you should have two running hash contexts, one for keys and one for authentication, but basically you should put everything into a hash function. More modern designs do this: TLS 1.3 does, Trevor Perrin's Noise protocol does, and so does the Blinker protocol, which is part of the inspiration for Strobe, as you can tell from the name. Consistent with this, in a protocol designed for Strobe, all the messages in the system pass through Strobe. The application will have a Strobe library with an object that acts as a bridge between the application and the network. Every time the application sends a message to the network, it'll be hashed. Every time it receives a message, it'll be hashed. It can also hash in associated data or whatever. The Strobe object also handles encryption, decryption, and MACing. The transport is probably the network, but it could also be something like: I want to encrypt a message and write it to a file.
Later I'll retrieve it from that file and decrypt it, in which case the transport would be your untrusted file system. In a two-party protocol, the way this works is that Alice and Bob each have a Strobe object with a state. When Alice sends a message to Bob, it advances the state of her object and the state of his object in the same way, so they always match. If somebody tampers with a message on the network, or it gets corrupted, they will notice, and in Strobe you just drop the connection. You can still use it as a primitive, as a hash function and cipher and so on, to implement something like DTLS, but the main usage is in something like a handshake: if messages are getting corrupted, you don't try to reconstruct the state, you just drop the connection. So how would this work as a more or less concrete example with FHMQV-C? This looks complicated, but bear with me for a moment. Both sides initialize their Strobe object, and since they know each other's long-term keys, they hash those in separately, using two calls rather than concatenating them, because that way the transcript is parsable. Then Alice sends her ephemeral key to Bob and Bob sends his to Alice, and the framework hashes the fact that this took place. Then when they need to get these exponents out, they make a pseudo-random function call to the library, and it gives them d, which is then the hash of A and B and g^x and g^y, and separately e, which is the hash of all that plus d. Then they set the key, and then they send the MAC, and the MAC will basically be this. Now, I still have commas in there, so I'm still talking about hashing a tuple, so I'm still lying to you, but the rest of the talk will explain how this is done.
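The synchronized-state idea can be sketched as a toy model. This uses SHA-256 and a made-up encoding, not the Strobe API; it only shows that mirrored per-operation hashing keeps the two states in lockstep until a message is tampered with:

```python
import hashlib

class Transcript:
    """Toy synchronized-state object: both parties hash every operation."""
    def __init__(self) -> None:
        self.h = hashlib.sha256()

    def absorb(self, op: bytes, data: bytes) -> None:
        # Hash the operation tag and the data, length-prefixed so the
        # transcript stays parsable.
        for part in (op, data):
            self.h.update(len(part).to_bytes(4, "big"))
            self.h.update(part)

    def state(self) -> bytes:
        return self.h.copy().digest()

alice, bob = Transcript(), Transcript()

# Alice sends a message; both sides advance their state identically.
msg = b"hello bob"
alice.absorb(b"send", msg)
bob.absorb(b"send", msg)            # Bob hashes what he received
assert alice.state() == bob.state()

# If the network tampers with a message, the states diverge, so any
# MAC or key derived from the state afterwards will fail to match.
alice.absorb(b"send", b"second message")
bob.absorb(b"send", b"second messagX")
assert alice.state() != bob.state()
```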
And you'll see that everything the FHMQV-C protocol is morally trying to do, hashing in what you know about the other party and all the messages that were exchanged, now gets accomplished by this framework, both in terms of the mathematics and in terms of how you implement it with bytes in an actual hash function. I showed a bunch of operations on this slide, so what operations are there in the framework? First, you can set a key. You can also put in data to be authenticated, something you know about the other party, a nonce or their key or whatever, and that'll be hashed. You can extract data that is pseudo-random, basically a hash of all the things you put in. You can send a message to the other side, or receive one, in the clear or encrypted, and if it's encrypted, it'll be encrypted with a key derived from everything that has been sent or exchanged or keyed before. You can send a MAC, which is very similar to sending an encrypted message that's known by both parties to be all zeros, but it looks a little different in the transcript: it's just a MAC function, and the application doesn't have to supply a buffer full of zeros. Finally, there's an operation to rekey the protocol in a way that is not invertible, which means that if somebody later compromises the session state, they will not be able to decrypt previous messages you have sent, unless of course they can brute-force the keys you put in. The main insight, or development, or innovation of Strobe is that the way these operations are described, and also the way they're performed, can be determined from basically just four features. One of them is the direction of data flow: which way do the arrows go? Does any data go to or from the application side of this setup? Does data go to or from the cipher?
That is, is the key changed, or is it used to encrypt anything? And does data go to or from the transport? So that's four features, like four bits, and I'll show you how they can be used to implement the operations according to a relatively simple pattern. That pattern is a sponge construction. This is based mostly on the work of the Keccak team and others who have worked on sponge constructions, and in particular on the duplexing sponge construction. The state of the object in a sponge construction is divided into two parts. One of them I'll call the rate, although strictly speaking the rate is the size of that part; it's the part that interacts with the outside world. If you hash something, you XOR it into that part of the state. If you want to encrypt something, you XOR it with that part of the state, and the ciphertext is what comes out, and so on. The other part is called the capacity, which acts as some combination of a stream cipher key and a hash function chaining value. That part does not interact with the outside world. Then every time you finish an operation, or you overrun the size of the rate, you run a function F, which in the security modeling is assumed to be a random function or possibly a random permutation, and which completely changes the rate and capacity. The update function is shown here: the next rate and capacity are F of the previous ones, XORed with the message on the rate side. So if the application is providing the message, you take it from the application, and otherwise you effectively XOR in zeros.
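For reference, these four features correspond to the I/A/C/T flag bits in the Strobe paper (inbound, application, cipher, transport); the table below is my own summary of how the standard operations combine them, so treat the exact combinations as an illustration to be checked against the spec:

```python
# Four feature bits: direction, application, cipher, transport.
I, A, C, T = 1, 2, 4, 8

# How the standard operations are built from the four bits.
OPS = {
    "KEY":      A | C,          # application data keys the cipher state
    "AD":       A,              # associated data, hashed only
    "PRF":      I | A | C,      # pseudo-random output to the application
    "send_CLR": A | T,          # plaintext from app to transport
    "recv_CLR": I | A | T,
    "send_ENC": A | C | T,      # app data through the cipher to transport
    "recv_ENC": I | A | C | T,
    "send_MAC": C | T,          # cipher output to transport, no app data
    "recv_MAC": I | C | T,
    "RATCHET":  C,              # stir the cipher state, no I/O at all
}

# Sanity check: each send/recv pair differs only in the direction bit.
for name, flags in OPS.items():
    if name.startswith("send_"):
        assert OPS["recv_" + name[5:]] == flags | I
```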
You XOR it into the state here, and then you output either the input XORed with the state, or just the input if you're sending in the clear, and then you replace the rate with either r XOR m, or just m in the case of decryption, and so on. There's a simple function for choosing between r XOR m and m; the details are in the paper. Now, I said we're going to hash all the things. The goal is basically that the output of Strobe should look like a random oracle. This suits what you want to do anyway, because the idea is to make the analysis of these protocols simple: for a custom one-off protocol, you're probably going to analyze it in the random oracle model anyway. But it's a random oracle function of a specific thing, namely the transcript of all the previous operations that you did. You can't just concatenate those together, and the previous things you did are not just the data but also which operations were performed. In fact, the operation I'm going to perform with the output should also be part of this function, so that when I extract a value to be used as a hash, it's not treated the same way as when I extract a value to XOR with a message I'm sending, or a value to be used as a MAC. The duplex paper gives me the random oracle part, but what I also need is parsability: I want to be able to parse the entire previous transcript, plus what the output is going to be used for. To get there, let's first consider an operation that doesn't use the cipher output, that doesn't encrypt the message; say I'm sending a message in the clear. Remember that the operation is determined by four binary flags. I put those into a byte, and then there are four bits left over that are reserved for possible future use; a couple of them are described in the paper.
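The rate/capacity update rule can be sketched with a toy duplex object. SHA-256 stands in for the permutation F and the sizes are made up, so this shows only the state-update pattern, not real Strobe (in particular, a real duplex construction also absorbs the ciphertext back into the rate):

```python
import hashlib

RATE, CAPACITY = 16, 16   # toy sizes in bytes; Strobe's come from Keccak-f

def F(state: bytes) -> bytes:
    # Stand-in for the sponge permutation: any fixed-size random-looking
    # function illustrates the update rule (this is NOT a real permutation).
    return hashlib.sha256(state).digest()[: RATE + CAPACITY]

class Duplex:
    def __init__(self) -> None:
        self.state = bytes(RATE + CAPACITY)

    def duplex(self, block: bytes) -> bytes:
        # next_state = F(state XOR (block || 0^capacity)); the new rate
        # is what interacts with the outside world.
        assert len(block) <= RATE
        padded = block.ljust(RATE, b"\x00") + bytes(CAPACITY)
        self.state = F(bytes(a ^ b for a, b in zip(self.state, padded)))
        return self.state[:RATE]

# Keystream comes out of the rate, so the ciphertext depends on
# everything absorbed so far (here, a shared key block).
alice, bob = Duplex(), Duplex()
alice.duplex(b"shared key")
bob.duplex(b"shared key")

plaintext = b"attack at dawn!"
ct = bytes(a ^ b for a, b in zip(alice.duplex(b""), plaintext.ljust(RATE, b"\x00")))
pt = bytes(a ^ b for a, b in zip(bob.duplex(b""), ct))
assert pt[: len(plaintext)] == plaintext
```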
Then I put in the message that's going to be hashed, and finally one more byte that points back to where this operation began. That way, when I go to the next operation, I can make a chain of these pointing backwards and parse out where all the operations began. Now suppose you have an operation that does use the cipher state, so the output of the sponge has to depend on all the previous operations, which means I have to run the F function to stir the state of all the previous operations into the output of the next one. That means I have to pad the state out and end the block. To do that, I put in another begin byte, which distinguishes this case from one where, for example, we just ran out of space in the block, and that byte points to the start of the operation itself. Then comes some padding that satisfies the sponge construction's requirements. I chose the padding from cSHAKE so that you can make a quasi-standards-compliant version of this protocol. You take that, run it through F, and then continue where you left off at the beginning of the next block, using the output of the sponge to do whatever encryption or MACing you wanted to do in this operation. So finally, I gave you parsability, but not quite, because the higher-level protocol probably has operations that are not exactly encrypt, MAC, send in the clear, and so on. They might be "send an encrypted username" or "extract a hash for use in FHMQV" or something like that. So when I say that the output is supposed to depend on the intended usage of a message and on the previous transcript, the Strobe-level operations alone are maybe not quite enough.
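The backward chain of begin pointers can be illustrated with a toy framing. Strobe's actual encoding differs in detail (and its pointer wraps within a block); this only shows how per-operation begin pointers make the whole transcript parsable:

```python
# Toy framing: each operation is [flags][data][begin-pointer], where the
# pointer records the offset at which the operation started, so the
# transcript can be walked backwards operation by operation.

def frame(transcript: bytearray, flags: int, data: bytes) -> None:
    begin = len(transcript)
    transcript.append(flags)          # one flag byte starts the operation
    transcript.extend(data)
    transcript.append(begin & 0xFF)   # toy one-byte pointer to the start

def parse_backwards(transcript: bytes) -> list:
    ops, end = [], len(transcript)
    while end > 0:
        begin = transcript[end - 1]   # last byte points at the op's start
        flags = transcript[begin]
        data = bytes(transcript[begin + 1 : end - 1])
        ops.append((flags, data))
        end = begin
    return list(reversed(ops))

t = bytearray()
frame(t, 0b0001, b"hello")   # e.g. "send in the clear"
frame(t, 0b0011, b"world")   # e.g. "send encrypted"
assert parse_backwards(bytes(t)) == [(0b0001, b"hello"), (0b0011, b"world")]
```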
To disambiguate this, there's direct support in the library, and a small amount of direct support in the protocol itself, for metadata operations. These work exactly the same as normal operations, except that they're marked as metadata and therefore describe what the next real operation is supposed to do. For example, if you send the other party a public key, you first put in a byte that says "this is a public key", defined by your protocol spec in a unique way, say byte 0x07 means public key. Then maybe you put in the length of the public key, or maybe not; the messages are self-delimiting, but you could include the length. Then you put in the public key itself in a separate operation. So you do the metadata, then the main operation. The interesting thing is that you're probably doing this anyway, because you probably want to send tag-length-value (TLV) encoded data whenever you send something over the network, so that the other party can parse the packet into whatever subparts are used for different purposes. This is also very cheap in the protocol design, because it doesn't force you into a new block, since you probably won't be encrypting the metadata. On the other hand, another advantage of Strobe is that you can encrypt the metadata: if you want to encrypt the lengths and types of your packets to make traffic analysis harder, the framework directly supports that. It works exactly the same as encrypting anything else; you just use the encrypted-metadata operation in the library. There's some prototype code at strobe.sourceforge.io. It's designed primarily for minimal size on embedded devices, and for simplicity.
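The metadata-then-data pattern can be sketched as a toy TLV framing. The 0x07 tag follows the talk's example; everything else here (the header layout, the operation list) is a made-up illustration, not the Strobe wire format:

```python
import struct

# Toy metadata-then-data pattern: the metadata operation carries a type
# byte and a length (TLV-style), and the payload goes in a separate
# main operation.
def send_with_meta(ops: list, tag: int, payload: bytes) -> None:
    ops.append(("meta", struct.pack(">BH", tag, len(payload))))
    ops.append(("data", payload))

TAG_PUBLIC_KEY = 0x07            # "byte 0x07 means public key"
transcript: list = []
send_with_meta(transcript, TAG_PUBLIC_KEY, bytes(32))  # dummy 32-byte key

# The receiver parses the metadata before touching the payload, so it
# knows what the next operation contains and how long it is.
kind, header = transcript[0]
tag, length = struct.unpack(">BH", header)
assert (kind, tag, length) == ("meta", 0x07, 32)
assert len(transcript[1][1]) == length
```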
It has a little I/O engine that could probably stand to be improved, but it does I/O with callbacks so that you can use the same operation to talk to a file system or a socket or whatever. Additionally, there's some lightweight Curve25519 ECDH and signature and verification code that might be interesting. I know somebody was looking at it because it has a much smaller stack footprint than other Curve25519 implementations, which might matter in a kernel or similar environment. The code totals less than four kilobytes and uses less than one kilobyte of stack. There's an unreleased version that uses some assembly to get significantly smaller, but these numbers are for the C version on ARM, and it's a portable implementation. There's a lot of work still to do in this area, particularly on documentation, because I claim this is simple and easy to use, and without documentation that's adequate and understandable to somebody who is only dabbling in cryptography, it's definitely not simple and easy to use. Another interesting point is that, the way I've defined these operations, you could of course use SHA and AES to accomplish the same things, and I think the constructions would be relatively similar, based on only four flags or so. It would be interesting to put together an implementation like that for people who would rather have SHA-2 and AES than SHAKE. Furthermore, I would like to complete a formal analysis of this. Almost all the work is done by the duplex paper anyway, but it doesn't consider things like rollback resistance of the state, or a post-quantum analysis, for which the random oracle model suddenly gets a lot more complicated. So that's all; I see we have time for questions. Okay, three very quick questions. First, can you comment on patents?
We don't have specific patents on this, except that the paper also describes a variant of the DPA-resistant key tree, for which I can neither confirm nor deny CRI patents. But outside of the key tree stuff, and whatever patents you may stomp on by implementing a protocol with this, it is not specifically patented to my knowledge. Second question: you mentioned that you don't care so much about performance, so why do you have the cleartext options? Couldn't you simplify it, and promote good practice, by only allowing people to encrypt? Maybe. The particular issue is metadata: you might rather send it in the clear so that the other side can read it. For example, for a secure enclave inside a smartphone, the smartphone may need to look at packet headers, or for a bootloader that has encrypted code, you might want some kinds of headers to be visible. I could remove that option, though, yes. Okay, third thing: as an academic, I noticed you were criticizing FHMQV and labeling it as academic, and I was just wondering why. No comment. You've discussed a little about formal analysis of the framework in general. Have you thought about any work in the direction of allowing engineers to implement a specific protocol on the Strobe framework in such a way that you could automate formal analysis? So you write it up, extract an implementation, and then also get an automated check, say in ProVerif or CryptoVerif, as to whether the protocol meets the security goals you think it does. Yes, but I haven't done it yet. If you want to collaborate in that direction, I'd be interested. One of the biggest issues we have in embedded systems is power consumption. Did you consider that? Did you compare your method with others in terms of power? Can we use this, for example, on Arduino boards running on a battery?
I didn't actually put it on an Arduino board; it runs fine on a Photon, though. Did you compare the power consumption of your protocol with other protocols? No. Thank Mike again.