Now we start with the next talk. So, a warm welcome to Luca Fulchir. He is the developer of a new protocol, and what it's about you will hear in the next talk about the Fenrir project. Give him a warm applause.

So, thank you everybody for coming here. I'm going to talk about this project of mine, which I started as my master thesis and then kept working on. It's a new transport encryption and authentication protocol, and the emphasis is on the authentication part.

The first thing that comes to mind is: why do we need another protocol? We have so many of them, TLS works fine for everybody, and everybody will tell you: do not roll your own crypto.

For me it's a journey that started with solar panels. It goes like this: everybody would like solar power, so we put some panels in our home, and, like any hacker would do, I started reverse engineering the protocol of the inverter and exported the data through SNMP to make some graphs. And that was it. If you've ever had to work with protocols like SNMP, you might have to study a bit to understand what is going on behind the scenes, how the protocol works, and how to make it work. It was not a very long-lived project; it didn't last more than a couple of weeks. But still, it got me started on studying protocols, and I still think that SNMP's history is a great place to start, because it mimics the history of a lot of other protocols. First version: everything clear text. Well, it's just "welcome, hackers". Second version: we had security, but we were not good at it, and nobody, don't worry, implements it.
It's difficult to actually implement. So we go back and forth, and we get v2c, with the v2 features and the version 1 security, which is basically none, until finally we get it right with an encrypted and authenticated version.

We see this all the time in everyday protocols, like the usual TCP and TLS: features first, security later. You could say, well, this is because of legacy, this is because the protocols are old. The truth is that we haven't really learned: if you look at other protocols like SCTP and DCCP, which were standardized, I think, in 2001 for SCTP and 2006 for DCCP, you will still see that everything is clear text and security is completely forgotten.

That is, until some new, experimental protocols like QUIC and MinimaLT, which were both born, I think, three years ago, something like that. The main reason for QUIC is to try and pull TLS and TCP together, and also add some multiplexing of streams, so you can have multiple streams of data in the same connection, basically what SPDY and HTTP/2 are doing. It could run on top of IP, but technically it runs on top of UDP, to get through firewalls and such.

What does it mean, features first and security later? It means that the packet looks a lot like this: clear text everywhere. We can track connections, we can reset connections, maybe even do some passive fingerprinting on the options, and then, finally, we have our data. In the TLS section we can also see that TLS is still in MAC-then-encrypt mode. TLS 1.3 will change that and go encrypt-then-MAC, which is thought to be somewhat more secure, for a lot of reasons.
We won't go into it here. And this is what I mean when I say security first, features later. This is the QUIC packet, and the first thing we notice is that Google tried to make everything as optional as possible, so you can skip whole sections. Even the connection ID can be skipped, because if you have a UDP tunnel you can use the UDP connection as an identifier. But still, the only clear text parts are the things that you need to decrypt the packet, or the things that are strictly related to the handshake part of the protocol. Finally, we have some multi-stream support in the stream ID, plus an offset, which mimics the sequence numbers in TCP.

MinimaLT, which I mentioned before, looks a lot like this, but the packet structure is more fixed, and thus easier to parse. Still, the overall sections are the same: the only clear text is the part needed to decrypt the packet, and then there is the actual data. MinimaLT also has support for multiple streams, in the RPC part, which can be repeated in the same packet, same as streams in QUIC.

So for our protocol I went for something very similar to MinimaLT. With connection IDs like this, each party of the connection generates its own ID and sends it to the other, so there is no clash and no synchronization needed. Then we have multiple streams, which we will see more in depth later, and some integrity checks. It's really the same thing as before, except maybe for the padding, which I put at the beginning to randomize the internal structure of the packet a bit more, to make traffic analysis and similar attacks more difficult. There is also the option for bit alignment.
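As a rough sketch, the packet shape just described (padding first, then connection ID, then per-stream headers and data) could look like this. The field names, sizes and ordering here are my own illustrative assumptions, not the actual Fenrir wire format:

```python
import struct

# Illustrative sketch of a Fenrir-style packet, based only on the fields
# named in the talk: padding at the front, a connection ID, then a stream
# section (stream ID + offset + length + data). Sizes are assumptions.

def build_packet(conn_id: int, stream_id: int, offset: int,
                 data: bytes, pad_len: int = 4) -> bytes:
    padding = b"\x00" * pad_len          # a real implementation would randomize this
    header = struct.pack("!QHIH", conn_id, stream_id, offset, len(data))
    return padding + header + data

pkt = build_packet(conn_id=0x1234, stream_id=1, offset=0, data=b"hello")
print(len(pkt))  # 4 pad + (8 + 2 + 4 + 2) header + 5 data = 25
```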
I don't think QUIC supports bit alignment; in QUIC everything is byte-aligned, and I'm not sure about MinimaLT.

Anyway, the first thing we do when we build our application is to select the stack of protocols that we will use. Since today everything runs on the web, you will almost always start with something like this: TCP, TLS, HTTP, OAuth for your authentication, and finally the application. Which is fine; there's nothing wrong with this, it works. Until you need to add things like chat, or video and audio streams, which do not really work that well over reliable connections, and which you cannot do with this stack. So you need to add another stack and manage the two in your application, maybe synchronize the OAuth authentication that you did on one stack with the other, and things like that. It gets even more complicated if you want to do something like multicast. If your application has to synchronize all of this, all the complexity is pushed onto the application, and we cannot really expect normal application developers to get this right every time; hence big frameworks and glue everywhere.

What we want to do instead is to move the management of all of these stacks into the streams we saw before. This is something that both QUIC and MinimaLT missed, because they stick to TCP-like connections. There is some way to do UDP-like connections with streams in QUIC, but it's more of a hack than the way the protocol is thought to be used. And still, none of those protocols, at least as far as I know, supports multicast.

So in Fenrir we have explicit creation of streams, even during the handshake, so you don't waste more round trips, and we can have any combination of reliable or unreliable, ordered or unordered, data stream or datagram: basically anything you can think of, even if some combinations are quite difficult to manage.
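As a sketch, the per-stream delivery combinations described above can be modeled as independent flags. The names here are illustrative assumptions, not Fenrir's real API:

```python
from enum import Flag, auto

# Each stream can combine these properties independently; the flag names
# are illustrative assumptions, not the actual Fenrir specification.
class StreamMode(Flag):
    RELIABLE = auto()   # lost packets are retransmitted
    ORDERED  = auto()   # data is delivered in send order
    DATAGRAM = auto()   # message boundaries are preserved

TCP_LIKE = StreamMode.RELIABLE | StreamMode.ORDERED   # e.g. file transfer
UDP_LIKE = StreamMode.DATAGRAM                        # e.g. live audio/video
RPC_LIKE = StreamMode.RELIABLE | StreamMode.DATAGRAM  # replies in any order

print(StreamMode.RELIABLE in TCP_LIKE)  # True
print(StreamMode.ORDERED in RPC_LIKE)   # False
```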
Forward error correction of packets is something that QUIC brought into these protocols, but it was already done in other fields, so it's not really a complete innovation. We can introduce something like XOR for the network: for every two packets that you send, you also send the XOR of the two, so you can lose one of the two packets and still get your data.

I developed, and you can use, libRaptorQ, which implements the RaptorQ algorithm. It's actually a generalization of this linear combination of packets, so that we can generalize to any number of source packets and any number of repair packets. So you can actually tailor your error correction to your network's properties.

Finally, multicast support. Multicast needs to be treated a little bit differently, though. For example, we have support for multicast even in a DTLS specification that is in an RFC, but what it does is basically just share the same key with everybody. Which works fine, except that every client can then impersonate the server, sending data to other clients while pretending to be the server, simply because it's the same key. To fix this, in Fenrir we simply reserve a connection ID, and the identifier for the multicast connection is actually the public key; we use elliptic-curve cryptography through libsodium. Then we sign the packets with the corresponding key directly. Elliptic-curve cryptography is fast enough for this; it was something not really possible with RSA, which is somewhat slower.

The last thing about managing everything in the same protocol is that the multicast connection can be associated with one or more unicast connections. That means we can use the multicast connection to just send the data to the clients, and then use the unicast connections to send recovery data or the packets that were lost. So we can have something akin to reliable multicast, which is nice.
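The "every two packets, send their XOR" repair scheme above can be sketched in a few lines; RaptorQ generalizes the same idea to arbitrary linear combinations over any number of source and repair packets:

```python
# Minimal sketch of XOR-based forward error correction: for every two
# source packets, a third repair packet (their XOR) is sent, so any one
# of the three can be lost without losing data. RaptorQ (libRaptorQ)
# generalizes this to arbitrary linear combinations of packets.

def xor_packets(a: bytes, b: bytes) -> bytes:
    assert len(a) == len(b)           # packets must be padded to equal length
    return bytes(x ^ y for x, y in zip(a, b))

p1, p2 = b"packet-1", b"packet-2"
repair = xor_packets(p1, p2)          # third packet on the wire

# If p1 is lost, the receiver recovers it from p2 and the repair packet:
recovered = xor_packets(p2, repair)
print(recovered)  # b'packet-1'
```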
Still, we were talking about the application stack; let's look into it a bit deeper and compare it to the OSI layers. In the OSI layer pile, stuff like authentication and encryption is not included, because you can do it at multiple levels. But we can see some interesting things. TCP, TLS and HTTP all carry transport duties, but while TCP and TLS have their own sessions (which are different, because the TLS session needs to be cryptographically verified), HTTP does not support sessions, so you actually need to reintroduce the session with cookies. And you also have a different session for the authentication in OAuth, which needs to be synchronized with the cookies. This is what I meant before when I said that the complexity is pushed onto the application: now you have to synchronize things like OAuth and the cookies, and manage the session that HTTP took away, and so on. We also have multiple points where we do authentication. We cannot really fix the session part, simply because HTTP took it away, but we can do something about the authentication.

So, what are the possible authentication models? The simplest one is client-server, which is what TLS uses, and what your application can easily implement.
Then you have the federated model, like Kerberos. And there is OAuth, which is somewhat of a mix between client-server and federated: it looks a lot like the federated model, but then you don't have automatic discovery, and things are difficult to implement.

Anyway, all of these models have real problems. The client-server model either gives you too many usernames and passwords, which is our current problem, or you use certificates, and then you need to think about renewal, revocation and related issues. The Kerberos federated model is actually a very good one, but it requires clock synchronization, which is very difficult for embedded stuff. Like this badge (what, the update failed? nice), which does not have a clock inside. And requiring clock synchronization on the internet is something that you probably don't want to do.

Finally, OAuth. Well, it's just a big bag of nope. If you follow the development of this protocol, you will already know that there is stuff like the main author, who worked on OAuth 1 and walked away a couple of months before the standardization of OAuth 2, asking for his name to be taken out of all the documents. They tried to standardize it as a protocol, but it's all so loosely defined that they couldn't do it, so it's actually just a framework. And it works, I mean, it's safe, everybody uses it, because cryptographers and security people have gotten together, and the libraries that you find on the internet are actually a very specific subset of OAuth. Because you can actually have implementations of OAuth that are completely conformant to the specification, but also completely insecure. So do not implement OAuth yourself.

What I chose for this project is the federated model, and I actually tried to separate out the various parts of this federation.
So you have your application, like your web browser; the client manager, which, a bit like in the Kerberos model, is the component that manages all your tokens and authentications; then you have your service, which can be the web server; and finally the authentication server, which is a separate entity from the service.

An overview can be something like this: the application connects to the manager, locally, on the same machine. The manager does all the handshake, authentication and token management work with the authentication server. The authentication server then just notifies its service (the service is inside the same domain that the authentication server manages) about the new user, and the service sends back the keys and connection information, like connection identifiers and other data, which are relayed all the way back to the application. Now the application can connect to the service directly, without any more handshakes or round trips. This is actually one of the few models that works without clock synchronization, and I think it's good enough; and, as I said, it is formally verified through mathematical properties.

The previous slides left a big hole, however: what is the trust model of the whole system? We could just use the certificate model, which has a lot of well-known problems, or we could just use DNSSEC, which is available for almost any TLD now. So that's what we do. We create a binary record with information like multiple authentication server IPs, the UDP ports to connect to, and multiple public keys, so that you can roll over to new keys very seamlessly (some of this support is still to-do). We don't use X.509 certificates, just plain keys, because X.509 is a very complex and very abused standard.
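The binary record just described has to survive being embedded in DNS text fields; the talk names Z85 as the string-safe encoding used for this. A minimal encoder, following the public ZeroMQ Z85 specification (4 bytes become 5 printable characters, so 25% overhead versus base64's 33%):

```python
# Minimal Z85 encoder per the ZeroMQ Z85 spec: each big-endian 4-byte
# word is written as 5 base-85 digits using a string-safe alphabet.
Z85_ALPHABET = ("0123456789abcdefghijklmnopqrstuvwxyz"
                "ABCDEFGHIJKLMNOPQRSTUVWXYZ.-:+=^!/*?&<>()[]{}@%$#")

def z85_encode(data: bytes) -> str:
    assert len(data) % 4 == 0, "Z85 input must be a multiple of 4 bytes"
    out = []
    for i in range(0, len(data), 4):
        value = int.from_bytes(data[i:i + 4], "big")
        for div in (85**4, 85**3, 85**2, 85, 1):
            out.append(Z85_ALPHABET[(value // div) % 85])
    return "".join(out)

# Test vector from the Z85 spec: these 8 bytes encode to "HelloWorld".
print(z85_encode(bytes([0x86, 0x4F, 0xD2, 0x6F, 0xB5, 0x59, 0xF7, 0x5B])))
```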
I think the implementation in GnuTLS is something like 35,000 lines of code just for parsing the certificates, which to me is way too much. Also, we don't strictly standardize on a single trust model, so you can extend it; it's all managed via plugins. If in the future we want to add something like GPG, we could do it just like the Tor people do, and save our binary record encoded inside some description field. So it's very easy to extend. Z85 was chosen for the encoding because it's somewhat more efficient than base64, and it's actually string-safe.

We were talking about authentication. Authentication is handled directly in the handshakes, which are designed to completely avoid any kind of amplification attack. This also means that we cannot have zero-round-trip connections like MinimaLT and QUIC actually do. It's a design choice, mainly because this way we avoid any problems with roaming stations and such.

There are three different handshakes. The first is taken roughly from TLS and requires three round trips. Then we have a stateful connection, which requires keeping a bit of state in the first round trip. It also has a weaker form of perfect forward secrecy, which means that you don't have one ephemeral key generated by the server for every connection, but just one key shared among all connections for, like, five minutes; then you drop it, generate a new one, and go on like this. It still works (both QUIC and MinimaLT do it like this), but it's not as safe and as robust as having perfect forward secrecy per connection. Finally, we have a one-round-trip way to set up the connection, which, however, needs to synchronize keys through DNSSEC, so you continuously roll out new keys; this is how you keep perfect forward secrecy. Again, this is formally verified through ProVerif models; you can have a look
at them, it's all on the web. The authentication is actually token-based, as we said before, which makes it very easy to manage, and tokens are actually just random strings of 256 bits. There is no signing required for tokens, which is something nice that comes from taking out the clock synchronization requirements.

Another thing that we use inside the authentication is authorization, which is not a new idea, but something that OAuth uses very thoroughly. OAuth actually has another thing, application authentication, where you put the identification of the application and its password inside the binary, which everybody can get. But the actual authorization part is the scope: when you develop the application, you see in the documentation things like user.read or file.write, to limit what the application can do. We have the same thing in Fenrir, but we put it in a lattice, so we can enumerate it easily and walk through it. And we still have device authentication, not application authentication like OAuth. Giving a hierarchical scope to the authorization allows us to tie a single token to an authorization, and do fun stuff like limiting the token's authorization, without regenerating the token, at multiple points: at the authentication server, because every authentication goes through there, and at the client manager. This means a couple of different things. First, applications absolutely do not manage anything related to tokens, handshakes, authorizations, nothing like that.
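A sketch of those two points: tokens as plain 256-bit random strings (no clocks, no signing), and hierarchical scopes that can only be narrowed. The dot-separated scope syntax here is my own assumption for illustration:

```python
import secrets

# Tokens in this model are just 256 random bits: no timestamps, no
# signatures, nothing to parse. Scopes form a hierarchy, so a granted
# scope covers exactly itself and everything below it.
# (Scope syntax is an illustrative assumption, not Fenrir's.)

def new_token() -> bytes:
    return secrets.token_bytes(32)      # 256-bit random token

def allows(granted: str, requested: str) -> bool:
    """True iff `granted` equals `requested` or is an ancestor of it
    in the dot-separated scope hierarchy."""
    return requested == granted or requested.startswith(granted + ".")

print(allows("user.read", "user.read.file"))  # True
print(allows("user.read", "user.write"))      # False
print(len(new_token()) * 8)                   # 256
```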
They only get the keys and the IP addresses to directly connect to the service. Second, the lattice needs to be synchronized between the service, the authentication server and the client manager. This is so that the user can actually select and limit, for example, an application that you're testing to a specific scope, and the limit is enforced by the protocol itself; it's not a self-imposed limit of the application. So even when you're testing a third-party application, you can be much safer about what the application is allowed to do.

There is some hardening inside all of this key exchange and token generation. For example, there are shared secrets between the client manager and the authentication server and service, which are basically just strings that are XORed with the tokens. I am also experimenting with something like hash-based OTPs, so that you can actually check how many times a token has been used at the service, avoid things like token stealing, and be immediately notified about it.

Still, just a couple of these measures, like putting another public key in the trusted service, enable us to completely take away that nice single point of failure that is the authentication server. That would usually be the first thing a hacker would target, right? It manages all the authentications, it has all the tokens. But now, thanks to this XORing of shared secrets, even if you hack into an authentication server, all you will manage to do is force the sysadmin of the authentication server to regenerate the shared secrets. Which also has the nice side effect that every client will be immediately notified of a breach of the service or the authentication server.
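The shared-secret hardening above can be sketched like this: the authentication server stores only a blinded (XORed) copy of each token, so stealing its database alone is not enough to impersonate a user. This is a toy illustration of the idea, not Fenrir's actual scheme:

```python
import secrets

# Toy sketch: the service accepts `token`; the authentication server only
# ever stores token XOR shared_secret, where the shared secret is held by
# the client manager and the service, not by the authentication server's
# token database.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

shared_secret = secrets.token_bytes(32)    # never stored with the tokens
token = secrets.token_bytes(32)            # what the service will accept

stored_on_auth_server = xor_bytes(token, shared_secret)

# Recovering the real token requires BOTH pieces:
print(xor_bytes(stored_on_auth_server, shared_secret) == token)  # True
```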
So we will be able to immediately know about something like that, and it will also help in disclosing this type of breach.

The end result of all of this, again, is a formally verified secure transport, where all kinds of transport are included and easily accessible to the end user. Since every token is managed by the client manager, the user only has to decide whether or not to allow an application to access what it wants to access. Applications and services never see authentication data, so we have less and less security-related code inside the applications, which is a nice thing. We have an enforced maximum authorization, and there is no longer a single point of failure.

So, what's the status of all of this? libRaptorQ, the forward error correction implementation of RaptorQ, works; I need a couple of approvals to roll out the new stable release. Everything is LGPL3. Fenrir, the library implementing the protocol, is still not finished. Currently the handshake works, the trust model works, the connection is set up, but I need to finish the flow control, which is a big part of having a connection. Still, everything is managed through a plugin architecture, which is very thorough, so you can actually add and remove pieces as you want and experiment with everything. And that's it. Everything is on the website, please have a look at it. Any questions?

Yeah, thanks a lot. If there are any questions, please go to the microphones at the front or at the back.

I don't know if I can express this well, but I wonder about using it with an RPC protocol. Can you speak a bit about this?
I wonder about using it with an RPC protocol, where there is a request and a response, and they may be overlapping. On one of your slides you said a stream can be datagram or data stream, ordered or unordered, reliable or unreliable. If a request is a datagram, it might be bigger than a packet, but it does not need to be ordered: for example, you could send out three requests and then get the replies in a different order.

Yeah, that's a problem over TCP, because you have head-of-line blocking.

So can you accommodate this, and do you have some RPC framework that you would suggest using with this, so the application could actually use it?

I have not gotten into developing an RPC protocol tailored to this. But you can actually choose, per stream, a combination of the three features and just use it. You can also have big messages in a datagram fashion, because the message is split between multiple packets, so you're not limited to the size of a packet for your datagram messages.

And if you lost one packet, would it retransmit one packet, not the whole datagram?

Yeah, that depends on what you choose for error correction, and on whether you have a datagram stream that is unreliable or reliable.

Any further questions? No? Then thanks once again to Luca.