The first post-lunch presentation is from the OpenBSD developer Theo Bühler on recent progress in and around LibreSSL. Thank you, Peter. Welcome. As the title indicates, I'm going to ramble a little bit about what we did around LibreSSL. My name is Theo Bühler. I've been an OpenBSD developer for quite some time, must be seven or eight years now. When I started here as a mathematician, I didn't want to do anything that had to do with mathematics, and that's why I ended up doing crypto, which has nothing to do with mathematics whatsoever. Yeah, I assume that everybody knows a little bit what LibreSSL is. I'd like to define it as one of the four major forks of OpenSSL. So what are these four major forks? The first one is OpenSSL itself, which started in 1998 as a fork of SSLeay. SSLeay was the initial implementation of SSLv2, the protocol that came out of Netscape; SSLv1 was so broken that it was never published. The author, Eric A. Young, was absorbed into the RSA corporation when the crypto restrictions were more or less lifted, other people had to take over, and basically OpenSSL replaced SSLeay, which then died. Over the next 16 years, OpenSSL accumulated a lot of not-so-great code. The code of SSLeay wasn't great either, but it was early code, code of the 90s, so that was okay. Basically everybody dumped whatever research project they had in there, and it landed without all that much review, which led to a lot of fun and a lot of features and a lot of stuff you don't need. You may want to look at Bob Beck's talk on the beginning of LibreSSL to learn about funny things like big-endian amd64 support and stuff like that. I won't elaborate too much more. Sixteen years after 1998, Heartbleed happened. There were lots of disasters before, but nobody really looked, and Heartbleed made people look. People didn't like what they saw, so committees were formed to fund OpenSSL work.
After Heartbleed, people wrote articles like "the Internet's security is supported by two guys named Steve", one Steve being Stephen Henson, who wrote the code, and the other Steve being a guy who managed to hire him as a contractor and handled the financing. Stephen Henson apparently wasn't paid very well for his work, and he was basically the only one remaining, next to one guy who maintained the Perl assembler stuff and some contributors who contributed occasionally. And while committees were being formed, some people lost patience, looked as well, and didn't like things such as the memory management, which made Heartbleed that much worse than it already was, and OpenBSD forked LibreSSL. A bit later, in June, Adam Langley from Google made BoringSSL public. It's not quite clear to me when BoringSSL itself actually started. The commit history begins with a massive code dump on the 20th of June, but the fork happened in January, and I guess they started planning to replace OpenSSL with an in-house fork they could use internally at Google before Heartbleed was even discovered. And then there's a fourth fork, which happened quite recently. You may or may not be aware of that one. Big companies, Akamai and Microsoft, wanted to have QUIC, and in particular QUIC support in OpenSSL, and that was a pull request that was open for quite a while and didn't happen, so they forked OpenSSL. It's a bit much to call it a fork: it's OpenSSL plus a patch set adding the BoringSSL QUIC API. So these are the four forks, and LibreSSL is the oldest one. BoringSSL is in wide use, despite there being no API guarantees. It is used by Google itself, various projects embed BoringSSL, and for instance the crypto support in the Swift language is also based on BoringSSL. LibreSSL's main features, I would like to summarize them: the crown jewels obviously are libtls and its API. It's basically a sane wrapper, which is easy to use, around the SSL API from libssl.
It is used throughout OpenBSD: in all things speaking TLS, we're using libtls. It's tremendously easy to use, tremendously hard to misuse, and it just works. The second major thing we have is a clean-room implementation of a TLS 1.3 stack, which mostly happened between 2018 and 2020. It started with a hackathon in Bob's basement, where we hacked together some initial things. The centerpieces are a record layer written by Joel Sing and the handshake state machine written by myself. This is more or less feature-complete. Two major things are missing. The first is pre-shared key support, which is needed for session resumption; that is work in progress. I'm working on that right now, and it should be done in the next months. I have some stuff working, some other stuff not quite yet, but there is a new release around the corner, so it should work in the next release. The second thing that is missing is Encrypted Client Hello, which would be really nice to have, but unfortunately this is a standard that grows in size and scope, and it's tremendously complicated. It started out as encrypted Server Name Indication, so you don't have to transmit the server name you want to talk to in plain text over the net, and it turned out that this is harder than you might think, and implementing it will be a ton of work. One thing we don't support and won't support is early data, the zero round-trip time (0-RTT) data, which would be nice performance-wise, but is dubious security-wise. Another major feature is the new certificate validator, which I will talk about later on; it was written by Bob. Then I also count documentation as a feature. We have lots and lots of documentation written by Ingo Schwarze, and unfortunately there is only one of him, which means we don't have the capacity to document 3,000 functions in the meticulous manner in which he writes manuals.
We have at least 50 manuals that landed in the last year, because there was a major project to document things, sponsored by Genua, but still there's lots and lots of stuff that is undocumented. One great thing about Ingo is that when he documents things, he doesn't document what should be the case, he documents what actually is the case. He figures out, oh, there's a bug, I need to fix it, so you first get a patch that fixes bugs, and then a patch with the documentation of the fixed thing. One further thing is that we did a lot and a lot of code cleanup and refactoring, but the code base we started with is such an Augean stable that this is a task that will never be finished. We are largely compatible with OpenSSL 1.1 these days, at least on all the features we both support. There are some corner cases with locking and other things where it's not quite there yet, but it's usually not a big deal to support something with LibreSSL if it works with OpenSSL. This fell out of a ton of work I did in the last year, which is a bit hard to talk about, which was basically making structs opaque everywhere and using the OpenSSL accessor API throughout the ports tree. This was a lot of work, but it's not really heroic or interesting. Now that things are opaque, we can say that the ABI is about as stable as the one of OpenSSL 1.1, which addresses one of the criticisms we faced over the last few years, that we had plenty of ABI breaks, and most projects can't deal with ABI breaks because they never do them. A bit more on OpenSSL compatibility. We have lots and lots of stuff that is part of the OpenSSL API. We basically have what we need, and that's a lot more than we actually wanted. We do not have OpenSSL 3 API support yet. We don't really have plans for how to deal with that. We will see how the #ifdef mazes turn out in the next few years and cook something up. We have about 2,300 OpenBSD ports that link against libcrypto or libssl.
There are a handful that also use libtls, but they're a very small minority. Of these, fewer than 100 need patches, which is a bit less than 5%. That of course isn't ideal; the ideal thing would be that nothing needs patches, but it's a lot better than many people make it out to be. Of these, I would estimate that about a fourth are legacy software that is no longer maintained, that was never ported to OpenSSL 1.1 and needed patches to work with OpenSSL 1.1. Other things have upstreams that don't take patches. Other things are patches that we could upstream but didn't, because there are only so many hours of work we can put into this project. It's more than I would like to have, but it's not that bad. There are a few painful patients. A really bad one is Qt, because it basically has its own wrappers around the OpenSSL API, which is macro hell. I don't really understand why it is necessary, but basically they dlopen() libcrypto and libssl on demand and then access these functions internally with some strange things, and whenever you touch something that it doesn't expect, it will break. Another very, very painful patient is PyPy, because it embeds an old version of PyCryptography, and PyCryptography is the most sensitive port to libcrypto changes. Basically, you look at it and it breaks. Bob Beck would change something in BoringSSL, run regress, and PyCryptography broke. PyPy has a very old version of that embedded, which means that it still breaks. PyCryptography decided to rewrite everything in Rust, which I'm very happy about, because it no longer breaks if I touch anything. And then there is stunnel, which for some reason has a maintainer who is very hostile towards LibreSSL. Why, I don't know, and I don't really care; it is also a painful thing to patch. And one feature that many people would like to have is support for Ed25519, which is a Bernstein elliptic-curve signature scheme.
And the problem is we do have support for this curve, but the support needs to be done using the so-called EVP or envelope API, and it is just a massive pain to land that. At some point we will have to do it. And then other things that people request are some variants of SHA, like SHA-512/256, or SHA-3, or BLAKE, or whatever. Most of the time you can actually do without all that stuff, but many things start using it. At some point we will probably have to suck it up and land that. So far it wasn't really necessary. Then six ports link against OpenSSL for various reasons. The DKIM filter for OpenSMTPD has a flavor that wants to use Ed25519, because it makes nice small signatures, and this links against OpenSSL because we don't yet support this signature scheme. Not a big deal. The second one is Postfix. Postfix has an OpenSSL developer who also created DANE, and he pushes DANE everywhere he can, which is of course his right, but we do not have support for DANE. At some point we might add it, but maybe not, it's a bit intrusive. Florian has some work in progress for wrapping DANE in libtls, and maybe that will land, but that won't help with Postfix. So for the time being, Postfix has to use OpenSSL. Then another one is Bro, also known as Zeek. With the recent update to version 5, they started using the TLS pseudo-random function for some deep packet inspection. That's something that should never have leaked out of OpenSSL, in my opinion. OpenSSL does TLS for you, you shouldn't need to access the pseudo-random function yourself, but it did leak out, and of course somebody is going to use it. Another one is Node. This is also Ed25519 support; it could be patched out. There's an API that we could provide, and if someone is motivated to port that to LibreSSL and add support, I would estimate that would take about a week of work. But, well, who cares, it's something self-standing, it doesn't link against anything else, so it can as well use OpenSSL.
And then there is NSCA-ng, the next-gen Nagios thing, and this uses pre-shared keys for obvious reasons. You need to be able to talk to it without all sorts of network things, so you need an encrypted way to talk to it, and it needs PSK for that. That might be switched to LibreSSL or not; it's a niche thing, so who cares. And the last one is LibreTLS, which we have in ports for testing purposes. LibreTLS is a port of libtls to OpenSSL, and by design this uses OpenSSL, so it needs to link against OpenSSL. So we can build the entire ports tree minus six more or less important things, which means we're in pretty good shape with compatibility. Now before I talk about the validator, I would like to do a little bit of technical stuff and give you some background on certificates. So what is a certificate? Well, it's a complicated struct, which won't surprise you. The ASN.1 definition is on the screen. It is a sequence consisting of a TBS certificate, a signature algorithm, and a signature value. So what is a sequence? A sequence is basically a struct. What is TBS? TBS means "to be signed", so the certificate is something that is to be signed, something that says how it is signed, and the signature itself. So far, so good, nothing complicated. Now let's look at the TBS certificate. The TBS certificate is again a struct. It contains a version. The version nowadays is always three. You won't usually find any other certificates in the wild, or I hope you don't. It contains a serial number. The serial number is something that uniquely identifies the certificate, so that you can pinpoint which certificate misbehaved, and the signing certificate authority can check what went wrong with it at the time of signing. Then it contains something which is, again, an algorithm identifier, which must match the one in the outer struct. I won't go into why it is there. Then it contains a field called issuer.
This identifies the certificate authority that signed it. You have a validity field, which says from when until when the certificate is valid. There's lots of fun stuff in there, because there are two formats for how the time is encoded, depending on whether the date is before 2050 or after (UTCTime versus GeneralizedTime), and it's complicated. Then there's a subject, which identifies what is signed. There's a subject public key info, which tells you what the key is that is signed. Then there are some unique identifiers for the issuer and the subject, which aren't very important to us. Then there is the thing that makes things really fun. These are certificate extensions. Certificate extensions are things like the subject alternative names, or many other things, such as what policies the certificate must follow, and this is what makes certificate verification complicated. The main thing I need to point out is that a single certificate has a pointer to the issuer, which says who signed me, and a pointer that says who am I. This creates a link between two certificates, which makes validation of certificates not only a cryptographic thing where you verify a signature: you also need to find, in some graph, the issuer of the certificate. Validation is pretty much a path-finding algorithm in a bunch of certs, some of which you trust, some of which you might or might not trust, and some of which you want to know something about. So you need to walk a path and see if there is a path of validly signed things where all the extensions behave as you want them to. That's something I will talk about a little bit later, but let's look at the PEM-encoded certificate, which you might have seen on your computer at some point. It sits between an armor which starts with BEGIN CERTIFICATE, then there is some base64-encoded blob, and it ends with END CERTIFICATE. There might be some stuff above or below it that describes what this certificate is, which is of no importance to the parser.
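To make the path-finding picture concrete, here is a toy sketch in Python. The `Cert` type and all the names in it are invented for this example, and a real validator also checks signatures, validity periods and extensions at every hop; this only shows the "walk the issuer pointers until you hit a trust anchor" skeleton.

```python
# Toy model of certificate chain building as path finding.
from dataclasses import dataclass

@dataclass(frozen=True)
class Cert:
    subject: str
    issuer: str   # the pointer that says "who signed me"

def build_chain(leaf, pool, roots, max_depth=8):
    """Walk issuer pointers from the leaf until a trusted root is hit."""
    by_subject = {c.subject: c for c in pool}
    chain = [leaf]
    current = leaf
    for _ in range(max_depth):
        if current.subject in roots and current.issuer == current.subject:
            return chain          # reached a self-signed trusted root
        issuer = by_subject.get(current.issuer)
        if issuer is None:
            return None           # no path to a trust anchor
        chain.append(issuer)
        current = issuer
    return None                   # path too long, give up

root = Cert("Example Root CA", "Example Root CA")        # self-signed
inter = Cert("Example Intermediate", "Example Root CA")
leaf = Cert("www.example.com", "Example Intermediate")

chain = build_chain(leaf, [leaf, inter, root], {"Example Root CA"})
```

Even this toy shows why the real thing is hard: with cross-signed CAs there can be several candidate paths, and the validator has to search among them while enforcing all the extension rules on every candidate.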
If you ever wondered what PEM means, that's privacy-enhanced mail. It comes from the Privacy-Enhanced Mail work: they wanted to privacy-enhance mail, so they embedded encrypted or signed things between a header and a footer, and certificates use the same encoding. It's base64-encoded DER, and DER is the Distinguished Encoding Rules, a particular encoding of ASN.1. If you ever looked at a few certificates, you may have wondered why a certificate always starts with MII. I claim that, and you can verify it on an OpenBSD system: there are 133 CA certs in OpenBSD's root bundle. You can grab them by looking for the beginning of the ASCII armor, and all of them start with the letters MII. So why is that? It is base64-encoded, so let's pipe that into a base64 decoder, then hex-dump it and look at what we get. What we get is three hex numbers, because base64 translates each ASCII character from the base64 alphabet to a six-bit value, which means that four six-bit values translate to 24 bits, which are three bytes. The first byte is 0x30, and 0x30 is DER speak for an ASN.1 SEQUENCE. An ASN.1 sequence is what a certificate is, so the certificate structure is this blob. The 0x82 is DER speak for "the length that follows is encoded in two bytes". So the two bytes following the 0x82 encode the length of the entire blob. So MII is the base64 of 0x30, 0x82, plus the two most significant bits of the length, because it encodes 18 bits: 3 times 6 is 18, so we have two bytes plus two bits. The length of a cert is always more than 127 bytes, so you need at least two bytes to encode it, so it is a sequence of something whose length is encoded in two bytes, and it's almost never something that is longer than 64K, unless you are some university certificate in the Philippines and have 2,500 subject alternative names listed in your extensions. So, well, tough.
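The arithmetic behind the MII prefix can be checked in a few lines of Python with the standard library (a throwaway sketch, nothing to do with LibreSSL's code):

```python
import base64

# A DER certificate starts with SEQUENCE (tag 0x30), then the long-form
# length marker 0x82 ("two length bytes follow"), then the length itself,
# e.g. 0x04 0x00 for a 1024-byte body.
header = bytes([0x30, 0x82, 0x04, 0x00])
encoded = base64.b64encode(header).decode()
# 3 bytes = 24 bits = 4 base64 characters; as long as the length needs
# exactly two bytes (128 bytes up to 64K), the first three are always MII.
print(encoded)   # -> MIIEAA==

# Decoding the first four characters of a real PEM body gives the tag back:
first = base64.b64decode("MIIB")  # 4 chars -> 3 bytes
assert first[0] == 0x30  # ASN.1 SEQUENCE
assert first[1] == 0x82  # two length bytes follow
```

Changing the length to anything that doesn't need exactly two length bytes changes the marker byte, and the MII prefix disappears, which is exactly the point of the 2,500-names example.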
Then you don't start with MII, but all reasonable certificates have a length between 128 bytes and 64K, so the first two bytes are 0x30 and 0x82, a SEQUENCE of something whose length is encoded in two bytes, and that's why. Now let's pass on to the new certificate validator. As I said, it's a complicated thing, a path-finding algorithm, and complicated things tend not to have nice code in the code base we inherited in 2014; the validator was pretty much unmaintainable, because it was a huge spaghetti mess of things that you just can't reason about. So during lockdown, Bob wrote a new one from scratch, just using some utility functions. As he likes to put it, he got COVID and lost his sense of taste and smell, so he could dive into that. The initial code of Bob's validator was pretty much correct. There were some cases that we needed to fix, but you always have that if you do something complicated; we only found a few minor bugs, nothing serious. But then the fun started: we needed to make it compatible with the legacy verifier, and this resulted in many, many months of whack-a-mole. Basically it took up one year of development of LibreSSL, and it was only just finished. Lots of software relies on strange error codes that make no real sense outside the context of the legacy validator; they don't make any sense if you look at the RFC itself. But some software needs you to behave exactly this way in exactly this situation, otherwise it breaks. Many things depend on undocumented behavior of the verifier callback, which ties in with these strange and overly specific error codes, and some things even rely on "you have to traverse things this way, otherwise we break", or at least have regression tests that break. It took us, as I said, about two years to be reasonably compatible with the legacy validator. We're almost there, but not just yet.
As I said, it's very brittle: you fix one thing and you break ten others, but in the end Bob managed to do it, though not without introducing one not-so-nice hole, which failed to verify client certificates. Fortunately, we plugged that very quickly. Another nice thing that we have is the legacy record layer rewrite. The record layer is something completely unrelated to certificates. It's the thing that translates between what we get from the network, say on the socket that speaks TLS, and the TLS stack: it turns the fragmented data into messages that the TLS stack can understand, and vice versa. So it's the thing that underlies and provides the abstraction needed for handling TLS. Joel Sing wrote one of the nicest pieces of code I've ever read, which is this record layer for TLS 1.3, and he already had the plan, when he wrote it, to adapt it for the old stuff, TLS and DTLS, which are currently driven by the legacy stack. He wrote the record layer for TLS 1.2 and DTLS. He has as a goal to remove ssl_pkt.c and d1_pkt.c, which are terrible, terrible code. Basically ssl_pkt.c existed already, and then some PhD student took it, copy-pasted it and adapted it as much as needed so that he could play games over UDP and encrypt them with TLS, which is what DTLS was for. The rewrite uses two things from BoringSSL, which are needed to deserialize and serialize ASN.1 and protocol things. These are the CBS and CBB APIs, which stand for crypto byte string and crypto byte builder, and they avoid any explicit pointer manipulation, which makes the code that much safer. With this work, we got DTLS 1.2 support pretty much for free. As a consequence, Landry could port Linphone and update its media stack to using DTLS 1.2, which is quite nice. Another very nice thing that fell out of this is Klemens Nanni's work of porting the Telegram Desktop client to OpenBSD, which apparently just works.
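To give a flavour of what the CBS pattern buys you, here is a toy Python re-imagining of the idea (not the real C API): every read goes through a bounds-checked cursor instead of a bare pointer, so a truncated record fails loudly instead of reading out of bounds.

```python
class CBS:
    """Toy model of BoringSSL's CBS: a cursor over immutable bytes
    where every read is bounds-checked first."""

    def __init__(self, data: bytes):
        self._data = data
        self._pos = 0

    def get_bytes(self, n: int) -> bytes:
        if self._pos + n > len(self._data):
            raise ValueError("read past end of buffer")
        out = self._data[self._pos:self._pos + n]
        self._pos += n
        return out

    def get_u8(self) -> int:
        return self.get_bytes(1)[0]

    def get_u16(self) -> int:
        hi, lo = self.get_bytes(2)
        return hi << 8 | lo

    def get_u16_length_prefixed(self) -> "CBS":
        # Length-prefixed fields are everywhere in TLS; slicing off a
        # child CBS means the parser can never read past the field.
        return CBS(self.get_bytes(self.get_u16()))

# Parse a TLS-record-like header: type, version, length-prefixed body.
record = bytes([0x16, 0x03, 0x03, 0x00, 0x02, 0xaa, 0xbb])
cbs = CBS(record)
content_type = cbs.get_u8()          # 0x16: handshake
version = cbs.get_u16()              # 0x0303: TLS 1.2 on the wire
body = cbs.get_u16_length_prefixed() # exactly two bytes long
```

The real CBS/CBB is plain C, but the shape is the same: parsers are written as a sequence of "give me the next n bytes" calls, and the buffer arithmetic lives in one audited place.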
One missing bit is we do not have support for the BIO_ADDR things, so Qt can't use that yet, which means that we have to compile Qt without DTLS support. That is a bit unfortunate, because it would enable some nice things. Then another nice thing, which is brand new, is the QUIC API, which I already mentioned in the beginning. It is the de facto standard API for doing QUIC. It was designed by David Benjamin, one of the main developers of BoringSSL at Google, in parallel with the writing of RFC 9001, which defines the TLS part of QUIC. This was ported to OpenSSL by Todd Short from Akamai. He opened pull request 8797; if you want some fun, entertaining reading, you will need a lot of popcorn. So they said: we can't take that, we are already late with OpenSSL 3; this feature will have to wait until after OpenSSL 3. In May 2021, QUIC was finally standardized. In September 2021, OpenSSL 3 was released, so people were eager to use QUIC, but couldn't, because this pull request was still open. Daniel Stenberg from curl wrote a blog post: this is the API we want, but they just don't want to merge it. And a few weeks later it turned out, or it was communicated, that OpenSSL wants their own stack, which means not just the API, but really the whole protocol implementation. The real bummer was they don't want to be compatible with BoringSSL, which means that a lot of work that was invested in ngtcp2 and other things won't be interoperable without patching. For some reason, I don't know why, but someone must have a reason. Maybe someone wants a challenge, maybe it's NIH, maybe there's a good reason. I don't know. That's not explained. People were surprised, because QUIC is a transport protocol dealing with the nitty-gritty of UDP, which is not really within what is perceived to be OpenSSL's expertise. That shouldn't be surprising, because their thing is crypto stuff and not networking stuff. And in November 2021, there was an IETF side meeting by Rich Salz and some other people, and they announced the QuicTLS fork.
This is a hugely entertaining video conference. I call this RFC 902.0, which is a lot of highly paid executives lamenting the fact that they now own a fork of OpenSSL, which they don't want to own. So, further stuff about the QUIC API: Bob Beck and Joel Sing ported it to LibreSSL. Bob started that in June in Bruges over a few days, and it took Joel a bit longer than we thought, but after a few nights' work he had it working. It plugged extremely nicely into Joel Sing's record layer. The design just worked out; a little bit of refactoring was necessary, but that's always the case if you need to fit in something new. One thing it needed is EVP ChaCha20-Poly1305 support. This was quite painful, but yeah, since we wanted QUIC, this was ported as well. EVP is the crypto abstraction layer, the recommended way of dealing with ciphers, which makes things that can be done in a single API call need 20 API calls, because EVP is nice. An experimental version of this API will be available in LibreSSL 3.6. If you compile ngtcp2 and curl, you get a thing that can speak QUIC with this API, without any patches needed. And just a few days ago, William Lallemand from the HAProxy project landed a working version in the HAProxy master branch, which is minimal but apparently working. Very well done. Full support will need us to add a wonderful API to our stack, which maybe allows you to get the things that would be parsed as TLS extensions as raw bytes, which HAProxy can then manipulate and have parsed by the SSL stack. The BoringSSL API works, but it's not great; to be fair, that may be one of the reasons that contributed to OpenSSL's decision. The first thing is it exposes full structs and enums publicly, which is really, really bad, because it means that whenever you need to change one of these structs, you have a flag day, and because nobody can deal with flag days, it means it can't ever be changed. And surprise: BoringSSL and QuicTLS have already diverged.
So BoringSSL changed the struct and has two members where QuicTLS has only one member, because it has the old version of the BoringSSL API. And the cherry on top is that ngtcp2 initializes the struct without C99 designated initializers, which means that you need to be very careful how you pack things so that things don't blow up. But it can be done. With BoringSSL we have a very good contact, because there are some contacts; David Benjamin wrote to us and said they're open to improvements, so maybe we can do something there. But unfortunately, QuicTLS is probably set in stone: there is a pull request open since the beginning of November 2021 that says, well, we don't match BoringSSL and we should change that, but we can't, because it's an ABI break. Now, something very near and dear to my heart as a mathematician is work on primality testing. The starting point of this saga is a wonderful preprint from 2018 called Prime and Prejudice, which found some rather disturbing facts. I'm quoting from the abstract: they are able to construct 2048-bit composites, so numbers that are not prime, that the primality test will declare prime with a probability of 1/16. This is as bad as it sounds. And the documentation, which LibreSSL inherited from OpenSSL, says in this situation: it's very unlikely, don't worry about it, it's 2 to the minus 80. That's off by 76 in the exponent, which is very bad. But LibreSSL and OpenSSL aren't the only ones affected by that. They found a number of libraries where they can construct numbers that will always be declared prime by the supplied primality tests. This is tricky to fix. There's an easy workaround, which is very unsatisfactory, which is to just crank up the number of rounds of Miller-Rabin. But primality is already very, very expensive to check, so you can't just do a factor of 10 more tests, because it's just too slow.
The recommendation is to use an algorithm named after four famous mathematicians, Baillie, Pomerance, Selfridge and Wagstaff: BPSW. The problem is this isn't easy; someone needs time and skills to implement it. Fortunately there are some people in the world who have both combined, and Martin Grenouilloux is one of them. The background to this is that Marc Espie found the preprint earlier this year, independently. He contacted us saying: well, this is something that might be interesting, and I have a student who has a knack for math, would you be interested in looking at that with him? Of course. Martin already had an implementation of this algorithm in Python. He apparently didn't have much experience with C, but he said, well, I can do it. Okay, worst case it doesn't work. A few weeks later, I had a pretty good C implementation in my inbox. It was obviously written by someone who isn't very experienced in C, but, most importantly, it had very few bugs. It was correct, and it was fixable. As always when you work with a student, there were breaks, because there were exams. No big deal: a few weeks of no work, but then we sat down. We had something that worked, we had a mostly correct implementation. So we cleaned it up, optimized it, simplified it, fixed it, and committed it. The result is one of the nicest pieces of code in libcrypto, which is a very low bar, but it's still a very nice compliment, and it's a piece of amazing work by Martin. Then there's something that Job wanted to have, which is RFC 3779 support. This is about routing and BGP. It's an X.509 certificate extension for IP addresses and identifiers of autonomous systems: the issuer of a certificate transfers some Internet number resources to the subject. This is a part of libcrypto, and it was ported by Job. It helps rpki-client and makes the openssl x509 output nicer. Unfortunately, it needed a lot of work.
The public API is pretty broken and inefficient, and it costs about 10% of runtime performance of rpki-client, which is too bad, but not as bad as it could be. I have five more minutes; three more minutes. So, there's always a lot of work on testing ongoing. Ilya Shipitsin, one of the HAProxy guys, has been tremendously helpful with dealing with all sorts of GitHub stuff. He helped us spin up an ASAN continuous-integration thing, which has been invaluable. He also helped us with triaging Coverity issues. Then we use tlsfuzzer, which is a protocol fuzzer, to check whether our implementation is compliant. It tickles many corner cases and helps improve standards compliance a lot; thanks a lot to Hannes Mehnert, who mentioned it to me at BSDCan after my talk. It was tremendously helpful. Then we have Jeremy, the Ruby guy, who needs us to take care that his Ruby gem doesn't break, so we run that as part of our regress tests as well, and it has been very useful, because it covers stuff we don't have good coverage for. A very nice thing that happened a few weeks ago is that the eldest son of Joel Sing started sending pull requests, and he rewrote and improved lots and lots of old tests that nobody wants to look at. Unfortunately he stopped after I threatened him with having to deal with CVS himself. Hopefully it won't scare him off permanently. I'd like to finish with some thanks. First of all, the core team of LibreSSL, which is Brent Cook, Bob Beck, Kenichiro Inoguchi and Joel Sing; Ingo Schwarze, because he's the best; Antoine and Stan, because they're also the best, for all their help with ports. Genua is tremendously helpful with all the testing infrastructure and also for sponsoring work. Then of course, as I said, Martin Grenouilloux for his wonderful work on BPSW, and Ilya Shipitsin for help with portable.
Then there is one person called orbea, or however you pronounce that, who is working on Gentoo and has been extremely active with updating patches, pulling patches out of our ports tree and upstreaming stuff. It's very, very helpful to have people like that around. And finally, the OpenBSD Foundation, which sponsored a bulk build machine that allows me to do testing in ports, so that naddy and Stuart Henderson don't have to run into breakage, which makes bumping libcrypto a non-issue these days. That's all I have. Thanks for listening. Any questions? I'm sorry? No, what? Oh, I'm not that long-time a member of OpenBSD and I don't use the MagicPoint thing. Yeah, I should have. I need to write a LaTeX plugin for that. The question was: why is there no Comic Sans in my presentation? Other questions? So I guess we're right on time for the break, just about. So thank you very much for a great presentation, and here is the speaker gift. Awesome. It's a local specialty from Vienna. Thank you very much. It's a Sacher Torte, if you're curious, and I won't share it. Let's thank our speaker.