So, yeah, I wrote down a handful of topics that I was thinking we might be able to cover, and I don't know if other people have other issues that they want to look at. There are questions about the different key servers that we have available, how we might want to deal with key server synchronization, and maybe trying to introduce some idea of actual cryptographic validation on the key servers. I'm wondering whether there are particular OpenPGP workflow use cases that we want to streamline, if people have suggestions or ideas for how to do that; some example use cases are new key generation and key signing. I'm wondering if people can report on whatever new implementations they have. And then I'm also wondering how much we might want to consider diverging from upstream on some of these packages, in particular GPG and SKS. We don't diverge a whole lot from upstream, but we do somewhat, and some upstreams are more responsive than others to the concerns that we raise. So: do people have suggestions about changes to configuration that we might want? Do people have best practices that maybe Debian should recommend? There's a whole bunch of stuff that we can't possibly cover entirely in 45 minutes, but anyway, I wanted to open up the floor and see what people had. I might call on noodles if nobody's responsive.

Daniel and I have had a number of conversations offline. I'm actually reasonably against diverging from upstream where we can avoid it. I agree with that. We have a history of diverging from upstream on cryptographic projects that did not work well for us, so let's not do that where we can avoid it. I mean, in the conversations I've had with various people offline, we've been moving slowly to stronger keys.
And part of that is about having a bigger key size than 1024 bits in general, but it's also largely driven by the weaknesses found in the SHA-1 hash algorithm, which is what binds signatures to user IDs and subkeys by default in previous versions of GPG. What's become apparent over the last week, as I've done various digging and investigation, is that lots of us have stronger keys, 2K or bigger, that are actually bound using SHA-1, so we're not getting the benefit of the stronger hash. And the main cause of this seems to be that GPG, if it finds an existing self-signature on your key, will use the hash type of that self-signature for any new self-signature, rather than your preferences. So if you created a key with an old GPG, but made it a big key, 2K or larger, and hadn't set your cert-digest preferences in GPG, then you'd end up with a large key with a SHA-1 self-signature on it. If you subsequently change your preferences to SHA-256 or better, any new key you sign gets the benefit of that better digest, but your own key doesn't get the new hash. Sorry. Go on. So this is the default in GPG, and there has been discussion about it; around 2009, Daniel, there was a patch sent to the list that would use your cert-digest preferences instead. Upstream GPG said no, this is a surprising change, we don't want this to happen to people. And I think the argument we would make is: well, I changed my preferences to say I want to use strong fricking hashes, and the fact that you're not doing that is not the principle of least surprise. There's an argument to be had there, and I think that argument should be had with GPG upstream, rather than us just doing the right thing for our users and not really helping the rest of the ecosystem. If GPG have a good reason not to, then we'll listen to it, but I think they're wrong. I don't think they do have a good reason. So that's the sort of divergence that I was implying; nothing more risky than that.
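A minimal sketch of the weak-versus-strong digest distinction being discussed. The ID-to-name table comes from RFC 4880's hash-algorithm registry; treating MD5, SHA-1, and RIPEMD-160 as the weak ones follows the discussion above. The function names are ours, not any existing tool's.

```python
# RFC 4880, section 9.4: hash algorithm IDs used in OpenPGP signatures.
HASH_ALGOS = {
    1: "MD5",
    2: "SHA1",
    3: "RIPEMD160",
    8: "SHA256",
    9: "SHA384",
    10: "SHA512",
    11: "SHA224",
}

# Digests we would not want new self-sigs to rely on.
WEAK = {"MD5", "SHA1", "RIPEMD160"}

def hash_name(algo_id):
    """Translate an RFC 4880 hash algorithm ID into a readable name."""
    return HASH_ALGOS.get(algo_id, "unknown(%d)" % algo_id)

def is_weak(algo_id):
    """True if a self-sig made with this digest should be re-issued."""
    return hash_name(algo_id) in WEAK
```

So a key whose self-sig packet records digest algorithm 2 (SHA-1) would be flagged, while algorithm 8 (SHA-256) would not.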
I mean, I did some stats on this. I pulled out all of the insecure hash types from the Debian keyring, including Debian maintainers, Debian developers and Debian non-uploaders, and we have 97 keys that should have strong hashes and just don't. Including mine, I should point out; I'm in there, and I will be fixing that when I get back home and have access to my private key again.

If you look at the Debian keyring, we have what's called a reachable set. There are two terms in the PGP web of trust that are quite interesting. One is the strong set: strong means that from any key in the strong set you can get to any other key in the strong set, and any other key in strong can get to you. And then there's a weaker relation called reachable: reachable is all of the keys that can be reached from a key in strong. The standard Debian keyring has a reachable set size of 1,051. I can't actually tell you how many keys are in all of Debian, but it's not very many more than that; at most a few dozen more. So we are an extremely well connected set. And of those 1,051, 965 are actually in strong. So we're really well connected; in general, if someone signs your key, you tend to sign them back, and that gives us strong. Take away all the insecure hashes, and our strongly connected set size goes down to 154. That's nearly 10 times smaller. Reachable is still only 243, and that's partly because we actually only have a couple of hundred strong keys in the set; there are only a couple of hundred keys bigger than 1024 bits. It's actually notable that Debian maintainers are much better at that, with a much higher percentage, because a lot of them are newer keys, I guess; they're a newer group than Debian developers. So that's kind of why we care about this.
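The strong/reachable distinction above can be illustrated on a toy signature graph. Nodes are keys, an edge A → B means A has signed B; the strong set is a strongly connected component (computed here with Kosaraju's algorithm), and the reachable set is everything you can walk to from it. This is a sketch of the concept, not the tooling used for the real keyring stats.

```python
def sccs(graph):
    """Strongly connected components of {node: set(of signed nodes)}.

    Every node must appear as a key in the dict (possibly with an
    empty edge set). Kosaraju: DFS for finish order, then DFS over
    the reversed graph in reverse finish order.
    """
    order, seen = [], set()

    def dfs(node, g, out):
        # Iterative DFS appending nodes in post-order (finish order).
        seen.add(node)
        stack = [(node, iter(g.get(node, ())))]
        while stack:
            n, it = stack[-1]
            advanced = False
            for m in it:
                if m not in seen:
                    seen.add(m)
                    stack.append((m, iter(g.get(m, ()))))
                    advanced = True
                    break
            if not advanced:
                stack.pop()
                out.append(n)

    for n in graph:
        if n not in seen:
            dfs(n, graph, order)

    rev = {n: set() for n in graph}
    for n, ms in graph.items():
        for m in ms:
            rev.setdefault(m, set()).add(n)

    seen.clear()
    comps = []
    for n in reversed(order):
        if n not in seen:
            comp = []
            dfs(n, rev, comp)
            comps.append(set(comp))
    return comps

def strong_and_reachable(graph):
    """Largest SCC, plus everything reachable from it."""
    strong = max(sccs(graph), key=len)
    reach, frontier = set(strong), list(strong)
    while frontier:
        n = frontier.pop()
        for m in graph.get(n, ()):
            if m not in reach:
                reach.add(m)
                frontier.append(m)
    return strong, reach

# A, B, C sign in a cycle; D is signed by C but signs no one back,
# so D is reachable but not in the strong set.
g = {"A": {"B"}, "B": {"C"}, "C": {"A", "D"}, "D": set()}
strong, reach = strong_and_reachable(g)
```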
And I haven't really yet, but I'm going to blog about it at some point and put up the list of everyone whose key is affected by this. But there's actually no simple way to fix it at the moment. Technically, I know what needs to be done, but GPG doesn't have an interface that lets you do the necessary re-signing of all of your self-sigs. In particular, for subkeys, there's nothing you can do. For a self-sig on a UID, you can go in, delete your self-sig, and then create a new one; because GPG no longer sees any self-sig, it will create one honoring your cert-digest preferences. But for subkeys, there's no interface I can see. And even doing that for primary keys is a colossal pain. If your key has any number of signatures on it, the process of deleting your own self-sig is incredibly tedious and weird because of the GPG user interface. You have to go in and say delete signature, and specify which user ID you want to delete a signature off of, so you have to do this for each user ID. And when you say delete signature, it walks you through every single signature on that user ID and asks you: is this the one you want to delete? Is that the one you want to delete?

I'm not sure; I just wanted to ask, if you then confirm that you want to delete a signature, does it continue, or does it start again from the beginning so you have to redo all that? Yeah, I don't remember. I did it once, and that was it, and I never want to do it again.

I think it's worth mentioning Tom's tool. Before I mention that, I wanted to ask about the permanence of pushing keys to the key servers. If I have a weak self-sig that's on the key servers, and I change it and re-push my key, does that actually overwrite my weak self-sig? So the GPG model says that for a primary key, you're trying to attach some piece of data to the primary key.
For a signature over a user ID, you're attaching that user ID to the primary key: you're saying this user ID belongs to this primary key. And that certification is made by one issuer. So if you as an issuer make a new self-sig, the date on the new self-sig will be larger than the old date. And the rule that RFC 4880 gives is that we only look at the most recent certification from any given issuer over any given content. So the old self-sig will be up there, and the new self-sig will be up there, and the new self-sig will have a newer date, so compliant implementations should respect the new signature.

Now as a contrast: if somebody signed my key with a weak signature, and I uploaded that because I was careless, and then I convinced them to re-sign my key with a stronger signature, once the weak one is on the key servers, can I not delete it? There's nothing wrong with having a weak signature on the key servers; it's just that at some point we are going to want to not rely on weak digests. For example, right now MD5 is considered a bad idea for signatures, and so we would like to say: I don't want to rely on any certifications made over MD5. So yes, there could be a certification made over MD5 on your key. It's not hurting your key that it's there; it's just that you wouldn't want anyone to rely on it, so it's as though there's nothing there. At some point, if SHA-1 becomes significantly more broken, we may want to try to deprecate SHA-1. There are some significant problems with doing that deprecation. But the point here is to prepare ourselves, so that at the point that SHA-1 is actually broken in practical ways, we can move forward without relying on that digest.

OK. The other pointer is that I sent a mail to debconf-discuss last night with a link to my KSP utility, which basically just annotates GPG's key listing with the strength of the signatures.
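The RFC 4880 supersession rule described above, only the most recent certification from a given issuer counts, can be sketched as a small filter. Signatures are modeled here as plain (issuer, created, digest) tuples; the field names are ours, not an OpenPGP parser's.

```python
def effective_sigs(sigs):
    """Keep only the newest signature per issuer (RFC 4880 rule).

    sigs is an iterable of (issuer, created, digest) tuples; any
    older certification from the same issuer is superseded.
    """
    newest = {}
    for issuer, created, digest in sigs:
        if issuer not in newest or created > newest[issuer][1]:
            newest[issuer] = (issuer, created, digest)
    return sorted(newest.values())

sigs = [
    ("alice", 2009, "SHA1"),    # old SHA-1 self-sig, superseded
    ("alice", 2014, "SHA256"),  # newer self-sig, this one wins
    ("bob",   2011, "SHA256"),
]
result = effective_sigs(sigs)
```

Both of alice's self-sigs stay on the key servers; a compliant implementation simply ignores the 2009 one.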
That utility is a fairly hacky Python script and probably wants to be rewritten with libnettle. Is that the one? I haven't had a chance to look at your tool, so I'm not sure what it does; I couldn't tell you, actually. Sorry. OK, well, it probably wants to be rewritten with that. But anyway, look at it, look at the documentation, ask me questions about it. One of the things that's missing, which I put in Gobby, is that I'd like to do a better how-to on how to fix signatures. I hadn't really considered how to fix self-signatures, but that should also be part of it.

That makes me want to ask about changes in the keybox format, and to what degree that's going to be something we need to worry about. So the keybox format is a change in the way that GPG is going to be storing keys. I probably shouldn't be the one to talk about it, because I haven't really even experimented with it much. But it will not affect the nature of the OpenPGP packets; it's just going to affect the way they're stored on disk and how GPG accesses them. So I don't think we need to worry about it for this particular discussion. As long as what you're working with is the output of gpg --export, it doesn't matter how GPG stores it internally. OK, that's a great clarification.

It looks like people are adding a whole bunch of stuff up here. One of the questions that was just asked was: how do you tell what digest is used in a given certification? The answer is that you do gpg --export with whatever options you need to make sure you're exporting the right thing, and then you pipe that output into a program that can parse the OpenPGP packets and print them out. One such program is GPG itself: you can say gpg --export whatever, piped into gpg --list-packets. But gpg --list-packets lists only the number that identifies the digest, which you then have to go look up in RFC 4880. So that's a pain in the ass.
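Looking the number up by hand can of course be scripted. A minimal sketch: scan `gpg --list-packets`-style output for the `digest algo N` lines and translate the IDs using the RFC 4880 table. The sample text below only mimics the shape of the real output, which carries more fields per line.

```python
import re

# RFC 4880 hash algorithm IDs -> human-readable names.
HASH_NAMES = {1: "MD5", 2: "SHA1", 3: "RIPEMD160", 8: "SHA256",
              9: "SHA384", 10: "SHA512", 11: "SHA224"}

def digest_algos(listing):
    """Yield a digest name for each 'digest algo N' in the listing."""
    for m in re.finditer(r"digest algo (\d+)", listing):
        algo = int(m.group(1))
        yield HASH_NAMES.get(algo, "unknown(%d)" % algo)

# Shaped like --list-packets output, but a made-up sample:
sample = ("\tdigest algo 2, begin of digest 3e 8f\n"
          "\tdigest algo 8, begin of digest a1 b2\n")
names = list(digest_algos(sample))
```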
But you can also install a tool called pgpdump, and pgpdump will actually print the human-readable name of the digest. Or Tom's KSP sig script. So we have multiple choices here. That's also a thing where I think upstream is actually unreasonable: they say that it's something for developers, so there shouldn't be a readable digest name there, because you really want to see the number and look it up in an include file. That said, the new release on the master branch has changed so that --with-colons also prints the digest type. OK. As a number, of course. OK, so that's interesting, and might be useful for people who are parsing the output of GPG.

I think one of the things that we're dancing around here is the fact that GPG does not offer a programmatic interface that's capable of doing the kind of work we're talking about. And so in some sense we have to acknowledge that we're struggling right now because we don't have enough implementations; we don't have an implementation that we can manipulate in this way. There are a couple of other implementations, but I think, as a community, if we're interested in seeing OpenPGP be something that we can use and that we can drive this kind of change in, we do need to think about having an implementation that gives us a programmatic interface beyond GPGME. So I don't know, is anybody working on those things? Noodles?

So you're not strictly speaking correct, in that there is a library. We did discuss the other thing: there's NetPGP, which was originally sponsored by Nominet, the UK domain registry, and written by Ben Laurie. It's got a Git repo up on, I can't remember if it's GitHub or Gitorious; it's in NetBSD. But the issue with it is that it links against OpenSSL for all of its crypto, and therefore it's kind of an issue if you want to talk to it from GPL code.
And there is an RFP for OpenPGP Lib, which is related to NetPGP; I don't remember which one of them diverged from the other, but neither of them is particularly active. That RFP is from me, and when I looked into it and realized all the licensing issues I was going to get into, I backed off of packaging it for Debian, because I didn't want to introduce still more dependencies on the OpenSSL license.

So most of the stuff that I've seen actually just calls GPG directly. If you look at any of the Perl bits and pieces, any of the Python interfaces, any of the hacky scripts that I and others have written, they call GPG and attempt to parse its output. The --with-colons interface from GPG at least gives you something that you can hopefully parse more easily, but it doesn't necessarily always give you the information you want.

What Daniel's talking about is that I'm actually the author of a key server. I think these days there are pretty much two key servers in use. By far and away the most popular is a key server called SKS. It started out as a very interesting set of papers on synchronizing sets of numbers, which were turned into a very practical application: a key server network that actually makes sure all of the key servers have all of the keys. I think if you looked at it, 99% of the key servers in the world use that. Unfortunately, it's written in OCaml, and while I'm sure that's a very nice language, and I try not to remember what I knew of OCaml, it's not very easy to pull it out into a separate library, and that's certainly not what they've written their code to do. It doesn't do any cryptographic verification; it's very much key server code. The other is the key server I wrote, which is called onak. It's written in C. It's designed to be a key server, so again it suffers some of the same problems of doing key server related things rather than more generic ones.
But one of the things I've been looking at doing is putting more cryptography into the key server, actually making it so the key server would verify packets and only accept keys that it could verify. To that extent, I've done some of the hashing work and some of the extra parsing work on packets. What we discussed offline was that maybe we could pull some of that code out into, I guess, an LGPL library or something, using the existing Nettle and Hogweed libraries to do the crypto work, adding the PGP functionality on top, and giving us something that we could build a set of generic tools against. I think there are two advantages to that. One is we get a library that's designed for other programs to talk to, hopefully with a C interface that anyone can link into and that we can build scripting-language bindings to. The other is we hopefully get a reasonably comprehensive alternative implementation of PGP, something that we can interoperate against and use to drive innovation, I guess.

Right. One of the big challenges, when I looked at NetPGP and OpenPGP Lib, was that they were shooting to create a tool that could be a drop-in replacement for GPG, which is a Sisyphean task if I've ever heard one; in terms of complicated interfaces to re-implement, I think we can't.

To give you an idea of the sort of thing I've already got: I wrote about a two-dozen-line program that will read a set of keys in, walk through the keys, strip out all of the insecure signatures, and write the keys back out. It's written in C, and it links against the stuff that I've already got, so it was actually just a matter of traversing linked lists. If there were bindings to Perl, it would probably be about three lines of absolute line noise that would do just what you expect. So that's where I'm coming from.
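A rough scripting-language analogue of the signature-stripping program just described. We don't have the C library it links against, so signatures are modeled as plain dicts here; a real version would operate on parsed OpenPGP packets via such bindings.

```python
WEAK_DIGESTS = {"MD5", "SHA1", "RIPEMD160"}

def strip_insecure(key):
    """Return a copy of the key without weak-digest signatures.

    The key is a toy model: a dict with a fingerprint and a list of
    signature dicts, each carrying its issuer and digest name.
    """
    kept = [s for s in key["sigs"] if s["digest"] not in WEAK_DIGESTS]
    return {"fingerprint": key["fingerprint"], "sigs": kept}

key = {"fingerprint": "XXXX", "sigs": [
    {"issuer": "alice", "digest": "SHA1"},
    {"issuer": "bob", "digest": "SHA256"},
]}
cleaned = strip_insecure(key)  # keeps only bob's SHA-256 signature
```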
Because it's my code and I'm familiar with it, I throw things together with it to play with PGP; if we had sensible bindings for a scripting language, I think that would be very accessible for other people too. Well, hang on, the idea of your sig-checking thing is easy with this: you just load the key in and ask, well, what is the sig? You wouldn't have to worry about parsing the output or doing the evilness or whatever.

Slightly off topic: DKG, could you go into Gobby quickly? Oh, okay. Never mind. [Some cross-talk about the shared Gobby session crashing.]

Well, so what do we need to do to get that LGPL library pulled out, so we can start doing programmatic stuff instead of parsing colon-delimited output? I put the code, I mean, the code's been in a public bzr repo for years, but I put it up on Gitorious today as onak. So that's the full key server. There's some thought needed about whether the interface is appropriate, and about actually pulling it out into a separate library, but the code's up there and you can have a look at it. It's all GPL licensed at the moment. I think I wrote pretty much all of the bits that we'd want for the library, and I'm happy to do the re-licensing; I think LGPL is a better thing. I'd rather see this widely used than tied down, so I'm happy to re-license the bits I wrote as LGPL. So we need to decide what we need to pull out and what makes sense.
And I think there's probably an initial set of core functions we pull out to start with, and then more we pull out over time as we discover what we need and see how it works. So feedback from people who've written these sorts of tools, about the sort of services they need, would be a good start. I've never written bindings for Python or Perl, so if anyone's done that and has thoughts about the best way to make the interface look, that's probably a good thing to think about from the get-go as well. I think if we can actually provide a set of bindings for Python and Perl that let people just play with these things and do some validation, that will really give us quite a lot of interesting tools.

That was a licensing question: are you thinking about LGPL v3 or later? And the answer was a shrug, I think. So my experience is that if you give people a C library, the Python people will make Python bindings and the Perl people will make Perl bindings, and it's really not a problem. No, onak is not a library; onak is the key server, and the code is currently in the key server, so there's a process of extraction that's going to need to happen to pull out something that seems like a sensible library interface. Oh, yeah: O-N-A-K.

So, noodles, here's a question. Do you have any sort of regression test suite for onak, or is that something that we should start to build up, especially as we refactor the code? There is a very, very minimal test suite that tests adding a key, removing a key, and looking up a key, via a key server that then walks through various code paths. It's not going to be suitable for a library, so there's certainly work that can be done there. It's very minimal, and it does actually find issues, but it's about the key server interface; any testing of the actual PGP code paths goes through the key server interface, rather than being unit tests of individual C files.
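To make the test-suite point concrete, here is a sketch of the sort of unit test a broken-out library could carry alongside onak's existing end-to-end key server tests. The `KeyStore` class is a stand-in of our own invention, not onak's actual interface.

```python
import unittest

class KeyStore:
    """Toy in-memory key store standing in for a library interface."""
    def __init__(self):
        self._keys = {}

    def put(self, fingerprint, key):
        self._keys[fingerprint] = key

    def get(self, fingerprint):
        return self._keys.get(fingerprint)

    def delete(self, fingerprint):
        return self._keys.pop(fingerprint, None) is not None

class KeyStoreTest(unittest.TestCase):
    """Add / lookup / remove, the operations the current suite covers."""
    def test_add_lookup_remove(self):
        store = KeyStore()
        store.put("ABCD", b"key material")
        self.assertEqual(store.get("ABCD"), b"key material")
        self.assertTrue(store.delete("ABCD"))
        self.assertIsNone(store.get("ABCD"))
        self.assertFalse(store.delete("ABCD"))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(KeyStoreTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The difference from the current situation is that each C file's behavior would get a direct test like this, instead of only being exercised through the key server.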
So, it sounds to me like we actually have... I'm also happy to commit to helping work on breaking out the interface, and maybe even doing the Perl bindings, if that's useful. It sounds like we have a decent plan for actually having some tools available to us, thanks to your work. So I'm wondering if we want to move on and maybe try to talk about what use cases we'd like to streamline, or whether there are other potential topics people want to cover.

I have an issue with mass key signing parties. Okay. I actually also have issues with mass key signing parties. I don't know what the answer is, but once you get above a certain number of people who don't really know each other, they just don't seem to scale. I find there's an issue with a hugely varying degree of knowledge in the room. You get people who are quite happy to do the signature verification checks, know what they're talking about, and are really just using it as a way to meet people, knowing they won't bother asking for a fingerprint exchange. And you get people who probably should have their hands held a bit more, and should have some sort of introduction at the start that says: this is why we do this, this is what's going on. But I don't know how you deal with that, and I don't know how you deal with 100 people turning up and wanting to cross-sign keys. Splitting out into smaller parties helps in some ways, but it's always a free-for-all. None of them are perfect, and I'd love to... I think the problem at the moment is that by the end of the key signing party, you could wander in, and everyone is knackered, and they'll go sign, sign, sign, sign, sign.
So one way you could reverse that situation is: as soon as someone decides not to sign someone's key, they should shout about it, shout out the number of the key they're not going to sign, and then the person next to them has to decide whether they're going to sign it. That would actually raise the bar, because then it's basically the standards of the most twitchy person in the room that end up being applied, rather than the least twitchy.

Another approach that I've taken, and I regret that I didn't make this announcement just before the latest mass key signing, is that I've made it clear at different times that I reserve the right to put a fake key into any key signing party that I attend. If those keys ever receive signatures, I will publish those signatures and identify the people who signed the fake keys, just to let people know that there are some fake keys in the key signing party. And I think just that level of paranoia, the possibility of being publicly embarrassed if you didn't actually do your checking, is useful. I'm not talking about trying to fake a government-issued ID or anything, just that there are some keys in there that are bad.

So you just missed the announcement, and you did do the fake key thing, right? I won't say. I make a key under a different name, yep. It's a real key and it's present on the key servers, but no one is claiming it at the party, or I certainly am not claiming it at the party. There were a number of keys in the key server list from people who had to leave before the key signing happened, people who didn't bother to show up, people who were out getting beer, and there may also have been some fake keys as well. So if you're just going through the list from a key signing, signing all the way down like Phil was suggesting, that's probably a bad idea.
And yeah, maybe we should try to make that kind of scenario more visible, more regularly. I'm kind of torn about that, because I think for us it's great to raise awareness, but with the idea of trying to get the FreedomBox to be a vehicle for expanding the web of trust, even though I know you don't like that term: in the extreme, we want grandma to be able to manage her key material, and if we attach such a huge amount of stigma to how you participate, I think it will just frighten people. But the concern that's being raised here is about mass key signing parties, not about key signing in general. And it's worth pointing out that there are other ways to encourage people to sign keys that don't involve mass key signings, and that still might make key signing easier. So, thank you.

So for example, one of the use cases that I would like to see streamlined, and some work has been done on this, but again not enough: we all walk around these days with devices with high-resolution displays, and I've said this before to many of you individually, but there's no reason for humans to have to repeat long hexadecimal strings to one another anymore. For a one-to-one key signing, we have the possibility of streamlined approaches where I show you a high-density barcode, you take a picture of it with your device, I've got my trustworthy device, you've got your trustworthy device (note that I'm not saying telephone), and I can transmit a chunk of data about the size of an OpenPGP fingerprint very easily that way. So there's no reason that I need to be able to remember D217-399 or anything five times longer than that. We ought to have those tools, and those tools ought to be built into our contact managers.
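The device-to-device exchange just described is small in data terms: an OpenPGP v4 fingerprint is 160 bits, 20 bytes, which any high-density barcode can carry. A minimal sketch of the receiving side, packing a hex fingerprint into a barcode payload and comparing what was scanned against the key fetched from a key server; the fingerprint below is made up, and the function names are ours.

```python
import binascii
import hmac

def fingerprint_to_payload(fpr_hex):
    """Pack a spaced hex fingerprint into raw bytes for a barcode."""
    return binascii.unhexlify(fpr_hex.replace(" ", ""))

def matches(scanned_payload, fetched_fpr_hex):
    """Compare scanned payload against a fetched key's fingerprint.

    hmac.compare_digest avoids leaking where the mismatch is via
    timing; overkill here, but a good habit for comparisons.
    """
    return hmac.compare_digest(scanned_payload,
                               fingerprint_to_payload(fetched_fpr_hex))

# A made-up v4 fingerprint: 40 hex digits, i.e. 20 bytes.
fpr = "0123 4567 89AB CDEF 0123 4567 89AB CDEF 0123 4567"
payload = fingerprint_to_payload(fpr)
```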
Whatever contact manager the different tools use, we should be building that stuff in, and I think that will improve key signing uptake more than any number of large key signing parties. And I think it will mean that the signatures that are made are more actually legitimate signatures, of the kind "I actually met this person", not "I was in a room with 100 people, one of whom I think had that passport, which I think I checked sufficiently."

Just to be clear, are you suggesting that if there were an Android app to do that, we might not be able to trust the entire stack? Well, if it's your phone, then maybe you can, but most people don't really own their phones. But I don't want to diverge the discussion off into that territory.

I think one of the ways you can mitigate a mass key signing party is to actually have key signing hygiene training as part of the early part of the party. Maybe it's old news for some people, but for a lot of people it's going to take a long time before good key signing practices become second nature, if ever. So continually refreshing and re-educating people about that kind of thing, I think, could really make a big difference. Yeah, and not just re-educating, but educating for the first time: half of the people at key signing parties are new. We encourage new people, before they enter Debian, in fact we require them, to get signatures. So coming to key signing parties is a step for them to join Debian, and they may have no idea about security and whatnot.

Right. Much as I dislike key signing parties, I do think they serve a great use in terms of making sure we have a strong web of trust, meaning that if some keys fall out, we are still well connected. I just don't think they're perfect, and I'm not quite sure how to make them better.
I fully agree that some training at the start, even for those people who may think they know it all, is a good idea and a good refresher. I think part of it is that it's a scaling problem once you get that many people.

I'm not an expert at all, and part of what I had trouble with is that there aren't a lot of guidelines, either provided by Debian or out in the community, that help you decide on certain things. An example I still struggle with: how many UIDs should you have? What's considered best practice? What kinds of tradeoffs are there? And what you brought up, about how to make sure you don't use SHA-1: I still don't know how to do that, to be honest.

Yeah. So there are a couple of folks who have worked on writing up OpenPGP best-practices documents, and, did you send the email to the list that linked to the RiseUp page? Yeah, and that was excellent, but it's really just a start. No, no, I mean I've contributed to it, but it wasn't written by me. It was written by a range of people, actually: RiseUp.net, activists interested in security technologies. So yeah, making sure that we link prominently to that kind of documentation, that we maintain it, and that we try to improve it as we understand new best practices, would be good, I think.

So, an experience from CAcert, and let's not get into whether that's a good idea or not: at some point they had this whole thing of trying to evaluate the level of trust, and they noticed that people were assuring other people without checking anything or doing anything. So they put in a test to become an assurer: you have to pass a test. The idea of the test itself is also dubious, but at least you were sure that people had read the documents; they had a best-practices document that you should read. And in Debian, we don't have something like that at all. We use PGP in all of our day-to-day work, but we don't have something in a Debian place.
Are you suggesting that something like the tasks and skills document would contain some idea about key management? The tasks and skills document could just refer to a central Debian place about best practices, like the developer's reference, maybe. There are a few questions about PGP handling as part of the New Maintainer process. In terms of generating a key without SHA-1, there's Ana's guide, which she blogged about at the time and which is actually linked from keyring.debian.org. It's not the only thing out there, but it is a pretty good guide: you need to do these steps, this is what you need to do to get the key out, and this is what you do with it. So if you're generating a new key, and if you don't have a key that's 2K or larger, please go and generate one and start getting signatures on it. Yeah, and now is a very good time to do that; you've got a lot of people here. Yeah, I think we're going to get a lot more active about trying to get that done. I think there probably is a space for Debian to have: these are our best practices and guides for them.

Is someone doing the undo? I hear this latest version of Gobby has undo. I hope so.

From a personal point of view, best-practice procedures are often a very personal thing: put two people who know enough about security to be dangerous in a room, and you'll get three opinions out, and at least one of them will be insecure. And it's not just that people disagree; best practices might actually be different for different people in different situations. In the context of a private company, I have a consultancy, and we use GPG internally, but we have to manage it differently. It's more like a CA model, where we have one person who signs the keys and also holds revocation rights on each of the keys. That wouldn't be appropriate, I don't think, in a Debian context. Right, but the standard gives you the flexibility to implement that different workflow.
I am interested in talking about Monkeysphere and how it integrates with SSH. So at the moment it integrates via the same kinds of interfaces we were talking about: we're using the --with-colons output of GPG to extract the information we need. Does it interface with the new certificate facility? It does not. OpenSSH grew its own certificate format because, for some reason, they didn't like the X.509 model, which was already built into OpenSSL, which they're already using. They said: we don't want the increased attack surface of the ASN.1 parser. And it turns out that was a good idea; in the last year there have been serious bugs found in the ASN.1 parser. But they basically said: the existing certificate models are too complicated, we don't want to use the complicated models, so there's a new type of certificate that we're making up right now, and it belongs specifically to OpenSSH. And so as a result, there's no way to use the same certification scheme across different crypto systems. So the OpenPGP stuff doesn't hook into that, because it can't: the cryptographic routines are slightly different, and the way they're composed is slightly different. From an OpenSSH perspective, I can see why you would do that, but from the perspective of not making every different application protocol figure out a new way to certify all of their users, it doesn't do you much good. As an admin, you're now deploying six different mechanisms, each vulnerable to its own bugs, instead of one mechanism that you can try to improve.
So I wanted to mention: I was digging through some old emails, trying to answer a question that noodles had about the self-sig digest updates, and I found an old email that I had written that referred to the idea of an OpenPGP lint. I think best-practices documents are good, but it would be nice to be able to just run a piece of code that looks at your key and says: hey, you might be interested in cleaning these up. Like the car insurance reminder, yes, exactly. So I don't have a good set of use cases for what things a hypothetical OpenPGP lint would cover, but that might be something that would also help us break out the library interface, in terms of having a separate tool. And if people want to write up suggestions, maybe add them to the Gobby document: things that you would want to see checked for that you don't know how to check for, or that you think are currently too tedious to check for by hand, which is pretty much anything. I welcome suggestions on that front.

I think our time is up. Yeah. I'm glad to see so many people here for a topic that's pretty arcane, but also pretty fundamental to the mechanisms that we use. So thank you all.
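As an appendix to the lint idea floated above: a minimal sketch of what such a tool's core loop might look like, a pile of small checks run over key metadata, each returning a warning or nothing. The key dict is a toy model of our own, not a real parser's output, and the two checks shown mirror the session's themes (key size and self-sig digest).

```python
WEAK_DIGESTS = {"MD5", "SHA1"}

def check_key_size(key):
    """Warn on keys smaller than the 2K floor discussed above."""
    if key["bits"] < 2048:
        return "key is only %d bits; consider 2048 or larger" % key["bits"]

def check_selfsig_digest(key):
    """Warn when the self-sig still uses a weak digest."""
    if key["selfsig_digest"] in WEAK_DIGESTS:
        return "self-sig uses %s; re-sign with a stronger digest" % (
            key["selfsig_digest"])

def lint(key, checks=(check_key_size, check_selfsig_digest)):
    """Run every check; collect the non-empty warnings."""
    return [w for w in (c(key) for c in checks) if w]

# A big key bound by a SHA-1 self-sig: exactly the case from the talk.
key = {"bits": 4096, "selfsig_digest": "SHA1"}
warnings = lint(key)
```

New checks just get appended to the tuple, which is also roughly how suggestions collected in the shared document could be folded in over time.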