Welcome to the Debian GnuPG packaging BoF at DebConf15. My hope is that folks who are here today are interested in the health of GnuPG, and of GnuPG in Debian. Whether you're explicitly a member of the team or not, we want to encourage participation. So Eric and I wrote up an outline of things we could do. If it turns out that we can breeze through this outline, I'd be happy to break out into small work groups if folks are interested in actually doing some of the concrete work that we have ahead, and we can talk about that. If anybody has questions or whatever, just speak up at any time — this should not be a one-directional operation. Yeah, and this list may not be complete, so if you have ideas for things we should be doing, we definitely want to hear those too. So, concrete next steps that we have for the packaging. We plan to cut over /usr/bin/gpg. We'll do this in experimental first, so that /usr/bin/gpg will come from the gnupg2 package. Oh, sorry — in case you don't know, GnuPG 2.1 is in unstable right now, and we will continue to maintain that. So we plan, in experimental, to make the gnupg2 packages — which are GnuPG 2.1 — provide /usr/bin/gpg, and to move /usr/bin/gpg in the gnupg package to /usr/bin/gpg1. That's going to be in experimental, and we're hoping to put a call out for people to try it once we've got it up there. It's just a simple cutover of the names. For the debian-installer work, we want to move the udebs to be created from the gnupg2 source package instead of the GnuPG 1 package. Creation of the udebs is a little bit of a tricky thing, and I'd be happy to talk with people who have debian-installer experience or multiple-build package experience.
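The cutover described above can be sketched as debian/control stanzas, following the usual transitional-dummy-package convention. This is purely illustrative — the descriptions and the exact layout are a sketch of one of the options under discussion, not the agreed packaging:

```
# Sketch only — binary packages built from the gnupg2 source
# package after the cutover in experimental:

Package: gnupg
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: GNU privacy guard - a free PGP replacement
 Built from the GnuPG 2.1 branch; ships /usr/bin/gpg.

Package: gnupg2
Architecture: all
Section: oldlibs
Depends: gnupg, ${misc:Depends}
Description: transitional dummy package for gnupg
 This transitional package can be safely removed.

# ...while the gnupg (1.4) source package would ship /usr/bin/gpg1
# in a renamed binary package (gnupg1 or gnupg-legacy, per the
# naming discussion in this session).
```

The `Section: oldlibs` dummy package is the standard Debian pattern for pointing users of the old name at the new one.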
If you have that experience, or if you know somebody who has it — when I say multiple build, I mean one source package that you build in several different ways, because the udebs we're building are built separately, possibly with different configurations. So if you know how to do that, or you know of other packages that are like that, I'd be happy to hear about them so we can look at the different options. We also need to be writing better NEWS files instead of just announcing stuff at DebConf. So if anyone wants to step in and do a straightforward documentation task — straightforward, ha ha, it's writing for humans — we welcome contributions to NEWS.Debian, so that people who are following their package updates can see them. For example, the issue of switching over from traditional keyring files to the keybox format may throw some people for a loop if all of a sudden they're looking at different files. I'd also like to update the GnuPG 1.4 packaging to a minimized debian/rules. I think that can probably wait until we've done the cutover and the udeb move and all of that, but if we can minimize debian/rules for GnuPG 1.4, it should make it easier to maintain going forward. So those are the specific plans that we can complete, and they raise a bunch of open questions, which you can see up there. One is just a logistical question about package naming. The source packages are currently gnupg and gnupg2; the binary packages are gnupg and gnupg2. Do we change the names of the source packages? Do we change the names of the binary packages? There's also the gnupg-agent binary package, which comes from the gnupg2 source, but the program is invoked as gpg-agent. So there are a bunch of ongoing naming questions. I'm not up for making huge changes, but I'd like to figure out how we're going to do the names. Does anyone have a preference for the naming switch?
This is kind of a meaningless question — we just have to decide which way to go. So here are the proposals we've come up with. One is that we keep the source packages named exactly as they are: the gnupg source package now produces a gnupg1 binary package, and the gnupg2 source package produces the gnupg binary package, plus a dummy gnupg2 package that just points to gnupg. That's one option. The second option is that we change the names of the source packages to gnupg1 and gnupg, and the binaries follow from the source package names. And the third option is that we change only the name of the source package for GnuPG 1.4, so it's now gnupg1. I don't know that it matters, but we have to decide. Preferences? There's also the question of what the specific use case is — beyond just being conservative — for retaining GnuPG 1.4 after we've made the switch. We do have a specific reason, and that reason is people — sorry, Marco, if you watch this — Marco d'Itri is a great example of someone who has secret key material associated with old-style crypto, old-style certificates that we know to be cryptographically not very useful, but he still uses them to interact with systems that he cannot update. And if we take away the 1.4 package from him, he's going to be stuck. Now, you could say Marco d'Itri is someone who can build his own GnuPG, and that's probably true — and I don't want to see GnuPG 1 on the computer of anybody who doesn't have this kind of specific need — but there are people who do need to interact with older systems that they can't fix on the other side. So that's why we'd keep it. Then there's another case: if you have old data, it could be PGP2-encrypted, and then you need GnuPG 1.
Right — in 2.1 there's no support anymore for old keys or old data. That's a reason; some people do have this old data. But that would indicate to me that the more appropriate package name would be something like gnupg-legacy, to indicate that. Otherwise it sounds like the classic line about Apache — 2.2 and 2.4 are both great, but if I need something specific, I'd pick 2.4. I like that. So the binary package would be gnupg-legacy. Or gnupg-obsolete. For the rebootstrap work, I would like the gpg package to keep being built from the gnupg source package. When you do the switchover, I need to switch over the packages in the bootstrap process as well — but if you rename the source packages at the same time, then I have exactly nothing to do, because renaming them changes which source packages get built. OK. So all of these source packages are going to build multiple binary packages, and obviously some of those binary packages will not have the same name as the source package. What I mean is, the property that's desirable is that the gnupg source package continues building the gnupg binary package. That's a desirable property for rebootstrap. Not very important, but if it can be established, that's nice. OK, I have no objection to that. Are there issues with changing the name of a source package that nobody knows about, from the archive's perspective? I don't think there's any issue from the archive's perspective, but if we change the name of the package, there could be issues — you need to reassign all the bugs. Sorry? You need to reassign all the bugs. Yeah. Also, if we change the source package name, do we change the git repository name? Will that be confusing as well? To be consistent, that would be good. But it would also be confusing, because names that used to point at one thing would suddenly point at another — they're not in the same repository right now.
So the thing that was pointing at 1.4 would suddenly point at 2.1, which would probably break people's checkouts or clones or stuff like that. Maybe that's OK, but it might be a bit of a pain to switch over. So the renaming would be good for rebootstrap? It would be good, but it's not a strong argument. OK. You're making me worried about breaking things when we rename repos. So I'm kind of inclined to leave the source package names the same — we can always change the source package names later. OK, yeah, I don't think it should be a high priority to fix it. Can I ask — I'm sorry, you're Moritz, right? Hello. When you're saying gpg1 should be gpg-legacy, is that the package name? Or do you think it should be /usr/bin/gpg-legacy? I think the binary could possibly retain the old name — I don't have a strong opinion there. I mean, if people have wired up scripts or something like that, they would probably need to adapt anyway. But if it's just the Marco use case, then you could probably also rename it. Well, if they have wired up scripts that rely on /usr/bin/gpg, those are going to have to call gpg1 — or do you have a preference? You'd rather call it gpg1? The thing that should be annoying to type is installing the package — that should probably be gnupg-legacy — but the invocation should be convenient. Yeah, although there is an inconvenience in saying "I'm going to install gnupg-legacy" when the binary is gpg1. Of course, the current package is called gnupg and the binary is gpg anyway, so we already have that inconsistency. There's also a little bit of a problem that the diagnostics wouldn't show up as gpg-legacy, but as gpg1. Right, maybe as gpg1. I mean, I agree with this — but once this cutover has happened, no new installs are actually getting GnuPG 1. So is this really about the naming?
Is it necessary, or even desirable, for the package name to be different from the binary name? Already no one's going to install it unless they explicitly ask for it, right? At the end of this, there'd be no dependencies on it anymore in the archive, as far as I can think of — I hope there shouldn't be any. There's no way people can get this unless they install it intentionally, and we can definitely put in the description: you don't want to use this package unless you have these use cases. So yeah, making the package name inconsistent with the binary name is a little bit undesirable, but I see your point. OK, so: discouraged. So I'm hearing that people might be OK if we have the package named gnupg-legacy and the binary as /usr/bin/gpg1. We might change the diagnostic string as well — the string it emits when it reports on files or whatever — so we can distinguish it from a gpg diagnostic. We'll probably leave the source package name alone. Is that OK? Cool. I think we can't change the diagnostic string, because scripts rely on it — I'm pretty sure. Oh, really? Yes. But those scripts are going to have to change to call gpg1 anyway. There are too many scripts out there. Those scripts are broken. Yeah, they're broken — I actually don't care about breaking those scripts. If someone's parsing human-readable output... well, if my locale changes, the script breaks anyway. Yeah. I'd like to say: if your scripts are parsing human-readable output, that doesn't make any sense. Yeah, that's the one I also always think of. In the /bin/sh transition from bash to dash, some people continued to use bash by bind-mounting it onto /bin/sh. Some users might do the same for gpg. You could bind-mount a non-directory file? Yes, you can bind-mount non-directory files. OK.
In that case, they can do that. We've got some more little pieces, right? Yeah, that's kind of our problem. So, let's move on to other open questions. Keyserver access: at the moment, we're defaulting to keys.gnupg.net, which is a CNAME to pool.sks-keyservers.net. The pool itself also has an hkps variant. So — do people know about hkps? You're accessing the keyserver over a TLS link. But pools with TLS are a little bit strange, because you need to deal with the certificate identity. The person who coordinates the pool is Kristian Fiskerstrand, and he runs a certificate authority that's specific to the pool. So we could ship the certificate for his CA and do something like make the default an hkps query, so that people's queries are not sent in the clear to the keyservers — as a privacy-preserving measure. That's something we could consider doing, shifting away from the default that GnuPG has. Do people have thoughts or concerns or questions about that? The hkps pool is active; I use it regularly and I haven't noticed any problems with it. Doesn't May First run a keyserver? That's just one keyserver — actually, it's in the pool, and there's also the May First keyserver name itself, so depending on how people access it, it provides a different certificate. How do we ship their CA, and how do we handle rotations of the CA? I have a general dislike of adding more CAs to the ca-certificates package. There's no reason anyone should rely on Kristian Fiskerstrand's CA — and he probably wouldn't want you to rely on it — for anything other than pool.sks-keyservers.net. So I don't know what the right policy is. Do we just ship it in the GnuPG packages?
If we ship it as a separate package, we can easily change it if it needs to be updated for whatever reason. Yeah. Well, if some keyserver drops out of the rotation, its certificate should be revoked — how do we handle that? There should be CRLs or OCSP or something, but I don't want GnuPG to have to deal with that. No, we don't handle that. So it sounds like we should talk to Kristian. What is he using — is he using a library to do this, or is it doing it by itself? Yeah, sure, that's a good question. In the old GnuPG the keyserver access was in gpg itself, but I don't think that makes sense anymore, because it should be a property of dirmngr, which does all the network access — I'm trying to remove this from gpg everywhere. It is in dirmngr now. Yeah. What would you think about saying: if the keyserver that you asked for is exactly hkps://hkps.pool.sks-keyservers.net, then use this custom CA as the default? So you change the default based on this one particular string, because we know about that operator and it's a special thing. If they override the default, fine; but if they don't, then with that exact string the default CA is different, and we use the system trust store otherwise. This feels like a gross hack, but it seems the simplest thing to do. Sound OK? So we should talk to Kristian about it and see how he feels about the idea of the CA being shipped, and how he thinks it should be handled if he needs to switch CAs. But that is something which should also be done upstream. That would be great, yeah, sure. What about the load on the pool? If we talk TLS, there's CPU cost — there's a dozen servers answering for the pool, and because it's hkps, all of them are now required to have basically a front-end reverse proxy to handle the requests. Well, SKS has had a whole bunch of really nasty availability issues.
Like, you could start a keyserver query, if you were talking directly to an SKS instance, and it wouldn't be able to answer any other query until you finished — it's like a slowloris attack, but times 100. But that's not true if you have a front end in place, a reverse proxy — and the way to do hkps is with a front-end proxy. So the servers in the hkps pool actually handle load better. So we should talk to Kristian about it and see how he feels, and if he's OK with it, then we'll make a patch and we'll offer it upstream. Isn't this a generic problem? We're talking about one keyserver pool, but it would be nice as a general policy to use hkps against keyservers — we just don't want to ship a lot of strange CAs. So we do handle hkps to arbitrary keyservers, but not all keyservers support hkps. We're talking about this one in particular because it is an actual hkps pool, and there's a nice property of the pool: you get better availability without leaking your queries to the network itself. And it's a specific pool that's maintained by someone in the community who's done a pretty decent job of keeping it up and running and has coordinated a wide group of people. It doesn't actually provide you with a ton of security, because anybody can join the pool if they just meet a certain technical threshold. So if you were a nation-state adversary and you wanted to find out who's looking up what, you could just add a keyserver, get it into the pool, and you'd be serving a share of the queries. I'm just concerned that we're giving some false sense of security by adding this. Well — it's not meant to be used that way, as long as you realize that. Yeah. But is the reason you need a special CA that it's fronting for a bunch of different — basically, that there are a bunch of different servers behind it?
There's a bunch of different servers that are part of this pool. I operate one of them; Kristian operates three of them; but, you know, there's another dozen people who operate these servers. And Kristian says: OK, you've met this particular technical bar — your server has been online, it's been syncing, it's offering TLS properly — so I'll make a certificate for you, and you can install that on your HTTPS front end for your keyserver. So the reason you need a special CA is that there are basically a dozen different servers that are authorized to respond for that name. Yeah. So it couldn't be signed by a regular CA — or Let's Encrypt, or something? Who's going to pay for that? Well, it's not about the payment — are you sharing the secret key between all these machines? You'd need to. If you're doing a TLS front end, you either need to share the secret key or make separate certificates for each of them with different keys, and then you're asking a CA: "yeah, I need ten certificates for the same name." That's pretty unlikely to work, actually. Yeah, I see what you mean. Well, at the same time this lets us say there's only this one point of failure, instead of the 400 points of failure in the CA system. Yeah. OK, so let's move on from this. It sounds like people are kind of OK with it; we have some open questions; it's not 100% fixed, but it's probably better than nothing. Next: the GnuTLS team, which is basically Andreas Metzler as far as I can tell, is still maintaining libksba and libgcrypt, despite the fact that GnuTLS no longer uses libksba or libgcrypt. Andreas seems fine continuing to maintain them, and we don't need to take them over — I'm happy to let him continue, unless we think there are specific changes we need to make. Do you see any changes coming up in either of those that we should take care of instead of Andreas? I mean, I have no problem with that story.
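For reference, the hkps default discussed above would amount to something like this in dirmngr's configuration — a sketch only, and the path where the package would ship Kristian's pool CA certificate is hypothetical:

```
# /etc/gnupg/dirmngr.conf (sketch)

# Query the SKS pool over TLS rather than in the clear:
keyserver hkps://hkps.pool.sks-keyservers.net

# Trust the pool-specific CA for that exact host
# (file path is illustrative):
hkp-cacert /usr/share/gnupg/sks-keyservers.netCA.pem
```

The "gross hack" proposed in the session would make this CA the built-in default only when the keyserver string matches the pool exactly, falling back to the system trust store for any other hkps keyserver.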
libksba is only used by GnuPG, for S/MIME. So libksba is the easy one to take over. libgcrypt I find a little bit scarier to take over — it's a separate project. OK. So I guess we should mention that those two are still part of the GnuTLS packaging team, as far as I understand. So if you want to work on them, there's another team you can join. And libgcrypt is used very widely — even in systemd. libgcrypt is used in systemd? Yes. Don't mention that to Werner. I'm a fan of systemd. So: the gnupg-doc package. Seriously out of date, and nobody seems to be working on it. I've asked Don Armstrong — is Don Armstrong around this year? — since he was the one who'd been packaging it before. I kind of feel like we should just drop it from the archive. It isn't getting a lot of attention, and I don't think there is an active upstream GnuPG docs collection. Are there documents in there that you could use upstream? I don't know — there are too many docs, and they're not too nice. That's kind of why I've been looking for someone to sort out all these different pieces of documentation, but I've not found anyone. So if we drop this, that gives you one less thing dangling. All right. So unless Don Armstrong has a strong objection, we'll just encourage a removal of the package. It's not technically under the team umbrella at the moment — I offered to put it under the team umbrella and got no feedback, no changes — so I'll just push on that further. Next: where do we want to put the udev rules for smart cards? They're currently in the gnupg package, which is built from GnuPG 1.4. Do we want to break them out into a separate package that just has the udev rules? Niibe, do you have a preference?
We could move them when we move /usr/bin/gpg and the binary package name. Or we could just break them out into a separate package and have gnupg and gnupg2 depend on those udev rules separately. I don't know which. Yes — strictly speaking, smart card access by GnuPG 1 and GnuPG 2 is a bit different, and the supported readers are different, but I think most users don't care. Since you mention it — maybe you should consider removing all the smart card support from GnuPG 1? Yes, that would be easy. We recommend the use of GnuPG 2 for smart card access, so the udev rules should be maintained by the gnupg2 packages. OK. Do you want them in a separate package, or are you OK with them being in scdaemon? I think — I mean, I've worked on packages with udev rules, and we see this problem in non-GnuPG situations as well. I think the real solution is to move the udev rules into the udev package itself. That could be an option for GnuPG. We've been trying to get udev rules in there — I can file a bug, but the udev side seems opaque to us. But I think they belong there; upstream udev seems to be missing them. So maybe the thing to do would be to file a bug against udev to get those rules into udev. Yeah, why not? And then if they don't act on it, there's no reason we can't just put them in scdaemon. It would make sense to have them in the udev package if udev upstream carries the same rules — sure, it seems to be the same problem. I guess it could get complicated, too, having udev rules in two places. So how do we coordinate that transition with udev? Does that mean udev is going to break against an older version? Is there a conflict or something? I mean, you don't have to use the same file name — you can just put them in different files. So then there's two files... But then you have —
Yeah, there's enough potential for confusion when we have the same rule in two files, but it doesn't actually cause problems, and once it's in udev you can remove it from the other packages. Why don't we just have the new udev declare Breaks against the old packages, so it isn't shipped in two places? Yeah — that's a bigger ask of the udev maintainers. I would just ask them to add the rules; talk to them and see if they think this is a good idea. Is there any chance I could ask you to volunteer to do that? It sounds like you have a pretty good sense of what there is to do. Yeah, it's been on my to-do list for a long time. Sorry — you're Simon Josefsson? Oh, nice to meet you. Why do you think the udev rules should be inside the udev package? Because actually you can't use smart cards without having GnuPG installed. So you could also argue that these udev rules only make sense if you have the package installed that uses them. No, you can use them without it. And the udev package contains a lot of rules for various hardware. So maybe it's a question for the udev maintainers whether they think the udev package should contain rules for all hardware, or whether the rules should live in the individual packages. Probably, if it's the individual packages, then a udev rule means depending on whatever package needs it. I think the biggest problem here is the question of maintenance and control. If you're submitting to udev, you're submitting to the systemd package — a rather central package, which most people wouldn't want to touch. I mean, for example, if you need more up-to-date udev rules, and the udev package is built from systemd, then you would either need to make some local hacks or actually backport it completely. And the people who are maintaining all these smart card packages are probably more up to date and more responsive — there's probably no one within systemd with the specific domain knowledge.
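For context, the smart card rules under discussion are small udev fragments along these lines — the file name is illustrative, and the vendor/product ID shown is one commonly supported reader, used here only as an example of the pattern rather than the exact rules shipped in the package:

```
# 60-scdaemon.rules (sketch; file name and IDs are illustrative)

# Tag an SCM SCR335 smart card reader (04e6:5115) so that
# scdaemon can access it as an unprivileged user:
SUBSYSTEM=="usb", ATTR{idVendor}=="04e6", ATTR{idProduct}=="5115", \
    ENV{ID_SMARTCARD_READER}="1", ENV{ID_SMARTCARD_READER_DRIVER}="gnupg"
```

Because rules like this are pure match-and-tag lines keyed on USB IDs, shipping the same rule in two files during a transition is harmless, which is the point made above about coordinating the move with the udev package.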
There are some rules from upstream, right? Well, of course, upstream does ship udev rules. I don't know — maybe it's a discussion to take to the udev maintainers, to see if they have a preference. I think you'd have to worry about that part. So, are you OK spearheading that? It sounds like you've got a sense of what the risks are. Yeah, I can try to reach out to them and lay out the options — there may be subtleties I'm not aware of, but I can suggest something and see what they think. OK. So, in the meantime, we'll leave the udev rules in the legacy package while we're doing this cutover in experimental. Does that sound right? I mean — if we're going to try to do the cutover, we don't want the cutover to block on the udev discussion. Right, don't block on it. What's going to happen, though — it seems like somebody could lose their smart card access during the cutover, right? If they just do apt-get upgrade and don't remove the old package — I mean, once you want to remove the rules from the new package, you don't know. So maybe the rules need to be swapped into scdaemon, and if the udev maintainers are unresponsive, maybe for the cutover in experimental the rules just move into scdaemon. Yeah. And with Breaks in the right place you can make sure of that. Right, right. So it'll probably take some time to try. Yeah, so we'll go directly to scdaemon first; the udev move can come later. Does that sound right, Niibe? Yes. Yes. So, there's this open question about divergence from upstream: we might adjust some things — maybe some specific hardening, or changes to the default algorithms or the default preferences or whatever. I think this is going to be an ongoing discussion, but I'm already getting notes from people saying, hey, you should change this and you should change that.
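As an example of the kind of per-key configuration in question — these are the sort of settings the Debian keyring-maintenance guides have recommended for new keys. Treat the exact lists as illustrative of the pattern rather than authoritative current advice:

```
# ~/.gnupg/gpg.conf (sketch of commonly recommended hardening)

# Prefer strong digests when making signatures and certifications:
personal-digest-preferences SHA512 SHA384 SHA256
cert-digest-algo SHA512

# Preferences recorded into newly generated keys, so that
# correspondents pick strong algorithms when encrypting to you:
default-preference-list SHA512 SHA384 SHA256 AES256 AES192 AES ZLIB BZIP2 ZIP Uncompressed
```

The divergence debate below is essentially about whether settings like these should become compiled-in Debian defaults, stay documented per-user configuration, or be offered upstream as a build-time policy.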
Most of the notes that I've gotten have been very broad — like, "here's my gpg.conf, this should be the default so I don't have to modify it." And I don't want that; I think that's a bad idea. But I do think we should be willing to do some divergence in that sense. So we need to figure out how to keep track — we'll track the divergences in the debian/patches folder — but we need to think about how we accept new requests and how we make those decisions. What kind of divergence are you thinking of — like hardened-default patches? So, we have some examples: we've already diverged a little bit, in terms of gpg-agent turning off the PR_SET_DUMPABLE process control. Got it, thanks. Conceivably we could change things like the default preferred hashes, the default key sizes, the default algorithms — but I don't want to at the moment, and I hope there's no reason to. But in Debian right now, when Debian developers are going to make a new key, the instructions that are out there actually require you to make a handful of configuration changes so that you get a key that we think is going to be good in the long term. Those changes may in some cases have interoperability issues with people who are stuck on much older systems, so there's a tradeoff to be made there. And I'm hoping that if we make a divergence and it demonstrates that there isn't an interoperability concern — we get no complaints from folks using it in Debian unstable, and no complaints from folks using it in Debian testing by the time testing becomes stable — that might be an OK thing, and we can then advise upstream: hey, we didn't get any bug reports about interoperability — or we got three, and they were from these predictable use cases — and now the docs can say, "here's how you set up your key," and the special case gets its own note. I'm happy to talk to somebody who's using an old system, but I would rather have that corner case be where the special configs live: the people who do have special needs do the special configuration. So, on that specific use case you're predicting — I think you could still come to a solution that works upstream. For example, I was just talking with Nikos from GnuTLS upstream, who's doing this kind of work in Fedora, where they have a distribution-wide policy: if you just build GnuTLS with a configure option or something like that, you adopt those stricter, hardened default settings. I'm not sure whether that would work for you, but I think many of these changes could be made in a way that they also get applied upstream. Sure. I mean, OpenPGP is quite different, because we have these long-lived keys — and the protocol has its own mechanism to select algorithms, so I'm not sure whether there's a way to do it. It's also different because algorithm selection is done asynchronously: in TLS, algorithm negotiation happens in the handshake that's going on right now; in OpenPGP you just have a hint in the key preferences, and the sender has to go along with it. I mean, if we're doing that, it might be interesting also for people building GnuPG against a distribution policy. Well, if we make a change, we can make it a configurable change and put the switch in our configuration — and then if you want to take it upstream, fine; but if you're not interested in doing that, then we might as well use a simple change. Well, I hope we get to some standard on that at some point. Sure, but we have an opportunity to do some experimentation within Debian that maybe you don't feel comfortable doing in GnuPG upstream. I mean, Gpg4win might have a different situation — that's one problem, we'd be using different versions. Anyway, my goal is not to diverge from upstream; anywhere we can avoid diverging, I'm happy to — but there are some things in some of the defaults that would be useful. We're running long, we should keep going. So: ongoing work for folks who want to participate. There's bug triage — we've got a bunch of open bugs, some of which I think no longer apply, and it would be great to just go through them, try to get a test case for each bug, see whether they still apply, and if they don't, close them. There are ways to get other people to participate, and we should think about how to do that. I think our relationship with upstream right now is pretty good, as these things go, but there are other ways we can be useful in packaging too. Jonathan McDowell asked for a backport of 2.1 — or 2.2, or whatever it ends up being — once we have it in a shape that seems reasonable, with the goal of being able to use it on Debian infrastructure, so that we could verify, say, newer signature types on Debian infrastructure before the next release cycle. Whether DSA is going to be comfortable doing that or not, I don't know, but Jonathan McDowell did specifically offer to do some of the backporting work. I did have a thought, though: once we do the cutover, the backport becomes a little more scary. It's not just backporting a new version — it's backporting the swap of the versions, which is a little more terrifying. Hopefully the Debian infrastructure is not relying on, say, verifying PGP2 keys — we did a bunch of work over the last several years to get rid of all that stuff. What I mean is: any bugs that we figure out during the swap, we'd have to backport those fixes too, to make sure those are included — and that could break people's systems more readily than just backporting a new version of GnuPG 2. Deploying it on the archive infrastructure certainly requires doing a backport — there's no question about that — but I'm not convinced that doing the swap is intended or required there. Having it installed as gpg2, and pointing those tools at it for the newer signatures, seems like a feasible thing to do. It sounds like a lot more work, since dak already has these paths in its configuration — and it's not just dak, either. So I'm still learning who should do the backport; maybe we need to talk to Jonathan and see what he thinks. It's going to be a lot more work to do it without the path swap. So: a couple of hardening ideas have come up. In Werner's talk, he talked about this idea of a privilege-separated gpg-agent; you could also do a privilege-separated dirmngr. Debian is in a good position to actually do that kind of work, because we're at the distro level — if we need to add user accounts, or make sure that you're running on Linux to use some feature, or whatever. So if folks have ideas for how to do that hardening, we should do it. You were talking about forwarding the agent over a network; my big concern about that is that the agent currently just goes ahead and uses the secret key material. Yeah, I know — the big question is how you can attach a confirmation to the connection. I have an idea how to do this in a general way: pass metadata along with the private key operation. OK. I'm also interested in the idea of a gpg-agent that runs as an isolated process on my machine, where its signaling mechanism is something like the LED on my laptop — not in the X session at all — and my ability to say "yes, go ahead and use the key" goes through an ACPI trigger or something like that. This would be like a separate pinentry, maybe. If it's just a confirmation that's needed, nothing running in my session can cause an ACPI trigger, nothing running in my session can flip the LED light. So: the idea of having sort of a soft smart card, something separate. Anyway, we should talk about that more — this is also related to the privilege separation. So, we're pretty much out of time. I was hoping that we'd get to actually start hacking on the cutover. Do folks have other things they want to bring up? Does anybody want to —
help or does anybody want to take on any of these tasks themselves somebody want to volunteer to triage five bugs from your deviant BTS come on five bugs four bugs it's a reverse auction one you'll triage one bug I'm going to write that down in the notes yeah I think if people can't do any book triage or just watch on the main list and if you have domain expertise in some aspect of this it would be great there's bound to be a few bugs no metal fence but there's probably going to be a few bugs that's true in GBG and in our packaging that people will want to deal with can folks try out the experimental packages with the cutover is that something we can make in there and I won't write you down in the notes thank you that would be definitely useful yeah how much will we have any particular keys to test it so I think we shouldn't be encouraging the creation of the Curved Keys anytime soon but there are already some people within them if you have them of the Curved Keys right so we can test the verification of encryption mechanisms do you have the NIST current so long then what do you mean we need to talk to the media because they are coming to the page of the NQC something it's probably the mysteries so I don't think we should encourage people to create them but I do think we should encourage people to try using them we should be able to use them so I think Eric and I are going to try to do a first pass at the cutover at DevCon if we can find a few hours to work on it if anybody has any warnings about what we might write into it we would be very happy to hear about it you mentioned that is the UDEP differently from the rest it is it's minimally configured so there are fewer dependencies and fewer options I think that's it our time is up and I know the video team needs a break any other questions are people here subscribed to the mailing list for package.devian.org please subscribe to the mailing list we'd love to have your feedback on that it's not a super high volume 
there's also a pound of devian.vmpg on the left there's also very little traffic so yeah, if you've got issues that you're running into find us on the mailing list or the BTS so I think that's it thanks everybody
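Since forwarding gpg-agent over a network came up: GnuPG 2.1's agent can expose a restricted "extra" socket precisely so a remote machine can request key operations without full control of the agent, and newer OpenSSH can forward Unix-domain sockets. A minimal sketch of the setup (the hostname and home-directory paths are illustrative assumptions; socket locations vary by system):

```
# ~/.gnupg/gpg-agent.conf on the LOCAL machine:
# ask the agent to also listen on a restricted socket intended for remote use
extra-socket /home/you/.gnupg/S.gpg-agent.extra

# ~/.ssh/config on the LOCAL machine ("remotebox" is a hypothetical host):
Host remotebox
    # forward the restricted local socket to the remote agent socket path
    RemoteForward /home/you/.gnupg/S.gpg-agent /home/you/.gnupg/S.gpg-agent.extra
```

The remote sshd additionally needs `StreamLocalBindUnlink yes` so a stale socket file gets replaced on reconnect. The restricted socket refuses the more sensitive agent commands, which speaks to part of the concern above about the agent blindly using secret key material on behalf of a remote peer.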
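For anyone volunteering to try the experimental cutover packages once they're uploaded: experimental is never pulled in automatically, so enabling it is fairly low risk. A sketch, assuming a standard mirror (the file name and mirror URL are illustrative):

```
# /etc/apt/sources.list.d/experimental.list
deb http://httpredir.debian.org/debian experimental main

# then fetch just the gnupg2 packages from experimental:
#   apt-get update
#   apt-get -t experimental install gnupg2
```

Packages in experimental carry a very low pin priority by default, so only what you explicitly request with `-t experimental` is installed from it; everything else stays on your current suite.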