All right, let's start. Hello, everyone. My name is Tomáš Hozza. I've been a Fedora contributor for more than three years, and I've been mostly working on DNS-related stuff in the networking space. So today I'd like to explain DNSSEC from a high level and the changes we're trying to make in Fedora. So let's start. During this presentation I'll cover why DNSSEC is important and why we should care, what the change in Fedora is all about, how it works, and what the integration in the Fedora products looks like. I've been working on Workstation, Server and Cloud; not everything is finished yet, but I'll cover what our plans are. So why is DNS so important? Hopefully all of you know what DNS is. It's a distributed database that is able to store various types of data. Usually it's used for translating domain names to IP addresses, but that's not the only thing you can store in DNS. You can also store different types of data, for example, as I call them, trust data: data you can use for verification of, for example, certificates or remote hosts' fingerprints and stuff like that. So for example, the TLSA resource record can be used for verifying a remote host's TLS certificate. The SSHFP record can be used for verifying a remote host's SSH fingerprint, so you don't have to be asked whether the fingerprint presented by the remote host is really the one it should be; it can be verified by the SSH client automatically. Or IPsec keys: for example, I know Paul is trying to do some opportunistic VPN tunnels and stuff like that. So in the ideal world, if you are trying to connect to some remote server, first you look up the IPsec keys of that server and create a tunnel, that server will look up your IPsec keys, and you can start communicating over the tunnel.
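To make the SSHFP idea concrete, here is a minimal sketch of the check an SSH client can do: hash the public key blob the server presents and compare it with the fingerprint from a DNSSEC-validated SSHFP record. This is not OpenSSH's actual code, and the key blob below is a made-up placeholder, not a real host key.

```python
import hashlib
import struct

def ssh_string(data: bytes) -> bytes:
    """Encode a byte string in SSH wire format (4-byte length + data)."""
    return struct.pack(">I", len(data)) + data

def sshfp_sha256(pubkey_blob: bytes) -> str:
    """SSHFP fingerprint type 2: SHA-256 over the raw public key blob."""
    return hashlib.sha256(pubkey_blob).hexdigest()

# Made-up Ed25519 host key blob: type string plus 32 zero key bytes.
# A real client would use the blob the server sends during the handshake.
offered_key = ssh_string(b"ssh-ed25519") + ssh_string(bytes(32))

# In real life this hex string would come from a validated SSHFP lookup;
# here we derive it from the same blob so the comparison succeeds.
published_fp = sshfp_sha256(offered_key)

# The actual check: accept the host key only if the fingerprints match.
assert sshfp_sha256(offered_key) == published_fp
```

The point of DNSSEC here is that `published_fp` can be trusted only if the SSHFP answer itself was validated; with plain DNS an attacker could publish a fingerprint matching their own key.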
Then there is a resource record you can use, for example, for storing PGP certificates, and there is the Certification Authority Authorization (CAA) resource record you can use for restricting which certification authorities can issue certificates for your domain. So this opens possibilities for new kinds of applications, like I said: opportunistic VPN tunnels, SSH clients that automatically verify the remote host fingerprint, and so on. But with plain DNS it doesn't make sense to get that data from the DNS database, because with plain DNS there is no data integrity mechanism and no data authenticity mechanism. So plain DNS is vulnerable and suffers from various types of attacks: man-in-the-middle attacks, cache poisoning attacks, spoofing attacks and so on. And also, in the usual situation, DNS works over UDP, so it is really easy to spoof the client's or the server's IP address. So this is why DNSSEC was introduced. It stands for DNS Security Extensions. It's just an extension of the DNS protocol, so all the implementations that don't have DNSSEC implemented should just ignore all the extra data in the responses and all the extra flags. DNSSEC provides data authenticity and integrity in the DNS world, and it uses asymmetric cryptography. Basically, all the data in zones using DNSSEC are signed with asymmetric keys, and then you can verify the server's responses by building a chain of trust from the root zone, using the well-known key used by the root servers. So you can build the chain of trust from the root zone down to the host name you are trying to resolve, and verify that the response is really the one it should be and that no one messed with it. Or you can find out that there is no such domain name, and be sure that this is really the response from the authoritative server. So now we know why it is important to have DNSSEC; so what's the change all about?
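Before moving on, a toy illustration of the spoofing weakness just described. This is not a real resolver, just the header check: with plain DNS over UDP, the only things tying a response to a query are the 16-bit message ID and the source port.

```python
import struct

def dns_header(query_id: int, response: bool) -> bytes:
    """Build a minimal 12-byte DNS header; flags are the standard
    query (0x0100) or answer (0x8180) values."""
    flags = 0x8180 if response else 0x0100
    return struct.pack(">HHHHHH", query_id, flags, 1, int(response), 0, 0)

def client_accepts(query: bytes, reply: bytes) -> bool:
    # The only check plain DNS gives a client: the 16-bit IDs match.
    return query[:2] == reply[:2]

query = dns_header(0x1234, response=False)
assert client_accepts(query, dns_header(0x1234, response=True))      # real answer
assert not client_accepts(query, dns_header(0x4321, response=True))  # bad guess
```

A blind spoofer flooding forged replies only has to guess a 16-bit ID per source port, which is why off-path cache-poisoning attacks are practical and why cryptographic signatures (DNSSEC) are needed instead.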
We are trying to focus on the client side, and by that I mean the application client side. Everything works pretty nicely on the server side, but when it comes to, for example, mobile machines and clients, there is really not that much support built into applications. So we are trying to integrate multiple components on the system so they can be installed by default in Fedora and provide some extra security. These components must respond to network configuration changes and do some other stuff. So we are trying to have a local validating resolver on the system, installed and running by default. This will allow us to provide some extra security, from the DNS point of view, to applications that don't even care, for example your web browser. It doesn't have to fetch any trust data from DNS, but even so it can be sure that your internet banking domain name was translated to the right IP address and that it was verified. Applications that do care, like an SSH client, or a browser that wants to do TLS certificate verification using the TLSA record, should preferably use some validating DNS API, for example getdns or libunbound or something else. You could rely on the locally running validating resolver, but you can never be sure, or make sure, that there is such a resolver running and that it was configured properly. And the libraries you are commonly using, for example glibc, are kind of broken from the DNSSEC point of view, so if you do care about security, you should preferably use some better API for DNS. So how does it work in Fedora? In Fedora we are using three main components. NetworkManager, since it's used by default in Fedora as the network configuration manager; we use it for notifications about network configuration changes and for all the data about the current network configuration. Then there is the unbound server, which is a validating DNS resolver. It's used for the actual name resolution and it does the validation.
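For reference, enabling validation in unbound comes down to pointing it at the root trust anchor. A minimal unbound.conf fragment might look like this (the anchor path matches the Fedora unbound package layout; treat the exact values as illustrative):

```
server:
    # validate responses, building the chain of trust from the root key
    auto-trust-anchor-file: "/var/lib/unbound/root.key"
    # listen only on the loopback, acting as the local system resolver
    interface: 127.0.0.1
```

With this in place, any stub client pointing resolv.conf at 127.0.0.1 gets validated answers without knowing anything about DNSSEC itself.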
Unbound is kind of special because it was developed with DNSSEC in mind from the beginning, and its purpose is not to be a Swiss army knife like dnsmasq, but rather just a resolver serving its purpose. Then there is dnssec-trigger. dnssec-trigger is the integration component between unbound and NetworkManager: it dynamically reconfigures unbound based on the current network configuration. So you can imagine NetworkManager on top, which on every network configuration change notifies dnssec-trigger that there is some change. dnssec-trigger uses the standard NetworkManager client library to fetch the current network configuration from it, and it tries to figure out whether it should reconfigure unbound in any way. It gathers, for example, the default connection, the DNS servers provided by the default connection, whether there are any VPNs, and the domains and name servers provided with those VPNs, and so on. When dnssec-trigger has all the data, it performs various tests. For unbound to be able to do the validation, it needs to be configured with a proper set of DNS servers, and those DNS servers have to be able to provide all the data necessary for the validation. So this is what dnssec-trigger tests for: whether the network-provided DNS servers are capable of providing all the data that is necessary. If this is not the case, it tries whether it's possible to reach the authoritative servers on the internet directly. If that's not possible, it tries whether it can contact the Fedora infrastructure, because in the Fedora infrastructure we have DNS resolvers running on TCP port 80 and also on port 443 using SSL, so it tries to tunnel all the DNS communication to the Fedora infrastructure. If that's not possible either, it should basically inform the user that something is seriously broken. Then, when all those tests are performed, unbound is reconfigured so it reflects the current network configuration.
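The fallback order just described can be sketched as a small decision function. This is a hedged sketch: the probe flags and mode names are illustrative, not dnssec-trigger's actual API.

```python
def pick_resolution_mode(dhcp_servers_ok: bool,
                         full_recursion_ok: bool,
                         tcp_ssl_tunnel_ok: bool) -> str:
    """Mirror the fallback order described above, best option first."""
    if dhcp_servers_ok:      # network-provided servers pass the DNSSEC probes
        return "forward to network-provided servers"
    if full_recursion_ok:    # query the authoritative servers directly
        return "full recursion"
    if tcp_ssl_tunnel_ok:    # tunnel DNS to trusted infra on port 80/443
        return "tunnel over TCP/SSL"
    return "insecure"        # last resort: no DNSSEC validation at all

# The happy path and the worst case:
assert pick_resolution_mode(True, False, False) == "forward to network-provided servers"
assert pick_resolution_mode(False, False, False) == "insecure"
```

The design point is that each step preserves validation; only when every validating path fails does the system degrade to the plain-DNS behavior Fedora has today.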
Currently we support split-DNS views for VPNs: if you connect to your VPN, and the VPN provides you a different set of name servers and some domains, and the VPN is not configured to be used for all resources but just for the resources from the VPN network, dnssec-trigger configures special forward zones in unbound, so that the VPN-provided name servers are used only for the domains from the VPN. Then we are configuring forward zones also for private IP network ranges, so you are able to do reverse resolution of private IP addresses. Then we have the couple of fallback mechanisms I mentioned: full recursion, and then DNS over TCP or SSL. You can set up your own infrastructure for that if you don't trust the Fedora infrastructure, but by default we are using the Fedora infrastructure. There is also infrastructure from the dnssec-trigger upstream. Okay, so what's the integration in Fedora? In Fedora we have had different products for some time, and those products have different audiences. Therefore, after some pretty long discussions in Fedora, we figured out that we need to provide product-specific configurations, because the GNOME folks were not really happy about the changes and the user interface dnssec-trigger provided. We also figured out that there are a couple of integration points we need to solve, and maybe solve differently in different products: for example captive portal detection, captive portal handling, hotspot login handling, and user interaction. So what are the common things? In the beginning there was the dnssec-trigger panel, which was not really user friendly, but it worked and it was installed by default. The Fedora Workstation product didn't like it, so we split it into a separate sub-package, so it's not installed by default, and we made changes so that we don't need any extra user interaction; we left everything up to the desktop. Then captive portal detection: it's now turned off.
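Coming back to the split-DNS configuration described above: in unbound terms it corresponds to forward zones roughly like these. The zone names and addresses are made-up examples, and dnssec-trigger installs such zones at runtime via unbound-control rather than a static file:

```
# queries for the VPN's domain go only to the VPN-provided resolver
forward-zone:
    name: "corp.example.com."
    forward-addr: 10.8.0.53

# reverse lookups for a private range go to the same internal resolver
forward-zone:
    name: "8.10.in-addr.arpa."
    forward-addr: 10.8.0.53

server:
    # don't expect DNSSEC signatures from the private reverse tree
    domain-insecure: "8.10.in-addr.arpa."
```

Everything not matching a forward zone follows the default path (network-provided servers, recursion, or one of the fallbacks), which is exactly how the VPN servers stay limited to the VPN's own domains.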
Currently only in Fedora Workstation, but it will most probably be turned off in every product and every variant. We will rely on NetworkManager, because NetworkManager is doing basically exactly the thing, maybe even more, that dnssec-trigger was doing when detecting a captive portal. And for that we will need notifications on connectivity state changes from NetworkManager. That's not yet available, but the NetworkManager developers promised to provide this functionality. So just like the NetworkManager dispatcher notifies you on network configuration changes, it will notify us on connectivity state changes, for example whether the internet is reachable or whether there is a captive portal and so on. So in Fedora Workstation we completely turned off the captive portal detection and hotspot login functionality, because GNOME Shell already implements this, and ours basically didn't work really well because there were some race conditions. We are doing no user interaction, but we want to provide some way to easily configure notifications if you really want them. The GNOME developers don't want to bother users with notifications that may confuse them, or that the user may not even understand. For example, when all the fallback options fail and we are switching to some kind of insecure mode from the DNSSEC point of view, the user may not understand what the implications are and may get scared, something like that. Therefore, also to get rid of the need for user interaction when all the fallback options fail, we got rid of the UI for that, and we do an automatic switch to insecure mode when all fallback options fail. The switch to insecure mode basically means that you are in the state that is currently in Fedora: we are not doing any DNSSEC validation. So, does the user have any way of knowing that they've fallen back to insecure mode?
So if I'm a user of Fedora Workstation and I think I'm using DNSSEC, and then I walk into a cafe or whatever and something fails and we fall back to insecure mode? Actually, we as the change developers think that the user should be informed, but the GNOME developers have a different opinion about this. But when introducing the possibility to automatically switch to insecure mode, because dnssec-trigger didn't have that before, I also added an option to run some command on the switch to insecure mode. So it is easily possible, and it will be present but commented out in the configuration, to send a desktop notification, so you can be notified: now I switched to insecure mode. And the automatic switch to insecure mode can be disabled, so it will not happen at all. And applications will actually know, because they're not getting the DNSSEC data anymore. So they'll actually know that something's wrong; if they care, they will know. So maybe we'll figure out some good notification that is digestible for the user and also acceptable to the developers. But right now we agreed not to do any notification, which is not really good from our point of view, because everything silently fails and you are losing the security; but it doesn't seem like a priority for Workstation. For Cloud, we basically just started the discussions. We think that the trusted resolver makes sense at least on the host where you are running containers, because containers running on that host can reuse the locally running resolver and get extra security even though they don't care or are not aware of it. And for sure it doesn't make sense to have a local resolver in the container images themselves, because you don't want each container to run a separate resolver. The problem is that Docker currently is not able to use the locally running resolver. It works in a way that if the host's resolv.conf contains some IP address different from localhost, Docker will copy the IP addresses to the container's resolv.conf.
But if there is a localhost address, it will put Google's DNS server IP addresses in the container's resolv.conf. That is basically the easiest way to solve this, and it works, but it's not the most clever thing to do. Does it do that for all localhost addresses, or, say, if you were to change 127.0.0.1 to 127.0.0.2? I think it's 127.0.0.0/8, so anything from that subnet is skipped. We proposed an easy hack using iptables rules to forward all the DNS queries from containers to the locally running resolver, but the upstream wants to solve it in a more proper way by implementing some DNS proxy, and they are not really sure when this will be available or how it will work. The communication is kind of slow, and we are not sure how to solve it in Fedora without the upstream providing a proper fix. So the discussion is still ongoing. For Server, there is the question of configuration in the server environment. You don't have that many network configuration changes, and most probably everything is statically configured, so the dnssec-trigger part may not make much sense on a server. But if you use it, you get the value of not having to manually configure anything, because it will configure unbound automatically, even though it will be just once, when you boot up the server. And there is also the dnssec-trigger-control tool, which substitutes for the user interface on the command line. It's usable for getting data like what the current state is and what the outcomes of the tests performed by dnssec-trigger are, and for switching to different modes. This tool is included by default in every product, but it makes the most sense in environments where the command line is the usual way of interacting with the system. As for other variants: in Workstation, as I said, GNOME Shell provides the user interaction and hotspot login functionality.
But for other variants, some spins that don't provide this, it will be necessary to install the dnssec-trigger panel sub-package for the user interaction. As for the rest, it should all be the same as for the other products. So to summarize: DNSSEC on the client side opens new possibilities. Like I said, you can store special data in DNS, and write or enhance your client applications to use that data, for example to validate a remote host's certificate or fingerprint, or to build a VPN tunnel to the host. The changes we are doing mostly consist of a tightly integrated set of components: NetworkManager, unbound and dnssec-trigger. We are providing a different configuration for Workstation, but we are open to suggestions if there are other things that should be different; we can provide different configurations also for other products or variants. I forgot to mention that by using DNSSEC, some dirty hacks that were commonly used with plain DNS will stop working. For example, in the past, if you used just NetworkManager and you connected to a VPN, NetworkManager put the VPN's DNS servers in the first place in resolv.conf. So even though the VPN was configured to be used only for the resources from that VPN network, those VPN-provided DNS servers were used for all DNS queries. This was basically leaking your privacy, but it worked: even if you didn't list all your domains when you connected to the VPN, it still worked that way. But now it will stop working, because if you don't specifically list all the domains from the VPN, queries for domains that are not listed will not be sent to the VPN servers. Also, if you connect to some Wi-Fi at an airport and they are using their own top-level domain that doesn't exist, it will not work, because they are basically making up their own records. You can get a cryptographic proof of non-existence of that top-level domain, but they are still claiming that there is such a domain.
So if you are using DNSSEC, this is basically an attack, so you will not trust their answer. I like that; that's a good thing. But you know, a lot of users are used to things just working, and by introducing new security countermeasures you are breaking things that were basically misconfigurations or misinterpretations of RFCs. But now for $180,000 you too could have your own top-level domain. Right, so why not buy one. So I would like to ask you to test dnssec-trigger in Fedora, if you haven't yet, and provide us with use cases for which it doesn't work for you. Not all the use cases you had may be valid, so maybe I will tell you that you should change your configuration. Hopefully there will be some place where I can share the slides with you, so we can go through some links, or the Fedora wiki, if you are interested in the design documents we put together after having to explain over and over and over again what DNSSEC is all about, what the change is all about, why I think it's important and stuff like that. So that's basically everything from my side. So when you were talking about the cloud host, are you saying that the DNSSEC resolver wouldn't be included in the cloud image? I think it doesn't make sense to include it in the image. Yeah, I agree; I just want to make sure it's not included, because the way it's been presented on the mailing list it seems like you guys are trying to put dnssec-trigger in the cloud image as well as in the Atomic Host, which I don't think makes sense. Actually, I don't think there was any previous discussion, but just this week PJP sent some email to the Fedora cloud mailing list, and to be honest I didn't go through the email; but if he claimed that we should include it everywhere, that's not true. So what I mean to say is that in the containers it would not be included, because if you run a thousand things on one host, you'd be running a thousand resolvers.
Yeah, that's what I thought about the cloud image, especially because you're paying for the resources. Yeah, that's the idea. There is of course some trade-off: you then have to start to trust the host you're running on for DNS. Because you're outsourcing the trust of your DNS from within your container, in theory you're vulnerable to a host compromise. I mean, you're already outsourcing your computation, storage and network, so they can do anything they want with you over there anyway. But having the local validating resolver on the host, I think it makes sense to have it there. Can you define "cloud image"? Because I'm thinking of a cloud base image, which is just a disk image that you boot; and then you mentioned a thousand different instances of it running, which makes me think you're talking about a Docker image. So what are we talking about? So I'm talking about the cloud image that gets turned into the AMI. But there would be a thousand different instances running if you include it in that. But look, in a normal cloud, whether you run one server or a thousand, it doesn't necessarily make sense, because a lot of cloud providers provide network isolation and you get control over it. You get control over DHCP, so you can run the resolver there. So with each VM instance it would run? Not necessarily, if containers were running on top of the VM. For the cloud image, if you're already setting up your image in, say, AWS or Rackspace, it doesn't make sense to include the trusted resolver there, because if you want a trusted resolver there's a more efficient way of doing that external to the image, by dealing with the network configuration around it, in your VPC or your Rackspace private network. So I just think that including it in the cloud image doesn't make a lot of sense, because then you've got a bunch of DNS instances running inside an environment where you already control the DNS. And it's just wasteful on the resources.
And you don't want to run a thousand of them on VMs, because of the hardware cost. So I don't think it makes sense to run it there. You mentioned Atomic Host; are you saying it does make sense there, or it doesn't? Because we currently do Atomic Host for bare metal and Atomic Host for cloud, and right now they're almost the same bits. But for Atomic Host: if you're just doing a pure VM environment, you might be spinning up a thousand VMs because each one of your microservices is in its own VM or something like that. But if you're spinning up containers on top of a VM, each of your VMs is probably going to be a bigger VM. You're almost using the container host as a second layer; it's becoming your infrastructure. You run fewer Atomic Hosts, even if you're running them in the cloud. So I feel like it would be less wasteful to run the resolver there, because of the number of tenants you're serving; the point of containers is that you can serve way more tenants on the same hardware. So for the cloud, if you're just using cloud instances and not containers, you're going to be running a lot of them, because that's probably how you're separating things; whereas with the Atomic Host you're sharing one host among a lot of tenants. So it seems like it makes sense for Atomic Host, but not for the plain cloud image. He's saying it depends on where you divide the services. Because if you're doing it with a traditional cloud image, then you're probably going to spin up a ton of those cloud images to do their own things; but if you do it on Atomic Host, even if the Atomic Host is hosted in the same cloud situation, you're probably running a bunch of different containers on top of that Atomic Host. So you would only have one DNSSEC resolver on that Atomic Host for a lot of different services.
Versus the traditional cloud case, where you probably have a bunch of different VM instances and each of them has its own resolver. So you might have 10 Atomic Hosts with 100 containers each, so you're only running 10 instances of the resolver, whereas if those were separate VMs you'd have 1000 of them. It has some implications of its own to have unbound there: you can save some time and bandwidth by caching, but it is still running there, so it's consuming memory and CPU time. Yeah, because in a standard cloud environment you're running one web server per instance, with maybe some similar processes, and then you're running one unbound for every instance of the web server; that doesn't make as much sense as running 100 web servers in 100 containers all sharing the same resolver. Yeah, that would be good, because the resolver can do DNS caching for the whole host; you're not duplicating it per process. Sorry about the email... Yeah, I will reply with this to the mailing list. I was working on it this morning, but I just hadn't finished my own complete pass. Okay. Not having known much about DNSSEC before: is what we have now kind of an in-between step, before larger infrastructure gets in place for DNSSEC? And at what point in the future does the current implementation go away? Right, so... You keep the DNSSEC-validating resolver on the local host forever. Forever? Yeah, because it does caching for the whole machine. Okay. Most applications don't have caching; there's a lot of confusion about what happens now. Firefox has its own caching, for example, but most applications don't. I saw Lennart talk recently about something similar to this, where you didn't want each application to do its own caching of DNS, so there was some systemd service.
systemd is doing some tunneling of DNS over D-Bus back and forth. Uh-huh. Yeah. It's hard to sort everything out and understand it. DNS is a protocol that works over port 53; marshalling DNS into any other format seems odd. It's a well-known wire format, just run it over port 53. And resolved doesn't support DNSSEC yet. From Lennart's talks, and from what I discussed with them or asked them, they are planning some kind of best-effort DNSSEC. So we are trying all those fallback options, like doing full recursion or tunneling DNS over TCP, but their plan is not to do any of those fallbacks: if the network-provided DNS servers don't support DNSSEC, they will just turn it off. Yeah. That works. So... Am I the only one still listening? Yes. Okay. Because I have an interesting setup; I've been running something like this for a while. Actually, it works pretty nicely also with virt-manager: I'm running three instances of dnsmasq for the VMs, and I have unbound running on the loopback, being listened on. It's certainly helpful to be able to explain it with the integration. But other than that, it's just a config file. It should work out well. Well, I need to read it. Can you guys do the work? It's a setup that I want to use. And the local policy, meaning that anything that's not... For example, bind in the default configuration in Fedora is also listening on localhost only, so it works perfectly. Actually, we ship a default configuration with bind.
The Fedora policy is that you're not allowed to listen on the network by default, so you can start the daemon and it listens only on localhost, unless you manually reconfigure it. And we don't want that sitting in the default config file where somebody would just turn it on. You can install two DNS servers... And the problem is being aware of what's there. I'm trying to prevent users from shooting themselves in the foot. Right, yeah. They won't... No, they won't. Let's hope so; you know what I mean. We don't want to have more open resolvers on the internet helping with denial-of-service attacks and stuff like that. Someone will turn it on without understanding what's going on, and then it'll just be sitting there open forever. So back to the scope of your DNS setup: you're saying that this is going to work out of the box if you have NetworkManager configured directly for your VPN? Because in a lot of cases right now I'm just setting up dnsmasq myself manually, pointing resolv.conf at it, and then setting the domains for the... Right. If the VPN provides the domains: for example, if I connect to the Red Hat VPN, it provides the domain redhat.com and some set of DNS servers, and dnssec-trigger automatically configures unbound to forward queries for redhat.com to that set of DNS servers, and for everything else it uses a different path. So it depends on your upstream VPN configuration, though? Yeah. Some VPNs don't, by default, give you DNS. Okay. Do you set it in NetworkManager? No.
And it also depends on the type of VPN you are using, because we are fetching all the information from NetworkManager; if the VPN implementation is not communicating with NetworkManager, so NetworkManager is not aware of your VPN, then we will not configure it. But currently some VPN solutions do it directly... Yeah, and also, I think, OpenVPN... If you get a Cisco VPN, you will find that it has a kernel module that locks resolv.conf. No. It's a horrible thing. Sorry. You mean the Cisco VPN software? Yeah. Of course you can have all the resolution going through the VPN; it does bad tricks. You mean the proprietary software? Yeah. People have that. That's not necessary; you can use the stock IPsec software, which works fine, or you can use another one, like Juniper... Anything can happen. You don't need... Those are all RFC-compliant IPsec clients. Only if you don't use any special features; they all have their own special features that only work with their own client. I've been running a lot of VPNs. Yeah, but I've seen situations where you have to use a vendor-specific feature. Sometimes it's corporate policy too, those tricks: corporate policy allows Fedora and then mandates binary-only third-party VPN software. Okay. So there's one limitation currently: you can only actually get one domain, because the standard only supports one split domain. We're working on a draft, together with Apple, to actually extend this to a list of multiple ones. Right. But that's the limitation of the VPN software. Yeah. What about the reverse case where, for instance, I go to a customer and they've got their own domain locally, on their network.
But I actually want to do all of my resolution over the VPN, and only do the resolution for that one domain on the local, non-VPN servers. So you want the local network used for one domain, and everything else goes through the VPN? Yeah. If you configure your VPN to be used for everything... No, I mean, in NetworkManager, when I configure a VPN I can check the box to use the VPN for all traffic, not only for resources from the VPN network. So if I don't check the box, NetworkManager will prefer that VPN over any other connection. It will be the default connection from our point of view, and we are not distinguishing between whether it's a VPN or just a regular network. So how does the local domain work? Right, yeah. You would need to treat the local connection like a VPN as well, so the domain is attached to the local connection; the local connection then provides just that domain. So you have the domain thing in the... Yeah. Actually, we are configuring... If everything is routed over the VPN, then you're basically isolating your laptop from your local network; you're only connecting to the VPN. That's what it means. Exactly: the local network is then simply transport; the VPN takes over the local network. So currently we configure forward zones in unbound for any domains provided by the wired connection, so that would be configured. But if there are overlapping domains, so the VPN provides the same domain that the wired connection is providing, the VPN would be preferred. Okay. Yeah, it would probably only be for their local resources, the ones they create on their local network. Actually, we have a solution of our own for the situation where you connect to a wired network but the network-provided DNS servers are not able to provide all the DNSSEC data, so in the end you would not use them. But there may be a catch.
They may, however, have a different internal view of some zone, and for that internal domain you want to use them anyway. So we have a module for unbound that can be configured with a separate set of name servers. If the response from the internet was insecure, basically meaning that the domain is not signed, the module asks that separate set, the local DNS servers, for the answer. We do this only for insecure responses because we do not want to downgrade security.

Q: So what do you do for an internal domain if it is local and unsigned?
A: You try the internet first and prefer a signed answer. If you get an unsigned answer, you also ask the local servers, because they cannot provide a signed answer anyway. If you asked them for every query, you would basically downgrade security.

Q: A question: in your environment, can you have a subdomain that is explicitly not signed, with a proof in the signed parent, so that a client knows it is deliberately unsigned rather than there simply being no proof at all? I think a lot of people who might want to deploy DNSSEC, a company or whoever, might want their zone signed but still have some corp or temp subdomain where it is kind of wild west and they do not want to sign that. I do not want to publish that specific subdomain, but I do not want any other subdomain to be spoofable either.
A: Right, but you do not want a delegation or anything for that subdomain?
Q: Well, maybe; I do not know how.
A: You can delegate that subdomain, even to the same set of servers, and simply not include the DS record, the delegation signer. By not including the DS record for the delegated domain, you are saying that the domain is not signed, basically.
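The fallback behaviour of that unbound module can be sketched as below. The function names and the simulated backends are invented for illustration; the real module is part of unbound, not Python.

```python
# Sketch of the fallback logic described above: ask the internet-facing
# resolver first; only when the answer comes back "insecure" (the zone is
# unsigned, so validation cannot protect it anyway) retry against the
# network-local servers. All names here are made up for illustration.
SECURE, INSECURE, BOGUS = "secure", "insecure", "bogus"

def resolve_with_fallback(qname, query_internet, query_local):
    status, answer = query_internet(qname)
    if status == SECURE:
        return answer            # validated; never ask the local servers
    if status == BOGUS:
        return None              # validation failed; do not fall back
    # Insecure: the zone is unsigned, so the local servers cannot give us
    # anything less trustworthy than what we already have. They may,
    # however, know an internal-only view of the zone.
    local_status, local_answer = query_local(qname)
    return local_answer if local_status != BOGUS else answer

# Simulated backends: the public view lacks the internal-only name.
def internet(qname):
    return (INSECURE, None) if qname == "intranet.corp" else (SECURE, "203.0.113.7")

def local(qname):
    return (INSECURE, "10.0.0.5")

print(resolve_with_fallback("intranet.corp", internet, local))  # 10.0.0.5
print(resolve_with_fallback("www.example", internet, local))    # 203.0.113.7
```

The key property is that secure answers are never re-asked locally, so signed zones can never be overridden by the network-provided servers.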
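The delegation-without-DS mechanism can be illustrated with a toy chain-of-trust walk. The zone data here is invented, and a real validator of course verifies signatures and signed denials rather than consulting a dictionary; this only models how a missing DS record terminates the chain and marks the child insecure rather than bogus.

```python
# Toy model: a validator walks from the root toward the query's zone, and at
# every delegation it needs a DS record in the signed parent. A (signed)
# proof that the DS does not exist makes the child zone "insecure"
# (deliberately unsigned). Zone data below is made up for illustration.

# parent zone -> set of child zones for which a DS record is published
ds_records = {
    ".": {"example."},
    "example.": set(),        # corp.example. is delegated, but with no DS
}

def chain_of_trust(zone_name):
    """Return 'secure' if a DS chain reaches zone_name, else 'insecure'."""
    labels = zone_name.rstrip(".").split(".")
    zone = "."
    # Walk down: root -> example. -> corp.example. ...
    for i in range(len(labels) - 1, -1, -1):
        child = ".".join(labels[i:]) + "."
        if child not in ds_records.get(zone, set()):
            # Signed denial of existence of the DS: the chain ends here.
            return "insecure"
        zone = child
    return "secure"

print(chain_of_trust("example."))       # secure
print(chain_of_trust("corp.example."))  # insecure
```

This is exactly the effect described above: the parent proves that no DS exists for `corp.example.`, so clients know the subdomain is intentionally unsigned instead of treating it as an attack.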
Q: But then you are saying that the subdomain actually exists.
A: It exists, but it is not signed. You can also use NSEC3 opt-out: an opted-out range is not covered by the proofs of non-existence, so anything could exist in that range. That way you can even have a hidden domain name that still works, just not signed.
Q: I want to be able to say "these are unsigned subdomains, and this one is non-existent".
A: You have to take action to actually sign a subdomain, so if you take no action, by definition it remains unsigned, as long as it is delegated. If it is part of the same zone it has to be signed with the rest; you cannot have a half-signed zone. A half-signed zone is not technically possible, because signing also includes the proofs of non-existence of names. So you explicitly delegate the unsigned subdomain, and the client gets a proof of non-existence of the DS record for it; from that it knows the subdomain is not signed, because without that record no chain of trust can be built.

Q: To be honest, are you the only ones pushing DNSSEC validation like this?
A: We are the only ones contributing to this project, but there is another team who now installs a validating resolver by default, though they have sort of ignored all the hotspot problems. I guess they do not have much of a laptop market; they are mostly servers, so they do install it by default. People who do not run validation do not know what fun they are missing. If you install bind in Debian, it has validation enabled by default, but it is not installed by default. In Fedora, too, bind and unbound both come with DNSSEC validation enabled, but then you have problems.
Ideally you want to use the network-provided name servers if possible, so you do not have to bypass them, but the validation is still there. As for the server side, and by server side I mean installing bind, or unbound just to do resolution, DNSSEC validation is enabled everywhere, every time. If you want to use it on a mobile machine, though, you have problems: you can connect to a network where all outgoing DNS communication to the internet is blocked, and then you are stuck unless you configure unbound correctly, either to tunnel the communication or to use the local name servers. All right, thank you for coming.