Welcome to my talk about MirrorManager. There have been MirrorManager talks in the past; I think they were all done by Matt, the original author of MirrorManager, who wrote it many years ago. I even looked it up in Git so I can tell you when it first started. My idea behind this presentation is to give a status of what is currently available, what has changed in the last few years and what is currently running, and maybe to get a discussion going on what we can do to make it better, because it still has some problems.

Something about me: I have been a mirror admin for Red Hat Linux since 1998, so I am a long-time mirror admin, and that is also how I got into MirrorManager. I also maintain the MirrorManager instance for RPM Fusion, so if you download anything from there, it will hit one of my systems. I do all of this in my free time; in my day job I work on something different, not Python code and not Fedora code.

What I want to talk about: what MirrorManager is, the different terms and words and what they mean, what is currently running with MirrorManager 2, and how it is used by clients and by everybody who downloads updates. Then I will point out a few of the problems we currently see, and then I hope we can collect some ideas on how to improve it in the future.

MirrorManager is basically only a tool to redirect a user to the best possible mirror for that user's current IP address. And what is the best mirror? We define the best mirror as the closest mirror, because the probability is high that this might indeed be the best mirror. To find the closest mirror we try several matches: whether the mirror is on the same continent; whether it is on a research network, which might have better connectivity; whether it is in the same country; we try to match autonomous systems; and if the mirror admin added netblocks which he believes his mirror should be responsible for, then we will redirect users coming from those networks to that mirror.

Some of the features MirrorManager has: it supports partial mirrors, so if somebody only wants to mirror certain releases of Fedora or certain architectures, MirrorManager can handle this. And we support multiple categories, so 'Fedora Linux' and 'Fedora EPEL' are different categories, and we also have 'Fedora Archive', 'Fedora Secondary' and 'Fedora Other'. Those are the categories we currently provide. Atomic has disappeared, luckily, because it was really bad for MirrorManager's design; I don't know where it went, it is not mirrored because it has no mirror support. I wanted to mention it later: I think across three releases it was about 700,000 files, and MirrorManager crawled all of them each time, and this took forever. This was one of the points where it really broke the performance, where it really broke a lot of mirrors, because the crawler was not able to get through the content of a mirror anymore.

What MirrorManager has that the other mirror management solutions I have seen do not provide is private mirrors: an organization can say 'we want to provide a mirror for our users, but nobody on the outside should be able to access it'. If the mirror's users are identifiable by a host name or an IP, MirrorManager can redirect the users coming from those IPs to this private mirror, and nobody else. What is also unique is the self-service interface. I am not always sure it is a good idea, but it gives the mirror admins a lot of flexibility, because they can configure everything on their own. They don't have to write emails saying 'please add this netblock', 'please add this ASN in this country', 'we want to block this and that'; they can do all of this on their own, but they usually, sometimes, need some help to get it configured correctly. So this has advantages and disadvantages.
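To make the 'closest mirror' selection above concrete, here is a minimal sketch of the cascade as I understand it; this is not MirrorManager's actual code, the field names are made up, and the real precedence rules have more special cases:

```python
import ipaddress
from types import SimpleNamespace

def rank(client, mirror):
    """Smaller rank means 'closer'; ties are broken randomly in practice."""
    # Netblocks the mirror admin registered for his mirror win outright.
    if any(client.ip in net for net in mirror.netblocks):
        return 0
    if client.asn == mirror.asn:                # same autonomous system
        return 1
    if client.country == mirror.country:        # same country
        return 2
    if client.internet2 and mirror.internet2:   # research networks
        return 3
    if client.continent == mirror.continent:    # same continent
        return 4
    return 5                                    # fall back: any mirror

def best_mirrors(client, mirrors):
    return sorted(mirrors, key=lambda m: rank(client, m))

# Hypothetical example: the registered netblock beats everything else.
client = SimpleNamespace(ip=ipaddress.ip_address("192.0.2.10"), asn=64496,
                         country="DE", continent="EU", internet2=False)
mirror = SimpleNamespace(netblocks=[ipaddress.ip_network("192.0.2.0/24")],
                         asn=64500, country="DE", continent="EU",
                         internet2=False)
assert rank(client, mirror) == 0
```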
I know that MirrorManager is used by the Fedora project and by RPM Fusion. There were a few attempts to get CentOS running on MirrorManager, but they didn't like it, or they didn't have time, I don't know; there were initial discussions, but it never went anywhere. And there is some code for Ubuntu, but I don't know what became of it.

There are other mirror management solutions. There is MirrorBits, which is used by VideoLAN, for example. MirrorBrain is something a lot of projects are actually using; it was developed by SUSE, and they use it themselves. Debian has, I think two years ago, introduced something similar; it works with HTTP redirects, I think. Then there are mirror scripts which only check a single file: they check whether the timestamp file is up to date, and if it is, they assume the whole mirror is up to date, not only the timestamp file. It seems every project has its own mirror management framework to somehow deal with this.

About MirrorManager's history: looking at the Git log of the old code, it was started in January 2007. This is sometimes important to remember when working with the code base, even now that it has been rewritten into something newer: it makes assumptions about mirror sizes and about the time required to scan a mirror, because everything was small when it was initially written. Fedora seems to have used it since May 2007, when it was still pretty new; at least that is what I get from the Git log of the Puppet repository. So it has been running for almost as long as it has existed in the Fedora infrastructure.

The original MirrorManager code base was TurboGears 1.x based, and porting it to RHEL 7, which was the targeted platform for the Fedora infrastructure, required a rewrite of the code base in any case, to something newer. [Audience:] Pierre ported MirrorManager to Flask in 2014; I think that work had started even before that. [Answer:] Yes. So he ported it to Flask, and in spring 2015 it was actually moved onto the production systems of the Fedora infrastructure. And just last month I also moved part of RPM Fusion's MirrorManager infrastructure to MirrorManager 2.

The changes for MirrorManager 2: it is now Flask and SQLAlchemy based, and we made a few changes to the behavior. The crawler used to crawl all the mirrors every time, even if they took forever to crawl. We now disable mirrors which have four consecutive crawl failures, and we set a maximum time of three hours to crawl one mirror. So if a mirror was not reachable for about 48 hours, we completely disable it, to not waste our resources on scanning slow mirrors. If it is automatically disabled, the admin of that mirror can re-enable it any time he wants to. This disabled a lot of broken mirrors which were still marked active but were not updated anymore. So this was again something which helped us reduce the crawl times across all of the mirrors.
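As a small sketch, this is roughly the auto-disable rule just described; the field names are assumptions, and the real crawler records a disable reason and more state:

```python
MAX_FAILURES = 4           # consecutive failed crawls before disabling
CRAWL_TIMEOUT = 3 * 3600   # seconds; a single crawl is aborted after this

def record_crawl_result(host, succeeded):
    if succeeded:
        host.crawl_failures = 0
        return
    host.crawl_failures += 1
    # With roughly two crawl runs per day, four consecutive failures means
    # the mirror was unreachable for about 48 hours before we give up.
    if host.crawl_failures >= MAX_FAILURES:
        host.user_active = False  # the mirror admin can re-enable any time
```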
But sometimes it seems that the limits we have set are a bit too strict, because sometimes mirrors get disabled too fast. So this might be a value which needs to be re-evaluated.

The nice thing about MirrorManager 2 was that the systems which hand out the mirror lists and the metalinks use the same data format as MirrorManager 1, so we were able to update the mirrorlist servers independently of the backend infrastructure. I will give an overview of how it is all connected on the next slide.

We also recently disabled and removed all FTP URLs from the database. The reason behind this was that FTP is usually difficult for some clients and was often the cause of problems with a certain mirror. The probability that a user would get an FTP URL was already pretty low, because if a mirror provided both HTTP and FTP, we would usually always hand out the HTTP URL. But to make sure the remaining FTP servers are not given out to clients, we removed all of them from the database, and we even blocked adding new FTP URLs to the database.

What is also new: we added support for specifying that someone only wants HTTPS mirrors. So if you edit your repo files manually, which you shouldn't, you can say that you only want HTTPS mirrors. This is also something which we would expect to be handled by DNF directly, so that you can say 'I prefer HTTPS mirrors'.

MirrorManager does a lot of scanning of its master mirror and of all the mirrors, and we tried to get more intelligent by doing those scans not cron-based anymore but message-based: right now we get a message that a new repository has been generated, or we get a message that the data has been synced out to the master mirror, and only then do we scan the master mirror for changes.

This is the current architecture of MirrorManager. We have one central backend system which handles all of the data. It generates the data which is handed out by the mirrorlist servers. The mirrorlist servers are the systems the clients are actually talking to: if you run DNF or yum, it is always only talking to those mirrorlist systems, never to the other systems which also exist. The backend system generates the data, the 'pkl generation' (a pickle file), which is pushed out to the mirrorlist servers each hour. The data includes which mirrors are available and up to date, in which country, in which network, in which ASN, and whether they are on Internet2 or research networks. This data is pushed out to the mirrorlist servers, they are restarted, and from then on they send the new data out to the clients.

The front end, of which there are currently three instances running, is the self-service interface for the admins of the mirrors, where they can configure the mirror, the paths, the URLs and everything else. It is also the system where we provide some statistics about what is being downloaded.
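A rough sketch of that hourly handoff, assuming a simple dict layout; the real pickle contains more lookup tables than shown here:

```python
import pickle

def export_mirrorlist_cache(path, mirrors, netblocks, asn_map):
    data = {
        "mirrors": mirrors,      # per-mirror URLs, country, up-to-date repos
        "netblocks": netblocks,  # admin-registered client networks
        "asn_map": asn_map,      # ASN -> preferred mirrors
    }
    with open(path, "wb") as f:
        pickle.dump(data, f)

def load_mirrorlist_cache(path):
    # The mirrorlist servers answer every dnf/yum request from this copy;
    # they never talk to the backend database directly.
    with open(path, "rb") as f:
        return pickle.load(f)
```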
And the crawlers are the systems which crawl all the existing mirrors: they connect to each mirror and check whether it has up-to-date content. The crawlers use either HTTP or rsync. We prefer rsync, because with HTTP it can take up to one, two, three hours, and then it hits the time limit and the mirror is disabled; depending on the configuration of the mirror, HTTP crawling can mean the crawler actually opens a network connection for each file. We are hoping that mirrors are configured with keep-alive, which reduces the number of HTTP connections, but we still do a lot of HTTP connections, which is really not very efficient. With rsync we see differences in scanning times of two hours with HTTP versus maybe five minutes with rsync, because we open one connection, get a complete list of all files, and can then process the file list locally without needing to communicate with the server anymore.
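To make that difference concrete, here is a minimal sketch, not the crawler's real code, of how one rsync connection yields the whole file list for local comparison; the module name is made up:

```python
import subprocess

def rsync_file_list(url):
    """One rsync connection returns every file; with HTTP the crawler may
    need one request per file instead."""
    listing = subprocess.run(
        ["rsync", "--no-motd", "--recursive", url],
        capture_output=True, text=True, check=True,
    ).stdout
    files = {}
    for line in listing.splitlines():
        # listing format: "-rw-r--r--  1,234,567 2016/08/02 12:34:56 path"
        perms, size, date, time, name = line.split(None, 4)
        if not perms.startswith("d"):           # skip directories
            files[name] = (int(size.replace(",", "")), date + " " + time)
    return files

# Hypothetical module; the result is then diffed against the database
# locally, without any further round-trips to the mirror:
# files = rsync_file_list("rsync://mirror.example.org/fedora/")
```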
One thing I tried to improve, which failed: right now there are two crawlers, and they are both in the Fedora infrastructure data center, which is in Phoenix, I think. So all the mirrors are crawled from Phoenix, which doesn't seem like the best idea if the mirror is on the other side of the world. So I had the idea that we could crawl mirrors from somewhere closer, and I implemented a filter in the crawler to crawl based on the continent. This all worked, but, for example when crawling with rsync, once the rsync file list was downloaded and the crawler tried to compare it with the database, it read the file information from the database, which is still in Phoenix, compared it with the values from rsync, and then updated the database, which is still in Phoenix. So it was actually slower: the scanning was faster, but the updating of the database was slower, so this didn't work out at all. I think that to make something like this work, the crawler needs to change completely, because right now it basically needs an open database connection for each file. In the optimal case it would have all the data from the database locally and could then push the updated information back in one upload. But this didn't work so far.

I mentioned statistics on the front end. What MirrorManager also does is draw diagrams of the propagation of the data. The green bar is how many hosts have the up-to-date repomd.xml file, the correct one; the next bars show hosts whose repomd.xml is one sync old, two syncs old, and even older. Each bar is four hours apart. We see that once the data has been synced to the master mirror, after about eight hours almost 50% of the mirrors have the current content, which I think is quite a lot of time, and then it usually stabilizes pretty fast at a high level. The ones up there, the red ones, are always out of date because they are just broken, so this would need manual interaction with the mirror admin: you need to talk to them and find out why it's broken. That was for Fedora 22; it is old.

This one is from yesterday, from Rawhide. Rawhide is always difficult because it changes so fast. We have the number of mirrors which carry Rawhide, and then the parts that change. But I saw yesterday that something is not working, because everything is red and it should be white and blue. I think the number of mirrors is correct, but why all the others are wrong I don't know right now. And this is from Fedora 24: we see longer cycles here between the updates and a much higher number of updated mirrors. This means right now we have about 100 Fedora 24 mirrors worldwide which, from MirrorManager's perspective, seem to be working correctly.

[Audience:] What takes so much time? What was the bottleneck with Rawhide? [Answer:] The sync starts when the green data appears here, and a mirror usually runs a cron job every day, every 12 hours, or every 4 hours. With every 4 hours we would only have a 4-hour lag, but Fedora tries to encourage a tiering system: we have a few mirrors which are called tier-1 mirrors, they have direct access to the master mirrors, and all other mirrors should sync from them. So if we have two mirrors in a chain, each syncing every 4 hours, we already have up to 8 hours difference until both are updated.

[Audience:] And it takes forever to crawl. [Answer:] Yeah, and it takes forever because it's a lot of data right now; the rsync alone takes, I don't know, 2 or 3 hours to scan the master mirror. The master mirror is basically not doing much more right now than keeping rsync clients happy, because everybody is trying to get the information from the master server, and it has to walk the file system every single time. The file system of the master mirrors is shared via NFS; it comes from a NetApp. So all it does is stats, all day long. And not only the clients are doing stats; we are also statting the same data ourselves, which I want to talk about later again.

These are the countries which are connecting to MirrorManager; this is also from yesterday or two days ago. These are all accesses which come in to MirrorManager, without any filtering, so if one IP does millions of accesses, they will all be counted here. We have countries, we have architectures, this is pretty clear, and these are the repositories, and the most active Fedora releases can be seen here.

I mentioned that when adding a mirror, the admin sometimes needs some help, and I wanted to show here that it can be quite complicated to get to the point where you can actually add the URL for your mirror. The first thing you have to do is create a site. Then, inside the site, you can create multiple hosts, and in each host you can create multiple categories. Hosts are basically your mirrors: if you have multiple mirrors, you can divide them there. The categories below that are 'Fedora Linux', 'Fedora Secondary' and so on, and under each category you can then add the URLs you are providing. This is the information the crawler and everything else uses to build the final data which is then pushed out to the mirrorlist servers.
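Sketched as plain classes, the registration hierarchy looks roughly like this; the real project uses SQLAlchemy models with many more fields:

```python
class Site:
    """One per organization; a site can be marked private."""
    def __init__(self, name, private=False):
        self.name, self.private, self.hosts = name, private, []

class Host:
    """One per actual mirror machine belonging to the site."""
    def __init__(self, name, private=False):
        self.name, self.private, self.categories = name, private, {}

# Hypothetical example: categories map to the URLs the mirror serves.
site = Site("example.org mirrors")
host = Host("mirror1.example.org")
host.categories["Fedora Linux"] = [
    "https://mirror1.example.org/fedora/",
    "rsync://mirror1.example.org/fedora/",
]
site.hosts.append(host)
```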
I did a count in the database to see what we currently have. We have 499 private mirror sites, these are sites which are marked private and not available for other users, and we have 383 public sites. A host, again, can be private or public. I never really understood why the site and the host can each be private and can be set differently; I guess if the site is private and the host is public, this doesn't work. In any case, we have 486 private hosts and 305 public hosts, and across all those hosts we have 1100 categories and 1400 URLs in our database: 929 HTTP URLs, 125 HTTPS and 362 rsync. This is what we currently have in our database. And as we heard, of those URLs, a bit over 100 are working Fedora 24 mirrors. I didn't have a look at EPEL; a lot of people only mirror EPEL, so it doesn't show up in those diagrams, but it shows up here, where you can see that it is accessed a lot through the mirrorlist and metalink servers.

So the front end which the user, or DNF, or yum, is using is the mirrorlist and the metalink; for Fedora it is mainly the metalink. The URL looks something like the second line: it is 'metalink' and then you can specify the repository, for example Rawhide or Fedora 24 or updates-released-f24, and then an architecture. Additional parameters can be specified: you can either specify what you want by repository and architecture, or you can provide the full path to a file, and then MirrorManager will tell you a mirror which has that file. For testing I often use 'country' or 'netblock', or, I didn't list it here, you can also use 'ip': you can directly specify the IP address for which MirrorManager should generate the metalink or mirror list. This is helpful if somebody reports a problem and you know their IP; then you can test it and see what the result looks like and whether it is actually broken or correct. Then there are 'version' and 'cc', these are from the CentOS port which was done at one point, and then 'protocol', if you want only specific protocols listed instead of all of them.

[Audience:] Can you show the difference between the mirror list and the metalink? [Answer:] Okay, this is the mirror list and this is the metalink. The mirror list is basically: you access the URL and you get a list of possible mirrors for your request, in this case two mirrors. You also see what MirrorManager understood as the repository, the architecture and the country. And this is a longer one, with 'global' as the country: with 'global' you get all available mirrors, on any protocol. It is much longer, but it is just a list of mirrors; each line is one mirror.

The metalink is different: it is an XML file, and it contains other interesting stuff. It contains information about the repomd.xml file, about timestamps, about checksums; it can contain up to three repomd.xml timestamps and checksums. Further down comes the list of possible mirrors, and DNF can then connect to one of those mirrors, download the repomd.xml file, and check whether the repomd.xml matches the timestamp and the hash. Only if this works does DNF accept the mirror as a possible candidate. So with the mirror list you have no verification at all, it just goes to one of the mirrors and downloads the file; with the metalink you have verification that the file provided by the mirror is actually the same one Fedora told you to download.

It provides up to three checksums to make sure that if a mirror is not updated fast enough, you still get a working update; I will talk about this later. The problem is that the mirrors are only scanned about twice a day, so a mirror can be up to date without us knowing it; we cannot scan that often. So we provide multiple repomd.xml checksums, so that if you connect to a mirror which still has the older data, you still get a working update. Which is not always the desirable situation, because you might miss some security updates.

[Audience:] Why don't you scan more often? [Answer:] It takes too long. [Audience:] Okay, so why not add something to DNF which notifies you of those failures? Then you would get nearly constant scanning for free. [Answer:] Oh, interesting point. [Audience:] DNF would pass the failures back to MirrorManager. [Answer:] Yeah, that's a nice idea, although you should probably also scan to make sure the reports are accurate. We already have multiple ways of getting informed, and I'm talking about this a bit more later, but yes, this is a good idea. Although I'm usually only looking at the MirrorManager side of things; if it moves into DNF, I stop, because it gets too complicated for me.
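Roughly, this is what the metalink verification amounts to; a simplified sketch, not DNF's real implementation:

```python
import hashlib
import urllib.request

def mirror_is_acceptable(mirror_repomd_url, known_sha256s):
    """known_sha256s: the up-to-three repomd.xml checksums the metalink
    carried; a slightly-behind mirror is therefore still accepted."""
    data = urllib.request.urlopen(mirror_repomd_url).read()
    return hashlib.sha256(data).hexdigest() in known_sha256s
```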
So, what's next: report_mirror is another tool which we provide. The idea behind report_mirror comes from the problem we have right now, that we cannot scan the mirrors fast enough. So we give the mirror admins a tool which they can run after their rsync run to tell us that they are up to date. They run rsync, which updates the mirror, then they run report_mirror, and it updates our database to say they are up to date, so that in the next generation of the data which is sent out to the mirrorlist servers, this mirror is marked as up to date. We actually need report_mirror for private mirrors, because otherwise we would have no way of getting information about their status: we do not scan private mirrors, we probably cannot, and we shouldn't.

One of the problems report_mirror has is that it only checks the directories; it doesn't check the content. This goes back to it being written in 2007, and at that time, forcing all mirror admins to run a tool which stats all of the files again, just after rsync had already statted all of the files, did not seem like a good idea; people would probably not have run it. So report_mirror right now checks that the directories are there, and if the directories are there, we assume that the mirror is actually up to date, and we record that in the database.

This leads to situations where, for example, a mirror didn't actually update because its rsync command is broken, but the admins run report_mirror, and all the directories are there because the last release was a month ago, so report_mirror tells us the mirror is up to date. It happens sometimes that mirrors are flapping: report_mirror says 'my mirror is up to date', so it is enabled, and then the crawler comes along and says 'this mirror is not up to date, I'm disabling it'. This also leads to situations where we get bug reports saying 'the mirror doesn't work', and we test it and say 'you shouldn't have been redirected to it', but it was enabled at that moment because of this time difference. So this is an existing problem.

[Audience:] You might have the motivation now: if you actually want everything, you want the file sizes and times reported back to you. But the point is, you can't trust the mirror just saying 'yep, I got it'. [Answer:] That is the situation: do we trust the mirrors or do we not? We trust them somehow, because they serve all the files, but this is also the point where we say: if they run report_mirror, it's good enough for us. [Audience:] Of course, if DNF was telling you that a mirror doesn't have a file, that would be interesting: instant feedback from the actual clients. [Audience:] So when you crawl, you don't do a statistical crawl, you do a full crawl? [Answer:] With HTTP we only crawl the 10 newest files in each directory; with rsync it is a full file list, which is why it still takes a while. I believe there are also comments in the code about this. [Audience:] This DNF feedback, I'd worry about the feedback itself: I can see somebody patching DNF themselves and sending bogus reports.
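The '10 newest files' HTTP shortcut could look roughly like this; a sketch with assumed inputs, not the crawler's actual code:

```python
import urllib.request

def newest_files_ok(base_url, newest, n=10):
    """newest: [(relative_path, expected_size), ...], newest first,
    taken from the database for one directory."""
    for path, expected_size in newest[:n]:
        req = urllib.request.Request(base_url + path, method="HEAD")
        with urllib.request.urlopen(req) as resp:
            if int(resp.headers["Content-Length"]) != expected_size:
                return False  # mirror is out of date for this directory
    return True
```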
[Partly inaudible discussion.] Yeah. But I'm always a little bit scared about the whole self-service thing, because you could probably bring down a lot if you just wanted to cause damage through the mirror management network somehow. [Audience:] The mirror needs a key in order to report, so the worst you could do is report bad data; if you're already a real mirror, you could disable yourself. [Answer:] Yeah, but you could create a lot of URLs and have them added to the database. [Audience:] But then the crawler would check them again. The real worst case is introducing a mirror that serves bad data to someone else. [Answer:] Right, and we noticed that; we had cases where people asked why the mirror list looked odd. Matt had a case like that with a provider, and in the end the admin actually showed up and said he was sorry about it. It didn't take much time, but it was a fair burden.

A problem I was recently struggling with was anycast. Somebody was trying to get anycast nodes added to MirrorManager. I told him that it would probably not work, but he still tried it. They had nodes in L.A., New York and Romania, and they added all of them to MirrorManager, and the crawler started crawling, and because of the routing it always ended up crawling the Los Angeles node for all of the systems. So this doesn't work at all with our setup. It was nice of him to offer it, but I had to tell him to disable them. [Audience:] And at some point, isn't it up to them to make sure that all of their nodes are the same? [Answer:] Yeah, absolutely, that would be nice. [Audience:] Did he come to you, was it done in a proper way? [Answer:] Yes, it came in via the mirror list, and then I was discussing it with him; he answered my mail, and at the end I said he needs to disable all of them except the L.A. node, because we will always crawl that one anyway.

The metalink: we had problems with it too. As I said, we have up to three repomd.xml checksums in the metalink, and they are removed after some time. We have two rules which decide when a checksum is removed.
If it is older than three days, that is the easy rule, this one is understandable. The second rule is that the second most recent entry is removed when it is older than 'max propagation days', which is two; I forgot to write it on the slide. The nice thing is that this option is spelled wrongly in the code, so it is really easy to find: when I can't remember where it is, I remember, oh, it's misspelled, I just have to grep for the typo.

So this works most of the time. We had a few occasions where it didn't work, and I'm not entirely sure why. It could have been, and I don't know if this can actually happen, that a repomd.xml was available on the master mirror, then the sync or the repo generation didn't work and it was changed back to the old file; the new file had already been detected and its checksum was in the database, and after three days the old checksums were thrown out of the database, but the old file was still on the mirrors because it had been reverted somehow. So we have situations where we stumble over this, but it is not really clear how and when it happens. From the code it looks correct, from the idea it is correct, but this sometimes happens, and then we have to trigger a rescan of the master mirror.

The scan of the master mirror is done by the update-master-directory-list tool, or as we call it, UMDL. It is mentioned often, but nobody knows what UMDL means. In the Fedora infrastructure the master mirror is NFS-mounted on the backend machine, as I said previously. When we get a message from the Fedora message bus, we start crawling the master mirror over NFS. We do a lot of work again: we check the ctime of each directory to see if it has actually changed, and if it has changed, we go into that directory, read the repomd.xml, compute the checksums and write them to the database. It is not really efficient: at one point, when we were waiting for a new repo to be detected really quickly because something had gone wrong, I was looking at the tool with strace, and it does so much stuff, it's unbelievable. It has been rewritten, and the rewrite exists in the repository, but we didn't put it into production yet because we were afraid of breaking things. This is something we cannot really test in staging, because we never know if the data is... it will probably always seem correct, but we never know if it is actually correct.
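In outline, the UMDL scan described above does something like this; a sketch with assumed storage, not the real tool:

```python
import hashlib
import os

def scan_master(tree_root, known_ctimes, store_checksum):
    """known_ctimes: {directory: ctime from the last run};
    store_checksum: callback that writes to the database."""
    for dirpath, dirnames, filenames in os.walk(tree_root):
        ctime = os.stat(dirpath).st_ctime
        if known_ctimes.get(dirpath) == ctime:
            continue                       # directory unchanged, skip it
        known_ctimes[dirpath] = ctime
        if "repomd.xml" in filenames:
            with open(os.path.join(dirpath, "repomd.xml"), "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            store_checksum(dirpath, digest)  # kept alongside older entries
```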
The problem behind this is that we need to know the information about all the files we push out to the mirrors, to be able to redirect the users to the files correctly, and this information probably already exists in some other database. [Audience:] It's sitting in the file system and everything. [Answer:] Yes, and this is also the point where we need to be much more intelligent about how we scan the master mirrors, and probably also the existing mirrors.

[Audience:] I don't know how MirrorManager interfaces with the masters, except by, I guess, NFS-mounting it? [Answer:] Yes. [Audience:] You could just read the file lists, because those are already generated by another process. How long does it take to update the file list for the archive? That's the big one. Eleven minutes? You only have to do it when it changes, so eleven minutes to rescan the entire thing when the archive changes doesn't take that long. And we have all this in a database; it's in an ad-hoc format, but it's easy to read. [Answer:] Yes, and this was also the initial version from Pierre, the updated UMDL he was working on: it was using the full file lists or something like this.

[Audience:] The full file list is the important part. There is one file list per rsync module, so you can read the entire tree, or part of the entire tree, by reading one file, and the times are read out of it as well. [Partly inaudible.] I think we went from one hour down to minutes for updating a machine, which was quite a big win. [Audience:] What does the full file list contain? [Audience:] A checksum of the file list itself, and a list of the files that are in the tree with their timestamps.

[Audience:] So, I've been working on this whole thing. What it does is eliminate, what, half a billion stats a day on the server side, from every client that connects to one of the master mirrors and tries to see what to update. Basically, instead of rsync on the server walking the entire file tree every time, the idea is to just read a database, and then make the assumption that the database is accurate. The problem is, we don't really want to hack rsync. So instead, it is completely possible to separately generate that database and have the client simply download it and compare locally. And it is actually very easy. There are 10 or 11 terabytes now in the full archive. If the server is loaded, it takes about 12 hours just for rsync to walk that and produce the file list, before you even start transferring. If you instead just read the database to see what changed, it takes about four seconds.

And this is a shell script; it's very simple. Unfortunately, being a shell script can be a real pain if you want to run it on RHEL 5, although the zsh on RHEL 5 has no problems with it, as far as I can say. It needs nothing but rsync and zsh. It does everything up to and including report_mirror, except that I don't have an endpoint that I can call; there's a pull request for that in Git. It needs curl to do the check-in. Otherwise, it runs very quickly. When it finds, say, 5,000 changes, it runs rsync with big file lists, and it does require a stat, but only one for each changed file. And you can run it every 10 minutes. It works for downstream mirrors as well: if your tier-1 mirror pulls from the master and your tier-2 mirrors pull from that, as long as the tier-1 mirror doesn't change the files, everything still works. And if everybody pulls every 10 minutes, it doesn't save you from transferring anything, it's still about the same bandwidth, but it does save you the stats, above all on the master mirrors.
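The idea behind that tool, sketched in Python rather than shell; the file-list location and line format here are assumptions, not the tool's real ones:

```python
import os
import subprocess

def changed_files(filelist_path, local_root):
    """File-list lines (assumed): '<mtime> <size> <relative path>'.
    One stat per listed file, on the client only."""
    changed = []
    with open(filelist_path) as f:
        for line in f:
            mtime, size, relpath = line.rstrip("\n").split(" ", 2)
            try:
                st = os.stat(os.path.join(local_root, relpath))
            except FileNotFoundError:
                changed.append(relpath)
                continue
            if st.st_size != int(size) or int(st.st_mtime) != int(mtime):
                changed.append(relpath)
    return changed

def sync(master_url, local_root, changed):
    # rsync gets an explicit file list on stdin, so the server never has
    # to walk its multi-terabyte tree just to answer this client.
    subprocess.run(
        ["rsync", "--files-from=-", master_url, local_root],
        input="\n".join(changed), text=True, check=True,
    )
```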
Of course, you have to run it on your mirrors, which is the problem. But it runs successfully: I've got a tree of mirrors that I've run it on, with all 11 terabytes across three mirrors. Like I said, it takes about four seconds per pull when nothing has changed, and when it finds changes, the rsync itself takes whatever time it needs. And I do have a complete list of everything that is on the client at that point, including times, so I could send just the directories, like report_mirror would send them, or I could send the whole file list, but I don't think MirrorManager would use it; right now it only wants the directories. But at that point you have a mirror: once it completes successfully, it checks in, and it's done. There is only one script you have to run; that's everything. And it works pretty quickly on the client. It does have to download the file list every time, but the file list isn't that big, so the download is actually pretty quick. Now, it does still have to stat on the client once; it has to stat everything. But it only does that on the client, and the client's stats are the client's problem; the server side is everyone's problem. It seems to work; it just needs a really long test.

[Audience:] What's the goal, general mirror operation? Will you announce it to the mirror admins? Is it Fedora-specific? [Answer:] Right now it's very Fedora-specific, because it has intimate knowledge of what our rsync modules are, and it has this concept of a 'buffet' module that contains all of the other modules.