On time, we start with the next talk. I welcome Richard Hartmann. He has been involved in Debian for many years, and he recently became a Debian developer. He will talk about Gitify your life: web, blog, configs, data, and backups. Gitify everything. Richard Hartmann. Thank you. Thank you for coming, especially those who already attended all of those BoFs. A short thing about myself: as Gordon said, I'm Richard Hartmann. In my day job, I'm a backbone manager at Globalways. I'm involved with freenode and OFTC, and, oh, should I speak louder? I'm not sure. Test, test. Good, back there? Can you turn up the volume a little bit? Test, test. OK, perfect. Since about a week ago, I've been a Debian developer, yay, and I'm the author of vcsh. OK. So, raise of hands: who of you knows what Git is? Perfect. That was just a backup plan, but perfect, we can skip it. Let's move to the first tool, etckeeper. Some or maybe even most of this audience will have heard of it. It's a tool to store your /etc in pretty much every version control system you can think of. It's implemented in POSIX shell. It auto-commits everything in /etc at basically every opportunity. You may need to write excludes, for example for your network config when you use DHCP, but otherwise the auto-commit is really cool. It hooks into most of the important, maybe even all of the important, package management systems, so when you install new packages, even on SUSE or whatever, you can have it commit automatically, which is very nice. You can obviously also commit manually, if you for example change your X config. It supports, as I said, various backends. And it's quite nice for recovering from failures: for example, Axel used it to recover from Saturday's power outage, because some servers lost data, and with etckeeper he could just replay everything, which was rather nice. Then there's bup. bup is a backup tool based on the Git pack file format. It's written in Python. It's very, very fast, and it's very space efficient. The author of bup managed to reduce his own personal backup size from 120 gigabytes to 45 gigabytes just by migrating away from rsnapshot over to bup, which is quite good; that's a little more than a third of the original size. This happens because it has built-in deduplication, because obviously Git pack files deduplicate data. Every single point in time, every single backup point, can be mounted as a FUSE file system, independently of each other, so you can even compare different versions of what you have in your backups, which, again, is very nice. The one thing which is a real downside for most serious deployments: there is no way to delete data out of your archive, out of your backups, which, again, is a direct consequence of using Git pack files. There is a branch which supports deleting old data, but it is not in mainline, and it hasn't been merged for, I think, one or two years, so I'm not sure if it will ever happen. But at least in theory it exists.
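A minimal command-line sketch of the etckeeper and bup workflows just described; the package manager hooks are set up automatically on install, and the paths and backup names here are only examples:

    # etckeeper: put /etc under version control and exclude DHCP-managed files
    etckeeper init
    echo resolv.conf >> /etc/.gitignore
    etckeeper commit "after changing the X config"

    # bup: index a directory, save a deduplicated snapshot, browse old snapshots via FUSE
    bup init
    bup index /home/alice
    bup save -n home /home/alice
    mkdir -p /mnt/bup && bup fuse /mnt/bup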
Then, for your websites, your wikis, your whatever, there is ikiwiki. ikiwiki is a wiki compiler, as the name implies, and it converts various different files into HTML files. It's written in Perl. It supports various backends, again most of the ones you can possibly think of. Oh, and I can even slow down. Great. It's able to parse various markup languages, more on that on the next slide. There are several different ways to actually edit any kind of content within ikiwiki. It has templating support and CSS support; these are quite extensive, though they could be improved, but that's for another time. It acts as a wiki, as a CMS, as a blog, as a lot of different things. It automatically generates RSS and Atom feeds for every single page and every single sub-directory, so you can easily subscribe to topical content. If you're, for example, only interested in one part of a particular site, just subscribe to that part by RSS, and you don't have to check whether there are any updates, which is very convenient for keeping track of comments somewhere or something like that. And it supports OpenID, which means you don't have to go through all the trouble of having a user database or doing a lot of anti-spam measures, because it turns out OpenID is relatively well suited to just stopping spam bots. For some reason, maybe they just haven't picked it up yet, I don't know, but it's quite nice, because you don't have to do any actual work, people can still edit your content, and you can still trace back changes, at least to some extent. It supports various markup languages. The best one, well, debatable, but in my opinion it's Markdown. It also supports WikiText, reStructuredText, Textile, plain HTML, and there are ikiwiki-specific extensions, for example proper wiki links, which are a lot more powerful than the normal linking style in Markdown, which kind of sucks, but whatever. It also supports directives, which basically tell ikiwiki to do special things with a page. For example, you can tag your blog pages, or you can generate pages which automatically pull in content from other pages, and so on. That's all done by directives. How does it work? You can edit the web page directly, on the web, if you want to. Then you will get a rebuild of the content, but only the part which changed. So if you change only one single file, it will only rebuild that one file. If you change, for example, the navigation, it will rebuild everything, because obviously it needs to rebuild everything. If it has to generate pages automatically, for example new index pages when you create a new sub-directory, or when comments start to appear on your site, it will automatically generate those Markdown files and commit them. Or you put files in your source directory yourself and just commit them and have them be part of your site, or you can auto-commit them if you want; that's possible as well. You can obviously pull all changes into your local repository if you want to look at them. Common uses would be a public wiki; private notes, for just keeping your personal to-do list or whatever; an actual blog, which a lot of people in this room probably have, and a lot of people on Planet Debian have their blogs in ikiwiki for good reason; and an actual CMS for company websites or such, which also tends to work quite well.
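As a small illustration of the directives and wiki links mentioned above, a blog in ikiwiki might consist of Markdown source files like these (page and tag names are made up):

    index.mdwn:
        [[!inline pages="posts/* and !posts/*/*" show="10"]]

    posts/hello.mdwn:
        [[!tag debian]]
        A plain wiki link to [[AnotherPage]] works anywhere in the text,
        and the page automatically ends up in the feeds of its directory.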
The three main ways to interact with ikiwiki are web-based text editing, which is quite useful for new users but quite boring in my opinion; a what-you-see-is-what-you-get editor, which is even more fancy for non-technical users; and plain old CLI-based editing, where you just edit files, commit them back into the repository, push, and everything gets rebuilt automatically. That last one is, in my opinion, the best way to interact with ikiwiki, because you're able to stay on the command line and simply push your stuff out onto the web without ever leaving the command line, which is pretty convenient. There are also some more advanced use cases. As I said, you can interface with the source files directly. For example, you can maintain your ikiwiki, your docs, and your source code in one single repository and have only part of your sub-directory structure rendered. git-annex does this: there is a doc directory which gets rendered to the website, but it's also part of the normal source directory, which means that everybody who checks out a copy of the repository has the complete forum, bug reports, to-do lists, user comments, everything on their local file system without having to leave, again, their command line. There's no media break, so it's just very convenient to have one single resource for everything regarding one single program. Another nice thing is that if you create different branches for preview or staging areas, you can even have workflows where some people are only allowed to create pages, other people then look over those pages, merge them back into master and push them to the actual website, which basically allows you to have content control, or real publishing workflows, if you have a need for them. Next up, git-annex, the beef. It's basically a tool to manage files with Git without checking those files into Git. That might sound counterintuitive. So what is git-annex? It's based on Git. It maintains the metadata about files, as in locations, file names and everything, within your Git repository, but it doesn't actually keep the file content in the Git repository. More on that later. This saves you a lot of time and space. You're still able to use any git-annex repository as a normal Git repository, which in turn means you can even have a mix: for example, all your README files are maintained by normal Git, so you get all the merging which Git does for you, and then you have, for example, your photographs or your videos for web publishing maintained in the annex, which means you don't have to have a copy of those files in each and every single location. A very nice thing about git-annex is that it's written with very low bandwidth and flaky connections in mind. Quite a lot of you will know that Joey lives basically in the middle of nowhere, which is a great way to be forced to write really efficient code which doesn't transfer a lot of data. It shows: it's really quick even if you have a really, really bad connection in some backwater or whatever, during holidays or during normal life. It's still able to transfer the data which you need to transfer, which is very, very nice. And there are various workflows; we'll see four of them in a few minutes. It's written in Haskell, so it's probably strongly typed and nobody can write patches for it. It uses rsync to actually transfer data, which means it doesn't try to reinvent any wheels; it really just builds on top of established, well-known, and well-debugged programs.
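A rough sketch of mixing plain Git and the annex in one repository, along the lines just described; the file and remote names are only examples:

    git init photos && cd photos
    git annex init "laptop"                     # give this clone a human-readable name
    git add README                              # small text files stay in regular Git
    git annex add 2013-08-11.raw                # large files go into the annex instead
    git commit -m "first photos"
    git annex copy --to server 2013-08-11.raw   # assumes a remote named "server" exists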
In indirect mode, which in my personal opinion is the better mode, what it does is move the actual file into a different location, namely .git/annex/objects. It then makes those files read-only so you cannot even accidentally delete them. Even if you rm -rf them, it will still tell you, no, I can't delete them, which is very secure; it might be inconvenient, but you can work with this. It replaces those files with symlinks of the same name, and those just point at the object. Whether there is an object behind this symlink or not basically determines whether you are able to access the data on this particular machine, in this particular repository. But you will definitely have the information about the name of the file, the theoretical location of the file, and the hash of the file in every single repository. There's also a direct mode, initially written mainly for Windows and Mac OS X, because Windows just doesn't support symlinks properly, and OS X, while supporting symlinks, apparently has a lot of developers who think it's a great idea to follow symlinks and display the actual target of the symlink instead of the symlink itself. So you get cryptic file names which are really hard to deal with, and obviously people who are used to GUI tools then only see really, really cryptic names, so that's no good. So there's direct mode, which doesn't do the symlink stuff. It basically rewrites the files on the fly: Git still thinks it's managing symlinks, but git-annex just pulls them out from under Git and pushes in the actual content. You keep on nodding, so I'm probably doing good. And if you want, you can always delete old data, or you can keep it. Or you can, for example, do what I'm doing: have one or two machines which slurp up all your data and keep an everlasting archive of everything you ever put into your annexes, and then other machines, for example laptops with smaller SSDs, just have the data which you're actually interested in at the moment. How does this work in the background? Each repository has a UUID. It also has a name, which makes it easier for you to actually interact with the repository, but in the back end it's just a UUID, for obvious reasons, because that just makes distributed generation and synchronization easy, period. It also keeps tracking information in a special branch called git-annex. This branch ensures that every single repository has full and complete information about all files: the location of all files, the last known status of those files, whether they have been added to some repository and then deleted, or whether they have been there forever. So in every single repository, you can just look up the status of this file, or of all files, in all other repositories of yours, which is convenient. The tracking information is very simple, and it's designed to be merged. It's a little bit more complicated than a plain union merge, but basically it just has a timestamp, tells you whether the file has been there or not, and has the UUID of the repository. And from this information, along with the timestamps, you can simply reproduce the whole life cycle of your files through your whole cloud of git-annex repositories for this one particular annex.
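What indirect mode looks like on disk, sketched with a shortened hash and abridged output:

    $ git annex add holiday.jpg
    $ ls -l holiday.jpg
    holiday.jpg -> .git/annex/objects/.../SHA256E-s2048--9f86d0....jpg
    $ git annex whereis holiday.jpg
    whereis holiday.jpg (2 copies)
        1a2b... -- laptop [here]
        5e6f... -- archive disk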
One really nice thing which you can do, if you're on the command line, which again, in my opinion, is the better mode, is simply run git annex sync, which basically does a git annex add, then a commit, then it merges from the other repositories into your own master and into your own git-annex branch, then it merges the log files, that's where the git-annex branch comes in, and then it pushes to all other known repositories. So it's basically a one-shot command to synchronize all the metadata about all the files with all the other repositories, and it takes no time at all, given a network connection. Data integrity is something which is very important for all of these tools, but git-annex is really designed with data integrity in mind. By default, it uses SHA-256 plus the file extension to store the objects, so it renames the file to its own SHA sum, which allows you to always verify the data, even without git-annex. You're able to say, by means of globbing, which files or which directories or which types of files should have how many copies in different repositories. So, for example, what I do: all my raw files, all the raw photographs, are in at least three different locations; all the JPEGs only in two, because JPEGs can be regenerated, raws cannot. All remotes and all special remotes can always be verified. With special remotes this may take quite some bandwidth; with actual normal git-annex remotes, you run the verification locally and just report back the result, which obviously saves a lot of bandwidth and, in turn, time. Verification obviously takes the required number of copies into account, so if you are supposed to have three different copies in your whole repository cloud and you only have two, it'll complain. It will tell you: yes, the checksum is correct, but you don't have enough copies, please do something about it. And even if you were to shoot Joey right now and delete all copies of git-annex, you would still be able to get all your data out, because what it boils down to, in indirect mode, is just symlinks to objects. These objects have their own checksum as the actual file name, so you'll even be able to verify, without git-annex, just by means of a little bit of shell scripting, that all your files are correct, that you don't have any bit flips or anything on your local disk. Direct mode doesn't really need a recovery scheme because the actual file is just in place of the symlink. On the other hand, you then still need to look at the git-annex branch to determine the actual checksums, which you wouldn't have to do in indirect mode.
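The per-type copy requirements and the verification pass described above might look roughly like this; the globs and numbers mirror the raw/JPEG example:

    # .gitattributes: require more copies of raw files than of JPEGs
    *.raw  annex.numcopies=3
    *.jpg  annex.numcopies=2

    git annex sync    # commit, merge and push the metadata to all known remotes
    git annex fsck    # verify checksums and complain if copies are missing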
Then there are special remotes. What are special remotes? They are able to store data in non-git-annex remotes, because, let's face it, on most services where you could store data you aren't actually able to get a shell and execute commands. You can just push data to them and receive data back, but you cannot actually execute anything on that computer. That's what special remotes are for. All special remotes support encrypted data storage, so your data is simply GPG-encrypted on the way in and out, which means that the remote services can only see the file name, but they cannot see anything else about the content of the files; obviously you don't want to trust Amazon or anyone else to store your plain-text data, that would just be stupid. There's a hook system which allows you to write a lot of new special remotes, and you'll see a quite extensive list of those in a second. The normal built-in special remotes, which are supported by git-annex out of the box and actually implemented in Haskell, are Amazon Glacier, Amazon S3, a normal directory on your system, rsync, WebDAV, HTTP or FTP, and the hook system. There's a guy who wrote most of the hook-based ones, with which you can use archive.org, IMAP, Box.com, Google Drive, and so on; you can read the list yourself, but those are quite a lot of different special remotes. So if you already have storage with any of those services, just start pushing encrypted data to it if you want to, and you're basically done. There was a fundraising project for the git-annex assistant last year, and I think this year's just ended, didn't it? Yeah. So pretty much exactly one year ago, Joey started to raise funds by means of Kickstarter to just focus on writing the git-annex assistant for a few months. He got so much funding that he could do it for a whole year, and he just restarted the whole thing with his own fundraising campaign, without the overhead of Kickstarter, and he got another full year, yay. Are you still accepting funds? OK, so if you use it, at least consider donating, because honestly you can't write patches for it anyway, because it's in Haskell, so that's the other means of actually contributing. The git-annex assistant boils down to a daemon which runs in the background and keeps track of all your files, of newly added files. It then starts transferring those files, if configured to do so, to other people or to other repositories. This is all managed by means of a web GUI, which in turn means that it's really, well, not easy, but easier, to port to, for example, Windows or Android, which both work to some extent; not fully, but they're useful, or usable, more or less. On Android it actually works quite well; I couldn't test it on Windows. And it also makes it accessible for non-technical users. So, for example, if you want to share some of your photographs with your parents or with friends, or if you want to share videos with other people, you just put them into one of those repositories, and even those non-technical people just magically see stuff appear in their own repository and can pull the data if they want to. Or, if you configure it to do so, it will even transfer all the data automatically, which is, yeah, mom-compatible. That's the short version. It supports content notification, but not content transfer, by means of XMPP or Jabber, which used to work quite well with Google Talk. I think it's... oh, it still works, OK, at least at the moment. We'll see when they just cut the cord and replace it with Google Plus. At least at the moment it still works, so if you have a Google account, you can simply transfer the metadata about your data. You cannot actually transfer the files themselves through Jabber, but that's probably something which will happen within the next year. There are quite a lot of rule sets for content distribution. So, for example, you can say: put all raw files into this archive and all JPEGs onto my laptop, or whatever. Or: while I still have more than 500 gigabytes free on this disk, please pull data in, and as soon as I only have 20 left, stop pulling data into this one repository, which obviously is quite convenient. As I said, there's an Android and a Windows port.
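Setting up one of the encrypted special remotes described a moment ago typically looks like the sketch below; the remote name, credentials and GPG key are placeholders:

    # an encrypted Amazon S3 special remote; credentials come from the environment
    export AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=...
    git annex initremote cloud type=S3 encryption=hybrid keyid=0xDEADBEEF
    git annex copy --to cloud big-video.mkv
    git annex drop big-video.mkv    # safe: a verified copy now exists on "cloud"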
And now on to use cases. First use case: the archivist. What the archivist does is basically just collect data, either to look at it eventually or just to collect it. If you have this use case, what you probably want is offline disks, to store at your mom's, to put into a drawer, or simply because you don't have enough SATA ports in your computer, because you just have so much data. So what you can do is push this data to connected machines, to disconnected drives, or to some web service, and just store it. But normally you would have the problem of keeping track of where your data lives, whether it's still OK, whether it's still there, everything. With git-annex, you can automate all of this administrative side of archiving your stuff. And even if you have only one of those disks at hand, as long as they are proper remotes, you will have full information about all the data in your annex cloud up to this point. So even if you only pull out one random disk, you'll still have information about all the other disks on this one disk, which obviously is a nice thing. Media consumption: let's say you pull a video of this talk, or you get some slides, maybe also from this talk, or you get some podcasts. git-annex has become a native podcatcher quite recently, I think two or three weeks ago, which means you don't even need a separate podcatcher; you just tell git-annex, this is the URL of my RSS feed, and it will pull in all the content. Then you can synchronize all this data, for example, to your cell phone or to your tablet or whatever, and consume it on any of your devices. Maybe you end up with several copies of a particular podcast, because you didn't get around to listening to it on your computer, you didn't get around to listening to it on your cell phone, but then on your tablet you did listen to it. Now you have three copies of this file which you don't need anymore, because you listened to the content and don't care about it anymore. What you do is drop this content in one random repository, and this information, that you have dropped the actual content, not the metadata about the content, and don't need the content anymore, will slowly propagate to all other annexes, and if they have the data, they will simply also drop it. So you don't really have to keep track of those things; you can simply have this message propagate. Do you want to comment? Can someone give Joey a microphone? Just as a minor correction, it doesn't propagate that you've dropped the content, but you can move it around in ways that have exactly the effect that you described. I just didn't want people to get the wrong idea that if you accidentally removed it from one thing, it would vanish from everything. That's what happens. Yeah. But if you deliberately dropped the content and tell the annex... No, that's not how it works. We'll have to talk about it later. You looked at the slides, but... Oh, I'm sorry, I missed that one; he watched for everything which is by him. OK.
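The podcatcher and the drop workflow just mentioned, sketched with a made-up feed URL and file name:

    git annex importfeed http://example.org/podcast.rss   # fetch episodes into the annex
    git annex drop podcast/episode-42.mp3                 # give up the local copy once heard
    git annex sync                                        # spread the updated location info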
Next thing: if you're on the road. One use case which is probably quite common is taking pictures while on the road, during holidays. You take your pictures, you save them to your annex, and you're able to store them back to your server or wherever, if you want to. And even if, for example, one disk gets stolen and you lose part of your content, you will still at least have an overview of what content used to be in your annex. And if you then pull out your old SD cards and see, oh, that photo is still there, you can simply re-import it and it'll just magically reappear. What it also does: if you have a very tiny computer with you, you can, as soon as you're at an internet cafe, just sync up with your server or your storage or whatever and push the data out to your remotes. Which then means you will have two or three or five copies of the data, and git-annex keeps track of what is where for you, so you don't have to worry about copying stuff around. And then there is one personal use case for photographs. I have a very specific way of organizing my photographs. My wife disagrees vehemently; she likes to do her photo storage in a completely different way, and she doesn't care about raw files, and she doesn't care about all the documentation pictures of signposts or whatever, which I take just to remember which city we went through. So what she can do is simply delete the actual file, or more to the point the symlink for this file, and it will disappear from her own annex. She can then commit all this. Normally, if she synced the data back, I would also get the same layout, which I don't want, especially since she tends to rename everything a lot. But what I did is set up a rebasing branch on top of my normal git-annex repository. So what she gets is her own view of the whole data, or of the part she cares about. When I add new content, she'll see the new content. She will rearrange the content however she pleases. But as it's a rebasing branch, all her changes will always be replayed on top of master. So she has her own view, and I don't even notice her view. But even if she uses one of the other computers, she will have the same view which she herself has. So basically she has her own view of all of the data. This is very convenient for keeping the peace at home.
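The rebasing-branch trick could be set up roughly like this; the branch name and paths are invented for the example:

    git checkout -b her-view master    # a personal view of the shared photo annex
    git rm photos/signposts/*.jpg      # drop the symlinks of unwanted files, rename at will
    git commit -m "my own layout"
    # later, after new content has landed on master:
    git checkout her-view
    git rebase master                  # replay the personal layout on top of the new content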
Next topic: vcsh. Most of you probably have some sort of system where you have one Subversion or CVS or whatever repository somewhere in your home directory, you symlink from it into various places in your home directory, and it kind of keeps working, so you don't really throw it away. But to be honest, it sucks. Here's why, or here's why in a second. vcsh is implemented in POSIX shell, so it's very, very portable. It's based on Git, but it's not directly Git. The one thing which Git is not able to do is maintain several different working copies within one directory, which is a safety feature, more on that later. But this really sucks if you want to maintain your mplayer, your zsh, your whatever configuration in your home directory, which is the obvious and only real place where it makes sense to put your configuration. You don't want to put it into a dotfiles directory and then symlink back; you want to have it in your home directory as actual files. So vcsh uses fake bare Git repositories, again more on that on the next slide, and it's basically a wrapper around Git which makes Git do stuff which it normally wouldn't do. And it has, in my opinion, a sensible and usable hook system, which some of you will care about. With a normal Git repository, you have two really defining variables: you have the work tree, which is where your actual files live, and you have the Git dir, where the actual Git data lives. Normally, in a normal checkout, you just have your directory and .git under it. If you have a bare repository, you obviously don't have an actual checkout of your data; you have just the objects and the configuration stuff. That's what a bare repository boils down to. A fake bare Git repository, on the other hand, has both: it has a Git work tree and it has a Git dir, but those are detached from each other; they don't have to be closely tied together. It also sets core.bare to false, to actually tell Git: yes, this is a weird setup, but yes, you still have a work tree, even though you wouldn't really expect to have one. By default, vcsh puts your work tree into your home directory and the Git dir below ~/.config/vcsh/repo.d/ plus the name of the repository, which just puts it away and out of sight. It follows the XDG Base Directory Specification, so if you move that around, it will follow. Fake bare repositories are really, thank you, messy to set up, and it's very easy to get them wrong. This is also the reason why Git normally disallows doing this kind of stuff, because all of a sudden you have a lot of context dependency on when you do what. Just imagine you set GIT_WORK_TREE and run random commands: git add, that's kind of OK; git reset --hard HEAD, you will probably not be too happy; checking out an old version, that's also quite bad; and git clean -f, congratulations, you just killed your home directory. So it's really risky to run with these variables set, which is why I wrote vcsh: to just wrap around Git, hide all this complexity, and do quite a few sanity checks to make sure everything is set up correctly. Again, it allows you to have several repositories, and it also manages really the complete life cycle of all your repositories. It's very easy to create a new repository: you just init it, just as with Git, you add stuff, you commit it, you define a remote, and then you just start pushing to this remote. Simple. This looks like Git because it's very closely tied to Git, and it uses a lot of the power, or of the syntax, of Git, for obvious reasons. You can simply clone as you would with Git, you can simply list your files as you would with Git, you can rename a repository, which Git can't really do for you, but you don't have to. You can show the status of all your files, or just of all your repositories. You can pull all your repositories at once, you can push to all your repositories at once, with one single command. So if you're on the road, or if you just want to sync up a new machine, it's really quick and really easy. There are three modes of dealing with your repositories. The default mode is the quickest to type: you just say vcsh zsh commit, or whatever random Git command, but you cannot really run gitk this way. You can do that by using the run mode, which is the second mode, where you simply interject run; on the slide you can see that in one the run is missing and in the other the git is missing. So instead of vcsh zsh status, you say vcsh run zsh git status, or git commit, whatever, and it's exactly the same command, literally the same command once it arrives at the shell level, so to speak. Here you can also run gitk, because with this you set up the whole environment for one single command to run within the context of the changed environment variables. Or you can even enter the repository; then all the variables are set and you can just use normal Git commands as you normally would. This is the most powerful mode, but it's also the most likely to hurt you if you don't know what you're doing, so I recommend working your way down this list. You should have your shell prompt display information about whether you are in a vcsh repository or not, simply because otherwise you may forget that you entered one, and then, if you run those commands, there will be pain.
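A short vcsh session covering the three modes just described, using zsh as the example repository name:

    vcsh init zsh                      # create a fake bare repository for the zsh config
    vcsh zsh add ~/.zshrc              # default mode: vcsh <repo> <git subcommand>
    vcsh zsh commit -m "initial zsh config"
    vcsh zsh remote add origin git@example.org:zsh-config.git
    vcsh zsh push -u origin master
    vcsh run zsh gitk --all            # run mode: needed for commands like gitk
    vcsh enter zsh                     # enter mode: opens a subshell with the environment set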
An advanced use case, which will be possible quite soon, is to combine vcsh with git-annex to manage everything in your home directory which is not a configuration file. So you basically have two programs to sync everything in your home directory, without having to do any extra work. You can also use it to do really weird stuff. For example, you can back up the .git of a different repository with the help of vcsh, so you can just go in, change objects or anything, break stuff, and just replay whatever you did, to try and see how it breaks in various interesting ways. You can back up a working copy which is maintained by a different repository or a different system. You can even put a whole repository, including its .git, into a different Git repository, or you can even put checkouts of other version control systems into Git if you want to. Then there's mr. mr ties all of this together. Hopefully, by now you have about 20 new repositories, because you have your configuration, you have your ikiwiki, you have everything, so now you need something to synchronize all those repositories, because doing it by hand is just a lot of work. mr supports push, pull, and commit operations for all the major version control systems, allowing you to have one single interface to operate on all of them. It's quite trivial to write support for new systems; I think it took me about two hours to support vcsh natively, so that's really quick. If you want to try the stuff I told you about, in the links later there will be the possibility to just clone a sample repository for vcsh, which will set up a suggested mr directory layout, and you can just work from there. This is the, or at least my, suggested layout: you include everything in config.d, you maintain available.d by means of vcsh, so you simply sync all your content around between all the different computers, and then you simply soft-link from available.d to the actual config, which is basically what Apache does with sites-available and sites-enabled, or modules-available and modules-enabled. That is really, really powerful. Last thing: it's not Git-based, but zsh. It's a really powerful shell; you should consider using it. It has very good tab completion for all the tools which are listed here, better than bash. It has a right-hand prompt which will automatically disappear if it needs to, which is very convenient for displaying not important, but still useful, information. And it will, if you tell it to, automatically tell you that you are in a Git repository or a Subversion repository or whatever, by means of vcs_info, which also means you'll be told that at the moment you are within a vcsh repository and may kill your stuff if you do things wrong. It can mimic all the major shells, and there are just too many other reasons to list.
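A sketch of the suggested mr layout and the zsh prompt integration mentioned above; the paths follow the available.d/config.d convention, and the prompt snippet is a common vcs_info setup:

    # ~/.mrconfig: pull in only the enabled repositories
    [DEFAULT]
    include = cat ~/.config/mr/config.d/*

    # enable a repository, Apache sites-enabled style, then act on all of them at once
    ln -s ../available.d/zsh.vcsh ~/.config/mr/config.d/
    mr update

    # ~/.zshrc: show version control information in the right-hand prompt
    autoload -Uz vcs_info
    precmd() { vcs_info }
    setopt prompt_subst
    RPROMPT='${vcs_info_msg_0_}'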
So, final pitch. This is true. I tried it earlier; I can demo it, I still have five minutes left. It takes me less than five minutes to synchronize my complete, whole digital life right on the road. So if I'm at the airport and just want to update all my stuff and push out all my stuff, it takes me a few minutes, then I can hop onto the airplane and I know everything's fine, everything's up to date on my local machines, I can continue working, and I have a backup on my remote systems. These are the websites. The slides will be linked from Penta, so you are more than welcome to look at these links later. There are previous talks which you can also look at if you want to. That's pretty much it. And if you have any more questions afterwards, either catch me, or there's an IRC channel and there is a mailing list. OK. We can take a few questions. We still have a few minutes, but if there are more questions, ask Richie afterwards. And while we're doing this, just look here, because that's a complete sync of everything I have. OK. So, just to make sure that I understand correctly: with git-annex, the point is that the data is stored dispersed over different local destinations, so to speak, but the metadata about which versions exist is a complete Git history. So Git is able to tell me, well, this version at that destination was changed at that time, and so on and so on. Did I get this right? Git will be able to tell you about changes... OK, I don't have internet, sorry. Git will be able to tell you about changes in the file name or the directory structure. git-annex will be able to tell you about changes in the actual file content, or about the files being moved around. But as it's one single unit, more or less, yes. The answer is yes, not quite, but yes. Yes, it is almost; all the things you asked about are in Git, the previous location and all that stuff. OK. Yeah, but in a separate branch, which you should use git-annex to access, but you can do it by hand if you want to. OK, thanks. Yeah. Hi. Oh, yeah. I'm not familiar with tracking branches yet. You mentioned the workflow where your wife has a different view on the data than you. With this workflow, is it possible for your wife to upload photos that you will then have in your view as well, or is it a one-way street? Minor correction: those are not tracking branches; tracking branches track a different repository. What I meant were rebasing branches, which rebase on top of a different branch, which basically just keeps her patch set always on top of that branch, no matter where its head moves to. Yes, she would be able to, but if she wanted to do that, she would need to simply git checkout master, do whatever she wants to do, then git checkout her own branch again. So she's able to, but she would need to change into the master branch and then back. But yeah, that's OK. Microphone. She never pushes her private branch, it only lives on her machine? No, she does push it, but I don't display this view of the data; it's sort of private. Because otherwise she wouldn't be able to synchronize this view between different computers. I seem to have internet now, so I'll just let this run in the background. Any more questions? No more questions? Then we're done. Can I have one more minute for questions? OK. OK, so thanks, Richard Hartmann. We will continue.