Good afternoon, everyone. EuroBSDCon is very, very happy to have Stefan Sperling this afternoon talking about Game of Trees. Hopefully the ending will be better than Game of Thrones. Quick note: this is going to be a rather long talk, so there probably won't be time for a Q&A session. So if you need to ask Stefan some questions, don't hesitate to kidnap him after the talk. All right, have fun. Thanks, Stefan.

Thanks. Can you hear me? Okay, good. So this is... oops, that was one slide too far. All right. So Game of Trees tries to be a version control system that is appealing to OpenBSD developers in particular. Not just at the functional level, but also in the implementation, the coding style, and the design and architecture of the whole thing. The license is ISC, and for various reasons it's compatible with Git repositories. And we also try to have nice man pages that are easy to follow.

There's a couple of people who have had input into this project. It's not just me, even though it might appear that way; there are actually quite a few people I've had discussions with. And this extends beyond OpenBSD's community as well, because you'll see the former and the current LibGit2 maintainers on this slide. We worked together at a company, and so they helped me a lot.

Now, this is the overview of the general design. Currently there are two front-end applications. One is the command line tool got, and the other one is a graphical, curses-based browser called tog. And you have a library which contains most of the version control machinery you would expect: you can handle objects, cache them, diff objects, you have code to manage a work tree, and all that sort of stuff. And what's unique about the design in this case is the arrow that is labeled fork and exec.
Because when we read Git repositories, we don't assume that they are a trusted source of information. They are copied from somewhere on the Internet, and we are scared of being exploited, so it is useful to wrap these accesses in a different address space, with helper utilities that run in a separate process context. And that's how we read data from the repository. When we write to the repository, we just do it directly.

For Git users, this is probably surprising: we don't use the actual work tree that Git maintains next to its repository. Usually you'd have the work tree and the .git folder, but here we just ignore the files outside the .git folder, because we don't care about those. This has some advantages. For example, it allows you to really operate with both tools on the same objects, which is kind of neat. It also allows got to be a little different, in the sense that it really only requires you to keep one copy of the repository on local disk, and you can create as many work trees as you want from it, and you can also check out subtrees into those work trees. So you could have, say, /usr/src checked out from it, you could have a couple of kernels checked out, or some userland utilities like tmux, put them somewhere separately and work on them.

The work tree remembers the path to its repository, it remembers the branch it's on, so there's a sticky branch for the entire work tree, and it also remembers the commit that was checked out from this branch. And the commits are recorded per file. Usually you think of it as a global thing, but this tool is actually able to mix different commits per file, just like Git would be if you asked it to.

So this is the list of pledge promises that we're using currently. There is no network access yet, so you don't have inet and dns in here. Those would, of course, have to be added in case network support is implemented.
But the helpers that actually read data have very limited exposure to system calls, so they can really only do things like mmap, and they take file descriptors for reading data and file descriptors for outputting data. And so if an attacker gets in there, they can't really do a lot at the system call level.

We also use unveil, where we limit the application to the paths it actually needs. So, of course, the repository needs to be read, the work tree needs to be read, the /tmp directory is used. And also, if you import unversioned files into the repository, we need to read those, of course. When you type a commit message, unveil is not yet applied at that point, because unveil is supposed to extend from one process to another through exec and fork. This is actually not currently the case in the implementation, but the design of unveil is supposed to make that happen eventually. So we can't really use unveil before the editor is done, because the editor might read its own configuration files, load shared libraries, all sorts of stuff that we can't control. So for the commit command, you get unveil protection as soon as you're done writing your log message.

We also read the .gitconfig file, but only to get user and author information, username and email, that you might want to use as default values. You can override those separately. And an INI parser is used to read this file, and it runs in a separate process as well.

Since I talked to Matthieu about this last night, and he asked me what the helpers are, I realized I didn't have a slide for this. So this is the list. The object reader is basically just reading the header, so in case we only want to know the ID and the size or something like that, we use that. And then you have readers for the different object types, which parse the loose objects that are in the file system.
And then you have a reader for pack files, which objects can be extracted from, and you have a reader for the .gitconfig file.

I'm going to explain some Git basics. In case people know Git already, the following few slides might contain old information, but we're trying to keep this short. So Git has several object types. Blobs, at the bottom, store file content pretty much as is. The tree objects are essentially directory inodes in this virtual file system tree, and the commit objects point at one particular tree to create a snapshot of your project. And then you can chain those commit objects to create versions of your project as a chain of snapshots. You also have tag objects, which allow you to label commits as released versions. That's pretty much the whole simple data model.

On disk, objects are often stored in loose form, as it's called, when you create them, which means each object has a separate file on disk named after its ID. And this ID is derived from its content. So you have a type header, a size header, and then the data. This all is hashed, currently with SHA1, though Git might change that at some point. And after hashing, you also compress the data with zlib and write out the file. And that's basically how you create an object in the repository.

Now this could be very inefficient because, of course, you don't want to have thousands of objects lying around on disk, using up inodes and things. So Git invented a pack file format, which is pretty neat, actually. While many version control systems will usually deltify between versions of individual files, like CVS does, for example, this allows you to deltify across entire collections of files. And so, for example, if the license header is the same in all of them, the delta algorithm can see that, and it can basically build layers of deltas to construct files and be really space efficient when it's storing things.
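The loose-object scheme just described, a type header plus size, a SHA1 hash over the whole thing, and zlib compression, is small enough to sketch in a few lines. This isn't got's code, which is written in C; it's just an illustration of the on-disk format:

```python
import hashlib
import zlib

def loose_object(kind, data):
    """Compute a Git loose-object ID and its compressed on-disk bytes.

    The header is "<type> <size>\\0"; the SHA1 of header+data is the
    object ID, and the same bytes are zlib-compressed for storage.
    """
    header = f"{kind} {len(data)}".encode() + b"\0"
    store = header + data
    return hashlib.sha1(store).hexdigest(), zlib.compress(store)

oid, compressed = loose_object("blob", b"hello world\n")
# The object would live under .git/objects/<first 2 hex digits>/<rest>
print(oid)  # 3b18e512dba79e4c8300dd08aeb37f8e728b8dad
```

Running `echo "hello world" | git hash-object --stdin` prints the same ID, which is a handy way to convince yourself that the ID really is just a content hash.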
To do this, they added two object types in pack files, which only occur there. One is a delta object with an offset that tells you where in the pack file the other object is that you need to read to apply the delta on top of. And the other one is a delta which refers to its base by the SHA1 ID of the object. You also have a pack index, which is stored in a separate file, and that allows you to know where the objects are. Basically, it's just a list of IDs and offsets into the pack file.

On disk, it looks like this: you have the pack index and you have the pack file. And this is a whole source tree that I packed. It's a gig of storage, which is a lot more efficient than CVS. And the pack files are also used in Git for communication purposes. So when you send a collection of objects between servers, they will usually be packed in a pack file. You use the same space efficiency to limit the amount of network traffic that's being sent around.

Git also has a concept of references, which allows you to basically apply user-defined labels or names to particular objects. Generally, it's just a mapping from a string to a SHA1 ID, or from a string to another reference. And mostly you use those to identify your branches, because when you have a reference to a commit object, you can interpret that as the head of a branch. The names of references are strings, but they look like file system paths in a way, and they always start with refs/. And then you have several categories: you have refs/heads for the branch heads, you have refs/tags to find the tag objects, and refs/remotes contains multiple directories, one per remote repository that your repository knows about, containing copies of the history that exists in those repositories. And Game of Trees internally uses references for a couple of things and stores those in the refs/got namespace. When you use it on the command line, you don't have to type refs/heads/ and refs/tags/ and so on all the time.
You can just provide a name and it will be looked up in the given order there. And to disambiguate, you can just use the full name.

Okay, does anybody... does Henning still have questions? No? Okay. You good? You can go on.

So, the interface that was built for this isn't a very new invention. It's just a combination of things that I happen to like in version control systems that I use. And I use all of the ones that are on the slide, and I also use Fossil. And basically, because I've been working on Subversion for many years, I have to sort of understand what everyone else is doing, and so I have a fairly broad idea of how people have implemented all these operations that version control systems do. And so I thought about what I'd like to see when I'm working with a Git repository and started to just implement it bit by bit.

I also wanted to make sure that I only write code that's actually going to be used by OpenBSD developers, and I don't want to add features that they won't need. So this saves time, and it also keeps the interface simpler. I also made sure that I don't use long options, so you only have single-letter options. I also kept the number of options to a minimum, so you only have the options you absolutely need. And you end up with a list like this for local version control operations. You can maybe look over this and see if you find your favorite commands there or not. In particular, though, you should not assume that any of these do whatever they do in the version control systems you're already used to, because they produce, let's say, a different kind of consistency that doesn't exist elsewhere yet. Every tool has its own way of making things consistent, and this is just consistent in a different way.

This is a small example project that we're going to use. I just want to show you the interface for a bit, so we can walk through some of the operations. So you see it's a hello world project, a Makefile and a README.
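Going back to references for a moment, the short-name lookup just described can be sketched as a search over candidate reference names. The exact search order shown here is an assumption modeled on Git's behavior, and the helper name is made up for illustration:

```python
# Assumed search order for abbreviated reference names, modeled on Git:
# try the name as given, then under the common ref namespaces.
SEARCH_ORDER = ["{}", "refs/{}", "refs/heads/{}", "refs/tags/{}", "refs/remotes/{}"]

def resolve_ref(refs, name):
    """Return the object ID for a possibly abbreviated reference name."""
    for pattern in SEARCH_ORDER:
        candidate = pattern.format(name)
        if candidate in refs:
            target = refs[candidate]
            # A reference may point at another reference ("ref: ..." in Git).
            if target.startswith("ref: "):
                return resolve_ref(refs, target[len("ref: "):])
            return target
    return None

refs = {
    "HEAD": "ref: refs/heads/master",
    "refs/heads/master": "3490ab...",
    "refs/tags/v1.0": "77aa00...",
}
print(resolve_ref(refs, "master"))  # 3490ab...
print(resolve_ref(refs, "v1.0"))   # 77aa00...
```

Because every pattern is tried in a fixed order, "master" and "v1.0" resolve unambiguously, and a full name like refs/tags/v1.0 always wins over guessing.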
You would start off by creating a new repository. Of course, you can also use Git to clone one and operate on that, if that's possible. But for playing around with it, this is the easy way to start. So we create a repository and we import files from a temporary directory into it. And two things happen here. Well, the files get added, obviously; it writes objects in the way we've just seen. And it creates two things. First, the commit that you then use as the first commit in your entire line of history; this commit has no parents, it's a root commit. And also it creates a branch for you, because in Game of Trees, you cannot work on anything unless it's on a branch, because the work trees have to know which branch they're on. And it also creates a HEAD reference, but that's mostly for Git, so that Git knows what's going on. We only use the HEAD reference as a default reference if you don't specify one.

And so after an import, the repository looks like this. It has the HEAD reference, the master reference, and the objects in the tree as discussed before. And then, to actually do work on this, because we don't have a work tree yet, just this bare Git repository on disk, you do a checkout, and that creates a work tree. And this can be placed anywhere, and you can create as many as you want. And you can also check out the same branch into several work trees at the same time, which is something that Git makes hard to do.

Then you do some changes. You can view your changes with status. You can use the diff command to see the changes you've made. There's a lot of boilerplate text in this, and I have some diffs on the future slides where you can see all this context. But you can nicely see that it presents all the IDs involved in the diff, so you know what's been diffed. And to commit, you use the commit command, and you create commits with that.
Of course I'm writing the full name of each command here, but they also have short aliases to make them easier to type. These aliases are not flexible, though: you cannot define your own, because I don't want people to go off and redesign the interface to their liking. I want people to be able to communicate, and that only works if everyone speaks the same language, so no configurable aliases here.

So once we have a commit, this is basically the same diagram again, except you don't see the tree of the first commit, which still exists, but you see a new tree, you see how the commits are linked, and you see that the master reference has moved up.

You can also discard your local changes, of course, and for that, like in SVN, you use the revert command, which is destructive and really deletes things you've written, so you have to be careful when using it. And you can also use this to pick individual changes from files, which is something that SVN does not offer but Git does. So basically, for Git users, this is the equivalent of checkout -p. If you have two changes in a file, you can run revert -p, and you can individually select the changes that you want and say yes or no to each of them. So here we say no, because we like the defaults, and here we say yes, because we don't like syntax errors. And after that, the file looks like this, and we can commit it.

And this feature actually came about because JC told me he often, when fixing a bug, adds lots of debug printfs to the code, and eventually he fixes the bug in a small section of the file, and then he has to go through and remove all these debug printfs again. And there's a couple of ways of then committing only the change he actually wants, and one of these is to just revert all the changes you don't want. So you can go through this interactively; you don't have to open the file in an editor and search for the printfs, you just go through them.
Though because revert is destructive, you currently have to be careful what you do at this prompt, because the change will be lost. I'm thinking that maybe that's not such a great idea and that we should produce a backup in that case, but that's an implementation detail.

Another example of how things are done here is again modeled a bit on Subversion. You need a work tree which is at the latest head of the branch which contains the bad change, and you might already have local changes in there; we don't care about that. But this work tree will carry the changes that you're undoing, which basically means you apply the inverse diff of a commit that was already committed in history. The command for this is called backout, and you just give it the ID, which can be abbreviated here, and then it just merges the change, and you have the change in your working copy like this. Huh? Nobody will ever need that. I think I've backed out something just before.

Okay, let's talk about branches a bit. So we know that in OpenBSD we don't really use branches. What we do, to some extent, is we have stable branches for releases, and we specifically switch the purpose of the head into release mode ahead of a release, which could also be considered a form of a branch. It's just that we do it on the same set of files; we just declare that the purpose of this branch has now changed. But we still have stable branches, and also we have some vendor branches in the source tree to import things like LLVM and things like that. But for now, just keep in mind that we have references, and they point at commits, and that's what a branch is modeled as.

What you cannot do yet in Game of Trees is create merge commits, because I haven't yet found a place where I need this for my own workflow against the OpenBSD source tree.
It could eventually be added, but I would discourage this use and confine it to very few areas, such as vendor branches, because we want a linear history that's easy to understand even for external consumers, and having lots of branches in the project would just make progress harder for us. So currently there's no way of doing this. It would be easy to add; it wouldn't be a problem. There's already code that could do it in theory; it would just have to be added to the frontend.

To create a branch in Game of Trees you use the branch command. You give it a name; by default it uses the current branch you're on as the base. And then you can list your branches, and you see that another one has appeared. Now, all this did was really just create a reference. It didn't change your work tree; it just added this second reference to this commit.

And I just said that we don't really use branches in OpenBSD, so why are we creating a branch now? The problem is that in this data model, you cannot see changes other people have made before you copy them to your repository. So you need to store those changes somewhere before you can even see them, let alone merge them.
So you need a space, some reference, that says: this is what happened elsewhere, and this is what happened locally. Normally, in a networked version of this, you would ideally store it under refs/remotes somewhere, but it's really just a name. You just decide that some references represent external state and some represent your local branches, and that's that. So for this example, we can pretend that the master branch is the remote state.

To switch a work tree between branches, you use the update command with the -b flag. Normally, update would not allow you to switch branches; it would only move you up and down on the same branch. But with -b you can say: yes, I want to change the branch, please re-associate this work tree with a different one. And then nothing really happens, because the hashes of both branches are the same, but the metadata has been updated.

So now we commit two changes that are somewhat related to hiking, and we end up with a repository structure that looks like this. Hiking has moved up, and master is still at the old commit that we started at. So now someone else, somewhere in the world, makes another change. If you try this locally, you just get a second work tree and commit to the master branch; it's essentially the same situation. And you end up like this: now you have two branches, and they have diverged. They have a common ancestor commit, and you have two references that point at diverged history.

What's important here is that, because we consider the master branch to be an external branch, it's basically part of the official public history that the project has produced, and our hiking branch is local changes that only we see. Because we're not allowed to change commit IDs of things that are already declared part of official history upstream, we cannot change the IDs of things on the master branch. So 33AB and 3490 are fixed; we cannot change them. However, the other commits can change, and since the hashing in Git runs through the entire
chain of objects that reference each other, these hashes will change if we change the base of these commits. But we have to do that in order to keep history linear. So if we want to make history linear again, we have to take those two commits on the hiking branch and move them up to the current head of master. That's called rebasing, and this is basically how you merge your local changes with the incoming changes in this tool.

So, to rebase, again we need a work tree. This time the work tree comes from commit 33AB and it's on the master branch, because that's what we want to rebase hiking on top of. So basically we get the base that we want the history to be applied onto. And we're not allowed to have any local changes in this work tree, because that just avoids unnecessary merge conflicts. Then Game of Trees will internally switch this work tree to a temporary branch and apply the commits that you've made before on top of the new base. And once all that is done and has succeeded, it will take the temporary branch, and this one becomes the new hiking branch. The old hiking branch basically just sits there in the repository and can be garbage collected at some point; that's not yet implemented, so you could run git gc or something like that to really delete it, but it's not really important.

In the user interface, this looks like this. You switch back to the master branch and you say: rebase the hiking branch, please. And then you get conflicts, of course, as usual. Now, the conflicts look as you would expect, and in the status command you would see a C for this file, because it contains conflict markers. There's a couple of ways version control systems have done this. Some have special conflict metadata that says this file was in conflict, and you have to run a command to clear this flag so that you may commit. This tool simply looks for the conflict markers, and if they're still present, you cannot commit. Once you remove them, or even just change them, it
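The point about hashes chaining through parent commits is worth a tiny illustration. This toy model is not Git's actual commit format; it only shows why rebased commits necessarily get new IDs, while the base commits keep theirs:

```python
import hashlib

def commit_id(parent, tree, message):
    # Toy commit hash: the parent ID is part of the hashed content,
    # so changing a commit's base changes its ID, and every
    # descendant's ID changes along with it.
    payload = f"parent {parent}\ntree {tree}\n\n{message}".encode()
    return hashlib.sha1(payload).hexdigest()[:8]

root = commit_id(None, "t0", "import")
base = commit_id(root, "t1", "upstream change")

# Original hiking commits, based directly on the root commit:
h1 = commit_id(root, "t2", "hiking change 1")
h2 = commit_id(h1, "t3", "hiking change 2")

# After rebasing onto the new master head, the same changes get new IDs:
h1_rebased = commit_id(base, "t2", "hiking change 1")
h2_rebased = commit_id(h1_rebased, "t3", "hiking change 2")

print(h1 != h1_rebased and h2 != h2_rebased)  # True
```

The upstream commits (root and base in this sketch) keep their IDs, which is exactly why they are the part of history you are not allowed to rewrite.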
allows you to commit.

I wanted to briefly explain, since I think we still have enough time, how this merge is actually done: how do we get this conflict, and why do we get the output that we see? Because I've seen a lot of people actually using these tools but never really understanding how the merge conflicts come about, always complaining that merge conflicts happen, and it's all magic and not really clear how it's happening. And there's a simple way to communicate the idea of how diff3 actually works.

With diff3, you get three files as input. In our case, there's an original file, which is the one that was in the commit at the very bottom of the shared history of the branches, as we saw them before. So maybe I'll go back until you see this. The original comes from commit 3490; that's where we take one file from. The other two files are at the tips of each branch. We call these files A and B, and you can imagine here that every number represents a line of text in the files, just to make it easier to visualize how the algorithm works.

So what it does is compare the original file to the derived file A, and again the original file to the derived file B. It never compares A and B directly. And it marks the regions where each of these files differs from the original. If these regions don't overlap, it just produces the output below, basically taking the sections that the files wanted to change. In this case, the merge says it's all good. Whether this still compiles and actually works is a different question; that is not the responsibility of diff3. But diff3 produces a merged version that corresponds to this algorithm.

If you do this with different inputs, for instance A and B like this, you would end up marking overlapping sections, and in that case the algorithm can't decide what to do. It has two versions of changes that are not the same on either side, so it has to offer you both possibilities, and that's why you see these
conflict markers in the output.

What's perhaps a bit confusing is that you don't ever see the original file in this output. Subversion actually started to show it a few releases ago, so there we now produce 4-way output instead of 3-way output for actual diff3 results, and some users have responded to that very positively. I haven't yet decided whether Game of Trees should do the same, but it's an option. Anyway, if you want to know more about diff3, there's this fantastic paper, which also explains the algorithm with these O, A, B tables and has a lot more details.

Now we've fixed the conflict. Basically, the resolution is arbitrary; it depends on what the programmer really wants. And we say: please continue rebasing. Now it creates a new commit, and we see the old and new ID of this commit that's been rebased. But again we have a conflict, so we have to resolve it again, and this time we see that, oh yeah, we added a second line in our branch, and we also have to merge that. Generally you don't really get conflicts all the time like this, but because of these toy examples I'm using, they occur. I've managed wireless changes with like 140 commits on top of OpenBSD's master, and it's fine; it's not a burden.

So then we fix this up, and maybe we even add other context changes, whatever; the tool doesn't care. And we just say: okay, we're done with resolving this conflict. And now there are two new commits. Our work tree is on the hiking branch again, but this time it's the new version of the hiking branch, which is rebased. So now, again, we have a linear chain of commits in the repository, and everything is good.

You can look at those in detail with the log -p command, and you see the log messages, the date and author information, and the changes that were actually committed. What's nice about this tool is that, compared to CVS, it gives you actual changesets across several files and everything. And this really helps me: I've pretty much stopped following the
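The diff3 marking rule from a few slides back can be conveyed with a deliberately simplified sketch. Real diff3 aligns regions of unequal length; this toy version assumes the files are line-aligned, which is enough to show when you get a clean merge and when you get conflict markers:

```python
def merge3(orig, a, b):
    # Compare the original O against each derived file A and B, never
    # A against B directly. Non-overlapping changes are taken as-is;
    # where both sides changed the same line differently, emit both
    # versions between conflict markers.
    out = []
    for o_line, a_line, b_line in zip(orig, a, b):
        if a_line == b_line:          # both sides agree (changed or not)
            out.append(a_line)
        elif a_line == o_line:        # only B changed this line
            out.append(b_line)
        elif b_line == o_line:        # only A changed this line
            out.append(a_line)
        else:                         # both changed it: conflict
            out += ["<<<<<<< A", a_line, "=======", b_line, ">>>>>>> B"]
    return out

orig = ["1", "2", "3"]
a    = ["1", "2a", "3"]
b    = ["1", "2b", "3"]
print(merge3(orig, a, b))
# ['1', '<<<<<<< A', '2a', '=======', '2b', '>>>>>>> B', '3']
```

With changes on different lines, say A edits line 1 and B edits line 3, the regions don't overlap and the function returns a clean merge with no markers, which is exactly the "it's all good" case from the slides.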
commit mailing list; I just update my Git repository and browse it to see what people have been working on. It's pretty neat.

The browser... oh no, where are we here? No, this is a different one. Okay. So, people have requested features. I was basically done at that point: that feature set was all I really needed, apart from adding and removing files and things like that. One request was staging changes for commit, and contrary to Git, in this tool staging is entirely optional. You don't have to use it; you can just run commit and always commit everything that's in the work tree. But if you have staged changes, the commit, diff, and status commands change their behavior accordingly and only show you either staged or unstaged changes, depending on what you ask for, and commit will never allow you to commit unstaged changes while something is staged. You also cannot update files which have staged changes, and if you run into a problem there, where you're behind the head of the branch and you want to commit but can't, you have to actually unstage your changes, which means you merge them back into the work tree, and then update, and then stage again if you like.

Then there's a histedit command, which is like what Git calls interactive rebase. It allows you to reorder commits, you can merge them and edit log messages and all that kind of stuff. This should of course only be done with your local history, but it's a great way of preparing diffs for review, throwing out unneeded changes that you weren't really sure whether you needed and just committed, and all that sort of stuff. So those two features combined allow you to manage your diffs pretty well.

There's a browser, tog, which allows you to browse commits, view diffs, annotate files, and browse the tree of the repository. And I wrote this mostly because it's a really, really nice way of prototyping the needed functionality. So this actually started very early on: before got could read all the objects, I already had some
interface for this, and it allowed me to verify quickly that my code was doing the right thing. And I already had a user, which is mpi, who used this tool a lot to dig through history and learn about how the network stack works in old versions of BSD and things like that. He was using a tool called tig before that, which is also okay, but based on Git. This is basically the equivalent, but written in C, and it's actually faster, even though it's privsep'ed.

So how did this start, actually? The roots of this whole thing go back to EuroBSDCon in Paris, where a surprising number of people started talking about Git for some reason, in the hallway, at dinners, and so on. And when I was present, people looked at me and said: well, you know version control, so can you give input? So I ended up thinking, well, it seems like a couple of people are interested in this and I can help. And I invited Carlos, the former LibGit2 maintainer, who's at GitHub now, to a hackathon, where he showed up for an afternoon. We let everyone else hack on their ports stuff, but a couple of us went to the back room with Carlos to discuss how realistic such a project would be and what the pitfalls would be. People just threw in their opinions and their ideas, and we discussed them, and he basically vetted them against his own understanding of Git.

At this hackathon I started writing code to read references, which is very simple, and started reading objects. By the next hackathon I had done all the object parsing. It was not using privsep yet; it was just plain parsing code. Pretty soon after, I could diff objects. And I started on pack files because a lot of the tests I wrote for this tool were operating on its own Git repository initially, and then I did a fetch, or cloned it somewhere else, and all the tests started failing, because now things were packed and I didn't have code to read the packs. So that was kind of unfortunate, and I realized: oh,
yeah, I have to also add code to read packs.

Later, there was a command line tool, I started using pledge, I started to add fork and exec, and so on. So it took about a year to get to the point where it actually had a work tree that could be used to change files and edit them. And I started using this tool for my own OpenBSD development in February this year. It couldn't create commits yet, but it could manage local diffs, which was all I needed to get going with mailing diffs out for review. It also couldn't handle adds and deletes yet, but that was okay, because I knew how to work around those things, and I added that support pretty much after that. Then I added the ability to update individual paths, based on feedback from Theo, who says that his build process pretty much requires the ability to do that. And at the general hackathon this year, we taught it how to create commit objects, and then things started to move a lot faster. Once you have a basic toolset, and all the stuff is there that you need to build more things on top of, it just accelerates. I started with the cherrypick feature, and then I had file merges, and then I could just do the rebases on top of that, and everything just went really fast.

So in August this year we did the first release, and we've had a lot of bugfix releases since then, about four per month, or even one per week. Every couple of days I just went through what people had either committed or sent me, or what I had done myself, and if there were more than four or five changes that looked useful, I just pushed out a release. And it's in the ports tree; you can get it there, and it's always up to date.

So this is where we are. We have local versioning. It's useful for individual developers at this point, so it's good enough for all the regular versioning tasks. I only run Git now to do fetch and push; that's all I do with it. The next thing that we need to make this really
useful is to generate pack files, because that's also a prerequisite for network transfers. From my point of view this should be a separate admin utility that you use to do repository administration, consistency checks, and garbage collection. We also need import and export for an external format of this data. Git has what's called fast-import or fast-export streams (I never remember which one it is) that basically give you a plaintext representation of the data, which is important, because if all you have is a pack file, and you can't even decompress it anymore because of bit flips on the disk, then you're pretty much hosed and you can forget about your project. And I don't want people to rely on external clones for backup; that's not a viable strategy, I think. We need a way to fix broken repositories locally, just like Theo today is able to fix RCS files, and I don't want to take that away from people like him, who really have a tight ship to run and a lot of responsibility. This data is really precious and you just can't afford to lose it. So there I'm still looking for solutions; maybe these streams that Git is writing are not entirely suitable for it, maybe they are, I haven't really checked, but if they aren't, we can just make up our own.

There needs to be some kind of server, and one important aspect of this is that we don't want to use this sort of merge-meister model that Linux is using. You know how this works: they keep pushing changes up between repositories, and there's always a person who takes care of merging changes into their repository and pushes the collection up to the next one. The problem that we have with this is that we don't want Theo to end up having to merge everything, because he doesn't have the time. Also, that's not how the project is supposed to operate; we're supposed to operate as an equal collection of peers who have access to the entire tree, and people are allowed to change things anywhere they want if they have enough review or if they
follow the community process. Nobody stops me from going into the UTF-8 code, or wireless, or even relayd, or other things that have something to fix there. And we can't require a hackathon of 70 people, or 40 people on average, to keep fetching changes from a server every time someone makes one commit. So we have to have a way of basically doing rebasing of changes on a server. If you've used Gerrit before, or tools like this, you will pretty much know how this works, except we can skip all the review part of such tools that allow you to manage commits and merge them only once they're ready. We would do our review as before, in email, but a QA mechanism could exist on the server that allows people to just keep adding changes, where the server makes sure that they can actually be folded in without colliding. You would provide the hashes of your base blobs and the paths that you believe these blobs exist at, and if those assumptions are no longer true, then your commit is out of date. This is pretty much how SVN and CVS do it, so you can just emulate this with the QA mechanism in Git. You would have your changes in the main repository with different commit IDs, but it doesn't matter, because they come back eventually, round-trip to your own repository, and then you would just rebase your own commits on top, and some of your local changes will just disappear in the merge.

This should only use encrypted communication, for obvious reasons. It could also be used to support a mirroring infrastructure. And it would also be nice to have a protocol speaker here that's compatible with regular Git. I don't have details in my head for this yet, but I think it would be good, because then this could become an easy repository hosting solution for small setups that are secure, run maybe on your home firewall, and use OpenBSD with unveil and pledge and so on.

Of course we want to be able to transfer changes between repositories. For pulling, it would basically fetch changes and put them somewhere in a
reference so you can access them, and perhaps even automatically rebase a branch that you're on in your work tree; but if it can't do that because of conflicts, you would have to rebase manually. Pushing changes should ideally be supported by the server, as I just explained. Theo had this idea that he doesn't want to see the branches; he just wants to keep working as he does now. And I guess that's a valid use case, and it's also something that, I guess, not just he would want but other people would want, too. It's also something that Fossil really implements: Fossil by default pushes commits to more than one repository when you run a commit. So I thought, well, this must be possible somehow, right? We can probably do that. And there is a good way of doing this: if you only look at local changes that aren't committed yet, you can create a temporary commit object that the user doesn't see, use the same push mechanism as you would normally use for changes in history, and just replay it locally, and you're done. Otherwise it requires a fetch. And basically then you boil the command set down even further: you won't have to use rebase unless you have real conflicts, and you could just use the checkout, commit, update workflow that people are used to from tools like CVS and SVN.

This is of course something that would need to be added once all this other stuff is there, but in my vision it could be something where you say, oh, I want this branch to synchronize to the main server, and then the branch would operate like that; whereas if you have other branches where you say, now this one is local, or this one is not synced, it's just pulling from or pushing to this other server, then you wouldn't have this behavior. That's one possibility.

Another thing I'd like to have is a web front end that works pretty much like CVSweb. It could just be another front end alongside got and tog, and it would use some existing web technologies that we have. It should probably be written in C,
because that gives you easy access to the library that is already there, but I wouldn't be opposed to adding bindings or something like that, if people prefer to have an easier time writing this kind of stuff.

Okay, I'm just two minutes over. Is it 45 or 40 minutes? 45? All right, we have time for questions.

Yeah, we have time for a couple of questions. If you've got one, come up to the front.

Stefan, thanks for your talk, it was quite interesting. I'm curious if you see this as a potential replacement of CVS in base for general development.

Not in its current state. In its current state it's good enough for myself; I'm happy with that. It's good enough for people to try; it's just a package install away. I would guess that we'd need a couple more years for it to mature. And moving development of this tool into base was already considered, but there's a bit of a chicken-and-egg problem, especially because once there is a server, I would like to use that server with this project, and then having to convert back from CVS to Git, which could be done, but it's just cumbersome. So for now it's in a separate repository that I maintain.

Thanks. Speaking of your personal repository, is it already self-hosted?

Not yet, but the server is there now. I was waiting for a provider in Berlin to set up a server, and that's been done, and now I have to find time to actually set it up. But it's going to just use standard Git tooling that exists on Linux. If I had a server already I would use it, but I don't.

All right, thank you very much, Stefan. Thank you.
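The server-side staleness check described in the talk, where a client states which base blob hashes its change was made against and at which paths, can be sketched roughly as follows. This is a hypothetical illustration, not got's actual code: the function names and the data model are assumptions, though the blob hashing follows Git's real scheme (SHA-1 over a `blob <len>\0` header plus the content).

```python
import hashlib

def blob_hash(data: bytes) -> str:
    """Hash file content the way Git hashes blobs: SHA-1 over a header plus the data."""
    header = b"blob %d\0" % len(data)
    return hashlib.sha1(header + data).hexdigest()

def commit_is_current(claimed_bases: dict, repo_files: dict) -> bool:
    """Return True if every (path -> base blob hash) assumption still holds
    against the repository's current contents (path -> bytes)."""
    for path, base in claimed_bases.items():
        if path not in repo_files:
            return False  # the path moved or was deleted since the change was made
        if blob_hash(repo_files[path]) != base:
            return False  # someone else committed to this file first
    return True

# A tiny simulation: the server holds one file, a client bases a change on it.
repo = {"sys/net/if.c": b"old contents\n"}
assumptions = {"sys/net/if.c": blob_hash(repo["sys/net/if.c"])}
assert commit_is_current(assumptions, repo)      # clean: fold the change in
repo["sys/net/if.c"] = b"new contents\n"         # another commit lands first
assert not commit_is_current(assumptions, repo)  # now the change is out of date
```

This mirrors the up-to-date check CVS and SVN perform before accepting a commit: the change is only folded in if the files it touches have not moved on underneath it; otherwise the client must update and resubmit.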