conversion or the security team patches the stable package. But basically, you only have to do the fix once. That's the idea. So how do we leverage the work that Debian maintainers and the security team put into Debian, and build on top of it for our web applications? Well, that's why I want to talk to you about this site that I work on called Libravatar.org. Interestingly enough, the first time I talked publicly about Libravatar was at DebConf10 in New York, so it's nice to be back here to talk about it again. Basically, that site is a federated version of Gravatar, if you know what that is. It delivers avatars, that is, profile photos, to third-party websites, and this one is AGPL as opposed to proprietary like Gravatar. This is what it looks like: if you follow a bug on the Debian BTS and you have an avatar, it shows up right next to your name and email address. It does that automatically if you have one. Here's another example: Mozilla Reps also uses it. The Reps upload their photos to Libravatar and they show up on the Reps site, and that website doesn't have to host any of those photos. The stack for Libravatar is pretty simple: Apache, Django, Python, Postgres, Gearman, a pretty standard setup for a Django application. The architecture is a little bit special, but not very much. Basically, there's a master server, which is the Django application; that's where users create an account and upload their photos. Then there's a bunch of mirrors that receive the photos from the master, and they are the ones actually serving them to third-party websites. The thing to point out here is that the mirrors are all basically static. They don't have Django running; all they do is serve static files from disk, and the entire logic of the serving part of the application is contained in Apache mod_rewrite rules. So it's very nice, clean, and maintainable. It also means that if you'd like to volunteer a little bit of disk space and bandwidth, not very much, you can run your own mirror; talk to me afterwards if you're interested. Basically, it's just an Apache config file and a few other things. For websites that want to use the service, all they have to do is follow the Gravatar protocol, because I basically wrote a replacement for Gravatar and decided to use the same protocol that they pioneered. It works like this: you take the email address that the user enters, you lowercase it, you hash it, and then you turn that into a URL by putting the base URL in front of it, so gravatar.com/avatar/ and then the hash, and that gives you an image. You stick that inside an image tag and there you go. We use the same thing for Libravatar, except that it's a federated alternative to Gravatar, so we look up the base URL in DNS. If you control a domain, say you're the owner of gmail.com, then you can add an SRV record that points to avatars.gmail.com (or avatars.debian.org would be a better example), and that's the avatar server that will be used for that domain. Websites do the DNS lookup and then use the result as the base URL. If you don't do that for your domain, if you don't publish that DNS record, then they fall back to the centralized fallback service. So it works for all domains, and people can decide to self-host for their own domain if they want to. Overall, it's a pretty simple web application, a pretty simple Django thing.
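To make that lookup concrete, here is a minimal sketch in Python. The hashing part follows the Gravatar convention just described (MD5 of the trimmed, lowercased email). The SRV record name `_avatars._tcp`, the use of the dnspython library, and the cdn.libravatar.org fallback host are assumptions for illustration; check the Libravatar API documentation for the exact details.

```python
import hashlib

import dns.resolver  # third-party "dnspython" package (an assumption; any DNS library works)


def avatar_url(email, size=80):
    # Gravatar-style hash: trim and lowercase the email, then MD5 it.
    email_hash = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()

    # Federated part: look for an SRV record on the email's domain.
    domain = email.split("@")[1]
    try:
        answers = dns.resolver.query("_avatars._tcp." + domain, "SRV")
        # Lowest priority wins; among equal priorities, higher weight is preferred.
        record = sorted(answers, key=lambda r: (r.priority, -r.weight))[0]
        host = record.target.to_text(omit_final_dot=True)
        base = "http://%s:%d/avatar" % (host, record.port)
    except Exception:
        # No SRV record published: fall back to the central service.
        base = "http://cdn.libravatar.org/avatar"

    return "%s/%s?s=%d" % (base, email_hash, size)


# The result goes straight into an image tag:
#   <img src="..."> where src = avatar_url("someone@example.com")
```

A full SRV implementation would also randomize among records of equal priority in proportion to their weights; the sort above is a simplification.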
Now, when we started the project, we decided to adopt a few rules to limit the maintenance burden we were going to have running this thing. The first one was: only use Python packages that are packaged for Debian. So if I want to use a Python library and it's not in Debian, well, I need to find another one that is in Debian, or package it myself and wait until the next release, something like that. Secondly, only use the version that's in the latest Debian stable release. That means that right now I don't use Django 1.6, which is the latest stable Django release; I use 1.4, and I'll upgrade to 1.6 whenever I upgrade to Jessie. That's what it means in practice. I also include backports in there, since backports became official. Another thing to note is that, in terms of deployment, everything is done using Debian packages. We start from an upstream Makefile that has a build rule that minifies the JavaScript and the CSS, compresses them with gzip so that it's pre-computed rather than done on the fly, and compiles the PO files for translations. It's got a test target that runs a bunch of Python linters, unit tests, and so on. And that creates a number of different packages. There's a package that you install on the main server, because that's the only one that uses Python, Django, and so on. And then there are other packages for the mirrors, which are really just the Apache config files and a few cron jobs. This is managed using reprepro, so we've got a separate private package repository. On the computers that I manage, a few of them but not all, because some mirrors are contributed by other people, I use Fabric to keep the machines up to date and run commands on them. Fabric is a neat little wrapper around SSH: basically, you can say "I want to run these commands on this class of machines." It's a very lightweight version of something like Salt or Ansible, although the comparison is not quite fair, because it does very little and just runs from your own machine. But if you don't want one of those heavyweight things, you can use something like that; it's quite nice. So, a Python script that runs shell commands, some on your system, some on others. I'll just repeat what Enrico said: a Python script that runs commands either on your machine or over SSH on a bunch of different machines. This is how I keep mirrors up to date. Basically, you do nothing other than keep the system up to date: as long as the people who run mirrors have my repository in their sources.list, they just need to keep their system up to date and it will automatically pull in the new versions of the stuff they need to run. I will get back to it. Yeah, the question was "do you use unattended-upgrades?", and I refuse to answer right now. Okay, so how did it go? Well, it turns out that because of my first rule, I'm limited in the choice of libraries I can use. For example, when I started this, I was running on Squeeze, I had chosen Gearman as a queuing system, and I needed a Python library to interface with it. Now, the really nice one is the one at the bottom: a very Pythonic library, what you'd expect from a Python library. The top one is Python bindings for the C library. It's a little bit rough, not as pleasant to use, but it was in Debian, so that's the one I picked. Now, the other one is actually in Wheezy, so I could switch.
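To give a flavour of that Fabric usage, here is a minimal sketch assuming the Fabric 1.x API that was current at the time; the host names and the task body are hypothetical, not the speaker's actual fabfile.

```python
# fabfile.py: run the same commands over SSH on a class of machines.
from fabric.api import env, sudo, task

# The "class of machines" to operate on (hypothetical hosts).
env.hosts = [
    "mirror1.example.org",
    "mirror2.example.org",
]


@task
def upgrade():
    """Pull in updated packages from the configured repositories."""
    sudo("apt-get update")
    sudo("apt-get --yes dist-upgrade")
```

With that in place, running `fab upgrade` executes those commands over SSH on every listed host.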
But the first one worked, so that was okay. In practice, it turns out it's not such a big problem. Most of the Python stuff you want to use is in Debian, to be honest. There are quite a lot of Python packages, and the most common libraries are likely to be in Debian already, or sometimes in backports. The other thing, which was pointed out to me, is that you can't actually use all the new features that are in Django 1.6, because you're running Django 1.4, for example. That's true. But I started on Django 1.0, then eventually 1.2 made it into Squeeze and I upgraded to that. Now I'm on 1.4, and it's great; there are lots of cool 1.4 features. I don't know yet what's in 1.6, so we'll see when I upgrade. So it's not a big problem for me; if I were eager to always use the newest features, then it might be. This was a little bit scarier: the upgrade from Squeeze to Wheezy. All of a sudden, I was upgrading across major versions of Apache (I think it was 2.0 to 2.2), and Postgres, and Gearman, and Django. Django was going from 1.2 to 1.4, skipping a full release. So it was scary because it was a big change, but it turned out to be actually pretty easy. Django has really good documentation on how to move from one version to the next, and everything else in the system pretty much just worked. The thing is, it's a big upgrade because you're changing everything at once, so it's a big deal, and you want to set aside a bit more time than you think you're going to need. But it's really not that bad in practice. That's what I found; maybe I'll change my mind when I upgrade to Jessie, but so far it's been pretty good. So overall, I would say that Libravatar is a really low-maintenance service, which is great because it's a side project, and I don't want it to turn into an unpaid full-time job. It's working out really well from a maintenance point of view; I have very little to do. Now, here are a couple of problems that I ran into. The first one is that I obviously optimized this service for sysadmins. It's really great to sysadmin, because everything is nicely packaged and gets updated automatically. But I did not optimize at all for developers. So it's a little bit tricky to get a development environment running, for example, because I take full advantage of the fact that I'm running on Debian. I have cron jobs all over the place, shell scripts that do all sorts of things, and I use different users to limit the potential damage: the things that run as root are really small scripts, and the different users limit privileges. So it's a pain to get all of this running. It's documented, but it's a bear for new developers, and it prevents drive-by contributions. What I've started to do is turn my instructions into a Vagrant setup that new contributors will be able to run. Asheesh raves about Vagrant, and he's convinced me that that should fix the problem. Basically, when you optimize for sysadmins, you get something that's quite hard for developers, and you have to think about something else to make it easy for people to contribute to your project. The other problem I ran into is that if you depend on something from the system, like a system package, then you're exposed to its bugs.
For example, jQuery, which is libjs-jquery, had a bug where someone discovered, I think it was between Squeeze and Wheezy, that Debian was shipping a minified copy of jQuery that was built by upstream, without rebuilding it ourselves. That's of course an RC bug, because we're not building from source and who knows what could be in there. So it was removed before Wheezy, and in the Wheezy version of that package there are just two copies of the unminified source file. That means all of my users are getting the full version of jQuery, which is not ideal, but because I'm relying on that system package as a dependency, I have no way of minifying it myself: the minification happens in the build step for my package, and jQuery is pulled in at install time, so I can't actually touch it. The way to fix it would be either to do a backport where I revert that change and stick it in my own repository, or to actually fix the bug in Debian and get it pushed out in time. This is fixed in the jQuery package now, but it's an example of how you can be affected by a bug in Debian. So you want to keep an eye on your dependencies in Debian, to make sure you're not going to run into something major. Now, this is a slide that I made for Zack. Unattended-upgrades, for those who don't know, is basically a package that runs apt-get update and apt-get upgrade for you all the time. Some people may think that's not a good thing, that you should actually log into your box to keep it up to date. But the reality is that if you have lots of servers and not much time to maintain them, it's much better to run something like this than to not upgrade and be running a bunch of exploitable shit all the time. So unattended-upgrades is great, but the thing is, Django has a habit of sometimes shipping security fixes that actually remove features, because there's no way to make them secure. You run into this, right? So you can't just automatically install the latest package that Luke pushed, because it might actually break your application. You have to read the changelog, or go look at the advisories that Django released, and make sure you're not using one of those features. Usually they don't remove the whole feature; they'll disable part of it, or something like that. Yes, hopefully it will make it into the NEWS file for the package. But basically, you can't just blindly apply updates for Django; you have to test them before they go live. Did someone think about using Breaks? I don't know. So, can we add that to Breaks, to sort of indicate that something bad might happen? It's kind of tricky, because the feature is exploitable on your server, so it's broken in a different way; it's not clear what the right answer is. Well, actually, the right answer for me is that I use apticron instead of unattended-upgrades. Whenever it detects that a package needs to be updated and hasn't been, it emails you, every day, until you actually fix it. And that works quite well: at least you get a notification that you need to go in, update the package, test it, and then be happy. Another thing that was pointed out to me is that security updates are not always timely in Debian. Now, that's not to criticize Luke or anybody else who maintains one of my dependencies.
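For what it's worth, there is a middle ground between the two tools: unattended-upgrades can be told to skip specific packages, so everything else stays automatic while the fragile bits wait for manual testing. This is only a sketch of that idea, not what the speaker describes doing; the file path is the conventional one, and at the time the option was called Unattended-Upgrade::Package-Blacklist, but treat the exact names as assumptions to verify against your version of the package.

```
// /etc/apt/apt.conf.d/50unattended-upgrades (excerpt, hypothetical)
// Apply security updates automatically, but hold back Django so it can be
// upgraded by hand after reading the advisory and the changelog.
Unattended-Upgrade::Package-Blacklist {
    "python-django";
};
```

You would still want something like apticron alongside this, so the held-back package does not silently stay vulnerable.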
But sometimes there's lots of vulnerable stuff in the archive, and it takes time to go around and fix it all. So it can be a couple of days, and sometimes weeks, especially if nobody notices, before the Debian package is updated. Now, there are really two cases. The first one is that you actually notice that a package is out of date in Debian and that there's been a security fix upstream. You may notice because you're already following the RSS feed from upstream, or a security mailing list, or something like that. In that case, you can help out with backporting the fix or testing it. For some of these things, it's actually pretty hard for the security team to just go and fix a web application if they don't know how people use it. If it's a web framework or something like that, and they don't have an easy way of testing it, then just offering to test their package would be a really good thing. And if you end up hacking up a fix on your own machine, why not submit that fix to Debian? So that's what I'd say if you do notice. If you don't notice, because you can't possibly keep track of the 250 dependencies that I showed you on one of my first slides, well, then it's better late than never. If you can't keep up with all of your upstreams, or they don't all have security mailing lists, or RSS feeds, or whatever, then it's much better to rely on system packages, because it only takes a single Debian user to file a security bug for Debian to actually start the process of fixing it. Here's another problem. The approach that I took is quite different from the approach recommended to web developers nowadays, which is basically: push all your stuff to GitHub, connect a bunch of proprietary services to it that test it, commit to it, and take your code and push it to your server for you. I don't know how to fix that problem, and I don't see it as a problem myself, but it is kind of weird describing this approach to other people. Something to keep in mind. Finally, another thing I ran into is that Libravatar has actually been picked up by a couple of community sites on the Fedora side. That made me realize that the fact that Debian is such a central component of the whole infrastructure would be a little bit of a problem if a Fedora person came up to me and said, hey, I'd like to run a mirror, can I help? Because I would be like, oh, I have to build an RPM for you now. Which I'm willing to do if I get a real offer; I'm not going to do it for fun. But when you rely on system packages like this, you are a little bit locked into Debian, which might be fine for you, but it's something to be aware of. So, is this approach realistic, or am I just crazy? Because I'm basically ignoring conventional web-dev wisdom. Well, it turns out that a few hours after the blog post announcing the first batch of talks accepted at DebConf got published, I got an email from a guy who runs these services: SkyDNS, a Russian service, and SafeDNS, the American equivalent. What was really interesting about that email is that this guy takes exactly the same approach as I picked for Libravatar, and we had never talked to each other before. He runs Django using Luke's package, all of the dependencies are system libraries, and he even packages the whole thing as debs.
And this is a real commercial website, with paying customers and everything. So yeah, I realize that's anecdotal evidence, but there are at least two of us doing it this way. Yes? Three! Oh, there you go, the anecdotal evidence is piling up. So what would be a good fit for this kind of approach? Well, I think if the site you're working on is not your full-time job, if it's a side project like Libravatar is for me, it's really good, because it reduces your maintenance burden. Maintaining a service is kind of boring, and if you're doing something for fun, you don't want to spend all your time on the stuff that's not fun. You want to do development, make the service better, things like that, not just read announce lists for your dependencies and keep stuff up to date. So that would be my first criterion. The second thing I'd say is: if you want to work like this, make sure you're using a mature framework. For example, until recently, if you were using Node.js, you probably didn't want to run the Debian packages for it. You probably wanted to build it from source, or wget a script and pipe it into sudo bash, or something like that, as recommended. Basically, the Debian packages were too old, two versions behind Node.js, and it changed so much that nothing would work. Django is actually really quite good in that respect; I'm sure that when I upgrade from 1.4 to 1.6, it's not going to take me three days to figure out. So yeah, a side project is a really good example of something that I think works well with this. The other one relates to what I used to do before my current job, when I was working for Catalyst, one of the sponsors of this year's DebConf. They're a consulting company, and they run lots of small websites for lots of clients, and lots of large websites as well. The small client website is a really good example, because you don't want to spend much time working on those sites: the clients don't have that much money, and they don't want to pay for stuff that's not visible, like security updates and things like that. So you want to minimize the amount of work you'll have to do there, and if you have to deal with completely different code bases all the time, with different versions of the dependencies, it becomes a nightmare really quickly. So I think it's worthwhile to do something like this. But whatever you do, remember this horrible list, and also remember that when you bundle code inside your own application, you become responsible for it: if there's a security vulnerability in any of this stuff, you have to issue a new version of your application to deal with it. So pick the approach that you want, but I think it's worth considering the approach of limiting your options to help reduce the maintenance burden. Are there any questions? Yes, Enrico? Yeah, it's on. First of all, thank you. I subscribe to everything you said; I work in exactly the same way, it pays my bills, and it makes me happy. I wrote a blog post about it that hit Planet Debian some time ago, called DebOps, which I think is a good summary of the same idea. One of the companies I work for is a services company: we have more than 10 websites deployed, and we keep making new ones, all for small customers. And if we had to maintain the whole jumble of dependencies embedded in each and every one of them, we'd be dead by now.
Because we'd be, I don't know, spam relays for someone by now. We can't afford dedicated sysadmins, one for each application, when one application might be serving language-learning games to Italian-speaking schools in the German-speaking part of Italy, which can't afford a dedicated sysadmin. So it's definitely cost-effective. Development-wise, as long as the developers know about it up front, it's not a big deal, because it's part of the specifications, and a developer who can't read specifications is not worth the name, in my opinion. It helps if upstream has a sane release policy: so yes, doable with Django; no, not doable with TurboGears, thank god that died. And we use backports a bit more; we're already on the Django 1.6 that's in backports. It helps to reduce the size of the upgrade jumps, and it's good that the Debian Django maintainers are doing an excellent job of maintaining it in backports as well. So hands up, yes, thank you, Luke. And for scripting maintenance things, I've written a Django app called django-housekeeping that's currently in Debian, although only in testing, I think, so not stable, not backports, sorry. It allows every Django application to define management tasks that are run nightly. That simplifies deployment a lot, because you just schedule one cron job, and applications can do their own checks, backups, and housekeeping, and send a nice report. Cool, what's it called again? django-housekeeping. django-housekeeping, all right, coming soon in Jessie. So, I'm currently upstream of a web app, Debsources. I think I know my way around Python packaging, even though I haven't been doing it in a while, and I wrote a tutorial on maintaining a mini-dinstall installation. But still, I find all this scaffolding for doing automatic deployment as Debian packages kind of heavyweight. Right now, Debsources is deployed from a checkout of a specific deployment branch of the upstream Git repo on the machine. So I wonder if you have suggestions, based on your experience, on how we can make it easier to have make targets that just create Debian packages and push them to some repository somewhere, and avoid everyone having to rewrite their own rather ad hoc scripts. Yeah, that's a really interesting question, and I don't have a solution to it. But in my Makefile, that's where my packages get built; I run the package build at some point in there, and then I use Fabric to update my repo, sign everything, and then actually SSH to the machines and install it. So is there something that could be factored out and shared with others? That's a good question. I'd have to look. I can show you what I'm doing, and maybe you'll have ideas on how much of it you could use, and then maybe that would help us decide whether there are parts we can split out. So, I'm not a web developer, but some of my best friends are. At work, I'm responsible for the sysadmin side of some web apps, and we have this recurring discussion within the team where the web developers always want the latest version of everything, and I'm very lazy, so I don't want to end up having to maintain all these packages myself. So the compromise we tend to come to is: if there's something where we need a newer version than what's in Debian, I'll make a backport.
And then for new bits of Django, because it's usually Django we use, I make some of my own packages, and maybe I'll push them into NEW when I've got a bit more time. That sort of works for us, because it means that, hopefully, we're outsourcing as much of this effort as possible to Debian rather than doing it all in-house, and we can contribute back to Debian by increasing the set of Django libraries that are packaged, while still getting enough of the new stuff that my web-dev colleagues are happy. And we do packaging for everything except the actual app itself, which we use Ansible to push out from a Git repository. So yeah, that's a sort of compromise between "we'll only use things that are in stable" and "we'll just download any old junk off the internet", and it kind of works for us. Yeah, I think that's an excellent point. You don't necessarily have to go all the way. I went all the way because I'm the sysadmin for that service and I want to minimize that work as much as possible; that's the trade-off I made. But if you have more people on your team and can afford a bit more maintenance, you can have a hybrid approach where you rely on Debian as much as possible but backport a few things. Well, I'll also comment on a similar use case, not exactly the same, because you're all doing development and I'm just integrating bits that are already done. I ended up adopting Drupal because I have several Drupal sites at my university, and I built dh-make-drupal, which allows me to convert any Drupal module into a Debian package. At the beginning, some people, me included, uploaded some of those modules to Debian. In the end, we decided not to, because the quality of the modules is not homogeneous. But yeah, I have a huge repository of all the modules I've ever used. I can tell you it's a very dirty apt repository, because I don't purge the old versions; they just pile up there, taking space. And of course, dh-make-drupal is written in Ruby, because I would not be able to stand writing PHP. But yeah, it's a huge time saver for us. Yeah, and the interesting thing there is that you picked up the maintenance of Drupal because you were doing that work already as part of your job. That's a similar story to what Luke was telling me: he was using Django at his previous company, and they were using system libraries and so on, so he might as well maintain it for everyone in Debian. I'm sorry if I missed this at the beginning of the talk, but how, if at all, do you pass configuration information to the web app? Is that something you store in debconf, like the database URL? I also noticed that you said it's hard to live in a world of unattended upgrades. Perhaps what that means is that your deployment plan should make it easier to have a staging site: a separate VM that does the unattended upgrade, you send n% of the traffic there, and if it crashes, you stop sending traffic there. I wonder if you've thought about that. Yeah, so the answer to your first question is yes, I use debconf; that's where I put all of that stuff. So when I upgrade, if there are no extra questions, then nothing happens. In terms of having a staging site, running unattended upgrades there, and then maybe running the full regression test suite: that would be great if I had a regression test suite.
I have some tests, but a lot of it is manual testing. There's not much point in me doing unattended upgrades if I have to do manual testing to verify that they worked. Just to respond to that briefly: one thing you could do is do the unattended upgrade on a different VM, send some small percentage of the traffic there, and if the error log has new exceptions, or rather, if the error log is non-empty at all, undo it. Yeah, so: outsource your testing to your live users. So, I'm upstream for Noosfero, which is a social networking platform written in Ruby on Rails, and we have all the same issues. We also target Wheezy, or rather the latest Debian stable, and we also have the problem of upgrading from one version of Rails to the next, which is just as much pain as with Django; that was just to share my pain with you. On the GitHub issue, and on the opinion about proprietary services: something that works, for me at least, is using GitLab, which is free software and has everything you'd need from GitHub, like visibility for the repository, an issue tracking system, merge requests, and all this stuff. And there's a slight chance it's going to be in Debian for Jessie, but I wouldn't count on it. I'll just mention that the web devs I know often run into the situation of running something like WordPress with a bunch of plugins, some of which are proprietary. Using the Debian archive to actually build your web app has the benefit of being compliant with the Debian Free Software Guidelines, so you know you can actually deploy it without having to worry as much about licensing, at least that's what I would assume. Right. Hi, anecdotally, I'm not really a web app developer, but when I do develop web apps, this is more or less the same workflow that I use. So my question is: on one of your slides, you recommend using a mature framework, and you also imply that you should use a mature language; you show apt-cache search results for Python with many thousands of packages, or modules, packaged in Debian. I was wondering what you, or anyone else in the room, think about newer, less mature languages that have a more aggressive, more tightly integrated language-specific package manager, such as Go, for example. Yes, I can comment on Node.js, because I was working on a Node.js application and I looked at whether or not I could use the same approach there. There was not enough stuff in the archive; I would have had to use npm to install almost everything anyway. And at the time I looked, Node.js itself was not the right version either: the version in Debian was broken in lots of ways because it was packaged a little bit early. So I suspect the younger languages will have these sorts of problems, and it's probably not a good approach for those. But if some parts of your infrastructure are not as mature, say the framework is not that mature, then maybe you can still rely on the system Python libraries for other stuff. You can have a hybrid approach if not everything is mature, though sometimes mixing things is a bit hard as well. Right, I noticed on your Node.js slide that your packages had version numbers, and in some cases these newer programming languages don't have those: you're either specifying a particular Git commit, or even just saying, hey, pull whatever is on HEAD. Yeah, and hope for the best. And so are we regressing in this respect?
Well, those "I depend on any version of this library" declarations tend not to work very well, because if you try to npm install it again later, it will be totally different and probably broken if it's a new major version. So what I see a lot of web developers do is just hard-code the exact versions that they want. And you missed the example from OpenHatch earlier, of their vendor directory. I started with that, but the vendor directory is the same thing: you basically hard-code specific versions of packages. I think that's the common pattern. So it's not really an issue of "later it will break"; it will always work, but you might have vulnerable versions of things. This is a discussion that I've heard repeatedly throughout this DebConf. We had it in the Java BoF, we had it with the Haskell team the other day: how do you deal with this sort of infrastructure that's growing up where people don't want to use a distribution, where they'll just throw in random versions of whatever junk happened to be handy when they were putting the app together? And the result is a maintenance nightmare. I am primarily a sysadmin, and this sort of thing gets me really irate, because you write it once, and then it gets hacked, and you're like: well, where did this library come from? It was depended on by that thing, it was just some random version, there's no stable API, and I'm not quite sure. So far, we don't seem to have come to a good answer. Particularly because if we take the approach of "well, this is just how the Node.js infrastructure works, so if you're going to do that on Debian, just download everything from Node.js", then we never get to the point we've reached with Django, where there's enough of Django in Debian that using it on Debian becomes practical. So there's a risk of a kind of vicious circle there. I'm not quite sure what the answer is, but this seems to be a common pattern that I've observed throughout this week. I wonder if ignoring the first year or two of a new framework is the solution, because Node.js is starting to stabilize now. Whether it's a year or a decade, yeah. I like to think that those pioneering things are good for a job in a startup, where you build a new website and then change jobs; someone else's problem. But then, I think a good way to start is what we're doing here, all saying: yes, I do that; yes, that makes sense. Because the world is full of people who told me, oh, you should use virtualenv, and I told them things that I should not repeat because of the code of conduct of the conference. I've been in IRC channels where I looked like the weirdo, and then I told myself, I'm probably the only one in here that has a job. Sort of distribution pride, if you want. But coming out and saying "I'm doing this" makes a lot of sense to me. It's something you should consider as long as you want to develop something that's maintainable over time, and not just build the thing, put it out there, say "I've done it", move on to something else, and hope to get acquired quickly so you don't have to maintain it very long. Anyway, I think that was all the time we had, so thank you very much.