Hello, my name is Christopher Thrice and I'm a postdoc at the Alan Turing Institute, and this is a presentation on how I converted a research project into my first PyPI release, and all of my mishaps in doing so, which I hope are things people might find useful. So, just a bit about me: I was taught C++ in high school but swore off programming; I crawled back in as a master's student on the project which eventually became my PhD in sociology, on how a community called Phytonet changed geographically over time, and the GeoDjango project was absolutely crucial for that. I am now very privileged to be a postdoc at the Alan Turing Institute, but I also moonlight as an artist in theatre, film, jazz gigs, and photos, particularly of late like this one. So, goals of this presentation: pros and cons of rearranging a project for public release; cookiecutter templates and how they can help, not just for Python projects; options for testing across the standard library's unittest, pytest, etc.; continuous integration options like Travis and GitHub Actions; documentation, like Sphinx and Read the Docs; deployment on PyPI; and releasing on Zenodo, and not getting that confused with Zotero, as I have many times (anyone who knows both can hopefully see why, and not just from the sound). So I'm going to start the clock, because I don't have two screens and I messed up ordering an adapter for one. So, rearranging a project for public release. This came out of work I originally did as an RA, querying Companies House and the Charity Commission in the UK for networks of board members attached to companies and charities. It's not the point of the talk, but those network structures, technically known as bipartite or bimodal networks, are a research interest that made a lot of use of the NetworkX library.
So that was one of my crucial dependencies in my requirements.txt, and when I rejoined the project a year and a half later, I just foolishly reinstalled all the packages without keeping track of their versions. That was a terrible decision. The NetworkX migration guide kindly keeps an account of how difficult it would be to reconstruct a pickle file, and I had foolishly had a whole bunch of pickle files wrapped around attributes of classes, etc., etc. It was a terrible rabbit hole, and I didn't know what to do. So I keep an awful old pickle file around to remind myself never to pickle like that again. Refactoring, for me, was pretty much required, so that was my initial bias. But I was also quite excited about the prospect of doing something open source. Open source software is really how I learned to code; maybe some of what I learned in high school was underneath and helped me get going, but open source is what got me excited about it, particularly Django unit testing. I really like unit testing. And there was the prospect of other people using it, my colleague but also future collaborators, and this warm glow feeling, which I like, from that particular publication. So the current state: a package called ukboards. It's very alpha, the command line and interfaces are an utter mess, all my fault, and there's lots still to fix, but it's partly how I managed to get this postdoc. We've got two papers under review from things we created through it, colleagues have approached us for further collaboration, I feel much more confident as a programmer, and I was really excited when I finally had a DOI from Zenodo. And it's much easier to maintain and install for new projects than what I started with. So here's just a little flavour of what came out of that project: those are companies that share board members, hence their links on that geographic network.
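The lesson from that pickle rabbit hole can be sketched with plain standard-library code: serialise graphs to a version-independent format (an edge list in JSON, here) rather than pickling library objects whose internals change between releases. The function names and the exact JSON layout below are my own illustration, not the project's actual code.

```python
import json

# Rather than pickling whole graph objects (which breaks when a library's
# internals change between versions), save the graph in a plain,
# version-independent format. Any graph library can rebuild from this.

def graph_to_json(edges, path):
    """Save a graph as a list of (source, target) pairs in JSON."""
    with open(path, "w") as f:
        json.dump({"edges": [list(edge) for edge in edges]}, f)

def graph_from_json(path):
    """Reload the edge list as a list of (source, target) tuples."""
    with open(path) as f:
        return [tuple(edge) for edge in json.load(f)["edges"]]
```

A few lines of extra serialisation code like this would have saved the whole migration-guide ordeal described above.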
So, cons of taking it to PyPI: it takes a huge amount of effort, especially the first time; it may expose bad shortcuts you took early in the project; you may need to be much more rigorous about the quality and maintenance of your work; you might need to separate out code for a paper that's about to be under review; and your colleagues may not initially understand (maybe in the social sciences this is more of an issue than in the hard sciences). But the advantages: it can be really helpful, and it can be much easier to maintain, easier to reproduce results from, easier to add features to, and more directly citable, perhaps in its own right or attached to a paper. I kind of cheated and did that at a conference, but I certainly still feel that warm glow, and it's nice to have something direct to refer to in an academic context. So how do you start? Basics you need: make sure you've got something to work with, a Git repository (Mercurial is still maintained, I think, and at least an option, but it's probably going to have to be a shift to Git for all these other features, with GitHub at least to start out with, due to ease). Package folder structure: just make sure you've got at least a module with an __init__.py. I realise that may be really obvious, but if you're using Jupyter or IPython a lot and weren't bothering with importing from a package, these are some basics you've got to get used to. If you're using data in your package, importlib is going to be crucial; that's going to come up a lot. Dependencies: you need to specify your actual dependencies, with a virtualenv or a Poetry env or the like, to keep track of what you actually need in the correct versions. You'll also need a list of your co-authors and collaborators, a decision on which open source license you want to use, and some sort of README in a format you can stick with, whether it's reStructuredText or Markdown, et cetera.
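That importlib usage for package data might look like the sketch below. To keep the example runnable anywhere, it reads a file from the standard library's `encodings` package; in practice you would pass your own package name and bundled data file. On older Pythons the `importlib_resources` backport (mentioned later in the talk) offers the same interface.

```python
from importlib import resources

def load_package_text(package: str, filename: str) -> str:
    """Read a text file that ships inside an installed package.

    Uses the importlib.resources files() API (Python 3.9+); the
    importlib_resources backport provides the same call on older versions.
    """
    return resources.files(package).joinpath(filename).read_text()
```

The point is that data access goes through the import system rather than relative filesystem paths, so it keeps working however and wherever the package is installed.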
So, the data specifically: of the data you've been using, what would you like to include in the package? Are there any questions of it being sensitive? Does some of it need to stay out prior to publishing a paper that's under review? Is the data collection a project worth citing separately? And is there data in the Git repo that's just massive, when you really should keep it small so it's easier to maintain? Stuff like flat files is easy to include; try to minimise the rest. And if you're just querying stuff from a public API, that might change over time if it's your test basis, so a caveat on that. It's more tricky, for me at least, with things like SQL dumps; I went down a bit of a rabbit hole with versions of PostGIS prior to 3, dumping and re-importing, and pickle I'm going to stay away from. I had a project involving LiDAR with proprietary software, and I have no idea how I would deal with something like that if I were releasing a library. So, how to include data: keep it small and easy to maintain, minimise it to the essentials for running tests, and flag the tests that require more or are really slow. Again, use importlib; you can backport it, and I wish I'd done that. My cheat was to just have a little Python module with lots of comments that I could import, but that probably makes it a lot harder for anyone else to use. If your project's already a Git repo, it's worth double-checking your .gitignore. Be strategic in adding only what you need as a public library. Again, keeping stuff for the academic paper separate can be complicated; it's worth flagging that repeatedly. So that's all the academic stuff, but authentication can also be scary, whatever the project. A few people have accidentally committed usernames, passwords, etc. You really should use something like a .env file (python-dotenv is a nice library for that), and there are libraries out there that try to keep track and prevent you from committing keys, et cetera.
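A minimal sketch of what a .env loader such as python-dotenv does: read KEY=VALUE lines into the environment so secrets stay out of Git. This is a simplified stand-in written with the standard library only; the real library handles quoting, variable interpolation, and edge cases properly.

```python
import os

def load_env(path):
    """Load KEY=VALUE lines from a .env-style file into os.environ.

    A toy version of what python-dotenv's load_dotenv() does: skips
    blank lines and comments, and never overwrites variables that are
    already set in the environment.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

The .env file itself is listed in .gitignore, so credentials live only on the machines that need them.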
I'm no expert on that; I was trying to look it up. Also, it should be obvious, but if you've just deleted something, you haven't actually purged it from your Git history, and you'll need some tools for that. I just started over, for my part. So, an example .env, which shouldn't be in Git, where you can make nice references to values defined earlier in the file. If you include a command line interface, you can also set these by command line, and maybe use a password management system alongside if you're going across multiple servers or whatever. It's also worth going through things with colleagues: making sure their contributions are credited, ensuring they're comfortable with what's included, including their commits if they were actually writing code, and details like their ORCID number. All of these are helpful for giving credit where credit is due, and also for double-checking for passwords and sensitive data in the Git repo. So if you have a structure like this, that's a really good starting point, though a lot of it can be provided by a cookiecutter anyway. But vaguely having versions of the elements of this is a really helpful starting point if you can manage it. Other things worth remembering: relative imports rather than absolute imports, and the same for data files; it's easier if it's really simple to start out with. Same thing with formatting. Right, so I decided to go for public release, and had some basic tests to work with. And so here we go with the templates. That whole concept, as far as I understand, was the genius of Audrey Roy Greenfeld; I hope I've pronounced that correctly, very sorry if I haven't. So cookiecutter is a whole project structure template: not just for Python, and it might not even have to be code.
It's a shell script with config options, names, emails, all the different basics you'd need, covering all of the boilerplate a project needs to get started and then to be able to deploy, including a whole bunch of crucial things like documentation and tests. There are a whole bunch of these out there for more specific things, like a Django project. Many don't include a virtualenv, but some do; there are lots of options out there. You could start from scratch: it'd be pretty cool if you managed it, and you can learn a lot trying, but it can take a lot of work, it may be hard to estimate how long, and it could be hard to maintain consistently across docs and code for other potential collaborators. That would have been too hard for me. Poetry is another nice thing to consider: it does actually generate a very simple default layout, specifically for pytest. Flit can also help with deploying and a bit of managing. Pipenv can be helpful for managing applications, but it's specifically not designed for libraries, and it doesn't provide any PyPI deploy ease. PyScaffold is a project I came across very recently; it seems very mature and quite well suited to some of this. I didn't have enough time to try it out properly, but it's worth looking at, and it has very recently switched to pyproject.toml, so it might be worth looking into. So, installing cookiecutter: it installs like many Python packages but is meant for command line work, and you've got a couple of different options; pipx is kind of a cool one, and if you want to go for conda you just need conda-forge. Different operating systems have some others. I just run cookiecutters from direct Git repos rather than from a local copy; I like to get the latest, which is hopefully also tested. You can use shortcuts for the different Git hosts: GitLab, Bitbucket, and GitHub.
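Under the hood, a cookiecutter template is driven by a `cookiecutter.json` file of prompts and defaults; the tool asks each question, then renders the template with your answers. The fields below are an illustrative sketch loosely modelled on cookiecutter-pypackage, not any template's exact file.

```json
{
  "full_name": "Your Name",
  "email": "you@example.org",
  "project_name": "My Research Package",
  "project_slug": "{{ cookiecutter.project_name.lower().replace(' ', '_') }}",
  "version": "0.1.0",
  "use_pytest": "y",
  "command_line_interface": ["Click", "Argparse", "No command-line interface"]
}
```

A list value (like `command_line_interface` here) becomes a multiple-choice prompt, and defaults can be templated from earlier answers, which is how the project slug gets derived from the project name.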
You can even go for specific commits or zips. Test out the questions from the cookiecutter you're interested in before you make your final decision; things like your package name you want to make sure are actually available on PyPI. I end up doing a couple of test form-fillings before I go for the final one, and searching PyPI for package names helps. I ended up renaming my package after I had issues trying to use it from R via a package called reticulate; I had to get rid of the hyphen. One of Simon Willison's many genius projects helped with renaming. Thank you, Simon. And, like I said, it's worth working out how you want to answer the questions. So, an example with the original cookiecutter-pypackage. It defaults to options like choosing between unittest and pytest, though I think you have to stick with Travis. You've got different open source license options, tox for choosing which range of Python versions you want to cover, flake8 covered by default, Sphinx documentation via Read the Docs, and bump2version for managing version increases. Nice little options include pyup for managing dependency updates, whether you want to include a command line interface like Click or argparse by default, and things like black, which can help with formatting. So this is the kind of setup that comes out of something like that. No, it doesn't actually include the requirements or test requirements that you specify; those just go in requirements.txt and requirements_dev.txt. But it fills in this form in a lot of ways and makes the whole deploy process a lot easier. Like I said, though, it's not synced with requirements; it requires a GitHub username, so it really only works initially on GitHub (though you could probably convert it to another host); the .travis.yml it specifies doesn't include newer things I like, such as mypy or pre-commit; and there's no option for Zenodo.
Some of the documentation links are a bit out of date, and I probably should try to help out with that. So here's a demonstration of how that works, with all of my test versions of this. Right, yes, to make sure I've got the latest version; I'll be quick on this. A slight variation, so you can see how this changes, just taking the defaults as a demonstration at the moment. Note that my PyPI username ends up being based on what I put down for GitHub. There are lots of ways in which these customisations play out: yes, I'll use pytest; yes, I'll use black; yes, I'll use the PyPI badge; and I'll go for Click this time. Yes to an AUTHORS file, which helps me cite what my colleagues have done. So that just creates the folder, and then, for example, if you look at setup.py you'll see something very similar to what I showed before, but now the requirements and test requirements at least have the basics filled in. So it's a pretty nifty way to get started. Right, so out of that, back up here, keep going: here's that same structure if you want to scroll through and look at details. Here's another one, specifically for academic work. This is out of a research group in the Netherlands, the Netherlands eScience Center. They have these whole fancy next-steps features that create tickets within a Git repository, GitHub specifically, and it's very academically focused, asking for a whole bunch of extra things, which I'll show in the demonstration. And it specifies the CITATION.cff format to help with citation, which is very helpful in an academic context. As for the project itself, I found some of the configuration a little bit complicated: the fact that it's pyproject.toml plus setup.py plus setup.cfg. setup.cfg seems to have the majority of the options; maybe they're hoping to simplify that at some point. These are just some of the details of how setup.cfg and MANIFEST.in work. Only pytest is an option.
It's got some nice extra plugins for maintaining code quality, but no mypy and no pre-commit. So, just to give you a little demo; I'll try to be a bit quicker this time. Again, it's a form to fill in, and I'll just do the defaults for the moment. There's a field for what would be keywords in a research paper that might come out of this; the code's version; your university, for example, might be worth putting down; full name, email, and copyright holder, which might be the university you work for. There's academic code-of-conduct stuff too. All of these things are very academically specific, and that can be quite useful. And just to demonstrate where the majority of the configuration comes out: it's down in setup.cfg, which is where that dev option I mentioned on the previous slide ends up. Anyway, I've got to keep going, I don't have enough time; have a look there. So the last one I looked at is really fancy, kind of the newest, with the coolest bunch of libraries included, and it uses Poetry. It's just not academically focused; that's why I wanted to provide the contrast. They suggest using a specific checkout in the documentation, but you can probably get away without doing that. Again, lots of defaults: you can have a friendly name with a space, author, email; again, these all expect GitHub. You can see the license options, and I really appreciate the development-stages option there; I think being upfront about that is really helpful. And then, by contrast, it's all in pyproject.toml: all of that gets filled in here, with a very extensive list of what's actually happening in terms of dependencies, et cetera. Anyway, that's the last of our options for today; again, you can have a look. Nox files are another option. I looked at quite a few other templates. Here's another research, or I suppose general-purpose, one; it makes some different choices, like MkDocs rather than Sphinx, but it also uses pre-commit, which I'm a fan of. Here's another science-specific one.
There's an interesting lightning talk about that one, actually, and there's a recent fork of it that supports GitLab; it's the only GitLab one I've found so far, and the ones I've seen hosted on GitLab are not as academically focused. There's some other interesting new stuff too. Again, Simon Willison's innovation plays out in some potential ways of generating templates using cookiecutter but as a one-click on GitHub, which is kind of fascinating, inspired by this; I guess there are probably more of those to come, as they're quite recent. Here are some other templates I came across that didn't have specific configurations for deploying to PyPI. The government one was, interestingly, very concerned with security. So: the basics are the motivation for cookiecutter-pypackage, and here are the three options covered; hopefully those links will help you make use of whichever one you find useful. Whatever you do, you're going to need some virtual environment to be able to run and test this stuff and make it reproducible for others. So here are the different options for creating that virtual environment: virtualenv, conda, Poetry, pipenv, all for local areas for testing installs. And then you should probably have your Git repository sorted out: having it match the project slug, for example, so the package name resembles the source-control name, and using a main rather than master branch, by the way. pyup is something I'll come back to later; it unfortunately can't be used with pyproject.toml at the moment, but GitHub's Dependabot can. Testing. So, have you written a test before? They're really helpful; I write loads of them, probably not enough. The basic unit tests come with Python: the basic class structure involves a setUp and a tearDown to rearrange things, so you can automate the similarities between a bunch of tests in the same class.
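That setUp/tearDown structure might look like the sketch below. The `BoardNetwork` class is a hypothetical stand-in for real project code, used just to show the shape of a unittest test case.

```python
import unittest

class BoardNetwork:
    """Toy network of organisations sharing board members (illustrative)."""
    def __init__(self):
        self.edges = set()

    def connect(self, org_a, org_b):
        self.edges.add((org_a, org_b))

class TestBoardNetwork(unittest.TestCase):
    def setUp(self):
        # Runs before every test method: shared scaffolding lives here.
        self.network = BoardNetwork()

    def tearDown(self):
        # Runs after every test method: clean up anything setUp created.
        self.network = None

    def test_connect(self):
        self.network.connect("org_a", "org_b")
        self.assertIn(("org_a", "org_b"), self.network.edges)

# The usual entry point is `python -m unittest <module>`, or adding
# `if __name__ == "__main__": unittest.main()` at the bottom of the file.
```

Everything the tests in the class share goes into setUp once, instead of being repeated in each test method.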
It can be a little bit clunky, but I like how clear it is. This is an example from an early version, prior to refactoring; you need that basic main block to help with executing the tests together, though there are other ways to ease that. Basics of coverage: how many lines of your code are actually tested. Then there's the speed at which the tests run: if some take a long time, it's worth decorating them to indicate what's quick and what's slow, which helps you filter which ones you want to run. And if you actually have errors in your tests, they can take a huge amount of time, seem like a waste, and be a bit of a heartache, but I think it's still worth it. Note there's a way of skipping: you can use skipIf to conditionally filter tests. pytest is also incredibly popular. It means you don't have to use all those complicated different assert methods; you can just use a plain assert statement. You don't have to put tests in classes either; you can just have functions. There are copious plugins, and fixtures are easy to write. Once I figured them out (I was really confused initially), a fixture can just be passed as a parameter to a test function, which automatically reconstructs it, and then you just assert at the end. Note that's just a function rather than a class. I was confused by a variety of elements initially, and it was conftest.py and scoping that finally clued me in to how this works. This is an example of one I created so that tests wouldn't run when they couldn't, based on my IP address. And this is another thing I did: a way of replaying API queries in a test. I don't have time to go into detail, sorry. So there are many plugins; these are just a few, and I'm afraid I'm going to have to keep going, though I quite like some of those extra bits of sugar. Tox is a way of automating running your tests across different versions of Python, plus a whole bunch of other checks.
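A minimal tox.ini of the kind described might look like this; the Python versions and package name are illustrative, not prescriptive.

```ini
; tox.ini: run the test suite against several Pythons, plus a lint check.
[tox]
envlist = py39, py310, py311, flake8

[testenv]
deps = pytest
commands = pytest

[testenv:flake8]
deps = flake8
commands = flake8 mypackage tests
```

Running `tox` then creates an isolated virtualenv per environment and runs the listed commands in each, which is how you catch version-specific breakage before your users do.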
There are some competitors to it. But all of those things together let you maintain, and be happy with, the modifications you've made to a project. And those are basically the basis of continuous integration. So this is a way of automating all of that every time a change is made to the GitHub, or potentially GitLab (definitely GitLab is an option), repository. If you register with pyup, for example, or GitHub's Dependabot, those will help keep track of changes to dependencies. It's a great way of automating a whole bunch of things to make sure your code is good. Travis is one of the older, historically more popular options, and has been the default for a lot of the cookiecutters, but they're increasingly shifting towards GitHub; some of that will come up later. So this is what comes out of the standard configuration, and it includes this deploy section to help with getting things up to PyPI, which we'll cover a bit later. To actually sort out the configuration for Travis, initially it seemed like you have to do it on the command line via Ruby and their little library; we'll cover some other options for that later. GitHub Actions is similar, but you do it all through the GitHub interface. There are probably command line options too; I'm not 100% sure. And it allows for a much wider range of things, rather than fitting it all within one YAML file: the whole workflows folder. This is just an example of running tests that I've got up; it asks for a considerable amount of detail, and you can name which sections it goes through. I think there's a whole lot more that can happen with that. So these are both very useful, and there's a whole bunch of other competition out there that might be worth checking out too, but generally continuous integration is really helpful.
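A minimal GitHub Actions workflow of the sort described might look like the sketch below; the file name, action versions, Python versions, and install command are illustrative.

```yaml
# .github/workflows/tests.yml: run the test suite on every push and PR.
name: tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.9", "3.10", "3.11"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install package with test dependencies
        run: pip install -e ".[test]"
      - name: Run tests
        run: pytest
```

Each named step shows up as its own section in the Actions log, which is the "naming which sections it's going through" mentioned above.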
So both of them are worth considering and employing in how you get a project done. Documentation is really, really helpful. I sometimes use it to think out what I actually want to do, and then write that down in code form. And it certainly makes it much easier for people to understand your project and make use of it. I'm just focusing on Read the Docs and Sphinx, though there are others. Continuous integration then feeds very easily into Read the Docs: you can just use your GitHub login to create a login on Read the Docs, and it will offer to pull in a repository you've got; provided you've got basic configuration, like from one of these cookiecutter templates, it will just set up building the documentation for you. You get a whole bunch of configuration options, like MathJax, etc. They're lovely and really useful; I wish I could cover more of this, I just don't have enough time. So: pushing to PyPI, finally. The basic thing is registering on the website; like many services, you'll need to register a key, both for TestPyPI and for normal PyPI. The token has a scope: on TestPyPI it just covers everything either way, but on the main PyPI you can scope it to a specific package. Save those keys somewhere safe. Again, you can't include them directly in the repository, but we'll get to that; a password manager can be really helpful. So it's worth looking through some of this before you go all in, and there are other tools that can help you do that. I'm going to focus on Travis for this because it's what I used, though I had some issues: there was this whole transition when Travis changed hands, and that's one reason I've used GitLab a lot more. Once I had to rearrange my configuration, I just couldn't get their little command line tool to work.
The username-and-password route I still don't know how to reconstruct, and my attempts to find out did not work; again, there was this acquisition and a lot of people changed jobs. But the way you can do it is to set up a GitHub token with the following privileges (there's a link, which hopefully will stay up to date), and with that token you can get it sorted. Then you can deploy with this kind of line. Note the deploy.password: that takes advantage of the bit provided in the cookiecutter package, and fills in the secure password with the newly encrypted key, which shouldn't have the pypi- prefix that the raw key has. And that should be what you need, though as I've tried to show in that configuration file, we're just looking at TestPyPI there, not PyPI proper. What I've actually done since I first started is shift to keys that I register within Travis itself, quite similar to the options within GitHub and, I think, GitLab too. You can just define environment variables, but make sure they've got underscores rather than hyphens; that was a hilarious issue that took me a while to sort out. So now I can just put the password there, and it's not in the repository, and I can actually test a deploy with my bumped version, which we mentioned before. But wait: that's just TestPyPI, so how do we actually get it up on PyPI? You could go for different branches, like staging and production, but I decided to go with a sort of conditional deployment, deploying in order: if it manages to pass TestPyPI, then I go for the main PyPI. Again, if you've been messing around with putting passwords and authentication keys in, it might be worth going through that repo cleaner I mentioned earlier. So, assuming you get all that up and you're happy with it, you can then do that bump command.
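The major/minor/patch logic that bump command applies can be sketched as below. This is a simplification of what bump2version does: the real tool also rewrites the version string in your configured files and creates the Git tag.

```python
def bump(version: str, part: str) -> str:
    """Increment one component of a MAJOR.MINOR.PATCH version string.

    Exactly one component goes up and everything to its right resets,
    mirroring semantic-versioning conventions.
    """
    major, minor, patch = (int(p) for p in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown version part: {part!r}")
```

So `bump("0.1.0", "patch")` gives "0.1.1", while a minor bump from "0.1.9" resets the patch and gives "0.2.0".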
The parameter (sorry, I just don't have enough time to go through all of this) indicates what kind of version jump it's going to be: major, minor, or patch, those parts of the number. So a patch would be going from 0.1.0 to 0.1.1. It adds a Git tag at that number, and then you push all of the changes first, and then push including the tags to GitHub, and that should run the deploy: it'll try to deploy to TestPyPI, then to PyPI. And then hopefully you'll have a package up, and a copious bunch of logs in Travis if you don't. Again, it might be more sophisticated to have different branches for managing that. I like adding a whole bunch of other things that aren't included in this template but are in some of the other cookiecutters, like pre-commit; there are some lovely libraries there, like black, isort, flake8, and mypy. You can also, if you find this whole thing too difficult, have a local .pypirc with your authentication as a backup, but that doesn't integrate easily with continuous integration. So: it's hard to set up and hard to fix, at least for me; I find the safety of TestPyPI really helpful; and after some wrangling, my Travis setup seems to work fine. I might switch to GitHub Actions in the future, or one of the many others. Not quite done yet, sorry: academia is still to come. Academia and software development are an interesting tangle. Citing code specifically, for research or in an academic context, is an interesting and unfortunately hard problem; here's an interesting paper specifically about that, and I think it's even harder once you get outside the harder sciences. There's a project on the Citation File Format, and here are some interesting posts about it. Zenodo is one of the big players in the field. So Zenodo is a citation option: it'll generate the DOI that you need, and it very easily connects to GitHub.
But at present it doesn't connect to other repositories like GitLab, unfortunately; that's under development. As far as I can tell it's backed by the European OpenAIRE programme and operated by CERN, which is quite impressive, and it can automatically pull from a GitHub repo. So the easiest way to create an account is, just like with Read the Docs, to tie it to your GitHub account; I guess that's an OAuth situation. Go to that part of your profile, and you should see all your GitHub repositories listed there. Click the one you want (it's a toggle-switch sort of situation), and it'll pick up a new release, for which you can then fill in a whole bunch of forms with the details. And you've got a DOI. It's great. There is a much more customisable way, by directly editing a CITATION.cff file in your Git repository, which Zenodo will then copy from. I got super confused filling that in: the cff-version is not your package version, it is the version of the format. Don't get that wrong; that was hilarious. This is what mine currently looks like, including my colleague. I think I need to customise it more, but it gives you a bit of an idea. I think it's better if you can actually publish a paper as well. Certainly, citations generally increase readership, increase understanding, and give you a bit more credit, and they can also be a clear way of indicating exactly what contributions individuals made. There are details in Zenodo that aren't currently covered in the CITATION.cff format, like funding, but it's worth flagging that the format is in active development and worth getting involved with (note to self: I should do that and see if I can contribute). So, almost there; should have had a seventh-inning stretch or something, if anyone gets that reference. Taking an academic project and polishing it for release is really hard. For me at least, Python cookiecutters were essential.
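A minimal CITATION.cff of the kind described earlier might look like the sketch below. All field values are illustrative placeholders; note in particular that `cff-version` is the version of the format, not of your package, the mix-up mentioned above.

```yaml
# CITATION.cff: illustrative placeholder values throughout.
# cff-version is the *format* version, not your package's version.
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "mypackage"
version: 0.1.0
doi: 10.5281/zenodo.0000000
authors:
  - family-names: Doe
    given-names: Jane
    orcid: "https://orcid.org/0000-0000-0000-0000"
```

Your package's own version goes in the separate `version` field, and each collaborator gets an `authors` entry with their ORCID, which is how the credit described earlier gets recorded.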
I don't know what I'd be doing without them. GitHub dominates the cookiecutter options, but I think it's worth considering GitLab at least. pytest is great, and the only option for many cookiecutter templates. Read the Docs is a default documentation system, and it's really easy to get going; there are some competitors, but it helps. Travis I found hard, with that whole migration thing, but it still seems to be working okay; I'm slightly worried I may run out of hours on it, which is another big question mark. GitHub Actions is another option, and I'll probably go for one of the more extensive cookiecutters in the future. So, thank you so much for your time. Here are some of my references and my dangling footnotes, which are probably too out of context to make any sense. Thank you. Okay, thank you. So, I see someone typing; let's give them a minute to see if there are questions, and then we can continue. Two things to mention while we wait for questions: one is that this track will be offline for the next slot because we have a keynote; and after the talk, you can go and talk with Griff in the chat, right? He's going to be available in the Matrix room, and you can ask questions there. So okay, I have a first question in a second: can you say a few words about the very last part? I'm not sure which part Paolo means there; maybe why we're here? My main reference is Zotero... what I was going to cover is the last part, so let me double-check if that's what they're referring to. But okay, the reason, yes. So Zotero is... oh, sorry, not Zotero: Zenodo. Oh, man. Sorry, it's one of those days. So, in an academic context: Zotero is for managing citations, and Zenodo is for managing code that is released so that you can cite it. And sorry, that's me today, in that silly state, unfortunately. But I think it's quite a nice service; I don't know of a better one, to be perfectly honest.
I think GitHub is trying something too, as a potential project for managing that. But yeah, sorry, I was going to log in and show you a demonstration of my release of a library on it. It's a very useful tool: you can tag Git commits to a release, so that if your package is evolving over time, people are actually citing it at a particular state. It can be quite cumbersome to set up, and I think, unfortunately, it's very specific to GitHub at the moment; if, for example, you're using GitLab for your project, I think they were a bit behind in enabling that. But I highly recommend it if you're hoping to make your work citable, and that was my approach. And yeah, so, last question. No, it's okay. So: can you recommend any resources related to PyPI project creation? Yeah, cookiecutter, which is literally what I was about to demo here if I had more time. I think it's an excellent option. This is an example of how it constructs a template: I'll go with pytest for this, and Click, which is quite a nice command line management option. And what I've now generated is, let's see, a new project. You end up with this kind of folder hierarchy, including a tests folder, for example, which conveniently already has a pre-constructed import for testing the Click interface, and basic version-control expectations of what should be ignored; the .gitignore, for example. It's very nice. Yeah, sorry. Yeah, no, let's stop now, because we're going to overrun a lot. So I want to say thank you very much, thank you for presenting here, and I hope you enjoy the rest of the conference. Have a nice day. Thank you.