Thank you very much. There is already a box of stickers going around, so please take some. I know that our audience today is quite diverse, so I will start with a simple but totally real story. This is Derek, an aerospace engineer from TU Delft. I met him at a conference and then invited him to Crete. But because I was kind of crazy, I imprisoned him in the labyrinth. Because Derek is an aerospace engineer, he got the idea: OK, I can make my own wings and fly out of here. He finds some material around, but before flying, he wants to check whether this actually makes sense. And since he's an aerospace engineer, he knows about simulations. So he says: I will take my flexible wing, put it in the computer in some software I already know, for example OpenFOAM, and solve the flow equations on many points around the wing. In this way, he would see if the wings give him enough lift. At the same time, he also wants to see how flapping would deform the wing: will it hold? So he does the same thing inside the wing with some other software, like CalculiX. However, when he flaps the wing, it deforms, and this deformation changes the airflow, which in turn causes further deformation. So let's see what we want to do in the end. As you see, this is a coupled problem: he flaps the wing, and we need to run both simulations at the same time, together. One approach is to take some new software, potentially written from scratch, that does everything on the same domain. We call this a monolithic approach, and that would be nice for some people. However, Derek already has some simulations up and running, and time is ticking: the Minotaur will find him. So he says: OK, let's take the two simulations I already have and couple them on the interface. Now, one of the coupling tools you can use in such a partitioned approach is preCICE.
The features it gives you: several coupling algorithms; it takes care of the data mapping between the two meshes; and it enables the solvers to communicate with each other. In the future, it will also be able to do some time interpolation. It's an academic project, mainly developed at TU Munich and the University of Stuttgart, and recently also at TU Eindhoven. It receives public funding from the German Research Foundation (DFG), in particular through the priority programme for exascale computing, SPPEXA. So you see, it's a project that aims at HPC. And because public money should give public code, we distribute the code under the LGPL license. You can find the website and the sources at the end. Let's look at just one example of these features. The two simulations have two completely different grids; they don't know about each other. On the interface, we need to find out which points should be used to interpolate, for example, the force acting from the fluid on the solid. For this, we provide several methods: from the very simple nearest-neighbor mapping, to projection and interpolation, to more sophisticated interpolation with radial basis functions (RBF). Now, I know this is not a coupling conference, and I'm not here to confuse you. So, just for the people who know what I'm talking about, I will leave this slide up for ten seconds. Don't get confused, it's OK. The question for you, especially if you are writing some simulation code, is: how can I use it? It's a C++ library, but we also have APIs for C, Fortran, and Python; Fortran is still a thing in our field. Every solver usually has this structure: you have a main time loop, and inside it you solve your equations for every step in time as you progress through your simulation. In order to use preCICE, you load the library and configure it.
Then, after you solve the equations, you give it some block of data, like the values on your interface points, and you call the magic method advance, which does everything for you. And everything is configurable through an XML file. After all the solvers have communicated, you just read your new boundary conditions and progress. You can find the full example in our wiki, and we are very, very happy to help you start with your own adapter if you want. The first point of my talk today is exactly this one: to let you know that this library exists, and that you might want to adapt your solver to work with preCICE, so that you can then couple it with a bunch of other solvers we already support, like OpenFOAM, CalculiX, SU2, FEniCS, or deal.II, and so on. It can essentially also work with closed-source solvers, under some conditions, if they have some API. And of course, you can also do it for HPC, or write your own adapters for any other solver you may want. But I'm here today also for another reason: to give you an overview of how academic projects usually work, what problems we have, and where we really, really need your help. We need your help because we are not software engineers, we are not computer scientists. Well, often we are, or at least there are some computer scientists among us, but often we are just mechanical engineers, physicists, mathematicians, computational engineers who also write software to do our research. Usually, as a PhD student, you write your software because you want to get some results; you don't have time to write tests, you don't have time to create nice packages, and so on. A term that was coined recently for people like us is research software engineers. There are also conferences being organized; for example, there is a submission deadline sometime in February for the German RSE conference in Potsdam.
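The structure just described (main time loop, configure the coupler, write interface data, call advance, read the new boundary conditions) can be sketched as follows. This is a hedged mock in Python: the names `MockCoupler`, `write_data`, `read_data`, and `is_coupling_ongoing` are placeholders invented for illustration and are not the real preCICE API, which is a C++ library with C, Fortran, and Python bindings.

```python
# Sketch of a solver adapted for partitioned coupling.
# MockCoupler is a stand-in for the coupling library, NOT the real preCICE API;
# it only illustrates where the coupling calls sit inside an existing time loop.

class MockCoupler:
    """Pretend coupler; in reality this would be configured from an XML file."""
    def __init__(self, config_file):
        # Pretend these values were read from the XML configuration.
        self.t, self.t_end, self.dt = 0.0, 1.0, 0.25
        self._sent = []

    def is_coupling_ongoing(self):
        return self.t < self.t_end

    def write_data(self, values):
        self._sent = list(values)      # would be mapped and sent to the partner solver

    def advance(self, dt):
        self.t += dt                   # mapping + communication would happen here

    def read_data(self):
        return self._sent              # would be the partner's response; echoed here


def run_solver():
    coupler = MockCoupler("precice-config.xml")   # hypothetical file name
    steps = 0
    while coupler.is_coupling_ongoing():
        interface_values = [1.0, 2.0, 3.0]        # e.g. forces on the interface points
        coupler.write_data(interface_values)      # hand a block of data to the coupler
        coupler.advance(coupler.dt)               # the "magic method"
        new_bc = coupler.read_data()              # read the new boundary conditions
        steps += 1
    return steps


print(run_solver())  # 4 time steps of size 0.25 up to t = 1.0
```

The point of the design is that the coupling calls wrap around the solver's existing time loop, rather than the solver being rewritten to fit a framework.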
What is also important here is that we usually don't get funding to do such things. For example, we don't know how to cite software so that we give academic credit to projects. We also have some more technical problems. And we were very happy to find a rare thing, funding from the DFG, which now allows us to focus on usability and sustainability issues. Today I would like to focus on building and packaging, as well as testing. So, we are using SCons. And don't nod your heads too much, because that's already quite an improvement over other projects that just give you a Makefile and tell you to adapt it yourself. When preCICE started, more than 10 years ago, SCons was really a thing; there was a hype, and for good reason: it's written in Python, it's very easy and flexible, and you can more or less do everything. However, we realized that when we wanted to do some standard things that other build systems already do, it was difficult for us. So we now want to shift our default to CMake. How we use it doesn't really matter, as long as we can still do the same things with CMake, of course. And how can you do that if you want to do the same in your project? Unfortunately, you need to learn CMake; it's a language of its own. Then you specify everything in a CMakeLists.txt file. We have already supported CMake for a while, but there is an open pull request on which you can also give some comments; the link is on the page of the talk if you want to find it. So we can configure, build, and install our software, and run our test suite. More importantly, we wanted to move to CMake because it's an xSDK requirement. This is the Extreme-scale Scientific Software Development Kit, and it gives you several nice policies that you may want to check to make your project more usable. Other nice features are that CMake integrates with your IDE quite easily, and it makes it easier to create packages.
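For a project migrating from another build system, the CMakeLists.txt mentioned above could start out roughly like this. This is a minimal hypothetical sketch, not preCICE's actual build configuration; the project and target names are placeholders, and the `find_package(precice)` line assumes the installed library exports a CMake package config.

```cmake
# Hypothetical minimal CMakeLists.txt for a solver project (names are placeholders).
cmake_minimum_required(VERSION 3.10)
project(mysolver LANGUAGES CXX)

add_executable(mysolver main.cpp)

# Linking against an installed coupling library might look like this,
# assuming it ships a CMake package config file:
# find_package(precice REQUIRED)
# target_link_libraries(mysolver PRIVATE precice::precice)

# The same file drives configuring, building, testing, and installing:
enable_testing()
add_test(NAME smoke COMMAND mysolver --version)
install(TARGETS mysolver RUNTIME DESTINATION bin)
```

With this in place, the usual workflow is `cmake ..`, `make`, `ctest`, and `make install`, which is exactly the "configure, build, install, run the test suite" sequence mentioned in the talk.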
One of the options we have to create packages is CPack, a module of CMake. After you configure it, it's very simple: after you run cmake and make, you then do make package, and this gives you a Debian package which you can hand to the user, and they can just do an apt install of your package. You could also publish it to the Debian repositories if you first validate it, and that's quite important. Now, another thing that has been quite important for us lately is integrating preCICE with Spack. This is important for several reasons. One of them is that we build preCICE for supercomputers, and we have a lot of dependencies that we also want to test and swap. For example, we run with OpenMPI, but we also want to run with MPICH or whatever. Spack makes this really easy: you just do spack install precice with the dependencies you want, and then you get a module you can load normally on your HPC system. If you like the concept of Spack, you should also check out EasyBuild, as you probably already know. We also have some experience with Conda and with Docker containers, which we mainly use for testing. Now, maybe to the last part of my talk: this is our situation regarding testing. We have been developing the library for a long time, but in the past years we always outsourced the problem of adapting the solvers to the users. In this sense, the library itself is already quite nicely developed: it has unit tests, integration tests, we have a test suite and so on. However, now we also want to make it easy for the user to start playing with it. So we provide these so-called adapters for solvers, which can either be a plugin, like for OpenFOAM, in which case we also need to support several versions of OpenFOAM, and that's quite painful; or adapted code, for example for CalculiX. At the same time, this should also be able to work with commercial solvers, with in-house solvers, and so on.
And we have four different APIs to maintain, more or less. So one important question for us is how to do unit tests and integration tests inside these adapters, which are just plugins. What we do at the moment is run complete simulations as system tests overnight on Travis. Something that we still need to do is performance regression tests: we don't want a feature we contribute to have an impact on our performance without us knowing it. The full simulations we run are usually just our tutorials. And talking about tutorials, which you can find on our website, we are very happy to also have a web-based tutorial, which you can just click through in your browser and which acts as if it's running real simulations; you can see all the configuration and so on. This was part of a student project that really helped us. Finally, it's important for us that the community also contributes, so we are trying to make this easier. We now use pull requests; this was not obvious in the beginning. We also do some code quality checks with this nice preCICE bot (the person who developed it is also in the audience). It checks, for example, for code styling issues, whether you forgot to add something to your changelog, whether your contribution seems to be quite big, and so on. So, I went faster than I expected, but we are now at the end. I want to remind you that preCICE is a library that can couple simulations, usually based on a mesh. If you are developing some simulation software, we would be very happy if you could integrate preCICE into your code, so that you can then open it up to the world and couple it with OpenFOAM, CalculiX, or other solvers. And for the common audience of FOSDEM, usually software engineers, we really urge you to give us your feedback, because we know that we are sometimes reinventing the wheel, and we are sure that you often know something better than us. So, find our website at precice.org.
There you can find our sources and everything else; follow us on Twitter. This is our chair's website, and these are my contact details. Thank you very much. Yes, in the end. Excuse me again. So the question is: why don't we have a central executable here? This is actually a feature, not a bug. It's designed to be like this because, as a library, you can first of all load it into whatever code you want; you don't have to restructure your simulation to fit into our framework. And it also allows us to do completely peer-to-peer communication: each process of each solver loads its own copy of the library, and all the mapping, for example, happens only between the subdomains that need it. If you had a central instance, this would be a bottleneck, because you would need to gather everything and then scatter everything. There are other tools that do that, but we primarily focus on HPC, so we need to solve such issues this way. Yes, which system? OK, so to repeat: did we consider Waf instead of CMake? To be honest, I've never heard of it. As I told you, we don't know as much as you know, and CMake felt like an obvious choice because it looks like the industry standard by now. It's important for us that users know what they're doing: when a system administrator tries to install preCICE, they shouldn't say, oh, I have to install this SCons or that other build system; they should just know what to do. And it's important to know that our users are usually not computer scientists, and they have little contact with Linux, for example. Most of the user support incidents we get are like: oh, I cannot build it because I don't know how to use SCons, or my compiler has some problem, or so. So it's important for us to make it as easy as possible, and that's why we are now also trying to make, for example, Debian packages, because most people will try it on the latest Ubuntu LTS and then move to a supercomputer, where Spack would then help us.
Yes, I cannot hear you. About the performance tests? Yes. So the question here is not how to track our timings; we already have our own event timers and so on. The question is how to implement automated tests that could run a small scaling test on a cluster, and so on. And of course, we also have the problem of how to find the funding, for example, to do that. We are in the nice situation of having our academic clusters, which we are free to use, but this is not obvious for every project. Yes? Yes, this is a multi-domain solver; we have different domains. No, it does not need to. Sorry, the question was whether the meshes on the two domains need to be consistent, and whether there is a problem with artificial dissipation. First of all, the meshes do not need to conform; they can even have some space between them. That's why we have the mapping techniques, which do some interpolation, and depending on how sophisticated the method you use is, the error will be bigger or smaller. And if you're talking about, for example, the added-mass effect in fluid-structure interaction, this is usually solved with implicit coupling: you run the procedure iteratively, and in the case of preCICE you can accelerate it with an interface quasi-Newton acceleration such as IQN-ILS.
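The answer above about non-conforming meshes can be illustrated with the simplest of the mapping methods mentioned in the talk, nearest-neighbor. This is a toy pure-Python sketch written for this transcript, not preCICE's implementation; real couplers also offer projection-based and RBF mappings, which reduce the interpolation error on non-matching grids.

```python
# Toy nearest-neighbor mapping between two non-matching interface meshes:
# each destination point simply takes the value of its closest source point.

def nearest_neighbor_map(src_points, src_values, dst_points):
    """Map values from source mesh points to destination mesh points."""
    def dist2(a, b):
        # Squared Euclidean distance (no sqrt needed for comparisons).
        return sum((x - y) ** 2 for x, y in zip(a, b))

    mapped = []
    for p in dst_points:
        nearest = min(range(len(src_points)), key=lambda k: dist2(src_points[k], p))
        mapped.append(src_values[nearest])
    return mapped


# Example: fluid-side interface points carrying forces, solid-side points
# that need those forces; the two "meshes" do not match or conform.
fluid_pts = [(0.0,), (1.0,), (2.0,)]
forces    = [10.0, 20.0, 30.0]
solid_pts = [(0.1,), (1.6,)]
print(nearest_neighbor_map(fluid_pts, forces, solid_pts))  # [10.0, 30.0]
```

The error of such a mapping is first-order at best, which is why more sophisticated methods like RBF interpolation exist for the same job.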