All right, thank you. Let me close the door. So I thought this was really interesting today, because this is not my usual community to speak to, and I might not be the usual speaker that you would listen to. So I wanted to put what we do in software a little bit into the context of what's been talked about here. In the grand scheme of things, when you build stuff today that's not a one-off, you start with an idea, you put it into some CAD geometry, for example, and then you somehow want to simulate the behavior of this. If you think about this in terms of, let's say, a bridge or a fridge or a car: any mechanical device, any electromagnetic device, any fluid device undergoes computational simulation. The talks that we've had so far, with one exception, were in the CAD realm, the mesh realm, the build realm. I'm sitting in this simulator part. The simulator part tries to simulate the response of an object to the stimuli, the forces that we want to apply to it, to see how it behaves, and typically then iterate this process. So there's an arc that goes back from the simulator and the post-processing to the CAD, because the thing does not do what you wanted it to do, or it doesn't do it in an optimal way. Post-processing typically would be to see, let's say, does it satisfy the design specs, right? Does it become too hot? Does it break under mechanical loads, these sorts of things? So you evaluate the response of the object with regard to the design specs that you had. In this simulator realm, for a lot of problems the method of choice is called the finite element method. There are others, finite differences and finite volumes.
The finite element method fundamentally says: you have your object that you want to see how it deforms under a force, for example; you break it into very small virtual chunks, and then you write down the equations that say, well, each one of these chunks deforms in this particular way in response to the deformations of the neighboring chunks. These are what we call the elements. So, as I say, the simulation is there to simulate the physical response of the object to external stimuli, and the three classical areas in which people do this are, of course: first, solid mechanics, so you want to know the static deformation of an object under a force, or maybe solid dynamics, that would be, let's say, the vibration properties of an object; if you're interested in the response of a high-rise to an earthquake or to winds, then you want to see how it oscillates, for example. The second area is fluid dynamics: you want to see, for example, how water flows through a pipe, whether the pressures are within acceptable limits, and so on and so forth. The third one is electromagnetics: if you design an antenna, for example, you would simulate how the electromagnetic field emanates from the antenna into the exterior. All three of these can be simulated with the finite element method. So let me talk you through a little bit what tools there are. It's an open-source conference, so we should talk about what tools there are, commercial and open source. There are a lot of commercial tools in this area for all of the standard engineering applications, for solid mechanics, for fluid dynamics, for electromagnetics: codes like Fluent, Nek5000, Abaqus, LS-DYNA, and so on and so forth. These are the heavyweights in the area, and the sad part, from the perspective of this conference, is that there are really not a lot of open-source tools around. A lot of the work in this area is done with commercial tools, and that's just what it is.
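The "small virtual chunks" picture described above can be made concrete in a few lines. Below is a minimal sketch of my own, not deal.II code and not anything shown in the talk: the one-dimensional problem -u''(x) = f(x) on [0, 1] with zero boundary values, cut into linear elements, where each node's equation couples it only to its two neighbors, giving a tridiagonal system.

```python
# Minimal 1D finite element sketch: solve -u''(x) = f(x) on [0, 1]
# with u(0) = u(1) = 0, using piecewise-linear "chunks" (elements).

def solve_poisson_1d(n_elements, f):
    h = 1.0 / n_elements
    n = n_elements - 1                       # interior nodes only
    # Stiffness matrix of linear elements is tridiagonal: (1/h) * [-1, 2, -1]
    lower = [-1.0 / h] * n
    diag  = [ 2.0 / h] * n
    upper = [-1.0 / h] * n
    # Load vector: each interior node collects f from its two adjacent elements
    rhs = [f((i + 1) * h) * h for i in range(n)]

    # Thomas algorithm for the tridiagonal system
    for i in range(1, n):
        m = lower[i] / diag[i - 1]
        diag[i] -= m * upper[i - 1]
        rhs[i]  -= m * rhs[i - 1]
    u = [0.0] * n
    u[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (rhs[i] - upper[i] * u[i + 1]) / diag[i]
    return u  # values at the interior nodes

u = solve_poisson_1d(100, lambda x: 1.0)
# for f = 1 the exact solution is u(x) = x(1 - x)/2, so u(0.5) should be 0.125
print(u[49])
```

For the constant load f = 1 this reproduces the exact midpoint value 0.125, which is a handy sanity check; real libraries do the same assembly-and-solve loop, just in 2D or 3D with far more general elements.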
Let me talk you through what these commercial tools can do. They're pretty good at the traditional engineering applications. They have different materials; they can simulate how a piece of steel, a piece of plastic, a piece of soil deforms under a load, for example, and they have pretty good integration with the CAD tools and the visualization tools. They have nice GUIs that make them easy to use. What speaks against them is that they're almost exclusively based on mathematical methods from the 1970s and 80s, and we know a lot better today how to do this; if you're really interested in high accuracy at low computational cost, the commercial tools typically don't provide that. They're just too slow and too inaccurate for a lot of applications. They also scale really poorly to parallel computing. A lot of these tools are written in Fortran, for example, and they don't make use of the 16 cores in my laptop. In particular, they don't make use of the 10,000 cores that we can have in a cluster today, and there are applications that really need that. So when you run Abaqus, for example, on a 16-core machine, you get maybe a speedup of six to eight, and that's it. It doesn't matter how much bigger the machine is; that's the speedup you're going to get, at best. Then, a lot of the problems that we have today are really coupled problems. You want to simulate the response of an electromechanical device: let's say you have a dielectric that you apply a current or a voltage to, and you want to see how it deforms. These sorts of coupled problems are not well represented in commercial software. The other world is the open-source world, which does not have these sorts of big, commercially available applications, but there are a lot of toolboxes around. And deal.II, the project that I represent, is one of those. So there's deal.II, there's libMesh, there's FEniCS, there's GetDP, there's FreeFEM, and a number of others.
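The speedup ceiling mentioned above, six to eight on sixteen cores no matter how big the machine, is exactly what Amdahl's law predicts whenever a fraction of a code stays serial. A two-line illustration; the 90% figure below is a made-up example of mine, not a measurement of any particular code:

```python
# Amdahl's law: speedup on n cores when a fraction p of the work parallelizes.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# If ~90% of the runtime parallelizes (illustrative number), 16 cores
# give only 6.4x, and even infinitely many cores cap out below 10x.
print(amdahl_speedup(0.9, 16))      # 6.4
print(amdahl_speedup(0.9, 10**9))   # just under 10.0
```

This is why the serial fraction, not the core count, dominates: getting to the 300,000-core scaling mentioned later requires driving that serial fraction to nearly zero.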
deal.II is probably the largest of these toolboxes. They are toolboxes in the same way that MATLAB is, for example. MATLAB doesn't tell you exactly what you can solve with it; it just offers you a lot of different data structures and algorithms that work on them, and how you plug these together for your own application, that's up to you. And that's how these libraries work. They're really just toolboxes for basically every finite-element-related thing that you can think of. What makes them really useful is that they're used in methods development: if you're a numerical analyst, for example, and you have a new idea that you want to try out, these are the tools to do it. And if you have a non-standard problem, if you do want to couple, let's say, electromechanics to solids or radiation to fluids, then these are the tools with which you would build this. And I'm going to show you a few of the applications that people have built on this. The big libraries in this field are high quality; they've typically been developed for 20 years by people who really know what they're doing. They have a million lines of code or more. They are tested many times per day; in our case, we have 12,000 tests that we run multiple times a day, and most of these run for every pull request. If you want to change a comma, we run 12,000 tests on it. So it's really high quality. They use modern mathematical methods, and some of them scale very well to parallel computing. Our software, for example, has run on up to 300,000 cores and scales almost linearly up to this size. So there are really interesting things that you can do with this that you couldn't do with commercial tools. At the same time, they're not a particular code for a particular application. They don't typically have GUIs, and for the upstream things, let's say geometry generation and meshing, they only interface with other software; the same is the case for downstream.
So I don't have good pictures that are generated in my software. The pictures that I'm going to show you are all written in a particular file format and visualized in ParaView or VisIt, for example. This is the realm in which this kind of software lives. In the case of deal.II: it's open source, so we never really know how many users we have, somewhere in the realm of hundreds to thousands. We know of about 1,400 scientific publications that have been written with this; that's probably a substantial underestimate. We know of maybe 10 or 20 commercial projects that use and integrate it; that's probably also a substantial underestimate. It's almost one and a half million lines of C++. It's a community of 10 principal developers, and every year, for each release, we have between 30 and 50 people who contribute. So there's a sizable community around it that works on this. We came to realize many years ago that the thing that makes it usable is the documentation. You have to teach people how to use this, and it's not so easy because there's no GUI. It's like MATLAB, where you really have to learn what the functions are that you can use, and, if you know what function you want to use, how you actually call it. So we have many thousands of pages of HTML documentation. We have about 70 tutorial programs that show not just how to use one function, but how they all work together. We have video lectures; we run short courses. That's our way of building a community of people who know how to use this. All right, so let me give you a few of the applications that people have done. There are many, many more, but I'm just going to show you, for three minutes, those that might be of interest here. As an example, for people who have an aneurysm, let's say a bulging-out aortic vessel, you would like to simulate how blood flows through the aorta, into this big sac that shouldn't be there, and then back out again.
You see that this is all turbulent flow in here; vessels are not static pipes, they expand. So you have a problem that's coupled between fluid and solid: the pressure wave that comes from the heart, for example, expands the aorta. It's these sorts of complicated applications that are built on top of deal.II. Another one: if you want to simulate things like growth processes. This is an application from one of my colleagues in Michigan, who wanted to simulate how mollusk shells grow. But there are many other settings where you're growing things, let's say on industrial scales. That's a process where it's not just the deformation of an existing solid; you have the interaction between the solid, the stresses in there, and then the deposition of more material from a solution, for example. These sorts of things we do pretty well. Here's an example of a microscopic antenna. You have a plasmonic crystal at very small length scales compared to the actual device, so maybe you want to simulate this directly, or you want to homogenize. So there's a lot of multiphysics, again, that comes into play here. Here's another one; this is the flexoelectric effect, the interaction of electric voltage and mechanical deformation. So the point I wanted to make is that deal.II is none of these applications. It's the foundation for these sorts of applications, right? Just as you can build all sorts of things on top of MATLAB because MATLAB has all of the functionality, deal.II provides all of the functionality here. A typical application that people build has non-trivial geometries, coupled systems of nonlinear PDEs, nonlinear and linear solvers; you want to visualize this in some elegant way; maybe you want to parallelize it; maybe you need mixed or higher-order finite elements, so it becomes more mathematical there.
So there's a lot of functionality that goes into each one of these applications, and if you wanted to write something like this from scratch, it would surely be on the order of tens of thousands to hundreds of thousands of lines of code. All of these commercial tools, for example, were written from scratch, and they are tens of thousands to hundreds of thousands of lines of code. And one of the things that you have to keep in mind is that the typical productivity of a single programmer is about 20,000 lines of code per year if you do this full time. So it would require many, many years to write each one of these simulators. And the idea of these toolboxes, of course, is: well, we can give you all of the tools, and then maybe you only have to write 500 lines and it's all there. What we set out to do 20 years ago, when we started deal.II, is that it should support all of these things, like complex geometries and complex computations; be general, that is, independent of the application; and scale to very large machines. And we do this through building blocks. Here are a few of the building blocks that we have. Adaptive meshes: that means if something is going on here, I need a very fine mesh, so I break it into small volumes, whereas if nothing is happening over there, I can get away with coarse meshes, and thereby keep the overall effort under control. It has interfaces to all of the major downstream graphics programs, and also to the upstream meshing and geometry programs; for example, we interface with Gmsh and OpenCASCADE, which were presented earlier here. It has all of the finite element stuff, maybe not all that important for you guys here. We have about 1,000 downloads per month, which already gives you an idea of the size of this. Over the 20 years, about 250 people have contributed to it. So it's a sizable community project. We merge about 10 pull requests every day.
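The adaptive-mesh building block just mentioned, fine cells only where something is going on, can be sketched in a toy 1D form. The refinement criterion here (the variation of a given function across each cell) and all function names are illustrative choices of mine, not the actual refinement machinery of deal.II:

```python
# Sketch of adaptive meshing: repeatedly split the cells where the
# "error indicator" is large, keep the rest coarse.

def refine_adaptively(f, n_steps, a=0.0, b=1.0, n_initial=4):
    cells = [(a + i * (b - a) / n_initial, a + (i + 1) * (b - a) / n_initial)
             for i in range(n_initial)]
    for _ in range(n_steps):
        # toy indicator: how much f varies over each cell
        indicators = [abs(f(r) - f(l)) for (l, r) in cells]
        threshold = 0.5 * max(indicators)
        new_cells = []
        for (l, r), eta in zip(cells, indicators):
            if eta >= threshold:             # split cells with a large indicator
                m = 0.5 * (l + r)
                new_cells += [(l, m), (m, r)]
            else:                            # keep the rest coarse
                new_cells.append((l, r))
        cells = new_cells
    return cells

# A function with a sharp step near x = 0.5: cells cluster around the step,
# while the smooth regions stay at the coarse initial width.
step = lambda x: 0.0 if x < 0.5 else 1.0
mesh = refine_adaptively(step, 5)
print(len(mesh), min(r - l for l, r in mesh), max(r - l for l, r in mesh))
```

After five sweeps the mesh is 32 times finer next to the step than far away from it, which is the whole point: resolution is spent only where the solution demands it.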
There's this council of 10 principal developers that reviews every patch and mentors newcomers to become productive in this community. So every patch typically has a back and forth where we say, you could do it easier, or better, or maybe more generally, if you did this, and then we merge about 10 of these per day. Here's the number of publications; there's nice growth over time, so I think it's going in the right direction. Just as an example of what people have done, collected a couple of years ago from just one year: there are applications in the biomedical field. There are a lot of applications in the fluids field. Some fundamental physics, quantum mechanics, and neutron transport. There's my original field of numerical methods research. There are a lot of people with different applications from solids, from electronics. There are some people who do financial modeling. You get the idea: it's a general tool. It's not tailored to one particular purpose, but you can build your purpose-built PDE solver on top of it. I'll cut some of these slides in the interest of time, but one of the things that we as a project have realized, and probably many of you have realized in your projects too, is that software does not live by its technical correctness alone; a piece of software is not usable just because it produces the correct output. A lot of it really goes into recognizing that your users are people. You have to have ways to teach them. They have to have ways to learn by themselves, because we don't scale; we can't answer every question that anybody might have. The true success factors really lie beyond the code: it's about usability and quality, about documentation, about community. And so, among the things that we do: we check every function call, so whatever you call, the input arguments must be consistent. There are extensive test suites that make sure that we don't introduce bugs.
We try very hard to provide meaningful error messages, all of these sorts of things. We have a lot of cataloged use cases that you can go to and say: I want to solve this particular problem; let me see if something in the tutorials already does something like this. A typical application really just starts with one of the existing tutorials, copies it, and then morphs it. That means that you typically have graphical visualization from day one. You have something that already runs, that already works. It might not be the application that you're looking for, but if you have, let's say, a heat equation, then the jump to making the coefficient anisotropic or non-constant is not so large. You have something that you can already look at and also visually debug. So there are a lot of these use cases. There's a lot of documentation at many different levels, including the tutorials and the video demonstrations; there are about 70 of these tutorial programs. There are some rather large applications that are also open source, which you can go to and look at: well, how did they do this? So those are the tools that are available. Maybe just as an example, if you wanted to build something: I live in the academic world, so a lot of my customers are graduate students around the world. In a three-year graduate research project, for example, it's realistic for them to start with the physical effect that they want to simulate and, at the end of three years, have something that solves a complex multiphysics problem in realistic geometries, with high-order finite elements and a good solver, runs in parallel, and produces output from which they can generate high-quality graphics. And because you have a good starting point in many of these tutorial programs, you have something that you can show to your advisor on day one.
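The "copy a tutorial and morph it" workflow just described can be seen in miniature below. This is entirely my own illustrative code, not a deal.II tutorial, and it uses finite differences rather than finite elements to stay short: a little solver for the steady heat equation -(a(x)·u'(x))' = f(x) with u(0) = u(1) = 0, written so that morphing from a constant to a non-constant conductivity a(x) is literally a one-argument change.

```python
# Steady 1D heat equation -(a(x) u')' = f, u(0) = u(1) = 0, on n intervals.
# The coefficient a is passed in as a function, so "morphing" the model
# means changing one lambda, not rewriting the solver.

def solve_heat_1d(a, f, n):
    h = 1.0 / n
    am = [a((i + 0.5) * h) for i in range(n)]            # a at cell midpoints
    lo = [-am[i] / h**2 for i in range(1, n - 1)]        # sub-diagonal
    di = [(am[i] + am[i + 1]) / h**2 for i in range(n - 1)]
    up = [-am[i + 1] / h**2 for i in range(n - 2)]       # super-diagonal
    rhs = [f((i + 1) * h) for i in range(n - 1)]
    # tridiagonal (Thomas) solve
    for i in range(1, n - 1):
        m = lo[i - 1] / di[i - 1]
        di[i] -= m * up[i - 1]
        rhs[i] -= m * rhs[i - 1]
    u = [0.0] * (n - 1)
    u[-1] = rhs[-1] / di[-1]
    for i in range(n - 3, -1, -1):
        u[i] = (rhs[i] - up[i] * u[i + 1]) / di[i]
    return u

# day one: constant coefficient ...
u_const = solve_heat_1d(lambda x: 1.0, lambda x: 1.0, 64)
# ... later: the only change is the coefficient function
u_var = solve_heat_1d(lambda x: 1.0 + 10.0 * x, lambda x: 1.0, 64)
```

With a = 1 this reproduces u(0.5) = 0.125 from the textbook solution x(1-x)/2, and the variable-coefficient run reuses every other line unchanged, which is the point the talk makes about starting from something that already works.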
Three years is maybe not the project size that most of us think about, but it goes faster if you have more experience. If you're like me, for example, you find some company that wants to work with you, who says: we have this thing, we don't know how to do it, but we know you do. I have one project where a company wanted me to simulate the membranes on cell phones that cover the microphones, for example. It's a membrane that oscillates; it also has some stiffness. That took me about two weeks of full-time work. And it's validated against their in-house code, so it's not just some code that produces nice pictures; it actually does the right thing. And the code is on the order of 800 lines; that's something that I can write in two weeks without too much trouble. So it's not that much work to produce something that is actually fairly complicated. What these toolboxes lead to, of course, is that we produce codes that are smaller, more correct, and faster to write. In the case of the numerical mathematics community, it means that we can develop methods and demonstrate that they're actually useful for real applications, as opposed to just solving the Laplace equation on the unit square, okay? That's sort of what I had to say about this library. So to sum this up: deal.II is one of three or four widely used libraries that are high quality, developed by professionals who do this for most of their time. It allows building codes that are substantially faster and more accurate than what the commercial tools do, and in particular that are applicable to things for which there are no commercial tools. That's what we're talking about. Yeah, so to repeat the question: is deal.II software that allows us to write solvers from scratch? Yes, that's exactly what it is. It's a collection of data structures and algorithms, if you want. It has, for example, a mesh data structure.
It has a matrix data structure and a vector data structure. And then there are algorithms that operate on these: refine this mesh adaptively, in such a way that in those places where the solution is oscillatory you make the mesh fine, whereas in places where the solution is nice and smooth you keep it coarse; that would be one algorithm. Another algorithm would be: solve this linear system. All of these sorts of things. So at the end of the day, a deal.II program, while written in C++, does not look fundamentally different from what you would write in MATLAB. The syntax is different, but I would say that half of the lines of code are actually calls into deal.II. You could say we just use C++ as our DSL, our domain-specific language, and we call into deal.II in all of these places. How come these proprietary software packages are using such old-fashioned algorithms? So the question is: how come these proprietary software packages use old-fashioned technology? Well, the answer is: a programmer can write 20,000 lines of code per year, right? So if your software is 500,000 lines, it's a very, very substantial investment for these companies to replace it with something more modern. I think that some of these companies, like the one behind Abaqus, would have the financial resources to do this. But there was not the market pressure, at least in the past, to go to substantially parallel computers. I think companies were not willing to do this; they wanted to work on individual workstations. And while a workstation had four cores, maybe that was okay, because Abaqus could use that. But now a workstation has 32 or 64 cores, and people surely will become impatient with this. I think the larger reason behind this, actually, is that if you replace your solver and your discretization, you're going to produce answers that are slightly different from what the previous version of the software did.
And that makes people really, really nervous, right? My personal opinion is that you would probably make it more accurate. But there are all of these designs that some manager already signed off on and said: I saw the results, the bridge is going to work, I put my name on the blueprint. If the next version of Abaqus gives you a different answer, you might get very nervous about your past designs, and you might not trust the new designs. And I think that is a big factor in why these companies stick with what they have. I think they sell licenses per core, per seat, and if it's not so fast, you sell more. That's a good point. Yes, the companies sell licenses per core. Yes. A question on the grid generation: you showed a complex geometry, such as an aneurysm, for example; how do you typically do the grid generation? So, how do we do the grid generation for complex geometries? We interface with tools like Gmsh, for example. We can read essentially every mesh format you can think of. So if your geometry is fixed, you take your CAD model, you put it into Gmsh, Gmsh gives you a mesh, we take the mesh and we solve on it. If the geometry changes in response to the solution, let's say you have a pulsing heart, for example, then, if the changes are not too bad, we can deform the mesh along with the geometry as part of the simulation; or we deform the geometry, send it to Gmsh, Gmsh gives us a new mesh, and we do this again. So we interact with other software packages. A long time ago we made the decision that there are things that we do really well, and there are things other people do really well, and we're going to build on what other people do instead of trying to reinvent the wheel. That's the last talk for today.