So we've got a lot to talk about, so I'm going to talk fairly fast, but it's going to be lots of fun. I'm going to start with figuring out how this thing works. Okay, so first, some warnings. Massive simplifications lie ahead, because this stuff is way too complicated to fit into an hour, and we don't even have an hour. There's going to be some math, hopefully not too much. I'm not a climate scientist; some of you might be, and I want to talk to you afterwards, but I'm just a hacker who got really interested in this stuff. And there will not be a written exam. That's a lie.

So this desert is the Błędów Desert in Poland. It's a naturally occurring desert left over from the last ice age, and this one is fine. But every year we lose about 27 billion tons of fertile soil to land degradation, and we lose about 120,000 square kilometers of land to desertification. That's about three Belgiums, if you're counting. Desertification is impacting 168 countries worldwide, and this is a problem. When people talk about climate change, they tend to talk about CO2. CO2 we kind of understand: we need to release less of it, and we need to capture more of it. But that's not the only process. There are all sorts of processes like desertification, eutrophication, and so on, and we need to understand them and deal with them.

So in order to deal with all of these effects, we use computers. We try to catalog the oceans, the atmosphere, the biosphere, whatever systems, and we need to understand how these processes work on a very deep level. One of the main methods is to write software that simulates how all of this stuff works, run it, and hopefully come up with some useful predictions. Then we test those predictions against real data, and if that works, perfect: we know the model is at least somewhat reliable, and we can set up lots of different scenarios and so on. So I'm going to talk a little bit about how these models work, but I'm also going to talk about why some of these models, and a lot of the modeling methods we use, are stuck in the 80s. That's actually a little bit terrifying, but we'll get there.

One big note, from George Box: all models are wrong, but some models are useful. We want to make use of the fact that some models are useful. And a few non-goals: this is not a course in climate science; it's not a deep dive into any specific model, though I'll talk a little about one of them because it's useful for getting an idea; and it's not intended to be scientifically rigorous, which means I'm simplifying a lot of stuff here.

So let's talk about rocks in space. Space turns out to be full of stuff, and some of that stuff is rocks. If a rock in space is massive enough, then over time the battle between gravitation and the internal stresses of the material will cause it to settle into a sphere shape. Depending on where it is and what it's doing, it might be called a planet. This picture is actually Phobos, one of Mars' moons, but we're going to talk mostly about planets. So if you know a planet's mass and its radius, you can calculate the escape velocity. It's easy. And the escape velocity is basically how fast you need to be going to escape the planet's gravitational influence.
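To make that concrete, here's a back-of-the-envelope sketch in Python. It's not from any real model, just the textbook formula v = sqrt(2GM/R) with standard values for Earth:

    import math

    G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
    M_EARTH = 5.972e24  # mass of Earth, kg
    R_EARTH = 6.371e6   # mean radius of Earth, m

    def escape_velocity(mass_kg, radius_m):
        """Speed needed to escape a body's gravity from its surface."""
        return math.sqrt(2 * G * mass_kg / radius_m)

    print(escape_velocity(M_EARTH, R_EARTH))  # ~11186 m/s, about 11.2 km/s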
So that means people in rockets, or rockets without people, but it also means liquids and gases and things like that. So let's talk about the gases. If you have a gas and you know its molecular mass, and you know roughly how hot it is, then you can calculate the average velocity of an individual molecule. Some gases are heavy, others are light. Hydrogen has an atomic mass of one, but it tends to be found in nature as H2, so the aggregate mass is two. Nitrogen has an atomic mass of 14, but it's found as N2, so 28. So what happens if the typical velocity of the gas molecules is similar to the escape velocity of the planet? Well, the gas goes away. And even if it's significantly smaller, say about 10% of the escape velocity, most of it will still leak away over time. So if you know the average temperature and the masses of the gases near your rock in space, you can figure out how likely each gas is to fly away.

So take a planet, let's call it Earth, with its 6,371 kilometer mean radius. We can calculate the escape velocity, and for each of these different molecules we can roughly figure out whether it's likely to fly away. This is pretty cool. If a gas flies away, we say it's boiling off; the gases that don't boil off make a soupy thing that we call an atmosphere. You can calculate this for any given planet, which is why we know that Mercury shouldn't really have an atmosphere, and also kind of why it rains diamonds on Jupiter. This is really cool.
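Putting the two sketches together, here's the same argument in Python. The 10% rule of thumb is the talk's rough threshold; real atmospheric escape depends on the high-speed tail of the velocity distribution and on the temperature way up in the exosphere, so treat this as the flavor of the argument rather than the physics:

    import math

    K_B = 1.381e-23     # Boltzmann constant, J/K
    AMU = 1.661e-27     # atomic mass unit, kg
    V_ESCAPE = 11186.0  # m/s, from the previous sketch

    def rms_speed(molecular_mass_amu, temp_k=288.15):  # 288.15 K is ~15 C
        """Root-mean-square speed of a gas molecule at a given temperature."""
        return math.sqrt(3 * K_B * temp_k / (molecular_mass_amu * AMU))

    for name, mass in [("H2", 2), ("He", 4), ("N2", 28), ("O2", 32), ("CO2", 44)]:
        v = rms_speed(mass)
        verdict = "boils off over time" if v > 0.1 * V_ESCAPE else "sticks around"
        print(f"{name}: {v:6.0f} m/s -> {verdict}")

Run it and hydrogen and helium leak away, while nitrogen, oxygen, and CO2 stay put, which is pretty much the atmosphere we actually have.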
So I did sneak one thing in there, though: I assumed that the average temperature is 15 degrees Celsius. How did we get there? Well, let's talk about stars. A star is basically a giant pile of stuff in space that lives in an equilibrium: there's so much stuff that gravity wants to crush it together into a point, and the pressure and temperature are so crazy that it wants to rip itself apart. Out of that equilibrium we get nuclear fusion, which produces electromagnetic radiation. This is actually an X-ray image of our sun, assembled from measurements by a bunch of different satellites. Very nice.

The radiation from different stars comes out at different frequencies. Radiation goes out in all directions, and we use the black-body radiation laws to figure out roughly which frequencies you're going to get; the Stefan-Boltzmann law gives the total power output as a function of temperature. Based on this, our sun's output peaks around the visible spectrum, which is very convenient for us. But there's a weird thing about it. You see, most of the gases in Earth's atmosphere, the main exception being water vapor, are actually transparent to light at those frequencies. So the energy just goes right through and mostly heats up the rocks and the stuff on the ground. That stuff warms up and starts to emit at different, longer wavelengths, and that is how we warm up our atmosphere.

So we end up with a systems diagram like this. We've got solar radiation coming in; it reaches the ground, the ground warms up, it heats the atmosphere, the atmosphere heats the oceans (there are actually more inputs here; I'm simplifying, remember), and eventually some of it escapes as infrared radiation. Now, the cool thing is that the input and the output are roughly the same, so our planet is in a nice thermodynamic equilibrium. The main question becomes: how big are the buffers? How much energy are we storing at any given point in time? And that's actually the crux of a lot of what we're doing in climate science these days. An interesting side note: Jupiter has more output than input, which suggests there's lots of cool stuff happening deep down inside it.

But let's talk about these buffers a bit more. The two main buffers for Earth are the atmosphere and the ocean. The rocky ground bit matters too, of course, but we're not here to talk about geophysics, and that stuff happens on a much longer time frame than we care about here. So for our purposes, the atmosphere is a big one. The heating causes some of the air to rise: hot air is lighter, so we get buoyancy, and colder air has to come in underneath it. This causes pressure differences, and that causes wind. The wind direction is driven by these pressure differentials and also by the rotation of the rock, the Coriolis effect. Some people pronounce it differently, but the guy was French, so you know. Anyway, a rising parcel of air will cool as it rises, and the atmosphere is a mix of gases, so the cooling and the pressure difference might trip over a threshold that causes a phase change in some of the constituent gases. This is why we get things like rain, and it also explains how we end up with diamond rain on Jupiter: phase changes happening as things move up and down in the atmosphere. If the air is cold enough that the relative humidity hits 100%, the air can't hold the vapor anymore. So basically dry air means no clouds, and wet air means lots of clouds, and if you go outside you can figure out roughly how wet the air is. For this we can look at the adiabatic lapse rate, which in Earth's troposphere is about one degree for every 100 meters. So if you climb a one kilometer high mountain, the ambient air temperature is going to be about 10 degrees colder than down in the valley below.
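That rule of thumb is a one-liner in Python, with the caveat that 0.0098 degrees per meter is the dry adiabatic rate; saturated air cools roughly half as fast, because condensation releases heat:

    def temp_at_altitude(temp_surface_c, altitude_m, lapse_c_per_m=0.0098):
        """Approximate temperature of a parcel of dry air lifted adiabatically."""
        return temp_surface_c - lapse_c_per_m * altitude_m

    print(temp_at_altitude(15.0, 1000))  # ~5.2 C on top of a 1 km mountain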
So oceanography works in similar ways, except there we have an incompressible fluid, and salinity gradients have a lot of effect: how much salt is in the bit of water you're looking at? This is why we have things like the thermohaline circulation, which drives currents like the Gulf Stream. And sea ice has a lot to say. I talked with an oceanographer recently who gave a ballpark estimate of tens to hundreds of terajoules going unaccounted for every day, because the sea ice models are just not quite good enough. And then, of course, there's the interface layer between the water and the air, which matters a huge amount because of the energy transferred across it.

So at this point you're all going, wait, what? I thought this was about Earth systems modeling. But yeah, that was the 10-minute, super-simplified meteorology version, because I want you to understand how many different forces and dynamics are going on here. This is kind of obvious stuff, but at the same time it's amazing how many details there are. So in order for us to model this stuff, to make a computer model, we need to break it down into some kind of grid. The first step in any computer simulation is to create an operational environment in which the simulation can take place, both in space and time. We could start on a quantum level and work our way up, simulating each particle, but that would just be way too much stuff and way too much information to keep track of. There's a video about exactly this from Brain Truffle on YouTube; he has an amazing series on computational fluid dynamics, and I really love it. But we can't deal with all of that detail, it's just too much, so what we're going to do instead is use macroscopic statistical measures such as pressure, temperature, density, flow direction, and so on. This is just a practicality.

But Earth, as you know, has a surface area of about 510 million square kilometers, and because of all of this motion and interconnectedness in the atmosphere and the oceans, we can't just simulate the bits we care about; we have to simulate the entire thing. Different areas influence each other: here we are on this nice little island, but the North Sea is influencing us, the Alps are influencing us, and so on. So we need to take care of it all. We need to break this 510 million square kilometer area into a grid of some kind. We can do that using rectilinear grids like this one, or variable-resolution grids, or other schemes, and each of these options has different trade-offs.

A rectilinear grid is super basic; it's essentially just an equirectangular latitude-longitude projection. It's super simple, but it has this problem: the grid squares at the top squish a lot, so the poles end up really finely resolved while the equator is less so. And this is actually used a lot: if you go and find climate data online, it's very likely to be on a 0.25 or 0.1 degree latitude-longitude grid. That's just the state of things. But of course there are problems with that. A one degree grid gives you blocks on the order of 100 by 100 kilometers, and even at 0.1 degrees you're still at roughly 10 by 10 kilometers, while most of the stuff we need to be doing is at the one by one kilometer level or less. So you can imagine how quickly we end up with a huge number of blocks. One thing we can do is be more flexible and use a variable-resolution grid, where we say: this is the area we really care about, so let's be more accurate there and less accurate elsewhere. The problem is that you end up having to deal with all these special boundary conditions at the transitions from low resolution to high resolution, so you need to do special accounting. In terms of the science, this is fine; in terms of running on a computer, it's atrocious, because you need extra if statements and you basically thrash your CPU cache. But this is actually used a lot too. This picture, emphasizing the Greenland ice sheet, is from NCAR, the National Center for Atmospheric Research in the US; I'm using a lot of material from them in this talk.

Another, more recent approach is the Voronoi grid. This is used in MPAS, the Model for Prediction Across Scales. You're probably familiar with Voronoi meshes: you tile the sphere with mostly hexagons, plus a few pentagons, and you can make the cells smaller where you want more accuracy. The nice thing about this is that you use the same correction function at every step; there aren't any weird boundary conditions, because everything is a boundary condition, and you get some nice pretty math out of that. And one I really like is the geodesic grid. This is used, for instance, in the ICON model: you start with an icosahedron, subdivide every triangle into four smaller triangles, do that enough times, and you end up with a very high resolution planet. This is very nice.
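The bookkeeping for that subdivision is simple enough to sketch. This is just the talk's numbers in Python: 20 icosahedron faces, each subdivision level quadrupling the cell count. The cell widths are rough sphere-averaged figures, not the real ICON grid spacing:

    import math

    EARTH_AREA_KM2 = 510e6  # about 510 million square kilometers

    def geodesic_cells(depth):
        """Cell count and rough cell width after `depth` subdivisions."""
        n_cells = 20 * 4**depth  # every level splits each triangle into 4
        mean_area = EARTH_AREA_KM2 / n_cells
        return n_cells, math.sqrt(mean_area)

    for depth in range(0, 13, 3):
        cells, width = geodesic_cells(depth)
        print(f"depth {depth:2}: {cells:>11,} cells, ~{width:,.0f} km across")

Twelve levels down you're at a third of a billion cells and roughly one kilometer resolution, which is exactly the scale the talk keeps coming back to.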
So this is one of the things that I'm using a lot in my own systems, and it's super cool, because it means each triangle of the icosahedron can essentially be the root of a quadtree, and some very nice computer science falls out of that. Anyway, we pick a grid and subdivide the planet into blocks. We'll call these grid blocks or voxels or something, depending on how I feel. Even on a rectilinear grid these blocks are going to be tapered somehow; they're never exact cubes, because we're on a spherical planet. We also subdivide vertically, but differently from how we do it horizontally. Most models use maybe 10 or 20 meter altitude steps low down, up to maybe 500 meters altitude, and then 100 or 200 meter steps above that, but it depends on what exactly the model is trying to accomplish.

So having done that, now we get to run physics on these voxels. Regardless of which grid you select, the next step is to codify the physics on that grid, and for that we need a few things. First off, a global timer, because we need to calculate the time steps; this is the same as the game loop in any computer game. You need some kind of shortwave radiation model, for the input coming in from the sun. You need a longwave radiation model, for the infrared light going out from the planet. You need some kind of continuity equation, often in semi-Lagrangian form, for modeling how flows move through each grid block: where stuff comes from and where it goes afterwards. You're going to need a bunch of planetary factors, the general circulation bookkeeping, where you keep track of, say, the Coriolis effect and all of the physical constraints of the planet. And then you need interaction models, such as how the sea surface and the atmosphere exchange energy. If you have all of these things and press play, you've actually made a lot of progress.

Then you need a data structure. Remember, each grid block is essentially just a structure of data in memory that stores a bunch of values: pressure, density, temperature, and so on. Some of these are more fundamental than others: if you start from quantum physics and work your way up, pressure and temperature come out of it naturally. Others, like flow velocity, aren't really fundamental. You can derive the flow velocity and direction of wind from the pressure gradient, but you might want to keep track of it at this level of your model just to make the accounting a lot easier. Same with things like rainfall: do you keep it as a baseline value in each grid block, or do you have a separate process that keeps track of accumulated rainfall over time? That depends a little on what your goal is with the model. So we've got some derived factors too: wind speed, wind direction, cloud cover, cloud type.
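In code, the voxel-plus-game-loop shape looks something like this. It's an illustrative skeleton, not how any real model is actually structured; the field names and the toy cooling process are my own inventions:

    from dataclasses import dataclass

    @dataclass
    class GridBlock:
        """One voxel of atmosphere: the state we step forward in time."""
        pressure_pa: float
        temperature_k: float
        density_kg_m3: float
        humidity_kg_kg: float
        wind_u_m_s: float = 0.0  # derivable from pressure gradients, but
        wind_v_m_s: float = 0.0  # cached here to make the accounting easier

    def radiative_cooling(block, dt_s):
        """Toy stand-in for a longwave radiation process."""
        block.temperature_k -= 1e-5 * dt_s

    def run(blocks, processes, n_steps, dt_s=600.0):
        """The game-loop shape: every process visits every block,
        once per global time step."""
        for _ in range(n_steps):
            for process in processes:
                for block in blocks:
                    process(block, dt_s)

    blocks = [GridBlock(101325.0, 288.15, 1.225, 0.008) for _ in range(4)]
    run(blocks, [radiative_cooling], n_steps=144)  # one simulated day
    print(blocks[0].temperature_k)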
Deriving cloud type is actually really fun: if you know the temperature and the humidity levels, and you can calculate the lapse rate, you can figure out what kinds of clouds are most likely to arise. You won't get it really accurate unless you start accounting for atmospheric vorticity and terrain shape and things like that, but depending on how high resolution your model is, you can get this pretty good.

So this is where we come to the current day. Just to be clear, I'm not saying that Minecraft is better than Fortran. Minecraft is actually worse than Fortran; Fortran is really cool, it's a great language. The problem is that anybody can run Minecraft. Have you run any Fortran code recently? Both of these things are essentially working with voxel grids, but one of them is easy and everybody understands it, and the other one has produced this mess.

So these are roughly the two main categories of Earth systems models. On one hand, the meteorological models: GFS, the main American model; the IFS from ECMWF, the European Centre for Medium-Range Weather Forecasts, which is really amazing; ICON, which I mentioned earlier with its icosahedral grid, from the German Weather Service and the Max Planck Institute; and MPAS, which is relatively new and being developed at NCAR. These are all short-term weather forecast models: they're trying to figure out what weather will happen, where and when. On the other hand, we've got climate models. A climate model isn't saying there's going to be a hurricane at this particular location on June 14th, 2053; it's saying what the statistical likelihood of that kind of thing happening is. It's much broader, but you get all sorts of crazy stuff in there. And CESM, the Community Earth System Model, is an open source, publicly available model developed mostly at NCAR, the National Center for Atmospheric Research in the US. It is fantastic, and I'm going to take that one and expand on it a little bit, but not too deep a dive.

Some commonalities across all of these models. First off, they're almost always written in Fortran. This is mostly historical: a lot of the code was written in the late 70s and early 80s. The oldest bits of CESM were written about a year before I was born, and that's just the way it goes. They often have very confusing build systems; I'll expand on that in a bit. A lot of the code was written by postdocs and grad students who then moved on with life, so their dedication to code quality wasn't necessarily very high. And no, I'm not criticizing these people: they're excellent scientists, but they're not necessarily the best software developers in the world, and I think they would very happily admit to that. The models also have very weird structures. If we look at CESM, it's about 2.2 million lines of Fortran code, and about one million lines of that is just the atmospheric model. There are separate subsystems for land, sea, chemistry, sea ice, rivers, and so on. There's actually amazing development still going on; it's a very active team, mostly scientists, but with a couple of software developers who are kind of making everything wonderful. I love following their stuff.
They don't really use runtime configuration files. Instead, if you want to run a slightly different variation of the model, or run it under different parameters, you rebuild the entire thing. Which would be fine, except the build process is: you run a Python script that runs multiple shell scripts that occasionally run CMake and Make, and also Perl scripts, and it's the most confusing thing ever. And there are XML files that configure how it should be compiled for different clusters. The underlying assumption is that you're always using a cluster, even though the thing is flexible enough that you can actually compile it to run on a single machine, which is what people sometimes do on their laptops. But this model is the gold standard in climate science, and every other climate model is compared against it. If you go and look at the IPCC data, a lot of it comes from this model; not all of it, but a lot. One thing about it, though, is that it doesn't really operate at convection-permitting scales. I'll explain that in a moment.

But let's talk about the data. You've probably heard of the IPCC Assessment Reports, or ARs, like AR6. An AR is thousands of pages of scientific and technical explanation of what is happening in the climate. What's maybe less obvious is that behind the report there's a lot of data. The main thing is collected data: observations from weather stations, ice core data, agricultural data, and so on. But then there's this thing called CMIP, the Coupled Model Intercomparison Project, and version six of that, CMIP6, is about 30 petabytes. That's one of the big data sets you deal with if you're getting into climate science. To give a sense of typical file sizes: output from the GFS, the Global Forecast System, for a 16-day forecast period comes out at about three gigabytes, with about 27 million values per time step. Then you have reanalyses like ERA5, which take historical weather observations and run them through a forecast model to fill in all the blanks, because we don't have weather stations at every point in space. One month of that is about 36 gigabytes, which is 324 million data points per time step. So there's a lot there.

And these files come in lots of different formats. People use JSON and CSV and such for really small stuff. I got asked recently by a friend, hey, could you give me the ocean data? And I said, okay, sure, what file format would you like? And he said: Excel. I'm like, oh no, you don't want gigabytes upon gigabytes of Excel. So we use GRIB a lot, unfortunately. GRIB is horrible. GRIB is a binary format that was designed in the 80s to be efficient when you're reading from a tape deck, so it isn't really intended for random access: there's no index built in, and if you want to seek to a particular location, you have to scan through the entire file. Somebody joked that seeking in it is O(n), not O(log n). So that's pretty bad.
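To make that complaint concrete, here's what the forced linear scan looks like. A hedged sketch: it assumes a well-formed file containing only GRIB edition 2 messages, whose 16-byte indicator section starts with the bytes "GRIB" and ends with an 8-byte big-endian total length. Real-world files mix editions and quirks, which is why proper tooling like ECMWF's ecCodes exists:

    import struct

    def index_grib2(path):
        """Scan a GRIB2 file once and record each message's byte offset.

        GRIB has no built-in index, so finding message N means skipping
        over messages 0..N-1: an O(n) seek. Saving the offsets from one
        full scan at least makes every later access O(1).
        """
        offsets = []
        with open(path, "rb") as f:
            while True:
                start = f.tell()
                indicator = f.read(16)  # GRIB2 indicator section
                if len(indicator) < 16:
                    break
                if indicator[:4] != b"GRIB":
                    raise ValueError(f"not a GRIB message at byte {start}")
                (total_len,) = struct.unpack(">Q", indicator[8:16])
                offsets.append(start)
                f.seek(start + total_len)  # jump to the next message
        return offsets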
NetCDF, the Network Common Data Form, is actually fairly good and under heavy development; the people making it are fantastic, but honestly the interfaces could be a little bit easier. And then of course people are increasingly using Parquet, which is amazing. It's a cool thing; I really wish the reference implementation weren't written in Java, but here we are. So based on all of this, you can get the impression that there might be some places where we can improve things.

The first thing is that we need better models, and I don't necessarily mean that in a scientific sense. The scientists are doing a fantastic job, they're really on top of their game, but they need more hackers, more programmers, to help them make the code fast. Because if the code runs a lot faster, we can run it at higher resolutions, which of course makes it slower again, but that allows us to be more scientifically accurate. And on top of that, coming back to the Minecraft point, we can do better visualizations, because if we don't see it in some way, we can't really understand it. Unfortunately, a lot of the visualizations going around in science at the moment are generated after the fact, by people loading data into Jupyter notebooks and rendering a bunch of static images. That's not great. We're also, in climate science, very stuck in the old offline-render model. Remember when you used Blender or Maya or something like that, and you'd spend days or weeks drawing everything up, and then you'd press render and go on vacation for a week, because that's just how long rendering took? Nowadays the graphics industry uses Unreal Engine for making cartoons and whatnot, because it's real time. And there's a mental shift that happens when things become roughly real time: you suddenly have the capacity to play around and experiment a lot more. So I think there's a lot of space to bring techniques from the games industry into climate modeling to make things faster, and this is something we should be doing.

All of this stuff is complicated, but I think a lot of the problem with climate science at the moment is that it's being made even more complicated by the fact that the tools are really bad. From a scientific standpoint, there are some interesting challenges. One is the divide between the climate models and the weather models, and the gap between them is mostly this thing called convection-permitting scales: the point where the grid is fine enough that you can start to account for convection in the atmosphere. You can't do that in a typical climate model. This is Hurricane Katrina; some of you might remember it. In a typical climate model, this would come out to maybe 36 pixels: a hurricane a few hundred kilometers across, on a grid of roughly 100 kilometer cells, is about a six by six patch. You can't understand the dynamics of a hurricane from 36 pixels' worth of data. We need much higher resolution so that we can actually understand these dynamics. And that doesn't just mean hurricanes; it also means things like supercell systems, where you get massive surges of lightning and rain. If we don't account for those, we're going to have trouble getting the details right. But of course, the higher the resolution, the slower the model, and we get into problems.

So I love this quote from Gene Kim: any improvement made anywhere besides the bottleneck is an illusion. And we have a bunch of bottlenecks. One is convoluted toolchains: the fact that we're running a Python script that runs CMake and Make and Perl and Bash scripts and so on is just too much.
On top of that, you want things to be easily configurable. We kind of need to get rid of Fortran: it's an excellent language, but the developer pool for it is shrinking very, very quickly, so we need to start migrating away from it. We need to stop using GRIB as a data format, just categorically. We need better visualizations, so that more people can understand this. There's a big problem with using clusters: because these models run on discrete time steps, you get a burst of processing on all of the cluster nodes, and then at each time step all of the nodes exchange data with each other. So you get this very unbalanced peak-and-trough pattern on the network, and the latency is horrible. Maybe getting away from clusters is a good idea, towards single machines with lots and lots of cores. There's also a bunch of CPU stalling happening in all of these models, which essentially means they're running maybe 1,000 times slower than they need to be. And honestly, the code is a bit messy; I won't go into too much detail on that.

On the political, social, business side: there's a lot of money out there for climate change at the moment, and a lot of really cool projects are being well funded, but let's be honest, a lot of nonsense projects are being funded as well. And the problem is that a lot of the boring, unsexy stuff, like how you deal with different grids in a planetary systems model, is not being funded as well as it should be, and that makes it hard for the scientists to get the technical support they need. This is something that could be fixed.

And then there's a bunch of stuff where hackers, the technology community, what have you, could be helping. I put up a couple of examples. A simple GRIB to NetCDF converter: just being able to take a 36 gigabyte blob of ERA5 and recast it into NetCDF would make things easy. There are tools that do this, but one of the problems with GRIB is that you only know how to read a GRIB file if you happen to know what the person who made it was intending, and that's just a huge problem. (There's a sketch of what such a converter can look like below.) It would be good to have a generalized geodesics library for handling map data and such in the quadtree format I mentioned. It would be good to have an easy way of visualizing a sphere in the browser, a proper global view, because if you're using something like Leaflet at the moment, the poles get infinitely wide, and that's bad, because the poles kind of matter. And then, and I think this would be amazing if we could get it done: take a system like CESM and create a new build system that bakes each of the different sub-models into individual shared libraries, shared objects that could be hooked into from other languages. That would be a step towards releasing our dependency on Fortran without having to rewrite the two million lines of code behind it. But generally speaking, I want people to come and talk to me about this, because I'm very new at this stuff and I know there's a lot of interest, and I think we could be doing really good stuff to help out the climate scientists.
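For the happy path, that converter can be tiny, leaning on existing libraries: xarray with the cfgrib engine for reading, and NetCDF for writing (all real, pip-installable packages). The catch is the one named above: this only works when cfgrib can correctly guess what the file's author intended:

    import sys
    import xarray as xr

    def grib_to_netcdf(grib_path, netcdf_path):
        """Recast a GRIB file as NetCDF, which supports random access."""
        ds = xr.open_dataset(grib_path, engine="cfgrib")
        ds.to_netcdf(netcdf_path)

    if __name__ == "__main__":
        grib_to_netcdf(sys.argv[1], sys.argv[2])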
So that's the super, super oversimplified explanation. I'll take your questions now. Thanks very much. Yeah, line up behind the microphones. Just a quick check first: for the signal angels, do we have any questions from the internet? No questions at the moment. Okay, great. Yeah, go ahead. Sorry, I can't really hear you; you need to speak into the microphone, I think.

Thank you very much for bridging the physics that I know to the things I want to know about. But you didn't quite cover everything. Is there a way to get into this?

Yeah, a way to get into it. So honestly, the way I got into it was that I read a few books, and then I sat down with some of the model source code and just read a ton of Fortran. I'm sure there's an easier way. One of the problems, I think, is that the scientists, because they're very good at science, get deep into all the technical details, and there might be a lack of a super easy explainer for techies who want to get into this. So at the moment, I don't know. My method took me about three years to figure out the basics. So I'm sorry I can't point you in a good direction, but maybe a starting point would be to get people to collaborate on making a short technical intro, and maybe this talk is the start of that.

A list of books might be a start. For example, on your website, I don't know if you have one.

Yeah, you're right, I should make a list of books. I'll try to put one together and tweet it. Thanks for the question. Next up.

Hi, thanks. You mainly talked about the gold standard. What about other models? And maybe brute-force machine learning on this vast amount of data; where does that rank?

Yeah, okay. So the reason I called it the gold standard: I mean that from the scientific perspective, in that it has well-established shortwave and longwave radiation models, very good physics across all the different domains, and so on. Something like MPAS is really cool: it's got a good grid system, it's more recent, and it's actually pretty clean code, but it isn't as comprehensive; it doesn't cover all of the different things. So that kind of model is probably easier to understand, but it covers fewer details. As for machine learning: it turns out the atmosphere and the oceans are really complicated systems, and there have been attempts at using machine learning to capture them, but the technology just hasn't been ready for that level of detail. This is improving, though, and there's a possible shortcut that is really exciting. I don't want to get into it right now, but we can chat afterwards; the short version is that we can't necessarily use machine learning at the moment to predict the weather, but what we can do is predict what the slow, comprehensive models would say about the weather, if that makes sense. Thanks for the question. Next up.

Hi, first of all, great talk, thank you very much. I was wondering, looking at that grid system, or the grid projection: it seems to be very Cartesian, very X-Y. Are there efforts being undertaken, on a fundamental level, and maybe congruent with that on a fundamental computational level, to find some kind of vector or polar model that might have a better fundamental fit?

Yeah, so I mentioned three of the main gridding approaches: the rectilinear, the Voronoi, and the geodesic.
I'm a huge fan of the geodesic approach, because a lot of the math is just really elegant there, but there are a lot of projections that work quite well, and honestly, there's no hard rule about which is best. With the geodesic model, when you're simulating the physics you eventually need to visit every single grid block, so one of the questions is: how do we visit them? Do we do it in a linear order, or do we arrange them along some kind of space-filling curve, some nice fractal like the Hilbert curve? I think there's a huge amount of optimization potential there, but it's not been tapped into, at least in the models I've been looking at.

Very shortly, this reminds me of an even more, I guess, philosophical question: is nature analog in its fundamental nature, or is it discrete? Because if it is discrete, then we can perhaps find really well-fitting computational models.

Yeah, and this comes back to the point I was making when I referenced Brain Truffle, again an amazing YouTube channel, building computational fluid dynamics from the quantum level up. He explains really nicely that there's just too much information: even if nature is discrete, it's discrete at such a small scale that we just don't have enough computer to work at that level. So we need to simplify, we need to throw away data, and throwing away data in an effective way is actually the way to go. Thanks for the question.

Any questions from the internet, from the signal angels? None at the moment? Okay, go ahead.

You had a plea for help with cleaning up the current code: converting all that early Fortran code to a modern language to make it maintainable. But I was wondering, before rewriting all that code, is the original scientific model behind it still available?

Ah, so, yeah. To get at those, you need to start reading the scientific literature. One of the things that's quite nice about the models is that there are often comments in the code saying: this function here is based on this paper. So you can actually step through the code and find lots of references to the papers behind it. I did consider whether we could rewrite all of the Fortran. It's not practical; it's just too much stuff. And there are also quirks that come from the language itself. Fortran is an amazing language, and one of the things it's very good at is vector and matrix math; that's baked very nicely into the semantics. So often you'll look at a function and it looks super simple, but once you start trying to pull it apart, there's a bunch of Fortran magic implicit in it. A straight-line translation isn't going to cut it without a bunch of thought.

Okay, but as you correctly mentioned, the Fortran programmers are going extinct, so to speak, and it's very worrying that at some point there won't be any Fortran programmers left while you still have the code. And the same question will come up again in, let's hope, 100 years: if we translate everything to C, will that be the last C program?
So, how do we fix that? There are two more people who want to ask a question, so maybe we'll have a longer discussion after the talk, but thank you for the question. And a very short last question; we have just one minute.

Thank you for your talk, it was very interesting, and I'd love to learn more about the science. I was wondering about the Fortran language: I've read pieces and articles about not converting Fortran, but running it on things like the JVM or the .NET framework, to make it interoperate with other programming languages like C# or Java. How strictly is Fortran defined, and how modular are the models?

Yeah. So most of the model code is written in Fortran 90 at the moment, and it's nicely well-defined. Calling it from other languages is super easy: it's got a slightly different calling convention than C, but it's really similar. I've written bindings to call Fortran from another language called Jai, which is under development and which I've been beta testing, and it's not a problem. Also, just to say: Fortran is really easy, and it's actually not a bad language. People give it a lot of shit, but it's actually really nice, and it's worth learning, if for nothing other than the historical beauty of it. The shape of the interop looks roughly like the sketch below.
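Here's a hedged sketch of that calling convention from the Python side. Everything named here is hypothetical (libdemo.so and the subroutine saturate are made up), but the two mechanics it demonstrates are the standard gfortran conventions: symbol names get lower-cased with a trailing underscore, and every argument, including the array length, is passed by reference:

    # Hypothetical Fortran side (demo.f90), built with:
    #   gfortran -shared -fPIC -o libdemo.so demo.f90
    #
    #   subroutine saturate(t, rh, n)
    #     integer :: n
    #     real(8) :: t(n), rh(n)
    #     rh = 1.0d0  ! stand-in for real physics
    #   end subroutine
    import ctypes
    import numpy as np

    lib = ctypes.CDLL("./libdemo.so")

    t = np.linspace(250.0, 300.0, 10)  # air temperatures, kelvin
    rh = np.zeros_like(t)              # output array, filled by Fortran

    lib.saturate_(  # gfortran exports "saturate" as "saturate_"
        t.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
        rh.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
        ctypes.byref(ctypes.c_int(t.size)),
    )
    print(rh)

Great, thank you for the question. So we're at time now; if you want to ask a question, come talk to Smari after the talk. So let's thank our speaker again for an amazing talk. Thank you.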