Thanks a lot for the invitation and the opportunity to speak here. So on the 5th of December last year, Peter Scholze posted a challenge on Kevin Buzzard's blog, the blog of the Xena project. And in that challenge, he said: check the main theorem of liquid vector spaces on a computer. This theorem is a result that he proved together with Dustin Clausen, basically over the course of 2019. And he had several reasons for posing this challenge that we'll get to later. But first, I want to give a little bit of backstory. Nine days later, we hadn't actually finished the challenge. But nine days later, on Twitter, this image was posted. And these are four rock stars. If you don't know what a rock star is, it's something like a Fields Medalist, except in music instead of mathematics. And these four guys form a band called Liquid Tension Experiment. They released their first album in 1998 and their second album one year later. And then it went quiet for about 20 years. Then Peter Scholze wrote this blog post, and nine days later this band announced their third album. The album was actually released earlier this year, and one of the bonus tracks is called "Solid Resolution Theory". Now, if you've been following all the stuff around condensed mathematics, then there is enough room for a conspiracy theory here. But I will not go in that direction. Anyway, today we'll be talking about liquid vector spaces, the main theorem of liquid vector spaces, and how you can check such a theorem and its proof on a computer. So first of all, liquid vector spaces: what are they and why are they interesting? The problem that they try to solve is the fact that we do not really have a nice category of topological vector spaces over the reals. And what do we mean by that?
Well, certainly the category of Banach spaces, or the category of locally convex topological vector spaces, or the category of nuclear vector spaces, or whatever: all these categories that you could form in functional analysis are not abelian categories. And in particular, you cannot do homological algebra in these categories, because homological algebra wants an abelian category. So this is a problem, and this is what liquid vector spaces try to solve. Or it's at least one of the things that they try to solve. This all works via the condensed formalism. And I'm not going to go into the details of what condensed mathematics is exactly; that would go beyond the scope of this talk. But you need quite some heavy machinery to set up this theory of condensed mathematics. It uses sheaves on the pro-étale site, on the category of profinite sets. So that's a lot of complicated words. For this talk, if you don't know what a condensed set is, I would suggest that you think of it as an object with a topological flavour. But it's not just a topological space, because we're actually trying to solve problems that using topological spaces causes. And for very formal reasons, if you consider condensed ℝ-modules, so condensed ℝ-vector spaces, then this will be a very nice abelian category. But it has one problem: the natural tensor product of condensed ℝ-modules is not very well behaved. And this is not very surprising. If you know about tensor products in functional analysis, then already Grothendieck, in his PhD thesis, discussed maybe seven different tensor products. And one thing that is clear is that if you take a tensor product of topological vector spaces, then you want to somehow complete the tensor product. So you don't just want an algebraic tensor product; it should play nicely with the topology. So we want a different tensor product of condensed ℝ-modules.
And so what Clausen and Scholze suggest is that we look at a subcategory of the category of condensed ℝ-modules. This subcategory actually depends on a real parameter p between 0 and 1. And they call this subcategory the category of liquid vector spaces, or p-liquid vector spaces if we want to make the parameter explicit. And it has good properties. So let me explain a bit what these properties are. First of all, it's an abelian category. As a subcategory of the category of condensed ℝ-modules, it's closed under all limits, all colimits, and all extensions. What this means is that if you have some liquid vector spaces and you perform typical categorical constructions, like direct sums or products or kernels or cokernels or whatever, you will stay in the category of liquid vector spaces, even if you do these constructions as condensed vector spaces. And even if you take extensions of these things, you will not go outside the category of liquid vector spaces. The only construction, or at least the only very useful construction, that it's not closed under is the tensor product of condensed ℝ-modules. But it comes with its own liquid tensor product, and we'll discuss that liquid tensor product in a little bit. These categories also have the property that if you change your parameter p and make it a bit smaller, then the category becomes bigger. OK, so the fact that this category exists is maybe nice, but it needs to connect back to objects that we know from functional analysis. And the good news is that every Banach space is a p-liquid vector space for every p. And nuclear Fréchet spaces, for example, which are very nice topological vector spaces, are also all liquid. And these nuclear Fréchet spaces are somehow important in the theory. Nuclear spaces were introduced by Grothendieck in his PhD thesis.
And he singled them out as a class of objects for which there is only one sensible topological tensor product. So somehow these are the spaces where you can talk of a canonical tensor product. And this liquid tensor product extends, so to speak, the tensor product of nuclear Fréchet spaces. So if you have nuclear Fréchet spaces, you can view them as liquid vector spaces, take the tensor product, and it will be the same thing as if you took the tensor product as nuclear Fréchet spaces. But there are also a lot of new spaces available. And you can now take arbitrary limits and colimits of these things and stay inside this category. In particular, all the machinery of homological algebra is available. So that's one of the big selling points of liquid vector spaces. All of this can be summarized by saying that the real numbers form an analytic ring. Analytic rings are gadgets that were also introduced by Clausen and Scholze in their lecture notes on condensed mathematics. And once you have an analytic ring, a lot of powerful properties come for free. Now, in some sense there are not so many applications of this theorem yet, but it has been used to give pretty short and conceptual proofs of Serre duality for compact complex manifolds. And more generally, these analytic rings allow you to unify all of the different geometries that people have been looking at. So complex analytic geometry, p-adic geometry, algebraic geometry, arithmetic geometry: all of this can now be encapsulated in one big formalism of analytic geometry that Clausen and Scholze are developing. And the fact that you can view complex manifolds, or more generally real manifolds, as part of this formalism all depends on this theorem that there is an analytic ring structure on the reals. So what does this, in the end, boil down to? Let's write down some formulas. The actual theorem that goes into this is the following.
Namely, suppose you have two of these parameters, p' < p, between 0 and 1, a profinite set S, and a so-called p-Banach space V, which is more or less a Banach space, except that the norm interacts with rescaling not in the usual way: if you pull a scalar out of the norm, it gets raised to the power p. The other ingredient that we need is this so-called space of p'-measures. This is roughly the space of signed Radon measures, except that, again, the definition has to be tweaked to take the ℓ^{p'}-norm into account. I don't really want to go into the details of these definitions, but the statement is then as follows: if you look at arbitrary extensions, so extensions in arbitrary degree, of this space of p'-measures by this p-Banach space V, then all these extensions are trivial, so all the higher Ext-groups vanish. And this is the theorem that Scholze proposed in his challenge to be checked by a computer. He writes about this that he spent most of 2019 working on the proof, almost going crazy over it. The proof takes some very unexpected twists and turns. In the end, you need to reduce to a statement over an arithmetic ring, a sort of ring of Laurent series over the integers. And it wasn't really understood why this was needed in the proof. Scholze also thinks it's his most important theorem to date, but all the reading groups that were organized on condensed mathematics and analytic geometry, et cetera, didn't really look at this proof as far as Scholze knew, and more or less treated it as a black box. And the theorem can be used as a black box very well. But he hadn't gotten any feedback after a year: no questions about how the proof works or anything. And he had some small doubts left about whether it was actually correct.
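In symbols, and glossing over the precise definitions I just skipped, the statement of the challenge has roughly the following shape (my notation for the measure space and the ambient category is only schematic):

```latex
% Main theorem of liquid vector spaces (Clausen--Scholze), schematically:
% for 0 < p' < p \le 1, a profinite set S, and a p-Banach space V,
\[
  \operatorname{Ext}^i\bigl(\mathcal{M}_{p'}(S),\, V\bigr) = 0
  \qquad \text{for all } i \geq 1,
\]
% where \mathcal{M}_{p'}(S) denotes the space of p'-measures on S and
% the Ext-groups are computed in condensed abelian groups.
```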
So, as a sort of test case to see how useful these software packages are that you can use to formally check mathematics, to check proofs of theorems, et cetera, he posed this challenge. And I then took up this challenge together with a bunch of other people; I'll list them on the last slide. As a team, we worked on this challenge over the last year. So I want to talk about this project and then also give you a demonstration of how the system actually works, looking at a small example theorem. In this experiment, the thing we focused on in the first half year was what we dubbed the first target. This is a highly technical statement which has one advantage: it doesn't involve any condensed mathematics. The condensed mathematics is stripped away, but you end up with a sort of mix of homological algebra and functional analysis. Normed groups occur, but there are no real numbers involved anymore; there's also this reduction to this ring of Laurent series over here. I'm again not going into the details of this statement, because explaining the definition of this variant of exactness would take up time that isn't really useful. You can look up the details if you are interested. So we focused on this target first, and also on one of the questions that Scholze asks in his lecture notes, namely: the constants that appear in this first target, how do they grow in terms of the other input parameters? And so this is where we are right now. We've finished this first target, and we're now working on checking the proof of the main result by reducing it to this first target. We're not yet really halfway with that second half, though my intuition says that we're almost halfway. So what happened in the meantime, while we were checking these proofs?
Well, as usual when you're reviewing a paper or looking carefully at a draft, you find some typos, you find some things that were almost right, but not exactly right. So some proofs had to be changed a little bit, but Peter Scholze was around almost 24/7, and he would explain things to us and fix these little mistakes. And I think there were one or two lemmas where it really wasn't immediately clear how to fix things. I remember one day, I think Scholze had worked deep into the night because he really wanted to make sure that he could fix the lemma. And he was actually a bit worried whether it would work, because there are many different constants and variables, all bounded by each other via many technical inequalities, and you have to keep track of everything. And on top of that, there are several induction steps going on, so this proof is a huge technical mess. And that's why we were really happy to test this in Lean, in this theorem-proving software, because it can keep track of all these technicalities. So in some sense, this was a very nice proof to work on. Along the way, we developed a very detailed blueprint of the proof, which you can read online; I think there is a link on the last slide. And we answered the question from the previous slide: these constants grow doubly exponentially, and we could check this in Lean. Now, as one of the ingredients in the proof, to compute these Ext-groups you need to take certain projective resolutions, and what Scholze does in his proof is use Breen-Deligne resolutions. But these are pretty technical. You need a bunch of homotopy theory to prove the existence of these Breen-Deligne resolutions. So we had basically two options. Either we do all this homotopy theory and prove that these Breen-Deligne resolutions exist, if we want to actually check the full proof, or we treat the existence of these Breen-Deligne resolutions as a black box.
And the latter was what I opted for first. So I started writing down the statement of the existence of these Breen-Deligne resolutions and tried to axiomatize it. And then I realized that if you took a slight variation of this axiomatization, I could write down an example of a complex that had almost all the properties of these Breen-Deligne resolutions, except that it wasn't actually a resolution. And Scholze then recognized that this was good enough, because even though it wasn't a resolution, we could still use it to compute these Ext-groups, or at least show that they vanish, which is what we wanted to do. So this is what Breen-Deligne resolutions usually look like. They say that if you have an abelian group A, you can build a resolution that is functorial in A, and all the terms consist of free abelian groups generated by powers of A, or direct sums of such free abelian groups. And the complex that we stumbled upon, we later found out, is actually a known construction. Mac Lane already wrote about it: it's called the Mac Lane Q'-construction. People usually look at the so-called Mac Lane Q-construction, which is a quotient of this thing, but for our purposes we wanted to look at this complex Q' that Mac Lane considered. And this is a rather easier complex, because the powers of the group that you consider just grow as powers of two. And the differential is some sort of crazy alternating sum; I don't really want to go into the details of that. But the nice property that this has, even though it isn't a resolution, is that if you compute Ext-groups with respect to this complex and they all vanish, then also the Ext-groups Ext^i(A, B) will vanish for all i. And this is easier to formalize: you don't need all the homotopy theory, et cetera. So this is what is now also part of the project, to formalize this construction. Okay, so that was a crash course on liquid mathematics and the kind of mathematics that we're looking at in this project.
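Schematically, the Q'-complex attached to an abelian group A looks as follows; I'm suppressing the formula for the differential, the crazy alternating sum just mentioned:

```latex
% Mac Lane's Q'-construction: in degree n the term is the free abelian
% group on the 2^n-th power of A,
\[
  \cdots \longrightarrow \mathbb{Z}[A^{2^n}] \longrightarrow \cdots
  \longrightarrow \mathbb{Z}[A^4] \longrightarrow \mathbb{Z}[A^2]
  \longrightarrow \mathbb{Z}[A] \longrightarrow 0,
\]
% with the key property: if the Ext-groups computed against this
% complex (with values in B) all vanish, then Ext^i(A, B) = 0 for all i.
```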
And now I want to talk a little bit about the Lean theorem prover and then give a demonstration. So what is the Lean theorem prover? It's a piece of software, a programming language, and it has special support for mathematics. This means that you can write definitions, you can write theorem statements, and then you can prove these theorem statements, all in this programming language. And the compiler of this programming language will check that the proofs you've written down are actual proofs: that every step follows from the previous steps and the axioms of mathematics. And as a result, if your file with definitions and theorems and proofs compiles, then this means that there were no mistakes in the mathematics. Of course, you have to trust that the system works, and you have to trust that there are no bugs in your hardware, et cetera, et cetera. But these chances are all extremely small. Many smart people have checked that the Lean kernel doesn't contain mistakes, and the chance that there is some bug in your hardware that affects the soundness of your proof is, I guess, ridiculously small. So I don't really take those sorts of risks seriously. Anyway, what is this system? It's developed by Leonardo de Moura at Microsoft Research, and it all started in 2013 with the first version, which was very much a proof of concept, exploring many different ideas; the same was true for Lean 2. And then in 2017, Lean 3 was released, and this was really the first serious candidate for bigger use. That's also the version that we've been using for this project. And at the beginning of this year, there was a pre-release of Lean 4, which is a pretty big improvement over Lean 3. It has a lot better performance, so it's a lot faster than Lean 3, which is really nice because we're sometimes hitting speed limits in Lean 3. And besides this programming language, Lean, you also need a library with lots of mathematical results that you can build on top of.
And so mathlib is the main library for Lean. It has more than 650,000 lines of code by now, and it covers, I would say, a decent amount of the mathematics in an undergrad curriculum. So a bunch of algebra, commutative algebra, linear algebra, analysis, topology: in all these basic fields of mathematics, all the basic results are done. Of course there are sometimes gaps, and then we discover them and fill them up. But this library was extremely useful for this project, because, well, we needed to do stuff with normed groups, and normed groups existed in the library. We needed to do homological algebra; well, category theory existed in the library, and chain complexes and maps between chain complexes existed in the library. We then discovered that some of it didn't work as nicely as we wanted, so we changed parts of the library and made them work better. But yeah, the fact that there is such a large library already available to build on top of was a huge contributing factor to the success of this project. And so now I want to do a demonstration of this Lean system. And I really invite you to stop me if you don't understand something, because most of you will have never seen anything like this before. And yeah, I want to try to make sure that people understand what's going on. So I'm now switching to my Lean editor, and the editor is divided into two panes. Let's see, can you now see the top of the screen? So the left-hand side is where I type things: this is the input window. And the right-hand side is the output window, where Lean responds with errors or confirmations or answers. As a little demonstration, I asked it to add 18 plus 19, and the answer is 37. But of course, many programming languages can do that, and now we want to actually do something mathematical and prove a theorem. So here I already added a couple of lines. This line is enabling the exclamation mark notation for factorials.
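What I had typed for that warm-up looks roughly like this; the notation command is my reconstruction of the line enabling the factorial notation, so treat the exact syntax as approximate:

```lean
-- warm-up: Lean 3 as a calculator
#eval 18 + 19          -- the output window shows 37

-- enable the `n !` notation for factorials
local notation n `!`:10000 := nat.factorial n
```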
And so now I want to prove Euclid's theorem that there are infinitely many prime numbers. So I write theorem euclid: I give it a name, and I can give it any name I want. And I take a natural number n, and I'm claiming that there exists a p larger than n such that p is prime. Then I write colon-equals, which means: okay, I'm done with the statement, now I'm going to write the proof. And just like with LaTeX, you write the proof between a begin and end block, except you don't write \begin{proof}, just begin and end. And now we can look at the right-hand side of the screen and see that Lean is already responding with some useful information. It tells me that I have an assumption, a natural number n. And then after this sideways T, the turnstile symbol, it writes what my goal is. So the goal is to show that there exists a p, with a witness that p is larger than n, and to show that p is prime. Okay. So the way I'm going to prove this is to say: well, let capital N be n factorial plus one. So now this capital N is added to my assumptions on the right. And I write: let p be the minimal factor of N, min_fac. I know the library a bit, so I know that this construction exists. And so now I've defined p to be the minimal factor of N. We can hover with our mouse over this, and it says it returns the smallest prime factor of N, for N not equal to one. And if N is one, well, then there is no smallest prime factor, so it returns one. And now we can say: okay, we claim that p is prime. So we give our claim a name; we call it hp. Again, we could call it anything we want, but hp stands for "hypothesis on p". We claim that p is prime, and I'm just going to write sorry for now. And that means that I don't want to give a proof of this fact yet, and Lean accepts that and says "goals accomplished". So we can move on, and I make another claim: have hpn, p is larger than n. And again, I'm going to write sorry and move on. And now these claims have been added as assumptions.
So now we can finish our proof and write exact: we want to use p, the fact that p is larger than n, and the fact that p is prime. And again, Lean says "goals accomplished". We can celebrate, we've proven our theorem, except that under the word theorem there are some orange squiggly lines, and Lean says: this declaration uses sorry. So it keeps track of the fact that we cheated and haven't actually finished the proof. So now we go back to the first sorry, and we see that here we have a new goal. We have to prove that p is prime, and our assumptions are the three things written above it. So we ask Lean for a suggestion: well, how would you prove this, what are the steps you would consider? And Lean gives us a list of things that we could try. And I think the third thing it comes up with looks quite promising. It says: well, there is a lemma that says that the minimal factor of a number is actually prime. So we click on it, and it gives us this suggestion. It says: well, this lemma comes with a condition. You need to prove that N is not equal to one, because min_fac only gives us a prime factor for N not equal to one. So, well, why is N not equal to one? Because factorials are positive. So we write: n factorial is positive. And this should certainly be a fact that's in the library, so we ask Lean to search for that fact, since we don't remember the name. And indeed it found a proof: the lemma that the factorial is positive is called nat.factorial_pos. And then we say: okay, now this should just be a bit of linear arithmetic to finish the proof. So we call a tactic called linarith, and linarith finishes the proof for us, and Lean says "goals accomplished". So this subproof is now done. So we have one more subproof that we need to finish: we need to prove that p is larger than n. And we do this by contradiction. So we assume, for contradiction, that p is at most n. So now we have this assumption.
Since I didn't give the assumption a name, Lean chose to call it "this". The assumption is that p is less than or equal to n, and we need to prove false: we need to derive a contradiction. We do this by observing a couple of facts. So the first auxiliary fact is that p does not divide one. This is, of course, because p is a prime number, and Lean can actually figure this out. It finds a proof; it says nat.prime.not_dvd_one: because p is prime, it cannot divide one. The second fact that we want to use is that, of course, p divides n factorial, and that p also divides capital N. So then it must divide one, and we get a contradiction. So, p divides n factorial: let's ask Lean for a suggestion. And it says it found a direct proof using the fact that p is prime. But I actually want to prove that p divides n factorial using the fact that p is at most n, so we use the second suggestion. And then now we need to do one more step: we need to show that p is positive. Wait, I'm a bit confused, sorry; I was expecting that maybe I had to take the third suggestion. It doesn't really matter, we can of course finish the proof. Anyway, we ask for another suggestion, or to find the rest of the proof, and it finished that. So now we come to the third claim, which is that p divides capital N. This should just be a fact about minimal factors, and indeed it found a proof of that. And then, finally, we claim that from these observations we can deduce that p divides one, because capital N is n factorial plus one. And indeed it found a proof of that fact. But now we have the assumption that p divides one, and we have the assumption that p does not divide one, and that of course leads to a contradiction. So we point that out to Lean, and it confirms that the proof is correct. And so it says "goals accomplished".
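Putting all of those steps together, the finished proof looks roughly like this. This is a Lean 3 reconstruction of the demo, and the exact lemma names (nat.min_fac_prime, nat.dvd_factorial, and so on) are as I remember them from mathlib, so treat them as approximate:

```lean
-- Euclid's theorem: there are infinitely many primes.
-- (Lean 3 / mathlib reconstruction of the demo; lemma names approximate.)
theorem euclid (n : ℕ) : ∃ p > n, nat.prime p :=
begin
  let N := n.factorial + 1,
  let p := N.min_fac,
  -- p is prime, because N ≠ 1 (factorials are positive, so N > 1)
  have hp : p.prime :=
    nat.min_fac_prime (nat.succ_lt_succ n.factorial_pos).ne',
  -- p > n: otherwise p ≤ n, so p divides n!; but p also divides N,
  -- hence p divides 1, contradicting that a prime does not divide 1
  have hpn : p > n,
  { by_contra hc,
    push_neg at hc,
    have h₁ : p ∣ n.factorial := nat.dvd_factorial hp.pos hc,
    have h₂ : p ∣ N := N.min_fac_dvd,
    have h₃ : p ∣ 1 := (nat.dvd_add_right h₁).mp h₂,
    exact hp.not_dvd_one h₃ },
  exact ⟨p, hpn, hp⟩,
end
```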
And now if we go to the end of the big proof, it says "goals accomplished", and there are no orange squiggly lines under the word theorem. So indeed this is now a full proof of Euclid's theorem that there are infinitely many primes. Okay, so this was a demonstration of how Lean works, and I have a couple more slides. So if there are some questions now, I'd be happy to answer them already, but we can also return to this screen after I finish my slides. Okay, I'll go to the slides and finish those. So what are some of the lessons that we learned during this project so far? Well, what came as a big surprise to everyone involved was how fast it went. I mean, I've been using Lean for two years now, or maybe three. It has quite a steep learning curve; it took me a while to become fluent in using Lean and knowing what is in the library and how to add things to the library, et cetera, et cetera. But the challenge was posed in December last year, and this first target was reached before six months were over, and we had to develop a bit of homological algebra along the way, et cetera. So this shows that a highly technical, state-of-the-art proof that has been around for less than two years can be formalized in a couple of months. I think in total we spent one person-year working on this first target. So yeah, this is quite an amazing surprise. I had first expected that it would take us something like two years to reach the target. Another lesson is that Lean really was a proof assistant. This software is often marketed as, often called, a proof assistant, Lean and also the other systems that are around. And in some sense this always sounded to me like mostly wishful thinking, because Lean is an extremely pedantic reviewer that points out every little mistake and every little ambiguity in your proofs.
But the assistance that it offers is not that advanced most of the time. So as you saw in the demonstration, it can fill in proofs of tiny steps. You make a claim, it fills in the proof of that claim, you make the next claim; but it cannot prove Euclid's theorem on its own. We have to give it all those little steps, and then it can fill in the tiny subproofs. And that's also true for the Liquid Tensor Experiment. We had to do all the proofs in all the details, but still Lean gave us a huge amount of assistance, because several times I tried to just read the proof on pen and paper, and I really tried, and I tried hard, and I got completely confused. And every time I would come to the conclusion: okay, I understand how half of this lemma works, so let me just work on that, try to translate it into Lean, and then we'll see after that. And every time after I did such a translation step, two things happened. One, I understood the mathematics a bit better, apparently. And two, I no longer needed to worry about it, so I could free up mental space in my brain to look at other parts. And I think it's a bit similar to the experience you have when you try to read a page of text in a language that you don't know that well. If you just try to read it, you get confused after 10 lines, and you have no idea what the text is about. But if you translate the first two lines, then you understand the next two lines, and you understand them well, so you can translate them too; and the effort that you put into translating those next two lines makes sure that you can then understand the two lines after that. So you go through this very methodically and slowly, and you've written down what you have already translated, so you don't need to keep it in your working memory. You can just read it back on paper.
And I think that's a similar experience to what we had with Lean, where it proved to be a very powerful tool to manage this complex proof: once we had translated a part of the proof into Lean, we no longer needed to worry about whether we had made a tiny mistake there, so we could free up our working memory to work on the next step. And Scholze also, in his blog post, at some point claims that this proof is so nasty and complex that it almost doesn't fit into your RAM, into your working memory. So, I don't know, there's this buzzword phrase of the one-brain limit, or the one-mind barrier. And it feels like maybe this proof is approaching that barrier, and using Lean we could break it into pieces and manage the complexity. And finally, a third lesson that we got from this project. Peter said in his first blog post that the proof takes some very unexpected twists and turns and has to pass through this arithmetic ring of Laurent series. And during the project we had to formalize a bunch of results in convexity theory, and we asked Peter many questions about the proof, et cetera. And during this whole process, mainly Peter, but really the entire team during all the discussions, developed a better understanding of why you actually need this arithmetic ring of Laurent series: if you work over the real numbers the entire time, you need to somehow untangle the scaling behaviour in the reals with a formal parameter, which is then the formal variable in your ring of Laurent series. And yeah, so during the whole process we also got a better understanding of the mathematics that goes into this proof. So these are the people that contributed to the team. It was a lot of fun. Some of these people I had never met in real life before; I only know them because they're other mathematicians with a shared interest in Lean, and we just worked on this project together online.
And here are some pointers to more information about Lean, the online chat room where we hang out and many people ask questions and work together. And if you want to play around with Lean and try to get the first feeling for how it works, you can play an online game where you prove basic properties of the natural numbers just in your browser. So you don't need to download or install anything. And yeah, that's what I wanted to say. So thanks a lot for your attention.