So this group was led by me, and the participants are Philip Andre, Gaby O'Brenner, Claire Dye, Frank Phan, Mika Gay, Hunter Stoffelbeam, and Luis Rodrigo Urrutia. OK. Hi, everyone. If I go too far away from the mic, wave or something. So we are the nodal domains group, so first we're going to define what a nodal domain is. We're working on a manifold, so think of a smooth surface, possibly in higher dimensions. We can look at the eigenfunctions of the Laplacian on this manifold, and given some eigenfunction, we define its nodal set as the set of points on the manifold where the eigenfunction equals 0. These nodal sets carve up our manifold into regions where the eigenfunction is either positive or negative, and we call these regions nodal domains. So here's an example. We have a torus. One nice thing about the torus is that you can identify it with the square: glue the top and bottom together and you get a little tube, then glue the ends of the tube together and you get the nice little donut. Here the eigenfunction is sin(x)sin(y), and our four nodal domains are the four squares. The theorem that we use as our jumping-off point is the Courant Nodal Domain Theorem, formulated by Courant in the 1920s. It says that given a manifold, we can list all the eigenvalues of the Laplacian in increasing order with multiplicity, so if one value occurs as a double eigenvalue, it could appear as, say, lambda_1 and lambda_2. Then we name the corresponding eigenfunctions with the same indices. The theorem states that the number of nodal domains of the eigenfunction f_j is at most j. So the number of nodal domains is bounded by its place in the sequence. We pretty quickly specialized to the two-torus, partly because we have these nice pictures: we can just look at squares.
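The four-domain picture for sin(x)sin(y) is easy to check by computer. Here is a small sketch, our own illustration rather than anything from the talk, that counts nodal domains on the flat torus by flood-filling the sign pattern on a grid; the grid size and the flood-fill approach are our choices.

```python
import numpy as np

def count_nodal_domains(f, n=201):
    """Count nodal domains of f(x, y) on the flat torus [0, 2*pi)^2 by
    flood-filling the sign pattern on an n x n grid with periodic wrap."""
    xs = np.linspace(0, 2 * np.pi, n, endpoint=False)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    sign = np.sign(f(X, Y))
    labels = -np.ones((n, n), dtype=int)
    count = 0
    for i in range(n):
        for j in range(n):
            if labels[i, j] >= 0 or sign[i, j] == 0:
                continue  # already visited, or on the nodal set itself
            count += 1
            stack = [(i, j)]
            labels[i, j] = count
            while stack:
                a, b = stack.pop()
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    na, nb = (a + da) % n, (b + db) % n  # periodic neighbors
                    if labels[na, nb] < 0 and sign[na, nb] == sign[i, j]:
                        labels[na, nb] = count
                        stack.append((na, nb))
    return count

# The example from the talk: sin(x)sin(y) has four nodal domains.
print(count_nodal_domains(lambda x, y: np.sin(x) * np.sin(y)))  # 4
```

Because neighbors are taken modulo n, regions that wrap around the torus are counted correctly, and the 4-neighbor stencil keeps domains that only touch at a crossing point separate.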
But also because we were able to pretty easily code the eigenfunctions, since they have very explicit descriptions, and we could play around with some code to get intuition for what was going on. Some pretty simple calculations show that any eigenvalue will have the form k^2 + l^2 for k and l natural numbers, and the corresponding eigenspace will have the basis cos(kx)cos(ly), cos(kx)sin(ly), sin(kx)cos(ly), sin(kx)sin(ly), together with the same products with x and y switched. So generically you have these eight different eigenfunctions. Just from the periodicity of these functions, you can show that if k and l have a common factor, then your nodal picture is just a tiling by the reduced case. Here we have k and l equal to 1 and 3 on the right, and 2 and 6 on the left, and we see that the latter is just a 2-by-2 tiling of the 1-and-3 case. So the number-theoretic properties of the eigenvalue can tell you something about the eigenspace as well. In the most general case, where lambda can be written as k^2 + l^2 in only one way, a unique sum of squares, you have the eight-dimensional eigenspace as before. But if k = 0 or k = l, you actually have a four-dimensional eigenspace, because you're just doubling up on your eigenfunctions. And if lambda can be written as a sum of two squares in more than one way, you can get much larger eigenspaces. In each of these cases, if the dimension of the eigenspace is d, you can think of the coefficient space of the eigenspace as S^(d-1), the (d-1)-dimensional sphere. So when we look at the eigenfunctions and perturb them a little bit, we're thinking about moving our coefficients around this sphere, usually S^7.
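The eigenspace-dimension bookkeeping just described can be sketched in a few lines. This is our own illustration, assuming the counts stated in the talk: 8 per representation generically, 4 when k = 0 or k = l, summed over the ways of writing lambda as a sum of two squares, plus the constant function for lambda = 0.

```python
from math import isqrt

def eigenspace_dim(lam):
    """Dimension of the Laplacian eigenspace on the 2-torus for eigenvalue
    lam, from its representations lam = k^2 + l^2 with 0 <= k <= l."""
    if lam == 0:
        return 1  # constants only
    dim = 0
    for k in range(isqrt(lam // 2) + 1):  # k^2 <= lam/2 forces k <= l
        l2 = lam - k * k
        l = isqrt(l2)
        if l * l != l2:
            continue  # lam - k^2 is not a perfect square
        if k == 0 or k == l:
            dim += 4  # eigenfunctions double up, as described in the talk
        else:
            dim += 8  # four sin/cos products, plus x <-> y swapped
    return dim

print(eigenspace_dim(5))   # 8: unique sum of squares, 1^2 + 2^2
print(eigenspace_dim(2))   # 4: k = l = 1
print(eigenspace_dim(65))  # 16: 1^2 + 8^2 and 4^2 + 7^2
```

So for a generic eigenvalue the coefficient sphere is S^7, while 65 = 1 + 64 = 16 + 49 already gives a 16-dimensional eigenspace and a coefficient sphere S^15.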
So now we focused on the dimension-4 and dimension-8 cases, because they're much easier to understand. And now Rodrigo is going to talk about the k = l case, because we were able to get a pretty explicit description of what the nodal sets look like. Good afternoon, everyone. I will discuss the k = l case, which has been completely worked out: we can have either 2k or (2k)^2 nodal domains. If we consider k = l = 1, we have these eigenfunctions. Fix one of the coefficients and change the other one, and we get four different cases. In the first one, the checkerboard case, we have four nodal domains, and in the other three cases, which are just diagonal stripes, we have two nodal domains. So now we consider a linear combination of eigenfunctions and proceed to find its zeros, the zeros of f. To do that, we simplify the expression by introducing an auxiliary variable phi, which depends on c and d, and then introducing constants s and t that help us simplify the expression to get this. This example taught us a lot, because we were able to find an explicit zero of the function. To do that, we needed to restrict the value of t. First, when t equals zero, we get the checkerboard case. And when t is not zero, we solve for y and end up with this function, which just maps to the diagonal-stripes case we saw before. Then this is a nodal domain for fixed values of s and t. We start off with these nodal lines, the red curves, and shift them up by pi and 2 pi to obtain all the nodal lines on the graph. And this is just the diagonal-stripes case: we have two nodal domains. Now Gabriel is going to talk to you about the stability of crossing points. All right. So another question we can consider here is a stability question. If we look at this animation, we start here with one crossing at the center point.
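As a quick numerical check of the 2k versus (2k)^2 counts (our own sketch, not the substitution used in the talk), one can count sign changes along a transversal: a stripes-type combination such as sin(k(x + y)) crosses 2k nodal lines along y = 0, while the checkerboard sin(kx)sin(ky) has 2k crossings in each direction and hence a (2k) x (2k) grid of cells.

```python
import numpy as np

def sign_changes(vals):
    """Number of sign changes in a periodic sequence of samples."""
    s = np.sign(vals)
    s = s[s != 0]  # drop exact zeros so they don't mask a change
    return int(np.count_nonzero(s[:-1] != s[1:]) + (s[0] != s[-1]))

k = 3
x = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
# Stripes, e.g. sin(k(x + y)): along y = 0 we cross 2k nodal lines.
print(sign_changes(np.sin(k * x)))       # 6, i.e. 2k stripe domains
# Checkerboard sin(kx)sin(ky): 2k crossings per direction, (2k)^2 cells.
print(sign_changes(np.sin(k * x)) ** 2)  # 36, i.e. (2k)^2 domains
```

For k = 1 this recovers exactly the two cases above: two stripe domains or four checkerboard domains.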
And as we smoothly vary the coefficients, that crossing pulls apart and instantaneously smooths out. So the question is: when does this happen, and can we characterize, in coefficient space, the domains where we keep a crossing? So how do we do this? As was briefly mentioned before, you can identify eigenfunctions with points on the sphere that represent their coefficients. In this case we're just going to work in the k = l case, which is four-dimensional, because it's easier for the sake of the presentation, but actually everything I'm about to do works in arbitrarily high dimension. So what do we do? We identify each eigenfunction with a point on the sphere, and then we want to characterize crossings in terms of this function. It's actually pretty easy: it turns out that crossings happen exactly at points where the function and its gradient vanish simultaneously. Why is that true? Well, it's pretty simple to think about. The function is 0 at this point, the derivative with respect to x is 0, and the derivative with respect to y is 0, so if you move a little bit in either direction, you expect the function to stay at 0. Well, that's exactly a crossing of nodal lines. So we have a nice characterization of crossing points. What we do is set up this formal function here, this capital Phi, which consists of our original function together with its gradient. Why? Because the zeros of Phi, where it vanishes, are exactly where we have these crossings. So what do we do? We look at the differential of this function. If you don't know about that, it's just basically a derivative. And we've actually proven that in the general case it is surjective everywhere. What does that give us?
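The crossing criterion just stated, f and its gradient vanishing at the same point, is easy to sanity-check numerically. This is our own sketch, not code from the project: we scan a grid for points where all three components of Phi = (f, f_x, f_y) are small, using sin(x)sin(y), whose crossings sit exactly at (0,0), (0,pi), (pi,0), (pi,pi).

```python
import numpy as np

def crossing_residual(f, fx, fy, n=400):
    """Grid minimum of max(|f|, |f_x|, |f_y|): (near) zero exactly when
    Phi = (f, grad f) vanishes somewhere, i.e. when a crossing exists."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    X, Y = np.meshgrid(t, t, indexing="ij")
    stacked = [np.abs(f(X, Y)), np.abs(fx(X, Y)), np.abs(fy(X, Y))]
    return float(np.min(np.maximum.reduce(stacked)))

# sin(x)sin(y): f, f_x, f_y vanish together at (0,0), (0,pi), (pi,0), (pi,pi).
r = crossing_residual(lambda x, y: np.sin(x) * np.sin(y),
                      lambda x, y: np.cos(x) * np.sin(y),
                      lambda x, y: np.sin(x) * np.cos(y))
print(r)  # 0.0: the grid hits (0, 0) exactly, so a crossing is detected
```

A residual of (essentially) zero signals a common zero of f and grad f, which is the Phi = 0 condition from the talk.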
Well, it's actually a pretty nice result of differential topology that, since the differential is surjective everywhere, the preimage of 0 under our function Phi is something of codimension 3 living in S^3 x T^2. You can also think of that as basically a two-dimensional surface. So that's nice: we have this nice two-dimensional surface. But there's actually something more we can say. Imagine you have your sphere here, with a two-dimensional surface living inside it. If you notice, there are a whole bunch of directions we can move to get off this surface while staying within the sphere, and of course the points that are not on the surface but lie within the sphere are exactly those where our nodal lines have no crossing. The functions we're working with are pretty nice, so we don't expect any kind of cusps or anything. So what this tells us is that for every eigenvalue, there is some configuration with smooth nodal lines: no crossings, no cusps, nothing nasty, just nice smooth nodal lines. All right, so if you want any more details about our work, Hunter has prepared a really nice note set that he'll be posting in the PCMI app after the presentation. And finally, I just want to say a big thank you to Dr. Mazzeo and Philip Andre for all their guidance throughout the project, and to PCMI for giving us this opportunity. Thanks.
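To illustrate that final claim numerically (our own sketch, with an illustrative perturbation that is not taken from the talk): perturbing sin(x)sin(y) by a small multiple of cos(x)cos(y), another function in the same k = l = 1 eigenspace, leaves no point where f and its gradient vanish together, so the nodal lines of the perturbed function are smooth curves with no crossings.

```python
import numpy as np

eps = 0.3  # illustrative perturbation size; our choice, not from the talk
f  = lambda x, y: np.sin(x) * np.sin(y) + eps * np.cos(x) * np.cos(y)
fx = lambda x, y: np.cos(x) * np.sin(y) - eps * np.sin(x) * np.cos(y)
fy = lambda x, y: np.sin(x) * np.cos(y) - eps * np.cos(x) * np.sin(y)

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
X, Y = np.meshgrid(t, t, indexing="ij")
# If f and grad f never vanish together, this stays bounded away from 0.
residual = np.min(np.maximum.reduce([np.abs(f(X, Y)),
                                     np.abs(fx(X, Y)),
                                     np.abs(fy(X, Y))]))
print(residual > 0.01)  # True: no crossings, the nodal lines are smooth
```

At eps = 0 the residual drops to zero at the four crossing points, matching the animation in the talk where the crossing pulls apart as the coefficients vary.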