Alright, thanks everybody for making it out. Today we are excited to have Alex Guidemann back from Davidson, who will be talking to us about the restricted numerical range of the digraph Laplacian. Thanks Alex, take us away. Thank you so much for having me. Despite the unusual circumstances, I'll warn you that this is my first research presentation over Zoom, so let me know if I have any technical problems. First question: can you see my slides right now? I don't have video up, so can someone answer audibly? Yep, we see them. Alright, let me know if I run into any technical problems. So today I'm going to talk about the restricted numerical range of the digraph Laplacian; I'll explain what all those words mean as we get to them. I do want to mention that this is collaboration with Thomas Cameron, who was a visitor at Davidson with me last year but now works at Penn State. A lot of this is motivated by, and the first part of this is content from, Michael Robertson's senior thesis that we co-advised, which I'm very happy to say was accepted for high honors. Very exciting and good work by Michael; his thesis really motivated this entire project, and all the graphics I'm going to show you today were coded by Michael. The general outline of the talk: first I'll give a quick reminder of what the Laplacian is and my conventions for defining it, then define the restricted numerical range, along with the motivation, some properties, and some first, admittedly trivial and unsatisfying characterizations, but the natural ones that ought to be true if this were a good tool.
Then I'll go on to the meat of the talk, which is characterizations of digraphs that have singleton or real restricted numerical ranges, such as regular tournament graphs, k-imploding stars, and this notion of three-balanced graphs, which we defined. And I'll end with a what's next: where I see this research going and what we're working on currently. So first up, the Laplacian and its restricted numerical range. Instead of copying and pasting this in front of every theorem of the talk, let this statement be prefixed to everything I say: gamma will always be a finite, simple, unweighted digraph on n vertices, and L will always be its Laplacian matrix. Simple means no multi-edges and no loops, and we only consider unweighted digraphs for these characterizations. The convention we take for the Laplacian is the degree matrix minus the adjacency matrix, L = D - A, with the diagonal holding the out-degrees; everything we do here is directed. In particular, the eigenvalues of this matrix live in the right half-plane. The alternate convention is to take A - D, in which case the eigenvalues land in the left half-plane; we use D - A. I don't know if there are any graduate students on the call who haven't seen graph Laplacians before, so here's a quick example of how they're constructed in the directed case. I'm starting my numbering at zero, just in case Jordan multi watches this at some point. Starting at zero, I look at this graph and ask: where do the edges outward from zero go? Zero goes to four, and that's the only edge that leaves zero. So I go to the zeroth row, and in the zeroth row I put a negative one in the fourth position, where the positions are indexed 0, 1, 2, 3, 4. One goes out to zero, to four, and to three.
So in the one row, I put a negative one in the zero position, the fourth position, and the third position. And then on the diagonal I put the out-degrees: there's one edge leaving zero, so a one here; there are three edges leaving one, so a three here; and so on and so forth. In particular, by the Gershgorin circle theorem, the eigenvalues of this thing are going to live in the right half-plane, because I'm taking a matrix with positive diagonal and perturbing it by a small number of off-diagonal entries; for the most part, the eigenvalues are going to rest near one, three, three, three, and one. Alright, so spectral graph theory is the study of the eigenvalues of these matrices, the Laplacian and other related matrices. Beyond the eigenvalues themselves, I'm also interested in the structure of the eigenspaces. The numerical range is a tool analysts use that captures more than just the eigenvalues; it also captures the geometry of the eigenspaces in some sense. It's defined as the set of inner products x* A x over all complex vectors x of length one, so it's the image of the unit sphere under this quadratic form. There's a lot of research on how the shape of the numerical range reflects the combinatorial structure of the matrix I plug in. But we're not talking about general matrices; we're talking about graph Laplacians in particular. Graph Laplacians, because their rows always sum to zero, always have one particular eigenvector, the vector e of all ones, which always lands at zero. If I take L times e, every entry of the output vector is the row sum in that position; all the row sums are zero, so this always outputs zero. So in particular, zero is always an eigenvalue of the graph Laplacian.
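The construction just described can be sketched in a few lines of numpy. The edge list below is hypothetical (the slide's full edge list isn't shown here); the point is only the mechanics: out-degrees on the diagonal, minus the adjacency.

```python
import numpy as np

def laplacian(n, edges):
    """Out-degree Laplacian L = D - A of a simple digraph on vertices 0..n-1."""
    A = np.zeros((n, n))
    for i, j in edges:              # directed edge i -> j
        A[i, j] = 1.0
    D = np.diag(A.sum(axis=1))      # out-degrees on the diagonal
    return D - A

# Hypothetical 5-vertex digraph (only 0 -> 4 and 1 -> 0, 3, 4 come from the slide)
edges = [(0, 4), (1, 0), (1, 3), (1, 4), (2, 1), (3, 2), (4, 3)]
L = laplacian(5, edges)

# Every row sums to zero, so L e = 0 and 0 is always an eigenvalue
assert np.allclose(L @ np.ones(5), 0)
# Gershgorin: all eigenvalues lie in the closed right half-plane
assert np.all(np.linalg.eigvals(L).real >= -1e-8)
```

The two assertions are exactly the facts from this slide: e is always in the kernel, and the spectrum sits in the right half-plane.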
And I can see that in this picture. This is the numerical range of the graph on the previous page; the stars denote the eigenvalues, and the eigenvalue zero lies in the numerical range here. So e is the all-ones vector? All ones, correct. Okay. No problem. And feel free to stop me if you have questions; please do. So because the graph Laplacian always sends e to zero, that zero in the numerical range is not really important if I'm trying to characterize the structure of the graph; this is something all graphs do. This zero distorts the picture and adds extra noise that's not really necessary. So our idea is to compute the numerical range while avoiding the vector e. The restricted numerical range has the exact same definition, but instead of plugging in all vectors in complex space, I only plug in vectors perpendicular to e, avoiding that eigenvector and its eigenspace. And this cleans up the picture a lot. In this example, it turns the restricted numerical range into the triangle connecting the rightmost eigenvalues. So most of the numerical range, all of this circular part, is just extra noise that's not really specific to the structure of the graph. We can cut that out and look at just the principal part, so to speak. In retrospect, this is the most important motivation for our definition that I can think of: because all graphs satisfy L e = 0, we cut that out to ignore this commonality between all graphs and focus only on the differences between them. That's the motivation in retrospect, but it was not the original motivation.
The original motivation for this definition is actually algebraic connectivity: there are two competing definitions for the algebraic connectivity of a directed graph, and this is the one that motivated this work. The algebraic connectivity is defined by Wu using the same inner product, which is reminiscent of the numerical range, except we only consider real vectors, not all complex vectors, again perpendicular to e and of length one. Wu defines the algebraic connectivity alpha as the minimum over all such vectors. And there's a related parameter beta, which doesn't have a name to my knowledge; we just called it beta. It's defined as the corresponding maximum. This alpha and beta generalize the algebraic connectivity of undirected graphs, in the sense that a lot of the theorems Fiedler's definition works for in the undirected case, this alpha and this beta work for in the directed case, things like bounds related to bisection and the isoperimetric number in the directed setting. Wu was specifically interested in rates of network synchronization, in detecting how amenable a graph is to being synchronized, whatever that means. So that's Wu's definition. There's a competing definition, which is interesting, and which I'll give some context to later: the second smallest real part among the eigenvalues. If I go back to this picture, here are my eigenvalues; the second smallest real part would be the number two. So in this case, the other definition of algebraic connectivity would give this number two. That's more analogous to how Fiedler originally defined it for undirected graphs, as the second smallest eigenvalue. Because we're directed, we need to make sense of what second smallest means, and we do so by taking the second smallest real part of all the eigenvalues.
But that is different from this alpha; sometimes they agree and sometimes they don't. Alpha is always at most the second smallest real part of the eigenvalues, and we'll get some context for that in a second. So this was the original motivation: trying to figure out how to calculate this alpha for a directed graph. Now, some initial properties. The first is that the restricted numerical range is itself an actual numerical range. If I define Q to be a matrix with orthonormal columns, all orthogonal to e, and conjugate L by it, the numerical range of the (n-1)-by-(n-1) matrix Q^T L Q agrees entirely with the restricted numerical range of L. This isn't too important for this talk itself, but it's one of the fundamental tools for proving things in the paper: instead of looking at L in n-by-n matrix space, we project down to (n-1)-by-(n-1) matrix space. And this is probably only relevant for Dr. Angelakis on the call: this matrix with orthonormal columns orthogonal to e is pretty much a direct parallel of my dissertation work on generalized Gell-Mann matrices, which form an orthonormal basis for the traceless matrices. So it's a direct parallel with what we did in my dissertation. The second property is that the restricted numerical range is invariant under reordering the vertices. This had better be true if the tool is going to be worth its salt: it needs to be invariant under reordering so that graph isomorphisms don't mess it up, and indeed it is. Third, the eigenvalues of L are contained within the restricted numerical range, except for the zero associated to e. So we're not losing structure as we pass to the restricted numerical range; we keep the eigenvalues. In fact, more is true: those eigenvalues of L are still eigenvalues of Q^T L Q.
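That first property is easy to check numerically. Here's a minimal sketch, reusing the same kind of small hypothetical digraph as before: build Q from a QR factorization, form Q^T L Q, and confirm its eigenvalues are the eigenvalues of L with one copy of zero removed, with alpha and beta read off the Hermitian part of the projected matrix.

```python
import numpy as np

def restricted_matrix(L):
    """Q^T L Q, where the columns of Q are an orthonormal basis of e-perp."""
    n = L.shape[0]
    q, _ = np.linalg.qr(np.ones((n, 1)), mode='complete')
    Q = q[:, 1:]                    # drop the column parallel to e
    return Q.T @ L @ Q

# Hypothetical 5-vertex digraph from before
A = np.zeros((5, 5))
for i, j in [(0, 4), (1, 0), (1, 3), (1, 4), (2, 1), (3, 2), (4, 3)]:
    A[i, j] = 1.0
L = np.diag(A.sum(axis=1)) - A
B = restricted_matrix(L)            # the 4x4 projected matrix

# eig(B) = eig(L) with the zero attached to e removed
for mu in np.linalg.eigvals(B):
    assert np.min(np.abs(np.linalg.eigvals(L) - mu)) < 1e-8

# alpha and beta are the extreme eigenvalues of the Hermitian part of B,
# and every eigenvalue of B has real part at least alpha
H = (B + B.T) / 2
alpha, beta = np.linalg.eigvalsh(H)[[0, -1]]
assert np.all(np.linalg.eigvals(B).real >= alpha - 1e-8)
```

The block-triangular argument behind the eigenvalue claim: with U = [e/sqrt(n), Q] orthogonal, U^T L U has a zero column block because L e = 0, so the spectrum of L is {0} together with the spectrum of B.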
So we're keeping the actual eigenvalues themselves; we're not losing structure as we restrict. And how does all this relate to algebraic connectivity? It relates precisely in the sense that the minimum real part of the restricted numerical range equals alpha, and the maximum real part equals beta. So even though Wu only took real vectors and we're taking complex ones, our minimum real part still agrees with the minimum over just real vectors. That was our original motivation; from that point forward, what I want to talk about today is some of the following pictures. If we were to look at these three graphs as a set, look at just the spectrum, and ask which two have the most similar structure, looking only at the eigenvalues, only at these red stars, perhaps I'd say the bottom two are the most similar: one has eigenvalues zero and two, the other zero and three, so ignoring multiplicity each has two distinct eigenvalues, whereas the top one has three. If I were to say, okay, let's look not only at the eigenvalues but also at the structure of the eigenspaces, i.e., look at the numerical range, I still might say the bottom two are more similar, in that their numerical ranges are ellipses, whereas the top is more of this baseball-diamond-looking thing. What I hope to convince you of in this talk is that the restricted numerical range provides a more robust description of what's actually going on, and that in fact the top two are more similar than the bottom: their restricted numerical ranges give me a real line, whereas the bottom one gives me just a singleton point. And the one that gives a singleton point is actually very, very special, with a completely characterized structure.
Whereas for the top two, I can tell you what the structure of the graph is, but maybe not name the exact graph; that'll make sense when I get to the actual characterizations. But all three of these also share a similarity in that the restricted numerical range is completely real, so there's some underlying structure all three share that makes the restricted numerical range real. We can contrast pictures like this, where the restricted numerical range becomes, say, a polygon or a vertical line; we'll talk about those toward the end of the talk when I discuss future work. And, just so I don't accidentally convince you otherwise, I don't want to claim that restricted numerical ranges are always very nice pictures. Sometimes you get nastier things like this, where the restricted numerical range is not a polygon at all but has some curvature in it. And sometimes the restricted numerical range and the numerical range don't even differ, especially if the graph is disconnected; the picture won't change at all when I do the computation. This second picture is also important, and I want it to clue us in: whenever I take the restricted numerical range, I drop the zero eigenvalue, and it is very possible, indeed often the case, that what remains is not just a straight line or polygon between the eigenvalues. There can be a curved part that drops this alpha, the minimum real part, below the real part of the second smallest eigenvalue. This graph right here is an example where the second smallest real part of the eigenvalues is strictly larger than the algebraic connectivity alpha, because the curvature drops that left part down.
Try to hold this little picture in your mind for about three more slides; I'm going to recall it in a little bit. The first characterizations are the ones we expect. A quick remark: the empty and complete digraphs are fully characterized by their Laplacian spectrum, if we include multiplicities. The spectrum of the empty graph is zero with multiplicity n, and the spectrum of the complete graph is zero, and then n with multiplicity n minus one; not too difficult to see. As a more-or-less immediate corollary, the restricted numerical range clearly characterizes these as well: the restricted numerical range is the point zero if and only if the graph is empty, and it's the point n if and only if the graph is complete. Really the only thing that needs to be shown in these proofs is that the restricted numerical range doesn't have, say, a circle around the point; there's no noise to mess it up. The eigenspaces of these two graphs are very well behaved, so there isn't any noise in these pictures. That's the first characterization. It's satisfying in the sense that I would hope it to be true and it is, but not very satisfying in the sense that, well, obviously that should be true. A more satisfying characterization is the following. First, we proved that a directed cycle is characterized by its Laplacian spectrum: the spectrum is exactly one minus the nth roots of unity if and only if the graph is the directed cycle on n vertices. One direction of this is really obvious: if I already have a directed cycle, then using standard matrix theory for circulant matrices, you can find its eigenvalues directly to be one minus these roots of unity. What's more complicated is the other direction.
If you assume you have these as the eigenvalues, going backwards to show that the graph is a directed cycle takes a little elbow grease. The way we did it, we first proved the graph is regular, and from regularity it follows pretty immediately that it has to be a cycle; but that took some elbow grease and a contradiction argument. From this characterization of the spectrum, we get the following characterization of the restricted numerical range, and indeed of the numerical range itself, which I don't have written here, but there's a picture. A graph is a directed cycle if and only if its restricted numerical range is a polygon with these vertices; and that happens if and only if the numerical range is a polygon with these vertices plus zero. So here's a cycle on five vertices; it's drawn in this nonstandard way because the computer generated it. And it's very pleasing to me that the numerical range is also kind of a cycle on five vertices: it's exactly this pentagon. And if I cut out zero, it's still a polygon on the remaining four vertices, with exactly these coordinates. Again, one direction of the characterization is pretty immediate: if I have a directed cycle, I know its eigenvalues, and I also know this matrix is normal, and the numerical range of a normal matrix is guaranteed to be the convex hull of its eigenvalues. So I immediately have this picture. But what I don't know yet is that when I restrict, I don't get that curve right here; remember, on the previous slide I said to hold in your mind that it's possible to have a curve between these two points.
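The easy direction of the cycle result can be verified in a few lines: the directed cycle's Laplacian is I minus a cyclic permutation matrix, which is normal, and its spectrum is one minus the nth roots of unity. A sketch for n = 5:

```python
import numpy as np

n = 5
P = np.roll(np.eye(n), 1, axis=1)   # adjacency of the directed cycle: i -> i+1 mod n
L = np.eye(n) - P                   # every out-degree is 1, so L = I - P

# L is normal, so its numerical range is the convex hull of its eigenvalues
assert np.allclose(L @ L.T, L.T @ L)

# The spectrum is exactly 1 minus the nth roots of unity
evals = np.linalg.eigvals(L)
for k in range(n):
    w = 1 - np.exp(2j * np.pi * k / n)
    assert np.min(np.abs(evals - w)) < 1e-8
```

In particular the smallest nonzero real part is 1 - cos(2*pi/n), which is where the directed cycle's alpha value on the next slide comes from.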
What prevents that is the following lemma: if L is normal, then the restricted numerical range is guaranteed to be a convex polygon. So if I have a normal Laplacian, its restricted numerical range is always a convex polygon; I'm not going to have that curvature in here. A nice corollary is that if L is normal, then the algebraic connectivity and the second smallest real part of the eigenvalues agree, because the left boundary is either a straight vertical segment or a single vertex. So that was the only difficulty in this direction of the proof. More satisfying, but still a kind of obvious example, because we know the eigenvalues exactly. Before the meat of the talk, here's a nice corollary relating all this to the alpha and beta values. As suspected, the empty digraph has algebraic connectivity zero, and also beta equal to zero; the complete digraph has alpha and beta equal to n; and the directed cycle has alpha and beta given by these values. One great advantage of using the restricted numerical range was how easy it made the proofs, especially for this third point. Wu also proved these formulas in a different form; his had bunches of sines and cosines in it, so I presume he maybe used some Fourier analysis or something like that. But we get ours directly from the eigenvalues in a relatively straightforward way, so it's an easier route to at least this third point. Alright, now the actual meat of the talk: the characterizations. As we saw, the empty graph and the complete graph have singleton restricted numerical ranges. The question I want to ask now is: what other graphs have singleton restricted numerical ranges? The first theorem is that if the restricted numerical range is a singleton, it must be an integer, and that integer has to be between zero and n.
The fact that it's between zero and n is not too surprising; these are natural eigenvalue bounds. But it actually has to be an integer: you can't have a non-integer singleton restricted numerical range. I'm going to go through the proof, skipping a lot of the details, because something very unexpected pops out of it that I want to state as another theorem afterward. So suppose toward a contradiction that the singleton value K is not an integer. Because the restricted numerical range is a single point, both alpha and beta are equal to that K value. Now, alpha is the algebraic connectivity, and it appears in lower bounds on how connected each vertex must be; alpha is a measure of needing at least this much connectivity at every vertex. Beta, on the other hand, is a maximum: it tells me at most how connected each vertex can be, through some complicated inequalities. When alpha and beta are equal, so that each vertex is bounded below and above by the same amount, these inequalities become very tight; in fact, they become equalities. I get exactly this expression: the out-degree of every vertex has to be K minus the in-degree divided by n minus one. The details aren't too important, other than the fact that K is not an integer, the out-degree is an integer, and this fraction is less than one. From that it follows that the out-degree is actually the floor of K: I'm taking the out-degree, adding something less than one, and getting K.
So the out-degree must be the floor of K, and this holds for every vertex: every vertex has the same out-degree, namely the floor of K. If I write out the graph Laplacian, the diagonal is all floor of K, minus whatever the off-diagonal entries are. Then I can probe the restricted numerical range by computing these inner products at a very specific set of vectors: one over root two in the i spot, negative one over root two in the j spot. This guarantees they're perpendicular to e, and the division by root two makes them length one. So this inner product is in the restricted numerical range, and the restricted numerical range is only K. A direct computation of the form then gives this equality, and this is where something really unexpected falls out. I've got a non-integer equal to an integer minus a fraction. If the fraction had a_ij = a_ji = 0, in other words if the fraction weren't there, I'd have non-integer equals integer: contradiction. If on the other hand both were one, the fraction would be one plus one over two, so I'd have non-integer equals integer minus one: again a contradiction. So what follows from this equation is that either a_ij is one and a_ji is zero, or vice versa. Either there's an edge from i to j, or there's an edge from j to i, but I can't have both edges and I can't have neither. In other words, I must have a regular tournament graph: regularity comes from the fact that all the out-degrees are the same, and tournament means either i to j or j to i for every pair of vertices.
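The vertical-line behavior of regular tournaments is easy to observe numerically. Here's a sketch on an assumed example, the circulant tournament on 5 vertices where i beats i+1 and i+2 mod 5: because A + A^T = J - I for any tournament, the real part of x* L x is pinned at n/2 for every unit x perpendicular to e.

```python
import numpy as np

# A regular tournament on 5 vertices: vertex i beats i+1 and i+2 (mod 5)
n = 5
A = np.zeros((n, n))
for i in range(n):
    for s in (1, 2):
        A[i, (i + s) % n] = 1.0
L = np.diag(A.sum(axis=1)) - A      # every out-degree is (n - 1) / 2 = 2

# For any unit vector x perpendicular to e, Re(x* L x) is exactly n / 2
rng = np.random.default_rng(0)
for _ in range(200):
    x = rng.normal(size=n) + 1j * rng.normal(size=n)
    x -= x.mean()                   # project out the e component
    x /= np.linalg.norm(x)
    assert abs(np.vdot(x, L @ x).real - n / 2) < 1e-9
```

The imaginary part still varies with x, which is exactly why the restricted numerical range comes out as a vertical line segment rather than a point.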
So from the assumption that alpha equals beta, some inequalities of Wu, and a clever choice of vectors, you get that it has to be a regular tournament graph; there are definitely details I skipped. But the coolest corollary of this proof is that a digraph is a regular tournament if and only if its restricted numerical range is a vertical line segment. This is where alpha equals beta, and in fact, with some details I omitted, the real part is equal to n over two, and n has to be odd. So an unexpected thing that fell out of this proof was a characterization of regular tournament digraphs: if I compute the restricted numerical range and see that it's a vertical line, then I know that between every pair of vertices exactly one edge is present, a kind of complete graph where every edge is directed in only one way, and moreover it's regular: if a vertex has two edges going out of it, it also has two going into it. A nice characterization of regular tournament graphs. Now the question is: what if the singleton value is an integer? For that, we introduce a new definition, which as far as we can tell is just the analog of a join in the directed case. We define a directed join between digraphs as the union of the vertex and edge sets, plus the join edges from i to j: for every vertex i in the initial graph and every vertex j in the final graph, I add the edge from i to j. Here are some examples, where the white is gamma and the gray is gamma prime. Here I'm taking an empty graph on five vertices and joining it onto a single vertex, six. Here I'm taking the empty graph on four vertices and joining it onto the complete graph on two. And here I'm taking this bidirectional square and joining it onto the complete graph on two.
A special kind of directed join is what we call the k-imploding star: I take the empty graph on n minus k vertices and directed-join it onto the complete graph on k. For example, this is a one-imploding star: the empty graph on five vertices imploding onto vertex six, the complete graph on one. This is a two-imploding star: the empty graph on four imploding onto the complete graph on two. And this one is a directed join but not a k-imploding star. So: a one-imploding star, a two-imploding star, and a non-example. And the k-imploding stars are precisely the graphs that have singleton restricted numerical range. I almost said singleton integer, but if it's a singleton it has to be an integer. These are precisely the graphs with singleton restricted numerical range, and the value is the k involved. So if I take this graph and compute its restricted numerical range, I get just the number three, and I know it's a three-imploding star. I can see that in the picture: zero, two, and three are the heart of the star, the complete K3, and one and four are imploding onto it. This is one of our nicer characterizations, precisely because these graphs are not characterized by either the Laplacian spectrum or the numerical range. If I look at these two graphs and compute their Laplacian spectra, they are identical, including multiplicity, and they have identical numerical ranges; but only the left one has a singleton restricted numerical range. The right one will have some noise around it, so to speak, more of a circular picture. I wish I had a picture here.
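The singleton claim itself is pleasant to check numerically. A minimal sketch, with the vertex labels chosen arbitrarily (core first): for a k-imploding star, L x = k x minus a multiple of e, so every unit x perpendicular to e gives x* L x = k exactly.

```python
import numpy as np

def imploding_star(n, k):
    """Empty graph on n-k vertices, directed-joined onto the complete graph on k."""
    A = np.zeros((n, n))
    A[:k, :k] = 1 - np.eye(k)       # the core: a complete digraph on vertices 0..k-1
    A[k:, :k] = 1.0                 # every outer vertex points at every core vertex
    return np.diag(A.sum(axis=1)) - A

n, k = 6, 3
L = imploding_star(n, k)

# The restricted numerical range should be the single point {k}
rng = np.random.default_rng(1)
for _ in range(200):
    x = rng.normal(size=n) + 1j * rng.normal(size=n)
    x -= x.mean()                   # make x perpendicular to e
    x /= np.linalg.norm(x)
    assert abs(np.vdot(x, L @ x) - k) < 1e-9
```

Both the real and imaginary parts collapse here, unlike the tournament case, which is why this range is a point rather than a vertical segment.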
I'll add that to my slides later. So this is the first punchline: the restricted numerical range provides new techniques to characterize graphs that the spectrum alone, or the numerical range alone, does not. To relate this back to alpha and beta, we have the following corollary: alpha and beta are equal if and only if the graph is a k-imploding star or a regular tournament. In fact, it's a k-imploding star if and only if alpha and beta equal an integer between zero and n, and it's a regular tournament if and only if n is odd and they equal n over two. This meshes well with the empty and complete pictures. A zero-imploding star is the empty graph, because the core of the star has zero vertices, and the empty graph implodes onto nothing, so it's still the empty graph. And the n-imploding star is the complete graph: the heart of the star is all n vertices, and nothing implodes onto it. So the vacuous endpoints zero and n still fit well into that definition. That section dealt with singletons and, somewhat unsurprisingly, also gave us vertical lines. The last thing to deal with, for this talk at least, is the real restricted numerical ranges: horizontal lines. This section's a little shorter. The definition we introduced to describe these graphs is what we call three-balanced. In retrospect I don't know if that's the best name, but it's the name in the already-published paper, so I guess we have to stick with it. We call a graph three-balanced if for any three vertices the following equation is true: the sum of weights a_ij plus a_jk plus a_ki equals the same sum read backwards, a_ik plus a_kj plus a_ji. It's easier to see in the picture. I pick three vertices, say two, three, and six, and I sum the weights of the edges as I go around that triangle.
If I go from two to three, that's plus one; three to six, plus one; six to two, zero, since there's no edge there. So going around this way, I get two. That's the same as traversing the triangle backwards: two to six is plus one, six to three is plus zero, three to two is plus one, so going around this way is two as well. Pick any three points: going around the triangle one way gives the same total as going around the other way. And actually all three graphs here, these directed joins, are three-balanced; again, the directed joins here are the white joined onto the gray. Maybe another quick example: one, five, and six. If I go one to five, five to six, six to one, I get two, and if I go backwards I get two as well. It also works here with two, one, three: two to one, one to three, three to two gives zero, and I get zero the other way too. So this is what we call three-balanced, and there's a reason I have directed joins here, which will be clearer in a second. The theorem is that the restricted numerical range is real, a real line, if and only if the graph is three-balanced. A quick idea of how the proof came about. These quadratic-form values, over vectors perpendicular to e, are going to be real if and only if they're real on a basis; it's natural, instead of checking all points, to check all points in a basis. The trick is showing that realness on the basis actually extends, that if it's true on the basis it's true everywhere; with this quadratic form that's a little tedious, just a lot of coefficients and summations, but it turns out to be true. So the gist is: the form is real if and only if it's real on a basis. And then the part where three-balanced pops out is choosing a specific basis.
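The triangle check above translates directly into code. A sketch with an assumed example (a bidirectional path joined onto a bidirectional edge): the function tests the three-balanced equation on every vertex triple, and the last assertion checks the realness consequence, namely that the skew part of L vanishes on the subspace perpendicular to e.

```python
import numpy as np
from itertools import combinations

def three_balanced(A):
    """a_ij + a_jk + a_ki == a_ik + a_kj + a_ji for every triple of vertices."""
    n = A.shape[0]
    return all(
        A[i, j] + A[j, k] + A[k, i] == A[i, k] + A[k, j] + A[j, i]
        for i, j, k in combinations(range(n), 3)
    )

# Directed join of two bidirectional digraphs: a bidirectional path on {0, 1, 2}
# joined onto a bidirectional edge {3, 4}
n = 5
A = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (3, 4)]:
    A[i, j] = A[j, i] = 1.0         # bidirectional edges
A[:3, 3:] = 1.0                     # the directed join: 0, 1, 2 each point at 3, 4
assert three_balanced(A)

# Realness: the skew part of L vanishes on e-perp, so x* L x is real there
L = np.diag(A.sum(axis=1)) - A
q, _ = np.linalg.qr(np.ones((n, 1)), mode='complete')
Q = q[:, 1:]
assert np.allclose(Q.T @ ((L - L.T) / 2) @ Q, 0)
```

One unordered triple per set of three vertices suffices, since cycling (i, j, k) leaves both sides alone and swapping two indices just exchanges them.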
So the specific basis we choose is to fix k and define the basis vectors as e_i minus e_k. Really, I should put these over root two to make them length one in the next calculation, so this should be over root two, but it's again those vectors with a one in the i spot and a negative one in the k spot. These are perpendicular to e, and it's easy to see that they span the entire e-perp space. Then I just evaluate this equality on the basis, and I get this condition, which after a quick rewrite is exactly the three-balanced condition. I put this here just to give an idea of how we prove things with the restricted numerical range: it's often a case of choosing a clever basis so that the properties we want fall out of it. But three-balanced isn't necessarily the nicest thing to work with; it would be hard to look at a graph and say whether it's three-balanced or not. Luckily, we have another characterization: the graph is three-balanced if and only if it's the directed join of two bidirectional digraphs. So I take two graphs that only have bidirectional edges, in other words, if i goes to j, then j also goes to i, so all arrows are bidirectional. The directed join of those two will be three-balanced, and that's actually an if and only if. So the corollary is that the restricted numerical range is real if and only if the graph is the directed join of two bidirectional digraphs. Looking at these pictures, it might be hard to figure out what's joining onto what. But I think here one, two, three is joining onto zero and four: one, two, three is this complete triangle, and all of those join onto zero and four. Here one, two, three is my bidirectional graph; it doesn't have to be connected. In this one I've got just an edge and a disjoint vertex. But in all three of these, the first graph is joining onto the second graph. So that's our if-and-only-if characterization of real lines. This kind of leads into the question of what's next.
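As a sanity check on the basis argument (again my own sketch, not code from the talk; the graphs and function names are hypothetical), note that for a real matrix L, the quadratic form x* L x is real for every complex x perpendicular to e exactly when the bilinear form is symmetric on the e-perp space, which it suffices to check on the basis b_i = e_i - e_k. A directed join of two bidirectional digraphs passes this check; a directed cycle does not:

```python
def laplacian(n, edges):
    """Out-degree Laplacian L = D - A of a simple digraph (the talk's convention)."""
    L = [[0] * n for _ in range(n)]
    for i, j in edges:
        L[i][j] -= 1   # subtract the adjacency entry
        L[i][i] += 1   # accumulate the out-degree on the diagonal
    return L

def restricted_form_symmetric(n, edges, k=0):
    """With b_i = e_i - e_k spanning e-perp, check b_i^T L b_j == b_j^T L b_i
    for all i, j != k; symmetry here is equivalent to the restricted
    numerical range being a subset of the real line."""
    L = laplacian(n, edges)

    def form(i, j):  # b_i^T L b_j expanded entrywise
        return L[i][j] - L[i][k] - L[k][j] + L[k][k]

    return all(form(i, j) == form(j, i)
               for i in range(n) for j in range(n)
               if i != k and j != k)

# Directed join: bidirectional pair {0,1} joined onto bidirectional pair {2,3}.
join = [(0, 1), (1, 0), (2, 3), (3, 2), (0, 2), (0, 3), (1, 2), (1, 3)]
print(restricted_form_symmetric(4, join))                      # True

# The directed 3-cycle fails, matching the three-balanced test.
print(restricted_form_symmetric(3, [(0, 1), (1, 2), (2, 0)]))  # False
```

Expanding form(i, j) == form(j, i) in terms of adjacency entries recovers exactly the three-balanced equation, which is the "quick rewrite" mentioned above.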
So thus far, what we've done is show that the restricted numerical range is a pretty robust tool for characterizing digraphs. Indeed, it can characterize some things that are not characterized by just the spectrum, even counting multiplicity. Specifically, what we've done so far in this first paper is characterize the degenerate complex polygons: things that are single points, things that are horizontal lines, and things that are vertical lines. So that's imploding stars, directed joins of bidirectional digraphs, and regular tournaments. Those are our three degenerate polygons for the restricted numerical range. This leads to the natural next question. In the sequel, which we're working on now, we investigate: which digraphs have non-degenerate polygons? That is, if I were to compute the restricted numerical range, which graphs give actual polygons, without the kind of nasty curvature that can make the alpha algebraic connectivity strictly smaller than the second smallest real part of an eigenvalue, those being the two different definitions of algebraic connectivity. And we have pretty much, I'm convinced, all the characterizations done: we have sufficient proofs for everything. As for the necessary direction, I can say that after hundreds of hours of computing, trying to find counterexamples, the computer can't find any. So we have proofs in the sufficient direction, and I'm fairly convinced the conditions are also necessary, but we're still working on the necessary direction for most of these. Actually, for some of them we do have the necessary condition under various extra assumptions. So for example, we have the necessary conditions if the number of vertices is prime, or more generally, if the number of vertices is square-free. Things like this lead me to believe that it's necessary always, because these structures really shouldn't depend on the number of vertices in general, right?
Having a directed join between two things doesn't really seem like it should depend on whether I have seven vertices or eight. So I'm kind of convinced they're necessary; we're just still working on it. This leads us to the next few questions, which we hope to work on in the current and future papers. Which digraphs have negative algebraic connectivity? If you're not a polygon, it's possible that the zero eigenvalue, maybe with multiplicity, still appears when you take the restriction, and some curve is cut out around it that forces the restricted numerical range into the left half plane. So it's perfectly possible to have negative algebraic connectivity, which has consequences when talking about, say, network dynamics. A really interesting question to me is: are all digraphs that are characterized by their Laplacian spectrum also characterized by the restricted numerical range? Is the restricted numerical range truly a more robust way to characterize things? This idea about, say, the k-imploding stars being characterized by one tool and not the other: does it have a converse, with something being characterized by the spectrum but not by the restricted numerical range? We guess the answer is no, but it would be nice to have a proof of that. And the final thing is: how does this all relate to network synchronization? Wu's original motivation for the algebraic connectivity was to describe the synchronization of algorithms on networks. So what does the alpha value actually mean? There's this alpha value, this beta value, a related mu value; there are all these different values that tie into convergence rates and such on networks. How does all this nuance about the restricted numerical range, when it's a polygon and when it's not, tie into all of that?
So that would be some future work after this current paper. And with that, I'll say thank you for your attention. Hopefully I didn't talk too fast. Thanks, Alex. If we could all thank our speaker, then we'll open it up for any questions. Yes, I have a question. Do you know anything about what the restricted numerical range looks like for a random digraph? So randomly, if I were to just compute random things, more often than not it will not even be a polygon; it will be some strange shape. These polygons we're finding are a very specific structure. So I'm still trying to convince myself whether networks that represent real-life things fall into these classes. One of the assumptions physicists often make is that their matrices are Hermitian, but if you take a random matrix, it's definitely not going to be Hermitian. It would be nice if all networks that get constructed fall into the category of being a polygon, for example, because it would be really nice if that alpha value agreed with the smallest real part of the spectrum, because that has a lot of consequences: these values describe subtle differences in convergence rates and such on real-life networks. So it would be nice if the structure was overarching enough to be consistent with real-life examples. But mathematically speaking, if you take a random digraph, it's most likely going to be one of the first kind of pictures, one of those guitar-pick looking shapes. I'm asking because, I don't know if you know, there's this whole class of open questions about whether, with high probability, a random graph is characterized by its spectrum. I know it's open for the adjacency matrix, and I think it's open for the Laplacian. And so I wonder, does this numerical range business characterize most graphs? Do you know anything about that?
My gut feeling is that if it's true for the spectrum, it will be true for the numerical range; it's more likely to be true for the restricted numerical range than it is for the spectrum. But my guess is that they're both still open. I don't know how often the numerical range itself is employed in graph theory. I've seen several papers in that direction, but I don't know if I've seen it used for characterizations, at least in the survey papers I've read. And the restricted numerical range should definitely be more robust than that; even in this example here, it characterized graphs where the numerical range did not. That would be an amazing thing to show. Thank you, Alex, for the talk. I wanted to ask you, this reference of Wu, when was that done? That was 2005, I believe. I see. So that is when the definition of algebraic connectivity was formed? For directed graphs, yeah. For directed graphs, okay. For that definition, yeah. There's also the concept of the second smallest real part; I'm not sure when that one was formed. That one's the more natural extension, but it perhaps doesn't have the nice properties you'd want it to have. Thank you. It was nice seeing you. Do we have any other questions for our speaker? Okay, in that case, thanks everybody for coming. Thanks again, Alex. And have a good day, everybody. See you all around. Thank you.