Okay, this is joint work with my former PhD advisor, Matthieu Jonckheere, and myself. You probably see me displayed as Matteo Marsili because of technical issues, but my real name is the one on the slide. I will talk a little bit about the problem of maximum independent sets in the context of random graphs. First, let me introduce the notion of an independent set. Given a graph G, an independent set is a subset of vertices such that no two of its vertices are neighbours. For example, in this picture I have two copies of the same graph, showing two different independent sets for that graph. An independent set is said to be maximal if you cannot add another vertex so that the resulting set is still an independent set, and in that sense, both of the independent sets in this example are maximal. There is a further notion, a maximum independent set, which is an independent set of the largest possible size for the given graph. For example, the independent set on the left is maximum, while the one on the right is maximal, because you cannot add further vertices, but it is not maximum, because it is not of the largest possible size. The size of a maximum independent set of a graph G is called the independence number and will be denoted α(G). It is a very well-known result that constructing maximum independent sets, or computing independence numbers, is in general an NP-hard task for general graphs.
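These three definitions can be checked by brute force on small graphs. Here is a minimal sketch, assuming the graph is given as a dictionary mapping each vertex to the set of its neighbours; the 4-vertex path used in the example is hypothetical, not the graph on the slide:

```python
from itertools import combinations

def is_independent(adj, S):
    """No two vertices of S are adjacent."""
    return all(v not in adj[u] for u, v in combinations(S, 2))

def is_maximal(adj, S):
    """Independent, and no vertex outside S can be added to it."""
    return is_independent(adj, S) and all(
        any(u in adj[v] for u in S) for v in adj if v not in S
    )

def independence_number(adj):
    """Brute-force alpha(G); exponential time, fine only for tiny graphs."""
    for k in range(len(adj), 0, -1):
        if any(is_independent(adj, S) for S in combinations(adj, k)):
            return k
    return 0

# Hypothetical example: the path 0 - 1 - 2 - 3.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
# {0, 2} is maximal and maximum; alpha(G) = 2 here.
```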
So, the aim of this work was mostly to study whether there exists some subclass of graphs for which the task is easier and, for that subclass, to find an algorithm that constructs maximum (or almost maximum) independent sets and to give the independence number. Our discussion will centre around two random graph models. One is sparse Erdős–Rényi graphs: in an Erdős–Rényi graph, each pair of vertices is connected independently with a fixed probability, and if the connection probability scales as a constant over n, the graph is called a sparse Erdős–Rényi graph. The property these graphs have is that they are sparse, in the sense that, along the sequence of graphs, the typical degrees are of order one in the graph size. The other model, on which we will centre most of our attention, is the configuration model, which is a construction that yields uniform graphs with a given fixed degree distribution. The construction goes more or less like this. Suppose I want to construct a uniform graph where the first vertex has some fixed degree, the second vertex another one, and so on. I take all the vertices and assign to each vertex a number of half-edges equal to the degree I want that vertex to have in the final graph. Then I sequentially pick a free half-edge and match it uniformly at random with another free half-edge; I repeat this until there are no more free half-edges, and for each matched pair of half-edges I establish an edge in the graph. For example, the half-edge assignment in this picture results in this graph. Here is a larger simulation I did with the software referenced on this slide, and the end result is a graph that looks a little bit like this.
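The half-edge matching just described can be sketched in a few lines; pairing off a uniformly shuffled list of half-edge "stubs" is equivalent to sequentially matching free half-edges uniformly at random. The function name and representation are my own, not from the talk:

```python
import random

def configuration_model(degrees, seed=None):
    """Uniform half-edge matching for the given degree sequence.
    Returns a multigraph as a list of edges; self-loops and
    multiple edges may appear, exactly as discussed in the talk."""
    rng = random.Random(seed)
    # One stub per unit of degree: vertex v appears degrees[v] times.
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    assert len(stubs) % 2 == 0, "degree sum must be even"
    # Shuffling and pairing consecutive stubs is a uniform matching.
    rng.shuffle(stubs)
    return [(stubs[i], stubs[i + 1]) for i in range(0, len(stubs), 2)]
```

Whatever the random pairing, each vertex ends up with exactly the degree it was assigned; only the identity of the neighbours (and possible loops or multi-edges) is random.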
And the result of this procedure in this instance was not really a graph, but something that is called a multigraph: marked in red, there are two vertices that share more than one edge, and behind those edges you can see another vertex that has an edge connected to itself, a self-loop. Throughout the talk, we will assume that a certain convergence assumption on the degree distributions holds. Under this convergence assumption, the construction has asymptotically positive probability of not resulting in one of these problematic structures, that is, multiple edges between two vertices or an edge from a vertex to itself. Because this probability is asymptotically positive, if we establish that some property holds with high probability for the construction (which may produce these defects), we can condition on the graph being simple, not having this kind of behaviour, and the property will still hold with high probability. Another important result for configuration model graphs is that there is a phase transition in the presence of a giant connected component. A connected component is a subset of vertices such that every pair of its vertices is connected by a path of edges within the graph, and a giant component is a connected component that contains, asymptotically, a positive proportion of all the vertices in the graph. The phase transition result states that if a certain parameter, which depends on the asymptotic degree distribution of the sequence of configuration model graphs, is larger than one, then with high probability there will be a giant component in the graph; and if the parameter is smaller than or equal to one, there won't be. When the parameter is larger than one and there is a giant component, the graph is called supercritical; when it is smaller than or equal to one, the graph is called critical or subcritical.
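The talk does not name this parameter, but in the standard Molloy–Reed formulation it is ν = E[D(D−1)]/E[D], computed from the limiting degree distribution D, and a giant component exists with high probability exactly when ν > 1. A minimal sketch, assuming the distribution is given as a finite pmf:

```python
def branching_parameter(pmf):
    """Molloy-Reed parameter nu = E[D(D-1)] / E[D] for a degree
    distribution given as {degree: probability}. The configuration
    model is supercritical (giant component, whp) iff nu > 1."""
    ed = sum(k * p for k, p in pmf.items())        # E[D]
    ed2 = sum(k * (k - 1) * p for k, p in pmf.items())  # E[D(D-1)]
    return ed2 / ed

# All vertices of degree 3: nu = 2, supercritical.
# All vertices of degree 1 (a perfect matching): nu = 0, subcritical.
```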
The kind of algorithms we will study are called sequential algorithms. At each step of such an algorithm, the vertex set is partitioned into three sets: the active vertices, the blocked vertices, and the unexplored vertices. In this representation, the black vertices are active, the red vertices are blocked, and the white vertices are unexplored. At each step, a sequential algorithm selects, according to some criterion (different algorithms differ in which criterion they use), a vertex from the unexplored subgraph, declares it active, and declares all its neighbours blocked. Because at each step only unexplored vertices are activated, at every time the algorithm defines an independent set through the set of active vertices. The algorithm runs until all vertices are either active or blocked, at which point it cannot select further vertices, and the independent set defined by the active vertices is easily seen to be a maximal independent set: it cannot be enlarged. I will focus on one special instance of sequential algorithm, called the degree-greedy algorithm, which at each step selects, uniformly at random, a vertex of minimal degree within the unexplored subgraph. In this example, it would select a degree-zero vertex at time one. In general, sequential algorithms take at most n steps to run, and most of them have polynomial complexity. But the problem is that, in general, they are not optimal, in the sense that they obtain independent sets that can be far away from a maximum independent set.
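The degree-greedy algorithm just described can be sketched as follows. This is a naive implementation that recomputes degrees in the remaining graph at every step; it assumes the graph is an adjacency dictionary of sets, and the tie-breaking via a sorted list is only there to make the uniform choice reproducible:

```python
import random

def degree_greedy(adj, seed=None):
    """Sequential degree-greedy: repeatedly activate a uniformly chosen
    minimum-degree vertex of the unexplored subgraph and block its
    unexplored neighbours. The active set is a maximal independent set."""
    rng = random.Random(seed)
    unexplored = set(adj)
    active, blocked = set(), set()
    while unexplored:
        # Degrees within the remaining (unexplored) subgraph.
        deg = {v: len(adj[v] & unexplored) for v in unexplored}
        dmin = min(deg.values())
        v = rng.choice(sorted(u for u in unexplored if deg[u] == dmin))
        active.add(v)
        unexplored.discard(v)
        nbrs = adj[v] & unexplored
        blocked |= nbrs
        unexplored -= nbrs
    return active
```

On the hypothetical 4-vertex path 0-1-2-3, the algorithm first picks a degree-one endpoint, blocks its neighbour, and finishes with an independent set of size 2, which is maximum for that graph.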
In general, there are very few characterizations of maximum independent sets in random graphs. There are a couple of exceptions, and here I just mention some that are relevant for our work. One is that, in sparse Erdős–Rényi graphs, when the mean degree of the graph is smaller than the number e, the degree-greedy algorithm is optimal, in the sense that it constructs an independent set of maximum size. Then, for configuration model graphs, we have a couple of results, for example fluid limits for the performance of degree-greedy on regular graphs, and there is a result that characterizes maximum independent sets in regular graphs for large enough connectivity. There are other works that analyse greedy algorithms, also from a fluid-limit perspective. Okay, our analysis is based on the idea that the degree-greedy algorithm defines, in a way, a selection sequence, which is essentially the sequence of vertices it declares active at each step as the algorithm runs. Apart from this selection sequence, the algorithm defines at each step what we will call a remaining graph, which is essentially the subgraph induced by the unexplored vertices. All our results, which aim to establish in which contexts the degree-greedy algorithm is asymptotically optimal for configuration model graphs, are based on a proposition that says the following: if the degree-greedy algorithm selects only degree-one or degree-zero vertices until the remaining graph is subcritical, that is, until it has no giant component, then the independent set it constructs has the size of a maximum independent set minus a correction that is very small, smaller than any positive power of the graph size. So this proposition characterizes what we want to study, but the problem is that it is very difficult to prove when the condition holds.
So essentially, our main theorems give ways of proving when this is the case. Our main idea was the following: we have a graph and we don't know for how long the algorithm will select only degree-one or degree-zero vertices, but we know that if the graph initially has a certain number of degree-one vertices, then at least until these vertices run out, that is, until they are all declared active or blocked, the algorithm will always select degree-one or degree-zero vertices. So we define a map M1 which, when evaluated on the degree distribution of a graph, gives the degree distribution after activating or blocking all the initial degree-one vertices. The idea is that if, after a finite number of applications of this map, the graph obtained is subcritical, then we can apply the proposition and say, okay, the independent set obtained by the degree-greedy algorithm is optimal up to this very small correction. So our first theorem is a characterization of when the degree distribution obtained after one application of the map M1 defines a subcritical graph. What we obtain is a very easy-to-check criterion, which depends only on the generating function of the degree distribution of the graph and on a parameter that, in a sense, measures the ratio between the number of initial degree-one vertices and the total number of vertices in the graph. So this is a criterion that can always be checked, very easily. But the problem is that it is not very general. We may be in situations in which degree-greedy keeps picking vertices of degree one or zero until the graph is subcritical, but this is not accomplished within the first block of degree-one vertices: by selecting the first block of degree-one vertices, we generate further degree-one vertices, which keep the algorithm going and doing the right thing, namely selecting only those.
So we have a second theorem, which characterizes exactly the deterministic limit of the degree distribution after activating these initial degree-one vertices. This lets us check the general criterion for asymptotic optimality, which is that finitely many applications of this map result in a subcritical graph. And we have a couple of propositions: the first tells us that when the general criterion holds, the degree-greedy algorithm obtains an almost maximum independent set, and we have an explicit formula for the size of the independent set obtained. The second proposition tells us that, for graphs that do not obey the general criterion, we can modify the graph a little, couple the new graph with the old one, and obtain an upper bound for the size of maximum independent sets. Another application we had was to re-derive what is called the e-phenomenon, which is the thing I mentioned: in sparse Erdős–Rényi graphs, when the mean degree is smaller than e, the degree-greedy algorithm is optimal, and there was already a characterization in the literature of the independence number obtained in that case. We derived a more explicit, new characterization in terms of what is called the Lambert W function. The idea behind it was to use the second theorem I just showed to derive a discrete dynamic for some parameters that describe the degree distribution after a certain number of applications of the map M1. After a change of variables, what we obtained was that we could explicitly solve this nonlinear map in terms of what is called the tetration operation, that is, iterated exponentiation. Then, using a theorem by Euler which says that infinite tetration converges for bases within a certain interval, we showed asymptotic optimality for the case in which the sparse graph has mean degree smaller than e.
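The tetration fact invoked here is classical: Euler showed that the infinite power tower x^(x^(x^…)) converges exactly for bases x in [e^(−e), e^(1/e)], and the limit y then satisfies y = x^y, which is one way the Lambert W function can enter the picture. A quick numerical check, not taken from the talk:

```python
import math

def power_tower(x, n):
    """n-fold tetration x^(x^(...^x)), computed by iterating t -> x**t."""
    t = x
    for _ in range(n - 1):
        t = x ** t
    return t

# Euler's convergence interval for the infinite tower.
LOW, HIGH = math.e ** -math.e, math.e ** (1 / math.e)

# For x = sqrt(2), which lies inside the interval, the tower
# converges to 2, since y = 2 solves y = sqrt(2)**y.
```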
And then, using the proposition that gives the size of the independent set obtained, we derived the independence number for these kinds of graphs. That's more or less it, thank you.