So, I am going to talk about a recent exciting result of Julia Chuzhoy on a classical problem known as the edge-disjoint paths (EDP) problem in networks. Let me start by defining the problem. You are given a graph along with a collection of source-sink pairs, say (s1, t1), (s2, t2), ..., (sk, tk), and your goal is to route a maximum number of these source-sink pairs in the graph in an edge-disjoint manner. When I say routing a pair, what it means is that to route a pair (si, ti) you assign a path in the graph to the pair, connecting si to ti along this path. The phrase edge-disjoint here means that if I look at the routing of any two pairs (si, ti) and (sj, tj), then they do not share any edges. So let us look at an example. In this graph, I am routing the pair (s1, t1) along the red path, (s2, t2) along the green path, and (s3, t3) along the blue path. Once I have routed these pairs, if you look at the pair (s4, t4), there is no way to assign a path to this pair without violating the edge-disjointness property: any path that connects s4 to t4 at this point would intersect one of the previously assigned paths. This is an extremely well-studied problem, with a long and rich history of results. For now let me just review some classical results, and I will say more as we go along. One thing I should point out is the expressive power of this problem. Even the EDP problem on star graphs, which are trees of height 1 (a single central vertex connected to the other vertices by edges), is equivalent to the general matching problem. We know how to solve matching in polynomial time, but that is a non-trivial result, and EDP captures it very easily. When the graph is directed, even on two pairs, just (s1, t1) and (s2, t2), the problem is NP-hard to solve.
If the graph is undirected, the problem is poly-time solvable for any constant number of pairs, but this requires the heavy machinery of the graph minors work developed by Robertson and Seymour, so it is a highly non-trivial algorithm. And of course, even in undirected graphs, once the number of pairs is unbounded the problem is NP-hard. I should point out that even on undirected trees, if the edges of the tree have capacities, then the problem is NP-hard. But in this talk we will not worry about capacities; we will always assume all edges of the graph have unit capacity. Since the problem is intractable, there has been much work on designing approximation algorithms for it. Almost all the work on approximation algorithms starts from one common starting point, a very natural linear programming relaxation known as the multi-commodity flow relaxation. What does the relaxation do? First, instead of routing a pair (si, ti) along a single path, it allows you to simply send a unit of flow from si to ti; this means you can use multiple paths, each carrying some fraction of the flow. Second, you do not even have to route a pair (si, ti) for a full unit of flow; you could route it for, say, an epsilon amount of flow. So we define a variable xi which indicates, for each pair (si, ti), how much flow is routed for the pair, and a variable f(p) for every si-ti path p which tells how much flow is carried on the path p. The goal is to maximize the sum of the xi, the total amount of flow routed. The main constraint is that if you look at any edge e in the graph, the total flow on paths that go through e should not exceed 1; that is the relaxation of the edge-disjointness condition.
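In symbols, the relaxation just described is the following LP, where P_i denotes the set of si-ti paths:

```latex
\begin{aligned}
\text{maximize}\quad & \sum_{i=1}^{k} x_i \\
\text{subject to}\quad & x_i = \sum_{p \in \mathcal{P}_i} f(p) && \text{for all } i, \\
& \sum_{p \,:\, e \in p} f(p) \le 1 && \text{for all edges } e, \\
& 0 \le x_i \le 1, \qquad f(p) \ge 0.
\end{aligned}
```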
As a warm-up, let us look at a very simple rounding algorithm for this relaxation. It is an iterative process. You start with the fractional solution to the multi-commodity flow relaxation. Among all the pairs you have not yet considered for routing, pick the pair (si, ti) that has the shortest flow path on which the LP routes some flow; shortest in terms of the length of the path. Route the pair (si, ti) along this path p, and now discard any other flow path that intersects p. You keep repeating this process until no fractional flow is left. It is clear that what you get at the end is an edge-disjoint routing, because each time you route along a path p you discard every flow path that intersects it. What is not clear is how good this algorithm is. So let us fix some notation and analyze it; it is a very simple analysis. Throughout the talk, n denotes the number of vertices, m the number of edges, and opt the value of the optimal fractional solution. First consider the case where I pick a pair and route it along a path whose length is less than √m. In this case, how much flow could I throw away? At most √m: there are only √m edges on the path, and I might be throwing away a unit amount of flow crossing each of those edges. So as long as I am routing on paths of length less than √m, I am within a √m factor of the optimal solution. If at some point the shortest available flow path has length more than √m, then notice that the total amount of flow remaining in the solution can be no more than √m. Why? The total routing capacity of the graph is m, and every unit of flow being routed consumes at least √m of this capacity, so there is no more than √m flow left. At this point, even if you route just one more pair, you are within a factor √m of the optimal solution.
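A minimal sketch of this greedy rounding, assuming the fractional solution is handed to us as a list of (pair id, flow path) entries, with each path represented as a set of edges; all names here are illustrative, not from the talk:

```python
def greedy_round(flow_paths):
    """flow_paths: list of (pair_id, path) where path is a frozenset of the
    edges of a path on which the LP routes positive flow.
    Returns an edge-disjoint routing: pair_id -> chosen path."""
    remaining = list(flow_paths)
    routed = {}
    while remaining:
        # pick the shortest available flow path among unrouted pairs
        pid, p = min(remaining, key=lambda t: len(t[1]))
        routed[pid] = p
        # discard every flow path (of any other pair) sharing an edge with p,
        # and the other paths of the pair just routed
        remaining = [(q_id, q) for (q_id, q) in remaining
                     if q_id != pid and not (q & p)]
    return routed
```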
Combining these two observations, you get a rounding that is within √m of the optimal solution, okay? One may think you could do better by being more sophisticated; after all, as Madhu is pointing out, I don't seem to be using the power of the LP too strongly. It turns out there is a simple instance, identified by Garg, Vazirani, and Yannakakis almost 20 years ago, which shows that this LP has an integrality gap of √m. The instance is just a grid-like graph. Along the x-axis you have the sources, and along the y-axis you have the sink vertices. The green blobs correspond to the little gadget up there, which ensures that paths entering a green blob have to traverse a common edge to get to the other side. Once this gadget is in place, let's see how good an LP solution is. You can always route half a unit of flow for every (si, ti) pair. How? Pick a pair (si, ti), start at si, go up all the way to the level of the sink ti, and turn towards ti at that level; a straightforward routing, up and then across, and everyone does that. Any two paths intersect just once, and when they intersect, the only constraint is that no more than one unit of flow passes through the common edge, which is ensured if everyone routes only half a unit of flow. So fractionally you can route k/2 units of flow. But now let's look at an integral routing. Take any pair (si, ti). The moment you route si to ti, no matter in what fashion, you essentially create a wall: no other source can reach its sink without crossing this wall, and that means causing a congestion of two on some edge. So integrally you can route just one pair. Since k can be set to roughly √m, this shows an integrality gap of √m for this LP.
And this gap holds for planar undirected graphs, so it occurs on a very simple class of graphs. The news is a bit worse if the graph happens to be directed: you can turn this integrality gap into a hardness-of-approximation result. Remember, I pointed out earlier that the two-pair problem on directed graphs is NP-hard. So I can replace the green blobs by a gadget which is the two-pair gadget. If the two-pair gadget admits a solution, meaning both pairs can be routed in an edge-disjoint fashion, you will be able to route all (si, ti) pairs; and if there is no solution, then it serves as an obstruction, as before, and you will not be able to route more than one pair. This intractability, where it seems hard even to approximate the problem, naturally motivates the question: can we relax the problem slightly and get stronger approximation guarantees? √m is not a very appealing approximation guarantee. A natural relaxation is to consider EDP with congestion. From here on, when I say EDP with congestion c, I mean the following version of the problem: I let you violate the edge-disjointness condition by allowing up to c paths to use each edge of the graph. The case we have been talking about so far is congestion c = 1, and what we want to understand is what happens when the congestion is allowed to exceed 1. In particular, the question this talk focuses on is: is it possible that with constant congestion, EDP becomes well-approximable, say within a polylog factor? And notice that the integrality gap example I showed completely breaks down if congestion 2 is allowed; it relied very strongly on the no-congestion condition. With congestion 2, you can just route like the fractional solution.
Once we start looking at EDP with congestion, there is a beautiful result of Raghavan and Thompson from 1987, which gives a randomized rounding scheme. It goes as follows. Take the fractional solution and look at a pair (si, ti). Toss a coin which comes up heads with probability xi, where xi is the amount of flow routed for the pair (si, ti) in the fractional solution. If it comes up heads, you route this pair; so you are routing a pair (si, ti) with probability equal to xi. Once you decide to route the pair, you look at the flow paths used by the LP solution for this pair and sample one of them with probability proportional to the flow on it. That is the routing. How good is it? It turns out that if you allow a congestion of roughly log n / log log n, you get a constant factor approximation to EDP. Building on the same ideas, it was shown subsequently that for any constant congestion c, you can get an n^(1/c) approximation with congestion c. So things do improve once you allow congestion. But notice that as long as the congestion c is a constant, the approximation factor is still polynomial; the ratios go down as c increases, but they remain polynomial for constant c. So, could we do better than this rounding framework? Well, it turns out that if the graph is directed, this randomized rounding approach is pretty much the best you can do. In fact, there is a hardness result of the form n^(Ω(1/c)) for EDP with congestion c in directed graphs. And if you want something like a constant factor approximation, you cannot avoid a congestion of log n / log log n. So for directed graphs, this is the picture, qualitatively at least.
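The Raghavan-Thompson rounding above can be sketched in a few lines; here the LP solution is assumed to be given as x[i], the flow routed for pair i, and paths[i], a list of (path, flow) entries summing to x[i]. The representation is illustrative:

```python
import random

def randomized_round(x, paths, rng=random.Random(0)):
    """x[i]: flow routed for pair i; paths[i]: list of (path, f) with the f
    values summing to x[i]. Returns the chosen path for each routed pair."""
    routing = {}
    for i, xi in x.items():
        if rng.random() < xi:        # route pair i with probability x_i
            ps, fs = zip(*paths[i])
            # sample one flow path with probability proportional to its flow
            routing[i] = rng.choices(ps, weights=fs, k=1)[0]
    return routing
```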
We are not matching the exponents exactly between the hardness results and the upper bounds, but qualitatively the question seems well understood. Things are not so clear for undirected graphs. You do have hardness results, even when congestion is allowed, but they are of the form (log n)^(1/c); so the upper bounds look like n^(1/c) while the hardness looks like (log n)^(1/c). From this point onwards, I am going to focus only on undirected graphs, and this is the state of the art, or rather this was the state of the art as of two months ago: for any constant congestion c, you have (log n)^(1/c) hardness and an n^(1/c) approximation. There is a striking gap between the upper and lower bounds. This gap has essentially disappeared with the remarkable result of Julia Chuzhoy, which shows the following: with constant congestion, it is possible to get a polylogarithmic approximation to EDP in undirected graphs. So the truth really was closer to the hardness results. This result builds very strongly on a number of developments that happened over the last six or seven years, and then adds a very powerful machinery of its own. In the remaining time, there are many technical details I won't be able to get into, but what I would like to do is give you a flavor of the various ideas that are put together to get this result. I am going to divide the rest of the talk into four parts, and I will start by talking about a framework for solving EDP problems called the well-linked decomposition framework, which is the starting point for all the recent results.
At a high level, the framework allows you to take an arbitrary instance of EDP and reduce it to instances of EDP where all the source-sink pairs have very high connectivity to each other. Remember, in the original problem all you care about is connecting s1 to t1, s2 to t2, and so on; you do not care about the connectivity of s1 to t2 or s2 to t1. Here, we will care about the connectivity even between sources and sinks that are not involved in any pair. The second step is, once you have these instances where the source-sink pairs have very high connectivity to each other, to exploit this connectivity to show the existence of interesting routing structures in the graph. In particular, we will end up showing that you can embed expanders in such instances. To do this embedding of expanders, we are going to use a beautiful recent approach highlighted in the work of Khandekar, Rao, and Vazirani, which reduces it to a cut-matching game that I will talk about. This view of embedding expanders turns out to be very useful for the EDP framework. The third part is EDP in graphs with a large minimum cut. That was really the first result combining these two ideas, showing that you can get a polylog approximation for EDP, I think actually even with congestion one, provided the graph you start with has a large minimum cut, that is, it is reasonably well connected to begin with. That was the proof of concept that there is some hope that EDP may, after all, be approximable within a polylog factor. And finally, I will talk about how Julia Chuzhoy extends this approach to arbitrary graphs which do not obey the large min-cut condition. So let's start with the well-linked decomposition.
What is the idea of well-linked decomposition? As before, we start with the multi-commodity flow solution. But this time, we look at the solution and use it only to partition the original instance into a collection of smaller instances, with the property that all the sources and sinks in each smaller instance are well connected to each other. The key point of departure from the earlier approaches is that in doing this decomposition, I totally ignore the flow paths used by the LP solution. The only information we carry forward from the multi-commodity flow solution is the fraction for which each pair was routed; once the decomposition is done, I won't care about the original multi-commodity flow solution at all. The second step of this framework is to show that once you have these well-linked instances (and I am going to define this more precisely in a moment), you can embed in them routing structures which I will call crossbars. These are basically structures on which the EDP problem is easy to solve. And if you can do that, then you are in good shape: you started with an arbitrary instance, you reduced it to well-linked instances, well-linked instances have crossbars on which you already know how to solve EDP, and now you just use these crossbars to route your original pairs inside them, okay? That is the idea of the framework. So let me give a few definitions, and then I will show you right away a quick, simple application of the framework. What is a well-linked set? Suppose you have a graph G and a subset X of special vertices in this graph. I will say that the set X is well-linked in G if it satisfies the following property: if you want to separate some k vertices of X from the remaining vertices of X, then you must delete at least k edges.
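In symbols, the condition just stated can be written as follows, where δ_G(S) denotes the set of edges with exactly one endpoint in a vertex set S:

```latex
X \subseteq V \text{ is well-linked in } G
\iff
\forall\, S \subseteq V:\;
|\delta_G(S)| \;\ge\; \min\bigl(|S \cap X|,\; |X \setminus S|\bigr).
```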
In other words, with respect to the set X, the graph looks like an expander. The whole graph need not be an expander, but it satisfies this expander-like condition for the set X. Once I know what a well-linked set is, I can define a well-linked instance of EDP. First, what is an instance of EDP once again? It consists of a graph G along with a collection of source-sink pairs. From here on, I am going to refer to the source and sink vertices as terminals, and I will denote them by the set X; whenever I use the set X, it refers to these terminals. I can assume without loss of generality that all the si's and ti's are distinct; there is a simple trick that allows you to do that: if a vertex occurs multiple times, you just replicate it in a suitable way. In this view, an instance of EDP is actually the following: you are given a graph G, you are given a set X of terminals, and the instance gives you a matching M on the terminals and tells you to route that matching. It just pairs up the terminals in some fashion and says, give me a routing of these pairs. What is a well-linked instance of EDP? It is just an instance of EDP with the property that the set of terminals is well-linked. And here is a result that is useful for our purposes: given any instance of EDP, you can convert it into a collection of well-linked instances with only a polylog factor loss in the solution value. You start with some instance whose multi-commodity flow solution value is, say, opt; you will be able to convert it into a collection of well-linked instances such that across the well-linked instances, the flow value is at least opt/polylog. Once you have that, we will see how this can be very useful for solving the problem. Now for the second phase, these crossbar structures I was talking about.
Let me briefly define a crossbar. If you give me a graph H, I will say it serves as a crossbar with respect to some special subset of vertices I, which I will call the interface of the crossbar, if it has the following property: give me any matching on the interface I, and I can route it integrally on edge-disjoint paths inside this crossbar graph. Here is an example of a crossbar: a complete graph. You give me any matching and I just use one-hop paths; simple, trivially integral. But complete graphs are not the only crossbar structures. Here is a slightly more interesting one: a grid graph. A grid is a crossbar with respect to its first row. Why? Suppose you give me this grid and a collection of source-sink pairs lined up on the first row, and you ask me to route them: s1 wants to go to t1, s2 to t2, some arbitrary pairing. I can do this routing: s1 moves to the second row, uses the second row to go all the way across to t1's column, and then comes straight back; s2 uses the next row, and so on. It's a crossbar, and the routing was easy. This already gives us one new result as a direct application of the framework, on how well EDP can be solved in planar undirected graphs. There is an old result of Robertson, Seymour, and Thomas which says that if you give me a planar graph G with k well-linked terminals, then with congestion two you can embed in that graph essentially a k-by-k grid. Combined with the well-linked decomposition framework, you can now show that EDP in planar graphs is approximable within an O(log n) factor with congestion two.
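The grid routing just described can be sketched as follows, assuming a grid with vertices (row, col), terminals on distinct columns of row 0, and each pair j given its own private interior row; purely illustrative:

```python
def route_on_grid(pairs):
    """pairs: list of (s_col, t_col) on row 0, with all columns distinct.
    Pair j is routed via its private row j: along column s_col to row j,
    across row j, and back along column t_col. Distinct pairs use distinct
    interior rows and distinct columns, so the paths are edge-disjoint."""
    routes = []
    for j, (s, t) in enumerate(pairs, start=1):
        col_s = [(i, s) for i in range(j + 1)]           # rows 0..j in column s
        step = 1 if t > s else -1
        across = [(j, c) for c in range(s + step, t + step, step)]  # row j
        col_t = [(i, t) for i in range(j - 1, -1, -1)]   # rows j-1..0 in column t
        routes.append(col_s + across + col_t)
    return routes
```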
This already was a strong improvement over what we knew before, because at that point, even for planar graphs, only a polynomial factor approximation was known with constant congestion, and you get this very quickly from the framework. So here is how the picture looks: you have your terminals, you have this crossbar structure sitting somewhere in the graph, you route the terminals to the first row, the interface of the crossbar, and then pair them up inside the crossbar. What about general graphs? That is our agenda for today: EDP in general graphs. Here is a natural conjecture which emerges from this framework, which I will refer to as the crossbar conjecture. It says the following. Suppose you give me a well-linked instance of EDP, that is, a graph G with a set X of terminals which is well-linked. Then given any matching M on this set of terminals, I can integrally route a 1/polylog fraction of the pairs with only constant congestion. Not only the specific pairing you were interested in to begin with, at time zero when the game began: you give me a well-linked instance and any matching, and it is possible to route. In a way, this is saying that for a well-linked instance, the whole graph G serves as a crossbar whose interface is the set of terminals. If the crossbar conjecture is true, it is clear that the integrality gap of the flow relaxation is only polylog n with constant congestion; that direction is easy. The converse is also true: if the integrality gap of the flow relaxation is polylog n with constant congestion, then the crossbar conjecture holds. Why is that? For any well-linked instance and any matching M on the terminals, there is a fractional flow of value at least |X|/log n. Why? Because the terminals satisfy the cut condition for routing any matching, and the flow-cut gap is log n.
So you would be able to route at least this much flow fractionally. But you told me the integrality gap is only polylog n with constant congestion, so I will be able to integrally route a 1/polylog n fraction of the pairs in M. This means the integrality gap question, what is the integrality gap of the flow relaxation with constant congestion, is equivalent to settling the crossbar conjecture; it is not just one possible approach. So here is the plan for the rest of the talk. We are going to try to prove this crossbar conjecture, and the specific way we will try to prove it is as follows: we will show that given any well-linked instance on k terminals, we can embed, with constant congestion, a low-degree expander of size k/polylog n. Once we have this embedding, we are in good shape. I won't go over this in detail right now, but it is a well-known fact that routing on expanders along edge-disjoint paths is an easy thing to do; in fact, a greedy routing scheme works. Intuitively, the reason is this: pairs can be connected by short paths, and each time you route a pair (si, ti), the damage you cause is roughly the length of this path, even if you were not very clever in choosing it. If paths have length only O(log n), the damage you cause each time is only O(log n). In fact, if the expander has low degree, you can even do vertex-disjoint routing, and the same reasoning holds; this is a fact that is useful in Julia's result. So now we move to part two: we want to embed an expander. How could we possibly do that? This is where the cut-matching game of Khandekar, Rao, and Vazirani comes into play. In this game, you start with an empty graph, say on n vertices.
You want to build an expander on these vertices. There are two players. There is a cut player, who wants to build an expander as quickly as possible, and there is a matching player, whose goal is to delay the construction of the expander. The game goes as follows. In step one, the cut player gives an equal-size partition (A1, B1) of the vertices to the matching player. The matching player's job is to return some perfect matching between the two sets A1 and B1, and he will figure out the worst possible matching he could give you to delay the construction of the expander. You take those matching edges and put them on your vertices; initially the graph was empty, now it has this set of edges. At this point, the cut player looks at what it has got, identifies another partition (A2, B2), and you repeat the process. The beautiful result is that after O(log² n) iterations of this game, you do get an expander, no matter how the matchings were chosen. We can use this to solve the EDP problem as follows. To prove the crossbar conjecture, we have our graph G, and we are going to build an expander on the terminals of this graph; this is a well-linked instance. When the cut player computes a partition (A1, B1) of the terminals and wants a perfect matching between them, all I do is ask the graph to give me a unit flow on the terminals from A1 to B1. That is possible. Why? Because the instance is well-linked. The edges of the matching now correspond to flow paths in the graph, and this is an integral flow. You get this routing on flow paths, and there on the left side is the expander being built. In the second round, some other partition comes; you once again use the well-linked property to find a matching, which means finding flow paths connecting the terminals on the two sides of the partition.
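The loop of the cut-matching game can be sketched schematically as below; both players' strategies are abstracted as callables, and all names are illustrative (in the EDP setting, the matching player would be implemented by routing a unit flow from A to B in the graph):

```python
import math

def cut_matching_game(terminals, cut_player, matching_player):
    """cut_player(edges) -> (A, B): an equal-size bipartition of terminals;
    matching_player(A, B) -> a perfect matching between A and B.
    After O(log^2 k) rounds, the accumulated edge set is an expander."""
    k = len(terminals)
    edges = set()
    rounds = math.ceil(math.log2(max(k, 2)) ** 2)
    for _ in range(rounds):
        A, B = cut_player(edges)
        edges |= set(matching_player(A, B))
    return edges
```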
You continue this game, and after O(log² k) iterations you get an expander on the terminals, embedded in your graph. That is the good news. What is the bad news? These paths will intersect each other; at each step I am using the edges afresh, and I have no way of separating them. So the problem is that this gives you log² k congestion. But the plan is clear at this point. This is where Rao and Zhou came in and made a really clever observation. They showed that if the minimum cut of your graph is large (and how large? not too large, just something like log³ n), then you can embed an expander with congestion one by playing this exact game. How do they do that? Here is a very quick summary of their approach. They take the graph G and randomly partition it into log² k edge-disjoint subgraphs: for every edge in the graph, you assign it a number at random between 1 and log² k, and the edges with number i go to the copy Gi of the graph. Using the fact that you started with a graph with a large min cut, they are able to show that even after this random partition, each Gi is still well-linked with respect to the terminals. And now you play each round of the cut-matching game in its own private copy Gi. That's it. There is work in step two, turning the large min-cut condition into the claim that each Gi is still well-linked; it is technically involved, but not too difficult. After this result there was a lot of excitement, and it really seemed to suggest that people should try to get a polylog approximation for EDP with constant congestion. The min-cut condition seems not too daunting, log³ n; that is not a very high cut requirement.
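The random partition step of the Rao-Zhou approach can be sketched like this (illustrative only; the real work is in proving that each copy stays well-linked):

```python
import random

def random_partition(edges, r, rng=random.Random(0)):
    """Independently assign each edge to one of r copies G_1, ..., G_r."""
    copies = [[] for _ in range(r)]
    for e in edges:
        copies[rng.randrange(r)].append(e)
    return copies
```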
Right, but since you have some congestion at play, maybe with constant congestion you could modify your graph to satisfy a condition like this, and indeed that is what was tried next. Let me briefly say a few words about another very exciting result which came last year, and which was the final word on this problem until Julia's result. That is a result due to Matthew Andrews, which took this Rao-Zhou framework and showed how to get around the min-cut condition. Here is the high-level plan used by Matthew Andrews. You are given a graph G which does not satisfy the min-cut condition. You contract the regions of this graph which violate the min-cut condition; in the contracted graph, each such region is collapsed down to a single node. Then you boost the connectivity of this contracted graph to satisfy the log³ n min-cut condition, and at this point you invoke the Rao-Zhou framework. But there is a problem: you contracted regions of the graph into nodes, and paths which simply pass through these nodes must, when you uncontract, actually go through the complicated graph contained inside. What Matthew was able to show is that with poly(log log n) congestion, he can manage the routing inside these contracted regions. Intuitively, the amount of flow each contracted region has to deal with is roughly polylog n, and the congestion you create inside is about the log of the amount of flow it deals with. But that is very simplified; it is quite non-trivial to make it all work. Okay, so finally, to Julia Chuzhoy's approach. How does she take care of this problem using only constant congestion? At a high level, she is going to sidestep the plan of boosting the min cut of the graph.
So she moves away from the approach of relying entirely on Rao-Zhou and focusing only on turning the initial graph into a high min-cut graph. What she does instead is to identify in the graph log² k vertex-disjoint subsets of vertices, such that each subset Sj is well connected to the terminals. I will make precise what this well-connectedness means, but for now, just imagine that all the terminals can reach each of these sets Sj without causing congestion. Moreover, each Sj is well-linked inside with respect to its boundary: if you view its boundary as terminal vertices, each Sj is a well-linked instance with respect to its boundary, where the boundary is just the set of edges crossing from Sj to the rest of the graph. I will make all of this more precise in a minute. The idea now is that when you want to implement the j-th round of the cut-matching game, you bring your terminals to Sj; Sj is well-linked inside, you do your matching inside, and then you go back. That is the idea. Julia calls this a good family of sets; I would have called it a really good family of sets, because it is a very powerful construct, and the hard part of the work is really in showing that such sets exist and can be found. Let's look at this in a bit more detail. These good sets are sitting up there: S1, S2, ..., Sr. The way you should view this picture is as follows. Look at S1 and the edges leaving its boundary. The number of edges leaving the boundary of S1 is exactly equal to the number of terminals: if you have k terminals, there are exactly k edges leaving the boundary of S1, and every single terminal has a private edge assigned to it on this boundary. So the set S1 has a representative for every terminal, private and unique; same for S2, and so on up to Sr.
And now if I look at a terminal, say the blue terminal, it has this additional property that it can connect to all its representatives in these r sets by a tree, okay? So there is a tree which allows it to reach all of them. Not necessarily in an edge-disjoint fashion; it's just a tree, okay? But it is connected to all of its representatives, that's all. So the sets Sj are well-linked with respect to their boundaries, and the terminals are well-linked to the boundaries; that is, each terminal can reach one representative on the boundary of each one of these sets Sj. That is the structure. And now once you have this, you can play the game of embedding an expander as follows. Those are my terminals, and I want to build an expander. I play the first round of the cut-matching game, and that gives me a partition of the terminals and says: match them. I go to my set S1. This partition has an image in the set S1, because the terminals have representatives in each set. And the set S1 is well-linked, so I say: play the game in there, find me a matching, okay? I get those edges, and I go to the next round. This time, I go to the set S2 and play the game there. I continue, and in log² k iterations, we obtain an expander embedded in G, okay? Just as before. So here is the picture, okay? This embedding is more complicated than the embeddings of the expander we looked at earlier. Why? Before, each vertex of the expander corresponded to a single terminal in the graph. This time, for each terminal, the object I have in the graph is not a single vertex; it is actually a connected component. What is this connected component? It's the tree which was spanning that terminal's representatives. And what's an edge in the expander? Well, an edge connecting ti and tj now corresponds to a path connecting a vertex in the component of ti to a vertex in the component of tj. So like this.
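The round-by-round loop just described can be sketched as follows. This is a toy version working on terminal indices only, with both players replaced by simplistic stand-ins: a random bisection for the cut player, and an arbitrary pairing for the matching player (which in Chuzhoy's scheme would be realized by routing inside the set Sj). The function name and the round count are assumptions for illustration, following the O(log² k)-round cut-matching game.

```python
import math
import random

def cut_matching_game(k, rounds=None):
    """Toy cut-matching game: after roughly log^2 k rounds, the union of
    the matchings is (with high probability) an expander on k vertices.
    Cut player: proposes a bisection of the terminals.
    Matching player: returns a perfect matching across the bisection."""
    assert k % 2 == 0
    if rounds is None:
        rounds = max(1, int(math.log2(k)) ** 2)
    edges = set()
    for _ in range(rounds):
        # Cut player: a bisection (A, B) of the terminals (random stand-in).
        perm = random.sample(range(k), k)
        A, B = perm[:k // 2], perm[k // 2:]
        # Matching player: pair A with B arbitrarily; in the real scheme,
        # round j's matching is found by routing inside the set Sj.
        for a, b in zip(A, B):
            edges.add((min(a, b), max(a, b)))
    return edges
```

Note that every terminal is matched in every round, so every vertex of the resulting graph has degree at least one by construction.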
So if you look at an edge on the right-hand side which connects those two terminals, it really just corresponds to some path connecting some vertex in the tree of ti to some vertex in the tree of tj. But routing on vertex-disjoint paths in X is still a good thing for us, in that it will give us a low-congestion routing in G. Let's look at a path from s to t on which we wanted to route in the expander, and let's see how it looks in the embedding. The expander edges appear here as paths between some pairs of vertices in their corresponding components. But each one of these blocks is a connected component which is itself a tree, so I can use the tree to hook up to the right points and go through all these intermediate vertices to reach all the way to t. And I'm going to use vertex-disjoint routings in the expander, so these components, once I've used them, I throw them away, and that's it. As I said, the hard part is in finding these sets, and here is a high-level, one-sentence intuition as to why such structures should exist. You start with an instance which has k well-linked terminals; well-linked means there is a lot of connectivity among these k guys. Now you scale down your goal of building an expander: you don't want to build an expander on k terminals, you would be happy to build it on k over polylog k terminals. Since you have this huge connectivity and capacity in the graph, if you scale down your goal by a large polylog k factor, in principle you could identify dedicated regions which are well connected, well linked, and of size only k over polylog k. The difficulty is in showing that you not only have these nice dedicated regions, but that they satisfy all these additional constraints: the terminals can reach them, and you have this nice property of well-linkedness inside each one of these regions at which the terminals can arrive. Okay, that's all I'm going to say about this result.
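The stitching step described above, where a path in the expander X is expanded into a walk in G by hooking consecutive embedded paths together through the trees, can be sketched like this. All the data structures and the `tree_path` helper are hypothetical stand-ins, not the paper's actual machinery.

```python
def route_in_g(expander_path, edge_to_path, tree_path):
    """Expand a path in the expander X into a walk in G.
    expander_path: list of terminal ids [t0, t1, ..., tm].
    edge_to_path: maps each expander edge (ti, tj) to its embedded G-path,
        running from a vertex in ti's tree to a vertex in tj's tree.
    tree_path(t, u, v): hypothetical helper returning the unique path
        (with endpoints) between u and v inside terminal t's tree.
    Consecutive embedded paths are stitched through the tree of the
    terminal they share."""
    walk = []
    prev_exit = None
    for ti, tj in zip(expander_path, expander_path[1:]):
        key = (ti, tj) if (ti, tj) in edge_to_path else (tj, ti)
        p = list(edge_to_path[key])
        if key != (ti, tj):
            p.reverse()  # stored in the other direction; flip it
        if prev_exit is not None:
            # Hop through ti's tree from where we arrived to where we leave;
            # endpoints are dropped since they already border the walk.
            walk.extend(tree_path(ti, prev_exit, p[0])[1:-1])
        walk.extend(p)
        prev_exit = p[-1]
    return walk
```

Because the routing in X is vertex-disjoint, each tree (component) is used by at most one such walk, which is what keeps the congestion in G low.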
And I conclude with a couple of open questions which still remain. I think this result really cleans up a lot of things, but there is still work to be done on understanding EDP in undirected graphs. There are two questions I would like to highlight. The first question is this: we started with EDP defined in a very strict manner, where each pair had to be routed on a single path, edge-disjointly. Then we relaxed it to EDP with congestion, and we got these nice results. But let's now return, now that we understand the power of congestion, to the basic question: what happens to EDP with no congestion? Here is the state of affairs. The best known approximation algorithm is a square root n approximation; the best known hardness is roughly square root of log n. That is pretty much the state of affairs we had for EDP with congestion up until Julia's result, right? Except that in this case, the upper bound of square root n matches the integrality gap of the relaxation. So you could not entirely rely on the multi-commodity flow relaxation as your benchmark; you would not get past the root n barrier, okay? The second problem is something I didn't talk about much directly, but it's really the natural analog of the EDP question: the congestion minimization problem. Same input as in EDP, but this time I tell you, you must route all the si, ti pairs; everything has to be routed. Your goal is to minimize the maximum congestion you cause on any edge. I pointed out that a result of Raghavan and Thompson from 1987 already tells you how to do this with log n over log log n congestion. For directed graphs, that's the best possible. But for undirected graphs, the known hardness is really a level below: it sits at around log log n, okay? So it's a very interesting question whether, even for congestion minimization, the right answer turns out to be in the log log n regime. That's all I wanted to say, thank you.
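For reference, the Raghavan–Thompson rounding step mentioned above can be sketched as follows: solve the multi-commodity flow relaxation, then for each pair independently pick one of its flow paths with probability equal to the flow that path carries; a Chernoff bound then gives the O(log n / log log n) congestion bound with high probability. The sketch below assumes the fractional solution is already given, with each path represented as a tuple of edges.

```python
import random
from collections import Counter

def round_flow(paths_per_pair):
    """Randomized rounding of a fractional multi-commodity flow (sketch,
    in the spirit of Raghavan-Thompson): for each pair i, given a list of
    (path, flow) entries whose flows sum to 1, pick one path with
    probability equal to its flow. Returns the chosen paths and the
    resulting maximum edge congestion."""
    chosen = []
    for paths in paths_per_pair:
        routes, flows = zip(*paths)
        chosen.append(random.choices(routes, weights=flows, k=1)[0])
    # Congestion of an edge = number of chosen paths using it.
    congestion = Counter(e for p in chosen for e in p)
    return chosen, (max(congestion.values()) if congestion else 0)
```

With degenerate flows (each pair routed integrally on one path) the rounding is deterministic, which is handy for sanity-checking the congestion bookkeeping.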