So, I am going to talk about a recent exciting result of Julia Chuzhoy on a classical problem known as the edge-disjoint paths (EDP) problem in networks. Let me start by defining the problem. You are given a graph along with a collection of source-sink pairs, say (s1, t1), (s2, t2), ..., (sk, tk), and your goal is to route a maximum number of these pairs in the graph in an edge-disjoint manner. When I say routing a pair (si, ti), what it means is that you assign a path in the graph to the pair, connecting si to ti along that path. Edge-disjoint means that if I look at the routing of any two pairs (si, ti) and (sj, tj), their paths do not share any edges. So, let us look at an example. In this graph I am routing the pair (s1, t1) along the red path, (s2, t2) along the green path, and (s3, t3) along the blue path. Once I have routed these pairs, if you look at the pair (s4, t4), there is no way to assign a path to it without violating edge-disjointness: any path connecting s4 to t4 at this point would intersect one of the previously assigned paths. This is an extremely well-studied problem with a long and rich history of results. For now let me just review some classical facts, and I will say more as we go along. First, a point about the expressive power of this problem: even EDP on star graphs, which are trees of height 1 (a single center vertex connected to the other vertices by direct edges), is equivalent to the general matching problem. We know how to solve matching in polynomial time, but that is a non-trivial algorithm, and EDP captures it very easily. When the graph is directed, even with just two pairs (s1, t1), (s2, t2), the problem is NP-hard to solve.
If the graph is undirected, the problem is polynomial-time solvable for any constant number of pairs, but this requires the heavy machinery of the graph minors work developed by Robertson and Seymour, so it is a highly non-trivial algorithm. Of course, even in undirected graphs, once the number of pairs is unbounded the problem is NP-hard, and I should point out that even on undirected trees, if the edges have capacities, the problem is NP-hard. But in this talk we will not worry about capacities; we will always assume all edges of the graph have unit capacity. Since the problem is intractable, there has been much work on designing approximation algorithms for it, and almost all of this work shares one common starting point: a very natural linear programming relaxation known as the multi-commodity flow relaxation. What does the relaxation do? First, instead of routing a pair (si, ti) along a single path, it allows you to simply send a unit of flow from si to ti, so you can use multiple paths, each carrying some fraction of the flow. Second, you do not even have to route a pair (si, ti) for a full unit of flow; you could route it for, say, an epsilon amount of flow. So we define a variable xi which indicates, for each pair (si, ti), how much flow is routed for the pair, and a variable f(p) for every si-ti path p which tells how much flow is carried on the path p. Your goal is to maximize the sum of the xi's, the total amount of flow you route. The main constraint is that if you look at any edge e in the graph, the total flow on paths going through e should not exceed 1.
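Written out (this transcription of the constraints is mine, with $\mathcal{P}_i$ denoting the set of $s_i$-$t_i$ paths), the multi-commodity flow relaxation is:

```latex
\begin{align*}
\text{maximize}\quad & \sum_{i=1}^{k} x_i \\
\text{subject to}\quad
  & \sum_{p \in \mathcal{P}_i} f(p) = x_i && \text{for each pair } (s_i, t_i), \\
  & \sum_{p :\, e \in p} f(p) \le 1 && \text{for each edge } e, \\
  & 0 \le x_i \le 1, \qquad f(p) \ge 0 .
\end{align*}
```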
That is the relaxation of the edge-disjointness condition. As a warm-up, let us look at a very simple rounding algorithm for this relaxation. It is an iterative process: start with a fractional solution to the multi-commodity flow relaxation; among all the pairs not yet considered for routing, pick the pair (si, ti) with the shortest flow path on which the LP routes some flow (shortest in the number of edges), and route (si, ti) along this path p. Now discard every other flow path that intersects p. Keep repeating until no fractional flow is left. It is clear that what you get at the end is an edge-disjoint routing, because each time you route along a path p you discard every flow path that intersects it. What is not clear is how good this algorithm is. So let us fix some notation and analyze it; it is a very simple analysis. Throughout the talk, n denotes the number of vertices, m the number of edges, and OPT the value of the optimal fractional solution. First consider the case where I pick a pair and route it along a path whose length is less than √m. How much flow could I throw away in this step? At most √m: there are fewer than √m edges on the path, and I might be throwing away a unit of flow crossing each one of them. So as long as I am routing along paths of length less than √m, I am within a √m factor of the optimal solution. If at some point the shortest available flow path has length more than √m, then notice that the total amount of flow remaining in the solution can be no more than √m. Why? The total routing capacity of the graph is m, and every unit of flow being routed now consumes more than √m of this capacity, so no more than √m units of flow remain.
So at this point, even if you route just one more pair, you are within a factor √m of the optimal solution. Combining the two observations, this rounding is within √m of the optimum. One may think you could do better by being more sophisticated; after all, as Madhu is pointing out, I do not seem to be using the power of the LP very strongly. It turns out there is a simple instance, identified by Garg, Vazirani and Yannakakis almost 20 years ago, which shows that this LP has an integrality gap of √m, and the instance is just a grid-like graph. Here it is: along the x-axis you have the sources and along the y-axis the sink vertices, and the green blobs correspond to the little gadget up there, which ensures that paths entering a green blob have to traverse a common edge to get to the other side. Once this gadget is in place, let us see how good an LP solution can be. You can always route half a unit of flow for every (si, ti) pair. How? Start at si, go up to the level of the sink ti, and turn towards ti at that level: straightforward routing, up and then across. If every pair does that, any two paths intersect just once, and the only constraint is that no more than one unit of flow passes through the common edge, which is ensured if everyone routes only half a unit. So you can route k/2 units of flow fractionally. But now look at an integral routing: the moment you route any one pair (si, ti), no matter in what fashion, you essentially create a wall; no other source can reach its sink without crossing this wall, and that means causing a congestion of two on some edge.
So you can route only one pair integrally, and k can be made roughly √m, which shows an integrality gap of √m for this LP. This gap holds for planar undirected graphs, so it is happening on a very simple class of graphs. The news is a bit worse if the graph happens to be directed: you can turn this integrality gap into a hardness of approximation result. Remember I pointed out earlier that the two-pair problem on directed graphs is NP-hard. So I can replace the green blobs by the two-pair gadget: if the two-pair instance has a solution, meaning both its pairs can be routed edge-disjointly, you can route all the (si, ti) pairs; if it has no solution, it serves as an obstruction as before and you cannot route more than one pair. This intractability, where it seems hard even to approximate the problem, naturally motivates the question: can we relax the problem slightly and get stronger approximation guarantees? √m is not a very appealing approximation guarantee, and a natural relaxation is to consider EDP with congestion. From here on, when I say EDP with congestion c, I mean the following version of the problem: I let you violate the edge-disjointness condition by allowing up to c paths to use each edge of the graph. The case we have been discussing so far is congestion c = 1, and what we want to understand is what happens when the congestion is allowed to exceed 1. In particular, the question that is the focus of this talk is: is it possible that with constant congestion, EDP becomes well approximable, say to within a polylog factor? Notice that the integrality gap example I showed completely breaks down once congestion 2 is allowed; it relied very strongly on the no-congestion condition, and with congestion 2 you can just route like the fractional solution.
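Going back to the warm-up, the √m greedy rounding can be sketched in code. This is a minimal sketch, not from the talk; the data layout (a list of (pair, path, flow) triples, with a path given as a tuple of edges) is my own assumption:

```python
def greedy_round(flow_paths):
    """flow_paths: list of (pair_id, path, flow), where path is a
    tuple of edges and flow is the fraction carried on that path.
    Repeatedly route the pair owning the shortest remaining flow
    path, then discard every flow path sharing an edge with it."""
    routed = {}                       # pair_id -> chosen path
    remaining = list(flow_paths)
    while remaining:
        # shortest flow path among pairs not yet routed
        pair, path, _ = min(remaining, key=lambda t: len(t[1]))
        routed[pair] = path
        used = set(path)
        # keep only flow paths edge-disjoint from the chosen one,
        # and drop all remaining paths of the routed pair itself
        remaining = [(q, p, f) for (q, p, f) in remaining
                     if q != pair and not used & set(p)]
    return routed
```

For example, if pair 3's only flow path shares an edge with pair 1's one-edge path, pair 1 is routed first and pair 3 gets discarded, while an edge-disjoint pair 2 still gets routed.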
Once we start looking at EDP with congestion, there is a beautiful result of Raghavan and Thompson from 1987 which gives a randomized rounding scheme, and it goes as follows. Take the fractional solution and look at a pair (si, ti). Toss a coin which comes up heads with probability xi, where xi is the amount of flow routed for the pair in the fractional solution. If it comes up heads, you decide to route this pair; so you are routing each pair (si, ti) with probability xi. Once you decide to route the pair, you look at the flow paths used by the LP solution for this pair and sample one of them with probability proportional to the flow on it, and that is the routing. How good is it? It turns out that if you allow a congestion of roughly log n / log log n, you get a constant factor approximation to EDP. Building on the same ideas, it was shown subsequently that for any constant congestion c, you can get an n^(1/c) approximation with congestion c. So things do improve once you allow congestion, but notice that as long as c is a constant, the approximation factor is still polynomial: the ratios go down as c increases, but they remain polynomial for constant c. So can we do better than this rounding framework? Well, it turns out that if the graph is directed, this randomized rounding approach is pretty much the best you can do. In fact, there is a hardness result of the form n^(Omega(1/c)) for EDP with congestion c in directed graphs, and if you want something like a constant factor approximation, you cannot avoid a congestion of log n / log log n.
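The Raghavan-Thompson rounding just described can be sketched as follows; the LP-solution data layout (a dict from pair id to its xi and its weighted flow paths) is my own assumption:

```python
import random

def rt_round(lp_solution):
    """lp_solution: dict pair_id -> (x_i, [(path, flow), ...]),
    where the flows for pair i sum to x_i.  Route pair i with
    probability x_i; if routed, pick one of its flow paths with
    probability proportional to the flow it carries."""
    routing = {}
    for pair, (x, paths) in lp_solution.items():
        if random.random() < x:               # coin with bias x_i
            ps, flows = zip(*paths)
            routing[pair] = random.choices(ps, weights=flows)[0]
    return routing
```

A pair with xi = 1 is always routed and a pair with xi = 0 never is; everything in between is routed with the corresponding probability.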
So for directed graphs, qualitatively at least (we are not matching the exponents exactly between the hardness results and the upper bounds), the question seems well understood. Things are not so clear for undirected graphs. You do have hardness results even when congestion is allowed, but they are of the form (log n)^(1/c), while the upper bounds look like n^(1/c). From this point onwards I am going to focus only on undirected graphs. This was the state of the art until two months ago: for any constant congestion c, a (log n)^(1/c) hardness and an n^(1/c) approximation, a striking gap between the upper and lower bounds. This gap has essentially disappeared with the remarkable result of Julia Chuzhoy, which shows that with constant congestion, it is possible to get a polylogarithmic approximation to EDP in undirected graphs. So the truth here really was closer to the hardness results. This result builds very strongly on a number of developments that happened over the last six or seven years, and then adds a very powerful machinery of its own. In the remaining time I cannot get into many of the technical details, but I would like to give you a flavor of the various ideas that are put together to get this result. I am going to divide the rest of the talk into four parts, starting with a framework for solving EDP problems called the well-linked decomposition framework, which is the starting point for all the recent results.
At a high level, the framework allows you to take an arbitrary instance of EDP and reduce it to instances of EDP in which all the source-sink pairs have very high connectivity to each other. Remember, in the original problem all you care about is connecting s1 to t1, s2 to t2, and so on; you do not care about the connectivity of s1 to t2 or s2 to t1. Here we will care about the connectivity even between sources and sinks that are not paired together. The second step is that once you have these instances where the source-sink pairs are highly connected, we would like to exploit this connectivity to show the existence of interesting routing structures in the graph; in particular, we will show that you can embed expanders in such instances. To do this embedding we are going to use a beautiful recent approach highlighted in the work of Khandekar, Rao and Vazirani, which reduces it to a cut-matching game that I will talk about; this view of embedding expanders turns out to be very useful for the EDP framework. The third part is EDP in graphs with large minimum cut. That was really the first result which combined these two ideas and showed that you can get a polylog approximation for EDP (I think even with congestion one) provided the graph you start with has a large minimum cut, so that it is reasonably well connected to begin with. That was the proof of concept that EDP may after all be approximable to within a polylog factor. Finally, I will talk about how Julia Chuzhoy extends this approach to arbitrary graphs which do not obey the large min-cut condition. So let us start with the well-linked decomposition.
What is the idea of well-linked decomposition? As before, we start with the multi-commodity flow solution, but this time we use it only to partition the original instance into a collection of smaller instances, with the property that all the sources and sinks in each smaller instance are well connected. The key point of departure from earlier approaches is that in doing this decomposition, I totally ignore the flow paths used by the LP solution. The only information we carry forward from the multi-commodity flow solution is the fraction to which each pair was routed; once the decomposition is done, I do not care about the original solution at all. The second step of the framework is to show that once you have these well-linked instances (I will define this precisely in a moment), you can embed in them routing structures which I will call crossbars. These are structures on which the EDP problem is easy to solve. If you can do that, you are in good shape: you started with an arbitrary instance, reduced it to well-linked instances, well-linked instances contain crossbars on which you already know how to solve EDP, and now you just use these crossbars to route your original pairs. That is the idea of the framework. Let me give a few definitions, and then I will show you right away a quick, simple application. What is a well-linked set? Suppose you have a graph G and a subset X of special vertices in this graph. I say that X is well-linked if it satisfies the following property.
If you want to separate some k vertices of X from the remaining vertices of X, you must delete at least k edges. In other words, with respect to the set X, the graph looks like an expander: the whole graph need not be an expander, but it satisfies this expander-like condition for the set X. Once I know what a well-linked set is, I can define a well-linked instance of EDP. First, what is an instance of EDP, once again? It consists of a graph G along with a collection of source-sink pairs, and from here on I will refer to the source and sink vertices as terminals and denote them by the set X; whenever I use the set X, it refers to these terminals. I can assume without loss of generality that all the si's and ti's are distinct; there is a simple trick for this: if a vertex occurs multiple times, you just replicate it in a suitable way. In this view, an instance of EDP is the following: you are given a graph G, a set X of terminals, and a matching M on the terminals, and you are told to route that matching; it just pairs up the terminals in some fashion and asks for a routing. A well-linked instance of EDP is then simply an instance in which the set of terminals is well-linked. Here is a result that is useful for our purposes: given any instance of EDP, you can convert it into a collection of well-linked instances with only a polylog factor loss in the solution value. You start with an instance whose multi-commodity flow solution has value OPT; you can convert it into a collection of well-linked instances such that, across the well-linked instances, the flow value is at least OPT/polylog.
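In symbols (my notation), the well-linkedness condition just described reads:

```latex
X \subseteq V(G) \text{ is well-linked if, for every partition } (A, B) \text{ of } V(G),
\qquad |E(A, B)| \;\ge\; \min\bigl( |A \cap X|,\ |B \cap X| \bigr).
```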
Once you have that, we will see how it can be used to solve the problem. Now the second phrase I was using: crossbar structures. Let me briefly define a crossbar. Given a graph H, I say it serves as a crossbar with respect to a special subset of vertices I, which I call the interface of the crossbar, if it has the following property: give me any matching on the interface I, and I can route it integrally on edge-disjoint paths inside H. Here is an example: a complete graph is a crossbar. You give me any matching and I just use one-hop paths; a trivial integral routing. But complete graphs are not very interesting, and they are not the only crossbar structures. A slightly more interesting one is the grid graph: a grid is a crossbar with respect to its first row. Why? Suppose you give me this grid and a collection of source-sink pairs lined up on the first row, some arbitrary pairing, s1 with t1, s2 with t2, and ask me to route them. I can do it: s1 moves to the second row, uses the second row to travel all the way to t1's column, and then goes straight back to the first row; s2 uses the next row, and so on. It is a crossbar, and the routing was easy. This already gives us one new result as a direct application of the framework, on how well EDP can be solved in planar undirected graphs. There is an old result of Robertson, Seymour and Thomas which says that if you give me a planar graph G with k well-linked terminals, then with congestion two you can embed in that graph essentially a k-by-k grid. Combined with the well-linked decomposition framework, you can now show that EDP in planar graphs is approximable to within an O(log n) factor with congestion two.
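The grid-crossbar routing just described can be sketched as follows. This is my own sketch: coordinates are (row, column) with the interface on row 0, pair j gets the private row j, and the column indices of all the terminals are assumed distinct (as they are when the terminals sit on distinct vertices of the first row):

```python
def grid_crossbar_route(pairs, rows, cols):
    """pairs: list of (sc, tc) column indices on row 0 of a
    rows x cols grid, all columns distinct.  Pair j is routed
    through the private row j, so the paths are edge-disjoint
    (two paths may cross at a vertex, but one uses its vertical
    edges and the other its horizontal edges there).  Returns,
    per pair, the list of grid vertices (row, col) on its path."""
    assert len(pairs) <= rows - 1, "need a private row per pair"
    paths = []
    for j, (sc, tc) in enumerate(pairs, start=1):
        down = [(r, sc) for r in range(0, j + 1)]              # drop to row j
        step = 1 if tc >= sc else -1
        across = [(j, c) for c in range(sc + step, tc, step)]  # walk along row j
        up = [(r, tc) for r in range(j, -1, -1)]               # climb back to row 0
        paths.append(down + across + up)
    return paths
```

Each path drops from the source's column to its private row, walks along that row, and climbs back up at the sink's column, exactly the routing in the talk.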
This already was a strong improvement over what we knew before, because at that point, even for planar graphs, only a polynomial factor approximation was known with constant congestion, and you get this very quickly from the framework. Here is how the picture looks: you have your terminals, you have this crossbar structure sitting somewhere in the graph, you route the terminals to the first row, the interface of the crossbar, and then pair them up inside the crossbar. What about general graphs? That is our agenda for today: EDP in general graphs. Here is a natural conjecture which emerges from this framework, which I will refer to as the crossbar conjecture. It says the following: suppose you give me a well-linked instance of EDP, a graph G with a well-linked set X of terminals. Then for any matching M on the terminals, I can integrally route a 1/polylog fraction of the pairs with only constant congestion. Not only the specific pairing you were interested in at time zero when the game began: give me a well-linked instance and any matching, and it is possible to route. In a way, this says that for a well-linked instance, the whole graph G serves as a crossbar whose interface is the set of terminals. If the crossbar conjecture is true, it is clear that the integrality gap of the flow relaxation is only polylog n with constant congestion; that direction is easy. The converse is also true: if the integrality gap of the flow relaxation is polylog n with constant congestion, then the crossbar conjecture holds. Why? For any well-linked instance and any matching M on the terminals, there is a fractional routing of M of value at least |M| / log n, because the terminals satisfy the cut condition for routing any matching, and the flow-cut gap is log n.
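In symbols (my transcription of this step), the implication being used is:

```latex
X \text{ well-linked},\ M \text{ a matching on } X
\quad\Longrightarrow\quad
\text{a fractional routing of } M \text{ of value } \Omega\!\left(\frac{|M|}{\log n}\right),
```

since well-linkedness gives the cut condition for routing M, and the multicommodity flow-cut gap is O(log n).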
So you can route at least that much flow fractionally. But the assumption is that the integrality gap is only polylog n with constant congestion, so I can then integrally route a 1/polylog n fraction of the pairs in M. This means the integrality gap question, what is the integrality gap of the flow relaxation with constant congestion, is equivalent to settling the crossbar conjecture; it is not just one possible approach, it is essentially the question itself. So here is the plan for the rest of the talk. We are going to try to prove this crossbar conjecture, and the specific way we will try to prove it is as follows: we will show that given any well-linked instance on k terminals, we can embed, with constant congestion, a low-degree expander of size k/polylog. Once we have this embedding, we are in good shape. I will not go over this in detail right now, but it is a well-known fact that routing on edge-disjoint paths in expanders is easy; in fact, a greedy routing scheme works. Intuitively, the reason is that pairs can be connected by short paths: each time you route a pair (si, ti), the damage you cause is roughly the length of the path, even if you were not very clever in choosing it. If paths have length only O(log n), the damage you cause each time is only O(log n). In fact, if the expander has low degree, you can even do vertex-disjoint routing, and the same holds; this is a fact which is useful in Julia's result. Now we move to part two: we want to embed an expander. How could we possibly do that? This is where the cut-matching game of Khandekar, Rao and Vazirani comes into play. In this game, you start with an empty graph, say on n vertices, and you want to build an expander on these vertices.
There are two players. The cut player wants to build an expander as quickly as possible; the matching player's goal is to delay the construction. The game goes as follows: at step one, the cut player gives an equal-size partition (A1, B1) of the vertices to the matching player. The matching player's job is to return some perfect matching between the two sets A1 and B1, and he will try to pick the worst possible matching to delay the construction of the expander. You take those matching edges and add them to the graph, which was initially empty. At this point the cut player looks at what it has got, identifies another partition (A2, B2), and the process repeats. The beautiful result is that after O(log^2 n) iterations of this game, you do get an expander, no matter how the matchings were chosen. We can use this to prove the crossbar conjecture as follows. We have our graph G, a well-linked instance, and we are going to build an expander on its terminals. When the cut player computes a partition (A1, B1) of the terminals and wants a perfect matching between them, all I do is ask for a flow in the graph of one unit per terminal from A1 to B1. This is possible because the instance is well-linked. The edges of the matching then correspond to paths of this (integral) flow in the graph. You get this flow-path routing, and the expander is being built on the left side. In the second round some other partition comes, and you once again use the well-linked property to find a matching, which really means finding flow paths connecting the terminals on the two sides of the partition.
You continue this game, and after O(log^2 k) iterations you get an expander on the terminals, embedded in your graph. That is the good news. What is the bad news? These paths will intersect each other: at each step I am using the edges of G afresh, with no way of separating the rounds. So the problem is that this gives congestion log^2 k. But the plan is clear at this point. This is where Rao and Zhou came in, and they made a really clever observation. They showed that if the minimum cut of your graph is large (and not too large a requirement, something like log^3 n), then you can embed an expander with congestion one by playing this exact game. How do they do it? Here is a very quick summary of their approach. They take the graph G and randomly partition it into log^2 k edge-disjoint graphs: for every edge of the graph, you assign it a number at random between 1 and log^2 k, and the edges with number i go to the copy Gi. Using the fact that the graph had a large min cut to begin with, they are able to show that even after this random partition, each Gi is still well-linked with respect to the terminals. And now you play each round i of the cut-matching game in the private copy Gi. That is it. There is work in that second step, going from the large min-cut condition to showing that each Gi is still well-linked; it is technically involved, but not too difficult. After this result there was a lot of excitement, and it really seemed to suggest that one should try to get the polylog approximation for EDP with constant congestion. The min-cut condition seems not too daunting: log^3 n is not a very high cut requirement, right?
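Abstracting away the graph G, the cut-matching loop can be sketched as a toy. This is my own illustrative version, with both players random, purely to show the mechanics and the degree bound; the real cut player of Khandekar, Rao and Vazirani is cleverer, and in the EDP application the matching player's answer comes from a flow computation in G:

```python
import random

def cut_matching_game(n, rounds):
    """Toy version of the cut-matching game: the graph starts
    empty on n vertices (n even).  Each round the cut player
    proposes an equal bipartition (here: random) and the matching
    player answers with a perfect matching between the sides
    (here: an arbitrary pairing); the matching edges are added.
    After O(log^2 n) rounds the union of the matchings is an
    expander, and every vertex has degree equal to the number
    of rounds, so the expander is low-degree."""
    edges = []
    verts = list(range(n))
    for _ in range(rounds):
        random.shuffle(verts)                      # cut player's bipartition
        side_a, side_b = verts[:n // 2], verts[n // 2:]
        edges += list(zip(side_a, side_b))         # matching player's answer
    return edges
```

Each round contributes exactly one matching edge per vertex, which is where the low-degree property of the final expander comes from.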
So one might hope that, since some congestion is allowed anyway, with constant congestion you could transform your graph to satisfy a condition like this, and indeed that is what was tried next. Let me briefly say a few words about another very exciting result from last year, which was the final word on this problem until Julia's result. It is a result due to Matthew Andrews, which takes the Rao-Zhou framework and gets around the min-cut condition. Here is the high-level plan used by Andrews. You are given a graph G which does not satisfy the min-cut condition. You contract the regions of this graph which violate the min-cut condition, so that in the contracted graph these regions collapse down to single nodes. Then you boost the connectivity of the contracted graph to satisfy the log^3 n min-cut condition, and at this point you invoke the Rao-Zhou framework. But there is a problem: you contracted regions of the graph into nodes, and paths which just pass through these nodes must, when you uncontract, actually go through the complicated graph contained inside. What Andrews was able to show is that with poly(log log n) congestion, he can manage the routing inside these contracted regions. Intuitively, the amount of flow each contracted region deals with is roughly polylog n, and the congestion created inside is about the log of the amount of flow it deals with. That is a very simplified account; it is quite non-trivial to make it all work. Okay, so finally to Julia Chuzhoy's approach: how does she take care of this problem using only constant congestion? At a high level, she sidesteps the plan of boosting the min cut of the graph.
She moves away from this approach of relying entirely on Rao-Zhou and focusing on turning the initial graph into a high min-cut graph. What she does instead is identify in the graph log^2 k vertex-disjoint subsets of vertices S1, ..., Sr, such that each subset Sj is well connected to the terminals. I will make precise what this well-connectedness means, but for now just imagine that all the terminals can reach each of these sets Sj without causing congestion. Moreover, each Sj is well-linked inside with respect to its boundary: if you view its boundary edges as terminals, each Sj is a well-linked instance with respect to its boundary, where the boundary is just the set of edges crossing from Sj to the rest of the graph. I will make all of this more precise in a minute. The idea now is that when you want to implement the j-th round of the cut-matching game, you bring your terminals to Sj; Sj is well-linked inside, you do your matching inside, and then you go back. That is the idea. Julia calls this a good family of sets; I would have called it a really good family of sets. It is a very powerful construct, and the hard part of the work is really in showing that such sets exist and can be found. Let us look at this in a bit more detail. These good sets S1, S2, ..., Sr are sitting up there, and the way you should view the picture is as follows. Look at S1 and the edges leaving its boundary: the number of edges leaving the boundary of S1 is exactly equal to the number of terminals, so if you have k terminals, there are exactly k edges leaving the boundary of S1, and every single terminal has a private edge assigned to it on this boundary. So the set S1 has a private, unique representative for every terminal, and the same holds for S2 through Sr.
And now if I look at a terminal, so let's look at the blue terminal. It has this additional property that it can connect to all its representatives in these r sets, okay, by a spanning tree, okay? So there is a spanning tree which allows it to reach all of them, okay? Not in an edge-disjoint fashion, it's just a tree, okay? But it is connected to all its representatives, that's all. So the Sj's are well-linked with respect to their boundaries, and the terminals are well-linked to the boundaries, that is, each terminal can reach its representative on the boundary of each one of these Sj's. That is the structure. And now once you have this, you can play the game of embedding an expander as follows. So those are my terminals, and I want to build an expander. I play the first round of the cut-matching game, and that gives me a partition of the terminals, and it says: match them. I go to my set S1. This partition has an image in the set S1, because terminals have a representative in each set, and the set S1 is well-linked, so I can say: play the game in there, find me a matching, okay? I get those edges, and I go to the next round. This time, I go to the set S2 and play the game there. I continue. In log² k iterations, we will obtain an expander embedded in G, just as before. So here is the picture, okay? This embedding is more complicated than the embeddings of expanders we looked at earlier. Why? Before, each vertex of the expander corresponded to a single terminal in the graph. This time, for each terminal, the vertex I have in the graph is not a single vertex. It is actually a connected component. What is the connected component? It's the tree which was spanning that terminal and its representatives. What's an edge in the expander? Well, an edge connecting Ti and Tj now corresponds to a path connecting a vertex in the component of Ti to a vertex in the component of Tj. So like this.
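To illustrate how such an embedding might be represented and used, here is a small sketch: each expander vertex owns a tree in G, each expander edge owns a G-path between two trees, and a route in the expander is stitched into a walk in G by alternating tree segments and edge-paths. The data layout (`trees`, `edge_paths`) is an assumption chosen for illustration, not Chuzhoy's actual representation:

```python
from collections import deque

def tree_path(tree, a, b):
    """Path from a to b inside a tree, given as an adjacency dict."""
    parent, q = {a: None}, deque([a])
    while q:
        u = q.popleft()
        for w in tree[u]:
            if w not in parent:
                parent[w] = u
                q.append(w)
    path = [b]
    while path[-1] != a:
        path.append(parent[path[-1]])
    return path[::-1]

def stitch_route(expander_path, trees, edge_paths):
    """Expand a path in the expander X into a walk in G.
    trees[t]           : adjacency dict of the tree (component) of terminal t
    edge_paths[(i, j)] : G-path realizing expander edge i-j, starting at a
                         vertex of trees[i] and ending at a vertex of trees[j]."""
    walk = [expander_path[0]]                    # start at the terminal itself
    for i, j in zip(expander_path, expander_path[1:]):
        p = edge_paths.get((i, j)) or edge_paths[(j, i)][::-1]
        walk += tree_path(trees[i], walk[-1], p[0])[1:]  # cross component i
        walk += p[1:]                                    # traverse edge path
    walk += tree_path(trees[expander_path[-1]], walk[-1],
                      expander_path[-1])[1:]              # finish at terminal
    return walk

# Toy embedding: terminal 0's tree is {0, 10}, terminal 1's tree is {1, 11},
# and the expander edge 0-1 is realized by the G-path 10-11.
trees = {0: {0: [10], 10: [0]}, 1: {1: [11], 11: [1]}}
edge_paths = {(0, 1): [10, 11]}
route = stitch_route([0, 1], trees, edge_paths)
```

The point of the sketch is just the bookkeeping: an expander edge lands you at some vertex of the next component, and the component's own tree carries you to wherever the following edge-path departs.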
So if you look at an edge on the right-hand side which connects those two terminals, that really just corresponds to some path connecting some vertex in the tree of Ti to some vertex in the tree of Tj. But routing on vertex-disjoint paths in X, which we saw earlier, is still a good thing for us, in that it will give us a low-congestion routing in G. Let's look at a path from S to T on which you wanted to route in the expander, and let's see how it looks in the embedding. The expander edges appear here as paths between some pairs of vertices in their corresponding components, but each one of these blobs is a connected component which is by itself a tree. I can now use the tree to hook up to the right points and go through all these intermediate vertices to reach all the way to T. And I'm going to use vertex-disjoint routings in the expander, so these components, once I've used them, I throw them away, and that's it. So as I said, the hard part is in finding these sets, and just as a high-level, one-sentence intuition as to why such structures should exist: you start with an instance which has k well-linked terminals. Well-linked means there is a lot of connectivity among these k guys. You then scale down your goal of building an expander. You don't want to build an expander on k terminals; you would be happy to build it on k over polylog k terminals. So since you have this huge connectivity and capacity in the graph, if you scale down your goal by a large polylog k factor, in principle you could identify dedicated regions which are well connected, well linked, and are of size only k over polylog k. The difficulty is in showing that not only do you have these nice dedicated regions, but they satisfy all these additional constraints, which is that the terminals can reach them and you have this nice property of well-linkedness in each one of these regions which the terminals can arrive at. Okay, that's all I'm going to say about this result.
And I'll conclude with a couple of open questions which still remain. I think this result really cleans up a lot of things, but there is still work to be done on understanding EDP on undirected graphs. There are two questions which I would like to highlight. The first question: we started with EDP defined in a very strict manner, that the routing had to be on edge-disjoint paths. Then we relaxed it to EDP with congestion, and we got these nice results. But now that we understand the power of congestion, let's return to the basic question: what happens to EDP with no congestion? Here is the state of affairs. The best known approximation algorithm is a square root n approximation. The best known hardness is square root log n. Pretty much the state of affairs we had for EDP with congestion up until Julia's result. Except in this case, the upper bound of square root n matches the integrality gap of the relaxation. So you couldn't entirely rely on the multicommodity flow relaxation as your benchmark; you would not get past the root n barrier. The second problem is something I didn't talk about much directly, but it's really the natural analog of the EDP question: the congestion minimization problem. Same input as in EDP, but this time I tell you, you must route all SITI pairs. Everything has to be routed. Your goal is to minimize the maximum congestion you cause at any edge. I pointed out that a result of Raghavan and Thompson from 1987 already tells you how to do this with log n over log log n congestion. For directed graphs, that's the best possible. But for undirected graphs, the hardness is really a level below; it sits at around log log n. So it's a very interesting question whether, even for congestion minimization, the right answer is in the log log n regime. That's all I wanted to say. Thank you.
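The Raghavan-Thompson result mentioned above is based on randomized rounding of the multicommodity flow relaxation: solve the LP, decompose each pair's flow into paths, and pick one path per pair with probability equal to its flow value; Chernoff bounds then give congestion O(log n / log log n) with high probability, since each edge carries at most one unit of fractional flow. A minimal sketch, assuming the fractional solution has already been decomposed into paths (the `flow_paths` format is a hypothetical representation):

```python
import random
from collections import Counter

def randomized_rounding(flow_paths, seed=0):
    """Raghavan-Thompson style rounding: flow_paths[i] is a list of
    (path, flow_value) pairs for commodity i, with flow values summing
    to 1.  Pick one path per commodity with probability equal to its
    flow value."""
    rng = random.Random(seed)
    chosen = []
    for paths in flow_paths:
        routes, weights = zip(*paths)
        chosen.append(rng.choices(routes, weights=weights)[0])
    return chosen

def congestion(paths):
    """Maximum number of chosen paths sharing any single edge."""
    load = Counter()
    for p in paths:
        for e in zip(p, p[1:]):
            load[frozenset(e)] += 1
    return max(load.values())

# Toy instance: two pairs, each splitting its unit of flow over two paths
# that meet at vertex 3.
flows = [
    [([0, 1, 3], 0.5), ([0, 2, 3], 0.5)],
    [([4, 1, 3], 0.5), ([4, 2, 3], 0.5)],
]
picked = randomized_rounding(flows)
```

In expectation each edge's load equals its fractional flow, which is at most 1; the log n / log log n bound is exactly the deviation a Chernoff bound allows for sums of independent 0/1 choices with unit mean.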