In several previous videos in this lecture series, we've talked about how to prove a conditional statement P implies Q. And in the previous lecture in this series, lecture 24 (since we're on number 25 right now), we talked specifically about bi-conditional statements: how do you prove that P is equivalent to Q? To do that, you argue that P is equivalent to Q is the same thing as P implies Q and Q implies P. So every bi-conditional statement has the proving cost of two conditional statements. Well, if that's how you prove a bi-conditional statement, it makes sense that the next thing to consider is a tri-conditional statement. What if P is equivalent to Q and Q is equivalent to R, that is, all three are equivalent? Oftentimes you see the phrase "the following are equivalent," which can be a mouthful, so it's often abbreviated TFAE: the following are equivalent. So statement one is P, statement two is Q, and statement three is R. Now, to prove that all three of these statements are logically equivalent, what you have to prove is that P is equivalent to Q, that P is equivalent to R, and that Q is equivalent to R. That's what all of this says right here, and each of those costs two conditional statements. So when you put those together, this comes at a cost of proving six conditional statements. Think of it as six direct proofs: that's the cost of proving three equivalent statements. But wait a second: what if we have, wait for it, four equivalent statements? Say there's some other statement S, for which we'd have to prove that P is equivalent to Q, P is equivalent to R, and P is equivalent to S.
We also have to show that Q is equivalent to R, that Q is equivalent to S, and that R is equivalent to S. Whoa, that's a mouthful. You have six possible pairings, and each of those bi-conditional statements requires two proofs, so that's 12 conditional proofs you have to do, like 12 direct proofs. Whew, that's a lot. What happens if we have five or six or seven equivalent statements? Now, you might think I'm being ridiculous here, but in my linear algebra textbook, the non-singular matrix theorem states that 20-plus conditions are equivalent to a matrix being non-singular. So this is actually not outlandish whatsoever. How would one go through the details of all of those? If you have n equivalent statements, how difficult is that proof going to be? Well, if you have to prove every possible equivalence, then you have n choose 2 pairings, and for each pairing you have to go in both directions, so you multiply by 2. That's n(n-1)/2 times 2; the twos cancel, and you end up with n squared minus n conditionals that you have to prove. That is a ton. If you throw in n = 20, you get 380 conditionals, nearly 400, that you have to check. That can get out of control very, very quickly. But it turns out that you don't have to do every possible check to show that these things are all equivalent to each other. Let me show you some examples. You see some diagrams here on the screen; I'm actually going to start here on the right. Imagine we have six statements that we think are equivalent to each other. Suppose we can prove that A is equivalent to B, so we have proofs going in both directions.
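That count is easy to double-check with a couple of lines of code (my own illustration, not part of the lecture; the function name is made up):

```python
from math import comb

def naive_proof_count(n):
    """Direct proofs needed if every pair of n statements is proven
    equivalent separately: C(n, 2) pairings, two directions each,
    which simplifies to n^2 - n."""
    return 2 * comb(n, 2)

print(naive_proof_count(3))   # 6, the tri-conditional case
print(naive_proof_count(4))   # 12, the four-statement case
print(naive_proof_count(20))  # 380, nearly 400
```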
We have sufficiency and necessity in both directions, so there are two proofs there, right? What if we've also proven that B is equivalent to C? Again, sufficiency and necessity in both directions: two more proofs. Let's say we've proven that B and E are equivalent to each other, again with necessity and sufficiency in both directions. And let's say we've proven that E and D are equivalent to each other, and that E and F are equivalent to each other. This gives us a total of 10 conditional statements, okay? Now notice that for six statements, that's dramatically smaller than the bound we had above, because with n = 6, n squared minus n is 36 minus 6, which is 30. So in this proof we took 10 direct proofs as opposed to 30: one third as many, dramatically smaller. And it turns out that's good enough. We showed that A is equivalent to B, but how do you show that A is equivalent to C? Well, if you want to show that A implies C, you say: A implies B, and B implies C, so transitively, when you compose those two proofs together, that shows A implies C. The proof that A implies B, concatenated with the proof that B implies C, gives us a proof that A implies C. I don't actually have to provide it, because of the hypothetical syllogism, something we've seen before: this is a valid argument form, so if P implies Q and Q implies R, we can conclude that P implies R. The concatenation of the proof that P implies Q with the proof that Q implies R gives you a proof that P implies R. We don't have to prove them directly, because indirectly we can infer that A and C are logically equivalent to each other, since they're both equivalent to B. A implies C follows by putting B in the middle and routing the two proofs through it.
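That the hypothetical syllogism really is a valid argument form can be checked exhaustively with a truth table (a quick sketch of my own, not from the lecture):

```python
from itertools import product

def implies(p, q):
    """Material conditional: p -> q is false only when p is true and q is false."""
    return (not p) or q

# Hypothetical syllogism: ((P -> Q) and (Q -> R)) -> (P -> R).
# If it holds under every assignment of truth values, it's a tautology.
hypothetical_syllogism_valid = all(
    implies(implies(p, q) and implies(q, r), implies(p, r))
    for p, q, r in product([False, True], repeat=3)
)
print(hypothetical_syllogism_valid)  # True
```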
Since C implies B and B implies A, together that shows that C implies A. And therefore, without proving it directly, we get that A and C are logically equivalent to each other. How do I show that A is logically equivalent to E? Well, I go in this direction and then I go backward, so I have both directions. How do I get that A implies D? You just follow this path and concatenate those proofs together, and if you reverse the direction, you get that D implies A. Similarly, since A is equivalent to B, B is equivalent to E, and E is equivalent to F, this tells us that A and F are logically equivalent to each other. I don't need an explicit proof, because I can infer it from the other implications. So this last example suggests that you don't have to prove every possible direction: because of hypothetical syllogisms, we can infer the ones that are missing. And that's just with respect to A; you can look at all the other missing pairs as well. So this raises the question: I can get something dramatically smaller than 30, like 10, but how good can I get? What's the minimum number of implications you have to prove? This example had implications going in both directions, but you don't need all of them. You need the statements to be connected to each other: as long as there's a path from one node to another, that gives us a proof. So you take all of your statements as nodes and you build a directed graph in which there's a path from every vertex to every other vertex. If there's a path on this digraph, it means the statements are equivalent, okay? Now let's look at this one right here. In this one I don't have bi-conditionals everywhere; we do have some bi-conditionals here, but these other edges only go in one direction, okay? So notice what's happening here: is there a path from every vertex to every other?
So A implies B, and B is equivalent to C. So there's clearly a path from A to B, but there's also a path from A to C. There's a path from A to E, a path from A to D, and a path from A to F, right? So it turns out A can reach everyone on the graph. What about B? Well, B can get to C. How does B get to A? There's not a direct edge, but if there's any path at all, then by hypothetical syllogism B implies A, and notice you can follow this path over here to A. Of course, B can also get to E, to D, and to F, so B implies all the others. C can get to B, to E, to D, to F, and to A, so C implies all of them. E can get to D, to F, to A, to B, and to C. And if you follow all of them, it turns out there is a path from every vertex to every other one. So by the hypothetical syllogism, all of those statements are logically equivalent, because our graph is connected with a path from every point to every other, okay? In this case, I didn't need bi-conditionals everywhere because of the loop that I have; the loop lets me avoid some of them. It's similar to what we did over here: we still have the two bi-conditionals, but notice this loop only involves four statements. When you add it all together, you get four plus two plus two, so eight conditionals. I improved upon it even more. But you don't even need those bi-conditionals. Honestly, your best strategy is to make one giant loop, right? This one over here is the optimal strategy. If you just have a loop, A implies B, B implies C, C implies D, D implies E, E implies F, and F implies A, then you loop around and everyone is connected to everyone else. A implies B, which implies C, which implies D, which implies E, which implies F, and you get that loop there. How about B? B implies C, which implies D, which implies E, which implies F, which implies A.
So B does imply A. And what happens here? These are all single conditionals: one, one, one, one, one, one. You end up with six, and of course six is n itself. This turns out to be the optimal strategy. If you want to prove that a list of statements is logically equivalent, prove them in a big cycle: the first one implies the second, which implies the third, which implies the fourth, which implies the fifth, until you get to the last one, and then prove that the last statement implies the first. Every other implication can then be inferred from the n conditionals you just proved. This is the optimal strategy for proving these. Now, of course, any of these diagrams works, but the most efficient one is a loop. So let's work an example. The following are equivalent: statement A, the set A is a subset of B; statement B, the union of A and B is a subset of B; statement C, the intersection of A and B equals A; statement D, the set difference of A and B is the empty set. Now, all of these implications are for the most part straightforward, but for the sake of this example, I'm trying to prove a TFAE theorem. How do you prove it? I'm going to prove that A implies B, then that B implies C, then that C implies D, and finally that D implies A. So there are four conditional statements I have to prove, and if I do that, it proves that each and every one of these statements is equivalent, which is much better than the 12: n squared minus n with n = 4 is 16 minus 4, which is 12. If you prove every possible pairing, you have to do 12 implications, but with this strategy I can get away with four implications, four direct proofs. And as a direct proof goes, we assume the hypothesis and then prove the conclusion.
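Before diving into the proofs, the graph criterion above, a path from every vertex to every other, can be sketched as a small reachability check (my own illustration; the statement names and edge lists are placeholders):

```python
def reaches(adj, start):
    """All statements reachable from `start` by chaining implications
    (each edge is a proven conditional; a path is a syllogism)."""
    seen, stack = {start}, [start]
    while stack:
        for nxt in adj.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def all_equivalent(adj):
    """True iff every statement implies every other one, i.e. the
    implication digraph is strongly connected."""
    nodes = set(adj)
    return all(reaches(adj, v) == nodes for v in nodes)

# The optimal cycle strategy: n edges for n statements.
cycle = {"A": ["B"], "B": ["C"], "C": ["D"], "D": ["A"]}
print(all_equivalent(cycle))  # True

# Dropping one edge breaks it: D no longer reaches anything else.
broken = {"A": ["B"], "B": ["C"], "C": ["D"], "D": []}
print(all_equivalent(broken))  # False
```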
So assume that statement A is true: the set A is a subset of B. What we need to prove is that A union B is a subset of B. So we're going to take an arbitrary element X of A union B and argue that it belongs to B. Now, if X belongs to A union B, there are two cases to consider: X is in A, or X is in B. If X belongs to A, then since A is a subset of B by assumption, X is inside of B. And if X is in B already, then clearly X is in B. Both cases lead to X being inside of B. Therefore A union B is a subset of B, which gives us the first implication: A implies B. So that's what we've shown so far; I'll draw my four statements here, A, B, C, D, and we have now shown that A implies B. Now for the next one, assume that statement B holds: A union B is a subset of B. I want to use that assumption to show statement C, that the set A intersect B is equal to A, okay? How are we going to do that? Well, to show that two sets are equal to each other, we show they're subsets of each other. One direction is super, super easy: clearly A intersect B is a subset of A. I say this is clear because I don't need any further assumptions whatsoever; the intersection of A with any other set is a subset of A, right? By the definition of intersection, you're in A and in B, therefore you're in A. So I don't need assumption B for that direction; it's just always true. All right, now to show the other direction, that A is a subset of A intersect B, we take an arbitrary element of the set A and show that it belongs to the intersection. So let X be inside of A. Well, clearly X belongs to A, so next I need to show that X belongs to B.
Now, not by our assumption yet, note that since X belongs to A, and A is a subset of A union B (again, this is always true: uniting a set with another set can only make it bigger), X is an element of A union B. This is where our assumption comes into play. Statement B says that A union B is a subset of B. So X, which belongs to A union B, also belongs to B. Now X is in A and X is in B, therefore X is inside the intersection. This shows that A is a subset of A intersect B, which gives equality, and that gives us the second implication: we've now shown that B implies C. Now for the third one, we assume statement C: A intersect B is equal to A. We then want to prove statement D, that the set difference of A and B is the empty set. So that's what we're going to do here. Clearly every set has the empty set as a subset, so what we need to show is that there's nothing in A minus B. We're going to prove this by contradiction. Suppose that the set difference A minus B is not empty; that is, there exists some element X that belongs to the set A minus B. Well, if X belongs to that set, then X is in A but X is not in B, okay? But by assumption, A is equal to A intersect B. So since X is in A, X is in A intersect B, which means X is in A and X is in B. And this is our contradiction: X is not in B and X is in B. So we get a contradiction, which means our assumption was bad, and that assumption was that A minus B was not empty. Therefore A minus B is empty, and this shows that C implies D. And so to finish the proof, we have only one more implication to go: we have to show that D implies A. How do you do that?
You assume statement D, that A minus B is the empty set, and we then need to prove statement A, that A is a subset of B. To do that, I take a typical element of A, call it X: let X be inside of A. I'm going to prove this one by contradiction as well. I could do it directly, but I like the contradiction proof here because of the empty set that's involved. So take an element X of A, and for the sake of contradiction, suppose that X is not inside of B, okay? Since X is in A but not in B, X is inside of A minus B. But A minus B is the empty set, and this is my contradiction: since the set is empty, it can't contain an element. Therefore, by our proof by contradiction, X is inside of B. And since X was an arbitrary element of A, this shows that A is a subset of B, giving us the last implication we were looking for. And since we've now constructed a loop of four implications, this shows that all four statements are logically equivalent to each other. This is the best way to show that multiple statements are logically equivalent to each other: you draw a loop. The if-and-only-if statements we had proven previously follow that same pattern, just with two statements: you prove an implication in one direction, then you prove it in the other direction, and that forms the loop you want for your graph. So for a bi-conditional statement you don't really get a shortcut, but for longer lists of three, four, or 20 statements, you can dramatically shorten the work by just going in order and forming one big loop.
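As a final sanity check of the theorem we just proved (my own illustration, not part of the lecture), we can verify on every pair of subsets of a small universe that the four statements are always all true or all false together:

```python
from itertools import combinations

def subsets(universe):
    """All subsets of a finite universe."""
    items = sorted(universe)
    return [set(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def conditions(A, B):
    """Truth values of the four statements from the theorem."""
    return (
        A <= B,            # statement A: A is a subset of B
        (A | B) <= B,      # statement B: A union B is a subset of B
        (A & B) == A,      # statement C: A intersect B equals A
        (A - B) == set(),  # statement D: A minus B is empty
    )

universe = {0, 1, 2}
agree = all(
    len(set(conditions(A, B))) == 1  # all four agree for this pair
    for A in subsets(universe)
    for B in subsets(universe)
)
print(agree)  # True
```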