My talk is flexible; I can shorten it nicely to fit the amount of time I actually have. First, a word on behalf of Roy: my name is Gene, I'm in the logic program, but I'm not normally in charge of this seminar. Roy is the organizer this semester, and he isn't here today, so I'll introduce myself. Thanks for letting me speak first; I enjoy any opportunity I can get to bombard people with interesting point-set topology. So today we're going to talk about point-set topology, specifically compactifications. Everything today is going to be Hausdorff. Let's recall what that means: given two distinct points, I can separate them by disjoint open sets. That's the setting in which life happens today. But we're interested not just in Hausdorff spaces; we're interested in compact spaces. So let's recall: X is compact means every open cover has a finite subcover. An open cover is just some collection of open sets that covers the whole space, and a finite subcover is hopefully clear. There are many other equivalent formulations I could write down. One that's often really useful to work with is this: every net has a convergent subnet. A net is a generalization of a sequence; I won't be using nets too heavily today, but there is a sequence-like definition of compactness that does generalize to arbitrary spaces. And why do we like compactness? Compact spaces are really nice and much easier to work with. If we have a space that's not compact, we would often like to stick it inside some compact space so that we can use the techniques compact spaces allow us to use. So let's recall a few facts about the nice things compact spaces give us.
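To make the finite-subcover definition concrete, here is a minimal brute-force sketch in Python; the function name `has_finite_subcover` and the toy cover are my own illustration, not anything from the talk:

```python
from itertools import combinations

def has_finite_subcover(space, cover, k_max=None):
    """Brute-force search for a subfamily of `cover`, of size <= k_max,
    that still covers `space`.  For a finite space every open cover is
    finite, so this directly witnesses compactness."""
    cover = [frozenset(u) for u in cover]
    space = set(space)
    k_max = len(cover) if k_max is None else k_max
    for k in range(1, k_max + 1):
        for sub in combinations(cover, k):
            if set().union(*sub) >= space:
                return list(sub)   # a finite subcover
    return None                    # `cover` does not cover `space` at all

# A toy cover of {0,...,9}:
X = range(10)
cover = [{0, 1, 2}, {2, 3, 4, 5}, {4, 5, 6, 7}, {7, 8, 9}, {1, 9}]
sub = has_finite_subcover(X, cover)
print(sub is not None)  # True
```

This is exponential in the cover size, which is fine for a toy example; the point is only to see the quantifier structure of the definition.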
Actually, maybe let me start off with some prototypical examples. The unit interval [0,1] is a compact space, and any finite set with the discrete topology is compact. The other main thing we're going to do: if I have a lot of compact spaces, I can take arbitrary products; that's Tychonoff's theorem. In particular, what we're really going to focus on a lot today is taking the unit interval and raising it to some power, [0,1]^I. Here I is just some set; it could be ridiculously huge, but we still get a nice compact space. Remember that a basic open set in the product topology is given by specifying an open set on finitely many coordinates; on all the remaining coordinates you take the whole space. So if I have compact spaces I can take products, and the other prototypical construction we'll work with is closed subspaces: a closed subspace of a compact space is compact. What's actually kind of surprising is that the interval [0,1], together with the ability to take arbitrary products and the ability to take closed subspaces, is entirely general: we get every compact Hausdorff space we could ever want out of this. Before I get into that, one more nice property. Compact Hausdorff spaces are normal (I don't have to keep specifying Hausdorff, because life is Hausdorff today). Let's recall what normal means by a picture. Here's a space X, here's a closed subspace A, here's another closed subspace B, and A and B are disjoint. What normality allows me to do is stick A inside an open set U and B inside an open set V, with U and V disjoint. This is actually incredibly useful; it's the driving engine behind, for example, the Urysohn metrization theorem, and it's how you prove Urysohn's lemma. Urysohn was kind of an important person. But if you look at the picture, normality is just a more powerful version of the Hausdorff condition.
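As an aside: in a metric space, the separating function behind Urysohn's lemma has an explicit one-formula proof, f(x) = d(x,A) / (d(x,A) + d(x,B)). A minimal sketch on finite subsets of the real line (the helper names are mine):

```python
def dist_to_set(x, S):
    # d(x, S) = inf over s in S of |x - s|; a min suffices for finite S.
    return min(abs(x - s) for s in S)

def urysohn(x, A, B):
    """Continuous in x, equal to 0 on A and 1 on B, provided the closed
    sets A and B are disjoint (so the denominator never vanishes)."""
    dA, dB = dist_to_set(x, A), dist_to_set(x, B)
    return dA / (dA + dB)

A, B = {0.0, 1.0}, {3.0, 4.0}
print(urysohn(0.0, A, B), urysohn(3.0, A, B), urysohn(2.0, A, B))  # 0.0 1.0 0.5
```

The hard content of Urysohn's lemma is that such a function exists in any normal space, where no metric is available.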
Normality is what's known as a separation axiom; sometimes it's called T4. Today we're actually going to talk a lot about not T4 but what's called T3.5. And to motivate that, I need to tell you what the talk is about, because I have to introduce the second word: compactification. A compactification of a space X is a compact Hausdorff space X' together with an embedding of X into X' whose image is dense. It's helpful to just think of X as being a subset of X'; by some shifty relabeling you can actually arrange that. Also, if I omit the density condition, I can just take the closure of the image and obtain a compactification, because of the nice property that closed subspaces of compact spaces are compact. So the question we're attacking today is: which spaces admit a compactification? Let me introduce the buzzword: a space X is compactifiable if X admits a compactification. Now, what properties must compactifiable spaces have; what can we learn about them from the compactification? Well, compact spaces have normality, and from normality we immediately get what's known as complete regularity. I'm going to define it and justify why we get complete regularity. We need to remember that for normal spaces there is a stronger property that is actually equivalent to normality: not only can I separate A and B by open sets in a normal space, I can do so by a function. Let's recall what that means: there is some continuous function on X, with values in [0,1], that sends A to the value 0 and B to the value 1. I'm not requiring that A be the preimage of 0 or that B be the preimage of 1, just that A is contained in the preimage of 0 and B in the preimage of 1. That's certainly strong enough to imply normality as I've defined it. The hard direction of the equivalence is Urysohn's lemma; as we've heard, Urysohn was an important guy. So when I take a compactification of X, let's say I have a point. Let me draw a picture.
So here's X, and here is the compactification X'. Say I have a point p in X, and a closed set A, a subset of X, with p not in A; let me draw this a little more suggestively. A is closed in X, but A is not necessarily closed in X'. This is why compactifiable spaces are not necessarily normal: if I have disjoint closed sets and then take their closures in the compactification, they might suddenly meet somewhere. But since p is a single point, we're going to be okay: the closure of A, call it A', might have some extra points over here, but A' and p are still disjoint. So what we can do is separate A' and p by a function, because we can do it up in the compact space, where normality holds, and then restrict that function back to X. And this is exactly the property that is complete regularity. Let me write that down: a space is completely regular if any closed set and any point avoiding it can be separated by a function. Not just by open sets, but actually by a function; in the merely regular setting those are not the same, since Urysohn's lemma doesn't generalize to regular spaces. All right, so what we've done so far is isolate a property that any space admitting a compactification must have: any compactifiable space must be completely regular. What is, again, kind of surprising and not terribly hard is that this is all we need to require. Before I erase this, remember this guy right here: we're going to be taking products of unit intervals a lot in this proof. Theorem: a completely regular space is compactifiable. How are we going to prove it? With one of those buzzwords I mentioned in my abstract, which I'll put in parentheses here: the Gelfand transform. So I'm going to let C(X)...
This is not quite the traditional notation, but it's what I'm going to use: C(X) denotes the continuous [0,1]-valued functions on X. Any continuous [0,1]-valued function is automatically bounded, but I really want to fix the bounds at 0 and 1. As a function space this carries lots of topologies, but I actually don't care about any of that; I'm going to regard C(X) as just a really big set. And what did I say we can do with really big sets? We can put them in the exponent of [0,1] and obtain a nice compact space. So I'm going to form the space [0,1]^C(X). This is a very large space that's compact because it's just a product of copies of [0,1]. And the compactification is going to be a closed subspace, because what do we need to do? We need to cook up a compactification of X. I guess I should have said: suppose X is completely regular (yes, thank you, that's the direction we're going); we're going to construct a compactification. The compactification is going to be a closed subspace of this guy. So what's the map? [Q: Isn't the product just the space of measures?] Well, these aren't necessarily measures; the coordinates aren't linear functionals, and X doesn't necessarily have a linear structure on it. They're just all the continuous functions, bounded between 0 and 1. So define the map; actually, once you try to write down a map, this is really the only choice, which is really cool. What is φ of some little x going to be? A typical coordinate in this large product is indexed by some continuous function f, and the value I'm going to put at that coordinate is nothing more than f(x). [Q: This is like the double dual.] Yes, it's a very similar construction. And this is as f ranges over all of C(X). φ may not have dense image, but we're just going to take the closure of the image as our proposed compactification. So what do we need to prove?
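Here is a finite-dimensional toy of the evaluation map φ: instead of all of C(X), take a small hypothetical family F of [0,1]-valued functions on a few sample points in [0,1]. If F separates points, φ is injective; everything here is my own illustration:

```python
# F: a hypothetical finite stand-in for C(X), three [0,1]-valued functions.
F = [
    lambda x: x,              # the identity already separates points here
    lambda x: x * x,
    lambda x: min(1.0, 2 * x),
]

def phi(x):
    # Evaluation map: the coordinate indexed by f gets the value f(x).
    return tuple(f(x) for f in F)

points = [0.0, 0.25, 0.5, 1.0]
images = [phi(x) for x in points]
# If F separates points, phi is injective:
print(len(set(images)) == len(points))  # True
```

In the real construction, F is all of C(X) and the target is the compact product [0,1]^C(X); the toy only shows the shape of the map.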
We need to prove that φ is an embedding. What I hope is immediately clear: φ is definitely continuous, since each coordinate of φ is one of the continuous functions f, and it's definitely injective, since a completely regular Hausdorff space has enough functions to separate points. So the only tricky bit is showing that images of closed sets are relatively closed. To say that precisely: given A closed in X, we need to find a closed set W in the product with W ∩ φ(X) = φ(A). Once we show this, we will have that φ is an embedding. To construct our closed set W, this is where complete regularity comes in. A is closed, so we're just going to pick functions separating A from every point outside it: for every p in X outside A, choose f_p in C(X) separating A and p, normalized so that f_p is 0 on all of A and f_p(p) = 1. Then W is going to be those elements of the product, i.e. functions from C(X) to [0,1], such that for every p the coordinate at f_p takes the value 0. Then it's immediate that we've done what we set out to do; we want to argue that W ∩ φ(X) equals φ(A). W certainly contains φ(A), since every f_p vanishes on A. And if φ(q) landed in W for some point q outside A, that can't happen, because we separated q: at the coordinate f_q, φ(q) takes the value f_q(q) = 1, not 0. [Q: So W doesn't depend on A at all?] Oh no, W depends on A: remember, we chose these functions, and they determine which coordinates we demand to be 0. So that's our first characterization of compactifiable spaces. And with almost enough time to attempt the second characterization, but probably not quite enough, I'm going to attempt it anyway. Because something is sort of lacking in this proof. Let's ask: what is the Gelfand transform; which of these steps is it? I believe it's the step of forming the space [0,1]^C(X), and perhaps this map also; I'm not entirely sure. Gelfand was heavily involved in the ideas behind this proof.
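The closed-set trick above can be checked by hand on a four-point toy example. The separating functions f_2 and f_3 below are hypothetical choices of mine; complete regularity is what guarantees such functions exist in general:

```python
# Toy: X = {0,1,2,3}, closed set A = {0,1}.  For each p outside A, pick a
# [0,1]-valued f_p with f_p == 0 on A and f_p(p) == 1.
X = [0, 1, 2, 3]
A = {0, 1}

separating = {
    2: lambda x: 1.0 if x == 2 else 0.0,   # f_2: kills A, lights up p = 2
    3: lambda x: 1.0 if x == 3 else 0.0,   # f_3: kills A, lights up p = 3
}

def phi(x):
    # Evaluation map restricted to the separating coordinates.
    return tuple(f(x) for f in separating.values())

def in_W(vec):
    # W demands the f_p-coordinate be 0 for every p outside A.
    return all(v == 0.0 for v in vec)

print([in_W(phi(x)) for x in X])  # [True, True, False, False]
```

So φ(A) lands inside W while every point outside A is excluded by its own coordinate, which is exactly W ∩ φ(X) = φ(A) in miniature.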
These sorts of ideas are what go into the Gelfand–Naimark theorem, if you're familiar with those things. So what's deficient about this proof, or what else could we hope to understand? Given a completely regular space X, this proof constructs the same compactification every time; it's actually the β-compactification of X, the largest, most general compactification of X. But say I ask the question: given a compactifiable space, what compactifications can I form? This proof is not going to help us out; it spits out the same space every time. So it would be awesome if we could figure out some internal characterization of what the compactifications of X look like. And that's what I'm going to attempt to do in 25 minutes, so I'm just going to warn you now: my proofs might start getting pretty hand-wavy due to the time constraint. What we're going to do is take a brief diversion into the topic of uniform spaces. I'm going to define this completely generally for now: X is just a set; I have not topologized it or anything. I'm going to tell you what it means to put a uniformity, which I'll call script U, on X. It is a collection of subsets of X × X. The way to intuit this definition is to pretend that X actually is a metric space: draw X × X with its diagonal, and the members of script U look like bands around the diagonal. This picture is going to be made formal by the next few bullets. First, script U must satisfy: the intersection of all U in script U is exactly equal to Δ, where Δ is my notation for the diagonal. Here I am being stricter than the most general definition; I lied when I said I was being entirely general, but since life is Hausdorff today, I cooked this up in a way that will give us Hausdorff things later on. Normally one only requires that this intersection contain the diagonal. Second, we want script U to be closed under some nice closure properties.
So I might just say this really quickly: script U is a filter. A quick reminder of what that means: it's closed under finite intersections and under taking supersets. Intuitively, script U is a notion of "neighborhood of the diagonal", or "contains a neighborhood of the diagonal", so we should demand that it be closed under finite intersections and supersets. Third: if U is in script U, then U⁻¹ is in script U, where U⁻¹ is just U with the coordinates flipped. The fourth axiom is the trickiest but most important; it corresponds to a very weak version of the triangle inequality. Given some U in script U, there is a V in script U with V² ⊆ U. In the language of metric spaces, this says we can divide distances by 2. So given some guy in script U, I can find a smaller guy in script U. How much smaller? I have to explain this notation, because it's actually horribly abusive: by V² I do not mean V × V. V² is the set of pairs (x, y) such that there is a point z with (x, z) in V and (z, y) in V. Think of it as the triangle inequality: here's x, here's y, and they're roughly at "distance" U; I'm saying there is some, quote-unquote, distance V and a point z so that x and z, and z and y, are each at distance V. [Q: If you think of these subsets of the product as relations on the set, this is just the composite of V with itself.] Yes, yes, that's exactly what we're doing. [Q: Is there any value in thinking of it that way?] Not especially. Because this axiom is inductive, I can replace 2 with any n and axiom 4 will still hold; in practice you only need to go down to powers like 4 or so and you'll be able to prove anything you want to prove.
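Axiom 4 and the relation-composite notation can be sanity-checked on a finite metric space; `entourage` and `compose` below are my own illustrative helpers:

```python
# Finite metric space: a few points on the line, d(x, y) = |x - y|.
points = [0.0, 0.5, 1.0, 1.5, 3.0]

def entourage(eps):
    """The basic entourage U_eps = {(x, y) : d(x, y) < eps} of the
    metric uniformity."""
    return {(x, y) for x in points for y in points if abs(x - y) < eps}

def compose(V, W):
    """Relation composite: (x, y) is in V o W iff there is a z with
    (x, z) in V and (z, y) in W.  With V == W this is the V^2 of axiom 4."""
    return {(x, y) for (x, z) in V for (z2, y) in W if z == z2}

U = entourage(1.2)
V = entourage(0.6)                    # "divide the distance by 2"
print(compose(V, V) <= U)             # True: V^2 is contained in U
print({(x, x) for x in points} <= U)  # True: entourages contain the diagonal
```

The first check is just the triangle inequality in relational clothing: two hops of length under 0.6 cover a distance under 1.2.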
It's worth mentioning that we should have the intuition that X is actually a space and not just a set, and we're going to justify that intuition by cooking up a topology from a uniformity. I'll be a little informal here: τ(script U) is the topology in which a typical neighborhood of a point x is U[x] = {y : (x, y) ∈ U} for U in script U. Notice I did not say the typical open neighborhood; these will typically not be open neighborhoods. But from a notion of neighborhood you can recover a suitable notion of open set, if you want to work only with open neighborhoods. From now on, I'm going to be thinking topology first, uniformity second. We can simplify our discussion a lot by always working with members of script U that are open and symmetric, and I'll often make those assumptions implicitly. Before I go on, let's get some examples out there; uniform spaces do occur in nature. (1) The prototypical example: metric spaces. Your typical entourage consists of all pairs of points that are less than ε apart (strict inequality, if we want the entourages open). (2) Here's a fun example which I like: any topological group has a very natural choice of uniform structure, where a typical entourage looks like {(g, h) : gh⁻¹ ∈ V}, with V ranging over neighborhoods of the identity. (3) We have a lot of analysis people in here, so: normed spaces, or more generally spaces whose topology is generated by any collection of norms, or even more generally by pseudometrics, all carry natural uniformities. The topological vector spaces you work with in functional analysis have very nice uniformities, coming either from (2) or from (3); they're going to be equivalent.
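In the metric case, the neighborhoods U[x] of the induced topology are just the usual balls; continuing the finite toy example from before (all names are mine):

```python
points = [0.0, 0.5, 1.0, 1.5, 3.0]

def entourage(eps):
    # U_eps = {(x, y) : |x - y| < eps}, an entourage of the metric uniformity.
    return {(x, y) for x in points for y in points if abs(x - y) < eps}

def ball(U, x):
    # U[x] = {y : (x, y) in U}: the basic neighborhood of x in tau(U).
    return {y for (x2, y) in U if x2 == x}

print(sorted(ball(entourage(0.6), 1.0)))  # [0.5, 1.0, 1.5]
```

So the slice of an entourage over a point recovers the metric ball, which is exactly why the induced topology agrees with the metric topology.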
And here's where we're going; let me write this definition here. A topological space is uniformizable if its topology is the one induced by some uniformity. Why did I take this ten-minute diversion into defining uniform spaces? One of the key observations is that compact spaces are uniformizable. Let's try to prove that real fast; the proof is not bad. Take script U to be the collection of all neighborhoods of the diagonal in X × X. You have to check that this actually works, and notably for axiom 4 you have to use compactness to make it fly, but you can do it. A nice remark, which is also suggestive: this is the unique uniformity inducing the topology of a compact space. This is suggestive because what it's saying is that compact spaces have no choice at all in which uniformity generates their topology. And so this suggests that if you have a space that is merely compactifiable, maybe studying uniformities on it is a good way to study its compactifications. That's exactly what we're going to do, except for the fact that we only have 7 minutes left. I should remark, and I've used it implicitly, that subspaces of uniformizable spaces are uniformizable; you can just take the subspace uniformity. So compactifiable spaces are uniformizable by this result. You can imagine that if you have different compactifications of a space, the induced uniform structure will look different depending on which compactification you started with. And while you would hope that the correspondence is exact, that's not quite right: there are more uniformities than there are compactifications. But the good news is that every compactification is described by a uniformity. I don't have enough time for what I was hoping to do, which was to actually build this correspondence, going back and forth between uniformities and compactifications, but I'll just mumble something about it.
The construction runs via these things called near-ultrafilters: you define them from your uniform space, and you get a nice compact Hausdorff space out of it that compactifies your uniform space. But I don't have time to run through that, so I think I'm going to stop now. [Q: This question doesn't relate to anything you said recently, but you intrigued me earlier: you said you can construct any compact Hausdorff space by taking arbitrary products and closed subspaces of the unit interval. How hard is that to prove?] We did it; we proved it. Take the Gelfand transform of a compact space: you're taking the β-compactification of a compact space, so it's just the same space. Isn't it slick? [Q: Does this still work in non-Hausdorff spaces? I'm thinking of algebraic varieties.] I don't think compact spaces in the non-Hausdorff setting are necessarily uniformizable. The way the proof I sketched roughly runs is: suppose you have a failure of condition 4 when hoping that the neighborhoods of the diagonal form a uniformity. You use compactness to find a convergent subnet of a suitably chosen net, and you obtain a point, that is, a pair of points, a point in X × X, that lies in every element of your uniformity but not on the diagonal. That's the whole source of your contradiction, and it goes out the window when you're not Hausdorff. I know embarrassingly little about what happens in the non-Hausdorff setting. [Q: I have to ask the "everything is a category" question: do uniform spaces form a category? What are the morphisms?] Oh yeah: uniformly continuous maps. [Q: So do distinct compact spaces give rise to distinct uniform spaces?]
[Q, continued: And do distinct uniform spaces give you distinct compact spaces?] That's what I was trying to hint at, but not quite: distinct compact spaces give you distinct uniformities, but there are different uniformities that can give rise to the same compact space. I'll mumble a little more about this, because it's actually not hard to see that any uniform space gives rise to a proximity space, where you say A is near B if, for every entourage V in your uniformity, blowing A up by V meets blowing B up by V. Think of a metric uniformity: you're taking A and blowing it up by ε, blowing B up by ε, and demanding for every ε that the intersection be non-empty. And what matters in constructing the compactification is this proximity. There is in fact a notion of proximity space and of proximity-preserving continuity, and... [Q: I guess that would have to be equivalent... my line of inquiry was: if this is not going to work, it's not going to be what you think.] Oh, absolutely. Because you go from uniform to proximity, and proximity spaces are, I think by this discussion, naturally equivalent to [inaudible]... yes, I smell an adjunction in there somewhere. I'll stop, I'll stop. [Q: Do you have any results on how nice the uniformity needs to be for the space to be metrizable?] Ooh: countably generated. Right, because then the diagonal is G_δ... yes, a uniform space is metrizable if and only if it has a countably generated uniformity. And then a nice general remark: any uniform space has its topology generated by pseudometrics, exactly because of that observation. That's a nice alternate characterization of what a uniform space is: any space whose topology is generated by pseudometrics. And really,
your prototypical pseudometric is just some function to the reals. [Q: You do assume compactness, though? Compact plus countably generated uniformity is metrizable?] Uh, no, no. Any uniform space: any uniform space with a countably generated uniformity, equivalently with a G_δ diagonal, is metrizable. [Q: I'm afraid to see the U2 version.] Oh, my god.
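As a footnote to the proximity discussion in the questions above: on a finite metric space, the "near" relation (every ε-fattening of A meets that of B) can be approximated by testing a decreasing family of ε's. A sketch with hypothetical helper names:

```python
# Proximity induced by a metric uniformity: A is near B when, for every
# eps > 0, the eps-fattenings of A and B intersect.  For finite sets this
# amounts to dist(A, B) == 0.
def fatten(S, eps, points):
    # The eps-blow-up of S inside the finite space `points`.
    return {x for x in points if any(abs(x - s) < eps for s in S)}

def near(A, B, points, epsilons=(1.0, 0.5, 0.25, 0.1, 0.01)):
    """Approximate the 'for every entourage' quantifier by a decreasing
    family of epsilons, which is enough for a finite example."""
    return all(fatten(A, e, points) & fatten(B, e, points) for e in epsilons)

points = [0.0, 0.5, 1.0, 1.5, 2.0]
print(near({0.0, 1.0}, {1.0, 2.0}, points))  # True: they share a point
print(near({0.0, 0.5}, {1.5, 2.0}, points))  # False: they stay 1.0 apart
```

In an infinite space, nearness is strictly weaker than intersecting (e.g. a set and a limit point of it are near), which is what the proximity structure records beyond the topology.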