Today we are starting a new chapter on a new topic, namely simplicial complexes. The motivation for simplicial complexes is severalfold. First of all, the simplest maps to understand, other than constant maps, are linear maps. Simplicial methods were introduced basically to approximate so-called differentiable maps by piecewise linear maps; the derivative, after all, is a linear approximation to a differentiable function at a given point. A polygonal path approximating a curve gives a lot of information about the curve itself; in fact, all the fonts you see on computers and everywhere else are made up of line segments. Similarly, even though the main objects of our study in topology are manifolds, we need to prepare ourselves by studying simpler objects with wider scope first. Manifolds are themselves built up out of chunks of Euclidean spaces, but the building process there is quite different from what we are going to do now. We are going to build up our objects of study, namely what are called simplicial complexes, out of closed simplexes, or more generally out of what are called convex polyhedra. So let us first understand these objects in their own right, in isolation, before bringing in simplicial complexes. The key phrase here is the basics of affine geometry. Affine geometry is nothing but a study of simple ideas from linear algebra; the only thing is that the vector space structure has no origin in it. In other words, we pretend that we do not know which point is the origin. Conceptually this is fundamental, but technically everything can be transferred to linear algebra by picking any point as your origin. That is the advantage: instead of one fixed origin, you can treat any other point as the origin. Statements which are independent of which point you choose as the origin belong to affine geometry. So let us fix some notation so that we do not run into confusion again and again.
R^d denotes the d-dimensional vector space over the real numbers. The standard basis vectors (1, 0, ..., 0), (0, 1, 0, ..., 0), ..., (0, ..., 0, 1) are denoted by e_i. We will denote the coordinates of an element of R^d by x_0, x_1, ..., x_{d-1}. In fact, we will keep including R inside R^2 inside R^3 inside R^4 and so on, by appending one extra 0: R is included inside R^2 by x going to (x, 0), inside R^3 by x going to (x, 0, 0), and so on. This will allow us to study all the R^d simultaneously. The notation for the union is R^infinity, which consists of all infinite sequences of real numbers that are eventually 0, i.e., after a certain stage only 0s appear. In terms of vector spaces, it is the direct sum of infinitely many copies of R. We shall denote by |Delta^n| the set of all points (x_0, x_1, ..., x_n) in R^{n+1} which satisfy the following properties: all the coordinates are non-negative (and hence lie between 0 and 1), and their sum is equal to 1. The condition that the sum equals 1 by itself defines a codimension-one, i.e., n-dimensional, subset inside R^{n+1}. It is not a vector subspace, but it is what is called an affine subspace, of dimension n. Inside that, I am taking only those points all of whose coordinates are non-negative; this set is what is denoted |Delta^n|. You may wonder why this modulus sign, why not just Delta^n; we will see the reason later, and once we are familiar with all this, we may just write Delta^n. This space is called the standard n-simplex. Why n, when it sits inside R^{n+1}? That is immediate: the dimension of this affine subspace is n, and the simplex has full dimension inside it. Let us look at one case, namely n = 1, so we are working in R^2. What will this set be? All (x_0, x_1) with x_0 + x_1 = 1 and x_0, x_1 non-negative, which is the line segment joining (1, 0) and (0, 1) inside R^2.
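To make the definition concrete, here is a minimal sketch (not from the lecture; the function name and the tolerance are my own choices) that tests whether a point lies in the standard n-simplex:

```python
def in_standard_simplex(x, tol=1e-9):
    """Return True if the point x = (x_0, ..., x_n) lies in the standard
    n-simplex |Delta^n|: all coordinates non-negative, summing to 1."""
    return all(c >= -tol for c in x) and abs(sum(x) - 1.0) <= tol

# The vertices e_i and their convex combinations lie in |Delta^n|:
print(in_standard_simplex((1.0, 0.0)))         # True: a vertex of |Delta^1|
print(in_standard_simplex((0.25, 0.25, 0.5)))  # True: an interior point of |Delta^2|
print(in_standard_simplex((0.7, 0.7)))         # False: coordinates sum to 1.4
```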
So it is one-dimensional, and it is called the standard 1-simplex. Thus the 1-simplex is embedded in R^2, and the 2-simplex is embedded in R^3: the standard 2-simplex is the triangle spanned by (1, 0, 0), (0, 1, 0), (0, 0, 1), and those three points are its vertices. We assume that the reader is familiar with a fair amount of linear algebra, but perhaps not with this affine geometry; that is why we are recalling some of these basic things. Start with n points x_1, ..., x_n inside R^d; this n has nothing to do with the dimension d, it is just finitely many points. What does an affine combination mean? It is like a linear combination, sum of lambda_i x_i with the lambda_i real numbers, but with the sum of the lambda_i equal to 1. This is like the parameterization of a line segment, t v + (1 - t) u. Why t and 1 - t? Because their sum is 1; that is the whole idea. This gives you the entire line, and the line need not pass through the origin. There is no other condition: the sum must equal 1. So when there are only two points x_1 and x_2 (these are points, not coordinates), t x_1 + (1 - t) x_2, as t varies, traces the line passing through x_1 and x_2; that line is the set of affine combinations of x_1 and x_2. Similarly, three points may give you a plane, but not always: if the three points are already collinear, then their affine combinations will give you nothing more than that line. Affine combinations always make sense. By an affine subspace A of R^d, we mean a subset of R^d with the property that every affine combination of points of A is inside A again: take finitely many points of A and take an affine combination of them; it must be inside A. So A is closed under taking affine combinations. You see, this is exactly analogous to how linear subspaces are defined; the only difference is that in linear combinations there is no condition on the scalars, the lambda_i are arbitrary.
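The parameterization just described is easy to compute. The following sketch (the function name is mine, not the lecture's notation) forms an affine combination of points, insisting that the coefficients sum to 1:

```python
def affine_combination(points, coeffs):
    """Return sum of lambda_i * x_i; valid only when the lambda_i sum to 1."""
    assert abs(sum(coeffs) - 1.0) < 1e-9, "coefficients must sum to 1"
    d = len(points[0])
    return tuple(sum(l * p[j] for l, p in zip(coeffs, points)) for j in range(d))

# The line through x1 and x2 is parameterized by t*x1 + (1-t)*x2:
x1, x2 = (1.0, 0.0), (0.0, 1.0)
print(affine_combination([x1, x2], [0.5, 0.5]))   # (0.5, 0.5): the midpoint
print(affine_combination([x1, x2], [2.0, -1.0]))  # (2.0, -1.0): t = 2, outside the segment but on the line
```

Note that negative coefficients are allowed, which is why affine combinations of two points give the whole line, not just the segment between them.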
Here the sum must be 1; that is the difference. Given a subset M of R^d, by the affine hull of M is meant the collection of all affine combinations of points of M, and it is denoted A(M). A(M) is analogous to L(M), the linear span of a subset; it is a similar construction, only now I am taking affine combinations. We say that a subset S of R^d is affinely independent; once again this corresponds to linear independence in linear algebra. What is it? If the sum of lambda_i x_i equals 0 and, along with it, the sum of the lambda_i equals 0, then each lambda_i must be 0. Watch out, the extra condition is what makes this different: if you do not put the condition that the lambda_i sum to 0, what you get is linear independence. So this is the definition of affine independence. It is not cooked up; it is forced upon us, as you will see in a moment. A function f from A to B between affine subspaces is said to be an affine transformation if it satisfies the following affine linearity: f(t x + (1 - t) y) = t f(x) + (1 - t) f(y). We are not saying that f(x + y) = f(x) + f(y), nor that f(t x) = t f(x); that would be linearity. This is affine linearity; that is the difference. Here t is any scalar, and x and y are points of A; since A and B are affine subspaces, whenever x and y are inside A, t x + (1 - t) y is also inside A for all t. Now consider singleton sets and two-point sets. Any singleton set is affinely independent. In linear algebra, the set {0} is not independent, while any non-zero vector is; in affine geometry such differences are not there: every singleton set is independent. See, this is the beauty of affine geometry; in this sense it has a more uniform structure than a vector space. And any two-point set, with the points distinct, is affinely independent, whereas in linear algebra any set containing the zero vector is not linearly independent.
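The definition can be tested mechanically: x_1, ..., x_k are affinely independent exactly when the lifted vectors (1, x_i) in R^{d+1} are linearly independent, since a relation "sum of lambda_i (1, x_i) = 0" says precisely that the lambda_i sum to 0 and the sum of lambda_i x_i is 0. Here is a sketch (function names are my own; exact rational arithmetic avoids floating-point surprises):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (given as a list of rows) by exact Gaussian elimination."""
    m = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def affinely_independent(points):
    """Points are affinely independent iff the lifted vectors (1, x_i)
    in R^(d+1) are linearly independent."""
    lifted = [[1] + list(p) for p in points]
    return rank(lifted) == len(points)

print(affinely_independent([(0, 0), (1, 0), (0, 1)]))  # True: a genuine triangle
print(affinely_independent([(0, 0), (1, 1), (2, 2)]))  # False: collinear points
print(affinely_independent([(0, 0)]))                  # True: singletons are always independent
```

Notice that the singleton {(0, 0)} comes out affinely independent, exactly the point made above about affine geometry treating no point as special.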
As soon as 0 is in the set, it is not linearly independent; that is the difference in the linear case. So there is a close relation between affine independence and linear independence, but they should not be confused with each other. The key result is the following lemma, whose proof is very easy; we will go through it. What is the correspondence between affine transformations and linear transformations? Lemma: take a function f from R^m to R^n. Here I am thinking of R^m as an affine space. Remember what an affine space is: affine combinations of any two of its elements should lie inside it; that is all. So R^m is an affine space; it is also a vector space, but that does not matter. If you forget the origin, it becomes an affine space; that is the point. The claim is that f is an affine transformation (not a linear transformation) if and only if the function F(x) = f(x) - f(0) is a linear transformation. A linear transformation would take 0 to 0; f might not, because it does not treat any point as distinguished. That is the whole business. But now you do know where 0 is: look at the image of 0 under f and subtract it; the resulting function will be linear, and conversely. In other words, x going to A x is a linear map, while x going to A x + b is an affine linear map. So any degree-one polynomial map defines an affine transformation, and if the constant term is 0, it is a linear map. I have given the proof here as well. Suppose F, obtained after subtracting f(0), is a linear map. Then go back to the original function: f(t x + (1 - t) y) is by definition F(t x + (1 - t) y) + f(0), because F of anything is f of that thing minus f(0), so I add back f(0).
Now F is linear, so this equals t F(x) + (1 - t) F(y) + f(0), that is, t (f(x) - f(0)) + (1 - t)(f(y) - f(0)) + f(0). The f(0) terms cancel, leaving t f(x) + (1 - t) f(y), so f is an affine transformation. For the converse we could just reverse the steps, but we can also do it like this. Assume f is an affine transformation. First, F(0) = f(0) - f(0) = 0. Next, f(t x) can by definition be written as f(t x + (1 - t) 0), which by affine linearity equals t f(x) + (1 - t) f(0). Therefore F(t x) = f(t x) - f(0) = t f(x) + (1 - t) f(0) - f(0) = t (f(x) - f(0)), which is by definition t F(x). So the scalar t comes out. Finally, we have to check the addition, F(x + y) = F(x) + F(y). Write x + y as twice (x + y)/2, i.e., as 2 (x/2 + y/2). By the scalar property just proved, F(x + y) = 2 F((x + y)/2) = 2 (f((x + y)/2) - f(0)). But (x + y)/2 = (1/2) x + (1/2) y is an affine combination, since 1/2 + 1/2 = 1, so f((x + y)/2) = (1/2) f(x) + (1/2) f(y). Therefore F(x + y) = f(x) + f(y) - 2 f(0) = (f(x) - f(0)) + (f(y) - f(0)) = F(x) + F(y). Therefore F is linear: whenever the little f is an affine transformation, the capital F is linear. So here are a few exercises. (1) Show that an affine combination of points of the affine hull A(M) is again a point of A(M). (2) Show that an affine subspace A of R^d is a vector subspace if and only if it passes through the origin; that is all. All these exercises are very straightforward. (3) Show that an affine subspace A of R^d is nothing but a translate of a linear subspace by a vector. Exercise (3) will follow from (2): every A can be written as V + v for some vector subspace V and some vector v, and if you can choose that vector to be 0, then A itself is a linear subspace, which is the previous exercise.
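The lemma is easy to see in coordinates. In the sketch below (the matrix A and vector b are made-up values for illustration), f(x) = A x + b is affine, and F(x) = f(x) - f(0) strips off the translation, leaving the linear part:

```python
def f(x):
    """A sample affine map f: R^2 -> R^2, f(x) = A x + b."""
    a = ((2.0, 1.0), (0.0, 3.0))  # the linear part A
    b = (5.0, -1.0)               # the translation part, so f(0) = b
    return tuple(sum(a[i][j] * x[j] for j in range(2)) + b[i] for i in range(2))

def F(x):
    """F(x) = f(x) - f(0); the lemma says F is linear."""
    f0 = f((0.0, 0.0))
    fx = f(x)
    return tuple(fx[i] - f0[i] for i in range(2))

# Check additivity F(x + y) = F(x) + F(y) on a sample pair:
x, y = (1.0, 2.0), (3.0, -1.0)
xy = tuple(a + b for a, b in zip(x, y))
print(F(xy))                                     # (9.0, 3.0): F(x + y)
print(tuple(a + b for a, b in zip(F(x), F(y))))  # (9.0, 3.0): F(x) + F(y), the same
```

Of course f itself fails additivity here, because f(0) = b is not 0; subtracting f(0) is exactly what repairs it.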
So we will define the dimension of an affine subspace A to be the dimension of the vector space A - x, where x is any point inside A. Take a point x inside A and subtract it from all elements; the subtraction here is not throwing x away as a set, it is translating. In particular, A - x will contain 0, so it becomes a vector space; look at the dimension of that. This is an easy way of defining dimension by using linear algebra; it can also be done intrinsically, independently of all this. Here is another set of exercises. All these exercises will be sent to you separately; you do not have to copy them from here. I do not want to discuss each one of them now, but before going to the next session you should be familiar with these things, so do not waste your time; make sure you know all of them. Next there is one more concept I want to introduce, one which you might not have learned in linear algebra. Take a subset A of R^d. We say this subset is in general position if every k-subset of A is affinely independent, for every k less than or equal to d + 1. A itself may be an infinite set. What is a k-subset? A k-subset means a subset which has k elements. For example, a subset A of R^3 is in general position if and only if no three distinct points are collinear and no four points are coplanar. A single point is always affinely independent, and so are any two distinct points, so those cases give no condition. Starting with three points there is a genuine condition: since k = 3 is at most d + 1 = 4 here, every 3-subset must be affinely independent, which means no three points may be collinear. Similarly, if you take any four points, they should not lie in the same plane. That is the meaning of general position in R^3. In R^4, you have one more condition.
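For a finite set, this definition can be checked by brute force over all k-subsets. Here is a sketch (function names are my own; exact arithmetic over the rationals is used so that collinearity is detected reliably):

```python
from fractions import Fraction
from itertools import combinations

def _rank(rows):
    """Exact matrix rank via Gaussian elimination over the rationals."""
    m = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                fac = m[i][col] / m[r][col]
                m[i] = [a - fac * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def in_general_position(points, d):
    """Every k-subset with k <= d + 1 must be affinely independent,
    i.e. its lifted vectors (1, x_i) must be linearly independent."""
    for k in range(2, min(len(points), d + 1) + 1):
        for subset in combinations(points, k):
            if _rank([[1] + list(p) for p in subset]) < k:
                return False
    return True

# In R^2 the condition is: distinct points, no three collinear.
print(in_general_position([(0, 0), (1, 0), (0, 1), (1, 2)], d=2))  # True
print(in_general_position([(0, 0), (1, 1), (2, 2), (0, 1)], d=2))  # False: three points collinear
```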
In R^n, you have to go all the way up to n + 1: every (n + 1)-subset should be affinely independent. Notice that this definition is stronger than merely saying that every (d + 1)-subset of A is affinely independent. You may try to prove the relation between the two, but that is not the point here; I am building up the condition in one go: every d-subset is affinely independent, every (d - 1)-subset is affinely independent, and so on, down to every 2-subset, while every 1-subset is automatically independent. All those statements are built into the definition at once; that is the difference between stating them all and stating just the top one and trying to derive the others. The two conditions are equivalent only if A has at least d + 1 elements; you can check that. Now here is a general position theorem, which uses a little bit of linear algebra. The theme is that independence of a set of vectors is an open condition; in other words, linear dependence is a closed condition. That is what is exploited in this theorem. Theorem. (a) Take any n points v_1, ..., v_n inside R^d. Given epsilon > 0, there exist points w_1, ..., w_n in general position such that each w_i is epsilon-close to the corresponding v_i, i.e., the norm of v_i - w_i is less than epsilon. (Of course, if closeness were the only condition, I could have taken w_i = v_i.) For example, take three points in R^3: if they are collinear, what will you do? You move one of the points slightly away, and then they are in general position; and this "slightly away" can be as small as you please, since epsilon can be any positive number. That is the meaning of part (a). (b) If the set is already in general position, then there exists an epsilon > 0
such that any set w_1, ..., w_n with the norm of v_i - w_i less than epsilon for every i is again in general position. Note the quantifiers: in (a) the statement holds for every epsilon, while in (b) such an epsilon merely exists. So part (a) says that sets in general position are very dense, and part (b) says that general position is a stable property: if you perturb the whole configuration slightly, it is still in general position. That is the interpretation of (a) and (b). The proof, as I told you, just follows from the fact that dependence of vectors is a closed condition. Consider the space of (d + 1) x n matrices over R, with d + 1 rows and n columns; as a vector space it is linearly isomorphic to R^{(d+1)n}. Think of the n points, each augmented by an initial coordinate 1, as the n columns of such a matrix; then general position amounts to the appropriate minors of this matrix being non-zero. The matrices for which some such minor vanishes form a union of finitely many closed subsets, namely the zero sets of the (d + 1) x (d + 1) minor polynomials. Therefore, the complement of this union of finitely many closed subsets is open as well as dense. Dense means: for arbitrarily small epsilon, you can find points satisfying the requirement; that gives part (a). And once we are in the open set, you can find an epsilon such that the whole epsilon-neighborhood stays inside the open set; that gives part (b). So I think we will stop here, and next time we will start studying simplicial complexes. Thank you.
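Part (a) is exactly the "move a point slightly away" recipe. Here is a sketch for three points in R^2 (the function names and the seeded random generator are my own choices): keep perturbing each coordinate by less than epsilon/2 until the three points are no longer collinear. Since collinearity is a closed, measure-zero condition, the loop exits essentially immediately.

```python
import random

def collinear(p, q, r):
    """Three points of R^2 are collinear iff the determinant of the
    lifted 3x3 matrix with rows (1, p), (1, q), (1, r) vanishes."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]) == 0

def perturb_to_general_position(pts, eps):
    """Move each point by less than eps until no three are collinear."""
    rng = random.Random(0)  # seeded for reproducibility
    while True:
        # Each coordinate moves by less than eps/2, so each point moves
        # by less than eps in the Euclidean norm.
        moved = [(x + rng.uniform(-eps / 2, eps / 2),
                  y + rng.uniform(-eps / 2, eps / 2)) for x, y in pts]
        if not collinear(*moved):
            return moved

pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]  # collinear: not in general position
moved = perturb_to_general_position(pts, eps=1e-3)
print(collinear(*moved))  # False: the perturbed points are in general position
print(all(abs(a - x) < 1e-3 and abs(b - y) < 1e-3
          for (a, b), (x, y) in zip(moved, pts)))  # True: each point moved less than eps
```

Part (b) is the flip side: because the non-collinear configurations form an open set, a sufficiently small perturbation of points already in general position cannot destroy the property.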