Yesterday I gave an overview of the entire lecture series, which may have created the impression that every lecture will be of that volume. That is not the case; that was just an overview of everything we are going to cover. From now on I will slow down a lot and explain each topic in detail. On top of that, it is the nature of this subject that many things are involved, so I need to tell you what is involved, but I don't need you to learn everything in depth. For example, when I talk about conformal field theory, I don't need you to know conformal field theory; only what I say in the lectures will be necessary for you to learn. Doing the exercises will give you an idea of the depth of these lectures, so the exercises are important. Please do them. I have put them on the website, on the portal, although it is a little difficult even for me to find them now that I have put them up, so I have sent an email to Erica asking her to make it more obvious where to pick up the exercises. If that doesn't work, I will print them and hand them out. So we will slow down a lot from now on.

OK, so let us continue with conformal transformations; we reached this point yesterday. Conformal transformations are transformations which involve, as well as the usual spacetime transformations (translations plus rotations), scalings and inversions. So this kind of thing will happen: a shape will not only rotate, not only translate somewhere new, but it also gets deformed in a sense. It gets deformed because scales change, and an inversion is applied as well; I will tell you in a minute what the inversion is. The only thing which remains unchanged is the angles: all the lines which you see here forming right angles will still meet at right angles after the transformation.
To see this, what you need to do is look for transformations which keep the metric constant up to a scale factor. When you do that, you find that in more than two dimensions you get a finite-dimensional group, and in fact you get SO(D,2). So for D > 2 these transformations form SO(D,2): rotations, which give D(D−1)/2 parameters; translations, which give D; the scaling, which is just one parameter; and the inversions, which form another vector's worth, D parameters. Adding them all up, you find (D+2)(D+1)/2. This is SO(D,2); the comma is put there because the metric does not have the same sign in all D+2 dimensions. Just to emphasize that two of the directions carry a negative sign, we write the comma, but the structure of the algebra is that of SO(D+2). This is for D > 2.

But for D = 2 something wonderful happens, because in two dimensions I have the ability to complexify. Things become much simpler: for D = 2 I have only two components, so I can combine them into a complex number, z = x + iy. My coordinates are now complexified, and any function over the complex plane is itself a complex number: if I take z to g(z), this is again u(x, y) + i v(x, y). And if g is to be a function of z only, and not of z̄, this means that ∂g/∂z̄ must vanish, which is what I mean by this statement here. By ∂̄ I mean ∂/∂z̄, which (up to a normalization) is ∂/∂x + i ∂/∂y. Let me do a check here, because I always do this check myself: ∂/∂z̄ acting on z must vanish, so (∂/∂x + i ∂/∂y)(x + iy) must be 0, which it is: 1 + i·i = 1 − 1 = 0. So this is telling me that in this mapping the z̄ component does not appear; it is not the most general mapping I could make.
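Both small claims above, the parameter counting for the conformal group and the statement that ∂̄ annihilates any function of z alone, can be checked symbolically. This is just a sketch; the map g(z) = z² is a hypothetical example of a function of z only, not something from the slides.

```python
import sympy as sp

# 1) Parameter counting for the conformal group in D > 2:
#    rotations + translations + dilation + special conformal = dim SO(D,2)
D = sp.symbols('D', positive=True)
total = D*(D - 1)/2 + D + 1 + D
dim_so_d2 = (D + 2)*(D + 1)/2
count_check = sp.simplify(total - dim_so_d2)

# 2) Analyticity: dbar = d/dx + i d/dy (up to normalization) annihilates
#    any function of z alone; try the example g(z) = z**2
x, y = sp.symbols('x y', real=True)
g = sp.expand((x + sp.I*y)**2)
dbar_g = sp.simplify(sp.diff(g, x) + sp.I*sp.diff(g, y))

print(count_check, dbar_g)  # 0 0
```

If you replace z² by any polynomial in z the second check still gives zero, while anything involving z̄ (for example x² + y² = z z̄) does not.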
There is something restricted about it, and this is in fact what is known as the Cauchy-Riemann conditions. Because if you open this up, you get (∂/∂x + i ∂/∂y)(u + iv) = (∂u/∂x − ∂v/∂y) + i(∂v/∂x + ∂u/∂y) = 0. Hence ∂u/∂x = ∂v/∂y and ∂v/∂x = −∂u/∂y. These are called the Cauchy-Riemann conditions; I should have them on the next slide.

Question, yes? "In the last slide, when you showed the angle-preserving property, was that related only to two dimensions?" No, angle preservation holds in all dimensions. Let's go back here. This is a general consequence of the condition that the metric changes only by a factor. Since I have set that as an exercise, I don't want to do it on the board: you take coordinates, make an infinitesimal transformation, impose that condition, get conditions on the transformation, and then also show that the angle is preserved. I will show you in a second, in two dimensions only, that the angle is preserved. I mean, think about what kind of transformation this is: it makes a rotation, it makes a translation, but it also changes the scale, so it will push some parts in and pull other parts out. However, angles are kept locally. The fact that angles are kept is important: if you have a coordinate frame of perpendicular lines, under a conformal transformation it goes to another frame which again acts as a coordinate frame, even though the lines are not straight anymore. I will show you pictures in a second. "The angle between two vectors?" Yes, or between two lines if you like; I was thinking of grid lines just now, but it means that any two vectors meeting at an angle are transformed somewhere else and the angle is kept. "The inversion has D parameters; why is that?"
Yes. Actually, this word "inversion" is only a simplified expression; it is not a pure inversion. If I remember right, it is a shift combined with an inversion; in two dimensions I can give you an expression. So it is not a pure inversion but a shift and an inversion, and that is why the number of parameters it has is D: it is shifted by a vector and then inverted. Its proper name is the special conformal transformation, but that phrase is a bit heavy.

Where were we? OK, so these are angle-preserving transformations. Two lines which form an angle like this are transformed into two other paths, and let us try to show that the angle is preserved under this sort of transformation. The first question is: how do I calculate the angle? If I have two points z and w, which are essentially two vectors, each has its own angle, its phase as a complex number. The difference of the angles, from here to there, can be read off from the phase of the ratio w/z.

So now the question is what happens on such a graph. Let us start from here. To get a path like this, I need a mapping which takes an interval into the complex plane; call it z(t) = x(t) + i y(t), the coordinate of the path as it moves along. As t changes, I move along this line; this is curve one, C1. Curve two can be organized in exactly the same way, but I call it w(t) = x_w(t) + i y_w(t). Now under a mapping g which takes z to g(z), the first path is transformed to its conformal image, something on another plane. So by this mapping I go from z to g(z), and the two paths C1 and C2 go over to two other paths.
And I want to see what happens to this angle. To get the tangent, I take the derivative with respect to t, which gives me g′(z) dz/dt. This g′ has to be evaluated at the point of intersection, here, which I call z0; this is the tangent to the image of C1. Again, I can look at dg/dt, but this time g is acting on w: it equals g′ times dw/dt, and this g′ is evaluated at exactly the same point, not at a different point for w, because the curves intersect at z0. So these are the two tangents after the transformation g, and I want the angle between them after the transformation. Divide each tangent by its magnitude: in the angle between the two unit tangents, the factors of g′(z0) cancel on both sides. However, this is exactly what I would have calculated before the transformation: take dz/dt over its magnitude and dw/dt over its magnitude, and find the angle between them. So we see that the angle does not change when you make such a transformation g of the complex plane at an intersection.

Now, there is an obvious constraint here: g′(z0) has to be non-zero. "Does g have to be single-valued?" Not for this discussion; pointwise, all I need is that g is not singular at this point and that g′(z0) ≠ 0. But there is a more complicated version of this, which I will come to in a minute: if you want it to hold over a whole domain, then g has to be single-valued over that domain. Because our argument is pointwise, that is not a problem here; but if you require the statement to be true over a neighbourhood, a domain of the complex plane, then g has to be single-valued over that domain.
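The cancellation argument above can be seen numerically: multiply both tangent vectors by the same complex number g′(z0) and the angle between them cannot change. The map g(z) = z² + 1, the intersection point, and the two tangent directions below are all made-up examples, chosen only so that g′(z0) ≠ 0.

```python
import cmath

# A sample analytic map with g'(z0) != 0 at the intersection point
def g(z):
    return z**2 + 1

z0 = 1 + 1j                 # intersection point of the two curves
t1 = cmath.exp(0.3j)        # unit tangent of curve 1 at z0
t2 = cmath.exp(1.1j)        # unit tangent of curve 2 at z0

h = 1e-7
gprime = (g(z0 + h) - g(z0)) / h   # numerical g'(z0)

# Angle between tangents = phase of their ratio; g' cancels in the ratio
angle_before = cmath.phase(t2 / t1)
angle_after = cmath.phase((gprime * t2) / (gprime * t1))
print(abs(angle_before - angle_after) < 1e-9)  # True
```

Note that the argument says nothing about lengths: |g′(z0)| rescales both tangents, which is exactly the local change of scale mentioned earlier.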
Sure. OK, now. "So g is a conformal mapping, right? Where did you use that?" I used it in the fact that g is a function of z only, not of z̄. "Is any mapping that is just a function of z a conformal mapping?" Yes, but I will explain a little bit more now. Any mapping over the complex plane of this kind we will take as a conformal mapping, but there is a distinguished class called the Möbius group, and I will start with those. These are called Möbius transformations: consider the mappings which take z to the fraction (az + b)/(cz + d). This is clearly a subclass of all mappings of z, but it is a very good subclass, because each such map is one-to-one, an isomorphism: for every point in the z plane you have exactly one corresponding point in the g(z) plane. So if you take a domain in z, it goes to a domain in g(z) in such a way that each point goes to only one point, and because the map is one-to-one there is an inverse map which brings you back. For this class of mappings, one-to-one over an entire domain, and hence your question: here we can actually see that angles are preserved from the entire domain to the other domain.

The coefficients a, b, c, d are complex, but there is a condition on them: ad − bc = 1. On the slide you can see the inverse mapping: if g takes z to (az + b)/(cz + d), then z → (dz − b)/(−cz + a) brings you back; I should perhaps have written g(z) there. A little algebra will show you that this takes you back exactly to the point you started with. With the condition ad − bc = 1 satisfied, these transformations form a group, which we call SL(2,C): the group of 2 × 2 matrices over the complex numbers with determinant equal to 1.
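The "little algebra" for the inverse map can be spot-checked numerically. The coefficients below are hypothetical examples satisfying ad − bc = 1; the inverse is the one quoted on the slide.

```python
# Möbius map z -> (a z + b)/(c z + d) with ad - bc = 1, and its inverse
# z -> (d z - b)/(-c z + a)

def mobius(a, b, c, d):
    return lambda z: (a * z + b) / (c * z + d)

a, b, c, d = 2, 1, 1, 1          # example coefficients: ad - bc = 2 - 1 = 1
f = mobius(a, b, c, d)
f_inv = mobius(d, -b, -c, a)     # the inverse quoted on the slide

z = 0.3 + 0.7j                   # arbitrary test point
roundtrip = f_inv(f(z))
print(abs(roundtrip - z) < 1e-12)  # True
```

Composing two Möbius maps works out to multiplying their coefficient matrices, which is exactly why the class closes into the group SL(2,C).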
And this group is isomorphic to SO(2,2), which is exactly what we are supposed to get in two dimensions. We have SO(D,2) in higher dimensions; the Möbius group is its reflection in two dimensions, what the conformal group should be there. You can see that it actually contains all the components it should. If I take a = 1, c = 0, d = 1 (so the condition ad − bc = 1 holds), you just get a shift of the plane, z → z + b. If I take b = 0, c = 0, d = 1/a, what I get is z → a²z; but a has a magnitude and a phase, so this scales the point z and also rotates it. And then there is the unusual transformation, which I called the inversion: a = 0, d = 0, b = −1/c, which takes z to −1/(c²z). A scaling has happened, but also an inversion, 1/z.

"Why is the inversion necessary as an extra ingredient? Can't you obtain it by appropriate rotations and scalings?" You see that if you take the disk |z| = 1, there are points far away which have to come to points inside it. I want to take this faraway point into here, and I need 1/z to do that; the other transformations cannot. "What about combining rotations and scaling, like a spiral?" If you follow a spiral, you are really changing your scale as you move. The point is that these are rigid transformations: the scaling is a single λ which scales all points at the same time. Of course, if you make a point-dependent scaling you can do it; a global scaling cannot take the outside in, but a local scaling can, yes.

OK. What I will use, and the reason I have to suffer this difficult subject, is something I really need later: the Riemann mapping theorem, which says that you can map two domains to each other conformally if they are simply connected.
The conformal mapping between them exists, and its inverse is also conformal. This theorem allows me to map any two such domains of the complex plane to each other, and in particular to map any such domain to the unit disk around the origin. In a sense it means that all proper simply connected domains are equivalent, because each can be mapped to the unit disk and back. This is very useful for the calculations we do: your problem may be set in some difficult domain; I map it to the unit disk, solve it there, and then take it back.

What I will use it for is the claim, as we go forward, that any such domain on the complex plane can be mapped to the upper half-plane. The upper half-plane H is the complex plane with the condition y > 0, and it fits all the requirements of the Riemann mapping theorem. So essentially I can work on the upper half-plane, use this theorem, and whatever I prove there will also be true in any given finite domain. I won't give the proof here; it is too complex, and we don't really need it either. What I showed about the angles means that if I want such a mapping, f′ has to be non-zero on the whole of the domain D; otherwise the mapping does not work, because the angle-preserving property fails wherever f′ vanishes.

OK, now in physics we bend the rules a little. I give up this quality of one-to-one mapping and allow all analytic functions g(z). Strictly speaking, these no longer form a group, because sometimes the inverse might not exist, or it may be double-valued or multi-valued. But the good thing is that this opens up a much bigger symmetry, which is very useful in the calculations I do. So far we kept everything mathematically correct; now I go to these transformations, which do not form a group.
But I can associate an algebra with them and say that this algebra is the symmetry of my theory. The algebra associated with the whole set of analytic maps is called the Witt algebra; its quantum version is called the Virasoro algebra, and I will come to how this algebra is defined in a minute.

Before that, here are some conformal transformations of the complex plane onto itself, for various functions of z. The red lines and the blue lines are the images of the x and y axes. You see that by doing this mapping I now have a coordinate frame which is not Cartesian at all, but it may be useful for the problem at hand, which is why we use it so much in physics. The main equation we are usually solving is the Laplace equation, and I forgot to mention: the Cauchy-Riemann conditions imply that each component on its own satisfies the Laplace equation in two dimensions, ∂²u/∂x² + ∂²u/∂y² = 0, and the same is true for v. Since I have such equations to solve in physics, I use a conformal transformation to go to a new plane where things are much easier (it is usually the boundary conditions that cause trouble), solve there, and then come back to the original plane.

OK, so what is the Witt algebra? Let us define a set of operators L_n = −z^(n+1) d/dz. With these operators, I make a number of observations. First of all, L_0 is special because it gives me the scaling of a function: take a function f(z) and scale its argument by λ, with λ = 1 + ε very near 1. Then f((1+ε)z) = f(z + εz) ≈ f(z) + εz df/dz, and the correction term is exactly −ε L_0 f(z). The other special member of this set is L_{−1}, which is just −d/dz. The third is L_{+1}, which is −z² d/dz. Now, these three form an algebra, which you can see here.
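The closure of these three generators, and in fact of the whole family, can be verified symbolically. This is a sketch assuming the standard form L_n = −z^(n+1) d/dz, acting on an arbitrary test function f(z).

```python
import sympy as sp

z = sp.symbols('z')
f = sp.Function('f')(z)

def L(n, expr):
    # Witt generator L_n = -z**(n+1) d/dz acting on expr
    return -z**(n + 1) * sp.diff(expr, z)

def check(n, m, expr):
    # [L_n, L_m] - (n - m) L_{n+m}, which should vanish identically
    return sp.simplify(L(n, L(m, expr)) - L(m, L(n, expr))
                       - (n - m) * L(n + m, expr))

# The sl(2,C) triple L_{-1}, L_0, L_1 ...
checks = [check(n, m, f) for n in (-1, 0, 1) for m in (-1, 0, 1)]
# ... and some higher modes, where the same relation still closes
checks += [check(2, -3, f), check(5, 1, f)]
print(all(c == 0 for c in checks))  # True
```

The first set of checks is the closed three-element algebra on the slide; the extra two show that nothing special happens for the other modes, which is the jump to the infinite-dimensional Witt algebra discussed next.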
A little mathematics, a little algebra on a piece of paper, shows you that this is true. And this is good, because this is in fact the sl(2,C) algebra, which we had for the Möbius group. So I claim, and you may accept it, that these three operators are the generators of the Möbius group, the generators of z → (az + b)/(cz + d). However, something more than that has happened. The relation becomes really useful when you allow n to take other values than these three, because you find that you again get a closed algebra, but in a wider sense: it is now an infinite-dimensional algebra, for all integer n. A little mathematics shows that [z^(n+1) d/dz, z^(m+1) d/dz] = (n − m) z^(n+m+1) d/dz, so that [L_n, L_m] = (n − m) L_{n+m}. This is the Witt algebra.

We needed the three operators L_0, L_1, L_{−1}, and you might ask how we arrive at their form in terms of z. Let me take you from the other side. We know that this basic group is SL(2,C), because it consists of 2 × 2 complex matrices of determinant 1. SL(2,C) has the structure you probably know; you may know it as SU(2), but it is really SU(2) complexified. There is a way to rewrite the generators in terms of L_z and L_±, and this part of the slide is the algebra that L_z and L_± form. The other part is pulling a rabbit out of a hat: I write down these differential operators, and they happen to have the same algebra, so I suggest that it is in fact the same algebra. But to actually show that all the Möbius transformations are generated by these operators needs a little more work. "What do you mean by the shape of these operators in z?" If you go to another coordinate on the plane, from z to w, L_0 would be −w d/dw, and so on.

OK, so the next topic is self-similarity.
I will come back to conformal transformations later on. Patterns such as these are made by architects, perhaps to remind us of nature, and they pose exactly the same question that Professor Dar raised at the beginning of his lecture: why are there so many self-similar objects in nature? A self-similar object is one in which a part looks like the whole. Here, of course, a trick is used: you also have to change the scale, so that the small part, magnified, looks the same as the big one. The other name for self-similar objects is fractals: a geometrical shape is called self-similar, or a fractal, if a part of it looks almost the same as the whole. "Almost the same" has to be relaxed a little: you can see that it is not exactly the same, but it is essentially the same.

Here is an example of a fractal. This is a very popular drink in the Middle East, and when you drink it, it leaves a pattern on the glass like this. You see that this pattern looks like itself if you take a small piece of it and compare it to the whole. You can run computer software on it and calculate its fractal dimension; it comes out to be 1.22, which is a famous fractal dimension, as we will see later. At first, when we saw this, we said: aha, river networks! But it is not river networks. I will come back to this fractal dimension and what it signifies later. But first, what is a fractal dimension? Sorry, some software is missing on this computer and I cannot show you how fractals are drawn; I hope it is fixed by tomorrow so you can see these fractals then.

Two different concepts are stated here: one is self-affinity and the other is self-similarity. They are very close to each other. So let us take a graph, y(x).
I take a small part of this graph and ask whether, after scaling, it looks like the whole graph. Now two things can happen. Either I have to scale the two axes by the same number, in which case it is self-similar, or the two numbers are not the same: I scale one axis by A and the other by B for the graph to look like itself, in which case it is self-affine. So self-similar is the special case of self-affine with A = B. It is a lucky situation that I am overlapping with Professor Dar a lot on these topics; lucky for you, because you don't have to learn as much, and you get the same ideas repeated.

Here is an example of how to build a fractal: the Koch snowflake. What you do is take a triangle, divide each side into three parts, remove the middle part, and erect two edges there, so the side becomes a little bit longer; this gives the Star of David. Then, on every edge, you repeat the same process: take the edge, take the middle third off, add two edges to it, and you get the next shape; and you continue on and on. After infinitely many operations you get a very strange shape, perhaps like this, with an edge which is fuzzy. It fits in the two-dimensional plane, so its dimension is certainly less than two; but the edge is not a smooth line anymore, and it has a dimension greater than one. So the boundary of this shape is a fractal, with the property that it is neither two-dimensional nor one-dimensional: its dimension is between one and two, and I call that its fractal dimension. In this case the fractal dimension is log 4 / log 3, a number I will now explain how to calculate.

The point is that when you want to measure the length of this curve, you need to use a stick. So you take a stick which is, say, one unit long.
You lay it along the edge, count the number of times it fits, say six, and you say the length is six meters if the stick is one meter long. However, someone will object and say: what you have done is almost right, but there is a discrepancy, because here the stick is not really fitting on the line. So you say: OK, I take a smaller stick and use that to measure the length. You do that again, and (the nature of chalk lines being what they are) it now fits; but it is actually an approximation, because the line is thick, and if you look carefully the same problem arises again. So you keep increasing the accuracy by making the scale smaller and smaller, which is what I do here. Then you look at the total length as the scale decreases: you multiply the number of times the stick is used, N_k, which keeps increasing, by the length of the stick, l_k, which keeps decreasing. This product, as l tends to zero and N tends to infinity, gives you the length.

The question then is: how does this number N grow? The growth rate is very telling. If I have just a simple straight line of length L and use a stick of length l to measure it, I get N(l) = L/l: N(l) is the number of times a stick of size l is used to measure the line, so N(l) · l is the length. It doesn't matter here whether I take l to zero or not; this is correct, except that there may be a fraction left over at the end, which we can sort out. This is so because we have dimension one: the statement that this shape has dimension one is the statement that N(l) scales like l^(−1). However, if it were a two-dimensional object, then it would scale like l^(−2).
And that would mean the object has dimension two. So the power with which N scales is the dimension of the object. This is the mentality behind the calculation: when you do this for the Koch curve, log 4 / log 3 is the power with which N scales, and hence that is called the dimension of the fractal. It behaves a little like the topological dimension, so it is reasonable to accept it as the dimension of the fractal; this D_f is what we call the fractal dimension: the power with which the number of units scales as you measure the length of a fractal. You can do it this way as well: cover the object with balls of radius s, look at how N scales as s tends to zero, and that exponent is the fractal dimension.

This here is the Sierpinski triangle. (Ignore that; I think it is a software problem, it has come in from another slide.) The way it is constructed is that you take a triangle and take the middle off, which is this white piece; then you have three triangles at the corners. Then you go to each of these triangles and take its middle off; you end up with more triangles and repeat the process. Eventually you have a shape with many holes: all the white regions are holes, not part of the area of the triangle. So here is another fractal, constructed in a totally different way, and it certainly has a different fractal dimension from the Koch fractal I already mentioned.

Another concept is the Hurst exponent, which is defined for time series. So let us go back to this shape: I have a time series and now I want to analyze it. I ask what happens if you stretch the time direction by λ: to get the same-looking signal back, you have to rescale the signal amplitude by a different number, λ^H, and this exponent H is called the Hurst exponent.
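Before moving on, the two self-similarity dimensions quoted above can be computed directly from the construction rules: each step replaces the shape by N copies scaled down by a factor s, and D_f = log N / log(1/s). The counts below are the standard ones for these two constructions.

```python
import math

def similarity_dimension(copies, scale_divisor):
    # D_f = log(N) / log(1/s), with s = 1/scale_divisor
    return math.log(copies) / math.log(scale_divisor)

koch = similarity_dimension(4, 3)        # 4 segments, each 1/3 the length
sierpinski = similarity_dimension(3, 2)  # 3 triangles, each 1/2 the size

print(round(koch, 4), round(sierpinski, 4))  # 1.2619 1.585
```

Note that the Koch value log 4 / log 3 ≈ 1.26 sits between one and two, as promised, and the Sierpinski value log 3 / log 2 ≈ 1.58 is indeed different.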
Named, I think, for historical reasons, because Hurst introduced it. H can take different values. H = 1/2 is the famous Brownian motion: if you take Brownian motion over time and draw its graph, you find that it scales with Hurst exponent 1/2. If the Hurst exponent is bigger than one half, you have long-range dependence: in Brownian motion there is no correlation beyond the immediate step, but for H > 1/2 you get long-range, positively correlated behaviour, and for H < 1/2 you have a sort of sub-Brownian motion and therefore negative correlation. But essentially H is nothing but a fractal dimension in disguise: it is related to the fractal dimension of the graph by the expression you have here, D_f = 2 − H.

"This one, when you rescale the diagram?" Yes, it is essentially identical to the previous signal. If you take Brownian motion and, when you stretch the scale of t by λ, scale the signal by √λ, then you get exactly the same statistics back, because of this relationship for Brownian motion: the change of λ in time is cancelled by the √λ in amplitude.

However, this is Brownian motion in the sense that it is created by a random walk: as you know, you make a random walk on a lattice and this comes out. But you might say: I won't make a random walk, I will make a correlated walk, in which each step is correlated with the next. Then this index will come out bigger than one half. Or if you make it negatively correlated, which is called sub-Brownian motion, it will come out smaller than one half. And the question can really be set this way:
I go around, look at events in nature, record the time series, and ask: is it Brownian, sub-Brownian, or super-Brownian? Typically, that is how signals in nature are classified.

OK, my suggestion is that two straight hours is too difficult. Five minutes of break; stretch your legs and come back in five minutes, right? Do you give breaks in the middle of your talks, or do you go straight through? I think they don't give breaks because they cannot serve too much tea or coffee; people can go and drink some water.
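To close the loop on the Brownian-motion discussion above, here is a numerical sketch: for an uncorrelated ±1 random walk, the root-mean-square increment over a lag τ grows like τ^H, and the estimated H comes out near 1/2, exactly the Brownian value quoted earlier. The lags and sample size are arbitrary choices, not anything from the lecture.

```python
import math
import random

random.seed(0)

# Uncorrelated +-1 random walk: a discrete Brownian path
n = 1 << 17
path = [0]
for _ in range(n):
    path.append(path[-1] + random.choice((-1, 1)))

def rms_increment(lag):
    # Root-mean-square change of the path over a time lag
    diffs = [path[i + lag] - path[i] for i in range(0, n - lag + 1, lag)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# H is the slope of log(rms increment) vs log(lag); 1/2 for Brownian motion
lag1, lag2 = 16, 1024
H = math.log(rms_increment(lag2) / rms_increment(lag1)) / math.log(lag2 / lag1)
print(round(H, 2))
```

A persistent (positively correlated) walk measured the same way would give H > 1/2, and an anti-persistent one H < 1/2, which is the classification described above.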