Welcome to this video about the Lorenz attractor. In this video I will demonstrate a part of chaos analysis, namely attractor reconstruction with Python. We will talk about the Lorenz system in general before elucidating the reconstruction algorithms. For reconstruction purposes we will first apply the more classical Takens delay time embedding algorithm. Afterwards we will elaborate on a more recent approach, namely the spectral embedding. In our case the spectral embedding is based on Laplacian eigenmaps in combination with a principal component analysis and a k-nearest neighbor approach. Let's get started. The Lorenz system was proposed by Lorenz in 1963 and encompasses a description of a weather system, which is also a chaotic system. Lorenz discovered its sensitivity by rounding off decimals between simulation runs. The strange fractal attractor is similar in shape to a butterfly; we will see a graphical representation on the next slide. In addition, the underlying mechanic, namely sensitivity to initial conditions, is also reflected in the name butterfly effect. Next to sensitivity to initial conditions, chaos has topological transitivity and dense periodic orbits as its main characteristics. The butterfly effect illustrates the sensitivity to initial conditions property with the image of a butterfly flap that is capable of causing a tornado in Texas, as stated in the title of Lorenz's 1972 conference talk. The system consists of three variables and corresponding parameters; we will take a look at these equations next. Lorenz did not create the model alone: he was helped by Ellen Fetter, who conducted numerical simulations, and by Margaret Hamilton, who assisted in the initial numerical computations. The model represents a simplified atmospheric convection model. There exist several combinations of the parameters s, r and b which render the system chaotic. The parameters have a specific meaning.
S is the Prandtl number, which is defined as the dimensionless ratio of momentum diffusivity to thermal diffusivity. R is the Rayleigh number, a dimensionless number associated with buoyancy-driven flow. And B is a parameter related to the physical dimensions of the convection layer. In the depicted example, as you can see on the right side, s equals 10, r equals 28, b equals 2.667 and dt is 0.01. This system is used to demonstrate the attractor reconstruction methods. We will take a look at the Lorenz system and not at a measured time series, since a showcase of the mechanics is clearer when using a mathematical and fully disclosed system. Now I will present you the Python code for the Lorenz attractor. I will put the script onto our GitHub account and link it in the description. Furthermore, the Python code for the Lorenz system itself is also stated on Wikipedia. It is more and more common to find code on Wikipedia, which I find really convenient at times. As you can see, we are now within the Python code I want to use and present to you today. First of all, we don't require many packages: we just have the standard NumPy and Matplotlib's pyplot, and for the spectral embedding we require the SpectralEmbedding class from scikit-learn's manifold module, which makes our life so much easier and spares us a lot of coding, so we apply that. The first thing we take a look at now is how to generate a Lorenz system in Python. As I indicated, you can take the code from Wikipedia or write it yourself. For the Lorenz system, we have our coordinates x, y and z, and our parameter settings s, r and b, as indicated in the presentation; we just input our equations for the system here and return the respective derivatives. Furthermore, we set the parameter dt and the number of steps we want to calculate. The number of steps corresponds to the data points we want to generate with the system.
Since I'm using a laptop, I'm using only 10,000 points, but you can scale it up as much as you like if you have a more powerful machine. What we require next is to set up the arrays and the initial values, and then we can just iterate through the number of steps we selected above and calculate the simulation right here. At the end, we just plot our system in 3D, and that's basically how you can create your own Lorenz system in Python. Please be aware that you can change the colors and all the parameter settings within the plot as you like, if you don't like the color red, for example. One notable mention: if you alter the system parameters, the shape of the attractor will change too, so it's definitely worth playing around with it. For now, I will just execute the code and take a look at our Lorenz system. As you can see, since it's a mathematical system, we can see the attractor straight away. On this note, I'd like to mention that if you reconstruct a measured time series, this won't necessarily be the case; that's why I chose the Lorenz system. With a purely mathematical and fully disclosed system, you will see the attractor right away. We have one coordinate variable on each axis, so we can see the system in its overall beauty for the given parameter settings. What we can do now is take a look at those lines, which are called orbits or trajectories of the system, and they never intersect. It doesn't matter whether it's a time series reconstruction or a mathematical system: those trajectories will neither intersect nor repeat. Even where it looks like they are intersecting, they are not, and we can see that when we take a close look at our Lorenz attractor. Here we see why it's called the butterfly effect or butterfly attractor, and we can clearly see, if we shift it around, that those lines aren't intersecting.
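The simulation described above can be sketched as follows. This is a minimal version close to the code on Wikipedia mentioned in the video; the initial values (0, 1, 1.05) are an assumption, as any starting point near the attractor works:

```python
import numpy as np
import matplotlib.pyplot as plt

def lorenz(x, y, z, s=10, r=28, b=2.667):
    """Return the time derivatives of the Lorenz system."""
    x_dot = s * (y - x)
    y_dot = r * x - y - x * z
    z_dot = x * y - b * z
    return x_dot, y_dot, z_dot

dt = 0.01
num_steps = 10000  # number of data points to generate

xs = np.empty(num_steps + 1)
ys = np.empty(num_steps + 1)
zs = np.empty(num_steps + 1)
xs[0], ys[0], zs[0] = (0.0, 1.0, 1.05)  # assumed initial values

# Simple Euler integration through the selected number of steps
for i in range(num_steps):
    x_dot, y_dot, z_dot = lorenz(xs[i], ys[i], zs[i])
    xs[i + 1] = xs[i] + x_dot * dt
    ys[i + 1] = ys[i] + y_dot * dt
    zs[i + 1] = zs[i] + z_dot * dt

# Plot the system in 3D; color and styling can be changed freely
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot(xs, ys, zs, lw=0.5, color="red")
ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("z")
plt.show()
```

Changing s, r or b in the call to `lorenz` changes the shape of the attractor, as mentioned in the video.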
Please keep in mind that other reconstructed data sets might be of higher dimension than three, and the attractors may look a little bit awkward, but the same rule applies: they won't intersect. What else can we do with this Lorenz system? We can just flip it around once more, take a look at all of its sides and go back to the original view. And what we can do now is zoom in, where we can clearly see the two lobes of the Lorenz attractor, where all those trajectories circle around the unstable fixed points, which are depicted here and here. And if we zoom in a little bit more, we can clearly see all those dots we generated in our very nice Lorenz system. We will now return to the presentation before we jump into the reconstruction possibilities. Welcome back. We will now talk about our first reconstruction algorithm, namely the Takens delay time embedding or Takens approach. The earliest variant of the attempt to reconstruct attractors not from a mathematical system but from a time series was given by Takens in 1981. Takens, together with Ruelle, also coined the term strange attractor. Please note that I will not present the theoretical foundation, mathematical definitions and underlying rationales; I will give references to the according videos at the end of this video. The theorem is implemented by iteratively shifting a comb through the scalar data series as depicted below. You can imagine blowing up a balloon to full size: at first it is flat, then it inflates and shows its true form. In the Takens approach you shift a comb through a scalar time series as mentioned. In the illustration you see the index values for the time series: 1, 2, 3 and so on.
You select a delay time tau, which is normally based on the autocorrelation function: the delay time should be selected such that the autocorrelation has become insignificant, yet the delay time should not be chosen too large. Then you take the first value of the time series as the x-coordinate, the value at index x plus the delay time as the y-coordinate, and the value at x plus 2 times the delay time as the z-coordinate for the first point of the reconstruction. Note that you will require a sufficiently large data set for this procedure. Subsequently you shift the comb through the time series and plot the created 3D points accordingly. Now some theoretical notions are required to deliver a full picture of the algorithm. The reconstruction theorem states the existence of a transformation between the original and the reconstructed phase space. In addition, this implies property preservation under reconstruction, meaning that the characteristics of the dynamical system do not alter under the application of smooth coordinate changes, that is, diffeomorphisms; yet the preservation of the original geometrical structure of the phase space is not implied. We will now take a look at the Python code for the Takens delay time embedding for our Lorenz system. For the Takens delay time embedding algorithm we are required to use a scalar time series and not the 3D system. The first step is to save the Lorenz coordinate variables as separate time series. The second step is to select one of those; then we select the delay time, which I chose to be 60 for x and y based on the autocorrelation diagrams, and 20 for z. Please note that in this implementation it has to be an even number or the comb indexing will not work. Speaking of the comb, here we just divide our time series into three parts based on the delay time tau value we selected up here.
We are going to shift this comb through the complete time series, save all the points in a list and finally plot the complete system. That's the whole magic of the Takens delay embedding, and what I am going to do now is systematically show you the Takens delay time embedding graphics for the x, y and z variables, starting with the x variable. We see another 3D system, just that in comparison to the Lorenz attractor we plot x(t) versus x(t + tau), namely 60, versus x(t + 2 tau), which is 120. What we can see here now, for the x coordinate of the Lorenz system, is the Takens embedding approach and the reconstructed attractor. Please note that if you reconstruct a measured time series you won't get a nice pattern like this; you will most likely receive either a ball, which means that your system is maybe not chaotic or has some noise attached to it, or you will receive something like a spaghetti monster, which is perfectly acceptable for the Takens approach. A clearer and more distinct attractor can be obtained via the spectral embedding, which I am showing later. So what we do now is zoom in on this reconstruction and take a closer look at how the time displacements shape the Lorenz variable x. To continue, I will switch to the y variable. Please note that the delay time is the same, based on the autocorrelation diagrams, which I will not show today, and we will take a look at our y variable. The approach is basically the same: we take the y time series, iterate the comb through it and plot y(t) versus y(t + tau) versus y(t + 2 tau), and we receive a very nice reconstruction of the Lorenz attractor's y variable. As with the x variable, please keep in mind that if you are using a normal time series you will probably end up with a spaghetti monster representation.
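The comb procedure just described can be sketched as follows. Since this snippet is meant to stand alone, a simple two-frequency signal serves as a hypothetical stand-in for the Lorenz x time series; the helper name `delay_embed` is my own, not from the video's script:

```python
import numpy as np
import matplotlib.pyplot as plt

def delay_embed(series, tau):
    """Shift a three-pronged comb through a scalar series:
    point i becomes (s[i], s[i + tau], s[i + 2*tau])."""
    n = len(series) - 2 * tau
    return np.column_stack(
        (series[:n], series[tau:tau + n], series[2 * tau:2 * tau + n])
    )

# Hypothetical stand-in for a scalar coordinate of the Lorenz system
t = np.arange(0, 100, 0.01)
xs = np.sin(t) + 0.5 * np.sin(2.2 * t)

tau = 60  # delay time, chosen from the autocorrelation diagram in the video
points = delay_embed(xs, tau)

# Plot x(t) versus x(t + tau) versus x(t + 2*tau)
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot(points[:, 0], points[:, 1], points[:, 2], lw=0.5)
ax.set_xlabel("x(t)"); ax.set_ylabel("x(t + tau)"); ax.set_zlabel("x(t + 2 tau)")
plt.show()
```

Applied to the Lorenz x series with tau = 60, this produces the reconstructed attractor shown in the video.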
This is really normal, so don't be afraid and don't get anxious if you don't receive very nice representations. I will just flip it around and grant you a very nice look at it before we take a look at the z variable. For the z variable we select its data and change the delay time to 20 according to the autocorrelation diagrams, and now we can have a nice look at the reconstruction of the z variable of the Lorenz system, which in my opinion looks quite nice. We can regard it from all sides and see that it is quite shifted in itself, and it really looks gorgeous. What we can do now is zoom in again, like with the others, on the fractal structure right here in the middle, where we can see all those little data dots and all those nice trajectories lining up, and now you can clearly see that they neither intersect nor repeat. That's it for the Takens approach. We will now go back to the presentation and talk about the more novel approach, the spectral embedding. As you have seen, the Takens delay time embedding is a simplistic algorithm, easy to implement, and it provides you with first insights. To continue, the second method for the reconstruction of attractors is a more recent variant labeled spectral embedding and is a little bit more complex in its application. In parallel to the Takens embedding, a novel embedding technique using manifold embedding and Laplacian eigenmaps was implemented by Song et al. (2016) by applying a spectral embedding algorithm. It is assumed that the strange chaotic attractor of a given time series lies on a low-dimensional manifold which is embedded into a high-dimensional Euclidean phase space. It is premised that the topological structure of the dynamical system in phase space can be displayed by a few independent degrees of freedom embedded in a low-dimensional non-linear manifold, made visible by non-linear dimensionality reduction algorithms.
In our case this is combined with a principal component analysis. To conduct a successful mapping and extraction of the hidden strange attractor of the chaotic system, Laplacian eigenmaps are determined, since in practice it is not possible to measure the components of an unknown high-dimensional vector space. Spectral embedding is executed, which forms an affinity matrix based on a nearest neighbor algorithm and applies spectral decomposition to the corresponding graph Laplacian. The resulting transformation is given by the values of the eigenvectors for each data point. For the number of components it is common to apply the embedding dimension of the system. I also display a heuristic for the number of neighbors which I came up with during the empirical experiments for my PhD thesis and which results in the most workable reconstructions: you take one percent of your data length and add one and a half times your delay time. Some of you may now note that the resulting number of neighbors is quite high, yet please remember that for reconstructions you need a lot of data. As with the Takens embedding, I will now guide you through the spectral embedding in Python. For the spectral embedding we have to select the number of components, which normally resembles the embedding dimension as a minimum for the respective system. Since the Lorenz system is a mathematical, fully disclosed system, we know that the dimension is 3, so we set the number of components to 3. For the neighbors we just pick 20; for the mathematical system this doesn't matter as much as for real time series data. Finally, we have to select the affinity operator, which is the nearest neighbor algorithm as discussed. For now we have two procedures I want to present: the first is that we are able to show the attractor based on its eigenvalues by inputting all variables x, y and z into the procedure instead of using the delay times.
This is done here in this part, which we will comment out once we get to the single variable reconstructions, but for now I want to present you the whole Lorenz attractor based on eigenvalues. The fitting of the attractor, or the calculation of the spectral embedding, is basically just a two-liner, which you can see here: we input our number of components, select our affinity operator, select the number of neighbors and an additional parameter, and say: please fit the transformation to our selected data set, which in the first showcase is the complete Lorenz data we simulated above. To conclude, we just plot the data and take a look at it, which we will do now. As you can see, the reconstructed attractor still looks like the original one, but it looks more sorted in my opinion. What you can see here is the result of the spectral embedding of the whole Lorenz system, where you can see in this part that all those trajectories really do line up perfectly, don't intersect and never repeat. I have searched the whole web, and I guess I'm the first one to show the Lorenz attractor based on eigenmaps. Finally, I will show you the reconstruction based on each Lorenz variable on its own. To show you the reconstruction of the x, y and z coordinates on their own, we go back up in the code to the Takens embedding section, select the x values and change back the delay time for x and y, and since we are using them, we also have to change our number of neighbors, which we set to 60 this time. As you can see, we have now commented out the section where we selected all x, y and z values, and we will work with the delayed x values this time. The spectral embedding will not calculate the eigenvalues for x, y and z, but only for the delay-embedded axes this time.
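The first procedure, embedding the full simulated Lorenz data, can be sketched roughly as below, assuming scikit-learn's SpectralEmbedding as in the video. The random_state and the reduced point count (2,000 instead of 10,000, to keep the eigendecomposition quick) are my assumptions, not settings from the original script:

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding

# Simulate the Lorenz system with the same Euler scheme as before
def lorenz(x, y, z, s=10, r=28, b=2.667):
    return s * (y - x), r * x - y - x * z, x * y - b * z

dt, num_steps = 0.01, 2000
xyz = np.empty((num_steps + 1, 3))
xyz[0] = (0.0, 1.0, 1.05)
for i in range(num_steps):
    dx, dy, dz = lorenz(*xyz[i])
    xyz[i + 1] = xyz[i] + dt * np.array([dx, dy, dz])

# The two-liner from the video: build a nearest-neighbour affinity graph,
# then embed via the eigenvectors of its graph Laplacian.
# For real time series the video's heuristic for n_neighbors would be
# roughly 1% of the data length plus 1.5 times the delay time.
embedder = SpectralEmbedding(n_components=3, affinity="nearest_neighbors",
                             n_neighbors=20, random_state=42)
coords = embedder.fit_transform(xyz)
print(coords.shape)
```

Plotting `coords` in 3D, exactly as the original Lorenz data was plotted, then shows the eigenmap-based reconstruction. For the single-variable reconstructions, the delay-embedded matrix of one coordinate would be passed to `fit_transform` instead of `xyz`.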
I will execute the code now and we will have a look at the eigenvalue spectral embedding of x of the Lorenz attractor, and as you can see, that's quite an intriguing view; we will now zoom in and out to get a closer look. To continue, we will take a look at the y reconstruction via spectral embedding, and we can see that the shape of the reconstruction is quite similar to that of the x coordinate based on the eigenvalues. Like with the others, we will zoom in and out. For the z variable we change the number of neighbors and the delay time back to 20, due to the autocorrelation structure of the variable, and we will take a look at the attractor reconstruction of the eigenvalues for the z variable right now. As you can see, we have a little cone here, and it feels like the flat surface we have seen during the Takens approach has now been shifted outwards. We will zoom out and back into the reconstructed eigenvalue system, and then we'll get back to our presentation. As you have seen, the spectral embedding provides a more sophisticated and clearer depiction of the reconstruction of time series and strange attractors. Finally, I will state the references, which should serve you as further guidance; I have also copied them into the video description below. Thank you very much, please subscribe to our channel and let me know which content you are most interested in. Leave a like and find some related videos in the captions.