So, welcome everybody. I'm Markus Vogel, and today I'm talking about the Hurst exponent dynamics of S&P 500 returns. I'll present implications for market efficiency, long memory, multifractality, and financial crisis predictability through the application of a fully generalized non-linear dynamics analysis framework. My presentation is structured as follows: first, I will present the initial situation and the research idea, because this is part of a larger research study comprising several papers. Second, I will present the chaos analysis framework. Third, the empirical results. And lastly, I will conclude with implications and future prospects.

To present the initial situation and research idea: this paper is based on a vast body of literature drawn from several research streams such as non-linear dynamics, chaos theory, quantitative finance and so on, and we sampled around 200,000 papers quantitatively and mathematically in order to extract the state of the art of the current literature. First of all, stylized facts, that is, empirical characteristics of the data, are dominant on financial markets. There is a whole collection of more than 20 such facts in the literature, for example non-linearity, asymmetries, momentum, or fractality. The big conclusion is that there is no single best forecasting model in quantitative finance and quantitative modeling of markets. Yet we observed that non-linear models tend to perform slightly better.

Second, looking at the non-linear dynamics literature, we found that there is a 40-year-old debate on whether the data generating process of a time series is due to stochastic or chaotic dynamics, and we developed a full framework for that too. I'll keep it short because I presented those results at last year's conference, but the quintessence is that we use cascaded wavelet-filtered, i.e. denoised, daily S&P 500 log returns, which have to be denoised because noise can destroy any non-linear analysis. We found that the data is a mixture of evolutionary deterministic chaotic and stochastic dynamics, and that chaoticity diminishes forecastability due to exponentially growing error terms.

The third topic is the Hurst exponent. The Hurst exponent can determine whether the underlying data follows a stochastic process, which is the case if the exponent is equal to 0.5; whether it shows persistency in the classical sense, which is the case if it exceeds 0.5; or whether the data is represented by a mean-reverting process, which is the case if it lies below 0.5. In the standard literature, generally only single-valued representations of the Hurst exponent are given, meaning that only one exponent is calculated for the whole dataset. This is the basic situation.

So what is the research idea? Across the sampled literature, researchers apply rolling windows to Hurst exponents, meaning they shift a rolling window through the data and calculate a Hurst exponent for each window. Yet the literature does not analyse the characteristics of the resulting Hurst exponent series; it merely states the rolling window results. Second, our cooperation partner found via empirical experiments that Mandelbrot's conception of the Hurst exponent, namely that it reveals long memory if it exceeds 0.5, is incorrect. We showed that if you remove fractal trends from financial market data, the Hurst exponent runs to 0.5. That shows the existence of fractal trending characteristics instead of long memory, which is assumed throughout the majority of the academic literature.
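To make the 0.5 threshold concrete, here is a minimal sketch of a classical rescaled range (R/S) Hurst estimator in Python. This is the textbook estimator, not necessarily the exact estimator used in the study; the function name, chunk sizes, and parameters are illustrative choices.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent of a 1-D series via rescaled range (R/S) analysis.

    H ~ 0.5 : uncorrelated stochastic process (martingale-like increments)
    H > 0.5 : persistent / trending behaviour
    H < 0.5 : anti-persistent / mean-reverting behaviour
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Dyadic chunk sizes between min_chunk and n // 2.
    sizes = [s for s in (2 ** k for k in range(3, int(np.log2(n))))
             if min_chunk <= s <= n // 2]
    log_s, log_rs = [], []
    for s in sizes:
        rs_vals = []
        for start in range(0, n - s + 1, s):
            chunk = x[start:start + s]
            dev = np.cumsum(chunk - chunk.mean())  # cumulative deviations from the mean
            r = dev.max() - dev.min()              # range of the deviations
            sd = chunk.std(ddof=1)                 # scale
            if sd > 0:
                rs_vals.append(r / sd)
        if rs_vals:
            log_s.append(np.log(s))
            log_rs.append(np.log(np.mean(rs_vals)))
    # The slope of log(R/S) versus log(s) is the Hurst exponent.
    return np.polyfit(log_s, log_rs, 1)[0]

# Sanity check: Gaussian white noise should yield H close to 0.5.
rng = np.random.default_rng(0)
print(hurst_rs(rng.standard_normal(5000)))
```

On white noise the estimate hovers around 0.5; persistent (trending) series push it above 0.5, mean-reverting ones below.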
In various further studies, we showed that those trending characteristics cause the momentum effect, which enables outperformance, alpha generation, and other properties, and which is a clear contradiction of the efficient market hypothesis. Thus, Hurst exponents can detect fractal trends, which are inherent in chaotic dynamics. So the research idea for this study is to calculate and analyse rolling window Hurst exponent series with a generalized chaos analysis framework.

This is the framework, and I will be very brief about it due to time constraints. This part of the framework, which is tailored for non-stationary data, has four steps. First, we have to fulfil several prerequisites and standard tests in order to ensure validity for the application and implementation of the chaos analysis algorithms. We first reduce noise; we test for stationarity in order to determine which parts of the framework are applicable; we test the distribution, i.e. for Gaussianity; and we test for non-linearity, correlation structures, and entropy, and, depending on the data, we apply significance tests. In the second part, we implement a recurrence quantification analysis, which describes the recurrence characteristics of the data and graphically and quantitatively determines the properties of the empirical data generating process underlying the data. Third, we apply a multi-resolution analysis, which reveals potentially exploitable frequency information for real-world trading systems. Lastly, we analyse power laws and other distributional characteristics by implementing multifractal detrended fluctuation analysis and elaborate on scaling laws and power laws.

So how do we calculate those rolling windows? As you can see, you take the original data series and shift a rolling window through the data, and for each rolling window a Hurst exponent is calculated. We selected a step size of one in order to obtain as much data as possible. We also showed that there is no difference in the results between mutually exclusive windows and overlapping windows with step size one, and we took the minimum and maximum window sizes selected in the academic literature, so we chose window sizes of 100, 1,000 and 2,500 days. In addition, due to the lack of distributional theory, we implemented a bootstrapping algorithm with 50,000 iterations each in order to determine whether the Hurst exponents are significantly different from 0.5, which they all are.
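A minimal sketch of the rolling-window series and a significance check follows, reusing hurst_rs() from the earlier sketch. The talk does not specify the exact bootstrap design; the shuffle-based null distribution below is a common variant and is my assumption, as is the hypothetical input name denoised_log_returns.

```python
import numpy as np
# hurst_rs() as defined in the previous sketch.

def rolling_hurst(returns, window, step=1):
    """Shift a rolling window through the series; estimate H for each window."""
    return np.array([hurst_rs(returns[i:i + window])
                     for i in range(0, len(returns) - window + 1, step)])

def bootstrap_pvalue(window_returns, n_boot=50_000, rng=None):
    """Shuffle-based bootstrap (assumed variant): permuting the returns destroys
    temporal structure, giving a null distribution of H around 0.5; the two-sided
    p-value follows from the observed estimate's position in that distribution."""
    if rng is None:
        rng = np.random.default_rng(0)
    h_obs = hurst_rs(window_returns)
    h_null = np.array([hurst_rs(rng.permutation(window_returns))
                       for _ in range(n_boot)])
    p = np.mean(np.abs(h_null - 0.5) >= abs(h_obs - 0.5))
    return h_obs, p

# Hypothetical usage with the window sizes from the talk (100, 1,000, 2,500 days):
# h100 = rolling_hurst(denoised_log_returns, window=100, step=1)
```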
So I come to my empirical results. At the bottom left you see our S&P 500 return data and, in orange, a denoised version of it. Then you have the three rolling window results: H100, H1000, and H2500. Note that H100 represents the short-term development and H2500 the long-term development. The red line indicates a Hurst exponent equal to 0.5, which would mean that the data follows a stochastic process reflecting martingale properties and the validity of the efficient market hypothesis. So the first thing to state is that the efficient market hypothesis is invalid. The second thing one can observe, looking at the H1000 at the bottom left, is that there are phases during market evolution where very large, strong fractal trends are present in the data and then suddenly drop into mean reversion, which is an explicative rationale for momentum crashes, i.e. the sudden vanishing of the trends being followed.

Due to time constraints, I will present the collection of empirical results first and then elaborate on the details. First of all, the empirical data generating process of the Hurst exponent series is determinable via the recurrence quantification analysis measures, compared against surrogate data sets and against mathematical chaotic systems, combined with the other stated characteristics. In total, we find that all three series are some kind of AR processes, yet with several properties stacked on top of that: the distribution is non-Gaussian; we have no hidden sub-dynamics, on which I will elaborate later; we have partial frequency information which can be exploited; we have high autocorrelation functions; and we find multifractal spectra, power laws, and fat tails.

First, the autocorrelation functions. If you look at the H100 at the bottom right, it shows a short memory effect, meaning exponentially decaying autocorrelations, while the other two series show true long memory effects with significant, slowly decaying autocorrelations which exist in parallel to the fractal trends. Next, I have summarized parts of the multi-resolution analysis and the recurrence quantification analysis. We applied a discrete wavelet transformation based on a Daubechies 9 wavelet to decompose our data with low-pass and high-pass filters, and for each of the decomposed series we calculated the recurrence plots. What you can see for the detail coefficients is that only white noise remains, so there is only residual noise and no hidden sub-dynamics visible for any of the Hurst exponent data. Once you quantify those recurrence plots, as you can see them here, you can mathematically calculate the properties of the empirical data generating process. Next, we employed a continuous wavelet transformation based on a Shannon wavelet over 1,024 scales in order to elaborate on existing frequency components which could be exploited in real-world trading systems; with the exception of the short-term Hurst exponent series, all other window sizes carry frequency information which can potentially be exploited.

Finally, we calculated the multifractal detrended fluctuation analysis spectrum, where the dotted black lines indicate the local minimum and maximum Hurst exponents, which is coherent with the novel definition that Hurst exponents exceeding 0.5 indicate fractal trending instead of long memory. At the bottom you see the power law distribution tests: the straight line is a theoretical power law, while the dark red line is the complementary cumulative distribution function, which shows the probability properties of the data, and we can clearly see that we have a power law with large fat tails. This is also the case for the short-term data. So the Hurst exponent series show low and narrow multifractal spectra and fat-tailed power law characteristics, and it is important to note that there are two possible rationales for multifractality: it can be caused either by fat-tailed probability distributions, as shown here, or by non-linear temporal correlations.
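Since the MF-DFA spectrum carries much of the argument, here is a compact sketch of the standard MF-DFA procedure: build the profile, detrend it segment-wise with local polynomials, and read the generalized Hurst exponents h(q) off the log-log slopes of the fluctuation function. It is a generic implementation, not the study's exact code; the input name h1000_series and the scale and q grids are illustrative.

```python
import numpy as np

def mfdfa(x, scales, qs, order=1):
    """Multifractal detrended fluctuation analysis (MF-DFA).

    Returns the generalized Hurst exponents h(q), i.e. the slopes of
    log F_q(s) versus log s. A q-dependent h(q) signals multifractality;
    the spread max h(q) - min h(q) is the width of the spectrum.
    """
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                      # the profile
    n = len(y)
    Fq = np.zeros((len(qs), len(scales)))
    for j, s in enumerate(scales):
        n_seg = n // s
        f2 = []
        # Segments taken from both ends of the series, as is customary.
        for segs in (y[:n_seg * s].reshape(n_seg, s),
                     y[n - n_seg * s:].reshape(n_seg, s)):
            t = np.arange(s)
            for seg in segs:
                coef = np.polyfit(t, seg, order)      # local polynomial trend
                f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        f2 = np.array(f2)
        for i, q in enumerate(qs):
            if q == 0:
                Fq[i, j] = np.exp(0.5 * np.mean(np.log(f2)))  # limit case q -> 0
            else:
                Fq[i, j] = np.mean(f2 ** (q / 2)) ** (1 / q)
    log_s = np.log(scales)
    return np.array([np.polyfit(log_s, np.log(Fq[i]), 1)[0]
                     for i in range(len(qs))])

# Hypothetical usage on one of the Hurst exponent series:
# h_q = mfdfa(h1000_series, scales=np.arange(16, 512, 16), qs=np.linspace(-5, 5, 21))
```

For a monofractal series h(q) is flat; the narrow but non-degenerate spread reported in the talk corresponds to a low and narrow multifractal spectrum.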
So how can we combine those Hurst exponents and the S&P 500 data series in order to elaborate on crisis periods? This is novel; I did not find this approach in the literature. What we did is shift the rolling window through our original, denoised S&P 500 return series, and for each rolling window we calculated a continuous wavelet transformation spectrum, and for each wavelet spectrum we calculated the mean vector of the coefficient matrix. So we took each coefficient matrix out of the CWT and calculated, for each scale, the mean value in order to compress the whole matrix into a mean vector, and then we lined up the rolling window mean vectors. This yields a heat map which shows the average frequency behaviour per scale of the S&P 500 return data series, and this can be cross-referenced with the behaviour of the Hurst exponents. I will stick to the 100-day window here. You can see, in particular in crisis periods, especially the subprime crisis, that on average the frequency information becomes lastingly positive and significant before dropping back into the negative, which is coherent with the momentum crashes I showed you earlier. If you broaden the window sizes, as stated here, wider windows lead to smearing effects and loss of resolution, yet you can still see the interconnection on the frequency level. So the shifts of the Hurst exponent series from strong momentum to mean reversion are visible in the mean scale averages on the frequency level of the original data series, and can thus serve as an ex-ante indicator for financial crises.
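A sketch of this rolling-CWT heat map construction follows, assuming PyWavelets. The talk used a Shannon wavelet; PyWavelets' complex Shannon wavelet needs explicit bandwidth-center parameters, so 'shan1.5-1.0' and taking the real part are my assumptions, as are the input name denoised_log_returns and the reduced scale range (the talk's 1,024 scales applied to the standalone CWT of the full series).

```python
import numpy as np
import pywt  # PyWavelets

def rolling_cwt_heatmap(returns, window, scales, wavelet='shan1.5-1.0', step=1):
    """For each rolling window, compute a CWT coefficient matrix (scales x time),
    compress it into a per-scale mean vector, and line the vectors up into a
    heat map of average frequency behaviour per scale over time."""
    cols = []
    for i in range(0, len(returns) - window + 1, step):
        coef, _ = pywt.cwt(returns[i:i + window], scales, wavelet)
        # Averaging over the time axis compresses the matrix into one vector.
        cols.append(coef.real.mean(axis=1))
    return np.column_stack(cols)   # shape: (len(scales), number of windows)

# Hypothetical usage; 'denoised_log_returns' stands in for the filtered series.
# heatmap = rolling_cwt_heatmap(denoised_log_returns, window=100,
#                               scales=np.arange(1, 129))
# import matplotlib.pyplot as plt
# plt.imshow(heatmap, aspect='auto', origin='lower'); plt.show()
```

Sign flips in a scale's row of this heat map are what the talk cross-references with the Hurst exponent's shifts from momentum to mean reversion.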
Finally, I tried to conduct several forecasts based on fractional Brownian motion, several standard stochastic and autoregressive models, an LSTM and a deep-learning MLP network, as well as the same MLP and LSTM combination in a wavelet neural network; wavelet neural networks replace the activation function with a mother wavelet function. I tried all those methods to estimate the Hurst exponent dynamics and to use those estimates as input for a multifractional Brownian motion in order to describe the chaotic data series. The forecasting results are pretty poor. The only acceptable result is the fractional Brownian motion for the short-term Hurst exponent series, which, resembling a moving average, achieved acceptable error metrics; the other approaches failed miserably. This could be due to the errors incurred in the first step, stacked on top of the exponential error growth of the chaotic system itself. In terms of neural networks, especially wavelet neural networks, the configurations with LSTMs and MLPs tend to handle sharp, frequently switching data poorly, such as financial return data, and standard stochastic processes are simply not capable of reproducing the advanced and more complex nature of the stochastic empirical data generating process.

Before I conclude my presentation, I want to elaborate on how chaos, Hurst exponents, and momentum fit together. In non-linear dynamics, if you have a phase space in which your data lives and the dynamics are chaotic, the system is dissipative, so the phase space contracts onto its strange attractor. The strange attractor can be intersected via Poincaré sections, and the Poincaré section of a strange attractor is a fractal set. For a fractal set you need scaling laws, which means you have a power law; you can extract the growth or scaling exponents using the multifractal detrended fluctuation analysis, and those fractal trends can be measured by the Hurst exponent itself.

This leads me to my conclusion. The Hurst exponent dynamics are non-chaotic, even though they stem from a chaotic system, and they are capable of evaluating the efficient market hypothesis directly. They show the existence of fractal trends, which are responsible for the momentum effect and are an explicative rationale for momentum crashes and financial crises. The next point I want to make is that forecasts in chaotic systems are largely futile; yet, through the application of Hurst exponents, the future avenue of research should be the ex-ante predictability of financial crises based upon their evolutionary, dynamical empirical data generating process. And finally, to conclude my presentation regarding market efficiency: financial markets are complex systems, inherently chaotic, and they reveal multifractal power law characteristics as well as shifts in the nature of the empirical data generating process, shifting from fractal trending to mean reversion during crisis periods. Thank you very much.

So we have two minutes left. Are there any questions?

[Audience] My question is about the statement on market efficiency. I'm not 100% sure that you can conclude that from your results. I would at least ask you to give more context to that definition, because people have studied that statement a lot, and the conclusion does not necessarily follow.

I will explain; we have some time left, so I will give a proper explanation. If the efficient market hypothesis is valid, even in a time-varying efficiency manner, the market data has to be a martingale, i.e. a stochastic process like a random walk or a Brownian motion process, and be normally distributed. With the Hurst exponent you can directly see whether the data is a martingale or not. So if the efficient market hypothesis is valid, all Hurst exponents should hover around the 0.5 level, or at least be broadly 0.5 in nature. What you can see here in those rolling windows is that the exponents vastly exceed 0.5, and they are all significantly different from 0.5; I didn't include the picture because it just shows the same plots shifted upwards, but these were all tested with the bootstrapping algorithm and are significantly different from 0.5. What you can see, especially in the short-term run, is strong fractal trending exceeding 0.5, which means you do not have a stochastic process, and if you do not have a stochastic process, all properties inherent in the efficient market hypothesis are violated. And this does not even include all the behavioural finance effects like herding, panics, or the heterogeneity of investors, which means that you have people doing different things on different time scales; during crisis periods they start doing different things at different time scales, potentially leading to chaotic dynamics. We guess it can be the case that investment styles themselves are a cause of these dynamics, but we are not at that point yet. And the time is up, actually overrun by 45 seconds, so if you have any questions we can discuss them once we're done. Thank you all for participating.