Good evening everyone. I'm really honored to be invited as a guest speaker here at the AI Lab of the Frankfurt School of Finance and Management. My name is Markus Vogel, founder of Markus Vogel Business and Data Science. And tonight I will ask you the following question: wouldn't it be great to be able to forecast stocks more accurately using new technologies and breathtaking physical concepts? It sure would. This is why tonight I will talk to you about financial time series analysis using wavelets and neural networks. This evening's topic is the thesis of one of my master's students, who is also part of the audience tonight. As a doctoral candidate, I am very proud to be responsible for him and his topic. To get started, I will walk you through tonight's agenda. We'll start with a very brief introduction to financial time series and then switch to signal analysis and wavelets. Afterwards, we will look at wavelets in combination with neural networks and at neural network topologies. To conclude, I will present our current research questions.

First, we take a look at the graphical representation of a financial time series. Here, on the left-hand side, we see the daily price series of Apple Inc., denominated in US dollars and ranging from the year 2000 until today. On the right-hand side, you see the corresponding logarithmic returns of the Apple series. One can clearly see that this data shows clusters of volatility over time. If we look at the years 2001 and 2008, we can identify the dot-com and great financial crises, which are indicated by huge negative returns.

This leads us straight to the question: what properties do financial time series have at all, and what do they mean? Since tonight's time is limited, I will not walk you through the classical properties of financial time series, but head straight into the latest research results and confirmed properties. First, financial time series are not stationary, they are nonlinear, and they are not independent. This means that whatever happened yesterday has a clear, nonlinear and unstable impact on today's development. Further, it has been shown that financial market data displays asymmetric characteristics. Moreover, one of our cooperation partners, among other researchers, has shown experimentally that financial time series follow fractal patterns and carry momentum effects. This means that trends exist and can be measured, and, speaking in terms of momentum, that winners are going to keep winning and losers are going to keep losing. We have shown in cooperative research that the lengths of these trends follow a lognormal distribution, which, together with the findings above, leads to only one conclusion: efficient markets do not exist. Furthermore, by applying Fourier transformations, one can find that returns carry frequency information which can be exploited.
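As a minimal sketch of the quantities shown on the slide, the log-return series can be computed from the daily closing prices as r_t = ln(P_t / P_{t-1}); the FFT at the end is only a rough look at the frequency content mentioned above. The file and column names are placeholders, not the actual data source used in the talk.

```python
import numpy as np
import pandas as pd

# Daily Apple closing prices; "aapl_close.csv" is a placeholder file name
# with a "Date" index and a "Close" column in US dollars.
prices = pd.read_csv("aapl_close.csv", index_col="Date", parse_dates=True)["Close"]

# Logarithmic returns: r_t = ln(P_t / P_{t-1})
log_returns = np.log(prices / prices.shift(1)).dropna()

# A quick look at the frequency content of the returns via a plain FFT.
# For non-stationary data this is only a rough indication, which is exactly
# why the talk moves on to wavelets.
spectrum = np.abs(np.fft.rfft(log_returns.to_numpy()))
print(log_returns.describe())
print("dominant frequency bin:", int(np.argmax(spectrum[1:])) + 1)
```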
This leads us to our next topic, namely signal analysis and wavelets. Before starting with wavelets, it is worth mentioning that we cannot use classical Fourier transformations due to the non-periodicity, or non-stationarity, of financial time series. Even the equally sized windows of a short-time Fourier transformation, an alternative to wavelets, are not able to extract the frequency information in a useful manner. Therefore, we now take a look at wavelets. The word wavelet stems from the French word ondelette and means little wave. One can distinguish discrete and continuous wavelet transformations. We will only present continuous wavelet transformations in this talk, since we only analyze the returns of our stock.

So, how are wavelets used anyway? To use wavelets, one simply takes a given wavelet and shifts it through the signal. Doing so, we calculate coefficients, which say how close the wavelet and the signal are at a given point in time. Therefore, we obtain time information, since our wavelet is finite. This part is called translation. After we have translated the wavelet through the signal, we take it and stretch it. To be more precise, we dilate or compress the wavelet and repeat the translation again. This whole process is called scaling, since we change the scale of the wavelet. It is important to note that the shape of the wavelet always stays the same.

Why are we doing this anyway? We try to combine two worlds here. First, we want to retain the time information our signal provides us with. Second, we want to extract the frequency information. Within the restrictions of the Heisenberg uncertainty principle, we are able to do just that. In comparison to the constant windows of the short-time Fourier transformation, we change the window sizes while applying wavelets. We also introduce another notion tonight, namely that of scales. It goes as follows: high frequencies correspond to rapid changes and therefore to good time but poor frequency resolution, which is expressed by low scales. Low frequencies, on the other hand, are longer and therefore yield good frequency but poor time resolution, which is expressed by high scales. Losing frequency resolution at the lower scales does not matter, since we obtain it at the higher scales.

Enough theoretical gibberish, let us get straight to the point. Here you see a continuous wavelet decomposition matrix depicted as a heat map using the Python matplotlib package. This matrix holds the coefficients of the translations and scalings, which represent the resemblance between a wavelet at a given scale and the signal at a given point in time. This means we have 256 scales following a power law and almost 5,000 days, corresponding to the almost 20-year period of data we chose. Since we are not using discrete wavelets and since prices do not contain frequency information, we do not see much in this picture. This was for demonstration purposes, so we will use our Apple returns instead. Et voilà! Here we see the resulting continuous wavelet decomposition matrix when applied to our Apple return series. In stark contrast to our price series decomposition, we see a lot of frequency information. Again, we can see the dot-com and great financial crises in this picture, indicated by significant coefficients displayed in white. But why all the hassle if we could see that on the return series itself? First, we see this information at different scales. Second, we can determine that the coefficients turn significant just slightly before the outbreak of the crises. For those who now complain about pattern seeking, remember that this picture is a representation of the coefficients which result from our mathematical decomposition.
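A minimal sketch of how such a decomposition heat map might be produced. The talk only names matplotlib, so the PyWavelets package and the choice of the Morlet mother wavelet are assumptions made here for illustration; the input file name is a placeholder for the Apple log-return series.

```python
import numpy as np
import pywt                      # PyWavelets; an assumption, not named in the talk
import matplotlib.pyplot as plt

# 1-D array of the roughly 5,000 daily Apple log returns (placeholder file name).
log_returns = np.loadtxt("aapl_log_returns.txt")

# 256 scales spaced as a power law, as mentioned in the talk.
scales = np.geomspace(1, 256, num=256)

# Continuous wavelet transform of the return series.
coeffs, freqs = pywt.cwt(log_returns, scales, "morl")

# Heat map of the coefficient matrix: x-axis = time (days), y-axis = scale.
plt.imshow(np.abs(coeffs), aspect="auto", cmap="gray", origin="lower")
plt.xlabel("Time (trading days)")
plt.ylabel("Scale")
plt.title("Continuous wavelet decomposition of Apple log returns")
plt.show()
```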
So, what we want to find out is whether we are able to forecast financial crises, or financial time series in general, using the concept of wavelets. Since we do not aim to analyze only one time series at a time, we seek to utilize a machine learning solution. What we want to do is use financial big data and a scenario-based analysis using the wavelets to determine whether it is possible to, a) forecast a time series using this concept at all, and b) produce high-quality forecasts, at best outperforming existing methods.

Since we cannot calculate all of it manually, we try to use neural networks in combination with wavelets. There are two feasible courses of action. First, we take a wavelet transformation as shown and use it as input data for a standard neural network. Second, we take the raw signal and train it with a network which is able to do the decomposition for us. This approach is then called a wavelet neural network, as depicted on the left-hand side. What else can you use a wavelet neural network for? You are able to reconstruct missing data or, closer to our topic, calculate enhanced predictions. You can forecast chaotic time series, which is part of my PhD thesis, and you are able to reduce nonlinear noise within your data. As I noted before, there are two courses of action: take a standard network and train it with a decomposition, or use raw data and feed it to a wavelet neural network. We want to teach a network to recognize patterns and produce forecasts by utilizing financial big data. So, here's the catch. Neural networks? No problem, just pick one. Welcome to the zoo of possibilities. As you can see, there are a lot of topologies able to do different tasks.

This leads us to the following research questions. In the first block, the main questions are: what is the difference between feeding a neural network the raw time series or feeding it a decomposition? Is there a difference? What effects may occur? Second, if we intend to use wavelets for a wavelet neural network, we are back at the zoo again, trying to determine: does it matter which wavelet we use? And if so, which is suited best for the task? This is not the end; we have many more unknowns to go for. In the second block of questions, we try to answer: how does the performance of, let's say, an RBF network change if we change the activation function to a mother wavelet function? How does this performance rank in comparison with other topologies? And speaking of input data again, does the performance change, and if so how, if we use different timeframes?

That shall be the last question for tonight. If you already have answers to our questions, or want to share some insights, sources or other information, feel free to contact me later at the event anytime. And to conclude with a tiny pitch: we are for hire, so feel free to check out our web page. Thank you very much.
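A minimal sketch of the second course of action discussed above: a small wavelet network whose hidden units use a Mexican-hat mother wavelet as their activation, with translations and dilations held fixed and only the linear output weights fitted by least squares. The topology, the choice of mother wavelet, and all numbers (lags, hidden units, the synthetic return data) are illustrative assumptions, not the setup of the thesis.

```python
import numpy as np

def mexican_hat(u):
    """Mexican-hat (Ricker) mother wavelet used as the activation function."""
    return (1.0 - u ** 2) * np.exp(-0.5 * u ** 2)

# Toy one-step-ahead setup: predict r_t from the previous `lags` returns.
rng = np.random.default_rng(0)
returns = rng.standard_normal(1000) * 0.01          # stand-in for real log returns
lags = 5
X = np.stack([returns[i:i + lags] for i in range(len(returns) - lags)])
y = returns[lags:]

# Wavelet layer: each hidden unit ("wavelon") has its own translation and dilation.
n_hidden = 20
translations = rng.standard_normal((n_hidden, lags)) * 0.01
dilations = np.full((n_hidden, lags), 0.01)

# Hidden activations: product of 1-D wavelets over the input dimensions.
H = np.prod(mexican_hat((X[:, None, :] - translations) / dilations), axis=2)

# Linear output weights fitted by least squares (translations/dilations stay fixed).
w, *_ = np.linalg.lstsq(H, y, rcond=None)
pred = H @ w
print("in-sample MSE:", np.mean((pred - y) ** 2))
```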