Hello everybody. First of all, thank you for the invite, thank you for having me here, and thank you for this beautiful gathering. I'm Mohamed Ndaoud, a PhD student at CREST under the supervision of Alexandre Tsybakov, and today I'm going to talk about some work done a while ago, during an internship at Bloomberg, on the construction of fractional Brownian motion.

The plan is as follows. First I will give some motivation for what I'm going to present. Then I will cover the basics of series expansions for continuous stochastic processes. After that I will show you exactly what I did to construct such an expansion of fractional Brownian motion, and at the end I will mention some further research directions.

So first of all, what is fractional Brownian motion (fBm)? I guess everyone in this room knows Brownian motion; think of fBm as a generalization of it with one new degree of freedom, H, the Hurst index. When H is exactly one half, it is just Brownian motion, the blue path on the slide. When H is bigger than one half, the increments become positively correlated and you get something smoother, and when H is smaller than one half, like the red path, the increments are negatively correlated and you get something very rough. Those are the three examples of fBm on the slide. It is, after all, a centered Gaussian process, with covariance K(s, t) = (t^{2H} + s^{2H} - |t - s|^{2H}) / 2. For the purists, it is a fractal as well: it has stationary increments and it satisfies self-similarity.

Among the applications of fBm I will only mention the financial one, since I did this work while I was at a finance company. For those familiar with stochastic volatility models: we usually model the log-prices as semimartingales, and the volatility as a semimartingale as well. It turns out, in recent work of Gatheral, Rosenbaum, and coauthors, that the log of the variance is somehow very rough, rougher than a semimartingale or a Brownian motion. On the slide I show simulated data, a fractional Brownian motion with a very small Hurst index, H = 0.15, and it looks similar to the real data. This is one of the motivations for trying to simulate fractional Brownian motion.

Now, the problem in practice is simulation: we do not have very efficient tools to simulate fBm. Think of it this way: you want to simulate a Gaussian process whose increments are not independent. The first thing you try is the Cholesky decomposition: take the covariance matrix, compute a square root of it, and multiply that square root by a vector of independent Gaussian variables. The problem is that computing a square root of the matrix costs O(n^3), and that is quite a lot to pay.
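To make the cost concrete, here is a minimal NumPy sketch of that Cholesky approach (my own illustration, not from the slides), with the fBm covariance written out from the formula above:

```python
import numpy as np

def fbm_cholesky(n, H, T=1.0, rng=None):
    """Exact simulation of fBm on a grid via Cholesky; O(n^3), as noted in the talk."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.linspace(T / n, T, n)                # skip t = 0, where B_0 = 0
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov)                 # lower-triangular square root
    return t, L @ rng.standard_normal(n)        # path with the fBm covariance

t, path = fbm_cholesky(500, H=0.15)             # rough, like the volatility example
```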
Another method is what we call the approximate circulant method. The idea is the following: as I said, the increments are stationary, so if you take the covariance matrix of the increments, all its diagonals have constant entries. The trick is then to modify the lower triangular part in order to make this matrix circulant, which is a matrix whose rows are shifts of one another. For a circulant matrix we know the eigenvectors exactly, they are the discrete Fourier modes, so the diagonalization is easy in this case. The problem is that it is only an approximation, not an exact method.

The last idea we can use is series expansions, and the good thing about series expansions is that they are continuous in time: we do not have to discretize, to take points on some grid.

So now let me talk a little bit about series expansions for continuous stochastic processes. Throughout, think of K as a symmetric kernel. The associated operator T_K is famous in the literature: it is the linear operator that takes a square-integrable function f to the function x -> ∫ K(x, y) f(y) dy, the scalar product of the kernel with f. The two conditions we always assume are, first, Mercer's condition, which in the finite-dimensional picture just says that the matrix of the kernel is symmetric positive semi-definite, and second, that the trace is finite. These two conditions are usually satisfied, and when they are, you can prove something that looks like the spectral theorem: there exists a sequence of eigenfunctions and eigenvalues such that the kernel is diagonalized in this basis, K(x, y) = Σ_k λ_k φ_k(x) φ_k(y). This is exactly the spectral theorem for positive semi-definite matrices, generalized to compact operators, and thanks to the trace condition the series converges absolutely and uniformly; we have all the convergence results we need. One application, for people familiar with machine learning, is the kernel trick: the formula lets you write a kernel as a scalar product in some inner-product space, so you can treat any kernel as if you were working in a Euclidean space.

Why do I show you Mercer's theorem? Because there is a very beautiful theorem, the Karhunen-Loeve (KL) decomposition, which says, roughly: take a centered Gaussian process; it depends on two things, on time, because it is time-dependent, and on ω, because it is random. The beauty of the KL formula is that it completely separates the randomness from the time dependence: you write the process as a sum of products of random variables with deterministic time-dependent functions, X_t = Σ_k √λ_k z_k φ_k(t). In the specific case of a Gaussian process, we know that the z_k are exactly independent Gaussian variables. The general version of the theorem says that even if the process is not Gaussian, the decomposition still holds whenever the covariance function is continuous; but then we do not know the distribution of the z_k, and they are only uncorrelated, not independent.
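To make the KL decomposition concrete, here is the one classical case where everything is explicit, standard Brownian motion on [0, 1] (a textbook example, not from the slides; as we will see below, no such closed form is known for fBm):

```python
import numpy as np

def bm_karhunen_loeve(t, n_harmonics, rng=None):
    """KL expansion of standard Brownian motion on [0, 1]:
    B_t = sum_k sqrt(lambda_k) Z_k phi_k(t), with iid standard Gaussian Z_k,
    lambda_k = 1 / ((k - 1/2)^2 pi^2) and phi_k(t) = sqrt(2) sin((k - 1/2) pi t).
    """
    rng = np.random.default_rng() if rng is None else rng
    k = np.arange(1, n_harmonics + 1)
    lam = 1.0 / ((k - 0.5) ** 2 * np.pi ** 2)
    z = rng.standard_normal(n_harmonics)
    phi = np.sqrt(2.0) * np.sin(np.outer(t, (k - 0.5) * np.pi))
    return phi @ (np.sqrt(lam) * z)

t = np.linspace(0.0, 1.0, 500)    # any points work: the expansion is continuous in time
path = bm_karhunen_loeve(t, n_harmonics=200)
```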
Let me give you an idea of the proof, because it is the mechanism we will really be using. Mercer's theorem tells you that the kernel decomposes as above, and it also tells you that the eigenvalues are nonnegative, because the operator is positive semi-definite. So if you want to construct the process, the simplest thing to do is to take the square roots of the eigenvalues, which exist because the eigenvalues are nonnegative, multiply them by independent standard Gaussian variables z_k, and form the series Σ_k √λ_k z_k φ_k(t). If you trust me that this series converges, then you just need to compute the covariance of this Gaussian process: because the z_k are independent, the covariance is exactly Σ_k λ_k φ_k(s) φ_k(t), which is K(s, t) by the decomposition above. This is very useful for simulation: if you know the KL decomposition, you just pick independent Gaussian variables, multiply them by the KL basis, and you get something with exactly the covariance structure you are looking for.

Now, what is the problem? All of this is beautiful theory, but I think that for several years people have been trying to find the KL decomposition of fractional Brownian motion, and unfortunately no one has succeeded. We do not know the KL decomposition of fBm explicitly; we only know it for Brownian motion, and there it is useless, because Brownian motion can be simulated in linear time anyway. So what we try to do here is not to give a positive answer to that question, but to find another series, not the KL decomposition, whose terms have exactly the same rate of decay as the KL decomposition would have.

To give you some intuition about how I thought of this construction, remember the operator T_K that I showed you at the beginning. We need to diagonalize this operator. Now think of a stationary process: stationarity means that the kernel K(x, y) can be written as φ(|x - y|), a function of the difference in absolute value. Whenever you have this, your operator is just a convolution operator, and we all learned in basic harmonic analysis that the eigenvectors of a convolution operator form a Fourier basis. So there is something very deep connecting stationary processes and harmonic analysis: whenever you see stationarity, think of a Fourier basis, because the two are intimately linked.
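Here is a small numerical sanity check of that link (my own toy example, not from the talk): a matrix built from a stationary kernel φ(|x - y|) with wrap-around is circulant, and its eigenvalues are exactly the DFT of its first row, meaning the Fourier modes diagonalize it.

```python
import numpy as np

n = 256
idx = np.arange(n)
d = np.abs(np.subtract.outer(idx, idx))       # |i - j|
d = np.minimum(d, n - d)                      # wrap around: circulant structure
C = np.exp(-d / 10.0)                         # toy stationary kernel phi(|i - j|)

eig_via_fft = np.sort(np.fft.fft(C[0]).real)  # eigenvalues = DFT of the first row
eig_direct = np.sort(np.linalg.eigvalsh(C))   # brute-force diagonalization
print(np.allclose(eig_via_fft, eig_direct))   # True
```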
This is exactly what I did. I said: let us take the autocovariance function and compute its Fourier decomposition. Why do I write it this way? Because the coefficients c_k have an order of growth that lets us show that the series Σ_k |c_k| converges, so the Fourier series converges normally, hence uniformly. You can then set t = 0, where both sides vanish, which lets me replace the DC component c_0 by minus the sum of the other coefficients, c_0 = -Σ_{k>=1} c_k. I write it this way just because I like to, and the c_k are given by the formula on the slide.

The thing we should keep in mind is that we keep the same covariance structure between zero and one, but outside of [0, 1] it becomes a new covariance structure, because the Fourier series is periodic. So what we are doing is only valid between zero and one. Another thing to really keep in mind is that this modified structure is not necessarily a covariance structure anymore: nothing forces the new matrix to be positive semi-definite.

Now take this decomposition and substitute it into the covariance K(t, s). After some simple trigonometry, something very interesting happens: you can write the kernel K(t, s) as a sum of products of basis functions, with terms of the form (-c_k) f_k(t) f_k(s), which is very similar to what we had in Mercer's theorem. The problem: remember that in the KL construction we used the fact that the eigenvalues are nonnegative, so we could take square roots. Here there is no reason for the -c_k to be nonnegative. If they happen to be nonnegative, you just take the square roots, do the same thing as in the KL construction, and you get something with exactly the correct covariance structure.

So the question becomes: do you know functions whose Fourier coefficients are all positive, or all negative? This seemed very strange when I first thought of it: is it really possible for a function to have all its Fourier coefficients of the same sign? Being quite a novice, I simply decided to look at the spectrum numerically, and the surprise was that for H smaller than one half they all do have the same sign: the Fourier coefficients c_k are all negative, so the -c_k are all positive. So for H smaller than one half it is very simple, you just take the square roots. This was surprising; I think I was lucky when I first found it. Then I looked at H bigger than one half, and there the Fourier coefficients do not all have the same sign, so we will have to work a little more to make things work in that regime.

To get an idea of why it works for H smaller than one half: if you integrate -c_k by parts, you can write it, up to a positive factor, as the integral of a product of two functions, t^{2H - 1} and a sine. The integral of the product is the sum of small signed areas, alternating in sign, and as you can see on the slide, these areas are decreasing in amplitude, because t^{2H - 1} is decreasing for H < 1/2. So the sum of all of them is positive, and this is why these coefficients -c_k are always positive in this regime.
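Here is a hedged numerical version of that look at the spectrum. It is my reconstruction: I take the c_k to be the cosine coefficients of u^{2H} on [0, 1], which is consistent with the integration-by-parts argument above and with the "FFT of t^{2H}" recipe mentioned below.

```python
import numpy as np

# c_k = 2 * int_0^1 u^(2H) cos(k pi u) du, by midpoint quadrature.
n_quad = 2**13
u = (np.arange(n_quad) + 0.5) / n_quad      # midpoint nodes avoid the singularity at 0
k = np.arange(1, 101)
basis = np.cos(np.pi * np.outer(k, u))      # shape (num_harmonics, n_quad)
for H in (0.15, 0.3, 0.7):
    ck = 2.0 * np.mean(u**(2 * H) * basis, axis=1)
    print(H, "all c_k negative:", bool(np.all(ck < 0)))
# Per the talk: one common sign (negative) for H < 1/2, mixed signs for H > 1/2.
```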
When you really think about it, there is something deep about this singularity around zero: the singularity is what makes things work. Whenever the autocovariance has this singularity, things work, and even the asymptotic behavior of c_k at infinity depends on the singularity around zero, which is quite interesting. It is also why this does not work for H bigger than one half: in that regime there is no singularity, the derivative vanishes at zero instead.

So the result is the following: for H smaller than one half, X_t can be written as this series, and the series is exactly a fractional Brownian motion with the correct covariance structure. We can also prove, with some very classical tools, that this series has the optimal rate of decay, uniformly, which means you cannot find another series decomposition that converges faster than the one I present here. It is not the only such series, but it is rate-optimal. On the slides I show an example of a simulation of the variance, and here the covariance structure in three dimensions, just to give you an idea.

Let me compare with the existing series. The first one is by Dzhaparidze and van Zanten, and for their series you need to find the zeros of Bessel functions, and you also need values of the gamma function; there is another one where you need beta and gamma functions as well. The point is that the series here is very simple: take the FFT of t^{2H} in your favorite programming language, multiply by some cosines and sines, and you have a series that does exactly the same job.

For the case H bigger than one half, in a few words: remember that there are two regimes. For H smaller than one half we have the singularity of t^{2H} at zero; in the middle, at H equal to one half, the function is exactly t; for H bigger than one half it is t^{2H} with no singularity, up to t squared in the extreme case. The idea is to do a mirror symmetry in order to recover the singularity: instead of t^{2H}, consider its reflection, the value at the other corner minus t^{2H}. For H bigger than one half, this mirrored version of the autocovariance does have a singularity around zero, and then everything goes through in exactly the same way, and it is again optimal.

The machinery I showed you here is for fractional Brownian motion, but it can be generalized to a large class of Gaussian processes, and this is what I do in the paper: whenever there is stationarity, either of the process or of its increments, you can use this Fourier basis, and whenever the autocovariance has a singularity around zero, you get a series with optimal decay. I do this for the fractional Ornstein-Uhlenbeck process, the fractional Brownian bridge, and many other processes.

The takeaway message is that these series expansions are very interesting for simulation, because all you need to do is truncate the series, keeping n harmonics. So you do not discretize time; you discretize in the harmonics.
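Putting the pieces together, here is a sketch of the whole construction for H < 1/2, reconstructed from the talk rather than copied from the paper. With c_k the cosine coefficients of u^{2H} on [0, 1] and c_0 = -Σ_{k>=1} c_k, the trigonometric step above gives K(t, s) = Σ_k (-c_k / 2) [(1 - cos kπt)(1 - cos kπs) + sin kπt sin kπs], so the series below has the fBm covariance on [0, 1]. An FFT or DCT of u^{2H} would compute the same coefficients faster; plain quadrature keeps the sketch short.

```python
import numpy as np

def fbm_series(t, H, n_harmonics=500, n_quad=2**13, rng=None):
    """Series expansion of fBm on [0, 1] for H < 1/2 (reconstructed sketch).

    B_t = sum_k sqrt(-c_k / 2) * [(1 - cos(k pi t)) Z_k + sin(k pi t) Z'_k],
    with iid standard Gaussians Z_k, Z'_k and c_k the cosine coefficients of
    u^(2H) on [0, 1]; the c_k are all negative in this regime, so the square
    roots exist.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = (np.arange(n_quad) + 0.5) / n_quad     # midpoint quadrature nodes
    k = np.arange(1, n_harmonics + 1)
    ck = 2.0 * np.mean(u**(2 * H) * np.cos(np.pi * np.outer(k, u)), axis=1)
    amp = np.sqrt(-ck / 2.0)                   # requires c_k < 0, i.e. H < 1/2
    z, zp = rng.standard_normal((2, n_harmonics))
    kt = np.pi * np.outer(t, k)
    return (1.0 - np.cos(kt)) @ (amp * z) + np.sin(kt) @ (amp * zp)

t = np.linspace(0.0, 1.0, 400)
path = fbm_series(t, H=0.3)     # continuous in time: evaluate at any t in [0, 1]
```

As a sanity check, the empirical covariance over many independent paths should approach (t^{2H} + s^{2H} - |t - s|^{2H}) / 2, up to the truncation error from keeping only n_harmonics terms; that truncation error is exactly the roughness visible in the covariance plot discussed in the questions below.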
Instead of a time grid you have something that is continuous in time, and you just need to pick the right number of harmonics, which you can control exactly by the formula on the slide: from that formula you know exactly how many harmonics to keep in order to get a good procedure.

The last point is that this series could have some potential uses, for example for estimating the Hurst index or the drift of these fractional processes. It is not easy, actually, because the basis is not orthonormal, so you cannot just project and get estimates of the coefficients. But if somehow there were a way to estimate the c_k, then we could potentially get estimates of the Hurst index, the drift, and many other things.

This is pretty much all I wanted to tell you. If you have any questions I would be happy to answer; otherwise we can talk about it offline. Thank you.

[Moderator] Thank you for the talk. Is there any question?

[Audience] I think there is one picture of the analysis of error... no, there is an image, a plot that you passed very quickly. Yes, that one, the one in blue, and the next one. What does it mean?

[Speaker] This is just the covariance; well, it is meant to be the covariance, which is meant to be continuous, but because I chose a small number of harmonics it comes out a little rough. It is an approximation of the covariance, with maybe only 10 or 20 harmonics, but it should be the two-dimensional covariance function, the kernel.

[Audience] So we see that at the corner there is one peak?

[Speaker] Yes, it is a peak. It is very rough, because for H smaller than one half, as I said, things get very rough, and the number of harmonics here is not very large.

[Audience] Is there any wavelet-basis approach to fractional Brownian motion?

[Speaker] There is another one, I believe by Ayache and Taqqu, I don't remember their names exactly. They have a series expansion which is not very simple: you basically have two indices, so two different families of wavelets, and the series involves independent Gaussian variables, but it is a little involved. It is not explicit, so it is not very useful for simulation.

[Audience] On the blackboard you explained that when H is larger than one half you can transform the covariance function. How is the new covariance structure related to the original one?

[Speaker] The idea is that in the beginning I considered the Fourier decomposition of this function, and because of the singularity the Fourier coefficients were all positive or all negative. Here, instead of that function, the idea is to consider the Fourier decomposition of the mirrored function, because it does have the singularity. Of course, at the end I pay something in my series: I have to add a small extra term, something that looks like c_0 times t times Z. So the idea is to modify the function on which you do the Fourier decomposition in order to have the singularity, and then subtract something at the end. There is a small cost to pay, but it is nothing.

[Moderator] All right, thank you. Thank you.
So now, Maria...