I am Gabriela Ciołek and today I want to present my research, sharp Bernstein and Hoeffding type inequalities for regenerative Markov chains. This is joint work with my advisor, Patrice Bertail. The outline of my talk is as follows. Firstly, I will introduce and recall some basic definitions and properties of atomic Markov chains and Harris recurrent Markov chains. Then I will briefly describe the Nummelin splitting technique, which allows us to use regenerative properties in the more general Harris recurrent case. Then I will present my results concerning maximal inequalities under uniform entropy: a Bernstein type maximal inequality for regenerative Markov chains, and then the bounded case, in which I will present a Hoeffding type maximal inequality. After that I will show that it is easy to generalize those results to the Harris recurrent case, and I will briefly show a Bernstein type maximal inequality when the Markov chain is Harris recurrent. So having said that, let's get started. Firstly, I assume that X is a homogeneous Markov chain with transition probability Pi and initial probability nu. I also assume that X is phi-irreducible and aperiodic. In this framework we are interested in the atomic structure of the chain. We say that the chain X is regenerative, or atomic, when there exists a measurable set A such that all the transition probabilities from that set are the same. So whenever the chain hits this regeneration set, it forgets its past and regenerates itself. Once we are assured that our chain possesses an atomic structure, we can define the sequence of regeneration times tau_A(j). By tau_A I denote the first time the chain visits the atomic set A, and by tau_A(j) I denote the time of the j-th consecutive visit of the chain to the atomic set A.
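To make this concrete, here is a minimal sketch of the regeneration times and blocks on a hypothetical 3-state chain (my own illustrative example, not from the talk); for a discrete chain any single state is an atom, since every exit from it uses the same row of the transition matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state chain; the single state {0} plays the role of the atom A.
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.4, 0.2],
              [0.3, 0.3, 0.4]])

n = 10_000
X = np.empty(n, dtype=int)
X[0] = 1
for t in range(1, n):
    X[t] = rng.choice(3, p=P[X[t - 1]])

atom = 0
taus = np.flatnonzero(X == atom)        # tau_A(j): time of the j-th visit to A = {0}
# regeneration blocks: the data between consecutive visits to the atom
blocks = [X[taus[j] + 1: taus[j + 1] + 1] for j in range(len(taus) - 1)]
```

Each block ends exactly at a visit to the atom and contains no earlier visit, which is what makes the blocks IID by the strong Markov property.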
Once we have the sequence of regeneration times, we can cut our data into segments, namely the blocks corresponding to consecutive visits of the chain to the regeneration set A. By the strong Markov property, such blocks are independent and identically distributed. This regenerative structure of the chain allows us, in a natural way, to generalize the theory from the IID setting to the Markovian one, because instead of considering IID observations we are simply working with IID blocks. In this framework we are also interested in a more general class of Markov chains, Harris recurrent Markov chains. Harris recurrence is a communication property of Markov chains: when X is a phi-irreducible Markov chain, the chain visits any set of positive phi-measure an infinite number of times with probability 1. So this is a communication property of our chain; however, we do not know whether there exists some set from which all the transition probabilities are the same, so this property does not ensure that our chain is atomic. As you can suspect, we would like to use regeneration techniques in the general, Harris recurrent case as well. Since we do not know whether there is an atomic structure in the chain, in order to retrieve those properties we need to extend the probabilistic structure of the chain via the Nummelin splitting technique and construct an artificial atom. Via this technique we can then retrieve regeneration properties in the Harris recurrent case. Before I formulate the Nummelin splitting technique, I will introduce the notion of a small set, since the technique relies heavily on it. We say that S is a small set if it satisfies a minorization condition, which is just a uniform lower bound on the transition probabilities.
Having in mind the definition of a small set, we can now switch to the description of the Nummelin splitting technique. By Yn I denote a sequence of independent Bernoulli random variables with parameter delta. We want to construct the bivariate chain (Xn, Yn). The Nummelin splitting technique relies on a randomization of the transition probability Pi each time the chain visits the small set S from the definition I provided a few moments ago. If Xn is in S and Yn is equal to 1, which happens with probability delta, then Xn+1 is distributed according to the probability measure phi. If Xn is in S and Yn is equal to 0, which happens with probability 1 minus delta, then Xn+1 is distributed according to the residual probability measure. If we take a closer look at the transition probabilities of the bivariate chain, we see that whenever Xn is in S and Yn is equal to 1, all the transition probabilities are the same, because Xn+1 is distributed according to the probability measure phi. If all the transition probabilities from such a set are the same, that meets the definition of an atom I introduced; that is why by S-hat I denote the atomic set for the bivariate chain. So now, instead of the chain X, we work with the bivariate chain. What is important, and it is shown and very well explained, for instance, in the book of Meyn and Tweedie, is that the bivariate chain inherits all the communication properties of the chain X: phi-irreducibility, aperiodicity, and so on. So whatever we establish for the bivariate chain also holds for the Harris recurrent chain X. The Nummelin splitting technique is a rather theoretical construction, so maybe it is better to take a look at an illustration. Here we observe the trajectory of an autoregressive model of order 1, and the gray area is the small set we have chosen.
Whenever we are outside the small set, no regeneration happens. Whenever Xi is in S and Yi is equal to 0, which happens with probability 1 minus delta, we also observe no regeneration. And whenever we have the event that Xi is in the small set S and Yi is equal to 1, we cut our data and create a new block. So here you have the block decomposition of the autoregressive process of order 1, which is a Harris recurrent Markov chain. This was a short introduction to the processes I consider, and right now I want to switch to presenting my results, but before I do, I will introduce a little more notation. Firstly, by f(Bj) I denote the sum of the observations f(Xi) between two consecutive visits of the chain to the atomic set A. I assume that the mean inter-renewal time alpha is finite, and by ln I denote the total number of visits of the chain to the atomic set A. I decompose the sum of the observations f(Xi) into components: first the sum of the blocks up to the deterministic number n over alpha, plus the remainder term Delta_n. The first part of Delta_n is the first non-regenerative block, then there is the remainder term between n over alpha and ln, a random number of blocks, and finally the last non-regenerative block. By sigma squared of f I denote the asymptotic variance. In this framework I consider maximal inequalities for Markov chains, which is why I need to control the size of the class of functions F that I consider, and for this I use the covering number and the uniform entropy number. The covering number is the minimal number of balls of radius epsilon needed to cover the set F, and for the uniform entropy number we take the supremum over all discrete probability measures Q. Here are the assumptions under which I derived my maximal inequality. Firstly, I impose a Bernstein block moment condition on F.
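A hedged sketch of the split-chain randomization for such an AR(1) example (my own illustrative choices of the coefficient rho, the small set S = [-s, s], delta, and the minorizing measure phi; not necessarily the exact construction behind the talk's figure): for x in S the Gaussian transition density N(rho*x, 1) is bounded below by delta*phi, where delta*phi(y) is the pointwise minimum of the two extreme densities N(-rho*s, 1) and N(rho*s, 1).

```python
import math
import numpy as np

rng = np.random.default_rng(1)
rho, s = 0.5, 1.0   # AR(1) coefficient and small set S = [-s, s] (illustrative choices)

def pdf(y, m):
    """Density of N(m, 1) at y."""
    return math.exp(-0.5 * (y - m) ** 2) / math.sqrt(2.0 * math.pi)

# Minorization on S: for |x| <= s, N(y; rho*x, 1) >= delta * phi(y), where
# delta * phi(y) = min(N(y; -rho*s, 1), N(y; rho*s, 1)) and delta = 2*(1 - Phi(rho*s)).
delta = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(rho * s / math.sqrt(2.0))))

def sample_phi():
    """Draw from phi by rejection, proposing from the mixture of the two extreme normals."""
    while True:
        m = rho * s if rng.random() < 0.5 else -rho * s
        y = rng.normal(m, 1.0)
        q = 0.5 * (pdf(y, -rho * s) + pdf(y, rho * s))   # proposal density, q >= min(...)
        if rng.random() * q < min(pdf(y, -rho * s), pdf(y, rho * s)):
            return y

def split_step(x):
    """One transition of the split chain: returns (X_{n+1}, Y_n)."""
    if abs(x) <= s and rng.random() < delta:     # in S and Y_n = 1: regenerate from phi
        return sample_phi(), 1
    while True:                                  # otherwise: usual kernel, or residual kernel on S
        y = rng.normal(rho * x, 1.0)
        if abs(x) > s:
            return y, 0
        # residual (p - delta*phi)/(1 - delta): accept y with prob 1 - delta*phi(y)/p(x, y)
        if rng.random() * pdf(y, rho * x) >= min(pdf(y, -rho * s), pdf(y, rho * s)):
            return y, 0

x, regen_times = 0.0, []
for t in range(5000):
    x, y = split_step(x)
    if y == 1:
        regen_times.append(t)
```

Cutting the trajectory at the recorded `regen_times` gives exactly the block decomposition shown in the figure: regenerations occur only when the chain is in S and the Bernoulli draw equals 1.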
The second assumption concerns the moments of the return times of the Markov chain to the atomic set A. The next assumption is an exponential moment condition on the first non-regenerative block, and assumption A4 concerns the last non-regenerative block. Finally, I require that the uniform entropy number is finite. So here is the result. We assume that X is a regenerative positive recurrent Markov chain; then, under the mentioned assumptions, for epsilon less than x and for n large enough, we have a bound on the tail probability of the scaled and centered process. The bound has a few components. The first exponential term is related to the bound for the sum of the blocks from i equal to 1 up to n over alpha. The second exponential term is the bound related to the first non-regenerative block. This bound comes from the bound for the last non-regenerative block, and those bounds are the ones obtained for the sum from n over alpha up to ln. What is important in this framework is that all the constants in the bound can be explicitly computed, and their form is given in the paper we submitted, which is available online; I will tell you about this paper when I speak about the references. Now a short remark. As you saw, the bound works for sufficiently large n, which is actually a kind of drawback, since the inequality is a deviation inequality and we would like to have a concentration inequality for our applications. However, if F belongs to a ball of a Hölder space C_p, equipped with a norm of that type, then the inequality is actually a concentration one. Now the sketch of the proof. We consider a centered version of f(X), and we notice that, as n goes to infinity, ln, the total number of blocks, behaves like n over alpha.
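Only to fix the shape of the statement (the exact constants and remainder terms are the ones computed explicitly in the paper, not these), the bound described above is of the schematic form

```latex
\mathbb{P}\!\left(\sup_{f\in\mathcal{F}}\Big|\tfrac{1}{\sqrt{n}}\sum_{i=1}^{n}\big(f(X_i)-\mu(f)\big)\Big|\ge x\right)
\;\le\;
\underbrace{C_1\exp\!\Big(-\frac{C_2\,x^{2}}{\sigma^{2}+ M x/\sqrt{n}}\Big)}_{\text{sum of the }\lfloor n/\alpha\rfloor\text{ i.i.d.\ blocks}}
\;+\;\underbrace{T_{0}(n,x)}_{\text{first block}}
\;+\;\underbrace{T_{l_n}(n,x)}_{\text{last block}}
\;+\;\underbrace{T_{\Delta}(n,x)}_{\text{blocks between } n/\alpha \text{ and } l_n},
```

with one term per piece of the decomposition: a Bernstein-type exponential for the IID block sums, and remainder terms for the first and last non-regenerative blocks and for the random number of blocks.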
We consider the sum of the blocks summed up to the deterministic equivalent n over alpha, together with the remainder. So we decompose the sum of f(Xi) minus its mean into the block sum up to n over alpha and Delta_n(f). We note that the centered block sums f(Bj)-bar in the first part are independent and identically distributed sub-exponential random variables, so we can directly apply the Bernstein inequality to that part. For Delta_n the analysis is more complicated. We decompose its tail probability into three parts and control each of them separately. The first and last terms, associated with the control of the first and last non-regenerative blocks, can be easily controlled by the Markov inequality. The remainder term, the sum from n over alpha up to ln, is more complicated. The proof is quite technical, but the control of this term comes down to the control of the moment generating function of the process, and it is technical because we need to consider all the regeneration scenarios of the Markov chain X. You can find all the details in the paper treating these results, which I wrote with Patrice Bertail. In order to pass to the maximal inequality, we apply arguments similar to those in Pollard and in Kosorok, and combining everything we obtain the result for the supremum. In this part it is actually crucial whether we have a deviation inequality or a concentration inequality, since the bound is a function of the uniform entropy number, and it depends on which space our F lies in whether we obtain a concentration or a deviation inequality. So in this part we must be more careful.
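As a numerical sanity check on this reduction to IID blocks (a sketch on a hypothetical 3-state chain whose atom is the single state {0}, with f the identity; my own example), one can verify that the asymptotic variance of the normalized Markov sum is approximated by the variance of the centered block sums divided by the mean inter-renewal time alpha:

```python
import numpy as np

rng = np.random.default_rng(2)
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.4, 0.2],
              [0.3, 0.3, 0.4]])

n = 200_000
X = np.empty(n, dtype=int)
X[0] = 0
for t in range(1, n):
    X[t] = rng.choice(3, p=P[X[t - 1]])

mu = X.mean()                               # empirical mean of f(X_i), f = identity
taus = np.flatnonzero(X == 0)               # visit times to the atom A = {0}
alpha = np.diff(taus).mean()                # mean inter-renewal time

# centered block sums: sum over each regeneration block of (f(X_i) - mu)
Xc = X[: taus[-1] + 1] - mu
block_sums = np.add.reduceat(Xc, taus[:-1] + 1)
sigma2_blocks = block_sums.var() / alpha    # block-based estimate of sigma^2(f)

# batch-means estimate of the same asymptotic variance, for comparison
b = 1_000                                   # batch length
batches = (X[: (n // b) * b] - mu).reshape(-1, b).sum(axis=1) / np.sqrt(b)
sigma2_batch = batches.var()
```

The two estimates agree up to Monte Carlo error, which is exactly the point of the decomposition: the leading term of the tail bound is driven by the IID block sums and their variance.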
We can naturally obtain an even sharper bound when the class F is uniformly bounded. In this case we are able to obtain a Hoeffding type inequality, since we have a sharper control of the moment generating function. So under the assumption that F is uniformly bounded, we obtain a Hoeffding type maximal inequality for regenerative Markov chains. Here we see that for the sub-exponential IID random variables we are able to retrieve a Hoeffding type bound. The constants L and R can be of a different form here, because in the bounded case we have better control of the bound on the moment generating functions; the constants C1, C2, L and R thus enable a sharper control of the tail probability. It is noteworthy that these results can be easily generalized to the Harris case via the Nummelin splitting technique. As I mentioned, this technique allows us to extend the probabilistic structure of the chain in order to construct an artificial atom and use all the regenerative properties in the Harris case as well. The generalization works under slightly different assumptions on the chain. As you see, we have the same kind of conditions: the Bernstein block moment condition, the block length moment assumption, and the assumptions on the first and last non-regenerative blocks. However, in this case we need to take a supremum over all y in the small set and consider the moments conditional on y, and we make the same kind of change in all the subsequent assumptions. Under those modifications, we obtain a Bernstein type inequality for Harris recurrent Markov chains. Here is the bound, and you can see it has a similar form as before.
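For intuition only, here is the classical Hoeffding bound for bounded IID variables, which is the shape being recovered at the block level (an illustrative simulation with my own choices of n, t, and the distribution; not the theorem's constants):

```python
import numpy as np

rng = np.random.default_rng(3)

# Classical Hoeffding: for IID xi_i in [0, 1] with mean m,
# P(|mean_n - m| >= t) <= 2 * exp(-2 * n * t**2).
n, t, trials = 50, 0.1, 20_000
samples = rng.random((trials, n))               # Uniform[0, 1], mean 1/2
deviations = np.abs(samples.mean(axis=1) - 0.5)
empirical = (deviations >= t).mean()            # Monte Carlo tail probability
hoeffding = 2.0 * np.exp(-2.0 * n * t**2)       # Hoeffding upper bound
```

The empirical tail probability sits well below the bound; the Hoeffding type maximal inequality of the talk plays the same role, but with the bounded IID variables replaced by the regeneration blocks of the chain.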
Instead, we simply consider the bivariate chain, and alpha is the mean inter-renewal time for the bivariate chain; the constants C1, C2, L and R have a slightly different form, but the bound is quite similar to the one in the regenerative case. Here is a small selection of references. Firstly, this is the paper I wrote with my advisor, Patrice Bertail, where you can find all the details behind the proofs and more explanation about regenerative Markov chains and the Harris recurrent framework. This is the paper both of my advisors wrote; what we have done is slightly improve the results, since the inequalities obtained here are a special case of the Fuk-Nagaev inequality obtained by Patrice and Stephan in their paper. Kosorok and Pollard are the references I mentioned for passing from the Bernstein type inequality to the maximal version of it, and here is the paper of Nummelin that explains how to split the chain and construct a bivariate chain with an artificial atom, which allows us to employ all the regenerative techniques in the Harris recurrent case as well. This is all I wanted to present; I encourage you to look for the details of the proofs in the paper I wrote with Patrice. Thank you for your attention, and if you have questions, I am happy to answer. Okay, we need to hurry up a little bit, but maybe one quick question. So, before your work there were only results in special cases? Before our work there were no results for the Harris case; that is actually the novelty, and there were also no results for regenerative Markov chains.
There were only results for Markov chains under mixing conditions, which are difficult to verify, whereas in our case all the conditions are very easy to check and you can apply the inequalities directly. Also, many existing results involve constants that are very difficult to compute, while here we have them in explicit form. Moreover, the rates we obtained enable us to retrieve fast rates in statistical learning: we want to apply these results to prove consistency of statistical learning algorithms, and the rates we obtain are satisfactory because they allow us to obtain fast rates there. So this is the motivation behind the work. Thank you. Okay, so let's thank Gabriela again, and we'll change speakers.