There is A.V. Paulson, and so now we have Stephen Walton, Trevor Clark, Ilya Hayes, Hongkwan Tran, Caleb Nistazi, Juan Serrano Pineda, and Jonathan Jacobs. Hi, we're the Gibbs Group, and we're going to start with a little bit of history on the phenomenon.

So the phenomenon was originally discovered by Wilbraham in 1848, but no one really paid attention to that paper until about fifty years later, when Gibbs published a letter in Nature in which he initially missed the overshoot, and then published a correction in 1899. In the paper where Gibbs noticed the phenomenon, he was analyzing the convergence of the Fourier series of the sawtooth wave, and he observed that no matter how good an approximation you make to the sawtooth wave, you still get an error near the jump; the partial sums never converge uniformly to the function you're trying to approximate.

One interesting thing happening at the same time was that a man named Albert A. Michelson developed a device to decompose functions into their Fourier series and recompose them, using purely analog computing techniques. So in this plot you can kind of see the Gibbs phenomenon happening as we take more and more terms in the Fourier series. The series approximates the function better and better as we add terms: the yellow curve uses the most terms, blue fewer, red fewer still, and so on. But you get this overshoot that doesn't go away; it just gets thinner and thinner.

To put a number on how big that overshoot is, one approach is to take our Fourier series, and with this particular series we can differentiate term by term and find the critical points, which gives us the very topmost point and the very bottommost point of the partial sum. Then we take the limit as n approaches infinity and we get a really nice integral, (2/π) times the integral of sin x / x from 0 to π, which comes out to about 1.179. That number captures the overshoot and the undershoot together, as the ratio of the peak-to-trough distance to the size of the jump, and that's the Gibbs constant.
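As a quick sketch of that computation (not the talk's exact derivation), the constant can be checked numerically two ways: by evaluating the integral directly, and by evaluating a high-order partial Fourier sum of the square wave sign(x) at its first critical point x = π/(2N):

```python
import math

# The Wilbraham-Gibbs constant: (2/pi) * integral from 0 to pi of sin(x)/x dx.
# A midpoint rule is enough for several digits.
def gibbs_constant(steps=200000):
    h = math.pi / steps
    total = sum(math.sin((i + 0.5) * h) / ((i + 0.5) * h) for i in range(steps))
    return (2 / math.pi) * total * h

# Partial Fourier sum of the square wave sign(x) on (-pi, pi),
# using the first N odd harmonics: (4/pi) * sum of sin(j x)/j over odd j.
def partial_sum(x, N):
    return (4 / math.pi) * sum(math.sin(j * x) / j for j in range(1, 2 * N, 2))

# The first peak of the partial sum sits exactly at x = pi/(2N);
# its height tends to the same constant as N grows.
print(round(gibbs_constant(), 5))                    # ~1.17898
print(round(partial_sum(math.pi / (2 * 500), 500), 5))
```

Both numbers land near 1.179, matching the value quoted above.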
So when we're approximating a function, a natural question to ask is how we can get rid of this Gibbs phenomenon around the discontinuities, and a natural approach, instead of approximating the function by the finite Fourier sums themselves, is to take the average of the first several partial sums. Formally that's given by a Cesàro sum, and it's pretty well known that the Cesàro sums converge to the same value as your Fourier series. A rough argument for that: if you go far enough out, you have infinitely many terms that are really close to your limit, so the terms before that won't really contribute to your average. In particular, if you have a Fourier series that converges to a function, your Cesàro sums will also converge to that function. The Cesàro sums, though, eliminate the oscillation around the discontinuities that the Gibbs phenomenon produces in the Fourier partial sums. So here we have the step function in blue, the first 20 terms of the Fourier series in green, which really oscillate around the discontinuities, and then the Cesàro sum, which still has a little undershoot and overshoot but doesn't oscillate nearly as much.

Okay, so another way that we saw of deriving the Gibbs constant was by considering a truncated Fourier series, where basically you multiply the terms of the Fourier series by a filter that is one out to n and zero beyond, and you take the limit as n goes to infinity. Something we thought would be fun to play around with is replacing that filter with different filters: for example, I tried a filter shaped more like a ramp instead, to see whether maybe the first few terms are what contribute the Gibbs constant. The top three rows are the equations the paper brought up, and the last row is the one I brought up, and it ended up still producing the same Gibbs constant.
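A small sketch of the averaging idea (using the square wave sign(x), which differs from the slide's step function only by shifting and scaling): averaging the first N partial sums is the same as damping harmonic j by the Fejér weight (1 − j/N), and that single change removes the overshoot.

```python
import math

# N-th partial Fourier sum of the square wave sign(x) on (-pi, pi).
def partial_sum(x, N):
    return (4 / math.pi) * sum(math.sin(j * x) / j for j in range(1, N, 2))

# Cesaro (Fejer) mean: averaging the first N partial sums is equivalent
# to damping harmonic j by the factor (1 - j/N).
def cesaro_mean(x, N):
    return (4 / math.pi) * sum((1 - j / N) * math.sin(j * x) / j
                               for j in range(1, N, 2))

N = 200
xs = [math.pi * i / 2000 for i in range(1, 1000)]
print(max(partial_sum(x, N) for x in xs))   # overshoots: ~1.179
print(max(cesaro_mean(x, N) for x in xs))   # stays below 1: no overshoot
```

The positivity of the Fejér kernel is what guarantees the averaged sums never exceed the function's extreme values.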
All right, so another thing you can ask is whether you can approximate functions using other complete orthogonal bases, and on this slide we have some examples of orthogonal polynomial bases you can use. We looked at three of them: Legendre, Hermite, and Jacobi polynomials. These are the Rodrigues formulas for each of them, and the reason these are important is that when you're computing the coefficients of your approximation, you take an inner product with respect to a weight function, and the Rodrigues formulas contain the weight functions for these respective polynomials. So again, the thing to notice is that we're constructing these series in pretty much the same way you would construct a Fourier series: each coefficient comes from an inner product defined by the weight function, you multiply by the corresponding polynomial, and you take infinitely many terms.

Along with some of these pretty pictures, you can see that even with n equal to 20 the Gibbs phenomenon starts to appear near the discontinuity, and if you look at the third column of the data tables, we are still approaching about 1.179: the Gibbs constant is still appearing. This holds for both Hermite and Legendre, but notice that the Hermite polynomials needed 50 terms, while Legendre is closer to showing the Gibbs phenomenon with only half as many terms, so the convergence rates differ depending on which polynomials you use. We also see that this holds for the Jacobi polynomials, which are the most general case. And that last slide showed the Jacobi case with a couple of other parameter choices, which affect the shape of the weight functions.
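As a sketch of how such an expansion is built (my own illustration, not the talk's code), here is a pure-Python example that expands sign(x) in Legendre polynomials. The Legendre weight is just 1 on [−1, 1], so the coefficient is the inner product ⟨sign, P_n⟩ scaled by (2n+1)/2, and the truncated series overshoots near the jump just like a Fourier partial sum:

```python
import math

# Legendre polynomial P_n(x) via the three-term recurrence
# (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}.
def legendre(n, x):
    p_prev, p = 1.0, x
    if n == 0:
        return 1.0
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

# Coefficients of sign(x) on [-1, 1]: c_n = (2n+1)/2 * <sign, P_n>,
# estimated with a midpoint rule (the jump at 0 falls on a cell boundary).
def coeff(n, steps=2000):
    h = 2.0 / steps
    total = 0.0
    for i in range(steps):
        x = -1.0 + (i + 0.5) * h
        total += (1.0 if x > 0 else -1.0) * legendre(n, x)
    return (2 * n + 1) / 2 * total * h

N = 40
cs = [coeff(n) for n in range(N)]

def series(x):
    return sum(c * legendre(n, x) for n, c in enumerate(cs))

# The truncated series overshoots near the jump at 0, just like Fourier.
xs = [i / 1000 for i in range(1, 500)]
print(max(series(x) for x in xs))  # overshoots past 1 near the discontinuity
```

With 40 terms the peak already sits well above 1, consistent with the tables' approach toward 1.179.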
I do want to mention that those were the Jacobi polynomials, which generalize the Legendre case; they were shown to reproduce the Gibbs constant in a 2006 paper, so we didn't include our own numbers for that case.

Okay, so anyway, we wanted to ask why you get the Gibbs constant for these generalized orthogonal expansions. One plausibility argument that we found is this. The idea is to take the Fourier series of your orthogonal functions. If you have uniform convergence of the Fourier series of each orthogonal function, and uniform convergence of the Fourier series of the original function you want to approximate with your generalized Fourier series, then, going to the next slide, you can truncate both of those, commute the sums, and push the coefficients from the generalized Fourier series through. What you get looks a lot like an ordinary Fourier series for the original function you wanted to approximate. Then, letting M go to infinity, provided the uniform convergence assumptions I stated before hold, you actually recover those Fourier coefficients. So this is one argument for why the Gibbs constant appears. Thanks.
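The swap-of-sums step can be written out in symbols; here φ_n are the orthogonal functions, a_n the generalized Fourier coefficients of f, and b_{nm} the ordinary Fourier coefficients of φ_n (notation assumed, not taken from the slides):

```latex
% f(x) ~ \sum_n a_n \varphi_n(x), with \varphi_n(x) = \sum_m b_{nm} e^{imx}.
\begin{align*}
  \sum_{n=0}^{N} a_n \varphi_n(x)
    &= \sum_{n=0}^{N} a_n \sum_{m=-M}^{M} b_{nm}\, e^{imx} + \text{(tail)} \\
    &= \sum_{m=-M}^{M} \Big( \sum_{n=0}^{N} a_n b_{nm} \Big) e^{imx} + \text{(tail)}.
\end{align*}
% Letting M -> infinity (justified by the uniform convergence assumptions),
% the inner sums tend to the ordinary Fourier coefficients of f, so the
% truncated generalized series behaves like a truncated Fourier series near
% the jump and inherits the same Gibbs constant.
```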