Okay, thanks for your attention. So I'm gonna talk about image analysis today. As Brian said, I'm from City College in New York, and the work I do in my lab is on granular materials, funded by the National Science Foundation. But today I'm gonna talk about image processing. And this is the most important slide, so after this, unless you really like image processing, you can rest or think about other things. The brain is the best image analyzer in the world. The worst brain is way better than the best computer at image analysis. So if you cannot do the image analysis that you want with your brain, if you look at the picture and you can't figure out what's going on, there's absolutely no way the computer's gonna do it. If you can't do it with your brain, it can't be done. On the other hand, even if you can do it with your brain, it still mostly can't be done. So that's a very minimal hurdle, but many people try to do image analysis on things that they can't figure out themselves. I don't know, is that really a particle or not? I can't tell. Well, if you can't tell, the computer is definitely not gonna be able to tell. But if you can tell, there's a chance that the computer can, and that's what we're gonna talk about today. Along the same lines, it is always better to make your image better rather than making your image processing better. It's much easier to make better images than it is to make better image processing. Now, sometimes you're limited. If you have a Hubble telescope with a bad lens, well, you've got what you've got, and it costs a billion dollars to fix it. Okay, we did fix it. But that's a good example: no image processing is gonna fix that kind of thing. So the only real way is to fix the images: make your images as good as you can before you start. That's the main message.
Today I'm gonna talk about a couple of things. I'm gonna talk about some image analysis tools: the fast Fourier transform, and convolution and cross-correlation. These are sort of the basic toolbox of image processing. Then I'm gonna mention particle image velocimetry, which is a way of tracking particles or other objects in an image. And I'll talk about particle tracking. When I ran through this last night, I didn't have time for image demodulation, so we probably won't get to that today. So let's start from the total basics. What is an image? On a computer, an image is literally just a collection of numbers. Here's a bunch of numbers, and I've displayed these numbers in matrix form, X and Y. And these numbers correspond to the colors that the computer displays. But there could be many colors they correspond to. On this side, zero is a middle sort of gray; on this side, it's blue. These are the same images, just displayed by interpreting the numbers with different colors. This is what's typically called false color, and this is grayscale. But the basic take-home message is that an image, all it is, is a matrix I(N, M). So if I wanna know what I(6, 7) is, I go to six and I go to seven: it's 10. That's all an image is. It's just the amount of light intensity that a camera found at a particular point in space, and those points in space are labeled by indices N and M. Okay, here's a typical simulated video image that one might have. This shows a process of coarsening. So this thing is like oil and water if I spread it out and then watched it come back together. And this is the kind of thing that one might like to analyze, or might want to understand something interesting about. There's a couple of things you notice. For example, there's a wavelength, the sort of distance between the reds, and that's changing over time.
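To make the "an image is just a matrix I(N, M)" point concrete, here's a quick NumPy sketch (the talk's own examples are in MATLAB, and the numbers below are made up for illustration, not the ones on the slide):

```python
import numpy as np

# A tiny "image": nothing but a matrix of intensity values.
# (These values are made up; the slide's actual matrix is not reproduced here.)
img = np.array([[0,  3,  7],
                [2, 10,  5],
                [1,  4,  9]])

# Pixel (n, m) is just the matrix entry: the light intensity the camera
# recorded at that point in space.
print(img[1, 1])   # 10
print(img.shape)   # (3, 3)
```

Displaying the same matrix with a gray colormap or a blue-to-red one gives the grayscale and false-color views from the slide; the numbers underneath never change.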
You might like to know how that's changing. Also notice that at the beginning, the thickness is much more constant, but at the end the thicknesses are different. So these are the kinds of things that you might like to understand about an image. And one of the key tools that we have to understand images is the fast Fourier transform. Throughout this talk, I've put a lot of equations in. I don't expect you to follow these equations during the talk, but the talk's available online afterwards, and that gives you a chance to look at the equations in more depth if you want to understand them better. But these are the equations for the fast Fourier transform. What I wanna point out is that it's a transformation from an image, here's a thing with two indices, just like I, to another thing with two indices. So it's a transformation from an image to an image. It can also be done on a time series; then it goes from a list of numbers to another list of numbers. Whatever dimension your original data has, the Fourier transform will have that same dimension. The other thing I want you to notice is that there's an inverse transform, okay? So we have the Fourier transform, which goes this way, from little f to big F, but we can also take big F and transform it back to little f. And you can test this yourself. You can plug this formula in here and do all the sums yourself. They work out analytically and you get the answer f. So it is an inverse transform, and it's an inverse both in its continuous form, which I'm not showing here, and in the discrete form, which is what I am showing here. So this is the Fourier transform. Now, what are we trying to do with the Fourier transform? Let's start with something simple. Here's a very simple signal, a signal that's just oscillating up and down.
Just look at the green for a minute, okay? It's just a sine function, you can see. Here's the equation that I have for this. It's just a complex exponential. I'm using a complex exponential because the Fourier transform actually is complex. And it's actually not uncommon to take signal data that is complex. That is to say, one might measure both the in-phase part of a signal and the part that's 90 degrees out of phase, and those two things can be thought of as a complex number. So this is very common in signal processing, to think of these things as a complex number. So we actually have a real part in green and an imaginary part in blue. And together, these form one particular sine wave with one particular frequency. And so when I take the Fourier transform of this, what I get is a list of the frequency components of that signal. So this signal has only one specific frequency component. That frequency component is five, and there it is. So I can get out what the frequency of this is. You can count the oscillations: one, two, three, four, five. That's the frequency of this data, and that's where we get this one peak. Now that's for a signal, but what about for an image? So let's do the same thing. For an image, I now have something with two indices. And now I have the real and imaginary images that are oscillating in space here. If I take the Fourier transform of this, I get one peak here, just off the middle. And you can zoom in on that. And you can see that it's off from the center: just like the one-dimensional peak is off from the center by five, this is off from the center by five. So this is just the two-dimensional analog. It's off in the vertical direction because this image is oscillating in the vertical direction. If it were oscillating in the horizontal direction, you'd see it off in the horizontal direction. Or if it were at some angle, then you'd see it off at some angle.
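The single-frequency signal above can be sketched in a few lines of NumPy (a stand-in for the slide's MATLAB demo, with the frequency fixed at five as in the talk):

```python
import numpy as np

# A complex exponential with exactly 5 cycles across the window: its discrete
# Fourier transform should be a single spike at frequency component 5.
N = 64
n = np.arange(N)
f = np.exp(2j * np.pi * 5 * n / N)   # real part: cosine (green); imag: sine (blue)

F = np.fft.fft(f)
peak = int(np.argmax(np.abs(F)))
print(peak)                           # 5: the one frequency component

# And the inverse transform really does take big F back to little f.
print(np.allclose(np.fft.ifft(F), f))   # True
```

The `ifft` check is exactly the "plug the formula back in and do the sums" test mentioned above, done numerically instead of analytically.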
So this tells you the frequency content of an image. Now, if we have something much more complicated than the one we started with, like this image, which I showed you earlier, when we take the Fourier transform of that, we can look at the absolute value of the complex number we get. And that's what's shown here. And what you can see is that there is a band of frequencies. So there's lots of the same frequency. Here's a zoomed-in version of this. You can see it. That's this frequency that goes red, blue, red, blue, red, blue. But it's not one perfect frequency, so we don't get a single sharp dot. There are also all kinds of directions. I can follow along this way and see one; I can follow along this way, and this way. Those are all the different frequencies that we see in this image. And a couple of things to note. There is a particular mean to this frequency, which you can see with your eye clearly, but it's fairly broad, which you can also see with your eye clearly. The thicknesses of these lines differ. So here's a really thick one, here's a thin one. Here's another image taken with a slightly different version of the same equations. Now you can see this one is much more uniform, right? The frequencies in this image are much closer to the same. And you see this peak is much thinner. Still at the same place, they still have the same wavelength, but it's much thinner. Another thing you can notice is that they tend to follow the wall. So these are perpendicular to the wall, these are perpendicular to the wall, these are perpendicular to the wall. That means there's a preference for going either vertical or horizontal. And you can see that in the Fourier transform: there's more weight in the horizontal and vertical directions than there is in the diagonal directions. So this is the kind of analysis you can do. You look at this image and you say, well, it looks like it has these kinds of features.
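The two-dimensional analog from a moment ago works the same way; here's a minimal sketch (again NumPy rather than the talk's MATLAB) showing that an image oscillating vertically with frequency five puts its peak off-center in the vertical direction:

```python
import numpy as np

# An image oscillating only in the vertical direction, 5 cycles across it.
N = 64
y, x = np.mgrid[0:N, 0:N]
img = np.exp(2j * np.pi * 5 * y / N)

# fftshift puts zero frequency at the center, like the slides' display.
F = np.fft.fftshift(np.fft.fft2(img))
ky, kx = np.unravel_index(np.argmax(np.abs(F)), F.shape)
print(ky - N // 2, kx - N // 2)   # 5 0: offset vertically by 5, horizontally by 0
```

Swapping `y` for `x` in the exponent moves the peak to the horizontal axis, and a tilted wave moves it to the corresponding angle, just as described.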
But if I wanna quantify this, now I can. How much is the horizontal component greater than the off-diagonal components? Well, I can just take the value here and compare it to the value there. Now I have an actual number which tells me. Now I can change something in my model or in my experiment, and I can say, well, I tend to get alignment with the walls as I change this parameter. So it allows us to turn images into numbers, from which we can make theories. Now, another very important pair of image tools is the convolution and the cross-correlation. And one of the main reasons they became so important is that this Fourier transform is very important, and it turns out there's a very fast way to do the Fourier transform. If we look back at the Fourier transform formula, it would seem, at first glance, that it takes n squared operations: we have to do this sum over every single element, and then this sum over every single element. You'd think it should take n squared operations. But it turns out there's a cute trick where you can make it take only n times log n. And log n is very small, so it effectively takes like n operations. So even though this is a 2D operation, you can do it in roughly the time of a 1D operation. Because of this, the fast Fourier transform is very powerful, because it's very quick to compute. And the same thing holds for the convolution and the cross-correlation, because these can also be computed using the fast Fourier transform. There's a way to convert between these and the fast Fourier transform, so they can also be done very rapidly. The convolution and cross-correlation are written out here, but let's talk about exactly what's going on in these things. The convolution is done between one list of numbers and another list of numbers. This is the 1D version, just like we started with the 1D version of the Fourier transform. Here's the 1D version of convolution.
So I wanna take this set of numbers and convolve it with this set of numbers. Now, when I do a convolution, what I do is I take one set, it doesn't matter which one, you can do this either way, it comes out the same. But I'm gonna take the red set, and I have to flip it with respect to the direction it's going. So I take 1, 0, 2 and it becomes 2, 0, 1. Now I line it up, last digit to first digit here. And I multiply these two together and add up whatever products are there. So in this case, I get one times one is one. That's the first entry. Now I move it over by one and bring it along. So now I have one times three is three, plus one times zero is zero, or three. That's my next number. Then I go along again. Now I get two times one is two, plus zero times three is zero, plus two times one is two, or four. And I keep doing this as I go along. I move it along one more time, and one more time, and one more time. And then that's the last time, because if I move it more, I'm off the end. So I'm done. So this list here is the convolution of these two lists of numbers. Now, convolution plays a very important role in mathematics as well as image processing. And it turns out that if you interpret these as the coefficients of a polynomial, then this is polynomial multiplication. So I have one plus three x plus two x squared plus x cubed plus three x to the fourth. And I want to multiply it by one plus zero x plus two x squared. If I want to multiply those two polynomials, well, that's it. That's the answer: one plus three x plus four x squared, et cetera. So that's where convolution comes from. But it also has a lot of applications in image processing.
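The flip-slide-multiply walk-through above can be checked in one call; this NumPy sketch uses the same two lists from the slide and reads the result as polynomial coefficients:

```python
import numpy as np

# The two lists from the talk, interpreted as polynomial coefficients.
a = np.array([1, 3, 2, 1, 3])   # 1 + 3x + 2x^2 + x^3 + 3x^4
b = np.array([1, 0, 2])         # 1 + 0x + 2x^2

# np.convolve flips b and slides it across a, exactly as described.
full = np.convolve(a, b)
print(full)                     # [1 3 4 7 7 2 6]
# i.e. the product polynomial: 1 + 3x + 4x^2 + 7x^3 + 7x^4 + 2x^5 + 6x^6
```

The first entries, 1, 3, 4, are the "one, three, four" computed by hand in the talk.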
Now, one thing you notice is that there's kind of a difference between some of these multiplications that we did. In the middle, all three numbers of the 2, 0, 1 are lined up with three numbers of the other list, so those entries come from complete overlap. In MATLAB, and in many other analysis programs, you can ask for just that completely overlapped inner region, which is called the valid convolution: that's just the 4, 7, 7. There's also another one you might be interested in: you might want an output that's the same size as the input, and that's the one called same. This is what typically matters in signal processing. I have some perfect signal, I have a measurement device, and I have my output. What typically happens is that my perfect signal goes into my measurement device and gets convolved with the response of that device. Ideally the measurement device would be a delta function, all zeros except for a single one; then, if you follow this through, the output is exactly what I put in. But that's not usually the case. Usually there are delays, or amplification errors, or things like that. So the input signal gets convolved with the device response, and the output signal is the convolution of the two. If we'd like an output signal that's the same length as the one we put in, we use the option called same. And the one called full is the whole thing, every partial overlap, the complete result. So that's the convolution.
The correlation is very similar, and I won't go through the whole thing, but the difference is that instead of flipping the second list first, now I don't flip it. The idea behind the other one was polynomial multiplication; that's where it comes from. The idea here is that I'm trying to look for a signal in this signal that looks like this one. So if this turned out to be exactly 1, 0, 2, I'd get a very high value because it matches up exactly. With other things, it won't match up quite as well, and I won't get as high a value. So that's the idea here, but it works out the same way. So the first entry is two, and we go across, and across, and so forth. And we have the same choices: we can have the full thing, the whole one; we can have the one that's the same size; or the one that's valid, just the part in the middle. So this is the correlation. Okay, now this can also be done in two dimensions. We just did it in one dimension. I'm not gonna use numbers to do it in two dimensions, but it can be done: we just double the sums, we do the same thing twice. To see how it works, I wanna take this image and convolve it with this image, okay? I do the same thing. I first pick it up and I bring it up to the corner, so that just the two corners are touching. I multiply everything together that's touching, and I put that number here. And I keep doing that all along here. I move it over, again and again and again. And this is what you get. And there's a command in MATLAB, and in many other software packages, to do this convolution. Now, notice that the difference between convolution and correlation only matters if the thing you're convolving with is asymmetric, right? The only difference is whether you flip it. So if the thing is symmetric, convolution and correlation are identical.
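As a companion to the convolution sketch, here's the unflipped version on the same two lists, plus a check of the symmetric-kernel point (NumPy again, standing in for the MATLAB commands the talk refers to):

```python
import numpy as np

a = np.array([1, 3, 2, 1, 3])
b = np.array([1, 0, 2])

# Correlation: the same sliding procedure, but without flipping b first.
corr_full  = np.correlate(a, b, mode='full')
corr_valid = np.correlate(a, b, mode='valid')   # only the complete overlaps
print(corr_full)    # [2 6 5 5 8 1 3]  -- first entry is the "two" from the talk
print(corr_valid)   # [5 5 8]

# With a symmetric kernel, flipping changes nothing: convolution == correlation.
sym = np.array([1, 2, 1])
print(np.array_equal(np.convolve(a, sym), np.correlate(a, sym, mode='full')))  # True
```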
There's no difference between the two. So you often find people using convolution when they should probably be using correlation, but because the kernel is symmetric, it doesn't matter. It's something to keep in mind, though. And when you do flip it, if you really need convolution instead of correlation, you have to flip it in both directions. That's something to keep in mind too. And again, you have this same idea of valid: this box is the region where this piece sits entirely inside here, so it's covered completely. There's same, which is where the center of this piece reaches this corner. And then there's full, which is the whole thing. And you can already see why the full one is a little different from everything else. Because it's on the outside edge, it doesn't have as much power, because it's not overlapping as much. And even in this same-size range, it's not quite the same as what's going on inside. So you can see why you might want the valid region: it's the one that's representative of what it would be like if the image were infinite. So that's the idea there. The difference between correlation and convolution in images can be seen here. Let's take this image, and now I'm going to grab a little piece from it, this one, from the middle. This piece is not symmetric, so it does matter whether I use correlation or convolution. You can see this piece looks like this one; this other one doesn't look the same, just similar, and I'll come back to that. So I took this little piece out, and I'm going to correlate it with this image. And I've just taken the same size.
So you can see it's the same size as this image. When I do that, I get a bright spot in the middle. That's because this part is very correlated, because it's exactly the same. Now, on the other hand, when I do convolution, I flip it, and this flipped version, which I've shown here, does not look like this at all. And so I don't get something with a peak there. I get this mathematical process of convolution, which can be interesting in its own right. But most often in image processing, we're interested in correlation, because we want to know how similar two things are to one another. Now notice also, in this little box here, this looks similar to this one. And you can actually see there's a bright spot there. So even though this is not the same as this one, because it's similar, we also get a bright spot there. And you can see there are lots of other places with bright spots. If I were to put a box around one of those, it would look similar to this. So it's a way of finding out whether two things are similar. Okay, now, why does cross-correlation come up so often in image processing? We have this idea, from looking at the images, that the correlation tells us roughly how close two images are to each other. But so far there's no clear reason why that should be the case. The reason is something called the least squares fit. If you have two functions and you'd like to know if they're the same, how do you tell? Well, if they're mathematical functions, you just look at them and say, okay, they're the same. But if you have data and you have a function, and you wanna know whether they agree, you have to do something else. And what people typically do is take the least squares fit between the two things.
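The cut-a-piece-out-and-find-it demonstration can be sketched with a hypothetical stand-in image (random numbers rather than the slide's picture); the peak of the correlation map lands exactly where the piece came from:

```python
import numpy as np

# Stand-in for the slide's image: random values, with a 5x5 patch cut out
# that we then go looking for by cross-correlation.
rng = np.random.default_rng(0)
img = rng.random((20, 20))
patch = img[8:13, 6:11].copy()

H, W = img.shape
h, w = patch.shape
p = patch - patch.mean()
p /= np.linalg.norm(p)

# Normalized cross-correlation score at every valid position.
score = np.zeros((H - h + 1, W - w + 1))
for i in range(H - h + 1):
    for j in range(W - w + 1):
        win = img[i:i+h, j:j+w] - img[i:i+h, j:j+w].mean()
        score[i, j] = np.sum(win * p) / np.linalg.norm(win)

best = np.unravel_index(np.argmax(score), score.shape)
print(best)    # (8, 6): the bright spot sits exactly where the patch came from
```

The normalization (subtracting the mean and dividing by the norm) is an extra refinement beyond the plain correlation discussed in the talk; it caps the self-match score at exactly 1.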
So what that says is, I take whatever numbers I get from my experiment, I take whatever numbers I get from my theory, I subtract the two, then I square the differences and add them all up. If they're the same, because I subtracted them, each difference will be zero, and when I square and sum, I get zero. So if they're exactly the same, I get zero. Now, of course, if it's data, it won't be exactly zero. So I look for the minimum. I say, when is it as close as possible? I might adjust some parameters in the model until I get the minimum, the closest to whatever my data is telling me. At that point, I have the best possible fit for this theory, and I can decide whether that's good enough. Is that a good enough theory or not? That's the idea. In image processing, we might wanna do a similar thing. We might wanna take an image and ask, is this little piece of the image as close as possible to this part of the image or not? How close is it? So I take this image, subtract off the little piece, square the difference, and sum it all up. If this thing is zero, the two things are identical. Otherwise, it's a smaller or larger number depending on how different they are. Well, it turns out I can expand this square, and I get I squared, minus two I P, plus P squared. And if we go back and look, the double sum on this cross term is exactly what we had for the definition of correlation. And so it turns out that this correlation is exactly what one needs to find this minimum squared difference. Notice this first term is just the image squared, so that doesn't change much as we slide the piece around. This last term is just the other image squared, so that doesn't change much either. It's this cross term that changes a lot as we move the piece: when the correlation term is large, the squared difference is small, which means the piece closely matches that part of the image.
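The expansion just described, sum of (I minus P) squared equals the I squared term, minus twice the correlation, plus the P squared term, can be verified numerically; this is a 1D sketch with made-up data:

```python
import numpy as np

# Verify (I - P)^2 summed over a window = I^2 - 2*I*P + P^2 summed over it:
# the cross term is exactly the correlation, and it's the part that varies most.
rng = np.random.default_rng(1)
I = rng.random(50)     # the "image" (1D for simplicity)
P = rng.random(5)      # the small piece we're comparing against
m = len(P)

n = len(I) - m + 1
ssd  = np.array([np.sum((I[k:k+m] - P)**2) for k in range(n)])
corr = np.array([np.sum(I[k:k+m] * P) for k in range(n)])
expd = np.array([np.sum(I[k:k+m]**2) for k in range(n)]) - 2 * corr + np.sum(P**2)
print(np.allclose(ssd, expd))   # True: the identity holds at every offset
```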
So the larger the correlation is, the closer the two things are, and that gives us this result: the correlation tells us how close together two things are. So we can use this trick to extract things like velocity from images. How would we do this? Well, we can take a small patch from an image and cross-correlate it with a later image. We can ask, where is this little patch of the first image in the next image, or the next, or the next? Where has it moved? We find the position of the maximum, the place where it's most likely to be, and then the distance from the origin to that point is how far that piece of the image has moved in that amount of time, which gives us our velocity. Then we can just pick a new patch and do that again, and go to the next frame, and the next, and find the velocities over and over. So here's an example where I've done that. This can be done with basically any kind of image that's moving. We saw this movie earlier of this image evolving. So what I did was take again the little patch that we looked at last time, but instead of cross-correlating it with the original image like we did before, I now cross-correlated it with an image a little bit later in time. And when I do that, I get a peak. So this little object, you can see it, there it is, and there it is again. But it's actually moved a little bit. How has it moved? Well, it's moved down a little bit. And if we zoom in, we can see that it's moved down by about five pixels. So that little patch has moved down by five pixels, and we can say that the velocity of this little region right here is five pixels per time step. So that's how we can find it. And we could do this for every point in the image and find out what the velocity field is.
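The displacement-by-correlation idea can be sketched with two synthetic frames; this is a minimal FFT-based version (assuming periodic images, which is what the FFT gives you for free), with the shift set to five pixels down to match the talk's example:

```python
import numpy as np

# Frame 1: a blurry blob; frame 2: the same scene shifted 5 pixels down.
N = 64
y, x = np.mgrid[0:N, 0:N]
frame1 = np.exp(-((x - 30)**2 + (y - 20)**2) / 20.0)
frame2 = np.roll(frame1, 5, axis=0)      # everything moves +5 along y

# Cross-correlation via the FFT: ifft2(conj(F1) * F2) is the circular
# cross-correlation of the two frames.
corr = np.fft.ifft2(np.conj(np.fft.fft2(frame1)) * np.fft.fft2(frame2)).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
print(dy, dx)   # 5 0: moved down 5 pixels per time step, as in the talk
```

In practice one does this patch by patch rather than on the whole frame, but the peak-finding step is identical.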
So that was made-up data that I just generated on my computer, but here's an example from my lab. This is a rotating drum filled with stainless steel particles. You can kind of see the particles; they're quite small. The particle size is approximately equal to the pixel size, so this is from fairly far away. We're just rotating this around. It's two-dimensional, so it's a thin slice of particles rotating inside this drum, and the material is cascading down and flipping over here. That's what's basically going on. If we zoom in on a little region like this, here's that picture zoomed in. You can see these are the particles. And now you can see here's a particle, and here's this particle, and in the next frame, here it is. So this one went from there to there. And you can see particles are about the size of a pixel, because now we can see the actual pixels. So it might be hard to track this one. This one's not so bad, but to track this particle, well, maybe if you blur your eyes you can see that's a particle and that's the same particle, but it's very hard. So it breaks my rule: it's really hard for me to do it with my brain, so I can't have a computer track those individual objects. But I can use this cross-correlation technique. We can see that little structures like this one, the little white here, black, black, black, all move together; they're the same thing. So if I take this box and I cross-correlate it with the next frame, I can see that it's moved from here to here. And so this is the velocity of that point, or of this little region, at this particular time. So this is just an example of how this works. And here's this image again. Now I've gone through every point inside this image and done this, and here is the speed map for this entire flow. I just did that exact thing for every point and extracted the speed.
Now, something interesting that you can actually see: the background itself has a velocity, a speed. That speed is zero in the middle and larger on the way out. Well, that's right, right? If I'm rotating something, in the middle it's not moving, but at the edge it's moving fastest. But I don't have any particles there, so how did I get that? Well, little imperfections in the glass are enough to trigger this correlation. Just the fact that there are tiny bits of dust, or scratches, or anything on the glass allows me to pick that out. So I can find the velocity of the box as well as the velocity of the particles inside the box. It's a very powerful technique. But sometimes we want even better information about what's going on inside this system. So, in the same system, we've also taken close-up movies. Here's a frame from a very close-up version of this, and here's a movie of the whole thing. So we've taken this and broken it up; you can actually see the little boxes where we've broken it up. And in each one of these boxes, we've zoomed in and really looked at it. Now we have an image where we could imagine tracking every single particle. And that's what I'm gonna talk about now: how to actually track each one of these individual particles. We could, of course, use the PIV method, this cross-correlation method, on this image, but we can do a lot better if we track each individual particle, because then we get individual statistics, like the velocity of each individual particle, so I can find things like the histogram of individual particle velocities and other things we might be interested in. So how will we do this? Well, we're gonna go back to our old friend, least squares. We would like to find where this particle is. Before, what I did was cross-correlate a piece of the image with a later time.
Now what I'm gonna do is make up a picture of my particles. I'm just gonna make it up in my head. How do I do that? Well, here is a function for this picture. When I plug this function into MATLAB and plot the matrix I get back as an image, I get this picture. This picture looks a lot like a particle, and you can see this is a cross-section of it: it's dark out here, bright in the middle, and dark on the edge. It has a couple of parameters that define exactly what it looks like. For example, how fast it goes to zero: if the image is really sharp, in very strong focus, this will go to zero very rapidly; if it's in a little weaker focus, like we have here, it goes slower. And it has a certain size; that's like the diameter of the particle. So I now have a function which represents the particle itself. And now I can take this and cross-correlate it, or do the least squares fit of this ideal particle against the image that I have. Now, this is a lot of math, so don't really study it now, you can look at it when you get back, but you can see some of the same ideas in here. Here's my image, here's this ideal particle function. I'm just taking my image and subtracting the ideal particle function centered at different positions. I move it around with x-naught: wherever x-naught is, that's where the particle is. Move it around, square the difference, sum it up, and that becomes the thing I'd like to minimize. I try to minimize it with respect to the position, the particle diameter, and this little width that determines the focus. And if you work through the math, just like we did before, it turns out you can write it all down in terms of convolutions. What people often use is just this correlation part, but you actually get a much better result if you use the whole thing, the entire least squares fit, rather than just the convolution part.
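The ideal-particle idea can be sketched with a hypothetical Gaussian blob standing in for the slide's particle function (the real one has separate diameter and focus-width parameters, and the full method minimizes the complete least squares expression; this sketch shows only the correlation part):

```python
import numpy as np

# A hypothetical "ideal particle": a Gaussian blob of chosen width w.
w = 2.0
ky, kx = np.mgrid[-6:7, -6:7]
kernel = np.exp(-(kx**2 + ky**2) / (2 * w**2))

# Synthetic image: two particles at known centers, different brightnesses.
N = 64
y, x = np.mgrid[0:N, 0:N]
img = (1.0 * np.exp(-((x - 20)**2 + (y - 15)**2) / (2 * w**2)) +
       0.7 * np.exp(-((x - 45)**2 + (y - 40)**2) / (2 * w**2)))

# Correlate the ideal particle with the image at every pixel via the FFT;
# peaks in the correlation map sit at the particle centers.
K = np.zeros_like(img)
K[:13, :13] = kernel
K = np.roll(K, (-6, -6), axis=(0, 1))     # put the kernel's center at (0, 0)
corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(K))).real

peak = np.unravel_index(np.argmax(corr), corr.shape)
print(peak)    # (15, 20): (row, col) of the brightest particle
```

Doing the correlation through the FFT is what makes the per-frame cost so low, which is the speed point made next.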
This is just the convolution we saw before, or this one, depending on how you do it. So that's what we're gonna do. Now, how does it work? I just take that function and apply it to this image. This is the same image we had before, now corrected: I fixed the background and used filters to even out the lighting, so we have a nice clean picture in very sharp focus. If I apply this function, this is the map I get. What I've actually shown is the inverse of it, one over this, so that it's bright wherever we think a particle is. The function itself is zero where the match is exact, and a minimum that goes dark is hard to see in an image, so we flip it over for visualization. So each little spot here is one of the particles. I've stretched the color scale so you can see some of the fainter ones, the ones that might be a little hard to see. But we basically find every single particle in this image. And this is routine: in an image like this, we'll find every single particle almost every single time. In a big run, where we have maybe 10,000 or 100,000 frames like this, maybe we miss four or five particles in the entire run. And we even get partial particles, like this one right here. See, that's where its center is, that one right there. It's very clear in this image: there it is, right there, no question. That's because we're doing this fitting: we're asking, where would a particle have to be if it left just this little bit in the image? And that's where it has to be. So, doing this convolution, we're essentially moving this ideal particle to every single pixel, and this technique is pixel accurate.
So we move it to every pixel; we do it for every single point in the entire image. And it takes, say, about a millionth of a second; this image is 340 by 340, and it takes about a millionth of a second to do one frame, because this is done using fast Fourier transforms, and it's really fast. If you get excited about it and want to do it even faster, which we've done a couple of times, you can get up to close to 100 million frames per second using GPUs. GPUs can do convolutions extremely fast; that's one of the things they're really built to do. So if you really get excited about this, you can do this kind of calculation in real time, on real images as they come through. It's a very fast technique. How expensive is it to do this? GPUs are cheap; it's just the graphics card that's in many, many people's computers at home. They run anywhere from 200 bucks up to, I guess, $1,000 for a really nice one. But even on just your regular old computer you can easily do this at any frame rate you'd be interested in. Do we have an image time series? We do, and I was giving the per-frame time: if you have 100,000 frames, you can go through the whole run in about a second, so this part can be very, very fast. The drawback, as we mentioned, is that we're just moving the ideal particle to each pixel, and we really can't do better than that, or not a lot better, with this technique, because we're only trying to match the individual particles. So this is one way to do it. And just to compare it with some other ways you might have heard about, here's a made-up image.
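The speed comes from the convolution theorem: multiplying FFTs is equivalent to circular convolution, turning a quartic-cost sweep into roughly N-squared-log-N work per frame. A small self-contained check of that identity, with arbitrary array sizes and values of my choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((64, 64))   # stand-in for a frame
b = rng.random((64, 64))   # stand-in for the ideal-particle kernel

# Multiply the 2-D FFTs and invert: this is the circular convolution.
conv_fft = np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def direct_at(a, b, i, j):
    """Circular convolution evaluated directly at one pixel, for checking."""
    n, m = a.shape
    return sum(a[p, q] * b[(i - p) % n, (j - q) % m]
               for p in range(n) for q in range(m))

# The FFT result agrees with the direct sum to machine precision.
assert np.isclose(conv_fft[10, 17], direct_at(a, b, 10, 17))
```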
So I just calculated this image, but I added noise and some other features to make it seem more realistic. Here's the least squares technique telling me where it thinks the particles are, and these black lines are contours: the one furthest out is the 50% contour and they go up to about 95%. You can see they're all perfectly centered, a very sharp peak right where each particle is. Here's another way: just the cross correlation, which a lot of people use because it's faster; you have only one cross correlation to do rather than three. So it is faster, a factor of three, but it's not quite as good. You can see from the 95% contour that the inferred particle position is a little bit off, and that's because the match looks a little more like a particle if I shift slightly here, since these two particles are close together. So I get this little shift. This is another technique you may have heard of, the Hough transform. It's another way to find particle centers and it also does very well; you can see there's no question that's where the point is. But there are these satellite peaks that occur in the Hough transform, and if you're interested I can talk to you about them later. The satellites are particularly bad when you have a kind of crystalline structure, which you often do get with circles. These things can add up: if I have a particle here, another here, and another here in a crystal structure with the one in the middle missing, I can think there's one there even though there's not. That's a very common problem with the Hough transform. The Hough transform can also be thought of as a least squares fit, but on just the edge as opposed to the entire particle, and because it's only the edge you don't get quite as good a match. And that's basically the idea: here, we're matching the entire particle.
So here's what we do when we want sub-pixel accuracy. We take the same image and find the particles to pixel accuracy using the technique we have. But now, instead of calculating a single particle, I'm gonna calculate the entire image: I write down a function for the whole image. When I plug that function into MATLAB I get this image. So this is the real image, and this is my calculated image. It's basically simple: I just keep adding up ideal particles at every position that I found. I do a little bit more work, because if I just added them up where they're nearly overlapping I'd get some extra weight, so I get rid of that, and that gives a much better version of the image. That's where we get a lot of the accuracy. Now I can do the least squares fitting again, and that starts by taking the difference between the two images: this is the real image, this is the one I calculate, and this is the difference, squared, actually. You can see there are places where it's not right, little bright spots where I don't have the image exactly right. So now I can take this and minimize it over the positions. It's a function of the positions of all the particles, and I can minimize it by setting the derivatives equal to zero and finding the solution to those equations. When I find that solution, the answer is a better version of the position of every single particle. My original fit, the same one we had on the previous plot, had a chi squared of about 1,000; after I find all the positions correctly, I get down to 600.
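A sketch of those two pieces, building a calculated image from found centers and scoring it with chi squared. The tanh particle model and every number below are illustrative assumptions of mine, and the overlap correction the talk mentions is deliberately left out of this sketch:

```python
import numpy as np

def model_image(shape, centers, diameter, width):
    """Calculated image: one ideal particle summed in at each found center
    (the overlap de-weighting from the talk is omitted here)."""
    ny, nx = shape
    y, x = np.mgrid[0:ny, 0:nx]
    out = np.zeros(shape)
    for cx, cy in centers:
        r = np.hypot(x - cx, y - cy)
        out += 0.5 * (1.0 - np.tanh((r - diameter / 2.0) / width))
    return out

def chi_squared(real, calc):
    """The quantity we minimize: the summed squared difference."""
    return np.sum((real - calc) ** 2)

# A pretend "real" image with sub-pixel centers; a model built from the
# pixel-rounded centers misses it, and chi squared measures by how much.
true_centers = [(20.3, 21.7), (45.6, 40.2)]
real = model_image((64, 64), true_centers, diameter=12.0, width=1.5)
rough = model_image((64, 64), [(20, 22), (46, 40)], diameter=12.0, width=1.5)
```

The residual image `(real - rough)**2` shows exactly the off-center bright rings the talk describes.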
And now you can see it's much better, right? Here we had all these bright spots that were off center; now every single one is a nice symmetric ring. Well, a ring means I'm not quite right on the diameter: I need a slightly different diameter, a slightly different width, to really fit it. If I then fit the width and the diameter too, I get this image, which is down to 180, a factor of five better than when we started. It turns out that with this technique I can get sub-pixel accuracy, typically to a hundredth of a pixel. So you can go from pixel accurate to a hundred times better, and that's why we use it. Now, this technique does take a long time, because we have to calculate the image and solve a coupled set of many, many equations. It typically takes on the order of a second per frame on a regular computer. So if you're doing a hundred thousand frames, you'd like to have a cluster, because a hundred thousand frames at a second apiece takes a long time. So we do this on a cluster. You can also speed this up on a GPU, but not as much, because solving the equations is the thing that takes the longest here, and that's not much faster on a GPU than on a regular CPU. So you're stuck with that sort of timeframe until Intel comes out with the next processor and the one after that; wait a few years and it'll be 10 times faster. But right now it's about a second per frame, which is reasonable for this kind of accuracy. Now let's just compare what we get from, yes? Yeah, it's exactly like that. Here's the equation for that function: I take my ideal particle image, place it at position n, and sum that up over all the particles. So I take an image of a single particle set at the spot where that particle is, and I add to that another one with the next particle set at its spot.
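The refinement step can be sketched like this, re-using the same assumed tanh particle model and handing the minimization to a generic least-squares solver rather than the speaker's own coupled equations; all positions and parameters are made up for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

def model_image(shape, centers, diameter, width):
    """Sum of assumed ideal tanh particles at the given centers."""
    ny, nx = shape
    y, x = np.mgrid[0:ny, 0:nx]
    out = np.zeros(shape)
    for cx, cy in centers:
        r = np.hypot(x - cx, y - cy)
        out += 0.5 * (1.0 - np.tanh((r - diameter / 2.0) / width))
    return out

# A noiseless "real" image made from known sub-pixel centers.
true = np.array([20.34, 21.71, 45.62, 40.18])
real = model_image((64, 64), true.reshape(-1, 2), 12.0, 1.5)

def residual(p):
    # Difference between the real image and the image calculated from the
    # candidate positions, flattened for the solver.
    return (real - model_image((64, 64), p.reshape(-1, 2), 12.0, 1.5)).ravel()

# Start from pixel-accurate guesses and refine all positions jointly.
start = np.array([20.0, 22.0, 46.0, 40.0])
fit = least_squares(residual, start)
```

On this clean synthetic image the solver recovers the sub-pixel centers far better than the pixel-accurate starting guesses.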
And I add to that another one with the next particle set there, and by the time I've added them all up, every single particle is in there. This W that I have out front is just a little weighting function that says: only include a particle inside its own Voronoi cell. That is to say, wherever a pixel is closest to that particle, I use that particle's function, and wherever it's not, I use the other particle's. That's how we get this nice image that looks like this. Yes, definitely. Because, for example, look at this area right here: these two particles are very much overlapped as far as the image is concerned. If I fit a single particle here, it doesn't work quite right, because my function is not like this; it doesn't have red on the edges. So I won't get the best answer I possibly can using a single particle, because I'll actually find something slightly off. In this one, it might be pulled slightly toward this guy, because this guy has a stronger overlap than that one. That shift is small, less than a pixel, but it's way more than a hundredth of a pixel. So you can't get that kind of accuracy by using just a single particle at a time. This one over here would work perfectly, right? I can find this one to a hundredth of a pixel with my original technique. But this one I can't; the ones in this red row right here will be very hard to find to that accuracy. It's because we're doing least squares fitting. There's a chalkboard here but no chalk, so I can't draw it, but the basic idea is: if you take a little box around this particle, the box has other particles in it, and so it's very hard to fit it perfectly on its own.
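One way such a weighting W might be sketched: use the distance to the nearest found center as a stand-in for the Voronoi assignment, so each pixel is drawn by only one particle. The tanh model and all numbers are again my own illustrative choices:

```python
import numpy as np

def voronoi_model(shape, centers, diameter, width):
    """At each pixel, draw only the particle whose center is nearest, a
    Voronoi-style weighting that stops overlapping neighbors from being
    counted twice."""
    ny, nx = shape
    y, x = np.mgrid[0:ny, 0:nx]
    cx = np.array([c[0] for c in centers])
    cy = np.array([c[1] for c in centers])
    # Distance from every pixel to every center: shape (n_particles, ny, nx).
    d = np.hypot(x[None] - cx[:, None, None], y[None] - cy[:, None, None])
    r = d.min(axis=0)   # keep only the distance to the *nearest* center
    return 0.5 * (1.0 - np.tanh((r - diameter / 2.0) / width))

centers = [(30.0, 32.0), (38.0, 32.0)]          # strongly overlapping pair
weighted = voronoi_model((64, 64), centers, 12.0, 1.5)

# A naive sum double-counts the overlap region and overshoots 1.
y, x = np.mgrid[0:64, 0:64]
naive = sum(0.5 * (1.0 - np.tanh((np.hypot(x - cx, y - cy) - 6.0) / 1.5))
            for cx, cy in centers)
```

The weighted model never exceeds the single-particle brightness, while the naive sum nearly doubles it between the two centers.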
It's the fact that these particles are nearby that makes fitting the whole image work. You're actually using the neighboring particles, using all of them together, to tell you where the best fit is for everybody, and that together is what gives you the much higher accuracy. So let's compare particle tracking and PIV. We took the particle tracking we did on these images and found the y velocity as we move across this rotating drum. The dashed line is from the particle tracking and the circles are from the PIV. What you can see right away is that the PIV is very good. There are no fitting parameters here, by the way; I don't fit between the two, this is just the answer I get from both techniques. So it's pretty good, but it definitely has more noise; we don't get the very high accuracy we get from the particle tracking. The other interesting thing is that the particle tracking line is zero out here, because there are no particles there. But with the PIV, as I mentioned before, I can actually track the cylinder itself moving, and this dashed line here is the overall velocity of the whole thing. You can see what's basically happening: in the bottom of the chamber, everything is in solid body rotation, and as we move up across the chamber, the speed increases as material slides down the top of the cell. So those are the sorts of differences you can get between these techniques. Let me just finish up by mentioning a few other ways we can extend this kind of particle tracking; I'm not gonna go into detail on any one of them. For example, different shapes: here are some images of little rods that we'd like to track. The way I tracked them was by finding the ends of each rod, each of which has a sort of particular shape, and fitting that shape.
And then, if I wanted to, I could get subpixel accuracy by fitting those actual objects. Here's another example. This is a bug, and Dan Goldman's not here this week, so I don't remember what kind it is now. It's crawling along and I want to track it. So I said, well, this part looks kind of like a circle and this part looks kind of like a circle, and if you fit those, these are the points you get. So you can track weird things. If you want really high accuracy, you can make your particles big. We took some data where every particle was about 40 pixels across, really zoomed in, and used that very high resolution image to do the subpixel tracking. Here are the velocities we get: the particles are falling under gravity most of the time. They fall, the velocity increases, then they collide with some other particle, the velocity changes, and they fall under gravity again. If we zoom in on this little region, there are three lines, kind of hard to see here, and each one represents a change of 30 nanometers in position. If we were off by plus 30 nanometers, we'd be here; by minus 30 nanometers, we'd be here. And you can see our line goes straight through the middle. This is just gravity; I'm just fitting this to gravity. 30 nanometers is way less than the wavelength of light I'm using to create this image. So think about that for a minute. There's this Rayleigh criterion for determining the positions of things, based on the wavelength of light, that says you just can't go below it. Well, can't you? You can, in many ways, and this is one example, because I'm using all the points in the particle to figure out where it is. I'm not just using one point, which is the situation the Rayleigh criterion addresses.
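That gravity fit can be illustrated with made-up numbers at the stated noise scale: fit tracked heights to a parabola and read the acceleration off the quadratic coefficient. Everything below is synthetic data of my own construction, not the experiment's:

```python
import numpy as np

# Hypothetical tracked heights (meters) sampled every millisecond during
# free fall, with 30 nm of tracking noise added.
g = 9.81
t = np.arange(0, 0.05, 0.001)
rng = np.random.default_rng(1)
y = 0.10 - 0.2 * t - 0.5 * g * t**2 + rng.normal(0.0, 30e-9, t.size)

# Fit a parabola: the quadratic coefficient recovers -g/2, and the
# residuals stay at the 30 nm tracking-noise scale.
coef = np.polyfit(t, y, 2)
g_fit = -2.0 * coef[0]
resid = y - np.polyval(coef, t)
```

With positions this precise, the fitted acceleration pins down gravity to far better than a percent, which is why a 30 nm offset would visibly miss the line.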
If I have two point sources, how close together can they be before I can't distinguish them? That's not the question I'm asking. I'm asking a different question: if I have lots of points, can I tell where they are on average? And so we get something like one part in 10 to the five. The particles are about three millimeters, we have about 30 nanometer resolution, so that's one part in 10 to the five in finding their positions. This can be very important for determining things like the radial distribution function in a granular gas. It's a very important quantity, everything is based on it, and it's not easily measured; this is one way you can measure it. You can also extend the technique to 3D, at least if the sample is thin or isn't too dense. In this case we've done the same thing. This is the same data, but we also have the third dimension: we have a thin container, and the particle is bouncing around in that third dimension as well. This is just a map of the particle's trajectory in the third dimension, and it's very small, about 0.2 millimeters across, because the particles are very well confined. We just take two cameras and correlate the two different views to get the third dimension. Here's another example: measuring forces. This is an actual experimental image of photoelastic particles being compressed. Whenever a photoelastic particle is compressed, if you put it between crossed polarizers you get a pattern, and that pattern depends on the force. So we took this image and used our particle tracking technique to find the intersections. You can see every one of these intersections has a certain pattern, this little dipole-looking object. So we just tracked that dipole object and got all these positions.
Then, from all these positions, we calculated what the stress should be in each particle, and therefore what the light going through each particle should look like. And this is our completely calculated image. It's supposed to look like this one, which it does, and it's completely calculated. Yeah? Yeah. Well, it just turns out that these dipoles are kind of self-similar: even though this one's bigger than that one, up close they look the same, so that's good enough. If they weren't the same, we'd just add an extra step to the fitting, some sort of scaling: we'd compute this picture as a function of scale, some dots would be bright at one scaling and other dots at another, and we'd take the maximum to find the final value. Just to zoom in on this image: here's that one particle, here's the experimental image, and you can see experimental defects. Here's the calculated image, with all the detail we're able to get, and this is the difference. Basically there's very, very little difference between the two, just from using this least squares fitting technique. You can fit anything you have an equation for. People do use this technique, or similar ones, to do facial recognition. And these are the results of finding the force from those pictures. The circles are from an Instron, a force measuring device with very high accuracy, and the line is from doing it from the images. These other curves are other techniques which don't work as well, so you can ignore them. But the line and the circles are the very high resolution force measurement and the technique using the image processing. So thanks for your attention. Thank you.