I need to have it because I don't think we're going to record this. But so this will be our normal schedule for most of the days. We'll come in at 9 o'clock and start our lecture for the morning. These lectures are supposed to be tutorials: they should be broad and general, and you should feel free to ask questions, because the idea here is for you to learn something new. No one is an expert on all of these topics, including us, the faculty. So this is your chance to learn about a lot of different topics from the faculty that are here. So we have that in the morning, then we go straight to our hands-on sessions — not like we did yesterday in the afternoon. We'll go straight there in the morning: we'll refresh, then go to the hands-on sessions. After that, we have lunch, and after lunch we come back here again for a second talk. There may be some other things we do, like an open mic session. We might get Joe Niemela — where did he go? There you go — to do this international collaboration talk. I haven't asked him yet, but he'll probably come in one day and tell us about international collaborations. Then in the afternoons, we have some different things. The first three afternoons will be all about professional development, in the sense of writing posters and snapshots, and you'll hear all about that this afternoon when Mike comes and talks about scientific writing and some other things. And then free time and dinner. That's the basic schedule for the rest of the school. Here are the assignments for all the hands-on sessions. We've already done Monday — hopefully these are the assignments you had for Monday; these are the ones I had for you. And then these are your assignments for each of the days.
So you just look at Tuesday, you find your name — H — and you go to the Info Lab across the way. So that's how you use that. I'm just going to pass these around. On the back is the latest schedule that we have; there may be a couple of little additions later on as we go along. Just pass these back. Faculty can take these as well. There should be enough for everybody; if not, I can make more. And I think that's mostly it. So I would just like to point out: Joe Niemela, who is one of the directors — please stand up, Joe — has been a key person. He's a local director along with Maria Liz Crespo, and Joe has supported these schools ever since their inception back in 2008. Without Joe, these schools wouldn't happen. So let's give him a hand, and I'd like to ask Joe to say a few words. Joe plays a mean saxophone — will you play for us next week on Friday? Oh, okay. So yeah, we have a band built up over the years, a researchers' band. Okay. I really need this. We're recording, so. Oh, you're recording? Okay, so I'll take away all the jokes. Anyway, it's a real pleasure. It's always a pleasure. This is actually one of our premier activities. I'm not just saying that because I'm in front of Mike and everybody, and Ken and Mark and everybody else. It really is one of our main activities that we love here at ICTP, because it involves a lot of things: professional development, hands-on equipment. Well, you've already heard all this story. All right, so anyway, I think it embodies what ICTP likes to do, thinking ahead to the future to do more of these kinds of activities. So anyway, it's really been a pleasure. And I hope to get the band — I just heard about that 10 seconds ago, so I've got to ask the rest of the guys. Okay, all right. It's a week from Friday. A week from Friday. Okay, got it. Okay, and so without further ado, I'll get Ken to introduce the speaker.
Okay, it's my great pleasure to introduce Mark Shattuck of City College, New York City. It turns out I've heard this talk before, and it's my all-time favorite talk. You're going to learn so much from this talk that you're just going to love it too. So with that — no pressure. No pressure, exactly. Here's Mark. Okay, thanks. Okay, well, good morning. As Ken said, my name is Mark Shattuck. I'm at the Benjamin Levich Institute at the City College of New York. The slides that I have for you today are from a larger talk, a three-hour talk, actually, that I gave. And we're not going to be here three hours, don't worry. But I wanted to include all of these slides. They add a lot of extra detail that I won't be able to talk about today, but they'll be on the hands-on website, the hands-on Google Drive. So you can download this talk, along with all the movies, from there. It's already up there now. And any of the details that you see me skip over, if you're interested, you can look them up there. I'll try to point them out a bit as we go along. I'll also just mention that I have edited a book. I'm sort of obligated to tell you this now that I've done it — my publisher says I'm supposed to. All of the information that I'm talking about is also in the book chapter that I wrote in this book. But the other chapters in this book are also very good if you're interested in granular materials. That's just a list of everything that's in the book. The first thing, and the most important thing — if you take away nothing else from this talk, this is what you should take away: your brain is the most amazing image analyzer ever. And there's nothing even close. No computer can come close to what your brain can do. I can look out at this room. I can read that sign. I can see Mike out of the corner of my eye. I can identify almost everyone almost instantaneously just by glancing around the room.
No computer program can do that, or even close, especially at the level of detail that I can. I can read the badges from here. I can see a glint off a pin. All this stuff I can identify very easily, just quickly like that. It would take a computer program hours and hours to even come close to doing any of these sorts of things that I can just do like that with my brain. So the first rule of image analysis is: if you can't figure out what the image is telling you, there's absolutely no way the computer can. If you can't see it, you're done. There's no chance. Unfortunately, the converse is not true. Just because you can do it definitely does not mean that the computer can do it. So even if you can see it — you see, oh, well, there's an amoeba here moving along — your mind is filling in all kinds of information that the computer may or may not be able to get. So the computer can't necessarily do everything that your brain can do, but at least you have to be able to do it with your brain for the computer to even have a chance. So that's what you should really get out of this. And the other take-home message is that, because of that, the better you can make your images, the better your image analysis will be. It sounds kind of trivial, but it's very important. A lot of times people think, well, I'm really good at software — I know Matlab, I know ImageJ, I can do all this manipulation after the fact. But it's almost always better to get more data, better data, in the beginning than it is to try to post-process it. So that's a very useful rule of thumb. Get your images as good as you can. Get the detail as good as you can before you come to the computer. So that's the main thing. If you wanna go to sleep now, you've learned the most important parts of what I'm gonna tell you today. Now, that being said, why would we use a computer? I already just said they're terrible at their jobs. But that's not true.
There's a lot of great things that computers can do. One is that they're much faster at very repetitive tasks. So for example, particle tracking — I'm gonna talk a lot about that in the second half of the talk. To do particle tracking in the old days, we would take a movie, we would project a single frame onto a very big wall, we would have a piece of paper on the wall, we'd identify the centers on the piece of paper, then we'd take it down, and we'd measure the distance from a focal point, and then we'd do another piece of paper, and we'd keep doing that over and over again. So I was not tracking 400,000 particles per second using that method. I wasn't even tracking 4,000 particles per second, and I probably didn't even track 4,000 particles total using that technique. It's just too intensive to do. The computer just happily does it without any problem. The other thing is that it can be extremely accurate. You can do a much better job with the computer at getting high accuracy, finding the exact center. Our brains are really good at finding the center of a circle or finding certain things, but the computer can do much better in terms of finding that exact point that you're looking for. In terms of speed, we have GPU codes that can do about 400,000 particles per second with pixel-accurate code, and we can do about 4,000 particles per second with 1/100th of a pixel accuracy routinely. So this is the kind of thing that we do easily on computers. We can get accuracies up to something like 55 nanometers on a three-millimeter particle. That's about 1/10th the wavelength of light. So we're finding the position of the particle to 1/10th the wavelength of the light. This is an incredibly precise, accurate measurement, and that's what the computer is really, really good for. And now, why would we ever need this kind of accuracy? You might say, oh well, I can see what the particles are doing, so I don't really care.
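To give a concrete sense of how sub-pixel accuracy is possible at all, here is a minimal sketch of one standard trick: fit a parabola through a sampled peak and its two neighbors and read off the vertex. This is an illustration of the idea, not the GPU code just described.

```python
import numpy as np

def subpixel_peak_1d(y):
    """Locate the maximum of a sampled peak to subpixel accuracy by
    fitting a parabola through the peak sample and its two neighbors.
    A common trick, shown here for illustration only."""
    i = int(np.argmax(y))
    i = min(max(i, 1), len(y) - 2)      # keep the 3-point stencil in bounds
    ym, y0, yp = y[i - 1], y[i], y[i + 1]
    # Vertex of the parabola through (-1, ym), (0, y0), (+1, yp)
    denom = ym - 2 * y0 + yp
    offset = 0.5 * (ym - yp) / denom if denom != 0 else 0.0
    return i + offset

# A Gaussian peak sampled on a pixel grid, true center at 10.3:
x = np.arange(21)
signal = np.exp(-((x - 10.3) ** 2) / 4.0)
print(subpixel_peak_1d(signal))  # close to 10.3, well below one-pixel error
```

The pixel grid alone can only say "the peak is at 10"; the three-point fit recovers the fractional part to a few hundredths of a pixel for a well-sampled peak.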
Let me just give a quick example of where we might use this kind of tracking, and where we have used it. In this experiment, a granular experiment, we take a thin layer of grains and we shake them up and down and we look down on top, and this is what you see from on top: the particles moving. They're moving around like a gas, but are they like a gas? That was the question we were trying to answer. Are they really like an actual gas, like a hard sphere gas, like you learn about in statistical mechanics? Well, for one thing, they aren't like a gas because they're not conserving energy. In real gases, the particles conserve energy, and these are definitely not doing that. So are they the same or are they different? How would we find out? Well, what we did in a series of papers was to try to match up all the things that idealized energy-conserving hard spheres do with the things that real particles do. And one of the key things that we did was to look at the radial distribution function. The radial distribution function is just asking: what's the probability that I have a particle a certain distance from another particle? And so what we did was compare the radial distribution functions that we measured from experiment — those are the solid lines — with simulations of actual hard spheres in an energy-conserving situation, and you can see there's almost perfect agreement. So what it told us was that these particles actually are acting as if they were in regular equilibrium, just like you have in an ordinary hard disk system. So, what is phi? Phi is the volume fraction. It tells you how much coverage there is of particles. So this is a lower volume fraction, fewer particles per unit area; this is a higher volume fraction, more particles per unit area. And each one of these curves represents a different packing fraction, or a different density, of this hard sphere gas. Another example is in the velocity distributions.
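Before moving on to the velocity distributions: the radial distribution function just described can be estimated directly from measured particle positions. Below is a minimal sketch, assuming 2-D positions in a periodic box; it is an illustration of the definition, not the actual analysis code from those papers.

```python
import numpy as np

def radial_distribution(pos, box, dr=0.1, r_max=5.0):
    """Estimate g(r) for 2-D particle positions in a periodic box:
    the probability of finding a particle a distance r from another,
    normalized so an ideal gas gives g(r) = 1 everywhere."""
    n = len(pos)
    density = n / (box[0] * box[1])
    bins = np.arange(0.0, r_max + dr, dr)
    counts = np.zeros(len(bins) - 1)
    for i in range(n):
        d = pos - pos[i]
        d -= box * np.round(d / box)     # minimum-image convention
        r = np.hypot(d[:, 0], d[:, 1])
        r[i] = np.inf                    # exclude the self-distance
        counts += np.histogram(r, bins=bins)[0]
    shell_area = np.pi * (bins[1:] ** 2 - bins[:-1] ** 2)
    # Normalize by the ideal-gas expectation: density * shell area per particle
    return bins[:-1] + dr / 2, counts / (n * density * shell_area)

# Sanity check: uniformly random ("ideal gas") positions give g(r) close to 1.
rng = np.random.default_rng(0)
box = np.array([20.0, 20.0])
pos = rng.uniform(0.0, 20.0, size=(500, 2))
r, g = radial_distribution(pos, box, dr=0.5, r_max=5.0)
```

For real hard disks, g(r) would instead show the contact peak and oscillations like the solid curves on the slide.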
If you have an energy-conserving system, you know that the velocity distribution has to be Maxwell-Boltzmann in equilibrium. But if you don't, you know it can't be Maxwell-Boltzmann. So what is it? Well, it's this form right here. It's a Maxwell-Boltzmann plus a correction that's fourth order in the velocity — c is the velocity. And we were able to measure this correction in this gas here. If we had only a tenth of a pixel accuracy in measuring this velocity, then this would be the size of our dots. They would be this wide. And a tenth of a pixel is what you normally see: if you just Google particle tracking, they'll say, hey, I've got a tenth of a pixel. Well, a tenth of a pixel is not enough to see this kind of effect. You need something better. We had a hundredth or maybe a two-hundredth of a pixel accuracy in this experiment, so we could easily see this deviation from Gaussian behavior, which was important for theorists trying to understand these kinds of systems. Without that kind of accuracy, we wouldn't have been able to understand these things. Now, just as a general outline for the rest of the talk, I'm gonna break it up into two parts. First I'm gonna talk about some general image analysis tools that are applicable to very many applications in image processing. And then I'm gonna talk more specifically about particle tracking, actually how we might do tracking of particles. And by particles here, I mean pretty much whatever you wanna track — anything that's moving in a frame. So that's the basic outline of what we're gonna talk about. Now, to make sure that we're all on the same page, I wanna just describe what an image is. An image in a computer is just a matrix of numbers. That's all an image is. So if I look at this image, what it's telling me is that the pixel at five, five has a value of minus eight, whatever that means. And this list of numbers is just representing something about the image.
It could be the intensity — like in this case, if it's a black and white image, a grayscale image, it might just be the intensity of light that you're seeing. Or it might be the amount of red that you're seeing in an image, and there might be a green and a blue channel as well that we could look at. It could be something else, like the spin density in magnetic resonance imaging or the optical density in CT, which you'll learn about in one of the hands-on sessions. But whatever it is, it's just a matrix. And to display this information, we can display it in multiple ways. These are the same images, but they have different colors. Why do they have different colors? Well, I'm using a different color scale to represent the numbers. In this case, the lowest negative value is black and the highest positive value is white. In this case, it's blue and red, and it goes between these colors. So you can use any sort of color scale to represent an image, and this is what we'll do often to allow us to see certain details in an image. Now here is an example of a movie. And what's a movie? Well, it's just a bunch of images stacked together. So now it's just a three-dimensional array: you've got our two-dimensional array of numbers this way and this way, but then we also have the array coming out in time. This is just a computer simulation that I wrote at some point. And so, as it changes in time, we look at the different frames one at a time, and that tells us what's happening over time. Those are the kinds of things that we might like to analyze in these systems. Now we're gonna talk about a very general tool that's used to analyze all types of signals but can also be applied to images, and that's the Fourier transform. I'm not gonna go into any of the details of the math in this talk, but you can look it all up on Wikipedia if you wanna know more details about the math.
But the Fourier transform is a very important technique because it allows us to extract frequency information from time information, or spatial frequency information from an image. And that's what we use it for. The other reason that we use it is that there's a very nice trick for calculating the Fourier transform, called the fast Fourier transform, which makes it very fast to do. If you think about a general linear transform of a vector, you have to multiply it by a matrix, which takes n squared operations. But when you multiply by this special matrix, with the special symmetries of the Fourier transform, you can exploit those symmetries to get something that goes like n log n. So basically it takes n log n operations rather than n squared to do this transformation, and that makes it very useful. And so people try to write many other transformations in terms of the Fourier transform so that they can be done in this fast way. Now what is the Fourier transform? I said I'm not gonna go into the math of it, but I want you to understand the idea. What the Fourier transform does is tell us the sine and cosine content of an image or a function. So here, for example, is a one-dimensional image — what we'd normally call a function, or a time series, or something like that. And what we have here are two channels out of phase with each other: one of them is a sine function, one is a cosine function. I just use this complex version because it makes the display easier. You could do it on real data just the same; you'd just get a slightly different version of the Fourier space. But when you have this complex data and you take the Fourier transform, you get one peak. That peak is located at five, as you can see here. If you count one, two, three, four, five, it's telling me how many cycles there are in this waveform. It's telling me the exact sine and cosine content of this.
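The one-peak example above can be reproduced in a few lines: a complex sinusoid (cosine plus i times sine) with exactly five cycles across the window gives a single peak at index 5.

```python
import numpy as np

# A complex sinusoid with exactly 5 cycles across N samples,
# like the sine/cosine pair on the slide (cos + i*sin).
N = 64
x = np.arange(N)
signal = np.exp(2j * np.pi * 5 * x / N)

# Its Fourier transform has a single peak, at index 5:
# the FFT directly counts the number of cycles in the window.
F = np.fft.fft(signal)
print(np.argmax(np.abs(F)))  # 5
```

A real-valued sine or cosine would instead give the "slightly different version of Fourier space" mentioned above: a pair of mirror peaks at indices 5 and N − 5.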
If this were a more complicated signal — like if it went up really peaky and then came down fast, and then went up peaky and came down fast again — it would have the same basic frequency, but it would have to have other frequencies to make that different shape. And so that's what the Fourier transform is telling us: the frequency content of the function. Now we can extend this to two dimensions, or as many dimensions as you want. So here is a two-dimensional version of this same function, except now in the y direction we're changing, but in the x direction we're completely constant. That means a frequency of zero. And if we look at the Fourier transform of this, it has a real and imaginary part, just like up here — there's the blue and the green. But the real part is a certain distance away from the center here. How far away is that? Well, it's exactly n naught, this same number here, which is just counting one, two, three, four, five — the number of cycles that we have. And if you have a more complicated image, there'll be more dots. You can think of the Fourier transform as basically the dots: what frequencies do we have, how much of each frequency that we can possibly have in this image, and where is each one? And so here's an example of a more complicated image. This is from the movie that I showed you earlier. Here's — sorry, here's two different points in that movie, but with different boundary conditions, actually. And what we can see in the Fourier transform — here I'm just looking at the absolute value, the modulus, of the Fourier transform — is that the majority of the content is a certain distance away from the center. That distance is telling us the number of these peaks: one, two, three, four, five. If I count them across, that's how many there are across the image.
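The 2-D version can be checked the same way. For an image that varies only in y, all the Fourier content sits a distance n₀ from the origin along the y-frequency axis. (Note that numpy's unshifted FFT layout puts the origin at index (0, 0), so a real image also shows the mirror peak at N − n₀; `fftshift` would move the origin to the center as on the slide.)

```python
import numpy as np

# An image with 5 cosine cycles along y and constant along x,
# i.e. frequency zero in the x direction.
N = 64
y = np.arange(N)
img = np.cos(2 * np.pi * 5 * y / N)[:, None] * np.ones((1, N))

F = np.abs(np.fft.fft2(img))
# All the energy sits at (ky, kx) = (5, 0) plus the mirror peak (N-5, 0)
# that every real-valued image has.
peaks = np.argwhere(F > F.max() / 2)
print(peaks)  # two peaks: (5, 0) and (59, 0)
```

A ring of dots at radius n₀, as in the granular-movie spectra, is just this picture with the same frequency present in many directions at once.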
But you can see it's not a perfect thin line, because we don't have just one frequency like we had in that other picture. We see that it's kind of broad. And you can see that: sometimes it's thicker and sometimes thinner. There's a thin one; there's a thin piece; there's some thinner stuff there. But it's thicker and thinner, and you see that that is represented by the fact that this ring is broad. This is just a zoomed-in version of the same picture — you can see it's very broad. And you could find out, well, it looks like there's a little bit of a 45-degree bias. See, those peaks are higher there. And you actually can go in and find this 45-degree bias; it has to do with the boundary conditions that I use in the system. And if I change those boundary conditions, for example here, now you can see there's a bias toward 0 and 90 degrees. And also, you can see that there's a much tighter wavelength. This is much closer to a perfect sine wave, and you can see that the ring is thinner. There are fewer different frequencies involved — there are still lots of different directions, but fewer frequencies. And you can see this overall 90-degree bias that we have. So this is the kind of analysis that one can do using the Fourier transform on images. Now, something that's related — and, as I mentioned, something that we care about because we can calculate it using the Fourier transform — is the convolution and the cross-correlation. These are very general techniques in mathematics, and they're also very commonly used in image analysis. I just want to describe what these two things are in detail, because it's often hard to figure out what people are talking about. They say, oh, just convolve that with blah; this is convolved with that; or this is correlated and cross-correlated. It's very hard to figure out what's going on. So it's good to go through and try to understand what's happening.
So to make this very simple, we're going to have two very small images, two one-dimensional images. This one's going to be five pixels across, and this one's going to be three pixels across. And what I'm going to do is convolve these two together. So what that means is I take this one and I place it here. Now, in the convolution, I take my second image and I flip it. So I take the 1 0 2 and I flip it over: 2 0 1. Now I line it up — I line up this side with this side. I multiply these two together and add up, and so I get 1 times 1 is 1. Now I'm going to shift it over by one spot. Now I get 1 times 3 plus 0 times 1 is 3. Now I'm going to shift it over again. I get 2 times 1 is 2, plus 1 times 2 is 2; together that's 4. And I shift it over again, multiplying and adding as I go. And then I shift it over again, and again, and again. And now I can't shift it over anymore — that's as far as I can go. If I shift it over any further, then I won't have any overlap left; you can think of the rest as all zeros, all the way across. So this is the convolution. And this is generally more useful in math than it is in image analysis, because of this flipping. In image analysis, we usually won't want to do the flip, and you'll see why in a few minutes. But in math, this is used very commonly, because this is also the way one can multiply polynomials: if these two lists were the coefficients of two polynomials, this result would be the coefficients of their product. So it has a lot of uses in mathematics. But the convolution doesn't necessarily have that many uses in image analysis, although you'll see that you can use it sometimes. Now, in terms of the different types of convolutions — if you go to Matlab or to ImageJ or to Python, you'll find that there are three types of convolutions that people talk about.
One is the full convolution. That's the one the mathematicians usually care about, because it gives you the full thing — every possible position that you can put it through. The other one that's kind of interesting is the one called valid. If you look here, these three in the middle — those are the only places where I had full overlap between the two signals. So from here to here, I have full overlap. That's the valid region, the region where I got all the overlap I can possibly have. And then the one that we most often use in image analysis is same. That's where I just want the two images to be the same size at the end, because I'm processing them sequentially through other things. So that's the one that's often used there. Now, what's the difference between the convolution and the cross-correlation? The only difference is this minus sign right here. And the minus sign is just whether we flip it or not when we start. So for the correlation, instead of flipping this first, we just take it as it is, and we do exactly what we did before: we put it here, we put it here, put it here, we put it here. And it's the same as before — you have the full, the same, and the valid, all the same as we had before, but now instead of flipping it first, we have left it the same. And this correlation, as the name implies, is kind of telling us where this thing looks most like this thing. And you can see where it's largest: it's 8. So what it's saying is that 2, 1, 3 is closest to 1, 0, 2 — you can see they're just off by adding 1 to each one; they just shift by 1. No other section here is so close. The next closest section is over here, where we have 6. But that one doesn't really count, because we're kind of off the edge, so it's harder to compare.
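The 1-D walk-through above can be checked directly in numpy. The specific values — a 5-pixel signal [1, 3, 2, 1, 3] and kernel [1, 0, 2] — are my reconstruction from the numbers quoted in the talk (a maximum correlation of 8 where 2, 1, 3 lines up with 1, 0, 2, and 6 at the edge), so treat them as an assumption.

```python
import numpy as np

# Reconstructed example signals (assumed, chosen to match the quoted values):
a = np.array([1, 3, 2, 1, 3])
b = np.array([1, 0, 2])

# Convolution: kernel flipped, then slid across at every shift.
print(np.convolve(a, b, mode="full"))   # [1 3 4 7 7 2 6] - every shift
print(np.convolve(a, b, mode="valid"))  # [4 7 7] - full-overlap shifts only
print(np.convolve(a, b, mode="same"))   # [3 4 7 7 2] - same length as a

# Cross-correlation: the same sliding sum, but without flipping the kernel.
corr = np.correlate(a, b, mode="full")
print(corr)                 # [2 6 5 5 8 1 3]
print(np.argmax(corr))      # 4 -> where [2, 1, 3] lines up with [1, 0, 2]
```

The maximum of 8 appears exactly where the signal segment most resembles the kernel, and the 6 is the off-the-edge value that "doesn't really count."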
But when we get to here, this is the place where they're closest together — or that's sort of what people say about the correlation. I'll talk about exactly why that's the case in just a minute. So that's the convolution and the cross-correlation. Very often, software packages only provide you with the convolution. It's very common to leave out the correlation; the reason is that you can always just flip and get the other one, so they just give you one. But the one you'll use most often in image analysis is the correlation. You can use the convolution if your images are symmetric, because then it doesn't matter if you flip. So that's another place where you can use it. Now, how would we do this on an image? Let's say we wanted to convolve this image with this image. We do just what we did before: we take this image and we put it up, with the corners next to each other. We multiply together and add it all up. That's going to give us this very dark value, because there's very little overlap. Then we just keep moving it over, moving it over, moving it over, all across this image, until we get a new image. And here's the new image that we get. We have the same three regions: the valid region, which is the place where I can put this all the way across and fit it inside; the full image, which is the one I get by moving all the way out; and then the same size. So those are the three images, and you can see the convolution takes on a different character as you move between these three regimes — you're not really getting the same thing out here as you got in here. And a convolution like this is a kind of blurring. It's basically blurring the image. So what you see is a blurred version of this. If you were to blur your eyes and look at it, you can kind of see that's what you should expect to see on this type of image. And so that's what convolution does.
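The blurring behavior of convolution is easy to demonstrate. Here is a small direct implementation ("same"-size output, zero padding) applied to a normalized box kernel, so each output pixel becomes the average of its neighborhood; real code would use a library routine or the FFT.

```python
import numpy as np

def convolve2d_same(img, kernel):
    """Direct 2-D convolution with 'same'-size output and zero padding.
    A small teaching implementation, not production code."""
    kh, kw = kernel.shape
    flipped = kernel[::-1, ::-1]          # convolution flips the kernel
    padded = np.pad(img, ((kh // 2,) * 2, (kw // 2,) * 2))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out

# Convolving with a normalized box kernel is exactly the blurring described:
# a single bright pixel spreads out into its 3x3 neighborhood.
img = np.zeros((7, 7))
img[3, 3] = 9.0
blur = convolve2d_same(img, np.ones((3, 3)) / 9.0)
print(blur[2:5, 2:5])  # a 3x3 patch of 1s around the old bright pixel
```

Because the box kernel is symmetric, flipping it changes nothing, which is exactly the "you can use convolution if your images are symmetric" point above.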
Now let's look at the difference between convolution and correlation. Here I've taken a little bit out of this image — I took this bit out of this image — and I've correlated it with this image, and I've convolved it with this image. And what you can see is that in the correlation, I get this big, bright spot right at the center, where those two things exactly line up. So I get the brightest spot where the two things are the same. That's what people mean by correlation: they're correlated there; they look the same. You can also see that you can use it to pick out little areas that look similar. For example, this little guy right here looks very similar to this, and you can see it has a very high correlation. Here, in the center of this, the convolution doesn't really tell you anything — it's sort of zero at that point, and there are bright spots in other places. That's because this is just blurring the image with this weird blurring tool. So that's the main difference between correlation and convolution. Now, why does this work? A lot of people just say, well, it's correlation, so it's correlating. That's one answer. But there's a mathematical reason why this works, and it has to do with a very powerful technique that people use in lots of areas of research: least squares fitting. The idea is that I want to find one function that's as similar to another function as possible in a least squares sense. That is to say, if I subtract and square, when will that be smallest? That's actually how I do the scheduling: I put in a cost function for all of your preferences, and I try to find the assignment that minimizes that cost function. So I use that same sort of technique here. If we do this in terms of images, what we might like to ask is: when is this little patch of an image most like the actual image we have?
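In symbols — the notation here is mine, with $I$ for the image and $P$ for the patch, matching the $n+k$, $m+l$ indices quoted below — that least-squares question and its expansion look like:

```latex
\chi^2(n,m)
  = \sum_{k,l}\bigl[I(n+k,\,m+l) - P(k,l)\bigr]^2
  = \underbrace{\sum_{k,l} I(n+k,\,m+l)^2}_{\text{local image energy}}
  \;-\; 2\,\underbrace{\sum_{k,l} I(n+k,\,m+l)\,P(k,l)}_{\text{cross-correlation}}
  \;+\; \underbrace{\sum_{k,l} P(k,l)^2}_{\text{constant}}
```

The patch term is a constant, and the cross-correlation enters with a minus sign, so minimizing $\chi^2$ means maximizing the correlation. The leftover image-energy term, which varies with position, is why plain correlation is "not the full least squares fit."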
So we take the image we have, we subtract off the patch, we square it, and we sum it all up. That's what we would do if we were doing this least squares fit. This will give us the smallest value when these two things are the same — sorry, oh, come on in — this will give us the smallest value when these two are the same. So this is where this comes from. But we can take one more step: we can actually square this out. We get the image squared, minus 2 times the image times the patch, plus the patch squared. And if you look, this middle term is the exact definition of the correlation — it's got the plus signs just like we have, n plus k, m plus l. So the major contribution to this least squares is the correlation, and it comes in negative. So if you want the smallest value of this chi squared, you want the largest value of the correlation. And that's why the correlation matches up the images. Now, this is actually not the full least squares fit, and these other terms can matter. It depends on the situation, but often they do matter, and I'll talk about that when we talk about particle tracking. They can matter quite a bit. So now, how would we use this? One technique for using this is what's called particle image velocimetry. What we do there is take a patch from one image in time and correlate it with another image in time. So I take the same little patch — here's one time and here's the second time — I'm going to take the patch from this one and I'm going to convolve it with this image — sorry, I'm going to correlate it with this image. And when I do that, I'm going to find the spot in this image where that little patch has moved, or the most likely place for this little bit of the image to be. And you might ask, what am I tracking? There are no particles here. I'm tracking the actual image. The actual image is going to move.
These little things kind of act like objects, and they move along in a continuous way, and I can actually track that information. If we zoom in on that section, what we can see is that the peak is here. That's telling us that that little patch has moved downward — this little patch in the middle here has moved down in the next frame. And I can do that at every point in space: I can take a little patch from every point in space, correlate it with this, and get an entire velocity field out of it. And this works on many, many types of images, not just particles, not just these kinds of things. Here's an example from my lab. This is a rotating drum where we put particles in and we rotate it around, and this is the cascading of particles. These particles are way too small to track individually, but we can use this correlation technique, this PIV technique, to measure them. And let me just show you how that comes out — after I change these batteries. OK. OK, good. So here's a little region that we might like to track, to find out where it is. Let's blow up that region. Here's a blown-up version of that region, and here's that same region one frame later. Now, it's kind of hard to see. Try to imagine tracking this. This is back to my earlier point: can I see what's going on? Can I actually see where this particle went, or this particle? Where is this particle over here? I can't see it. So I can't do that; that's not something I can do. What can I see? If I look at this box and I look at this box, well, they look pretty much the same. Look, there's that black line, there's that black line. There's that little thing there, there's that thing there. There's that, there's that. There's a lot of things there. So I can see that those two things are similar. Exactly how far has this moved? Well, my eye can't tell me that. It tells me it's moved. But the computer can tell me exactly.
Look how bright that spot is compared to any other spot. And so I can see this has moved from here to here. That's where this chunk moved from frame to frame. And I can take that at every point and I can create a velocity field. So here's the velocity field that I got from doing that at every point in space. And you can see something interesting. It's maybe a little hard to see on this monitor, but this is darker in the middle and brighter as we go out. That's because I'm actually tracking the glass as well. The glass itself has little imperfections and little things on it, and so that moves along with the system as I rotate it. And so I can actually track that. It makes it very nice, because I can actually get the rotation rate and a lot of other things out from this just by tracking the glass, which doesn't even seem to be moving. But of course, if you were to look at the individual frames, you could see, well, there's a little spot here and it moves and so forth. So you could see it if you zoomed in. So this is a good example of how we might use this kind of technique. Now, we often want to get better tracking. And so one of the things that we do is to zoom in and take much better images. And here's an example from that same experiment. But now, each one of these little squares you can kind of see, and there's about 30 of them, is its own image. So we just took a whole bunch of them and pieced them all back together to get this whole image. And here's one of those frames. And you can then use that to actually track individual particles. And so that's what I'll talk about next. So in terms of tracking particles, there are two parts: finding the particle, and then tracking it from frame to frame. So once I find each particle, then I have to decide where it has gone in the next frame. And those two processes are somewhat orthogonal.
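Here is the PIV step from above as a toy sketch (synthetic random texture standing in for the drum images, and a displacement planted with np.roll so the right answer is known): correlate one interrogation window against the next frame and read the displacement off the correlation peak.

```python
import numpy as np

rng = np.random.default_rng(1)
frame1 = rng.random((64, 64))                      # texture at time t
dy, dx = 3, -2                                     # planted displacement
frame2 = np.roll(frame1, (dy, dx), axis=(0, 1))    # texture at time t + dt

patch = frame1[24:40, 24:40]                       # 16x16 interrogation window
best, shift = -np.inf, None
for i in range(64 - 16 + 1):
    for j in range(64 - 16 + 1):
        c = np.sum(frame2[i:i + 16, j:j + 16] * patch)
        if c > best:                               # track the correlation peak
            best, shift = c, (i - 24, j - 24)

# shift recovers the planted displacement; repeating this for a patch at
# every point in space gives the full velocity field
```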
And today, I'm not going to talk about particle tracking. That's an interesting topic, but we just don't have enough time for it. But I'm going to leave the slides in here that talk about particle tracking, how we connect up from frame to frame. There's not a lot to it. The basic idea is that the blue are the previous frame, the red are the next frame. And you just look for which one's closest to where you were. And if you wait too long, you just can't tell anymore. So that's the bottom line. If you wait too long, like if I look at you guys, then I, well, first of all, you have to be identical. If you were all identical and I closed my eyes and said, change seats. And I open my eyes. I won't know where each particle moved. I waited too long. But if I peek, and then I can see where you're going, then I have not waited too long. So that's the basic idea. There are a lot of little details about what do you do when two particles have moved closer to a different particle and things like that. But this is all a solved problem. This is a problem that people have solved quite a while ago. So it's fairly straightforward. But what I want to spend more time on today is talking about how we find particles, how particles are actually found in an image. And this can apply to not just disks and spheres like I've been showing, but it can apply to lots of different situations. And I just list here a whole bunch of reasons why we do tracking and how we do tracking. And this I'm not going to go through in detail, but it's here in the slides if you want to see it. So here's an example of tracking that we might like to do. And I just want to talk about, again, try to push home this idea of getting the best image that you can get. It really pays off in the end. And here what we're comparing are three different types of lighting to do tracking. One of them is backlighting. So we make a bright field behind. The particles occlude that field. And so we can see the particles. 
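The linking rule ("which one is closest to where you were, provided you did not wait too long") can be sketched in a few lines. This is a bare-bones greedy nearest-neighbour matcher; the function name and the max_disp cutoff are my own for illustration, not from the talk.

```python
import numpy as np

def link(prev, curr, max_disp):
    """Greedily match each particle in the previous frame to its nearest
    unclaimed neighbour in the current frame, within a distance cutoff."""
    links, taken = {}, set()
    for i, p in enumerate(prev):
        d = np.linalg.norm(curr - p, axis=1)   # distances to all current particles
        j = int(np.argmin(d))
        if d[j] <= max_disp and j not in taken:
            links[i] = j
            taken.add(j)
    return links

prev = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
curr = np.array([[0.5, 9.8], [0.3, 0.2], [10.2, -0.1]])  # same particles, reordered
links = link(prev, curr, max_disp=2.0)
```

If the particles move farther than their spacing between frames (you waited too long), the nearest neighbour is no longer the right match and this breaks down, which is exactly the point above.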
That's the big black spots. Another technique is to use a point light source to illuminate the particles. And that makes a really bright spot on each particle that we can then use to track. A third way to do it is to put a ring of lights around your camera. If you put a ring around your camera, then you have something that's axisymmetric, which allows for some extra tricks in how we can do the tracking. If you ever watch CSI or any of these police dramas, all their cameras now have big rings around them. That's because this lighting technique is much easier, much better for getting uniform lighting. And so you can see the basic result. This is actually an experiment where we took pictures with the different lighting sources at different colors, so we can extract this picture and this picture separately. But on the same image, we're doing all the same tracking with all three techniques at the same time. And what we can see, if we compare to the backlighting, which is the best and is the one I'm going to describe in detail in a minute, is that the ring lighting is still pretty good. The difference is small here. This is one pixel, by the way. These are all sub-pixel accurate. So we find the positions of these particles to better than a single pixel. But there's some deviation here, and that deviation is maybe a 10th to a 20th of a pixel, which is a typical level of accuracy for this kind of tracking. With the point lighting, the problem is that the bright point is off-center, and how off-center it is depends on where you are in the image. So you can actually see there's a deviation that has a slope to it. That's because the point source is off-center, whereas the ring sources are on-center, so we're able to correct for that. And also the point source just gives you one pixel of information, basically. The other pixels that light up are usually errors.
They're not really even supposed to be lit up. And so that error shows up here: this is maybe a fifth of a pixel accurate, something along those lines. Now I just want to mention a couple of other places where this type of technique can be applied. One common technique these days is laser sheet imaging. What you do is you take particles and a fluid that are index matched, so you can see through the whole sample. But you dye one of them, either the particles or the fluid. Then you put a laser sheet through there, and the dyed portion lights up. Here's an example with the particles dyed. And you can see we can get a nice slice. This is in the middle of a very thick sample. We can get lots of different slices, and we can build up 3D information. Another example is magnetic resonance imaging, which I've done a lot of in the past. These are actually mustard seeds. You can see two cross sections of the mustard seeds, and then here's a three-dimensional reconstruction. You can use these same types of tracking techniques to operate on these images. Another example is x-ray CT, which a lot of you will get to experience, and it's a very up-and-coming way of getting 3D data. MRI machines are still a million dollars, and they have been for a long time. But the cost of CT machines is getting lower and lower. I don't know how far down they are now, maybe 10 or 20 thousand euros, something in that range. They've been a million dollars in the past, then 50 or 60 thousand euros, and they keep coming down. And so I think this is a very good way to get 3D data. This is just an example of some sticks that we threw in when we were studying bird nests, actually. So here, though, is a typical image that you might get in a particle tracking situation. And you can see that the background is not so great. So the first thing I would say to my student when they bring this image to me is: go back and fix the background. It's wrong. It's not uniform.
And then they may say, well, OK, it's hard, and there are all these constraints. And in the end, maybe I agree: maybe this is the size of a football stadium and these are basketballs, so we can't get perfectly uniform light over such a huge range. So, OK, I give up, you've done the best you can. Once you've done the best you can, what can we do? Because this still is not good enough to get very high resolution tracking. So how will we do better and actually do some correction on this? The first thing I'm going to do is change the color scale. In the grayscale image, you can see it's kind of hard to tell the difference between particles here and here. The particles look basically the same. There's not a lot of dynamic range in this. But when I go to this other color scale, this is the exact same data, but now you can see how much variation there is in the background here, much better. And you can see that the rings around these particles are a little bit different than the rings here. It's a little bit tighter on this side. We get a little bit better focus when there's more light; it's a little easier to focus in that region. So this is a good technique for really looking at the whole spectrum of the intensities that you have. But now what are we going to do about this background? Because it's really not uniform enough to do very accurate tracking. One very nice technique is to just take the maximum value each pixel has ever taken. And that's what I'm going to do now. So now, instead of playing the actual movie, I'm showing the largest value each pixel has ever taken. You can see what happens. Very quickly, we're left with just the bright background of the image. Now we can do the exact same thing, but instead take the minimum, to get the dark background. Now it takes a little bit longer to get the dark background; there are some spots that take a while to get covered up.
And we have to do something special for the corners. What we usually do is just drop something dark over the top of it at the end to get a good dark background. But this technique can be used if that's not a possibility. Now, once we take those, we have a bright background and a dark background. And these are on different scales. You see this one is around 600 and this one is around 12, so this is very dark. But I've brought up the color scale so you can see the variation. So there's actually information in the dark background as well as the bright background. Now how are we going to correct our image? We'll use this function here: we take the bright background and subtract our image from it, and we divide by the bright background minus the dark background. Now why do we do this? This gives us an image that's strictly between 0 and 1. Why? Well, when the image is at its brightest, the numerator is 0. So it's 0 when the image is brightest. When the image is at its darkest, the numerator is exactly the same as the denominator, so this thing is exactly 1. So this image goes exactly from 0 to 1. And you can see the results here. Here's the original image that we had, and here's the corrected image. The same data is here. But now you can see how much more uniform this image is than what we had here. And just to show you that I'm not tricking you, here's the image in the same color scale. And I've scaled this image to 0 to 1 by using the maximum and minimum of this particular image, rather than the point-by-point maxima that I was using before. And that's the big difference. You can see how much better this image is and how much easier this image will be to track than the previous image. And just to hammer that point home, here's a line through that image. The green is the uncorrected image, just scaled from 0 to 1, and the blue is the corrected image. You can see how much better it is. Now there is a little bit of an issue here.
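Here is that whole correction as a sketch, with a made-up movie (a fixed non-uniform illumination multiplying a changing random scene): the per-pixel maximum over frames plays the role of the bright background, the per-pixel minimum plays the dark background, and the corrected frame then lies exactly between 0 and 1.

```python
import numpy as np

rng = np.random.default_rng(2)
illum = np.linspace(0.5, 1.0, 32)[None, :] * np.ones((32, 1))  # uneven lighting
frames = np.array([illum * rng.random((32, 32)) for _ in range(200)])

bright = frames.max(axis=0)     # largest value each pixel ever takes
dark = frames.min(axis=0)       # smallest value each pixel ever takes

# 0 where a pixel is at its brightest, 1 where it is at its darkest
corrected = (bright - frames[0]) / (bright - dark)
```

With this convention, dark particles on a bright field come out near 1 and the bright background near 0, and the illumination gradient divides out.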
One is that this is always strictly above 0 and this is always strictly either 1 or below 0. And that's probably not what's really happening. Probably really we should expect that this should be centered around 1 and this noise should be centered around 0. And you can actually achieve that effect by instead of taking the maximum pixel, you take the average value of the pixel when it's bright, the average value of the pixel when it's dark. When you do that, you get this amazing correction. Boom. Okay, well that wasn't that exciting. But it does help you a lot in the sense that now when I take means down here, I get 0 instead of getting always a positive value. So just a little tiny addition that you can do. And I won't tell you how you do that, but it's in the slide there. Now once we have this nice image, how are we actually gonna track the particles? We're gonna use this same idea of least squares fitting that we talked about early on. And the idea is that we can write down an actual formula for this image, a mathematical formula for this image. What is this mathematical formula gonna look like? Well it's gonna be the sum over every particle of an ideal particle picture. So we're gonna have to come up with a mathematical expression for this image. And that'll be the sum of little tiny particle images. And so that's what we're gonna work on. And I'm gonna skip through all the math here because there's a lot of math. And that's why it takes three hours to give this full talk. But you can go back and see the math you want, but we'll get all the information from just thinking about this. Here's our mathematical function for a particle. And it's based on the hyperbolic tangent. And the basic idea is that it's a step function. It's zero outside the particle. It's one inside the particle, except that we have an imaging system. And every imaging system blurs the image a little bit because there's a point spread function. 
That's the picture that you get when you take a picture of a point. If you take a picture of a point, you actually get an Airy disk. So what you expect is this perfect step function picture of the particle convolved with your point spread function. Now, you can actually do that convolution for the Airy function, and you get a very complicated function. But it's really, really close to the hyperbolic tangent, which is much easier to calculate, and it's much easier to calculate the derivatives, especially, which you end up needing to solve this problem. So here's a picture of a cross section. The width of the hyperbolic tangent tells you how blurry the image is, and the diameter just tells you how large the particle is. Now we take this and we do our least squares fitting on it. So we take the image that we've measured and we subtract off this ideal image. The ideal image now depends on the positions, the diameter, and this width, which is the focus. And now we can minimize this function. If we minimize this function, we'll find all the points. We can do another thing. We can actually look at this function as a function of x naught, the positions of the particles. And if we do that, there's all the math to figure out how we do it, but what we get is an image which tells us how likely there is to be a particle at a given position. So we're actually calculating the chi squared for every single point in space. And so here's our original image. The white dots are the places where we found particles. And here's this chi squared image. And we plot one over it, because chi squared is small wherever the particles are, so one over that will be big wherever the particles are. This is just for visualization. And you can see there's a bright spot wherever there is a particle.
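As a sketch of the kind of model function being described (this exact parametrization is my guess at it, not the speaker's formula): a disk of diameter d whose sharp edge is smoothed over a width w by a hyperbolic tangent, standing in for the point spread function blur.

```python
import numpy as np

def ideal_particle(shape, x0, y0, d, w):
    """Blurred disk: 1 inside radius d/2, 0 outside, with the step
    smoothed over a width w by a tanh (the focus/blur parameter)."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    r = np.hypot(xx - x0, yy - y0)          # distance of each pixel from center
    return 0.5 * (1.0 - np.tanh((r - d / 2.0) / w))

img = ideal_particle((32, 32), 16.0, 16.0, d=10.0, w=1.5)
# deep inside the particle the model is ~1, far outside it is ~0
```

An ideal image of many particles is then just the sum of these, one per particle, which is the model the least squares fit uses.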
And maybe it's harder to see out here, but when we change the color scale, you can see there's a bright spot for every single one of these. And what you notice is we found every single particle. We always find every single particle. When my students don't find a particle, I say, go back, find that particle. That's Fred, he's my favorite particle. You have to find him. So we always find every particle. In fact, we find particles that aren't even in the frame. So if you look at this little guy right here, that's just a little bit of a particle. But we found it, there it is. That's because the most likely position for a particle to be, if there's a little blip there, is there. That's what this map is telling us: the most likely position for particles. So we can just go through and find all the peaks, and that tells us where every single particle is. Now, I'm not going to talk about this, but you can use this exact same technique to understand another very famous way to find particles, the Hough transform. You may have heard of it; it's in a lot of canned software. You can actually use this exact same idea of least squares fitting to understand how the Hough transform works. That's all talked about in my book chapter, and there's a little bit of it in here that gives you the flavor of what's going on. Now, just to compare three popular ways of finding particles, here is the probability of finding the particle with three different techniques: the least squares that I'm talking about, plain old cross correlation, where you just take the cross correlation and see where you find the largest value, and this Hough transform. These are all at the same scale. And what you can see is that least squares gets you as close as possible to the exact point. It's very sharp peaks. You can't even start to see anything other than the contours. All these plots use the exact same contour levels.
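A tiny illustration of that chi squared map, using the same kind of tanh disk as a stand-in model: evaluate the mismatch with the measured image at every candidate center, and the minimum of chi squared (the peak of one over chi squared) lands exactly on the particle.

```python
import numpy as np

def particle(shape, x0, y0, d=8.0, w=1.2):
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return 0.5 * (1.0 - np.tanh((np.hypot(xx - x0, yy - y0) - d / 2.0) / w))

shape = (40, 40)
image = particle(shape, 23.0, 14.0)        # "measured" image, particle planted here

chi2 = np.zeros(shape)
for y0 in range(shape[0]):
    for x0 in range(shape[1]):             # chi^2 for a particle at every position
        chi2[y0, x0] = np.sum((image - particle(shape, x0, y0)) ** 2)

likelihood_map = 1.0 / (chi2 + 1e-12)      # bright wherever a particle likely sits
peak = np.unravel_index(chi2.argmin(), shape)
```

With several particles in the image, each one produces its own peak in this map, which is why finding all the peaks finds every particle, including ones that barely poke into the frame.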
In the cross correlation, you can see those contours are very spread out. And so it's harder to find that exact position, especially if you want some kind of sub-pixel accuracy by sub-sampling this; you're going to start to see errors creep in as you try to do sub-pixel accuracy. And the Hough transform is almost as good. It's almost as sharp. It's not quite as sharp, because it deals only with the edges of the particle, not the inside, and we get a little bit extra by dealing with some of the inside of the particle. And also there's a special kind of artifact that occurs in the Hough transform. You can see it here in these wings. And that can be very prominent when there's a dense packing of particles. So this sort of gives you an idea of how these different techniques work. Now, once we have those positions, which are roughly pixel accurate, we could probably get to a tenth of a pixel by doing some kind of interpolation on those images. But we actually go in and do a second fitting, where we break the image up into little regions such that there's only one particle in every region. So here's one particular region. And now we do a separate, specific fit for that individual region. This allows us to get around things like the fact that our original model of the image was just adding together particle images. If particles were near each other, the little tails would overlap, and that gives results that aren't exactly right. So we don't get as perfect an answer. But here, where we have only one particle in the region, we know we're going to get exactly the right answer. So that allows us to go to very high accuracy by doing individual least squares fitting on every single particle. And so what we see is that we can take this corrected image, where we've done our background correction, and calculate a model image using our chi-squared field. That gives us this image. And then we can find the residual, what's left over.
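And here is the flavor of that per-particle refinement, hedged: instead of a full least squares solver, this sketch just shrinks a search grid around the pixel-accurate guess for a single synthetic tanh particle, and still lands well inside a twentieth of a pixel of the true center.

```python
import numpy as np

def particle(shape, x0, y0, d=8.0, w=1.2):
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return 0.5 * (1.0 - np.tanh((np.hypot(xx - x0, yy - y0) - d / 2.0) / w))

shape = (24, 24)
true_x, true_y = 11.37, 12.81                  # sub-pixel ground truth
image = particle(shape, true_x, true_y)

x, y, step = 11.0, 13.0, 0.5                   # pixel-accurate starting guess
for _ in range(12):                            # refine on a finer and finer grid
    candidates = [(x + dx, y + dy)
                  for dx in (-step, 0.0, step) for dy in (-step, 0.0, step)]
    x, y = min(candidates,                     # keep the lowest-chi^2 candidate
               key=lambda c: np.sum((image - particle(shape, *c)) ** 2))
    step /= 2.0
```

A real implementation would minimize over the diameter and the blur width too, as described above, but the idea is the same: one small fit per particle, each in its own region.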
Take this image, subtract it from this image, square it, and show the result. So this is what's left over. Now, when we add our least squares fitting over every individual particle, we can see that this drops significantly. And now you can see the errors are much more symmetric than they were here. That's because we've found the exact position where the particles go. Now we can also fit over the diameter of the particle and this focus function and get even better. So by the end, we have basically a factor of 10 better fit than when we started. And this allows us to get very high accuracy results. So here in the circles is that PIV technique applied to the rotating drum. And the dashed line is this other technique. And you can see how much smoother it is compared to this one. That's telling us that there is much higher resolution in this. And so that's sort of the contrast that you can get. Here's an example of our highest resolution version of this technique. We're actually tracking in 3D by taking two simultaneous pictures looking at the same scene, and we can actually track this in 3D. This is millimeters, by the way, here. And so you can actually track this down to nanometer resolution. If we look at the velocities that we measured: these particles are freely falling, which means their velocities should be increasing like gravity. So if we look at their velocity, their velocity is increasing downward. And these two lines are separated by assuming that the position error is 55 nanometers. And you can see that the values we get lie well within these bounds. So this allows us to get very, very high resolution in these systems. And this can be extended to all types of systems. Like in this case, these seeds are actually not spheres. They're actually ellipsoids.
And so we can actually find not only their positions, but also their orientations and their sizes in all three directions. So we have all these fitting functions. And you can get all kinds of things, like how prolate they are, how oblate they are, sphericity measurements, things like that, from this same kind of tracking technique. You can also track other objects. You don't have to just use circles. Here's an example of tracking rods. And one of my colleagues, Dan Goldman, who is often here at these hands-on sessions, loves bugs. And so we put together this little bug tracker. It tracks two points on the bug and allows you to see the bug moving along. You can actually see there's a very interesting part in this gait where he kind of wobbles as he walks. And you can see that very well in these tracking images. This can also be extended to more exotic types of images. Here's an image of a particle that is under stress. When many materials are under stress, they change their index of refraction in a way that's proportional to the stress. This is called birefringence. And so when I apply stress to something like this, I can actually see that the polarization of the light is rotating. And this is a technique that allows you to measure that rotated light. And so what you can see is this particle is being pushed here, here, and here. It has three contacts. Actually, it has a fourth contact; you can kind of see there's a fourth contact here. Here is the image that we got from taking a picture of reality. Here is the calculated image we got from fitting this to this function, which includes the forces between the particles. And so it allows us to actually extract the forces. Here's the fitting function written out in a very compact form. There are all of these parameters that we have to fit in order to solve this problem.
And here's an example of a whole packing of these particles where we've fitted this and found all the contacts and found all the forces on every particle. And this just gives the kind of values you can get. So this is the force as a function of strain, how far we've strained this. And again, this can be used for all kinds of things. In these images, there's this very bad artifact of these lines, which are actually shadows of the particles that are in the way of the beam. And this can be used to get rid of them because it doesn't care if there's an artifact like that. It's asking for the best fit. And so if there's a big white blob there, it doesn't care. It says, well, that's not part of the fit, but the best fit is still to have it here. Or there's this big dark line through it. Well, the best fit is still for it to be there. And so this is very sort of error correcting or artifact removing. Anyway, so that's what I wanted to get across today. And thanks for your attention.