Thank you. I'm going to include all the slides from the longer version of this talk in the slides that are posted, and you'll see some of them go by here, just so you know what's available. I'm not going to be able to talk about all of it, because we're not going to sit here for two and a half hours like I did in Germany, so I'll give you an overview. But the slides that are posted are all the slides I've given on this recently. I'll also mention, as I'm now required to do, that I have a book that I edited, and in the book is a description of everything I'm talking about today, so that's another resource.

If you don't come away with anything from this talk except for these points, that will be good. The first point is that the brain is the best image analyzer there is. There's nothing better, not even close; it's no contest. Basically, no computer can do anything that the brain can't do. So if you cannot, with your own brain, carry out the image-processing idea that you have, there is absolutely no way a computer is going to do it. If you can't do it, a computer can't do it. Unfortunately, the converse is not true: even if you can do it, the computer usually still cannot. Most of the time the computer can't do even the simple image processing that we do effortlessly. I can glance around this room very briefly and see every person's eyes.
I can identify everyone, and there's no way a computer program could do it at the speed and accuracy that I can, just by glancing around. That being said, there are good reasons to have a computer do image analysis: it can be much more accurate than your eyes. But it can't do a technique that you can't do. So the first thing to do when you look at an image and say, "I want to track this thing," or "I want to tell the difference between these two objects," is to check whether you can do it yourself. If you can't see it in your own image, there's no way a computer can, and you need to do something better.

What that usually means is that you have to make a better image, and this really pays off. On the front end, when you make your images, you should make the best images you possibly can. This makes the biggest difference in any image-processing technique: the less post-processing you have to do, the better. A lot of people who are very computer-oriented these days say, "Oh, I'll fix that in software later." That is generally a bad idea; generally it will just introduce huge errors that won't help you. If you can make your image better, if you can get a better focus, a better lens, better lighting, all of these things will pay off tenfold when you go to your image analysis, because doing that work in a computer is very difficult. For example, deblurring an image is not really a possible process. It's offered as an option, like "deblur" in Photoshop, but of course it can't fully work, because how does it know the depth? If you knew the depth, you could deblur, but you don't know the depth unless you took a stereoscopic image, which brings us back to the point that you need a good image to start with. If you need depth information, you need two images.
So these are the kinds of things you should think about when you're making images. Why would we use a computer to do something like tracking particles? One reason is speed. With the GPU software we have now, we can track 400,000 particles per second to sub-pixel accuracy, and we can track 4,000 particles per second to one one-hundredth of a pixel. Your eye cannot do that: it can't get that accuracy, and it can't work at that speed. In the old days, when we tracked particles, we would take slides or movies, project one frame on the biggest wall we could find with the biggest lens we could find, go up and mark on a piece of paper where the particle was, measure the distance to that point, put up another piece of paper, and do it again. I probably didn't track 4,000 particles that way in my whole career; it just takes too long. Now we can do it with computers very easily.

The other reason is the amazing accuracy you can get from these tracking algorithms. We have a tracking algorithm that gives us 55-nanometer resolution on three-millimeter particles. That's below the wavelength of the light we're using to illuminate the particles, which is a pretty amazing thing to get just by having a computer look at the images.

Now let me give you a couple of quick reasons why you might like to track particles. The first is a very simple experiment: these are movies from an experiment where we're looking down on granular materials being shaken around. There was a big question about how these particles act. Do they act like ordinary particles
that conserve energy, like elastic hard particles, or not? We were able to answer this question by doing very careful image-analysis tracking. Here, for example, is the radial distribution function of the particles: the solid lines are from our experiment, and the dashed lines are from simulations of elastic hard particles. So we were able to show that even though these particles are inelastic, they behave very much the same way elastic particles do, and without the resolution we have here, we would not have been able to do this.

Another example is the velocity distribution function for granular particles in that same experiment. You know that in a gas you should have a Maxwell-Boltzmann velocity distribution, and you know that if you don't have energy conservation, you can't have a Maxwell-Boltzmann; there's no way, it just won't work. So what do you have? It turns out you have this equation, which is a Maxwell-Boltzmann plus a fourth-order correction. To measure that fourth-order correction, we need very high accuracy in the velocity, which means measuring two positions with very high accuracy, and we were able to do that. If we had been using a tracking technique with a resolution of one tenth of a pixel per frame, which is what's typically out there, our resolution would have been this box, and we would not have been able to see this deviation from Gaussian. But because we have a hundredth of a pixel, you can see the deviation very easily, and it was easy to show. And it was an interesting point, because people had been paying a lot of attention to the tails of these distributions, but it turns out the center of the distribution is off by ten percent, and that's a much bigger issue than the tails being off by a tenth of a percent at events that happen only
once every 10^4 times. So this kind of thing allows you to do some things you couldn't do otherwise.

I'm going to show you a few image-analysis tools, and I'll talk a little about finding and tracking particles as well. First, a few general ideas about image analysis; if you haven't done any of this before, this is just a way to get oriented to what people can do with images.

First, what is a computer image? A computer image is just a matrix of numbers. That's all it is; it's nothing more than a list of numbers. When you take a picture with a digital camera, there's a grid of pixels, and every pixel is just a set of numbers. These numbers often represent colors, but they could also represent intensities directly, and when they represent intensities we can display the same image in different ways. These are both the same image: one expressed in a color scale, the other in gray scale, which might be how I took the image in the first place, as a black-and-white image. This could also be the red channel of a color image, or the green channel, or the x-ray intensity in an x-ray image, or the density in an MRI image. They're all the same thing: matrices of numbers. That's all an image is.

Here is video data. Video data is just images, images are just arrays of numbers, and a movie is just an array of numbers repeated in time. So we have an array indexed by n and m, but we also have an index for time: one image after another.
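To make the "an image is just a matrix of numbers" point concrete, here is a minimal sketch. The talk mentions Matlab and Python; Python with NumPy is my choice here, and the tiny arrays are my own illustration, not data from the talk.

```python
import numpy as np

# A tiny 4x4 grayscale "image": nothing but a matrix of intensities (0-255).
gray = np.array([[  0,  50, 100, 150],
                 [ 50, 100, 150, 200],
                 [100, 150, 200, 250],
                 [150, 200, 250, 255]], dtype=np.uint8)

# A color image is the same thing with one matrix per channel (R, G, B).
color = np.stack([gray, gray // 2, gray // 4], axis=-1)  # shape (4, 4, 3)
red_channel = color[:, :, 0]   # each channel is itself a plain matrix

# A movie just adds one more index: the frame number.
movie = np.stack([gray, gray, gray])   # shape (frames, rows, cols)
```

The indices n and m from the slide are just the row and column of these arrays, and the time index is the first axis of `movie`.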
This slide is just an example of a movie one might get from a computer simulation.

Now, one of the strongest tools, one used very often in image analysis, is the Fourier transform. I've written it out here; I don't expect you to absorb it from this slide. It's here for you to come back to if you're interested in what's going on with the Fourier transform. The Fourier transform takes an image, this two-indexed object, and turns it into another two-indexed object. The main difference is that the output side has complex numbers: it takes a single real image and turns it into a complex image. That's the main difference between the two.

Now, what does a Fourier transform do? It converts an image from one space to another. For example, it finds the frequency representation of a signal rather than the time representation, or it takes spatial information and turns it into wave-number information: the number of times you have an oscillation. So consider this 1-D image, or what we might call a function. Here we have a function which is just a sine wave. In this case I'm using a complex signal because it makes it a little easier to display; you actually get complex signals all the time. That's the way FM radio is transmitted, for example: it sends one signal on one phase and one on the other, the sine and the cosine, so it sends both. If you do the operation I showed on the previous slide, what you get is one peak at a particular frequency, five. Five is just the total number of cycles we have here: the number of cycles per time unit in this arbitrary system.
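That 1-D example can be reproduced in a few lines. This is a sketch in Python/NumPy (my choice of language, not the talk's): a complex wave with exactly five cycles puts all of its Fourier power in one bin, at frequency five.

```python
import numpy as np

# Sample one "time unit" at 100 points and build a complex wave with
# exactly 5 cycles, like the example signal on the slide.
N = 100
t = np.arange(N) / N
signal = np.exp(2j * np.pi * 5 * t)

spectrum = np.fft.fft(signal)
peak = int(np.argmax(np.abs(spectrum)))
# All the power lands in a single bin: frequency 5, i.e. 5 cycles
# per time unit; every other bin is zero to numerical precision.
```

The same idea in two dimensions (np.fft.fft2) turns a single plane wave in an image into a single spot in the transform, which is the next slide's point.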
So the transform tells you that this signal is a perfect sine wave at that frequency. That's all that happens here: a perfect signal at that frequency.

What about for an image? Here is the analog of that signal, but for an image: the same signal is going all the way down in this direction, and in the other direction it's constant. So I have no oscillation, no change, in the x-direction, but I have a change in the y-direction. If I now take the two-dimensional Fourier transform, transforming in both directions, what I get is a single spot, because this system is represented by one single, evenly varying wave in space. I get this one spot, and if you zoom in on that spot, you'll see it sits at a distance which is the number of cycles per unit of space. Just as the distance in the 1-D case told us that number, this distance tells us this number. If we had many sines and cosines in this image, we would get many dots here.

And here is an example where you get lots of dots. Here's that image I showed you earlier from the movie, and here is its Fourier transform. Now you can start to see some interesting things, things your eye kind of knew about, but now you have a way to quantify them. Here, for example, we can see there is a length scale in this problem: a sort of average separation between blue and red, a width to the blue and a width to the red, that's the same over the whole image.
That's represented by having power at a fixed distance away from the center of this image. Here's a blow-up of the same thing, to see what's going on. What you can see in this first image is that the angles of these features are much more random, and the widths vary much more, so the range of frequencies needed to reproduce it is wider: this is a wider band, and it has power at all angles, because there are basically all angles in here. Now, this particular simulation knows about the boundaries, in a sense, and it likes to have these rolls, these waves, these stripes, come in perpendicular to the boundaries. So later in time you can see these things are all lined up in one of two directions, either vertical or horizontal, and we can see that represented in the Fourier spectrum: now there's a big peak at the sides and a big peak at the top and bottom. You can also see that this one has a much more well-defined wavelength; everything is really the same size, and you can see that because the variation in the position of these peaks is much thinner. So this image has evolved to one with a very strong wavelength. This is exactly what Professor Swinney was talking about yesterday: the idea that when you have a bifurcation, when you first go into this region, one particular frequency gets selected, and this is how that kind of signal might evolve over time. So this is a typical use of the Fourier transform in image processing.

Now, another very common use of the Fourier transform, and one of the reasons we use it for so many of these things: there are lots of transforms out there. There are wavelets,
there are number-theoretic transforms, and so on. But we use the Fourier transform because we have an algorithm that allows it to be computed much more efficiently than you might expect. If you're taking a list of numbers and multiplying it by a matrix, you'd think it should take n^2 operations; that's what the Fourier transform is, a vector times a matrix, which should take n^2 operations. But because of the symmetries of the Fourier transform, you can do it in n log n, so it basically grows like n rather than n^2. Most transforms grow like n^2, so we can do this one much faster than any other, and that's why it's used so frequently.

It's also related to another pair of things used very frequently in image analysis: the convolution and the cross-correlation. Again, I'm just putting these formulas up for you to see and to come back to later if you want to understand them in detail. The convolution is an operation that is mathematically interesting to pure mathematicians, but it's also used some in image analysis. The correlation, which I'll show you next, is used more often in image analysis, but I mention the convolution first because you see it a lot, and it's much more commonly implemented in languages like Matlab and Python: you'll almost always have a convolution, but you might not have a correlation.

So if I want to convolve this list of numbers with this other list, what I do in the convolution is take this list, flip it around, and line up the first number of one with the last number of the other. Now I multiply them together and add up however many entries overlap. So I multiply one times one, I get one; I add up all the products that are there, which is just the one, so I get one. Now I move it over one spot.
Now I have two overlaps: one times three plus one times zero, so that's three. I move it over again and get four; again, seven; again, seven; then two; then six. Now I'm done, I can't go any further; if I did, I would just get zeros, because there's no overlap. So I've just done the convolution. The reason mathematicians like the convolution is that if these numbers were the coefficients of a polynomial, the zeroth-order term, first order, second, third, fourth, and the same over here, then when I do this convolution I get the polynomial that has these as its coefficients. So convolution represents polynomial multiplication; if you did synthetic multiplication of polynomials in math class, this is what you were doing.

Now, there are three representations of the convolution we might be interested in. One is the full convolution; this is the one the mathematicians care about, because it's the answer to the polynomial question: every number I generated here. On the other hand, I might be thinking of one list as an image and the other as some sort of test object I'm looking for in that image, in which case I might want an output the same size as my image, which is what we call "same." The last one you might be interested in is the one we call "valid," and it's called valid because it keeps only the numbers where I had full overlap of the two signals. You'll see this in Matlab and other programs: they'll have an option of returning the full, valid, or same convolution, and depending on what you're doing, you might pick one or the other.
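The full/same/valid distinction maps directly onto NumPy's np.convolve modes. A sketch with my own small lists (not the exact numbers from the slide), reading the lists as polynomial coefficients, lowest order first:

```python
import numpy as np

a = [1, 3, 0, 2]   # coefficients of 1 + 3x + 0x^2 + 2x^3
k = [1, 1, 1]      # coefficients of 1 +  x +  x^2

full  = np.convolve(a, k, mode='full')    # every overlap; len = 4 + 3 - 1 = 6
same  = np.convolve(a, k, mode='same')    # trimmed to len(a), centered
valid = np.convolve(a, k, mode='valid')   # only complete overlaps; len = 2

# full == [1, 4, 4, 5, 2, 2]: exactly the coefficients of the product
# polynomial (1 + 3x + 2x^3)(1 + x + x^2) = 1 + 4x + 4x^2 + 5x^3 + 2x^4 + 2x^5
```

The "full" output is the polynomial-multiplication answer; "same" and "valid" are just windows into it, which is why the choice depends on what you're doing with the result.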
Now, the difference between the convolution and the cross-correlation is that in the cross-correlation we do not flip. This minus sign here is the flipping we did; in the cross-correlation we do not flip. So now we take the same 1, 0, 2 and put it over here, but I don't flip it this time. And this is how you can turn a convolution into a correlation: if you have convolution and you need correlation, just flip one of the inputs and you'll have the other. That's why correlation is often not implemented; it can easily be built by the user at run time. So we do the same thing: two times one is two, that's all we have; now we move over, two times three is six, and so on, all the way across. And again, just as before, we have full, same, and valid; all of these are possible things you might want.

The correlation, as we'll see in a minute, is telling us how much one set of numbers is like the other: where is the position where this is most like this? You can kind of see it here: the place where we got eight is where we got the largest overlap, as it's called, and that's the place that's most like the template. It's just one number off from each of these; if I add one to each of these, I get this, so they're very close. They would look very similar to your eye if you viewed them as an image or as a function; they look the same, just shifted by a small amount. So this is the point where it looks most like this function.

These operations can be done in one dimension, as we just did, or in two dimensions, or in n dimensions, actually, although you usually don't see them go much beyond two. And here, for example, we can do the same thing for an image.
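The "flip one input and convolution becomes correlation" trick can be checked directly. A sketch in NumPy, using the 1, 0, 2 kernel mentioned above against a list of my own choosing:

```python
import numpy as np

a = [1, 3, 0, 2]
k = [1, 0, 2]

corr = np.correlate(a, k, mode='full')              # slide k along a, no flip
conv_flipped = np.convolve(a, k[::-1], mode='full') # flip by hand, then convolve

# The two are identical: flipping one input turns convolution into
# correlation, which is why many packages only ship a convolution.
```

The first output is 2*1 = 2 (only the kernel's last entry overlaps), the next is 0*1 + 2*3 = 6, and so on across the signal, matching the hand calculation in the talk.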
Here is that image I showed you again, and here is something I would like to convolve with it. I do exactly what I did before, only now with images. I take this kernel and flip it around, which in this case doesn't matter, because it's symmetric; this is another reason a lot of packages don't implement the correlation: flipping does nothing if the kernel is symmetric, so many times you never need the correlation and can just use the convolution. So I take this kernel, flip it over, and put it up at the corner, so it's just touching the corner with one pixel overlapping. I then multiply them together over all the pixels, add them up, and keep doing that, moving across and down, and here is the image I get.

What you can see is that this is a blurring effect. In fact, this is exactly what happens when an image is blurred: a blurred image is the original convolved with the point-spread function of the imaging system. That spread function is usually something like a Gaussian, typically an Airy function, which is very much like a Gaussian. So this is what would happen if we blurred our camera: here is the image, and if we blur our eyes we should be able to roughly reproduce it. That's what's happening here: the image is being spread out by a kernel of this size. Anywhere the kernel overlaps, it takes a sort of local average, spreading the image out.
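The blur-by-convolution idea can be written out explicitly. This is a sketch, not the talk's code: a direct (slow, but easy to read) 2-D "same"-size convolution, with a normalized Gaussian kernel standing in for the point-spread function. Convolving a single bright pixel with it reproduces the kernel itself, which is exactly what "point-spread function" means.

```python
import numpy as np

def convolve2d_same(img, kernel):
    """Direct 2-D 'same'-size convolution (explicit, not fast)."""
    kr, kc = kernel.shape
    padded = np.pad(img, ((kr // 2, kr // 2), (kc // 2, kc // 2)))
    flipped = kernel[::-1, ::-1]          # convolution flips the kernel
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i+kr, j:j+kc] * flipped).sum()
    return out

# A normalized Gaussian kernel: a stand-in for the imaging system's
# point-spread function.
x = np.arange(-3, 4)
g = np.exp(-x**2 / 2.0)
psf = np.outer(g, g)
psf /= psf.sum()

img = np.zeros((15, 15))
img[7, 7] = 1.0                       # a single bright pixel
blurred = convolve2d_same(img, psf)   # the point spreads into the PSF shape
```

Because the Gaussian is symmetric, flipping the kernel changes nothing, which is the point made above about convolution and correlation coinciding for symmetric kernels.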
Just as in one dimension, you have the full, same, and valid versions here too. I can go all the way out to the corners and get the full one, and you can see it's different at the edges, because we're not overlapping the whole kernel there. So if you want to use that edge, you have to do something to normalize it to get a sensible answer there. Then we have the valid one; this is the part that really does look just like the blurred version of the image. And in between you have the same-size version, with this variation as it comes in from the edges. You might want the same size because you might want to compare it to other images taken with the same camera.

Now let's look at a case where convolution and correlation make a difference. I'm going to convolve this little patch, which I cut out of the image, with the whole image. First the convolution, over here: I take the patch, convolve it with the image, and this is the picture I get. It's again a kind of smoothing, but a weird kind, because I'm also pushing power off to one side; it gives a strange image, and in fact this is rarely used, because the convolution of a patch with an image doesn't mean anything obvious. On the other hand, this little chunk is exactly this little chunk: when I correlate it with the image, I get the biggest signal right here, because the patch is literally identical to that spot, since I pulled it out of there. And you can see I get this nice big peak right at the center of where that object is. So this is the simplest way of answering: where is this object in the image?
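That find-the-patch idea can be sketched in a few lines. This is my own toy version in NumPy: cut a patch out of a random zero-mean image, cross-correlate (no flip) at every position, and take the peak. The zero-mean step is my addition; as the talk notes, raw correlation on a bright image can also peak at regions that merely resemble the patch.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((40, 40)) - 0.5       # zero-mean random test image
patch = image[20:32, 12:24].copy()       # cut a 12x12 chunk out of it
ph, pw = patch.shape

# Brute-force cross-correlation over all "valid" placements.
rows = image.shape[0] - ph + 1
cols = image.shape[1] - pw + 1
score = np.empty((rows, cols))
for i in range(rows):
    for j in range(cols):
        score[i, j] = (image[i:i+ph, j:j+pw] * patch).sum()

best = np.unravel_index(score.argmax(), score.shape)
# best == (20, 12): the correlation peak sits where the patch came from.
```

In practice one would do this with FFTs rather than loops; the loops just make the definition visible.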
Well, it is at the point where the correlation is highest. But you can see this may have some difficulties, because, for example, right here there's another peak that's pretty close to the same height, and if I put a box around that spot, it looks a lot like the patch. So the correlation also tells you when things are merely similar, not just when they're exactly the same; any place with a big spot like this one, if you draw a box around it, looks very similar to the little chunk we started with.

Now, I do want to talk a little about the math here. There's not that much math, but I want to go through it briefly, to explain why correlation has this property and convolution does not. Often an image-processing technique starts by saying, well, correlation tells you where it's correlated, and we're done. But that's not the whole story, and correlation is actually not the best way to figure out whether two images are the same. The best way is to take the image you want to interrogate and the little image you want to find in it, subtract them, and square the result. Now, why do we do this?
This is a very common technique throughout all of science, called chi-squared, or the method of least squares. What we're doing is subtracting two things and squaring, which gives a positive number. That positive number is smallest when the two things are most alike. It has to be, because if they're identical it's zero, and that's the lowest value it can possibly have; anything else is higher. So we have a number that is zero when the images are exactly the same and smallest when they're most similar, where by "most similar" I mean in this chi-squared sense: the squared difference is lowest.

Now, there are other ways to do this. I could have used the absolute value, or the fourth power, or there's even a way to do it with a zeroth power; there are all these possible norms one can use. We use the square not because it's the absolute best (it's not; usually the absolute value is better), but because it produces correlations, it's much easier to calculate than the others, and there's a lot of mathematical analysis that can be done with it that can't be done with the absolute value. The absolute value is not a smooth function; it can't be differentiated the way other functions can, which makes all the analysis much harder. We don't have good theorems that tell us things like "the maximum-likelihood estimator is given by such-and-such" for the absolute value. That's one of the reasons we use the square, although a very similar analysis can be done with other functions. So now let's see what happens with this squared value.
We're going to move this template around, shifting its indices throughout the whole image, and ask: where is this quantity minimal? What shift of n and m do I have to do to find the point where it's minimum? And we can actually do a little work on this object. First, expand the square: we get this term squared, plus this term squared, and in between, the cross term of the two. And if you go back and look, which I'm not going to do right now, this particular double sum is exactly the correlation as I defined it earlier; the exact same thing. So the correlation comes out naturally when you do this. The difference is that the correlation by itself is not zero at the minimum, and it doesn't have all the properties of the full function. And if you measure the full function, which you can also do very quickly, because it just takes two convolutions plus one square, so it's only a little more than twice as slow as taking the correlation alone, it gives you a lot of extra information. So this is where I typically start when I want to do some kind of image-processing technique: I start with this chi-squared equation, not with the plain correlation, because I get a little extra out of it that I can use to do better in my analysis. I won't go through the rest of these; I guess I just had them here so you could see them, but you can see that this cross-correlation term is exactly the same as what we get here.
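The "two convolutions plus one square" decomposition above can be written out directly. This is a sketch of the idea, not the speaker's code: expand chi-squared at each shift into (local sum of the image squared) + (sum of the template squared) − 2 × (cross-correlation), and look for the minimum, which is exactly zero where the template came from. Here the two correlation-like terms are computed by brute force; in practice they would each be an FFT convolution.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((30, 30))
tmpl = image[10:16, 5:11].copy()   # template cut straight out of the image
th, tw = tmpl.shape

rows = image.shape[0] - th + 1
cols = image.shape[1] - tw + 1
corr = np.empty((rows, cols))     # cross-correlation term
locsq = np.empty((rows, cols))    # local sum of image^2 under the window
for i in range(rows):
    for j in range(cols):
        win = image[i:i+th, j:j+tw]
        corr[i, j] = (win * tmpl).sum()
        locsq[i, j] = (win * win).sum()

# chi^2(i, j) = sum over window of (image - template)^2, expanded:
chi2 = locsq + (tmpl * tmpl).sum() - 2.0 * corr
best = np.unravel_index(chi2.argmin(), chi2.shape)
# chi2 is exactly zero at best == (10, 5), where the template came from;
# the correlation peak alone carries no such absolute "it matches" value.
```

This is the extra information the chi-squared form buys: a match score with an absolute meaning (zero means identical), not just a relative peak.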
Okay, so that's the same thing. Now, how would we use this? One very common way is through what we call PIV, or Particle Image Velocimetry. There are commercial packages out there; the person who came up with this idea has a patent and has made a lot of money off of it. But all they do is, instead of correlating a little chunk of an image with the image it came from, they correlate it with an earlier image. I take a chunk from this frame and correlate it with that frame, and that tells me where that little chunk of material has most likely moved by the next frame. That's what I've done here: I've taken this little chunk and asked where it moves, and here is my correlation, and here is the distance it's moved, which tells me how the image was moving at that spot. You might say, well, how does an image move? There are no particles in this image; it's just junk flying around. That's the amazing thing: as long as that junk is moving in a reasonably continuous way, the correlation can find it. So it doesn't really have to be particles; it can be pretty much anything. And as long as you do this cross-correlation with later times, you can find the displacement at every single point, because I could have taken a box anywhere, any box I want, and done the same thing, and that gives me where that box has moved in the next frame. That gives me a whole velocity field, and that's all PIV is.

And we can use it. Here's an example: a rotating drum that I work with in my lab, and here's one snapshot of a movie of that drum. Each one of these little things is a particle. They're very small, and this is where this technique is really the best thing to use: these particles are too small to track individually, but we can track them using this PIV technique. So we take a little box and zoom in
on that box. Here is that box; I've taken a box this size and correlated with it. Here's the box in this frame and in the next frame, and you can see they're very similar: here's this little part, here's that little white part, and here is the cross-correlation. You can see it's moved by this amount. That's how I tell: I just look at where the correlation peak is, that's how far it's moved, and from that I can get a whole velocity field. So here's the image, and here's the velocity field I got out of it.

You can also see something interesting: in this image there's actually signal in the region where there are no particles. That's because the glass has tiny imperfections, little things on it, and I'm seeing the pure, flat glass moving along. It's that sensitive: I can see just the tiny imperfections in the glass moving along. And this is actually very useful, because I can also get the rotation rate out of it, since I'm now looking at the glass itself. So I can get both the particle motion and the glass motion in one image-analysis process.

But sometimes that's not enough; sometimes you want something more. Here is another version of that exact same experiment, but now we've taken many close-up movies, so each of these little boxes you see is a close-up movie of the whole thing. One thing you can see is that they can be stitched back together into a pretty good movie. The system is quite stationary: we did this box on one day, this one the next day, and so on, so it's a very repeatable experiment, and you can line them up very nicely. Here's one of those movies.
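The displacement measurement at the heart of the PIV step described above can be sketched in a few lines. This is a toy version of my own, not the lab's GPU code: take two frames where everything has shifted by a known amount, cross-correlate them with FFTs, and read the displacement off the correlation peak. Real PIV adds interrogation windows, mean subtraction, and sub-pixel peak fitting on top of this.

```python
import numpy as np

rng = np.random.default_rng(2)
frame1 = rng.random((64, 64))          # "junk" texture: any pattern works
dy, dx = 3, 5                          # the flow: everything moves by (3, 5)
frame2 = np.roll(frame1, (dy, dx), axis=(0, 1))

# FFT-based circular cross-correlation of the two frames.
F1 = np.fft.fft2(frame1)
F2 = np.fft.fft2(frame2)
xcorr = np.fft.ifft2(F2 * np.conj(F1)).real

peak = np.unravel_index(xcorr.argmax(), xcorr.shape)
# peak == (3, 5): the correlation peak sits at the displacement,
# with no need for identifiable particles in the texture.
```

This is the sense in which "it doesn't have to be particles": any texture that moves coherently, even imperfections in the glass, produces a correlation peak at its displacement.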
And you can see how much care we've taken to make this movie good. This is the raw movie; you can see it's black on white. We have taken care to get the best imaging that we possibly can here, and we'll even correct this later; I'll show you how to do that in a minute. So now I'm going to talk a bit about finding these kinds of particles in the remaining time that I have. There are two parts to particle tracking: there's finding the particles, and then there's tracking them along so that you can connect them. I'm not actually going to talk about particle tracking today, where you would connect them along, but the slides for how to do it are in there, and it's also in my book chapter. So I'm going to skip this part on how you do it, but it's pretty simple. You just sort of say, well, what's the closest particle in the next frame? That's the next one. There are a couple of little details, like what if one is closer but it's moved too far, and so on, and you can look at this and see how I do it. Okay, so how you actually track the particles is in there; you can look at it later. But what I want to spend a little more time on is how you find the particles. This is the more interesting problem, and once you've found the particles, there are lots of things you might do with them besides tracking them. So I want to spend a little time on finding the particles, also because this is a very general technique; it's not just for particles.
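The "closest particle in the next frame" linking that the speaker skips over, including the cutoff for particles that have moved too far, might look like the following sketch. The greedy shortest-link-first strategy and all names here are my own assumptions, not the speaker's implementation:

```python
import numpy as np

def link_nearest(pts_a, pts_b, max_disp=5.0):
    """Greedy frame-to-frame linking: sort all candidate pairs by
    distance, accept the shortest links first, never reuse a particle,
    and reject any link longer than max_disp (the 'moved too far'
    cutoff). Returns (index_in_a, index_in_b) pairs."""
    pts_a = np.asarray(pts_a, float)
    pts_b = np.asarray(pts_b, float)
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
    links, used_a, used_b = [], set(), set()
    for i, j in zip(*np.unravel_index(np.argsort(d, axis=None), d.shape)):
        if d[i, j] > max_disp:
            break                    # every remaining pair is even farther
        if i not in used_a and j not in used_b:
            links.append((int(i), int(j)))
            used_a.add(int(i))
            used_b.add(int(j))
    return links
```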
You can find anything in an image using this same sort of technique, and these are just a whole bunch of reasons why one wants to do this. I'm going to skip this for now, but you can go back and look at it. Let me just reiterate the main point of this talk, which is that your brain is really good, and you must make your images as good as you possibly can. I'll just repeat that at this point to remind you, and here, for example, is a sort of demonstration of how you could make your images better. So here are actually three imaging techniques all on one plot, and we did this by taking a movie with two cameras at the same time, taking different versions of this same frame. Both of them are backlit, so we have backlighting and the particles are blocking the light, okay? But then in this case we have one single point-source light, and this was done with a red laser. So that's how we did that. And this is done with a blue light that's in a ring. Okay, so it's a ring light.
You see these often; if you watch CSI, they always have one around their camera. The reason is that it gives you more uniform lighting, and it also helps you in particle tracking. So for example, here what we might do is just find the brightest spot, and that would tell us where the position was. We might get some sub-pixel accuracy by fitting it to a Gaussian or something else like that. That's what we do here. We do a similar thing here, but now we're looking for this black spot in the middle, and this has a big advantage: this black point, no matter where the light source is, is in the same spot. Basically, it's much closer to the center of the particle, because it's circularly symmetric and the particle is circularly symmetric. So we can see it there. This one depends on exactly where the particle is, and it also depends on where you are in the image. And you can see that now as I compare: I've compared the particle positions that I found, the error between the backlit image, which I'm going to show you is the best, and the ring lighting, which is good. There's not that much difference between them, and there's no variation with pixel position, because it's circularly symmetric; it doesn't care about that. And the point-source one, where it does care, you can see there's a fair amount of variation. This is not a lot; this is just one pixel.
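The "find the brightest spot, then fit a Gaussian for sub-pixel accuracy" idea can be sketched with the standard three-point Gaussian estimator: the log of a Gaussian is a parabola, so the vertex of the parabola through the log-intensities at the peak pixel and its two neighbors gives the sub-pixel offset. This is a common textbook method, not necessarily the speaker's exact fit; the names are my own:

```python
import numpy as np

def subpixel_peak(img):
    """Brightest pixel, refined per axis by a three-point Gaussian fit.
    Assumes strictly positive intensities around the peak (so the logs
    are defined)."""
    y, x = np.unravel_index(np.argmax(img), img.shape)

    def offset(minus, centre, plus):
        lm, lc, lp = np.log(minus), np.log(centre), np.log(plus)
        denom = lm - 2.0 * lc + lp
        return 0.0 if denom == 0.0 else 0.5 * (lm - lp) / denom

    dy = offset(img[y - 1, x], img[y, x], img[y + 1, x]) if 0 < y < img.shape[0] - 1 else 0.0
    dx = offset(img[y, x - 1], img[y, x], img[y, x + 1]) if 0 < x < img.shape[1] - 1 else 0.0
    return y + dy, x + dx
```

For a well-sampled Gaussian spot this recovers the centre to a small fraction of a pixel, which is exactly the gap between "pixel accurate" and "sub-pixel accurate" in the comparison above.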
So the distance between here and here is one pixel. Okay, so this is sub-pixel accurate; it's all within one pixel, basically, for all of these. But you can see there's a big difference in the accuracy that one can get, and if you need this accuracy, then you have to use the best image you can possibly get, and you need to use the kinds of techniques that I'm going to talk about now. I'll just mention a couple of other places where you could do this kind of thing; we're not going to talk about them in detail. There's a technique that's very common right now, a laser sheet technique. You put a fluid and a particle, or anything that you want to image, that are index matched, but you put dye in either the fluid or the particle. Then, when you shine a laser on it, the dye is illuminated, and so you can get an image like this where the particles are bright and the fluid is not visible. And you can take this laser sheet, scan it back and forth, and get a 3D image of this entire object just by scanning it along. Another very common technique for this same kind of thing is magnetic resonance imaging. Here is a magnetic resonance image of seeds, mustard seeds actually, that I took a long time ago. Here are two slices from it, and here's a 3D reconstruction of the mustard seeds in a little container. Then there's X-ray CT, especially micro-CT. Right now you can buy your own CT tomography unit for something like $10,000. That's still expensive, but it's way less than it used to be; even five years ago they were $40,000 or $50,000, and before that it was a hundred thousand or a million dollars. So that's a very steep decline on this curve; I'm expecting to see them in Walmart soon. So you can do CT with that. Here's a pile of sticks that we did this on: here's one image from it, and here's the reconstruction.
Here's a detail of that. So these are all things that you can do. But back to that problem of lighting; I'm just going to go through a little bit here about what we might do in this system. Now, the first thing I would do if a student came to me with this movie is say, well, your background is not uniform; you need to go back and do a better job. But let's say, for whatever reason, maybe these particles are the size of basketballs and it's just too hard to make the background any better, and this is just the best you can do, and they have some amazing excuse, blah, blah, blah. So they tell me, okay, well, all right, you've done your best. Fine. So now what do we do, because it's still not that good? It's still not as good as we would like to have. What do we do? Well, the first thing I like to do is to change my color map. In a grayscale color map I really can't see as much as I can in a colored color map. For example, I can't really tell that these guys are much crisper than these guys, but in this movie I can really see how much crisper they are in the bright light than in the darker light. And that's just an effect of less light: less signal, less signal-to-noise, and so I just can't see it. But in this color-scaled image I can really see that difference, and also the background; you can really see how the background is varying over time. Now, how would we get a good background for this? Well, the best way to get a good background is to take the image without the particles. That's the best way; that's what you should do. But sometimes that's not easy, because maybe it takes a long time to set this up, or maybe once you've set it up it's hard to move, or whatever.
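On the color-map point above: in practice you would just switch to a library colormap (matplotlib's, for example), but as a toy stand-in, mapping gray levels through a simple blue-to-green-to-red ramp shows why small background variations that are invisible in gray become obvious in color. Everything here is my own illustration, not from the talk:

```python
import numpy as np

def false_color(gray):
    """Map a grayscale image with values in [0, 1] to RGB through a
    blue -> green -> red ramp, so nearby gray levels land on visibly
    different colors."""
    g = np.clip(np.asarray(gray, float), 0.0, 1.0)
    r = np.clip(2 * g - 1, 0, 1)   # red ramps up over the top half
    b = np.clip(1 - 2 * g, 0, 1)   # blue ramps down over the bottom half
    gch = 1 - r - b                # green peaks in the middle
    return np.stack([r, gch, b], axis=-1)
```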
So another good technique is to take the maximum of each pixel over the whole movie, and that's what I'm going to do now. Here I'm running the same movie, but each frame, instead of displaying the image, I'm displaying the maximum value each pixel has had up to that point. And what you can see is that very quickly I get a picture of the background, because as the particles move away, the background comes through, and now I can use this background image to correct my original image. Here's another technique: now I'm taking the minimum over time, finding the value of the darkest pixels. I'd have to run this one much longer, because it takes a lot longer than the other one; in this case there's more open space than covered space. And I'd never be able to get it in the corners, for example, because these particles can't get into the corners of this thing. So there are some limitations, but I can use it anyway: where the particles never go, I really don't care what the values are. Now I can extract these two backgrounds. Here is that bright background that I showed before, and here's the dark background, the final one you would get if you waited until it was all done. And I do some tricks for the corners: basically I just add in randomness at the corners that matches the randomness I have here, just to make a nice image to display. And then how do I correct the movie? Well, I take the bright background image and subtract my image from it, and then I divide that by the bright background minus the dark background. Now this number has to go between zero and one. Why?
Because when the image is darkest, that's this number, and that gives us one; when the image is brightest, that's this number, and that gives us zero. So wherever the image was brightest it will be zero, and wherever the image is darkest it will be one. And you could do it the other way around: if you like to see bright particles you do it this way, and if you like to see dark particles you just switch the sign. And here now is the original image and the corrected image, and you can see the huge improvement one gets by correcting the background. This is about the level of background variation you can sustain and still get really good correction; if it's much more than this, you can't do anything about it. And just to compare it to that original image: these two are presented in exactly the same color scale, so you can really see the difference. And to really hammer this point home, here is a slice through that image. Here's the uncorrected image in green; you can see this is the position of a bright spot in these images, and this is the corrected version. Actually, I guess I've flipped this: this is the uncorrected version, but scaled. Sorry, thank you. This is the uncorrected image, but scaled, and so now it's bright; it's the same in these pixels. But you can also see that, because of the dark correction, I'm actually making these points flatter, because I can actually measure the dark noise there and fix it. Now, one thing here with this maximum method is that I'm limited to exactly between one and zero, and that means that down here this is really spurious.
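The whole background procedure described here (per-pixel maximum and minimum over the movie for the bright and dark backgrounds, the correction (bright - image)/(bright - dark), and the refinement discussed next of averaging the few most extreme values instead of taking the strict extremes) might be sketched as follows. This is my own sketch, not the speaker's code; the names and the guard for pixels the particles never cover are assumptions:

```python
import numpy as np

def estimate_backgrounds(frames):
    """Per-pixel extremes over the movie: for dark particles on a bright
    field, the maximum is the particle-free (bright) background and the
    minimum is the fully-covered (dark) background."""
    bright = frames[0].astype(float).copy()
    dark = frames[0].astype(float).copy()
    for f in frames[1:]:
        np.maximum(bright, f, out=bright)
        np.minimum(dark, f, out=dark)
    return bright, dark

def extreme_mean_backgrounds(frames, k=3):
    """Gentler variant: average the k brightest and k darkest values each
    pixel ever takes, so the corrected noise centres on 0 and the
    particle peak centres on 1 instead of being biased by outliers."""
    stack = np.sort(np.stack([f.astype(float) for f in frames]), axis=0)
    return stack[-k:].mean(axis=0), stack[:k].mean(axis=0)  # bright, dark

def correct(image, bright, dark):
    """(bright - image) / (bright - dark): 0 where the frame shows pure
    background, 1 where it is as dark as a particle ever made it. The
    maximum() guard avoids dividing by zero at pixels no particle ever
    covered (where bright == dark and the dark background is meaningless)."""
    return (bright - image) / np.maximum(bright - dark, 1e-9)
```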
It's really zero, but I'm getting a number which is on average above zero, because I've taken the strict maximum or minimum. I can do a little bit better if, instead of the maximum and minimum, I take the average of the brightest values and the average of the darkest values. Then I can shift this a little bit. It's very subtle here, but that's all I'm doing: now I'm taking the average dark pixel and the average bright pixel, and you can see my noise is centered around zero, which is a little bit nicer, and my peak is centered around one. It's not at one, but it's centered around one, and that's also a little bit nicer for these things. So that's a nice technique for getting the background corrected. Now that we have this perfect image (and you really do need good images, either by these kinds of techniques or by just going back to the lab and saying, well, I've got to get a brighter light, or more lights, or whatever), now we want to extract these objects. These are spherical objects, but you could extract any object, and I'll show you some examples at the end. Let's see, how am I doing here? Okay, I need to wrap up pretty soon. So how are we going to do this?
Well, we can represent this image as a sum of particle images, because here are the particles: I take each particle, make a function for that particle, and then add all those up, and I get this image. Now, this particle function could depend on anything I need. In this case there are really only two parameters, the position and the diameter, because it's a perfect sphere, so I really just have two. But if it were an ellipse I'd have three, because I'd have both the width and the height; if it were a bug, I might have many. So it depends on what I have, as to what I'll have to put in here to track this thing. But we use this as our ideal image, our calculated image; this is the way we get the image we're going to use. And I can rewrite this, for example... actually, I'm not going to talk about this; you can look it up. This is more of the details. I had thought about talking about it, but we started late, so I should try to finish up soon. So I'm not going to go through it in detail, but this is the way that we actually go about doing it. Let me just show it in pictures rather than talking about the math at this point. So here I've drawn what I call an ideal particle: I've made a mathematical function for my particle. And we've done this for lots of shapes, not just circles. As I mentioned, bugs: I don't know if Dan is here right now, but I tracked some bugs for Dan Goldman, and we made a bug function, a function that looks like a bug, and it's a mathematical function. Anything you can write down works, and if you really get down to it, you can use your images as functions, because as we know, images are just lists of numbers, just like functions. So you can actually take many pictures of your object and then use that as your function. So there's always a way to get this function. So what we're going to do with this... oh, let me just say a little bit more about it. So here this
function is a perfect circle, except that it's blurred a little bit, and there's no way to get around some of this blurring. Your imaging system has a point spread function; it can't represent an infinitely small dot. Nothing can, right? At the very least there's the Heisenberg uncertainty principle, but there's also the Rayleigh criterion, essentially, that tells you how small you can go. So it's going to be blurred a little bit. This is like a step function, except it's blurred a bit, and you can think of it as a perfect sphere convolved with the point spread function, and that's the picture we have here. And here is one reasonable representation. This is not exactly an Airy function convolved with a step function; that function is very difficult to write down, and it takes many lines of code. But this one is very close to it: it's a hyperbolic tangent, and we use the width of the hyperbolic tangent to represent, essentially, the focus of the system. And again, I'm not going to go through the details, but it's basically exactly what we talked about before: we have the image, we subtract this ideal image, we square it, and that gives us these convolutions which allow us to calculate this minimum value. Let me just show you the results; you can go through the math if you want. So here is an image, a real image from that rotating drum that I had before. This is corrected now, so you can see nice sharp edges and a nice black background. And here is the new image I get by doing this least-squares fitting. This is the likelihood that there is a particle at any position, and you can see all these peaks. Over here it may be hard to see that there are peaks, but when I change the color scale, you can see there are peaks at every spot, and every single particle in this image has a peak. Every single one. We track millions of particles.
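The pieces above (the tanh ideal particle, the convolution that turns the least-squares sum into a correlation whose peaks mark likely particle centres, and the per-particle sub-pixel fit described a bit later) might be sketched like this. It is my own reconstruction of the idea, not the speaker's code; SciPy, the kernel sizing, and the Nelder-Mead choice are assumptions:

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.optimize import minimize

def ideal_particle(shape, y0, x0, diameter, width):
    """Blurred disk: a tanh step from ~1 inside the particle to ~0
    outside; `width` stands in for the point-spread (focus) scale."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    r = np.hypot(yy - y0, xx - x0)
    return 0.5 * (1.0 - np.tanh((r - diameter / 2.0) / width))

def likelihood_map(image, diameter, width):
    """chi^2(pos) = sum I^2 - 2*(I correlated with Ip)(pos) + sum Ip^2,
    so only the correlation term depends on position: its peaks are the
    most likely particle centres (pixel-accurate)."""
    size = int(np.ceil(diameter + 6 * width)) | 1   # odd kernel size
    kern = ideal_particle((size, size), size // 2, size // 2, diameter, width)
    return fftconvolve(image, kern[::-1, ::-1], mode="same")

def refine_particle(crop, y_guess, x_guess, diameter=8.0, width=1.0):
    """Sub-pixel step: on a small crop containing one particle, minimise
    chi^2 over position, diameter and blur width with a generic
    minimiser, starting from the pixel-accurate guess."""
    def chi2(p):
        return np.sum((crop - ideal_particle(crop.shape, *p)) ** 2)
    return minimize(chi2, [y_guess, x_guess, diameter, width],
                    method="Nelder-Mead").x
```

A bug-shaped or rod-shaped function slots into the same machinery; only `ideal_particle` changes.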
We never miss a particle. If we miss a particle, I tell my student, go back and find it; that one is Fred, I really like him, we have to have him. So we never miss a particle, and we don't. And not only that, we never miss a little piece of a particle. As you can see, this guy here is a little piece of a particle, and here's another little piece of a particle. We can still find them, because to have a little piece of a particle like that, there had to be a particle just outside the frame. So the most likely location for that particle is outside your box, and so you can find every single particle using this technique. And here are the points that show every single particle that we find. I'm not going to talk about this, but you can use this exact same approach to do another common tracking technique, which is called the Hough transform. You may have heard of it. That technique can be mapped directly onto the one we're using, except that instead of tracking the circles, you track the edges: you take an ideal edge and you track it the same way. It's not quite as good as what we do, but it's a very good technique, and there's a lot of canned software out there for the Hough transform if you don't want to write your own. Here's a comparison of three major techniques in particle tracking. Here's the one I just talked about, least squares. These really sharp peaks tell me the positions of the particles; it's very, very sharp. The contours are at the same levels in each of these, so there's no question where the particle is; we can find it to the highest accuracy using this technique. With cross-correlation by itself, just cross-correlation, everything is really spread out, because you're not finding that one spot where the fit is perfect; you're finding where the cross-correlation is maximum, which is usually near the right spot, but much more spread out. And so you have things like, you can see that the actual point that we find is not symmetric with the
contours of other things around it. So if we do some kind of fitting to find sub-pixel accuracy, we won't do as well. The Hough transform does almost as well as ours, because it's basically the same idea, just using the edges instead of the whole particle. But you get a little bit more out of using the whole particle. For example, you don't get these ghosting effects that you have with the Hough transform, where out here two particles are really overlapping each other and you get that little ghosting effect. That can be significant if you don't have good signal to noise. I'm just going to finish up briefly. Once we've found the particles, and we usually use this technique to find them only to pixel accuracy, we want to find them to really high accuracy. So we break the image up into each individual particle image, and we fit each one separately using a least-squares technique. We go into just this little baby image that we know has only one particle, and we use least squares to fit the best possible parameters. In the other case we were just scanning all parameters; in this case we actually find the true minimum using a minimizer, some kind of minimizer, so we find it exactly. And here's the proof of how it works. Here is our corrected image; here is a calculated image (now that we know the positions of all the particles, we can calculate it); and here is the difference between them. So this is the chi-squared: you can see how much difference there is, and the sum of that is about a thousand. That's the pixel-accurate calculation. When we do the sub-pixel-accurate calculation, we go from a thousand down to six hundred. And then we can also fit the diameter and the width, the focus and the size of the particle, and minimize that, and we can go down to 180. So we can get basically a tenfold improvement in our fitting by going to this sub-pixel
fitting, looking at each individual image carefully. And here is just an example of it. The circles are from that PIV technique I talked about earlier; the dashed line is from this particle tracking technique. And you can see there's a lot more noise in the PIV, for example, than there is in the particle tracking. Here's what I mentioned earlier: this is a position in millimeters, where we're doing 3D tracking with two cameras, and we are able to get 55-nanometer resolution using this technique. Here's gravity: we're looking at particles that are falling under gravity, and here's the velocity as a function of time. That slope is gravity. These two lines are separated by assuming the particle position is off by that 55-nanometer error; those are those two lines. I'm not going to talk about this, but it's in here: you can do the same thing in 3D. We can find things like shape parameters, sphericity, using these kinds of techniques. We can track rods, we can track bugs (here's a bug), and you can use this to find forces on particles; this is a force technique that you can use. You can go through that; I'm not going to talk about it. And that's what I wanted to tell you about particle tracking. Thanks for your attention.