Good morning! Welcome to the Martin C. Jischke Hall of Biomedical Engineering and the Purdue Engineering Distinguished Lecture Series. Just to note, there is an overflow room in 2001 upstairs if you choose not to stand, but that is absolutely up to you. This series was started two years ago as part of our 150th anniversary celebration here at Purdue to bring outstanding academicians and professionals to our campus to engage our students, staff, and faculty in thought-provoking conversations around grand challenges in both science and engineering. Now in addition to this much-anticipated presentation, we'll have a panel session up in room 2001 immediately following, where we bring together a number of scholars in the field to really look ahead at what our world will be like with advanced optical technologies. I'm now delighted to introduce to you Mark Lundstrom, the Acting Dean of Engineering and Cypress Distinguished Professor of Electrical and Computer Engineering here at Purdue. Professor Lundstrom is both an outstanding teacher and mentor as well as a renowned scientist in the area of physics applied to electronic devices. He was the founding director of our NSF-supported NCN, the Network for Computational Nanotechnology here at Purdue, and is a member of the National Academy of Engineering. So please give a warm welcome to Dean Mark Lundstrom. Thank you, George, and good morning, everyone. It's my very great honor to introduce our speaker, Professor W. E. Moerner, the Harry S. Mosher Professor of Chemistry at Stanford. Before I introduce W. E., I just want to add a few words about this very special seminar series that we're conducting here at Purdue. The Purdue Engineering Distinguished Lecture Series brings a prominent scholar and intellectual leader to campus every month. The visit includes a distinguished lecture, such as the one we're about to hear, and a panel discussion, which will follow this lecture.
Panel discussions are always interesting and thought-provoking. It also includes extensive interactions with faculty and students, and I know our students have been preparing for some time for Professor Moerner's visit, so we're looking forward to that. W. E. Moerner was born in California but grew up in Texas. He did his undergraduate education in the Midwest at Washington University in St. Louis. He received his B.S. in electrical engineering with top honors, his B.S. in physics with top honors, and his A.B. degree in mathematics summa cum laude, all in 1975. He's still a ham radio operator and a member of the IEEE, so I can claim him as an electrical engineer. W. E. then studied physics at Cornell and received his M.S. in 1978 and his Ph.D. in 1982. At that time the U.S. was fortunate to have a number of great corporate research labs. W. E. joined IBM Research in Almaden, where he was encouraged by his IBM managers to do the best science possible, and he began the work that would lead to the Nobel Prize. As corporate America changed in the early 1990s, IBM Research did too, and that led W. E. back to academia, first at UCSD and then at Stanford, where he has been Professor of Chemistry since 2002. He's a recipient of many major awards, including the Wolf Prize in Chemistry, the Irving Langmuir Prize in Chemical Physics, and the Peter Debye Award in Physical Chemistry. He's a member of the National Academy of Sciences. He's a 2014 recipient of the Nobel Prize in Chemistry for his work on super-resolved fluorescence microscopy. And today he'll be speaking about super-resolution microscopy in cells. So please join me in welcoming Professor Moerner. Thank you. Well, thank you, Dean Lundstrom, for that wonderful introduction. This is a wonderful event for me, and it's an honor to be able to present this lecture. I want to thank Professor Fang Huang for doing so many things to organize my visit, and of course the entire engineering school for allowing me to come.
It's a great pleasure and honor to be here. So we're going to talk about single molecules, individual molecules, which is really a combination of engineering and science: physics and chemistry applied to biology. So you need to be ready to move from field to field during this presentation. And I'm hopefully going to make sure that there will be something interesting for everybody to follow. And I'm happy to explain specific things later in the questions. But first of all, our roadmap for today is shown here. It's really been 30 years, actually 31 years, since the first experiments. But we're going to start with those early days. I would like to talk a little bit about the original discovery and observation of individual molecules, because that's what everything followed from afterwards. Super-resolution microscopy is one of the favorite applications, though it's not the only one, and I'll describe it briefly. And then I'll talk about new work, because the question here is, of course, what's new. So we've been doing some low-temperature localization microscopy with single molecules to provide annotation of cryo-electron tomography images. That's a very new step forward that we think is interesting. I'll then talk about three-dimensional imaging, where we use point-spread-function engineering to get the third, axial dimension in a two-dimensional microscope, and talk about a specific tilted light sheet microscope that utilizes point spread functions in a nice way. There are a number of applications that I won't have time to talk about, applications to specific biological systems, but I'll mention them along the way. And then at the end, for some more fun, we'll talk about neural nets, because we've been utilizing neural nets for analyzing single-molecule images recently. So, starting with that historical summary very briefly: if you think back to the mid-1980s, there were beautiful experiments on single atoms in a vacuum trap, but no single-molecule experiments.
And so we detected single molecules in 1989 when I was at IBM. It was a technique that utilized quantum-limited laser frequency-modulation spectroscopy, a beautiful technique that I don't have time to describe. But why did this happen at IBM? That's the more interesting part at this point. We had the ability to do fundamental research in basic science, exploring an application to optical storage: to use molecules in solids to write bits in the frequency domain. So there were these things called spectral holes, which are marks written in the wavelength of the light used to illuminate a sample. And you can imagine putting thousands of bits in one location by using this frequency domain. But the really important point is that I was interested in the signal-to-noise ratio. I was interested in what is going to limit our ability to detect a small spectral feature. So that represents, equivalently, asking this question. Think about pentacene molecules in a crystal of para-terphenyl, which is transparent. And this is the electronic absorption of pentacene molecules, billions of molecules. And so you can see that it's got a certain color, but this is an inhomogeneously broadened line, and we were going to write a thousand bits in the peak of this line. So it was important to know whether this was a smooth line. If you spread it out at high resolution, is it just a horizontal line or not? That's the signal-to-noise question. And so we set out to measure what it really looked like. And we found this. We found an amazing spectral structure. Now, very, very high resolution, low temperatures, and single-frequency lasers with about one megahertz linewidth are required to measure this. And it's not noise. If you measure this once and then measure it again in the same piece of sample, you see the same spectral structure. So this is coming from the molecules piling up. Individual molecules' absorption is piling up in a certain way.
And so we called it statistical fine structure, because its amplitude scaled as the square root of the number of molecules in resonance. Think about that for a minute. Most spectral features scale linearly with the number of molecules: you put in 10 times more molecules, you get 10 times more absorption. This is a feature whose amplitude changes as the square root of the number of molecules, because it's coming from that statistical effect that I just described. So because we could see this, in answer to that question above, it also led me to think, well, maybe we can detect a single molecule if we can detect these peaks. And in fact, you only have to work square root of N harder to get to the single-molecule limit. Think about that for a minute. You don't have to work N times harder, just square root of N harder. So that's why we used FM spectroscopy in 1989 to detect single molecules. And that sort of started this field. Now, there was an explosion of interest in detecting single molecules, and everyone switched to fluorescence, which was very important, because you could also detect the molecules by recording the emitted fluorescence. And that turns out to give better signal-to-noise, but it wasn't the first method. Anyway, that important step by Michel Orrit caused another rush of wonderful people moving into the field, measuring every situation they could think of for single molecules at low temperatures. But what really matters for thinking about the whole field now is: what were the really important surprises that occurred? We started seeing at low temperatures that molecules would blink, they would turn on and off, or they would move in frequency space, even at 1.2 kelvin in a crystal. Interesting science, of course, but people thought, oh, well, these molecules are not very stable, they're not very interesting. But this is the beginning of what's going to come later when I talk about single molecules used for super-resolution.
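To make that square-root-of-N scaling concrete, here is a minimal numerical sketch (not from the talk; the line shape, widths, and molecule counts are made-up illustration values). We build an inhomogeneously broadened line as a sum of narrow Gaussian absorptions at random center frequencies, then check that the mean absorption grows like N while the bumpy fine structure grows only like the square root of N:

```python
import numpy as np

rng = np.random.default_rng(0)

def spectrum(n_molecules, span=100.0, width=0.5, n_points=2000):
    """Inhomogeneous line: sum of narrow Gaussian absorptions at random centers."""
    nu = np.linspace(5.0, span - 5.0, n_points)   # probe away from the band edges
    s = np.zeros_like(nu)
    for c in rng.uniform(0.0, span, n_molecules):
        s += np.exp(-((nu - c) ** 2) / (2.0 * width ** 2))
    return s

s1 = spectrum(10_000)
s2 = spectrum(40_000)                             # 4x more molecules in resonance
print("mean absorption ratio (~4):", s2.mean() / s1.mean())
print("fine-structure ratio  (~2):", s2.std() / s1.std())
```

Quadrupling the number of molecules quadruples the average absorption but only doubles the spectral roughness, which is why you only have to work square root of N harder, not N times harder, to reach one molecule.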
We could turn molecules on and off, optically. And those were very important surprises that weren't really expected until you started doing experiments in the single-molecule regime. Then the field moved to room temperature in the mid-90s, and a lot of people showed that you could detect single molecules, even at room temperature, by many different microscopic methods: near-field, confocal, wide-field, two-photon, and so on. And so that's all wonderful. It just means you have to be more careful about lowering backgrounds, because now you don't have the narrowness and the strength of the absorption that's true at low temperatures to help you. But nevertheless, surprises also appeared at room temperature. We started looking at a single copy of green fluorescent protein, and the first time we saw that, we saw it was also blinking on and off, in a random sort of way. And so that's an important step. But again, some people thought, well, this is not very interesting; what are you going to do if the molecules are unstable in some way? You'll see that coming back in a few moments when we start talking about super-resolution as well. So the room-temperature experiments that we do, just to make sure everyone's on board in terms of the techniques that we mostly utilize, work basically like this. Imagine you want to see a particular, let's say, protein or oligonucleotide; if it's not natively fluorescent, we usually attach a fluorescent label to it. That's these kinds of molecules. Small molecules, one to two nanometers in size, are one possibility: rhodamines, cyanine dyes, or whatever. Or you can also attach a green fluorescent protein to the object of interest. The point is, once you've done that, now you want to pump and collect the emitted light from those single molecules. And what we're always doing in these experiments is pumping electronic transitions of the molecule and collecting emitted fluorescence shifted to longer wavelengths.
Usually you avoid dark states like triplets and so on, but that actually is a source of the blinking that we're going to utilize later. So in the experiment, you can think of it as focusing a laser down to a small spot, but you may know that due to diffraction, you cannot make this spot infinitely small. It cannot be smaller than lambda over roughly two times the numerical aperture of the microscope. The numerical aperture is about one. So this is the diffraction limit of visible light if you're using, let's say, 500-nanometer light, green light. And so that is a fundamental property that's coming from physics, from diffraction itself. So you might say, well, that's huge compared to the size of these molecules. But no problem, you can still get to the single-molecule limit: just dilute them. Just make sure they're further apart than the width of this spot, so that now the emitted light is coming from just one molecule being pumped. Okay? So that's the kind of thing you have to do at room temperature to get to the single-molecule limit. But we'll solve that problem in a moment as well. Thinking of this regime, and just to give you an example of what happens in this regime, especially since we went to room temperature: what happens if you look with a wide-field microscope at a big piece of a sample? And this is a cell where a transmembrane protein has been labeled with a fluorescent label. And we see this wonderful, exciting measurement. And I still love this measurement, even though the imaging and the codec are not so beautiful now; this particular measurement is from quite a long time ago, right? I love to watch those molecules on the surface of the cell because, again, you may not have realized this, but the molecules on the surface of the cells in your body are doing this right now, even faster, because this is at about 22 degrees C; yours are moving faster. This is the native motion of individual molecules on the surface of your cells.
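As a quick worked version of that formula (the function name here is mine, just for illustration), the smallest focused spot is about the wavelength divided by twice the numerical aperture:

```python
def diffraction_limit_nm(wavelength_nm, numerical_aperture):
    """Abbe-type diffraction limit: smallest spot size ~ lambda / (2 * NA)."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Green light at 500 nm through an NA ~ 1 objective gives a ~250 nm spot,
# enormous compared to a 1-2 nm fluorophore.
print(diffraction_limit_nm(500.0, 1.0))   # 250.0
```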
And you very quickly realize that motion and diffusion and randomness, Brownian effects, are important on biological scales to make things work. And I'm showing it again because there are so many interesting things in this movie. Sorry, one more time. Also, note that the molecules don't look like infinitely small dots. That's the diffraction limit that I just told you about. They look like a spot that's about 250 nanometers in size. And you see them disappearing; some molecules are going away. That's photobleaching. Molecules give us maybe a million photons before they give up. Some special cases, of course, can do better. But you see that we have to live with a finite number of photons. Everything that we do becomes an estimation challenge: how do you learn as much as possible with a finite number of photons? And so on. So now, the next thing that's important to recognize is that because you could start doing that, observing individuals and removing ensemble averaging, many, many scientists all over the world started applying these methods. Let's look and see what happens if we watch them one by one. How do they behave? Do they behave the same? Do they behave differently? Is there heterogeneity? And of course, there certainly is. And I apologize for this laundry list, but it's just to kind of emphasize that so many interesting things have been done at room temperature and at low temperatures just by looking at individual molecules. It's fantastic. There are applications to biophysics and cell biology. There are applications to photon antibunching and so on. It's wonderful to be able to look at these sort of ultimate individual objects and ask questions about a complex environment, whether it's a polymer or a cell or other sorts of interesting situations. So now, thinking about these kinds of measurements in a different way, I want to tell you a little bit more about what we can learn by measurements on single molecules.
First of all, remembering that we have this diffraction limit: even though we have a very tiny emitter, its spot looks large, and now I'm showing you a two-dimensional representation of the image on the camera. That's the diffraction limit defining the width of that spot. And what many people do, a very important thing you can do, is to localize the molecule, find out where it is. And you do that by fitting a model function to this measured data from a camera. The model function will have as one of its parameters the center position, the location of that molecule, and the estimate of that position has a much smaller error distribution. And that error distribution scales as one over the square root of the number of photons. So this previous limit, the Abbe limit, can be beaten if you have a single molecule, because you can localize it much better. If you get 10 to the fourth photons, then the precision for where you can find the molecule is 100 times below the diffraction limit. So this is all great, and this is the regime where much of the previous work I just showed you is done. But it doesn't give you resolution. This doesn't give you the ability to distinguish two molecules that are very close together, because those spots will overlap. So that's why super-resolution is a step beyond just localization. And for super-resolution imaging, what we do, and I'm going to describe it fairly quickly because I think many people already know this: when we have many, many molecules potentially overlapping, we use the fact that you can turn them on or off. We use the fact that they either blink or we can photoactivate them, to make the concentration very low in any imaging frame. If you make sure that there are very few in any imaging frame, then you can localize them by the normal method, by this method up here. And then you can get information about the underlying structure of the sample from multiple images.
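The one-over-square-root-of-N photon scaling can be checked with a tiny Monte Carlo sketch (the numbers here are illustrative, not from the talk): each detected photon lands on the camera with the PSF's spread, and the center estimate tightens as sigma over the square root of N:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 250.0 / 2.355      # PSF standard deviation (nm) for a ~250 nm FWHM spot

def center_error(n_photons, n_trials=500):
    """Spread of the estimated molecule position over repeated localizations."""
    photons = rng.normal(0.0, sigma, size=(n_trials, n_photons))
    return photons.mean(axis=1).std()     # std of the per-trial center estimates

for n in (100, 10_000):
    print(n, "photons:", center_error(n), "vs sigma/sqrt(N) =", sigma / np.sqrt(n))
```

With 10^4 photons the center is pinned down about 100 times more tightly than the spot width itself, matching the estimate in the talk.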
So another way to say that is: suppose I have this structure here, and I've got fluorophores all along it. If I just take a diffraction-limited image, I'll see big blurry features. But I can go to this individual-molecule limit at low concentration, enabled by active control: the experimenter has to actively pick a way to make sure most of the molecules are off at any given time. Then in each imaging frame I can localize, and in the next imaging frame I'm going to get a different molecule. The different emitters are imaged at different times. So this is, if you like, time-domain multiplexing of learning the positions of the molecules in a complex structure. And then you take all those positions that you've recorded in the computer and plot them all at once. So the final result is a reconstruction of the underlying structure that was beyond the diffraction limit before. So it's a fascinating way to use that sort of blinking business that I mentioned earlier. Now, what are people also still doing? They're also tracking single molecules, so let's not forget that. This works for a static structure, essentially, static on the time scale of the measurement. But you can also just watch one single individual, where that same individual is observed at different time points, and get a precise trajectory of the motion of that molecule, for example, in the nucleus or in the cytoplasm or in other interesting structures where you'd like to learn about the behavior from the motion. So these approaches let us now observe structures and motions beyond the diffraction limit in 3D. You can also measure orientations of molecules, another fascinating thing that basically comes from polarization measurements. So you can see the direction that the transition dipole in the molecule is oriented. You can look at properties of the local environment from the emission: does it have a short lifetime, a long lifetime? Things like that.
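That time-domain multiplexing idea can be sketched in a few lines (a toy model with made-up numbers: a line of fluorophores 20 nm apart, a small "on" fraction per frame, and roughly 10 nm localization error). Pooling the per-frame localizations recovers structure far below the ~250 nm blur:

```python
import numpy as np

rng = np.random.default_rng(2)

true_positions = np.arange(0.0, 200.0, 20.0)   # fluorophores 20 nm apart (toy)
precision = 10.0                               # per-localization error (nm)

localizations = []
for frame in range(2000):
    # Active control: in each frame only a sparse random subset is "on"
    on = true_positions[rng.random(true_positions.size) < 0.05]
    # Each "on" molecule is localized to ~10 nm and recorded
    localizations.extend(on + rng.normal(0.0, precision, on.size))

localizations = np.array(localizations)
# The pooled localizations cluster at the true 20 nm spacing, which a single
# diffraction-limited exposure would have smeared into one ~250 nm blob.
print(len(localizations), "localizations, mean position:", localizations.mean())
```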
And the size and charges of objects can even be sensed in single-molecule measurements. So that's kind of the big overview. And now, given that we can do this super-resolution imaging, once again a large number of people have jumped into this area and are utilizing super-resolution microscopy in many different systems. And once again, I cannot possibly summarize everything that's been done in this field. But to just give you a few quick examples: here's an interesting observation from the Zhuang lab at Harvard. They were looking at neurons and a particular protein in the neuron, spectrin, and discovered that there is a fascinating banding pattern perpendicular to the long axis of an axon that hadn't been observed before. That's below the diffraction limit and so on, and it tells you something interesting about the structure of the axon. Here's something from Fang Huang's work: this is the synaptonemal complex imaged with super-resolution. Here is an example that relates to amyloid diseases. This is the huntingtin protein inside a cell. Mutant huntingtin will form aggregates and fibrils. There are these really huge, large inclusion bodies. But we also find it fascinating to be able to see with super-resolution these very, very tiny fibrils that are scattered around in the cell. So there are a number of people applying these methods to different kinds of important questions that relate to amyloid formation. And then this last quick example is an image of the glycocalyx. Now, what's the glycocalyx? It turns out that the surfaces of our cells are decorated with many interesting sugars. The glycocalyx is the general term for these glycans that are on the surface of the cell, forming interesting structures and so on. And especially in cancer cells, and this is a cancer cell, there are these tubules extending out from the cell membrane, and they're decorated with these sugars.
So if we image the sugars and their positions using a fluorescent label, we can see with super-resolution their shapes and sizes and so forth, and then follow that as cells go through a transition into a cancerous form, etc. So there are just so many interesting things that people can do now with these kinds of approaches. So since, as I said, I can't summarize this whole field by myself, what I'm going to talk about now is some of the new things we've been doing that are aimed at pushing super-resolution forward, pushing single-molecule measurements into other regimes. So I'm going to talk about going back to low temperatures briefly. I'll talk in a little more detail about how we do our 3D imaging and then explain this tilted light sheet microscope. I showed you this already, so I'm not going to go a lot more into that, but I'm instead going to spend the end talking about how you use neural nets in this whole problem. So I hope that you can follow me through these different descriptions of really new work. Well, as a motivation, if you like, for this next portion: I'm going to talk about super-resolution microscopy, which is an optical method, and combine it in a certain way with electron microscopy, which is a related but different method. But the way to introduce this work is to go back to this beautiful drawing by David Goodsell of what's inside a cell, a bacterial cell in this case. There are so many interesting little machines and proteins involved here. Well, people can look at this with electron microscopy, which has very, very high spatial resolution. But what you see, for example in cryogenic electron tomography, where you take many, many tilt images and so on and look at a thicker sample, the whole cell, instead of just one protein at a time, is a grayscale image. You see an image of the density that's there. That's what the electron microscopy is giving you.
It's certainly high resolution, but the problem is you don't know which structure is which if it's all just a grayscale blur. Now, on the other hand, we know that you can see single molecules, like I just said, and localize them, molecules that are specifically attached to a specific protein. So the idea here is to combine this cryogenic electron tomography with single-molecule fluorescence localizations. Then, if this advance works, you would have the knowledge that those particular positions in the cryo-electron microscopy image were those specific proteins that we had labeled and observed by light. So that's why we're going to call this single-molecule annotation. This then requires that we do our optical microscopy on a cryogenic sample that can be immediately sent to a cryo-electron microscope. So that's the challenge here. We need to do experiments on cells that are on an electron microscope grid. Peter Dahlberg, my postdoc, has been pursuing this interesting project. So the workflow for all of this involves, first of all, cells that are fluorescently labeled, and right now we typically use a particular fluorescent protein called PAmKate. It's just one of this huge menagerie of fluorescent proteins, but it's one that works at 77 K, because we found that many others don't work at 77 K. What do I mean by work? I need this active-control scheme. I need to be able to turn the molecules on and off, because if they're all on, then I won't be able to get the super-resolution information that we'd like to have. So we learned that PAmKate worked, and we published that recently. So here are some bacterial cells that have been labeled. You have to plunge-freeze a solution of the cells, on an electron microscope grid, into liquid ethane. That produces vitreous ice. It freezes the water so fast that it doesn't form a crystal.
It stays transparent, and it's a way to make sure that you haven't really perturbed the system very much, even though you've cooled it down. Then we take that sample into the optical microscope. So here's the electron microscope grid. It's in liquid nitrogen. All the imaging, then, is done in that environment, but done in the usual sort of way, with a pumping laser and an activation laser, collecting fluorescence from the sample just like before. Then that sample is taken over to the electron microscope and measured, if you like, with electron tomography, and then those images have to be combined. You have to figure out how to register and visualize the electron microscope image and the information from the optical image together. This is, as I say, the work of Peter Dahlberg, who has been pushing this in my lab. So I want to show you an example of how this works in one of our favorite organisms, Caulobacter crescentus. Caulobacter is a bacterium that's been studied for decades by our friend Lucy Shapiro at Stanford. And what's interesting about it is that this organism divides asymmetrically. This is a pre-divisional cell, so you can see that there is a stalked form and another daughter cell that's forming that has a flagellum. And so as the cell cycle goes forward, you have stalked cells that can start dividing, but then you create swarmer cells and leave a stalked cell behind. You have two different daughter cells, even though it's a small bacterium. So that's fascinating. How does it work? How does this happen? How do you program this cell? That's been the key problem under study for a really long time. And so I want to use this as a model system to prove our technique, by focusing on just one part of the cell, this region called the chemoreceptor array.
This is a set of proteins that sense the environment and say, okay, if there's a lot of food or so forth, then we're going to now begin our transition from a swarmer cell to a stalked cell, for example. But all the details of that are not important. Just remember that if I take that structure and do an electron microscope image, an electron tomogram slice, you see that there are these two lines here close to this inner membrane. There are all these other membranes; don't worry about that, but you can see the chemoreceptor array right there. So it's an example of something in the electron microscope image that you can recognize. And that's why we wanted to use it: you can recognize it both in the electron microscopy and, as I'll show you, in the optical microscopy. So this chemoreceptor array is composed of certain proteins. But I wish I knew why this slide doesn't advance; if anybody knows, tell me. This is a Logitech. So it's supposed to be made of McpA proteins and other proteins, but let's look for the McpA proteins. Can we find them? Can we see them optically in this chemoreceptor array? And so we image overexpression of the protein McpA fused to PAmKate, our photoactivatable fluorescent protein. And the images look like this. On the left is the diffraction-limited image, and you can see that there are some white regions; that's just the entire cell, lit up in another way. The fluorescence I'm talking about is this orange and red. It's near the end of the cells, as it's supposed to be. You saw that it should be near the end, but we need much better localization than this diffraction-limited image. So if you then go to the single-molecule limit, you see these beautiful individual spots, and you're going to see more of them turn on as we activate more fluorophores, and then you'll see them ultimately turn off, and things like that. But notice this is at 77 K, and this is sped up 100 times. These molecules are not blinking fast.
They stay on for a long time, and that's actually why we're doing this. If you have a long on-time and less photobleaching, you get more photons and therefore better precision for determining the position of the molecule. So that's the idea here. So these molecules get localized by fitting the measurements, and in fact the long on-times lead not only to high precision, which is good, but possibly also to overlapping PSFs, which is not so good. I say PSFs here, point spread functions, because that's the image of a single molecule, informally. So we have to work hard to deal with overlapping PSFs, and of course there are theoretical approaches that various people have presented for dealing with overlapping PSFs. But in this case, since we can see the emission from one spot go up and down digitally, we know that there's a molecule that turned on and then off in this time period. So you can take the light underneath it, call that background, subtract that background, and fit the photons from just one emitter. So you can see that this is very clear: there's an estimate of background, emitter plus background, and then you get the emitter alone and can fit the emitter alone. Once you do that for every frame, you get these small black dots. Those are the localizations for each frame, each one-second-long frame. But it's the same molecule for many, many frames. So we should combine the information from all of those localizations. And that's what this circle is. That's the average location over all of those individual frame localizations. And now we can use these measurements to define its precision, the statistical precision, by the radius of that circle. So the bottom line is, you get information down to the few-nanometer level of where this protein is by this method. That's the idea. In this case, it happened to be 12,000 photons, so 9 nanometers. But there are a number of cases where it's 5, and even down to 1 nanometer. So what do we do with this data?
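The combine-many-frames step can be sketched like this (illustrative numbers, not the actual data: a per-frame precision of 30 nm over 25 one-second frames). Averaging k independent frame localizations shrinks the error by about the square root of k, and the standard error plays the role of the circle radius:

```python
import numpy as np

rng = np.random.default_rng(3)

sigma_frame = 30.0    # per-frame localization precision (nm), illustrative
n_frames = 25         # molecule stays "on" for many one-second frames

# Per-frame (x, y) localizations of one long-lived molecule at (100, 50) nm
xy = rng.normal(loc=(100.0, 50.0), scale=sigma_frame, size=(n_frames, 2))

center = xy.mean(axis=0)                                    # combined localization
radius = xy.std(axis=0, ddof=1).mean() / np.sqrt(n_frames)  # standard-error "circle"
print("center:", center, "radius (nm):", radius)
```

With 25 frames the combined precision is about 5 times better than any single frame, which is how per-frame localizations in the tens of nanometers get down to the few-nanometer level.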
Now, after all that registration business, you put it on top of the cell, and you see that there are many spots that are close to this chemoreceptor array in the electron microscope image. And it's sort of fascinating that many of them are there. And note, by the way, that our optical images are above and below the plane of this electron microscope image, because this is just one very, very thin slice. And you might say, well, what's going on over here? There's a molecule very far away. Yeah, that's interesting, isn't it? There very well may be a molecule over there, because there's no rule that says all of them have to be in the array. These are synthesized, they move around in the cell, and they have to find their location over time. But mostly they're in the proper location. So is it in the chemoreceptor array? Yes, of course. And we see a clear correspondence between the single-molecule annotations, as we call them, and the cryo-ET density. Great. So that is a known structure. But we can also apply this to unknown structures, structures where you don't see a clear pattern in the cell. And the example here is a molecule called PopZ, which is in the polar region. It forms a sort of microenvironment near the poles, PopZ does. And so it's not easily seen in the electron microscopy images. So what do we do then? Well, we observe single molecules. And here's an example of the data: blinking molecules, and every time a molecule comes on, we localize it and put it in the image. Here's the electron microscope image of that same region. And you can see that here's one of the cells, and here's the other cell of two that had divided. And if you now take all of this electron microscope image information, what people normally do is annotate the electron microscopy images: by hand, they start marking out things that can be recognized. Okay?
That's how you can localize and say, well, you can see these membranes: the inner membrane, the peptidoglycan layer, the outer membrane, and so forth. And you see all these dots. These dots are all coming from ribosomes. Ribosomes are large, big blobs, so you know which things to mark as ribosomes. But remember, this is hand-annotated. Some people use neural nets to do it too, but they train the neural nets by having people pick out these spots. And that's fine. It gives you a fancy image that you can rotate and so on. But what about all the proteins that are not annotated? That's the point here. There are thousands of proteins that are not annotated, and that's why we're looking at these optical experiments: to try to annotate those that are not easily seen. So this zeroes in on the region near the pole where there are no ribosomes. That's the feature of PopZ: it excludes ribosomes. So the ribosomes stop, and there's something here, but you can't annotate it in the electron microscope image. That's shown here as a blow-up: it's this region free of ribosomes. I'm now drawing the void, the region where there are no ribosomes, and then on top of it you'll see the localizations of single PopZ molecules. So now we can show that you can use this method to say where the molecules are that you can't identify easily in electron microscope images. These are also shown with 3D information, with this cross structure, because we also get a little Z information from this microscope using astigmatism. So that's an example of what you can do with single molecules now: pushing into, in effect, and improving another class of fascinating measurements. Great. So now let's go back to the main thread: no electron microscopy for a little while, but let's talk about three-dimensional imaging, because, of course, the world is three-dimensional.
For example, if I'm thinking of the motion of a protein on the surface of this bacterial cell and I only measure the two-dimensional projection of its motion, I'll get a very distorted view of the motion. Any motion in the axial or Z direction is basically lost if you only do a two-dimensional image. So why is this a problem, and how do you fix it? Well, you have to think about the microscope, about the light going through the microscope. There are a number of different solutions to this, but I'm going to talk about our way of doing it. For a conventional microscope, where the object is moving in the Z direction, you see that in one position it's in focus and you see a nice tight spot, but then you don't see anything after that; it goes away very quickly. And in fact, at the focal plane, where it's brightest, the image changes slowly with Z. That means its Fisher information is very poor: the derivatives are zero when you're right in focus. So this is not so good for determining Z position. So we've been switching to a different kind of microscope, where we alter the optical behavior by placing a phase mask in the Fourier plane of the microscope, the back focal plane, in between the objective and the tube lens. This particular phase pattern, which looks kind of crazy, came from very nice work by Rafael Piestun at the University of Colorado. It is really only a map of phase delays: these different colors mean different numbers of radians of phase delay at different positions in the back focal plane. This particular phase pattern converts the light from a single emitter into two spots on the camera. And more importantly, as you move the object in Z, if you have different emitters at different Z positions, you see that these two spots revolve around one another. So now I can encode Z in the angle of the line between those two spots. That's what this is all about.
It's a way to get Z by encoding the information in the image of the single emitter. And it's important to note that this works over two microns, over a long range in the Z direction. The standard PSF of a normal microscope only works for 500 or 600 nanometers right near the focus; this works over a longer range. So maybe we can use that. We'll think about that later. And then I also want to emphasize that we don't do any scanning when we use this response. You don't scan anything. Just one image will show you all the molecules within a slice two microns thick, and it tells you what their Zs are by just these lines between the two spots. You get X, Y, and Z for this whole thick little pancake. So that's the basic idea, and this works great for doing super-resolution microscopy. Here's a two-color 3D case using the so-called double-helix point spread function. We call it the double helix because, when you think about it in terms of the mathematics, it's kind of like a double helix along the Z axis, and the images are slices through this double helix; that's what all these pairs of spots are. This is, of course, another wonderful example of the cleverness of my students. Here are Matt Lew and Steve Lee, where they've got a new acronym called SPRAIPAINT, which is, of course, partly for fun. The first part of it stands for super-resolution by power-dependent active intermittency. So what's that? Blinking. A fancy term for blinking, right? Because you need blinking molecules to do all this. Now, to make one more important point about these point spread functions: they are useful on the one hand for super-resolution, that is, measuring an extended structure where you use blinking to get all the different molecules. But you can also use them for motion, in the single-molecule regime, by just having a low concentration and following one molecule.
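The angle-to-depth readout just described can be pictured with a small sketch. The linear calibration here, 180 degrees of rotation across a 2 micron range, is a hypothetical stand-in for a real measured calibration curve, not a number from the talk.

```python
import math

# Double-helix readout sketch: one emitter appears as two camera spots,
# and the angle of the line between them encodes z.  Calibration values
# below are assumed for illustration.
Z_RANGE_UM = 2.0
ANGLE_RANGE_DEG = 180.0

def z_from_spot_pair(x1, y1, x2, y2):
    """Return z in microns (from the bottom of the range) given the two
    spot centroids of one emitter on the camera."""
    angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % ANGLE_RANGE_DEG
    return angle / ANGLE_RANGE_DEG * Z_RANGE_UM
```

A single camera frame then yields X and Y from the midpoint of the pair and Z from the angle, for every molecule in the slab, with no scanning.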
You see this molecule, an RNA particle inside the cell, moving around over a time of nearly 36 seconds, and the angles give you X, Y, and Z for that whole time period. Okay, so it's very useful for that too. Well, now, what do you do when you want to graduate from thin little bacterial cells to mammalian cells, which are much thicker? Then you want to do something important to remove backgrounds, because if you just illuminate a thick cell in wide field, you'll have haze from all kinds of molecules that are not in the focal plane, that are out of focus, so to speak. This is solved in many different ways, but we want to solve it for the wide-field problem, so we've been using something well known called light sheet microscopy. The idea is just to illuminate with a thin sheet of light. Imagine a pancake of light coming in from the side of this image to illuminate the sample. That is going to illuminate only one plane, so there's no background from above or below the plane. I could list all the pioneers here and so on, but I want to point out a difficulty with many light sheet designs. Think of that cell, and here's our light sheet coming in from the side, so think of a plane that's slicing through the cell. You have a problem if you want to get down close to the cover slip, when you want to look at the bottom of the cell, because this corner of the chamber that's holding the cells will badly distort the light sheet. So we are solving this problem with a trivial solution, a simple solution: just tilt the light sheet, maybe by only 10 degrees. And you might say, well, how's that going to solve the problem?
Well, you might think this produces difficulties, because there are molecules out of focus above and below the focal plane, but because our point spread functions work over that long Z range, that is not a difficulty at all. So that tilted light sheet lets us, in a very simple way, get light sheet performance if we combine it with our special point spread function. This just shows how simple the microscope is. There's just one cylindrical lens making the light sheet that goes through the sample. Because we've tilted it, the beam now goes through the flat part of the side of the sample cell, so it is not distorted. And then in the collection path there's what's called a 4f optical processing system, well known in optical processing, to produce a Fourier plane, which is where the phase masks are placed. We can either use the double-helix phase mask, or another one I haven't mentioned yet, called a tetrapod, which is a newer design that works over a longer range: 6 microns, 10 microns, even 20 microns. We're using that for a special reason I'll talk about in a moment. So anyway, this so-called TILT3D microscope is a simple way to get the 3D information from a cell. And here's an example of mitochondria measured in the cell by blinking, by so-called STORM, molecules that turn on and off. But let me make sure I emphasize how this really works. Think of the experiment in the following way. Suppose we place the light sheet here at this lower position. Remember, there are molecules that are going to be pumped that are above and below the focal plane. But that's no problem in the detection path, because our double-helix response will tell us what their Z is by the angle between the spots. So we get Z from the PSF, the point spread function, not from the precise position of the light sheet.
Once you've finished with those molecules, you can go to the next position of the light sheet, and you get the same sort of images for molecules at that position. So how do you patch all these different positions together? You do that with the other point spread function, the one that works over a longer range, the so-called tetrapod, because we can use it to image a bright bead that's stuck at the cover slip. Now, no matter what the focal plane position is, we can tell how far we are above the cover slip just by looking at the images of the tetrapod. So it's a long-range PSF that gives the connection between the different double-helix slices in this microscope. It beautifully uses these interesting behaviors of these point spread functions. So Anna-Karin Gustavsson, the postdoc who's been working on this, and who will soon be an assistant professor at Rice, applied it to the nuclear lamina, which is a fabric of proteins close to the inside of the nuclear membrane. By imaging lamin B1 and doing blinking single-molecule STORM microscopy, but with all of this 3D tilted light sheet, she was able to obtain these images. Let me just emphasize that the double helix is used for each plane and then the tetrapod is used for the long range. This is something that you can get from Double Helix Optics, and my disclaimer is that I'm on the advisory board of Double Helix Optics. But these data on the nuclear lamina in three dimensions are really quite beautiful. You can see an intranuclear channel inside the nucleus, running across the nucleus. Those were known before, but now we're observing them with super-resolution microscopy in 3D over a long range. So again, I want to say that this is our particular approach to 3D. Other people use other techniques and different kinds of imaging, but this is just to show what can be done with these point spread functions.
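The bead-based stitching just described can be pictured with a tiny sketch. The sign convention here, with the PSF reporting z relative to the current focal plane and the fiducial bead fixed at the coverslip at z = 0, is my assumption for illustration.

```python
def to_global_z(z_rel, bead_z_rel):
    """Convert a molecule's z (measured relative to the current focal
    plane, e.g. from the double-helix PSF) into a coverslip-referenced z,
    using the apparent z of a fiducial bead at the coverslip read out with
    a long-range (tetrapod-style) PSF.  Since the bead's true z is 0, the
    focal plane sits at -bead_z_rel, so global z = z_rel - bead_z_rel."""
    return z_rel - bead_z_rel

# Example: with the focal plane 1.0 um above the coverslip, the bead
# appears at z_rel = -1.0, so a molecule seen at z_rel = +0.5 is really
# 1.5 um above the coverslip.
```

The same offset places every localization from each light-sheet position into one common z frame, which is how the separate double-helix slices get connected.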
So that's mitochondria, and here's the nuclear lamina. And now this is applying it to the glycocalyx again. I mentioned it briefly before, but here I'm showing these tubules in 3D above the surface of a cell using this TILT3D microscope. Okay, so I now want to switch gears one more time. We've been talking a lot about imaging and microscopy and applications, but I haven't really gone deeply into every application, and I apologize for that; it always takes a long time to explain the biology, the setup, for any one given problem. Let's talk about something else that's very new, that just came out recently: a neural net that we've used to solve another kind of problem for our kinds of imaging. This is the work of Leonhard Möckl, a postdoc in my lab, and the idea is to think hard about this estimation problem of where the molecule is. So here's that image of a single molecule I showed before, as a 2D camera-like representation. And in one dimension, you can see the pixels where the molecule is located, but all of this around it on both sides is what we call the background. This is not something coming from the detector alone; this is light coming from the sample, and it is typical in all of our measurements. We typically work to have enough sensitivity to be background-limited. If you want to reduce the background, you make the best possible samples you can, but there's always going to be some source, for example autofluorescence or other sources of fluorescence, that might give you a background. And you want to estimate the position of the molecule in the presence of that background. So that means that the point spread function itself in our experiments is partly contaminated by background. Even though you have a known model for the underlying shape of the point spread function, the fitting has to be done carefully so that you're not confused by the background.
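To see why this matters, here is a one-dimensional toy: a Gaussian spot sitting on a structured, sloping background. Subtracting only a constant leaves a residual gradient that biases the fitted position, while subtracting the true structured background, standing in for what a background estimator would predict, recovers the centroid. All numbers are made up for illustration.

```python
import numpy as np

x = np.arange(21, dtype=float)
true_center = 10.0
spot = 100.0 * np.exp(-0.5 * ((x - true_center) / 1.5) ** 2)  # the "PSF"
background = 5.0 * x                                          # structured bg
image = spot + background

def centroid(signal):
    """Center of mass of a (background-subtracted) 1D signal."""
    s = np.clip(signal, 0.0, None)
    return float(np.sum(x * s) / np.sum(s))

# Constant-background model: subtract the minimum value in the window.
est_const = centroid(image - image.min())
# Structured-background model: subtract the actual background shape
# (a stand-in for a learned background estimate).
est_struct = centroid(image - background)

bias_const = abs(est_const - true_center)   # badly biased toward high x
bias_struct = abs(est_struct - true_center) # essentially unbiased
```

The constant model misplaces this spot by a couple of pixels, which at a 100 nm pixel size would be a huge localization error; the structured model lands right on the true center.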
So the background itself can distort the positions of the molecules, which you want to get as precisely as possible. How do we correctly account for background fluorescence? Let's think of this in several different ways. First of all, the PSF shapes are the molecule's emission alone, and the background shapes are coming from whatever else. Those two combined can cause a problem. So this new neural net, called BGnet, looks at the problem in terms of these two pieces: let's say good data on the left, the point spread functions, and bad data on the right, the background shapes. Since we know what's going on on the left, we know the point spread function: we know theoretically from vectorial diffraction theory what it should be, including phase retrieval to quantify it, so we can calculate what the shapes should be for molecules at different positions, for different distances from focus or from the cover slip. All of that is known and can be provided to the net. For the background, we challenge the net with a bunch of simulated backgrounds generated from something called Perlin noise. The net is trained to figure out the difference between the two and give us back the background from a single image. In this way of doing it, background estimation is treated as a problem in its own right. So, to make sure you understand the workflow: we train on known PSFs with simulated background and ask the net, given an image, to report the background alone. Then you can take that background and subtract it from the measurement to give you just the PSF for fitting purposes. This is done with a U-Net; I'm not going to go through all the details, but it's one of the networks that's been used for a number of different image-processing applications.
It's a deep neural network trained on many, many examples of point spread functions with a lot of backgrounds. Let me emphasize that this Perlin noise is pretty convenient, because you can simply use the Perlin noise rule to compute random backgrounds. But notice that they cover many, many different spatial frequencies. That's the really important point here. We've solved the problem not just for low spatial frequencies in the region of interest, but also for high spatial frequencies. Other methods of removing background address, let's say, only low spatial frequencies. But these fancy PSFs have much more detail at high spatial frequencies, and that's why we have to be able to remove the background at those high spatial frequencies too. So let's talk about the performance of this net for an open aperture, the standard PSF. This is what a molecule is supposed to look like, but in a real measurement you detect it with background. Of course, this is a simulation, just to show you how well it works: this is the true background that was added to the PSF to make that image, and the net extracts this as the predicted background. You can see how close the predicted background is to the background that was added to the image. The net gets only this, it gives you this back, and you see it's very close to the true background; the residual is very small. So if I subtract the true background or the predicted background from the image, I see basically only the PSF. It also works for the more complex point spread functions, the double helix and the tetrapod. In the case of the double helix, just look here: here's the background for this particular test, here's what the net produced back, and here's the image minus the predicted background. And for the case of the tetrapod, the same thing: here's the corrupted PSF plus the background that was added to it.
Here's what the net gave us back. So you can see that this does a nice job of removing structured background. To demonstrate it one more time and show how this works in a real sample, this is an imaging problem where we're looking at cells, and I'm showing you one frame from the movie of single molecules. You see that there's very bad structured background in this image; individual molecule PSFs are corrupted by that background in different crazy ways. So if we use the net, we can analyze images of microtubules that come from the super-resolution measurements. This one takes the single-molecule data and uses the most common method of removing background, a constant background: just assume it's constant, fit with a PSF plus a constant, and subtract the constant. That's what you get from that method, the old method. And here is the result using BGnet to subtract the predicted background. You can't see the difference unless, of course, you look much more closely. In these different regions of interest, you see that BGnet does a far better job of showing you what the microtubule images look like compared to using a constant background as the model for the fitting. The same is true for all these other regions of interest. So we think it's an important step forward to be able to estimate structured background: important for this particular experiment, but possibly useful for many other areas of science. So I've given you sort of a whirlwind of many different things: history, 30-plus years, lots of different kinds of measurements, both at low temperatures and in cells, the biology all combined together, and now connecting that back to some engineering kinds of considerations of images, and even neural nets for processing. So I want to thank my past students and postdocs and collaborators. Here's the current team, and some of the people you might see; we call ourselves the guacamole team. What's a guacamole?
Well, you know, it's one over Avogadro's number of moles; that's what we call a single molecule. Thank you for the polite laughter. I want to thank our funding agencies, which have supported this work, and we have a little "no ensemble averaging" logo here that we sometimes use. Once again, it's a great pleasure to be here. Thank you very much for your attention. You mentioned in the beginning that you measure the lifetime of a single molecule; can you elaborate a little more on that? It may be very interesting. Sure. When we have a single molecule, we can pump it with a pulsed laser, and if you then measure the time delay between the pump and the detected photon, you can produce a histogram of those time delays, which gives you the excited-state lifetime. That is done in other measurements, not so much in the ones I've shown, but we do it regularly for single molecules in solution that we've trapped with our ABEL trap, our Anti-Brownian ELectrokinetic trap, a completely different sort of device that I didn't describe. We have a machine that lets us suppress Brownian motion for single molecules in solution, even down to one fluorophore. In terms of the science, with those objects we're almost always looking at photosynthetic proteins, which are pigment-protein complexes with a large number of fluorophores coupled by energy transfer, and so the lifetime is a very nice reporter of what's going on in a complex emitter. Is that what you were after? Yes. So is it possible to track the lifetime of a single fluorophore? You mentioned that multiple fluorophores form a complex. Oh yes, you can still do it with one fluorophore. The thing is, you mostly just keep getting the same answer, but even a single fluorophore, when we trapped it, showed changes in brightness and some changes in lifetime.
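The delay-histogram idea in that answer can be sketched simply: for an ideal mono-exponential decay, the maximum-likelihood estimate of the excited-state lifetime is just the mean of the pump-to-photon delays. The 2.5 ns lifetime simulated below is an arbitrary illustrative value.

```python
import random

def estimate_lifetime_ns(delays):
    """For a mono-exponential decay, the maximum-likelihood lifetime
    estimate is the mean pump-to-photon delay.  (A real TCSPC analysis
    would also handle the instrument response function and background;
    this is the idealized version.)"""
    return sum(delays) / len(delays)

random.seed(0)
true_tau = 2.5  # ns, made-up value for illustration
delays = [random.expovariate(1.0 / true_tau) for _ in range(200_000)]
tau_hat = estimate_lifetime_ns(delays)
```

In practice one histograms the delays and fits the decay, but this mean-delay shortcut shows why each detected photon carries lifetime information even from a single emitter.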
So this was the work of Quan Wang, one of my wonderful postdocs, who did work in that particular regime. I had a question about live cells: how accurate are the measurements if one wants to go into the organelles, for example phagolysosomes and lysosomes, and look at what kinds of reactions are happening? Is that doable today? Yes. In every case, when you think about how accurate it's going to be, you're always answering that question based on trade-offs. One of the things that I didn't show you, but I'll show very briefly now without going through all the biology... what happened here? Where'd it go? I had it a moment ago... There. These are live cells, and there are the bacterial cells again, and there's a specific protein that's been labeled that's part of a regulatory pathway. All the details I'll leave out, but this is a situation where we're observing the motions of individual copies inside the cell, at about a 20-millisecond time interval. You can see by the coloring of these trajectories the motion as a function of time, and these points are placed at equal time intervals. This particular molecule is a transmembrane protein, so you see it in the membrane, and you have to show three different projections since it's in three dimensions. The motion is slower away from the pole but faster close to the pole... sorry, I had it backwards: faster away from the pole and much slower in the pole. So it slows down in this fascinating polar region. That's one example of one of these key proteins. Here's another one that's cytoplasmic: fast in the cytoplasm, very slow when it's moving inside the pole. This paper just appeared in Nature Microbiology; it's a study of both the motions and the positions. And now the most important part of the answer: if I'm looking at dynamics, I tend to like to go to the single-molecule regime. Now, there are some people, in other labs, who image very, very quickly to try to do the full thing, blinking and recording, blinking and recording, many, many images as fast as possible, and that's a beautiful tour-de-force kind of measurement that has gotten even to video frame rates with detailed structure. But here I'm really trying to look at individuals and see how they behave relative to some other structure. So that's a regime where you will trade off: if you want much more precision, then you have to wait a bit, or force the photons to come out faster by turning the laser intensity up, and use that finite set of total photons in a slightly different way. So anyway, sorry for the long answer, but milliseconds are easy, generally done, and there are a number of processes where milliseconds can already tell you that interesting changes in motion are occurring.
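The kind of trajectory analysis behind those "faster away from the pole, slower in the pole" statements can be sketched as a mean-squared-displacement estimate. The 20 ms frame interval matches the talk, but the trajectory and diffusion coefficient below are simulated with made-up values.

```python
import random

DT = 0.020  # s, frame interval (20 ms, as in the talk)

def msd_one_step(track):
    """Mean-squared displacement at a lag of one frame for a 2D track
    given as a list of (x, y) positions."""
    steps = [(track[i + 1][0] - track[i][0]) ** 2 +
             (track[i + 1][1] - track[i][1]) ** 2
             for i in range(len(track) - 1)]
    return sum(steps) / len(steps)

def diffusion_coefficient(track):
    """For free 2D Brownian motion, MSD(dt) = 4 * D * dt."""
    return msd_one_step(track) / (4.0 * DT)

# Simulate a free 2D random walk with a known D (um^2/s, made-up value).
random.seed(1)
D_true = 0.05
sigma = (2.0 * D_true * DT) ** 0.5  # per-axis step standard deviation
x = y = 0.0
track = [(x, y)]
for _ in range(50_000):
    x += random.gauss(0.0, sigma)
    y += random.gauss(0.0, sigma)
    track.append((x, y))
D_hat = diffusion_coefficient(track)
```

Comparing such per-region D estimates, for example near the pole versus away from it, is exactly the sort of analysis that reveals the slowdown in the polar microenvironment.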