Good afternoon. Now we start our final session, on mobile phone based fluorescence microscopy, sensing and diagnostics, by Zachary Ballard from the University of California. He will talk for around one hour, with 30 minutes for questions and comments. After that we will have a very interesting demonstration here with mobile phones; we have five prototypes here. You will have the possibility to come and see them as we are working and to ask questions. You can start. Hello, can everyone hear me? My name is Zachary Ballard. I am a PhD student. I'm not a professor. I'm like many of you in the room, still in the learning process. I work in the lab of Aydogan Ozcan, who is a professor at UCLA in the electrical engineering and bioengineering departments. He has asked me today to come and speak on his behalf about some of the very exciting things we are doing in our lab, specifically to do with fluorescence microscopy. It's a great pleasure and a privilege to be here and speak in front of you all. I hope you enjoy the talk. For those who attended the lecture this morning, Professor Diaspro talked about fluorescence microscopy and its cutting edge, specifically super-resolution. Systems such as STED and STORM and the confocal microscope depicted here have given us invaluable information about how biological systems operate and what they look like, and will continue into the future to give us incredible insight into the world around us. However, this is a $100,000 to $200,000 U.S. dollar piece of equipment. It is relegated to a laboratory, a well-funded laboratory. There's only a handful of STED microscopes in the world, partly because it's a new technique, but the other part is that they're difficult to engineer, they're bulky, and they require very expensive optical components. So in our lab we think a lot about how we can take the principles of fluorescence microscopy and design systems that are compact, small, and cost-effective. Something that might interface with your mobile phone. So before I talk about fluorescence microscopy and a couple of the devices I have up here, I would first like to give you an introduction to our lab and how we ended up where we are with fluorescent mobile microscopy. So this is our lab. It's a very big lab. We have about 20 core members split between postdocs and PhD candidates like myself. We work with over 30 undergraduates at UCLA in California, and at any given time we have a dozen collaborations. We are in the electrical engineering department, but we have biochemists, biologists, and computer scientists working at UCLA and all over the world in collaboration with our work. Our main research area is lens-free on-chip imaging, which I'll talk about briefly at the beginning of the talk, and then also smartphone-based microscopes and point-of-care diagnostics, which will be the majority of the talk. These devices are fluorescence microscopes, and recently we've been exploring areas of wearable sensing as well as computational imaging and sensing using machine learning. So in our lab we think about this graph a lot. Many of you are probably familiar with what this graph says, the story of this graph. This is Moore's Law. Moore's Law states basically that the number of transistors you can fit on a chip in a given area doubles about every two years.
We all see this; we've all lived long enough to notice that mobile phones are becoming more powerful, faster, and smaller. The same sort of Moore's Law can be seen with the pixel count on your camera, and I say your camera because I guarantee almost everybody in this room has a camera with them right now. Every mobile phone made, every smartphone made, comes with a camera. It's basically a necessity at this point for consumer electronics, and Moore's Law can be seen in this piece of technology. I think the new iPhone has 16 megapixels, something like this, 16 million pixels, which is amazing. And this is a consumer device. You do not need to run a well-funded lab or be a part of a well-funded lab to own an iPhone. So we think about how we can leverage this technology, how we can leverage the low-cost nature of computation these days, to enhance and enable new systems for microscopy. This slide tells a similar story: that mobile phones are becoming almost as powerful as a desktop computer. We have computers you can wear on your wrist now with the Apple Watch, and the Google Glass. This technology is amazing. It's a trend that will continue into the foreseeable future, and we try to think about how to leverage this in our work. Not only is the technology amazing, the network is huge. 75% of the world has cell phone coverage. There's about one cell phone for every person on Earth, and this technology can be found in developing countries. It is not only for the United States, the UK, and Europe. This technology is completely ubiquitous, found everywhere. And with the network, it provides many different opportunities. You can take images and send them to a remote pathologist, who can then make a diagnosis and send you back results. It is changing the way we do medicine, and it's changing the way we do computation with the cloud, cloud connectivity, being able to process large amounts of data remotely and have the answer sent back to your phone. So these are some examples of microscopes that we build in our lab. This is actually the very first prototype built in our lab, about eight years ago, and it's roughly the size of a quarter or a half dollar. This uses lens-free on-chip imaging, which I'll talk about briefly at the beginning of the talk. And then we also have tomographic microscopes for 3D imaging, as well as fluorescence microscopes, which will be the majority of the talk. We have also looked into sensing equipment for detecting protein concentration, as well as heavy metal detectors. This specific device up here is integrated with a mobile phone, and it can actually sense different levels of mercury in any given water sample. We took this along the beaches of Southern California and measured the mercury contamination as we went from San Diego to Los Angeles. The idea is you could do this for every season and create spatio-temporal maps that tell you how contamination may be spreading over time and over a given area. Here we have an E. coli sensor, as well as an RDT reader for rapid diagnostic tests, which are used very heavily in clinics, as well as a device that actually counts Giardia cysts in a given water sample. So these are some examples of microscopes that we designed in our lab. We also have been working with Google Glass for hands-free applications, as well as wearable sensing. A Fitbit is a wearable optical sensor; it measures your heart rate using LEDs.
We're trying to take that idea and that technology and push it forward within fluorescence microscopy as well as other modalities. Before I talk about fluorescence microscopy, I'd like to briefly introduce holography, or lens-free on-chip imaging. This is actually the foundation of our lab. This is how we started. The basic principle behind holography is to create partially coherent light, which can be created with incoherent light passed through an aperture. This partially coherent light can then create a diffraction pattern through the interference of a reference beam and the light scattered from a sample. We can then record this diffraction pattern at a detector or an image sensor and computationally reconstruct what the image looks like at the object plane. This is a really interesting and exciting modality for microscopy because it's incredibly compact. It's incredibly low cost, needing only an image sensor; the illumination can be an LED, which costs a matter of cents. The most important thing is that the field of view for this type of modality is only limited by the size of the image sensor, which is amazing when you think about it. Most conventional benchtop optical microscopes, as they go to higher magnification and higher resolution, have a smaller field of view. That trade-off is not present in on-chip imaging, and we think about how to use this to our advantage in the clinic and in our work. This is an example of an image taken with a cell phone. Here are diffraction patterns, so these are holograms of red blood cells. We can take such an image and, by using a back-propagation equation, actually reconstruct what the object looks like at the object plane. Here is the reconstruction as compared to a 10x microscope objective. Again, this is just running through the framework. Here you have the scattered light from the object, denoted by the black dotted line, interfering with the reference light, denoted by the red dotted line. We can then take this hologram, which contains both amplitude and phase information, and backpropagate the interfering waves to reconstruct a red blood cell. Here are more images, more examples of holography in action. It's curious to look at the holograms on the left and note that they don't look like much. No one can look at this and tell me that it's a UCLA logo until it's passed through the back-propagation equation. This enables us to create very compact systems. We've taken this work further with a technique called pixel super-resolution. When we say pixel super-resolution, we do not mean beating the diffraction limit like STED or STORM do. What we mean is taking an image sensor with a given pixel size and using computation to decrease that pixel size and increase the effective pixel count. We can do this by designing a system as depicted here, where we have a row of fibers that illuminate an object directly above an image sensor. We then capture subsequent images of the same object that contain sub-pixel-shifted information. We can then combine this information to achieve a higher-resolution hologram. This is a kind of silly video, but it illustrates the point. If you take a number of sub-pixel-shifted images, you notice this image is jittering around. You can then combine that information using an optimization framework to actually achieve a higher-resolution image.
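To make the reconstruction step concrete, here is a minimal sketch of hologram back-propagation using the angular spectrum method, written in Python with NumPy. The wavelength, pixel size, and sensor-to-object distance are illustrative assumptions rather than the lab's actual parameters, and this naive single-shot reconstruction still suffers from the twin-image artifact that comes up at the end of the session.

```python
# A minimal angular-spectrum back-propagation sketch. All optical
# parameters below are assumed for illustration.
import numpy as np

def backpropagate(hologram, wavelength, pixel_size, z):
    """Numerically propagate a recorded intensity pattern by a
    distance z (negative z propagates back toward the object)."""
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)      # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)

    # Angular-spectrum transfer function; evanescent components cut off.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0

    field = np.sqrt(hologram.astype(float))    # amplitude estimate from intensity
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Made-up example: green illumination, 1.12 um pixels, 400 um gap.
holo = np.random.rand(512, 512)                # stand-in for a real hologram
obj = backpropagate(holo, 530e-9, 1.12e-6, -400e-6)
amplitude, phase = np.abs(obj), np.angle(obj)  # both are recovered
```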
Here is another example of the improvement we get. Over here is a conventional in-line hologram taken with a camera that has a 2.2 micron pixel size and 5 megapixels, so this is an older camera on a phone, perhaps. By using the pixel super-resolution technique, we can now actually resolve these fringes far out here that are nowhere to be seen in the normal in-line hologram. This reduces the effective pixel size to under 0.4 microns and increases our effective pixel count to 180 megapixels. More examples of pixel super-resolution: here we've resolved a 300 nanometer grating using pixel super-resolution, and obviously this technique is very valuable for imaging biological samples as well. That's what I want to come back to. Our lab is very applied. We're always thinking about addressing a need, a need that faces a specific region, a specific part of the world, a need that can be addressed with appropriate technology. This is an example of a blood smear, and we can actually use our in-line holograms and our pixel super-resolution techniques to image malaria-infected cells and differentiate them from healthy red blood cells. This is one example of the work that we can do with holography. Another very exciting work, published in 2014, so a little while ago, was the imaging of tissue samples. This is a very challenging field because tissue samples are very dense. They're not sparse objects; the cells are overlapping and right next to each other. This is what a hologram of a tissue sample looks like. As you can see here, there's no clean Airy disc pattern. There's no diffraction pattern that can be made out. It's just a mess. We can actually use pixel super-resolution along with a multi-height phase retrieval algorithm to image very dense samples such as the breast tissue depicted over here. And again, this shows the power of on-chip imaging. Here is the field of view of a 40x microscope objective, which can give you a very clean, very high-resolution image. These are the objectives a pathologist would use to actually diagnose cancer from a biopsy. However, the on-chip imaging field of view is only limited by the size of the image sensor, which in this case is the entire size of this image. This is incredibly useful for pathologists because now we can, in one image capture, digitize an entire field of view and provide that to a pathologist all at once, eliminating the need for a scanning microscope, eliminating the need for any sort of manual scanning, and opening the doors for computational analysis of tissue samples. We can add color. A lot of the work in the lab is aimed at adding color to images to match exactly that of normal pathology stains. Here's another example of a pap smear and the color correction that we've performed with our systems. Another way to improve these systems is a method that we developed in our lab called nanolenses. Nanolenses are a minimum-energy surface formed by polyethylene glycol that condenses around sub-diffraction-limit particles. What happens is we can use polyethylene glycol vapor to condense these lenses around particles as small as 40 nanometers in some cases. This effectively increases the scattering cross-section of the particle, and then we can resolve that the particle is there using pixel super-resolution and our on-chip imaging. This enables us to detect the presence of objects you would not be able to see with a benchtop microscope.
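Returning to the sub-pixel shifting idea for a moment, the sketch below is a deliberately naive shift-and-add version of pixel super-resolution, assuming the sub-pixel shifts are already known. The lab's actual pipeline solves an optimization problem; this toy version only illustrates how shift diversity increases the effective pixel count.

```python
# Toy shift-and-add pixel super-resolution: drop each low-resolution
# frame onto a finer grid according to its known sub-pixel shift,
# then average. Real pipelines solve an optimization problem instead.
import numpy as np

def shift_and_add(frames, shifts, factor):
    """frames: list of (H, W) arrays of the same scene.
    shifts: known (dy, dx) shifts in low-res pixel units, each in [0, 1).
    factor: integer upsampling factor of the output grid."""
    H, W = frames[0].shape
    acc = np.zeros((H * factor, W * factor))
    hits = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        oy = int(round(dy * factor)) % factor   # nearest fine-grid offset
        ox = int(round(dx * factor)) % factor
        acc[oy::factor, ox::factor] += frame
        hits[oy::factor, ox::factor] += 1
    filled = hits > 0
    acc[filled] /= hits[filled]                 # average overlapping frames
    return acc  # fine pixels never hit stay zero; real code interpolates
```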
Again, we can do this using a cost-effective and field-portable device, depicted here, that we've constructed and validated for measuring particles down to 48 nanometers. I've talked briefly about on-chip imaging, lens-free on-chip holography, but the main point of the talk today is fluorescence microscopy. I wanted to start at the beginning with on-chip imaging to show you what we can do with it, but there are many things we can't do with it. For the things that we can't do, we employ fluorescence. I'm going to talk about two projects in our lab. They are not my work, so I will do my best to teach you this work from other members of my lab and answer your questions at the end of the talk. The first device I will talk about is a smartphone-based Giardia analyzer. This device actually counts Giardia cysts in a large volume of water. It is right here, and afterwards, in the next session, you can come and actually look at it, look inside, and I can give a quick demo. This work was done by Dr. Hatice Koydemir, who was originally supposed to give this talk but could not make it. The second device I will talk about is a smartphone-based fluorescence microscope for imaging, sizing, and sequencing DNA molecules. So the first device targets Giardia lamblia. Giardia lamblia is one of the most common water-borne pathogens worldwide. There are over 200 million cases of Giardia infection a year. It is a cyst, essentially an egg, that you can consume by eating fruit that is not well washed or by drinking water that is contaminated with Giardia. The Giardia cysts can then enter your gut and thrive and multiply. This can cause stomach pains, diarrhea, and in some cases death. It's everywhere in the world. The United States, in fact, has Giardia in rural areas of almost every state. So this is a problem that doesn't just affect developing countries without access to potable water. Giardia is found absolutely everywhere. This is an image of a fluorescently tagged Giardia cyst. It's about 5 microns, so it's not under the diffraction limit; you can see it on a benchtop microscope. But what makes Giardia so hard to find is that it's typically in a water sample which has many other particulates in it. So when you are imaging that under a microscope, you often can't differentiate the Giardia from something that might be dust or something that may not be harmful. Therefore, it is necessary to use fluorescence to tag Giardia such that you can differentiate it from other particulates in the water. Conventionally, Giardia can be found in a water sample by, again, tagging it with a fluorophore. You can then take this water sample, filter it, maybe centrifuge it to concentrate the particulates down. You can then spread the water sample on a glass slide and use a conventional benchtop microscope to scan and count Giardia cysts in the water sample. This type of diagnosis is very time-consuming. It also can only take a very low volume of water. You can imagine spreading a droplet over a surface. Now imagine spreading 10 milliliters of water over a surface and imaging every square millimeter of that sample. It would take you days. So we are interested in developing a system that can take a large volume of water from a given river or lake or stream. And then we want to label Giardia, and then we want to detect it optically in a reasonable time frame.
So, under an hour. We want to be able to do this with a robust device that can be taken to the stream, to the lake, so that we can find Giardia at the point of care. So this is the Giardia microscope. It is a fluorescence microscope. Again, I will demo this in the second session later this afternoon. It weighs about 200 grams, excluding the phone. It has a custom-developed app and algorithm for counting the Giardia cysts, and it was made entirely in our lab using just a 3D printer and off-the-shelf optical components. So here is the design in a little more depth. We excite the fluorophores that are tagged to Giardia using blue LEDs. These are 470 nanometer LEDs. They're very cheap. We then shape the excitation light using bandpass excitation filters, which are actually the most expensive optical hardware in the system. These LEDs encircle the sample and illuminate it evenly. Then the sample, which is placed here in this cassette, the tagged Giardia, fluoresces, and the emission is captured by an external lens, which in this case is a demagnification lens. The light then passes through a long-pass filter and onto the image sensor embedded in the camera. This is just an illustration of the 3D printing process. We get about 100 micron resolution with our 3D printer, and that's the practical answer; the company will claim higher resolution, but 100 microns is a huge benefit to us. We can rapid-prototype these devices every day in our lab for very low cost, and we can make very sophisticated and intricate optomechanical parts with a 3D printer. This is just another slide illustrating the principles of the device. We have the LED excitation filters, which was actually a single excitation filter cut up with a glass cutter and placed in front of the LEDs. This is where the tagged Giardia sits, fluorescing down onto the image sensor. The phone we're using is a Nokia Lumia phone. It's a Windows phone. It has a pixel size of 1.12 microns, which is very standard nowadays in mobile phones, and it's actually a 40 megapixel camera, so it's an incredibly sophisticated camera on a relatively cheap phone. This is the cassette where we actually load the water sample with Giardia. The Giardia, again, is tagged with a fluorophore. The sample is then dropped into this cassette, and here we have absorbent pads that absorb all the water. It can absorb 10 to 20 milliliters, depending on how many absorbent pads we have. Here we have a mesh filter, a 5 micron mesh filter. This does not allow Giardia cysts to pass through, so the tagged Giardia cysts remain on the surface, whereas the fluorescent molecules that have not attached to Giardia go into the absorbent pads along with the rest of the liquid. And then here we have an image taken with our fluorescence microscope. Don't you think it's very pretty? No? Nobody? No, I think it's very ugly looking, actually. It doesn't look like much. And that's because we have a demagnification lens here. We actually are demagnifying the sample so that we can achieve a very large field of view. And again, I come back to this very large field of view. It's incredibly important for measuring a large volume of water.
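To get a feel for why demagnification buys so much throughput, here is a back-of-the-envelope calculation in Python. The sensor dimensions, demagnification factor, and per-field area of a 4x objective are all illustrative assumptions, not the device's published specifications.

```python
# Rough field-of-view arithmetic for a demagnifying reader.
# All numbers below are illustrative assumptions.
sensor_w_mm, sensor_h_mm = 7.0, 5.0   # assumed image sensor dimensions
demag = 2.0                            # assumed demagnification factor

fov_area = (sensor_w_mm * demag) * (sensor_h_mm * demag)
print(f"one snapshot covers about {fov_area:.0f} mm^2 of the cassette")

# Assume a 4x benchtop objective sees roughly a 1.4 mm diameter
# field, i.e. about 1.5 mm^2 per field, with no stitching overlap.
per_field_mm2 = 1.5
print(f"equivalent to ~{fov_area / per_field_mm2:.0f} stitched 4x fields")
```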
If you were to take this cassette, which has been engineered, and put it onto a benchtop microscope using a 4x objective, which is about the standard smallest-magnification, largest-field-of-view objective on a benchtop microscope, you would have to take 26 images and stitch them together to count the number of Giardia cysts in a given water sample. Here we can take it in one snapshot with the demagnification lens and the cell phone. So now I'll talk a little bit about how the sample is prepared. It's relatively simple. We have a sample of water. We add an antibody conjugated with a fluorophore. This antibody specifically targets the cyst wall. As Professor Diaspro said today, a lot of biologists just assume that when you tag something, it works 100% of the time, and we are very guilty of that. We do have a very sophisticated machine learning algorithm that helps us differentiate tagged Giardia from things that are fluorescing but aren't Giardia, and I will talk about that later. But for the purposes of this work, we assume that the antibodies do their work and attach only to the Giardia cysts. After we add the fluorophore-conjugated antibody to the water sample, we cover it to prevent photobleaching from ambient light, and we allow it to sit for 30 to 40 minutes. This is an example of the workflow here: collecting the sample, adding the fluorescent tag, waiting, then injecting the fluorescently tagged Giardia water sample into the cassette, loading the cassette, and imaging. We also add a counterstain to increase our signal-to-noise ratio, as well as a solution which prevents fading of the fluorophore. So here's an example, again, of an image taken with the phone. It's maybe hard to see what's going on in this large image, so we have these zoomed-in versions here. This is taken on the phone; this is a benchtop comparison, so you can see here that we're sacrificing quite a bit of resolution. We really can't make out any features of the Giardia cysts in this sample. But, again, we leverage computation to count the Giardia cysts at the given low resolution. The important point here is that they're differentiable from each other using our camera phone. Here is another microscope comparison showing the Giardia cysts. So now we'll talk about the software and the computation. Again, this is very necessary for getting this device to work. It would obviously be ridiculous to hire somebody, maybe an undergraduate, to sit and count all the Giardia cysts in a given image. There can be hundreds, thousands. So we've developed an app, loaded onto the mobile phone, which does this counting of the Giardia cysts for us. The workflow for the software is like this. We open up the application. We take an image. We then upload this image to our servers, which are based in our lab. The servers take the raw image, convert it to TIFF, crop it, and then run the machine learning algorithm, which I will talk about in one minute, calculate the total Giardia count, and send it back to the mobile phone. It also tags the GPS location as well as the time and the date. So I'm not sure how many of you are familiar with machine learning, but it's a very hot topic right now. It's kind of the combination of lots of fields that have been around for a long time. But the idea of machine learning is that you have an algorithm that performs a task.
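As a schematic of that round trip, here is what the client side might look like, sketched in Python for clarity (the actual app runs on a Windows phone). The endpoint URL, field names, and response key are all hypothetical.

```python
# Schematic client side of the phone-to-server round trip.
# The endpoint URL, field names, and response key are hypothetical.
import json
import time
import requests

def submit_image(path, lat, lon,
                 server="https://example-lab-server.ucla.edu/analyze"):
    meta = {"lat": lat, "lon": lon, "timestamp": time.time()}  # GPS + time tags
    with open(path, "rb") as f:
        # The raw sensor image is uploaded as-is; TIFF conversion,
        # cropping, and the machine-learning count happen server-side.
        resp = requests.post(server, files={"image": f},
                             data={"meta": json.dumps(meta)}, timeout=120)
    resp.raise_for_status()
    return resp.json()["giardia_count"]  # count returned to the phone

# Hypothetical usage: count = submit_image("capture.dng", 34.07, -118.44)
```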
For instance, our task here is counting Giardia cysts and differentiating them from things that are not Giardia, like dust. But in order for the machine learning algorithm to work, it has to learn. So it learns on data previously taken with the system. For this work, we actually have images of 30,000 fluorescently tagged cysts taken with our microscope, as well as 100,000 dust particles. We've labeled these using a gold-standard optical benchtop microscope, so that we know exactly what we're looking at. And then we've taken this training data and extracted features from each cyst. When I say a feature, I mean maximum brightness, maybe the width of the cyst, the length of the cyst, the ratio between the width and the length of the cyst. We have 71 of these features that we then input into our machine learning algorithm, and the machine learning algorithm will output a count. We did a survey of different algorithms and found that with this bagged trees algorithm, which is a bootstrap aggregation algorithm (we can talk more in the demo session about the specifics), we are able, with those 71 features and that training data, to get a very high sensitivity, with a 95% success rate of differentiating Giardia from other things that are fluorescing but are maybe not Giardia. The total limit of detection of the system is 12 cysts per milliliter. We obviously would like this to be one cyst per milliliter, but the higher limit of detection comes from the fact that we don't have a 100% tagging rate of the fluorophore on the Giardia, and perhaps some of the Giardia goes through the filter. Nevertheless, 12 cysts per milliliter is a very satisfactory result for a portable device, and it is actually being field-tested right now by the US military for assessing remote water sources. This is an example of analysis results coming from our machine learning algorithm. Here we have a very high density cyst count and a low density cyst count. The algorithm is robust and works in both types of scenarios. So, taken together, we've created a system that is portable. I held it up a second ago. It's very small. I brought it in my suitcase and didn't worry too much about it. 200 grams. It can take 10 to 20 milliliters of water, which is a large volume. It can do a cyst count in under an hour, with the labeling step included. It has a 67% recovery efficiency, which leads to a limit of detection of 12 cysts per milliliter. Now for the second part of the talk, I'm going to focus on a smartphone-based fluorescence microscope for DNA imaging, sizing, and sequencing. This work was pioneered by Dr. Qingshan Wei, who actually left our lab just a couple of months ago and is now a professor in the bioengineering department at North Carolina State University, so we're all very proud of him and wish him a lot of success in his work. Now, I talked about Giardia and why we need to use fluorescence to see Giardia: we need to tag it with a fluorophore to differentiate it from other particles that might be in water. DNA is a very different story. You can actually extract DNA using a centrifuge and common laboratory techniques into a very pure form, so we don't necessarily need a tag just to know that it's DNA. But it's an incredibly weak scatterer. You cannot see it with an optical microscope because its width is far below the diffraction limit.
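For anyone unfamiliar with bagging, here is a minimal stand-in in Python using scikit-learn. The feature matrix is random placeholder data; the real training set is roughly 30,000 labeled cysts and 100,000 dust particles, each described by 71 hand-crafted features.

```python
# Bootstrap-aggregated ("bagged") decision trees over per-object
# feature vectors. Placeholder data stands in for the real features.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 71))       # 71 features per detected object
y = rng.integers(0, 2, size=5000)     # 1 = Giardia cyst, 0 = fluorescent debris

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# BaggingClassifier's default base estimator is a decision tree, so
# this is a bagged-trees model with 100 bootstrap-trained trees.
clf = BaggingClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
cyst_count = int(clf.predict(X_te).sum())  # per-image count = number of positives
```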
The length can be many microns, dozens of microns, but the width is far too small to resolve with an optical microscope. So many researchers have poured a lot of time into tagging DNA and have gotten very good at it, even tagging specific sequences in the genome, which is obviously very valuable for studying genetics and studying DNA replication and mutation. These types of measurements can be done on a conventional fluorescence microscope, a confocal microscope, or STED, but there's a lot of need to do this type of imaging with a field-portable and low-cost system. The implications are rather wide-reaching. This is another instrument used for sensing DNA, distinguishing different sequences based on their length, which is relatively low cost. However, it has problems with sequences or strands that have a very low number of kilobase pairs. A lot of the motivation behind this project has to do with studying DNA replication and studying mutations. I'm not sure how many of you have heard of copy number variation, but this is a phenomenon in DNA where, when it replicates, it accidentally duplicates a specific part of the genome. This is a mutation, and it has been associated with cancers, neurological diseases, Alzheimer's, autism, schizophrenia; there's a long list. So if we had a robust and low-cost way of actually sizing a DNA sequence, we could have a really good understanding of whether copy number variation is going on in a given sample. This would be very valuable for diagnostics, for understanding the effectiveness of different medicines, and also for learning more about DNA and the replication process. So here we have created another fluorescence microscope. This microscope is specifically designed for imaging DNA, whereas the other fluorescence microscope was specifically designed for counting Giardia cysts. They're both fluorescence microscopes, but because of the application they take on very different forms. Let's see if I have... So the fluorescence microscope for DNA analysis is here. It's more or less the same size as the Giardia analyzer. And using very simple sample preparation techniques, we can actually stretch the DNA and then do a sizing measurement, where we essentially measure the length of the DNA. Here we have comparisons of a cell phone image taken with our mobile microscope and a benchtop image. We have a very wide field of view, two square millimeters. Again, this is our field of view; this is a comparison to a 100x objective on an optical benchtop microscope. So the design of the microscope is as follows. We actually employ a laser diode here. We need to excite the fluorophores at a very high rate; we need them to be very bright. This is our laser diode. This isn't a laser in the traditional form. It's actually relatively cheap compared to a pulsed laser or something like that. It is the most expensive part of the system, coming in at about $150. They are becoming very cheap now, especially if you move to lower powers. So we have a laser diode. It's incident upon our sample at a very high angle, 75 degrees. This is necessary for getting a very pure dark field and reducing the background. We then have a cover slip with the DNA that's loaded through a sliding chamber, then an external lens and an emission filter. We don't employ an excitation filter here because the bandwidth of the laser diode is narrow enough to keep the background low. We have a focusing knob on top here. Now, the sample preparation process.
I think this is the coolest part of this work, because it's amazing the results you can get with something so simple. We take a cover slip. This is an established technique, by the way. It was not pioneered by our lab, but it is utilized in our lab to great effect. You take a cover slip and you silanize it, leaving a monolayer of amine groups on top. This can be done very easily through vapor deposition; there are other methods of doing this as well. It's a very common step in wet labs for surface functionalization. We then take our silanized cover slip and drop on our fluorescently tagged DNA, three microliters using a pipette. We then take a second cover slip, put it on top of the droplet, and quickly push down with our fingertip. This creates shear forces at the boundary of the silanized cover slip and the DNA. It actually stretches the DNA, and the strands align outward from where the droplet was. This is a comparison of doing the compression technique correctly, whereas if you're a little too slow, or without quite enough finesse, the DNA can look like a bunch of jumbled yarn or something like that. This is a technique that requires some training but is very simple to implement and very low cost, not needing any sort of expensive equipment. Here we have an image from our mobile phone. This has been color-mapped, obviously. Here are 77 stitched frames from an optical microscope. Again, it comes back to this idea of field of view. Especially for studying copy number variation, it's very important to gather large statistics. The only way to gather large statistics with a small field of view would be to mechanically scan or have someone manually scan. But by using lower magnification, sacrificing some resolution, with a very high quality image sensor, we can achieve a very large field of view for gathering large statistics about the DNA sizes. So once we have images, this is how we process them. We actually average anywhere from 5 to 10 different images. Signal-to-noise ratio improves as the square root of N as you average N images, so there are diminishing returns; however, you can get a large improvement in your SNR by averaging. We found the sweet spot to be around 10 images. So we take about 10 images, each at a four-second exposure time, and then we combine and average them to create a high-SNR raw image. We add a mask to that image, and then we do a rough estimate of the length, measuring the skeleton of the DNA. We then also use the PSF of the imaging system in combination with this rough estimate to get an accurate idea of the size. We estimated the PSF of the imaging system using 100 nanometer fluorescent beads, created a sliding PSF window here, and developed an algorithm that minimizes the distance between the skeleton length and the sliding PSF window to accurately size the DNA. Comparing our sizing measurements with those done on a conventional benchtop microscope, we achieve a very close match. We're off by a bias, a negative bias, of 0.33 microns, with a standard deviation of one micron. The standard deviation is not necessarily attributable to our measurement technique but could reflect variations within the lengths of the DNA sequences measured. We also validated this measurement technique for different sizes of DNA strands. Here we have five kilobase pairs, 10 kilobase pairs, 20, 40 and 48, with their size distributions denoted below.
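Here is a sketch of the first two steps of that pipeline, frame averaging and a skeleton-based length estimate, in Python with scikit-image. The PSF-based refinement is omitted, the Otsu threshold is a stand-in for the lab's masking step, and the pixel scale is an assumed parameter.

```python
# Frame averaging plus a skeleton-based rough length estimate.
# The Otsu threshold stands in for the lab's masking step, the
# PSF-based refinement is omitted, and the pixel scale is assumed.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

def rough_dna_length_um(frames, um_per_pixel):
    avg = np.mean(frames, axis=0)        # SNR grows roughly as sqrt(N)
    mask = avg > threshold_otsu(avg)     # crude segmentation of the strand
    skel = skeletonize(mask)             # one-pixel-wide backbone
    # Counting skeleton pixels assumes a single strand; real images
    # need labeling (skimage.measure.label) to size each molecule.
    return skel.sum() * um_per_pixel
```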
So we have a negative bias for the longer DNA and actually a positive bias for the shorter DNA, which is still sufficient for measuring and counting kilobase pairs. Here is a comparison of the cell phone versus the 100x objective for the length measurement. The y equals x line denotes the gold standard, exactly how long the DNA should be. So here you see that positive bias for the very short DNA, and up here you get a slightly negative bias, but all in all the mobile-phone-based microscope and the benchtop system agree very closely with each other in terms of measuring the length of DNA. So it's great: we can measure the length of DNA, and we can do it in a field-portable system, but we don't get any information about what that genome actually is. What is the sequence that is maybe being copied? One way, again, to do this is to leverage fluorescence. Here we have an image; I don't know, maybe the screen is a bit dark, but here there are different colors of fluorophores conjugated to different sequences, and here they found that this strand of DNA varies from this one by one fluorophore, one sequence. That is their copy number variation, and they also know exactly what sequence was copied incorrectly. This is really important for understanding the genomics behind cancer, et cetera. So we're trying to take this work now to do similar types of things. In this work we used the KRAS gene, which produces a protein that is vital for cell signaling; that's how tissues respond to their environment and grow. The mutation of this KRAS gene is fundamental to cancer growth and cancer spreading in tissues, so it's a very important gene to study for replication and mutation. So to implement this type of tagged genomic sequence imaging using fluorescence, we had to develop a dual-color fluorescence microscope. Here we are employing two different laser diodes that sit, again, at a very high angle to obtain a very pure dark field. We're using a 532 nanometer laser diode and a 638 nanometer laser diode that can be turned on sequentially and controlled. We also have included a white LED, very cheap, off the shelf, just to be able to do bright-field imaging for alignment purposes, et cetera. This work is done in collaboration with Uppsala University in Sweden. They're very good biochemists, and what they were able to do was immobilize this KRAS gene onto a glass slide and implement rolling circle amplification, which is a way to replicate the same sequence over and over again, seemingly endlessly. When it's doing this replication it is prone to mutations, and if we can tag those mutations with a fluorophore, we can then get the ratio of successfully replicated genomic sequences to mutated sequences. So that's where the dual color comes in. We need both laser diodes exciting different fluorophores at different wavelengths, spectrally separated, so that we can get an idea of the ratio of successful replications to mutations. What's very exciting about this work is the possibility of doing this in situ or even in vivo. What I mean by that is working with actual cells, actual biopsies taken from patients. The dream here is to combine morphological information about the tissue and the cells with molecular information about the genomic sequences in those cells.
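As a toy illustration of the dual-color readout, the sketch below counts bright spots in the two spectrally separated channels and forms the mutant fraction. The blob-detection recipe and threshold are placeholders, not the published analysis.

```python
# Toy dual-channel readout: detect bright spots in each spectrally
# separated channel and compute the mutant fraction. The detector
# settings are placeholders, not the published analysis.
from skimage.feature import blob_log

def mutant_fraction(mutant_channel, wildtype_channel, threshold=0.1):
    n_mut = len(blob_log(mutant_channel, threshold=threshold))
    n_wt = len(blob_log(wildtype_channel, threshold=threshold))
    return n_mut / max(n_mut + n_wt, 1)  # ratio of mutated products
```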
So here is an example of a tissue fluorescing. The blue is from the background, but the green dots are the mutated genomic sequences. The researchers in collaboration with us have been able to tag only the mutant label, so only the green dots indicate mutants; the successful replications are not ligated and tagged. This can give you an idea of whether tissue is cancerous, and it can be the future of cancer diagnostics, pathology, and understanding how cancer spreads. Here is another example of combining tissue morphology with molecular information. Here are some cancerous cells; this is the bright-field image, along with the fluorescence image showing the successful replications of the KRAS sequence and the mutated sequence, captured on our mobile phone. We're very excited about this work. It's been recently published in Nature Communications, the collaboration is ongoing, and we're very excited for its future. So in conclusion, I've shown two examples of fluorescence microscopes developed in our lab. These are microscopes designed for a very specific purpose. They are created entirely with a 3D printer and off-the-shelf optical components. The Giardia device is around $250 combined total; the DNA analyzer is about $400 when you take into account the laser diode. These costs exclude the phone. If you were producing these on a large scale, obviously the costs could go down, but they're already a fraction of the cost of a fluorescence microscope, a confocal microscope, or other standard laboratory equipment. And we hope that these devices can have impact in low-resource areas, outside of the context of a well-funded lab. So with that, I'd like to thank everybody in my group and all of our funding sources. That concludes my presentation. Thank you very much. Thank you. After this you will have many questions, and we have time for that. Yeah, I think I went way under time. Question, please. Yes. Thank you for your nice presentation. I'd like to know more about the imaging process. Could you please explain it again? For which device, or do you mean the holographic imaging? Right, so this is again outside the context of fluorescence microscopy; this is holographic imaging. Hologram is kind of a misnomer, because when we think hologram we think of Star Wars, we think of a 3D projection of something. When we say hologram in the lab, what we mean is a diffraction pattern. We mean two interfering waves. So we can record a hologram of an object by sending partially coherent light. You can obviously do this with completely coherent light, but we use partially coherent light for reasons I'll get to in one minute. The partially coherent light interacts with the object. The object scatters the light, and at the image sensor we record the interference of the scattered light and the reference light, the background light. This interference pattern looks like the Airy disk. So that's why, in the images with the cells, here are the fringes of the diffraction pattern. This is the interference we're observing. It contains both amplitude and phase information. We can then pass this image through the back-propagation algorithm, which takes into account the spatial frequencies and reconstructs the image at the object plane. I hope that answered your question. Did you have any other follow-ups? We use partially coherent light because we can achieve that with off-the-shelf LEDs; to achieve strong coherence you need a laser source.
So there's a lot of holography done with lasers, to great effect. But there are some problems. One of the problems is speckle, which is basically noise created by many different interferences in your system. So by using partially coherent illumination we can record holograms with less speckle and therefore less noise. Thank you for a nice presentation about the applications and the very nice devices. I have two questions in particular. The first one is about the calibrations and the validation tests that you've been performing. I mean, many times, even using big laboratory equipment, the validation percentage does not get above 90%. So it was very interesting to me how, using a very wide field of view and very low resolution, such a high validation percentage could be achieved. I want to know a bit more detail about the validation tests that have been done. And secondly, I wanted to ask: if the validation tests of all the devices give these very nice results, is there any effort by your group to put them into medical applications, through the insurance and medical systems of the US or any other part of the world? Yeah, that's a great question. So let me discuss briefly the validation of the Giardia device, and then I'll talk about the DNA device. The validation of the Giardia device was done with a flow cytometer. We spiked water samples, and not only water samples from the lab: we spiked tap water, water samples from the ocean, and water samples from a local reservoir. We then validated our limit of detection against these different water sources with spiked Giardia. There's no way of knowing if those sources already have Giardia; hopefully our tap water does not. So we rigorously validated that, to show and demonstrate that this device would indeed work if you took it to a stream and measured a water sample there. And that water sample may have a very high concentration of dirt or something that could interfere with the measurement. That's why, in our training set that I discussed, we have over 100,000 dust samples. That's actually a really important part of our machine learning algorithm, the algorithm that differentiates the Giardia. We have to know which objects autofluoresce, or maybe nonspecifically capture the fluorophores that we would like to tag the Giardia with, to accurately count the Giardia. So it's a really vital part of this work, getting this device to work in the field, not just in a laboratory with very nicely filtered water. So I hope that answers your question about the Giardia device. Again, the validation was done with a flow cytometer, which is the best, most gold-standard way of doing it: one Giardia cyst at a time is passed through and counted, and we spiked the samples ourselves, so we know the concentration. And then for the DNA, our validation was done with a 100x microscope objective. As you can see from these images, 100x is high enough resolution to actually see the strands in their entirety and get a very accurate measurement of the length. So this is a valid gold standard for us to compare our microscope images against. And here you can see the size distributions and the corresponding lengths. This is the whole story of the validation right here: this is how well our device performs against the conventional gold-standard techniques.
Whether that is good enough for studying certain copy number variation mutations, I am not the expert in that. But it certainly is under one kilobase pair accuracy, which is excellent. When you are ordering DNA, they oftentimes differentiate the products by the number of kilobase pairs. So we have under one kilobase pair accuracy. But again, I can't speak to how useful that is for studying specific types of mutations. And then the second part of your question was about... It was about whether there is any plan in your group to apply this in the medical system. Absolutely. So we are always open to collaborations with groups who can field-test our devices. Like I said, we are working with the U.S. military now to field-test this Giardia device. Professor Aydogan Ozcan, who leads the lab, actually owns a company that commercializes some of the devices from our lab. The most successful commercialization has actually been a reader for rapid diagnostic tests, which are very prevalent in clinics. These are paper-based tests that immobilize certain biomarkers and essentially change color based on the presence of a biomarker, like a pregnancy test. So that's one device that has been commercialized. And yes, we are very interested in collaborating with institutions and other groups to validate these devices and push them towards commercialization, absolutely. But our lab is not a business. We focus on publishing papers, and so all the commercialization efforts have been outside the lab, from existing members or from third parties that see use in this technology. Thank you. Nice work. Basically, you know, you are replacing this big equipment with a small system. That's really a great move. But I have a question; perhaps you have heard this question many times. In this setup, you are just using your mobile phone; the sensor is being used for imaging, right? Whereas with mobile phones, we pay the most for the software. So if you replace the mobile phone with a similar kind of sensor and program that, perhaps that could also reduce the cost of this equipment. Yeah, that's a great point. It's funny, because if you were to buy a CMOS sensor from Sony, if they were willing to just sell you one of their CMOS sensors, their color image sensors are far cheaper than their monochrome ones, which is very counterintuitive because there's far more engineering going into the color sensors, right? I think we do purchase some of our image sensors from Sony, and I think that we can buy a standalone image sensor for maybe $12, though we have to buy them in bulk. But you're exactly right. You can design a system solely around an image sensor that is mass-produced for mobile phones, and is cheap because of that. And if you design your system around that, yes, that would further drive down the cost. However, there are some very distinct advantages that mobile phone connectivity gives us in terms of diagnosis. For instance, creating spatio-temporal maps, like I said with the mercury measurements, going up and down the coast and being able to store that information in a database, and being able to transmit images. For instance, the Giardia cyst counting algorithm is a very, very complex algorithm. We have to run it on graphical processing units on our server. The mobile phone alone is not powerful enough to do that in a timely manner.
It's very vital that we use the application, the software on the phone, to simply transmit that image so that it can be processed and the results sent back to us, speeding up the time to diagnosis and enabling those applications. But you're exactly right, and we do engineer systems solely around the image sensor. I have one of them up here, and we do have some larger benchtop systems that use mechanical stages, where we don't put a cell phone in; we just have the image sensor. Question, please. It's not a question, just a thank you for making a smartphone useful for something other than social communication and mail, and for letting people play with science using a smartphone. Thank you. And to build on that, thank you very much for your compliment. We are also interested in the educational aspect of this. We have a device up here, which I can show you in the next session, that is very useful for some clinical applications, but we actually take it to high schools and middle schools and elementary schools in the States all the time, and we allow kids to just put their hands all over it and play with it. We have a box of different samples, and it is amazing to see their eyes light up, because it's a phone, a device they use every day, and now it's looking at things they had no idea about and have never seen before, and that's really amazing to me. Okay, hi. I have a question. I don't know if these commercial devices maybe have some kind of built-in processing algorithms, for example when you take a photo. My question is whether you can control the camera, whether there is the flexibility, whether there are open libraries with which you can acquire the raw data, and how the software, the app, and the libraries in this field work. Are you talking specifically about the iPhone?
Yeah, so all of the devices we've made recently use the Nokia Lumia phone, which is a Windows phone, but you can use any phone; the technology is not that much different. There's a higher pixel count on this phone, but the iPhone also has a very sophisticated image sensor. However, we cannot take raw images through the Apple software; it's prohibited. That's why we use the Windows phone, only because we can take raw images. That's really the only reason. Obviously you can develop apps for Android, the Apple App Store, all this stuff, and for the Windows phone, but obtaining that raw image is very important to us. I didn't show this slide, but we have a graph of the sensitivity and accuracy of the Giardia counting device with different file formats, some of which are compressed, some of which are the only ones the iPhone would let you access, and the accuracy is far lower than when dealing with the raw image, as you can imagine, because there's more information. So there are limitations in terms of the software and the prohibitive nature of the proprietary technology, and hopefully there's a future where these devices can be commercialized, go through all of that, and not have to worry about it. I wanted to ask you about the holographic version: you reconstruct the intensity from the interferogram, but do you not use that information to also reconstruct the phase delay in the sample? You can, you absolutely can, yes. We have phase reconstructions of objects, and sometimes they're very valuable for getting new information about the image. The phase information is encoded there, so you can do a phase reconstruction, absolutely. And one thing I didn't understand on this slide, actually, with base pairs: what is actually the resolution in terms of kilobase pairs? Yes, yes, so let me go to this slide. No, you have it already. Oh yes, the resolution in terms of kilobase pairs. So it's around, you said, around 1 kbp? Yes, less than 1. And 0.33, but is that not below the resolution of the microscope, of the cell phone microscope? I'm not sure; I don't know what the resolution of our cell phone microscope is. We would need to do a test with an Air Force target. Obviously the benchtop microscope is somewhere around 200 nanometers, so this being 330 nanometers, it might be on the edge of the resolution of the microscope, yes, but I don't know for sure. You compare two images, one obtained with the mobile phone and the other one with the microscope. Have you measured the error between the two images, and if yes, what is the difference between those two images? So, are you asking about what the difference in the length measurement is for the microscope versus the mobile phone? See, here it is target detection: you are trying to see the spots inside both images and to detect the spot. Have you measured the error between those two images? An error in terms of the length measurement? In terms of the images: one is with blur, the other is without blur. I'm sorry, can you speak up a little bit? Sorry. If we consider international illumination standards, is it possible to see the error difference between the two images? You have two images, one taken by the mobile phone and the other with the microscope. Have you computed the error between those two images? That goes back to the previous question: what is our resolution compared to the benchtop microscope? That would be an excellent test. We do that often in the lab with an Air Force target, with its grating lines, where the minimum distance
between grating lines that we can resolve tells us the resolution. I'm sure that was done with this microscope; it's not my work, so I don't know the number off the top of my head, but it would certainly not be as good as the benchtop microscope. My guess would be somewhere around, I don't know, 300 or 400 nanometers, something like that, not necessarily exactly at the diffraction limit, because it's not a perfect system. Yes, more questions, please. Let's thank Zachary again for the nice presentation. And now we will take advantage of the fact that we are 20 minutes ahead, and let's start at 3:40 with the demonstration. We will organize it in this way: five students will come here and interact with the speaker, and maybe he will show them how to disassemble the mobile phone fluorescence microscope. I also want to say that, as we have time today, group number 6, and those students that were not in the preparatory school and are in groups 4, 1, 2, 3, we will have today a special interferometry experiment after the mobile phone demonstration. OK, group number 6, and you know the students. OK, we have a 30 minute break; come back then. OK. Yeah, that's great, because depending on how many people come we can organize it in the best way. I have some slides, just 10, that I will go through briefly to give an overview, so people know before they come in, and that should take 10 minutes, and then people can line up after that. In any case, you have one hour and 30 minutes for the demonstration. The question... do you have a question about the holography? I wondered whether what he was getting at was about the twin image. Oh, the twin image. Yours is an in-line holography setup, right? Right, so that's a huge problem. Yeah, it's a huge problem with holography, with in-line holography. The reason these are so hard to image is that they are so dense. Look at the hologram: when I'm back-propagating this, I also get the twin image. So this was a big barrier for the lab. Of course, we actually do that, and you are also taking multi-height...