Today we have come to the lab to have a look at our time-correlated single photon counting (TCSPC) setup. In the setup that we are going to show you today, the light source is a femtosecond pulsed titanium sapphire laser whose output is tunable from 690 nanometers to 1040 nanometers. Then we have something called a pulse picker, which cuts the repetition rate down to the desired value, and after that we have second and third harmonic generation arrangements by which we can generate blue and ultraviolet light. This light is used to excite the samples, and the fluorescence of the sample then goes through the time-correlated single photon counting detector and electronics, from which we generate the decay. So today the purpose is to show you how a decay is actually recorded and, wherever possible, the components of the instrument itself.

First, let us have a look at the laser. This is our femtosecond titanium sapphire laser. Unfortunately, this laser is something of a black box: we cannot open it and show you what is inside. We take a rain check on that; later on, in an older model, we will show you what is actually inside a laser. Now, the titanium sapphire laser gives us output, as I said, in the range of 690 nanometers to 1040 nanometers, which is basically the red to near-IR range. This wavelength is not really useful for excitation if you are going to do a fluorescence experiment. Secondly, the repetition rate at which the pulses come out of the laser is about 80 megahertz, which means the separation in time between two pulses is about 12.5 nanoseconds. That is a problem, because if you have a long decay that does not get over within 12.5 nanoseconds, we cannot really record it. So first of all we have to cut down the repetition rate, and then we have to generate what are called higher harmonics, about which we will study in detail a little later.

First let me show you what is inside this box. What you see here is a lot of optics, and when you see it for the first time everything looks alike. Just to give you an idea of how things work: the red light comes in from here, and this slab that you see is the pulse picker. It is a quartz block to which we apply a radio frequency signal; this gives it a variable refractive index, and as a result we can chop down the number of pulses, depending on what radio frequency we apply. Then the red light comes and is incident on a crystal placed there. You might be able to see a little bit of blue light there; that is where second harmonic generation takes place. Second harmonic generation means, you can think, that two photons of smaller energy join up to produce one photon of exactly double the energy. Now, this conversion is never complete; the maximum efficiency you can hope to get is about 20 percent. So out of this crystal, say, 20 percent of the light is converted to blue and the remaining 80 percent remains red. That comes here and gets split: this is a dichroic beam splitter. So now the blue and red lights take different paths, and one of them is given a variable time delay using this micrometer screw gauge. Then both of them travel along the same path onto a third crystal that is kept here; that is the third harmonic generation crystal.
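To make these numbers concrete, here is a minimal back-of-the-envelope sketch in Python, assuming the 80 megahertz native repetition rate and the 885 nanometer fundamental used in this demonstration:

```python
# Back-of-the-envelope numbers for the Ti:sapphire source described above.
# Assumed values: 80 MHz native repetition rate, 885 nm fundamental.

rep_rate_hz = 80e6          # native repetition rate of the laser
fundamental_nm = 885.0      # fundamental wavelength chosen for this run

# Pulse-to-pulse separation: the decay must be over within this window
# unless a pulse picker lowers the repetition rate.
pulse_spacing_ns = 1e9 / rep_rate_hz
print(f"Pulse spacing at {rep_rate_hz/1e6:.0f} MHz: {pulse_spacing_ns:.1f} ns")  # 12.5 ns

# Second harmonic: two fundamental photons combine, so the wavelength halves.
shg_nm = fundamental_nm / 2
# Third harmonic: one fundamental photon mixes with one second-harmonic photon,
# giving light at one third of the fundamental wavelength.
thg_nm = fundamental_nm / 3
print(f"SHG: {shg_nm:.1f} nm (blue), THG: {thg_nm:.1f} nm (UV)")  # 442.5 nm, 295 nm
```

At 80 megahertz the 12.5 nanosecond window is too short for many fluorescence decays, which is exactly why the pulse picker is needed.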
In third harmonic generation, what we do is mix one photon of the fundamental frequency with one photon of the second harmonic, and as a result we generate the third harmonic, which is basically lambda by 3. Now it is time to show you the output. You might be able to see two spots here. The lower, brighter one is actually blue: right now we are working with a fundamental wavelength of 885 nanometers, so this blue spot is at half of that, approximately 442 nanometers. The upper one, which also looks blue to our eyes, is actually the UV light, the third harmonic, 885 divided by 3, which is about 295 nanometers. It looks blue because the UV light is incident on the card, which has a lot of proteins and such in it, and they emit in the blue; that fluorescence is what we see, since the UV itself is not visible to our eyes.

Now we have to choose which one we want, the blue one or the red one; right now the red one is being used. So the red light hits this mirror, then comes to this mirror, and gets reflected into the sample chamber where we keep the sample. We have kept our sample in this cuvette, and that goes in here. The excitation light comes this way and hits the sample; emission is in all directions, but we record in a direction perpendicular to the excitation.

Let me show you the detection channel. The first piece of optics through which the emitted light goes is a polarizer, which is kept here. After the polarizer we have a monochromator, which we have discussed in the theory class, and then this fast detector, a new kind of hybrid detector. You can see that the output of the detector goes through this thick cable. That thick cable comes out and goes to a small box there, and from there you might be able to see a small cable that comes out and goes into a terminal called START: that is the start input of the time-to-amplitude converter (TAC) that we discussed in the previous class. Before the signal gets there, inside that small box itself we have the constant fraction discriminator.

So that is the start. What about the stop? You can see there is another terminal there which says STOP. The stop input comes as a synchronization pulse from the laser power supply itself. Right now we are not trying to show it to you because it is a little too circuitous, but we do have a stop that comes from the source. So here, you see, we are starting the charging of the TAC with the fluorescence signal and stopping it with the synchronization signal. This is called reverse mode, and reverse mode is useful especially when you do a high repetition rate measurement, because it decreases the dead time of the instrument and recording is fast.

Now we will go over to the other side once again and actually record a decay. So here we are: this is where we record the decay, after the signal comes from the TAC to the multi-channel analyzer. A multi-channel analyzer is nowadays actually a card that goes inside the PC and acquires the data for us; in our case all the electronics goes into the box that you see right here. On the screen you can see two counts, one for stop and one for start. The stop count is something like 7956074 per second, and the start count is about 7800 or so.
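To see why reverse mode decreases the dead time, here is a rough counting argument sketched in Python, using approximate rates like the ones quoted in this demonstration (the exact numbers are assumptions for illustration):

```python
# Why reverse mode helps: a rough count of TAC start-stop cycles per second.
# Assumed rates, of the same order as those quoted in this demonstration.

sync_rate_hz = 8e6        # laser sync (stop) pulses per second, ~8 MHz
photon_rate_hz = 8e4      # detected fluorescence photons per second (~1%)

# Forward mode: every laser pulse would start the TAC, and most cycles
# would be wasted because no photon arrives before the next pulse.
forward_starts = sync_rate_hz

# Reverse mode: the TAC starts only when a photon is actually detected,
# and the next sync pulse stops it, so almost every cycle is useful.
reverse_starts = photon_rate_hz

print(f"Forward mode TAC cycles/s: {forward_starts:.2e}")
print(f"Reverse mode TAC cycles/s: {reverse_starts:.2e}")
print(f"Reduction: {forward_starts/reverse_starts:.0f}x fewer charge-reset cycles")
```

Since only about one excitation pulse in a hundred or fewer produces a detected photon, starting the TAC on photon events rather than on every laser pulse means it spends far less time in wasted charge-and-reset cycles.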
So this gives you the repetition rate of the laser: we are working at a repetition rate of about 8 megahertz, that is the number you see here, which means the number of pulses hitting the sample is 8 × 10^6 per second. And here what we see is the number of emission events recorded per second, 76527. You can see this number is fluctuating while the number at stop is not; that is because the laser gives out pulses at a constant repetition rate, whereas emission is a more random event. But we have kept the number of such random events a very small fraction of the number of stop events, 1 percent or less, definitely no more than 2 percent. This is required because otherwise we get what is called the pile-up effect, and our decay acquires a spurious fast component.

So this is what we get per second. Before acquiring a decay it is important that we understand the parameters here. You see it says the time range is 50 nanoseconds; you might remember that in the previous session we discussed the TAC range, and this is the TAC range right here. It means that the TAC waits for 50 nanoseconds for the other signal to come. It also means that the full scale of our measurement in time is 50 nanoseconds. And you see there is something called coaxial delay: you have to delay the start as well as the stop pulses appropriately so that they all fall in the region where we are looking. To give you a little better idea, you can look at the start inputs and the stop inputs separately. Here the TCSPC source is set to "custom" because we are using a source that is not really part of the spectrometer; we are using a titanium sapphire laser. As for these thresholds, with commercial instruments nowadays you do not really have to play around with them, but if you are working with an assembled instrument you need to know what kind of threshold to set so that you get a good signal that is not contaminated with too much noise. These are the start inputs and these are the stop inputs; you have already seen the coaxial delay of 35 nanoseconds.

One more thing I would like to draw your attention to is the histogram size. Here the histogram size is 8192 bins, which means that in the multi-channel analyzer there are 8192 channels. It is approximately 8000, but the reason it is exactly 8192 is that, as you will see in all these measurements, it is always 2 to the power of something. You can work it out: if 2^x = 8192, then what is x? I leave that to you. Whatever number you get here is always a power of 2, because the computer works on binary logic. This is something you can change; if you change it, the number of points will change while the full scale stays the same, and that will change your resolution. So once again let me emphasize that you should not always work with 8000 points. Sometimes you might need more if possible; sometimes it is enough to work with less. You need to know what kind of decay you are looking at, and you have to choose your resolution properly so that you do not end up spending too much time recording the data, but still have data with sufficient resolution.

With that background we can try to record the data itself. You can see here that we have two options, IRF and decay. IRF means instrument response function; we will record it later, maybe before we do the analysis. Right now we are looking at the decay, so let us start acquisition.
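Two quick sanity checks follow from these settings. Here is a short Python sketch using the values quoted above (the on-screen count rates are as read out in this session):

```python
# Sanity checks on the acquisition parameters quoted above.
# Assumed values: 50 ns TAC range, 8192 histogram bins, the displayed count rates.

tac_range_ns = 50.0
n_bins = 8192               # 2**13 channels in the multi-channel analyzer

# Time resolution per channel: full scale divided by the number of bins.
bin_width_ps = tac_range_ns * 1000 / n_bins
print(f"Bin width: {bin_width_ps:.1f} ps per channel")   # ~6.1 ps

# Pile-up check: the emission (start) rate should stay below ~1-2% of the
# excitation (stop) rate, or the decay picks up a spurious fast component.
stop_rate = 7_956_074       # sync pulses per second (~8 MHz)
start_rate = 76_527         # detected emission events per second
ratio = start_rate / stop_rate
print(f"Start/stop ratio: {ratio:.2%}  (keep this under ~2%)")
```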
Here the y axis is logarithmic, and you can see that the decay starts building right away. To our eyes it looks like all the points are going up at the same time, but that is not correct, because if you remember we are recording about 8000 events per second, and our eyes do not work that fast; our eyes cannot really resolve more than 10 or 30 events per second. So it looks like it is all happening together, but actually it is not: this decay is being built point by point. What you see here is time zero, and this here is the decay; it looks like this because the y axis is logarithmic. The advantage of a logarithmic y axis is twofold. First of all, if it is a single exponential decay it will show up as a straight line; so if it is anything other than a straight line, you know for sure that it is not single exponential. Secondly, on a log scale you can see high counts as well as low counts together: from here to here is 10, from here to here is 100, from here to here is 1000. That is why you can look at the low-count part of the decay alongside the part where you have larger counts. If I do not use a log scale, then this is the decay we actually get; the y axis now is linear, and you can see the decay is practically over by the time we reach 38 nanoseconds, which means that the scale we have used is perhaps good enough, but maybe there was no need to use 8000 points here.

So this is how you record a decay in TCSPC. Next we are going to record the IRF (we will not show you that step) and then show you a little about how to do the data analysis. Now we have recorded the decay up to 5000 counts at the peak, though it is actually better if you record up to at least 10,000 counts, and we have zoomed in over a range from 6 to 15 nanoseconds, that is, 9 nanoseconds. You can see the decay is almost over, though a little more remains beyond it; this is zero time, and these are the points.

The next thing to do is to record an instrument response function so that we can analyze the decay, and that is what we will do right now. To record the instrument response function we have replaced the sample with a scatterer, in this case LUDOX, and we have changed the wavelength to the excitation wavelength. The earlier decay that we showed you was recorded at 350 nanometers; now we have changed the wavelength to 295 nanometers and we are looking at scattered light. Since we are looking at scattered light, the counts are really very high, even though we have decreased the band pass from 4 nanometers to 2 nanometers (we should perhaps reduce it a little more). Now we have about 11,000 counts. Remember, in TCSPC you should not have too many counts coming out of the photomultiplier tube: that gives rise to the pile-up effect and can also damage your detector in the long run. So now we go back and record the instrument response function. This will be done in a jiffy because the counts are so high. You can see this blue curve coming up; that is the instrument response function, which means the laser pulse as the instrument sees it. This is the plot with the y axis on a linear scale, and this is the plot with the y axis on a log scale.
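To see the twofold advantage of the logarithmic axis for yourself, here is a small Python sketch with synthetic decays (the lifetimes and amplitudes chosen here are arbitrary illustrations, not the measured ones):

```python
# Why a log y-axis helps: a single-exponential decay is a straight line on a
# semilog plot, and both high and low counts stay visible. Synthetic data only.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 50, 500)                     # time axis in ns
single = 10_000 * np.exp(-t / 5.0)              # one lifetime: 5 ns
double = 7_000 * np.exp(-t / 1.3) + 3_000 * np.exp(-t / 5.0)  # two lifetimes

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(9, 3.5))
for ax in (ax_lin, ax_log):
    ax.plot(t, single, label="single exp (5 ns)")
    ax.plot(t, double, label="double exp (1.3 + 5 ns)")
    ax.set_xlabel("time (ns)")
    ax.set_ylabel("counts")
ax_log.set_yscale("log")        # the single exponential becomes a straight line
ax_lin.set_title("linear y")
ax_log.set_title("log y")
ax_log.legend()
plt.tight_layout()
plt.show()
```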
Here you might see that there is a little bit of an after-pulse. That often shows up when you make very fast measurements with ultrafast lasers, but sometimes these after-pulses come as a result of poor alignment of the light into the sample chamber, and that has to be taken care of by tweaking the mirrors.

So now we have recorded the data: your decay is here and the instrument function is also there. You might remember we discussed that in order to fit a decay we have to do what is called iterative reconvolution: we have to decide on a fitting model, convolute it with the instrument function that we recorded, and see how good a fit we get. You might also remember another discussion we had: how many points do we have in the instrument response function? We had said that the number of points is infinite in principle but finite in practice, because we are working at a certain resolution. Each of these points is going to act as a delta pulse, and that is what we are going to use to deconvolute this data and extract the lifetimes.

What you see here is the result of fitting this decay to a bi-exponential function. This range, denoted by the two cursor lines, is the range that we have set for the instrument function. Essentially, if you remember, to get the intensity at each point in time t we have to integrate the fitting function multiplied by the instrument function, I(t) = ∫ IRF(t′) F(t − t′) dt′, in principle from 0 to infinity, or minus infinity to plus infinity. For all practical purposes, though, you want to set a limit within which the instrument function has non-zero values, and this is the range we have set. For fitting the data, the range we have set is much larger, from here all the way up to here.

We have the results of the fit here: the first component is 1.299 nanoseconds and the second component is about 5 nanoseconds. The shift is a measure of the difference between the peaks of the instrument function and the decay; that always occurs, because of something called the colour effect in the detector. The amplitudes will show up if you open this up; actually they are here, and from them you can calculate the relative amplitudes of τ1 and τ2. The chi-square turns out to be 1.16472; there is no need to go to so many decimal places, so say 1.16. If you see a chi-square of 1.16, you think it is a more or less good fit.

But now look at the weighted residuals. I do not think we discussed weighted residuals in the last class; maybe in the next class we will discuss what they are. The weighted residuals give you a measure of what kind of fit we have over the entire fitted range. You see that between, say, 9 nanoseconds and 31 nanoseconds you have a good fit, but before 9 nanoseconds, at short times, the fit is really bad, which means we have to play around with this range, or change the guess values we started with, and see if we can get a better fit.

Now, how do you choose the range to be fitted? You see it goes up to 31 nanoseconds here; we could have gone further, but beyond that the data is practically all zero, and if you are fitting zeros, that will always be a good fit.
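Here is a minimal sketch of what the fitting software is doing in iterative reconvolution, written in Python with synthetic stand-ins for the measured decay and IRF (the Gaussian IRF shape, the lifetimes, and the peak count are all assumptions for illustration, not the instrument's actual algorithm):

```python
# A minimal sketch of iterative reconvolution on synthetic data: the model
# decay is convolved with the IRF, compared with the data using Poisson
# weights, and the parameters are refined by least squares.
import numpy as np
from scipy.optimize import least_squares

dt = 50.0 / 8192                      # bin width in ns (50 ns over 8192 bins)
t = np.arange(8192) * dt

# Stand-ins for measured curves: a narrow Gaussian IRF and a noisy decay.
irf = np.exp(-0.5 * ((t - 2.0) / 0.05) ** 2)
true = np.convolve(irf, 0.6 * np.exp(-t/1.3) + 0.4 * np.exp(-t/5.0))[:t.size]
data = np.random.poisson(true / true.max() * 10_000)   # peak ~10,000 counts

def model(params):
    """Bi-exponential decay reconvolved with the IRF, one value per channel."""
    a1, tau1, a2, tau2 = params
    decay = a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)
    return np.convolve(irf, decay)[:t.size] * dt

def weighted_residuals(params):
    fit = model(params)
    sigma = np.sqrt(np.maximum(data, 1))   # Poisson noise: sigma = sqrt(counts)
    return (data - fit) / sigma

guess = [1e4, 1.0, 1e4, 4.0]               # amplitudes and lifetimes (ns)
result = least_squares(weighted_residuals, guess, bounds=(0, np.inf))
chi2 = np.sum(result.fun ** 2) / (data.size - len(guess))
print("tau1, tau2 =", result.x[1], result.x[3], " reduced chi-square =", chi2)
```

The weighted residuals plotted against time are exactly the array this function returns: for a good fit they should scatter randomly about zero over the whole fitted range.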
But that does not help: since fitting zeros is a good fit, it will make your chi-square look better, but it makes no sense, because what is the point of fitting a flat line? It gives you no information about the time constants that come out of the fit. So it is important that we cover the entire decay, going all the way to the point where the decay has become zero, but it is also important that we do not go any further. And about the other range, the range of the instrument response function: it is very important that we cover the entire range where the instrument function may have non-zero values, and to do that it is usually better to look at it in a semi-log plot so that you do not miss any after-pulse that is there.

So we will do the fit once again with slightly different ranges and see whether there is any improvement. What we have done now is, first of all, increase the range of the instrument response function to make sure that I am not missing any point that actually contributes, and I have also made the range of the decay a little smaller by starting much later. As you can see, I am actually missing out on this much of the decay, which is not good, and still my fit is no better than it was: the initial part is still not fitted very nicely. This means that perhaps my fitting model is not right; two exponentials may not be the right function to use. Let me see what happens if I use three exponentials instead, keeping the same range.

Now we have fitted to three exponentials keeping the same range, and you see the residuals are nicely distributed about the mean, and the chi-square is as good as it gets: it has a value of 1.03. But I am still not satisfied with this, because, remember, we are losing out on the initial part. So I would like to change the range once again and see whether there can be an improvement. Here we go: we are using a triple-exponential model, and you can see the fit starts here, almost at the top, and throughout the decay it is quite good. What that tells me is that the bi-exponential model is not all that good, and you have to use a triple-exponential model, which actually makes sense in this case, because the sample we are looking at is a protein. What we are monitoring here is the emission of the tryptophan moiety of the protein, and it is well known that tryptophan, even free tryptophan in water or in some solvents, always has a triple-exponential decay. The time constants we are getting are 1.7 nanoseconds, 5.6 nanoseconds, and 2.8 × 10 to the power minus something, a very small component which may actually be believable.

What we have seen here is how we fit the data. Fitting data is not something that gets done by itself; or rather, it does get done by itself, but then the result is not believable. What you need to do while fitting data is spend considerable time and work with a model that makes sense, because if instead of a triple-exponential model I used a five- or six-exponential model, the fit might be even better; there is something called over-parameterization. While fitting data, if you use a larger number of parameters the fit is always better, but it may or may not make sense. So we must use a model that makes sense for the system we study, as illustrated in the sketch below. The take-home message here is that the fitting process might look very mechanical, but actually it is not. It has to be done keeping in mind what kind of system we are looking at and what it is that we expect to see.
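You can see this over-parameterization effect in a small numerical experiment. The Python sketch below, with synthetic tryptophan-like data (the amplitudes and lifetimes are assumed for illustration), fits the same decay with two, three, and four exponentials:

```python
# Over-parameterization in action: adding exponentials essentially never makes
# the fit worse, so a lower chi-square alone does not justify extra components.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 50, 2000)
# "True" sample: a tri-exponential decay, loosely like tryptophan emission.
true = 4000*np.exp(-t/1.7) + 3000*np.exp(-t/5.6) + 3000*np.exp(-t/0.2)
data = np.random.poisson(true)

def fit_n_exponentials(n):
    def residuals(p):
        amps, taus = p[:n], p[n:]
        model = sum(a * np.exp(-t / tau) for a, tau in zip(amps, taus))
        return (data - model) / np.sqrt(np.maximum(data, 1))
    guess = np.concatenate([np.full(n, 3000.0), np.linspace(0.5, 6.0, n)])
    res = least_squares(residuals, guess, bounds=(1e-6, np.inf))
    return np.sum(res.fun ** 2) / (t.size - 2 * n)   # reduced chi-square

for n in (2, 3, 4):
    print(f"{n} exponentials: reduced chi-square = {fit_n_exponentials(n):.3f}")
# Typically: a large drop from 2 to 3, then only a marginal change from 3 to 4.
# The third component is real; the fourth is over-parameterization.
```

The reduced chi-square will keep creeping down as you add components; that is exactly why the choice of model has to come from the physics of the system and from the residuals, not from chi-square alone.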
Of course, that expectation might cloud our vision; that danger is always there. But you cannot do this without thinking about what kind of system you are looking at. That is what we wanted to show in this module. We will get back to the class after this and start a discussion of the different fitting models one can use. We have already talked about single-exponential and multi-exponential models, but not every decay has to follow one of those rate laws. We will see what other situations can arise while fitting data. That is it for today.