In the previous module, we studied the schematics of the TCSPC, or time-correlated single photon counting, experiment. That is where we stopped, so let us go a little further today. Before we do that, let us quickly revise what we have learned. The essence of TCSPC is that you record the difference in arrival times of a start and a stop signal and plot a histogram of those differences. That is the histogram we talked about in the previous module, and what lies at the heart of TCSPC is that this histogram has the same shape as the fluorescence decay you are trying to measure.

Now let us get into some of the nitty-gritty of the instrument, because the whole purpose of this discussion is that we should be able to use the instrument ourselves. Of course, you will understand it a little better when you see the instrument for yourself in the lab, but we need some preparation for that as well. There are certain terms we need to know.

The first one is the TAC range. The TAC range is essentially the time for which the TAC (time-to-amplitude converter) waits for a stop signal before resetting and starting all over again. So you can think of the TAC range as the maximum time measured in your experiment: in the decay you record, the TAC range is the full scale on the x axis, the time axis. The smallest TAC range that I know of is 26 nanoseconds; 50 nanoseconds is more common. Which TAC range you use depends on the kind of decay you are looking at, because for a good analysis the decay must be complete.

Suppose your lifetime is 5 nanoseconds. Remember the decay law: I(t) = I(0) e^(-t/τ), which is the same as I(t) = I(0) / e^(t/τ). What happens when t = τ? The exponent becomes 1, so I(τ) = I(0)/e. What is the value of e? Approximately 2.7, say 3. So, very roughly, in one lifetime the signal, or the population, whichever way you want to think of it, decays to about one third of its value at time zero. To what value will it decrease in three lifetimes? Zero? Not quite; let us go stepwise. One lifetime: one third. Two lifetimes: 1/9, one third of one third. Three lifetimes: 1/27. Four lifetimes: 1/81. Five lifetimes: 1/243. So it becomes very small.

So typically you want to keep your TAC range at about 5 times the lifetime. A 50 nanosecond TAC range is fine provided your lifetime is no longer than 10 nanoseconds. At the same time, you do not want to keep the TAC range very large. Suppose your lifetime is 1 nanosecond and you keep a TAC range of 200 nanoseconds. What will happen? After 5 lifetimes, that is 5 nanoseconds, the signal is practically zero, so for 195 of the 200 nanoseconds you will be recording essentially nothing, which is wasted measurement time. So it is important to choose the correct TAC range.
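To put rough numbers on that rule of thumb, here is a minimal Python sketch; the function names and the factor of 5 are choices made for this illustration, not part of any instrument software. It evaluates how much of the signal survives after a given number of lifetimes, using the exact 1/e per lifetime rather than the lecture's rounded one-third, and suggests a TAC range of about five lifetimes.

```python
import math

def surviving_fraction(n_lifetimes: float) -> float:
    """Fraction of the initial intensity left after n lifetimes, from the
    single-exponential decay law I(t) = I(0) * exp(-t / tau).
    The lecture's 'one third per lifetime' is the rounded version of 1/e."""
    return math.exp(-n_lifetimes)

def suggested_tac_range_ns(lifetime_ns: float, factor: float = 5.0) -> float:
    """Rule of thumb from the lecture: keep the TAC range at roughly 5 lifetimes,
    so the decay is essentially complete within the measured window."""
    return factor * lifetime_ns

if __name__ == "__main__":
    for n in range(1, 6):
        print(f"after {n} lifetime(s): {surviving_fraction(n):.3f} of I(0) remains")
    print("suggested TAC range for a 10 ns lifetime:",
          suggested_tac_range_ns(10.0), "ns")
```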
When we say we are choosing the correct TAC range, what are we actually doing? We are setting the time window, that is, letting the TAC charge for a longer or shorter time before it resets. Put another way, the maximum signal the TAC can usually give is 10 volts. When the TAC range is 15 nanoseconds, that 10 volt signal corresponds to 15 nanoseconds; when the TAC range is 100 nanoseconds, the same 10 volt signal corresponds to 100 nanoseconds. What does that mean? What is this 100 nanoseconds? It is ΔT, remember, the time of charging. So what we are saying is that for different charging times we keep ΔV the same. How do we do that? The voltage across the capacitor remains the same, which means the capacitor is not the same: you have somehow changed the capacitance. How do you change the capacitance? Of course, nowadays everything is inside a chip, so God knows how they do it. But if I go back to the good old days of discrete electronic components, what you would actually use is a variable capacitor. Has anybody seen an old-fashioned radio set, where you turn a knob to go from one station to another? What are you doing there? You are changing the capacitance in an LCR circuit, which is why the characteristic frequency changes. In a conventional variable capacitor you have two sets of plates that slide into each other, like interleaved fingers; think of the fingers as defining the outer edges of circular plates. When they are fully interleaved, the overlap of areas is maximum; as you slide them out, the overlap decreases and so does the capacitance. So you can use a device like that to change the capacitance and therefore the TAC range. The point I am trying to drive home is that this is not magic, this is not voodoo; you can understand the principles with the high-school-level physics everybody has studied.

The second thing that is important is the number of channels. The TAC range, it sounds foolish if I say it like this, is something associated with the TAC. That sounds like stating the obvious, but I say it because it is important to remember that the number of channels is not associated with the TAC; it is associated with the multi-channel analyzer (MCA). It is very important to use the correct number of channels as well. Suppose I have a 50 nanosecond TAC range and 1000 channels. You never actually have exactly 1000 channels, it is always some binary number, but let us say 1000 for convenience. Then what is the time per channel, the time resolution? Each channel is one point of the decay. With 50 nanoseconds full scale and 1000 points, that is 50 picoseconds per channel; 1 nanosecond is 1000 picoseconds, so the arithmetic is easy. Now suppose that instead of 1000 channels I use 10,000 channels. Then what is the resolution? 5 picoseconds per channel.

Now the question is, how many channels should I use: 1000 or 10,000? Again, the answer depends on the kind of decay you are looking at. Typically you should have about 100 points per lifetime. So if your estimated lifetime is 1 nanosecond, then 1 nanosecond divided by 100 is 10 picoseconds, and 10 picoseconds per channel is a good resolution. But if your expected lifetime is 5 nanoseconds, there is no need to use 10 picoseconds per channel; 50 picoseconds per channel is fine. Why is this important? Because if you use a resolution that is not good enough, you do not get a good decay, and if you use a resolution that is unnecessarily fine, you waste time. Remember how the experiment works: counts are accumulating in every channel, so a poorer resolution essentially means that 2 or 3 or 4 or 10 or 100 channels have been merged, and the histogram builds up that much faster.
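Here is a similar sketch for the channel bookkeeping, assuming the 100-points-per-lifetime rule from the lecture and an invented list of power-of-two channel counts (the helper names are hypothetical, not an MCA vendor's API): it computes the time per channel for a given TAC range and picks the smallest channel count that reaches the required resolution.

```python
def time_per_channel_ps(tac_range_ns: float, n_channels: int) -> float:
    """Histogram resolution: TAC range divided by the number of MCA channels."""
    return tac_range_ns * 1000.0 / n_channels  # convert ns to ps

def choose_n_channels(tac_range_ns: float, lifetime_ns: float,
                      points_per_lifetime: int = 100,
                      available=(256, 512, 1024, 2048, 4096, 8192, 16384)) -> int:
    """Smallest available channel count that gives at least
    `points_per_lifetime` channels per expected lifetime."""
    target_res_ps = lifetime_ns * 1000.0 / points_per_lifetime
    for n in available:
        if time_per_channel_ps(tac_range_ns, n) <= target_res_ps:
            return n
    return available[-1]

if __name__ == "__main__":
    print(time_per_channel_ps(50.0, 1000))    # 50 ps per channel
    print(time_per_channel_ps(50.0, 10000))   # 5 ps per channel
    print(choose_n_channels(50.0, 1.0))       # needs ~10 ps/channel -> 8192
    print(choose_n_channels(50.0, 5.0))       # ~50 ps/channel is enough -> 1024
```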
So it is important that you use not the maximum, not the minimum, but the optimum number of channels for your experiment. Why? Because you do not want to spend an entire lifetime doing one experiment, but you also do not want an experiment whose data is not reliable. How will you know exactly what number of channels to use? We discussed an example a little while ago; beyond that, it comes from experience, and experience comes only when we do experiments with our brain switched on. If you do experiments with your brain switched off, no experience is gathered even if you do the same thing for an entire lifetime. It is like driving a car or swimming: once you learn it, you do not think, you just do it. But that experience has to come, so initially you have to do the experiment consciously, and you have to do rough experiments in the beginning to understand what kind of time constants to expect. You are not going to know beforehand: looking at the sample, you cannot tell whether the lifetime is 100 picoseconds or 5 nanoseconds. So doing many experiments is important, thinking is important, experience is important.

Now we move on to the rather important question of data analysis. You have got the data, fine; how do you fit it? We have already discussed the fitting model: the most commonly used models are the single exponential and the multi-exponential, and later on we will talk about some other fitting models as well. But now our problem is that we do not have the ideal situation of exciting with a delta pulse; we have a pulse of finite width. So what we measure, F_d, is really a convolution of the instrument function L, the laser pulse as the instrument sees it (think L for laser), with the actual fluorescence decay F. Convolution means a mixture, a hopeless mixture that cannot be separated easily. We will see what it means graphically, but it is important to understand that the curve we record is not the one we want; it is the one we want, mixed with the laser pulse as the instrument sees it.

This is what is called the convolution integral, and this is what we actually get: F_d(t) = ∫₀^∞ F(t − t′) L(t′) dt′, that is, F at time t − t′ multiplied by L at time t′, integrated over values of t′ from 0 to infinity. For the uninitiated, I am sure this means nothing yet. If you are very good at mathematics, perhaps you can look at it and make sense of it, but let us not take a chance; let us see what it actually means.

Think of a laser pulse of finite width. We can regard every point on its envelope as the tip of a delta pulse. How many delta pulses would there be under the envelope? In theory, infinitely many; in practice, the number is finite, because, as we have already discussed, we work with some picoseconds-per-channel resolution, so it depends on how many channels are covered under the curve. If we have 100 points under the envelope, we get 100 delta pulses. So experimentally the number is not infinite; it is determined by the time resolution we use. Let us see the effect of one such delta pulse; take a small one at an early time. Of course, what I have drawn with finite width is not really a delta pulse; a delta pulse is an infinitely narrow spike, like this.
What I am saying is that this delta pulse is going to give rise to a decay, like this. What about the next delta pulse? That will also give rise to a decay of its own. The next one? The same, and this goes on. This is what convolution means: every delta pulse under the instrument function gives rise to a decay, and if you look at any time t, the fluorescence intensity you observe is the sum of the intensities of all these decays at that particular time. That is the meaning of the convolution integral.

Let us see if we understand it a little better now. You see this arrow: this is the time t we are talking about, and we are saying that a particular delta pulse occurs at an earlier time t′. So for how long has the decay due to that delta pulse actually been going on? We are measuring at the absolute time t and the delta pulse occurred at the earlier time t′, so the decay has been going on for t − t′. That is easy. If I took any other value of t′, this t − t′ would change accordingly. Now look at the amplitude, the intensity of the pulse at time t′; let us call it L(t′). What will be the contribution to the intensity at time t? F is the fitting function, remember; for a single exponential it is e^(-t/τ). Instead of e^(-t/τ) we now have to write e^(-(t − t′)/τ), because the delta pulse is at t′, so t − t′ is the effective time, and this contribution is weighted by L(t′), the intensity of the pulse at time t′.

So now we come back to the integral. What did we say? The observed intensity is the sum of the intensities arising from all these delta pulses. Writing the most general expression, with t′ ranging from 0 to infinity, we get F_d(t) = ∫₀^∞ F(t − t′) L(t′) dt′: the summation is replaced by an integration from 0 to infinity over t′, not over t. That gives us the intensity we actually see at one particular point in time, and it is important to remember that it is at one particular point in time. I have not written this for the entire decay, only for the value of t where the arrow presently is; to get the intensity at that value you have to work out this integral. This is the meaning of convolution. Do not be scared by the integral sign; the integral is just a summation. We are adding the intensities of all the decays coming from all the delta pulses, and the sum is given by this integral.

How do I actually handle it? By a method called iterative reconvolution. We keep saying "deconvolution", and we do have to deconvolute, because what we record is a mixture of the decay and the pulse as the instrument sees it. How do I deconvolute? The easier way of doing this so-called deconvolution is to take a guess function. Suppose you assume that the lifetime is 5 nanoseconds. The moment you assume that, you know what F(t − t′) is going to be. So with the guess value, the assumed value as it is called, you can construct the decay; well, not easily if you had to do it manually, that would take a long time, but for a computer it is very fast.
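As an illustration of what the computer does with that guess, here is a minimal numerical sketch of the discrete version of the convolution sum, F_d(t_i) ≈ Σ_j F(t_i − t_j) L(t_j) Δt; the Gaussian instrument response, the 50 ps channel width and the single-exponential guess are all invented for this example, not taken from any particular instrument.

```python
import numpy as np

# Assumed example values: 50 ps per channel over a 50 ns TAC range.
dt_ns = 0.05
t = np.arange(0.0, 50.0, dt_ns)

# A made-up instrument response L(t'): a narrow Gaussian pulse centred at 2 ns.
irf = np.exp(-0.5 * ((t - 2.0) / 0.1) ** 2)
irf /= irf.sum()

def convolved_decay(tau_ns: float) -> np.ndarray:
    """Discrete form of F_d(t) = integral over t' of F(t - t') * L(t') dt',
    with the single-exponential guess F(t) = exp(-t / tau)."""
    decay = np.exp(-t / tau_ns)
    # np.convolve builds sum_j decay[i - j] * irf[j]; keeping the first len(t)
    # points and multiplying by the channel width approximates the integral.
    return np.convolve(decay, irf)[: t.size] * dt_ns

f_d = convolved_decay(5.0)  # the curve the instrument would record for a 5 ns guess
```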
So you construct the fluorescence intensities at every point, and then you compare the curve you have constructed with the curve you have measured. Of course, you will not get it right the first time, so you have to do it again; that is why the method is called iterative reconvolution. Reconvolution, because you are knowingly convoluting the instrument response function with the decay law that you think is correct, and iterative, because you will never get it right the first time; you have to do it over several rounds, several iterations.

So I think we will stop here today. Next time we are going to talk about goodness of fit, because it is very easy for me to say that we will compare the two curves, but how will the computer compare them? The computer does not have eyes. So next time we will learn, first of all, what the eyes of the computer are, how it knows whether a fit is good or bad, and then we will talk about some more decay models.
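For completeness, before moving on, here is a rough sketch of that compare-and-iterate loop, reusing the hypothetical convolved_decay function from the previous snippet; it simply tries a grid of guessed lifetimes, reconvolves each with the instrument response, and keeps the guess whose curve best matches the measured histogram in a least-squares sense. The proper goodness-of-fit criterion, chi-square and residuals, is the subject of the next module.

```python
import numpy as np

def fit_lifetime(measured, reconvolve, tau_grid_ns):
    """Iterative reconvolution in its crudest form: for each guessed lifetime,
    reconvolve it with the instrument response and keep the guess whose curve
    best matches the measured decay (smallest sum of squared residuals)."""
    best_tau, best_cost = None, np.inf
    for tau in tau_grid_ns:
        model = reconvolve(tau)
        scale = measured.max() / model.max()   # amplitude is a free parameter
        cost = np.sum((measured - scale * model) ** 2)
        if cost < best_cost:
            best_tau, best_cost = tau, cost
    return best_tau

# Example usage with synthetic "measured" data from the previous snippet:
#   rng = np.random.default_rng(0)
#   measured = convolved_decay(5.0) + rng.normal(0.0, 1e-4, t.size)
#   fit_lifetime(measured, convolved_decay, np.arange(1.0, 10.0, 0.1))  # ~5.0 ns
```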