So, welcome everybody, I'm really happy to see the number of participants today. Our speaker today is Fabrice Cordelières, who will present an introduction to colocalization. Fabrice is working at the Bordeaux Imaging Center, a microscopy facility in France. We as panelists are here to answer your questions from the Q&A and to moderate the session. So don't be shy, ask your questions; we will also interrupt Fabrice if needed to ask him some questions, and he will make some breaks. We are here together for a one-hour-thirty webinar, and the floor is yours, Fabrice.

Thank you, thank you very much for giving me the opportunity to give you this really short introduction to colocalization. I won't go into fancy advanced topics; I will just try to show you the basic tools and the basic elements that you can use to build your own colocalization workflow. Of course there are a lot of methods, depending on the topic you're working on, but my idea with this first introductory seminar is really to revisit the generic methods, to make you understand what is behind them, how to use them, how to combine them, and how to get significant numbers out of your images. So let's dive into this black box that is colocalization. Everything always starts with an image, and I really like to use this image from the Fiji website. It's a nice image with two channels, one red and one green, and everyone knows that when you're looking at colocalization, what you're expecting to see is some yellow dots, some yellow areas. Even if your monitor is not that good, and if the network compression is not that good, you may see that we've got indeed some yellowish areas here. So why should we bother, why should we try to quantify this colocalization?
Because we see it, yeah. But once more, depending on the means used to display the image, you may be tricked into thinking that you've got indeed colocalization. You may think this is an obvious case, but when zooming in a bit, you will realize that the two signals are in fact separated. So this is why it's important to look at the image; this is a starting point, especially to identify the areas where you may have colocalization. But then it will be important to find a way to measure and to assess that your diagnosis of colocalization is indeed true. The problem, and this is something I come across quite a lot working at the facility, is that colocalization is some kind of big word that encompasses quite a lot of different things, several typical experimental situations. When the user crosses the door of the facility, sometimes he or she is asking for colocalization, but in fact he or she is expecting some kind of co-expression. In this example, with a single cell with two labelings, one in the nucleus and the other in the cytoplasm, you will never see these yellowish dots, and the user will ask for colocalization when the thing he's looking for is co-expression: two proteins located in the same structure, in the same cell. So already at this step, you see that there is one thing that matters: the resolution, and the topology of where the event is taking place. Here, you are not expecting the two signals to be at the same location, as you would when talking about colocalization; you're expecting the two proteins to be in the same structure, or in the same cell.
Then what you may expect is some kind of co-occurrence, meaning that the two proteins of interest are on the same spot, the same structure, the same location, but without a specific stoichiometry of association: you may have one green for two reds on one structure, two reds for four greens on another, and so on. In this case, you're indeed expecting co-occurrence in the same structure, but no real relationship in terms of intensity. If you are expecting a certain stoichiometry of association to the structure, you will rather be looking for correlation: correlation between the intensities of the two labelings on the structures of interest. And finally, sometimes what you want to see is whether the patterns of distribution of the proteins are somehow related; in this case, you will rather talk about co-distribution. So of course, depending on what you want to do, you will use certain tools: co-occurrence will use tools that are not the ones you will use for correlation. This is why you've got a huge panel of tools to take advantage of. And then you will start your journey into the colocalization analysis world. For the purpose of this talk, I've separated this journey into five big steps: checking data integrity, then pre-processing, then choosing a reporter or metric, then comparing and interpreting, and finally assembling the workflow, once you've established which tools, which elements you will assemble. First, let's have a look at how we check for data integrity, because already when preparing the sample, you can totally ruin your colocalization analysis, and already when acquiring the data, you can ruin it too. There are three main points that you should take care of, and we will see which important steps we should go through. First, of course, you will have to choose dyes that you can image separately.
If you're looking for colocalization and you're not choosing appropriate dyes that will be easy to separate during imaging, then you may run into troubles like having a bit of one channel bleeding through into the other channel. And this is quite easy to assess simply by using this representation, and I hope you can see the pointer on the screen: a plot where you use the two intensities of the same pixel, the green intensity and the red intensity, as x, y coordinates, and plot them on this kind of diagram, which is called a cytofluorogram. If the intensities are linked between the two channels, you will see a big, more or less compact cloud on the graph. If you have some bleed-through from one channel to the other, you will see some kind of relationship between the two sets of intensities lying quite close to one of the axes. This should be a sign that maybe your acquisition parameters or your choice of dyes are not really appropriate, and you may go back to the bench or to the microscope to do a better acquisition or a better prep. The other thing, of course, is your imaging. You're imaging several dyes through your microscope, and the microscope has some kinds of defects. Even if you're using the best objectives and so on, you may have images where the channels you're imaging are not well registered. In this case, before starting to measure anything for colocalization, you must assess that your setup is really appropriate and that co-registration holds. If it's not the case, then you may try to analyze this mis-registration and try to correct for it. One way to do it is to use some reference objects that you know are multi-labeled.
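The diagram described here can be built as a joint 2D histogram of per-pixel intensities. A minimal sketch in Python (function names and the toy data are mine, not from the talk):

```python
import numpy as np

def cytofluorogram(ch1, ch2, bins=64):
    """Joint 2D histogram of per-pixel intensities from two channels:
    each pixel contributes one point at (ch1 value, ch2 value).
    A compact diagonal cloud suggests linked intensities; a cloud
    hugging one axis suggests bleed-through or unrelated labels."""
    hist, ch1_edges, ch2_edges = np.histogram2d(ch1.ravel(), ch2.ravel(), bins=bins)
    return hist, ch1_edges, ch2_edges

# Toy example: the green channel is largely a scaled copy of the red one,
# so the cloud follows a diagonal line.
rng = np.random.default_rng(0)
red = rng.integers(0, 256, size=(64, 64)).astype(float)
green = 0.8 * red + rng.normal(0, 5, size=(64, 64))
hist, _, _ = cytofluorogram(red, green)
```

In practice you would display `hist` (often log-scaled) as an image, which is exactly the plot tools like Fiji's colocalization plugins show.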
You acquire them in the same conditions as your sample, and you check if everything is well aligned, well co-registered. Then maybe go back to the microscope and try to fix things, or simply characterize and correct, which is something I will talk about in the pre-processing step. Finally, resolution is really important. You should know at which resolution you're working, you should know the limits of your microscope, because of course, if you're taking the crappiest objective on the facility, if you're using a simple magnifying glass, everything will colocalize. So, word of advice: if you want to prove that there is colocalization, just use a magnifying glass, it will be easier! But, OK, when you're going for a diagnosis of colocalization, this diagnosis is valid for a specific resolution. In fact, and this is a bit hard for biologists to accept, it doesn't mean that because you find colocalization, the two proteins are actually at the same place. The true conclusion of a colocalization test is: given the current resolution, I can't exclude that the two proteins are indeed at the same location, in the same surroundings. This is why it's important to use some reference slides, some reference beads, to measure this resolution, and when publishing, to always state: I've been doing colocalization tests at that resolution. Of course, with the expansion of the new super-resolution methods, you now see that two proteins that were colocalizing are not colocalizing anymore. It's not that the previous paper was wrong; it's just that the technology has evolved, the resolution has improved, and now you can separate your proteins of interest. So it's really important to always do the three steps: choose the dyes well, check for co-registration, and finally assess your resolution so that you can state it when publishing.
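As a back-of-the-envelope companion to the bead measurement recommended here, the theoretical lateral resolution is often estimated with the Rayleigh criterion, d ≈ 0.61·λ/NA. A small sketch (the measured PSF from beads remains the ground truth; this is only the ideal-optics estimate):

```python
def rayleigh_resolution_nm(emission_wavelength_nm, numerical_aperture):
    """Approximate lateral resolution via the Rayleigh criterion:
    d = 0.61 * lambda / NA (widefield, in focus, ideal optics)."""
    return 0.61 * emission_wavelength_nm / numerical_aperture

# e.g. green emission (~510 nm) through a 1.4 NA oil objective: ~222 nm.
d = rayleigh_resolution_nm(510, 1.4)
```

A distance below this value cannot be resolved, which is why the colocalization verdict is always stated "at that resolution".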
Sometimes I see users coming to the facility with images that are not really good for colocalization, and unfortunately, in most cases, there is not that much we can do. Image processing is not a magic wand. You can't really improve the images afterwards, or you can, but only to a limited extent. It's always best to have the best samples and the best acquisitions before jumping to the next step, which will deal with image processing: the pre-processing, and choosing and applying a metric. So these are some of the commandments that we should apply: you shall choose your dyes well; know your microscope and its limitations; use appropriate sampling, of course; take benefit of the full dynamic range, especially if you want to work on coincidence or correlation between intensities; avoid saturation; or, of course, you shall go back to your bench and talk to your facility people. Now, questions so far?

Yes, Fabrice. One that I think is really interesting right now is how to check properly for bleed-through, and can it be done with control samples?

Yeah, yeah, this is one way to do it. Of course, this is something that people tend to skip, but having mono-labeled samples is really important. It will allow you first to check if you've got bleed-through, and potentially do some compensation, like the gating and compensations done in flow cytometry. But if you already have evident bleed-through on the sample, maybe try to separate a bit more, choose dyes that are spectrally more separated. Sometimes you don't have the choice, so you will try to compensate.

Another question, one we need to clarify anyway: regarding bleed-through, would you recommend to always work in, let's say, sequential acquisition mode? I see you nodding.

Yeah. Once more...
The main answer that I will give is: it depends. For instance, if you're working on fixed samples, you've got the time, so take the time and do sequential acquisitions. If you're in live mode and you've got something that is moving fast within a cell, a vesicle or whatever, then of course this will be a trade-off.

OK, thanks. Maybe you can continue.

OK. About pre-processing. As I told you, we have at least four meanings for a single word, colocalization, and we may want to work on coincidence of objects or on coincidence of intensities. And this is the nature of the image: an image is a support carrying information, and you've got several types of information within a single image. An image is, of course, a collection of pixels, and you may want to work on the intensities to see if there is some kind of coincidence of intensities, some kind of correlation of intensities. It's also a collection of frequencies; I won't go into details about that, but it is one way to perceive an image. An image is also a collection of objects, and we can work on the coincidence of objects, but we must first define what an object is. And sometimes the objects the image is carrying are small enough, close to the resolution, that you can summarize each of them by extracting one point of interest; this could be the center of the structure. This opens some additional methods for assessing colocalization. And nowadays, with super-resolution, an image is not an image anymore: you extract a list of detections, a list of events, from which you can go back to a collection of objects. We will see some examples later about what I'm talking about right now. Let's go into the pre-processing.
I've been talking about bleed-through and crosstalk, and if you don't have any other means, if you can't go back to the bench or to the microscope, or if you've optimized the full process and can't improve anymore, there are ways to try to correct for bleed-through and crosstalk, for one channel going into the other, or for a single light source exciting the two dyes at the same time. This is sometimes called spectral deconvolution, but in fact I would rather say spectral unmixing. There are some nice plugins that do the job, but of course you will have to have some references to know how much one channel is bleeding into the other, and the other way around. About the chromatic shift: if you have a mis-registration between the two channels, a simple translation can correct it, but once more, when correcting over the image, keep in mind that you're correcting at the pixel level. You won't be able to correct at the sub-pixel level unless you make some kind of approximation about the distribution of intensities and do some interpolation, which may impair the processing later on. About corrections: sometimes you've got a low signal, a high background, a lot of noise on the image, and in that case, using some kind of filtering or denoising could help. But be careful: when using filtering, linear or rank filtering, you will modify the intensities that you're working on. So if you want to assess the correlation between intensities, this will be a problem. And about denoising, once more, don't use it as a black box; try to understand what is behind it so that you don't create additional artifacts. I've been talking about resolution, and sometimes you're missing a bit of resolution; in this kind of situation, deconvolution could be a pre-processing method of choice.
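The spectral unmixing mentioned here can be sketched as a simple matrix inversion: measure, on mono-labeled reference samples, how much of each dye leaks into each channel, then invert that mixing matrix. A minimal illustration with made-up numbers (not any particular plugin's implementation):

```python
import numpy as np

def linear_unmix(observed, mixing_matrix):
    """Linear spectral unmixing: invert a mixing matrix measured on
    mono-labeled reference samples to recover pure dye signals.

    observed      : (n_channels, n_pixels) detected intensities
    mixing_matrix : mixing_matrix[i, j] = fraction of dye j seen in channel i
    """
    unmixed = np.linalg.solve(mixing_matrix, observed)
    return np.clip(unmixed, 0.0, None)  # negative intensities are unphysical

# Toy example: 20% of dye 1 bleeds into channel 2.
M = np.array([[1.0, 0.0],
              [0.2, 1.0]])
pure = np.array([[100.0, 0.0, 50.0],   # dye 1, per pixel
                 [0.0, 80.0, 50.0]])   # dye 2, per pixel
mixed = M @ pure                        # what the detector sees
recovered = linear_unmix(mixed, M)
```

The key practical point from the talk carries over: the matrix `M` must come from references acquired under exactly the same conditions as the sample.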
As long as you're sure that the PSF is not changing too much across the field and the depth of the acquisition. Be careful about the algorithm that you're choosing: it should be conservative, keeping exactly the same intensities but distributed in a different way. So be careful; once more, deconvolution is not to be used as a black box, but it can restore a bit of your resolution. And nowadays machine learning is also a way to do a bit of image restoration. But keep in mind this question: is the processing changing the intensities? If so, be careful when using correlation between intensities. If you're missing a bit of resolution, of course you may get help from a friend, and this friend could be called super-resolution: if you're missing resolution, just try to push it. You've also got expansion microscopy; this could be another method to improve your resolution. Finally, if you're planning to work on objects, you must, from the image, define what is an object and what is background, and group the pixels that are part of an object into actual objects. But first: how do you differentiate the object pixels from the background pixels? Do you use a simple threshold? Do you use some adaptive or local threshold? Is it good to work on the very signal you're trying to quantify and use it to isolate the structures, running the risk of getting rid of part of the signal? These are questions that you should ask yourself, and I invite you to go to the YouTube channel and watch Kota Miura's talk, where he discusses this kind of topic, among others. Then, of course, you will have to isolate and delineate the objects; you will have to group the pixels into objects. This is connectivity analysis. Or maybe you would rather work on contours, in which case you would use snakes (active contours) to try to isolate the contours.
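The "simple threshold" option raised here is often done with Otsu's method, which picks the global threshold maximizing the separation between background and foreground intensities. A hand-rolled NumPy sketch (in Fiji or scikit-image you would of course use the built-in implementation):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Global threshold maximizing the between-class variance of
    background vs foreground pixels (Otsu's method)."""
    hist, edges = np.histogram(np.asarray(img, float).ravel(), bins=nbins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(hist)                 # cumulative background weight
    w1 = hist.sum() - w0                 # remaining foreground weight
    cum_mean = np.cumsum(hist * centers)
    with np.errstate(divide="ignore", invalid="ignore"):
        mean0 = cum_mean / w0
        mean1 = (cum_mean[-1] - cum_mean) / w1
        between = w0 * w1 * (mean0 - mean1) ** 2
    return centers[np.nanargmax(between)]

# Toy bimodal "image": dim background around 20, bright objects around 200.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(20, 3, 500), rng.normal(200, 3, 500)])
t = otsu_threshold(img)
mask = img > t   # the binary object mask used for connectivity analysis
```

Whether a single global threshold like this is appropriate, or a local/adaptive one is needed, is exactly the question Fabrice says you must ask of your own data.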
Once you have the objects, especially if you've got some really small objects relative to the resolution, you may want to work not on the objects directly but to summarize each object as a point of interest, for instance its center. And I will show you some methods that work on that. The opposite way around is also true: when you've got points, like the detections in super-resolution, do you want to work directly on the detections, or maybe first reconstruct densities from them that you can then work on? This is a question that should be addressed. So, for pre-processing: you've got corrections, to try to revert problems in the data; you've got restoration, to try to push a bit further and get at the resolution that is in the image but requires some processing to reveal; and finally, if you're planning to work on objects, you've got to do some segmentation. But how? These are the questions to ask. Questions so far? Should I jump to...

Romain? I can't hear you.

Sorry, I was muted, I'm in an open office. So there is a question regarding spectral unmixing: can it be used to help solve the bleed-through problem, or is that another problem?

It can be used to try to revert the bleed-through, but you've got to make sure that you've got references that were acquired in the same conditions. And once more, this shouldn't be something that you use first. First, you should try to improve the sample prep and the way you acquire the images, and if you're still stuck, make sure that you have the information and the help from the facility staff and so on; then maybe try this. But once more, it's better to have... This is personal preference, but I do prefer to start a colocalization analysis with the data as raw as possible.
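Summarizing small objects by their centers, as described just above, enables the centre-to-centre distance rule that comes back later in the talk: two centres closer than the optical resolution cannot be told apart, so colocalization cannot be excluded. A minimal sketch, assuming centre coordinates have already been extracted (names and toy numbers are mine):

```python
import numpy as np

def colocalized_pairs(centers_a, centers_b, resolution):
    """Count centre pairs closer than the optical resolution.

    centers_a, centers_b: (n, 2) sequences of (x, y) coordinates, in the
    same units as `resolution`. Below the resolution limit, two centres
    cannot be separated, so colocalization cannot be excluded.
    """
    a = np.asarray(centers_a, float)[:, None, :]   # (na, 1, 2)
    b = np.asarray(centers_b, float)[None, :, :]   # (1, nb, 2)
    dist = np.sqrt(((a - b) ** 2).sum(axis=-1))    # all pairwise distances
    return int((dist < resolution).sum())

# Two red spots and two green spots, 200 nm resolution:
# only the first red/green pair (~54 nm apart) qualifies.
red = [(100, 100), (500, 500)]
green = [(150, 120), (900, 900)]
n = colocalized_pairs(red, green, resolution=200)  # -> 1
```
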
If you start to process and over-process, you have to make sure the workflow is not adding artifacts that will lead to an over-estimation of the colocalization.

Another interesting question, actually quite frequent among the questions, is about how to use controls; some people asked about thresholds and so on. I guess that's what you will cover in the next minutes, so maybe it's not necessary to talk much about it now. But there is one question that I think is really interesting: what do you do with your control images? Do you have to show them, for example, in your manuscript?

Well, if you want my opinion, they should be part of the dataset that you release together with the manuscript, linked to it and so on. But in practice, how many PIs care about having all the controls? How many PIs will agree to an additional, let's say, two or three pages in the manuscript with controls? Because who cares about the controls, after all? That's a good question. So this is why making the full dataset for a study publicly available, applying the FAIR principles and so on, would be the best. This way, if it's not in the paper, at least you can find it somewhere; you can review the controls, review everything, and make sure that you agree with what has been published. Are we good?

I think we're good for the moment. There are more controversial questions that we may cover at the end, when you have covered your content.

OK. Now, you've got your images done. They have been acquired in the best way possible. Maybe you've processed them just a bit, just a tiny bit, to correct a few things. And now comes the moment where you've got to choose a reporter, a metric. With Susanne Bolte and others, we have been trying to categorize the reporters and metrics, and here is one way we present the thing. This might be a bit controversial, but OK.
Let's assume that we've got a set of reporters and we want to split them into two groups. Looking at what exists, at what has been published, I would say that you've got two big families of reporters or metrics: on one side the indicators, and on the other side the quantifiers. An indicator is a number and a scale: you put this number on the scale, with minimum colocalization, maximum colocalization, and values in between. This is just an indication, and it works really well when you've got several experimental conditions, because you will be able to compare them. But with only one value, interpretation will be complicated unless you're at one end of the scale. So you will need to find tricks, and we will see in the next part the tricks for when you've got only one single experimental condition. So these are the indicators. Quantifiers are a bit easier to understand, because they rely on physical parameters. For instance, with quantifiers, you will try to define the level of overlap between two structures: the percentage, let's say, of the area that is involved in the colocalization process, or the percentage of the volume that is involved. Or maybe the distances between sets of coordinates, to see the percentage of those coordinates that, from one channel to the other, lie below the resolution limit. So quantifiers rely more on physical parameters, physical descriptors, and you get a direct readout; for indicators, you need the scale and several values to compare. But enough categories, let's jump into the generic indicator that everyone is using. Let's talk a bit about the Pearson correlation coefficient.
This might be scary and you can skip the formula, but you just have to keep in mind that the Pearson coefficient is simply an indicator of how well the cloud of dots on the cytofluorogram, where the intensities from channel one and channel two serve as coordinates, looks like a line. If you take the same image twice and compute the Pearson coefficient, you will get something close to one. If you have two signals that are exclusive, on the cytofluorogram you will see one cloud here, another cloud there, and no real relationship between the two: you will have an exclusion, and a Pearson coefficient close to minus one. If you don't have any correlation, the dot cloud will be spread everywhere and the Pearson coefficient will be zero. I think this has raised some controversy, but I personally find it hard to interpret a single value that sits far from one, minus one, or zero. Of course, if I've got several experimental situations, drug, no drug, and I compare the two, I will position the two values and be able to say: yeah, I've got more colocalization here than there. But if I've got only one value, and this value is far away from the three well-identified values, I find it hard to interpret. Also, when using the Pearson correlation coefficient, you're comparing your dot cloud to a single line, which makes it hard to use when, for instance, you've got, let's say, a receptor and a ligand for that receptor. There is a point where you will have enough ligand bound to the receptor and the receptor becomes saturated. So you will have colocalization increasing up to a certain level, where you will reach a plateau. The shape of the curve won't be a line, and in this kind of situation, the Pearson correlation coefficient won't be the appropriate one to use.
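The Pearson coefficient just described is a few lines of arithmetic. A minimal sketch (function name and toy data are mine), showing the two extreme cases from the talk: the same image against itself, and against its inverted copy:

```python
import numpy as np

def pearson_coefficient(ch1, ch2):
    """Pearson correlation of per-pixel intensities: +1 when the
    cytofluorogram cloud is a rising line, -1 for exclusion,
    0 for no linear relationship."""
    x = np.asarray(ch1, float).ravel()
    y = np.asarray(ch2, float).ravel()
    x = x - x.mean()
    y = y - y.mean()
    return float((x * y).sum() / np.sqrt((x ** 2).sum() * (y ** 2).sum()))

rng = np.random.default_rng(2)
img = rng.random((32, 32))
r_same = pearson_coefficient(img, img)        # close to +1
r_excl = pearson_coefficient(img, 1.0 - img)  # close to -1
```
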
And this is where Spearman's coefficient comes into play. When you've got saturation, saturation at the biological level, not in the image, you will have this kind of curve, so comparing it to a linear distribution is not relevant. With Spearman's coefficient, instead of taking the intensity values themselves, you replace each value by its rank. By doing that, you linearize the curve, and you can then apply the Pearson coefficient that we've seen earlier. Of course, the condition for applying this coefficient is that the curve doesn't follow some kind of wavy pattern: it may go up, or it may go down, but it should go in only one direction; it must be monotonic. Many other indicators exist. Some have been engineered so that instead of ranging from minus one to plus one, they range between zero and one, which makes things a bit confusing, because sometimes people take a coefficient that lies between zero and one and read it as a percentage of colocalization, which is not exactly, and not always, the case. Quantifiers now, and of course you all know about the Manders coefficients. I've been trying to summarize the principle here: you've got a structure labeled in red, another one in green, and you've got an overlap. You collect the red intensities within the overlap region and divide them by the total intensity of the red channel. This gives you the M1 coefficient. In fact, the M1 coefficient is just the percentage of the red material that is engaged in the process of colocalization. M2 is exactly the same, the other way around, for the other channel.
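Both metrics just described fit in a few lines. A rough sketch (names, thresholds, and toy data are mine; note this simple ranking does not average tied values, unlike a full Spearman implementation):

```python
import numpy as np

def spearman_coefficient(ch1, ch2):
    """Pearson coefficient computed on intensity ranks: tolerates
    monotonic but non-linear relationships such as receptor saturation."""
    def rank(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(len(v))
        return r
    x = rank(np.asarray(ch1, float).ravel())
    y = rank(np.asarray(ch2, float).ravel())
    x -= x.mean()
    y -= y.mean()
    return float((x * y).sum() / np.sqrt((x ** 2).sum() * (y ** 2).sum()))

def manders_coefficients(ch1, ch2, thr1=0.0, thr2=0.0):
    """M1: fraction of channel-1 intensity located where channel 2 is
    present (above thr2); M2 is the same the other way around."""
    c1 = np.asarray(ch1, float).ravel()
    c2 = np.asarray(ch2, float).ravel()
    m1 = c1[c2 > thr2].sum() / c1.sum()
    m2 = c2[c1 > thr1].sum() / c2.sum()
    return m1, m2

# A saturating, plateau-like relationship: still perfectly monotonic,
# so the rank-based coefficient reads it as a perfect link.
rng = np.random.default_rng(3)
ligand = rng.random(1000)
receptor = ligand / (0.1 + ligand)          # saturation curve
s = spearman_coefficient(ligand, receptor)
m1, m2 = manders_coefficients([10, 10, 0, 0], [5, 0, 5, 0])
```

The `thr1`/`thr2` choices are exactly the sensitive part discussed in the Q&A below: Manders values depend strongly on how "present" is defined.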
And so, already here, we've got this kind of measurement, a value that you can really interpret alone. But if you've got several experimental conditions, and I hope you have, this is even better, because you can position them relative to each other. You've also got a derivative of the Manders coefficient where, instead of looking at the percentage of signal engaged in the colocalization process, you look at the surface or the volume engaged in this process. Once more, having one value is OK, because it's an absolute value, but having multiple experimental conditions is even better.

I've been talking earlier about really small objects, objects whose overall size is close to the optical resolution. If you've got two tiny dots close to the optical resolution and you try to measure an overlap between them using the Manders coefficient, this won't make sense: you won't have enough pixels to define a precise percentage of overlap, and you will get extreme values. If you've got nine pixels on one side and nine pixels on the other, the overlap can take only a few of the possible values: one third, one sixth, etc. So it will be difficult to use the Manders coefficient on really small objects. This is why it's OK to simplify the problem by isolating the small objects and summarizing each of them by one set of coordinates: the center of mass, or the geometrical center of the object.

Once we have this summarized version of the objects, we can do several things. We can define a rule, our own metric of what colocalization is. Let's assume that we've got these two objects, and we just check the center of the red object. We could set up a rule saying: I consider that there is colocalization when the center of the object in the red channel falls onto the green object. So in the first case, of course, we don't have colocalization. In the situation where we have the plain red circle perfectly centered over the green object, its center falls onto the green object, and I conclude that there is colocalization. And now comes a specific situation where I've got a red ring perfectly centered over the green object: the center of the red ring will still fall onto the green object, and I will conclude colocalization. It's at this step that you realize that, even if measuring the overlap is not appropriate on its own, looking at how the overlap behaves in combination with this method allows me to go further in the conclusion: I don't see much overlap between the ring and the circle, which means they might be separated; but with the center strategy, the center falls onto the green object, which together surely means that the red signal lies around, outside of the green signal. So you see, and this is quite important: by going into the colocalization black box, understanding all the methods and so on, you may start to think about additional ways to analyze your data, and combine several methods that together allow you to conclude about what's going on in your sample.

On the previous slide, I was summarizing only one object by its center; here I will use the centers of both channels, and set up a simple rule: if the two crosses are separated by a distance that is less than the resolution, then I can't exclude that the two objects, the two centers, are colocalized. And this produces some weird things, because you may have two crosses that appear separated from one another, but if the distance is below the optical resolution, the two crosses actually colocalize. It's weird; it goes against the principle that to have colocalization you should see yellow when merging red and green. So in this kind of situation, we are above the resolution, so we don't have colocalization even though we have a bit of overlap; here, we still have colocalization; and once more, with the ring and the circle, we have colocalization. Do we have questions at this step?

We certainly have questions. Some of them are very specific, and since I don't know exactly what you will cover, it's a bit difficult to decide, but I will take one about the Manders coefficients: do you recommend choosing an ROI, yes or no? How to subtract background, and what to use as a control? Manders seems highly sensitive to these choices. And the thing is, for each of the methods you presented briefly, I think we could make a NEUBIAS Academy event of at least one or two hours.

Yeah. So the problem is, and I'm not skipping the question, it all depends on the biological problem you're working on, it all depends on the nature of the noise you're dealing with, and I don't have a precise answer, except, once more: go to your facility people and try to get advice from them, because if you've got noise and you don't have a way to improve the sample and the acquisition, you can always try to filter the noise and so on. About the region of interest: for Manders, it all depends on the distribution of your structures, so I couldn't say.

On this region-of-interest question, I can comment with one project we had where the cells have different phenotypes: if you look at the overall image you might not see anything, but if you do the analysis cell per cell, then you see a difference of phenotypes between cells, and you end up with a difference in your results. But it's more tedious, because now you have to find the regions of interest.

And at the same time, if you're thinking that the Manders coefficient may vary from one region of your cell to another, the thing I would recommend
is maybe to try to measure object per object, and then make a map, a colour map, of the overlap between your objects; maybe you will see some regions that are homogeneous with a certain percentage of overlap, others with another one, and so on. This will require a bit of automation, but if you want to learn about that, we've got an excellent set of videos introducing macros in Fiji on our YouTube channel. There are questions you might answer later: one about 3D vs 2D, and one about the software that exists for colocalization. OK, 2D vs 3D: of course, if you can go 3D, go 3D, because biology is not 2D, it's 3D. Just try to imagine you've got a colocalization event taking place along the z axis, red on top of green: if I slice just here, what will I conclude? That this object is alone. So unless you're forced to stay in 2D, please go 3D. About software: well, I'm the author of this plugin called JACoP, and I insist on the "JA", because it was written with my German colleague, and "ja" is "yes" in German, so it's Just Another Colocalization Plugin. It's not that I don't want to recommend it; it's just that it depends on what you want to do and how much you want to go into automation. So if the method is there, go for it. If you're using Icy, you've got additional plugins, and if you're writing your full workflow in Icy, have a look at the tools there. I refuse to give you advice on the tools, because there are many out there and many that are really good; it all depends on your full workflow and how you can integrate and jump from one tool to another. Maybe you want to stay inside ImageJ, or maybe you want to go to Icy; in that case, explore what is there and have fun. I think we'll have to complete this kind of question on the image.sc forum; on the thread we'll post a complete list of the software we are often using or recommending.
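To make the object-per-object suggestion above concrete, here is a minimal sketch in plain Python. Everything is a toy assumption: the "image" is a 1-D row of pixels, each run of 1s in channel A's mask is one object, and for each object we compute the fraction of its pixels that also belong to channel B's mask, the values you would then turn into a colour map. On real 2-D or 3-D data you would use a proper labelling tool instead.

```python
# Toy binary masks for two channels along one row of pixels (assumed data)
mask_a = [1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1]
mask_b = [1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0]

# Collect channel A "objects": runs of consecutive foreground pixels
objects, current = [], []
for i, v in enumerate(mask_a):
    if v:
        current.append(i)
    elif current:
        objects.append(current)
        current = []
if current:
    objects.append(current)

# Per-object overlap fraction with channel B's mask
overlap = [sum(mask_b[i] for i in obj) / len(obj) for obj in objects]
print([round(f, 2) for f in overlap])  # → [0.67, 0.0, 0.5]
```

The point is that each object gets its own overlap value, so heterogeneity across the cell becomes visible instead of being averaged away.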
And don't despair if you see that we are not answering your question; sometimes, in the Q&A window, it's really difficult to write something that will be less than a thousand words. When we come to the software, I will do some auto-promotion for one review chapter at the end, where we have a table of the kinds of software that exist depending on what you want to do. OK, great. About comparing and interpreting now: well, you will have to do a lot of statistics. You will have to plan your experiments first, to know how many cells, how many objects, and so on. To be honest, I was trained as a biologist, I am now a bio-image analyst, and statistics is not something I really master, so the advice, once more, is to get help: get help from people who know how to do statistics. In France we have had a really nice training called "Image Stats", statistics for images, where, thanks to the organizers, we could look at what an experimental plan is from the statistical point of view, and I can tell you it's better to ask the statisticians before you start your full experiment, because these are the questions they always ask: how many cells, how many objects, which condition, which test, and so on. This is not the kind of thing I personally feel confident explaining to you, but I can show you all the tools, before or after that discussion, to give them material they can use. I've been talking about comparing, and the problem I personally find with indicators is this: if I've got only one experimental condition, how can I put some kind of diagnosis of colocalization on a single value? Well, I need to generate additional values, additional metrics, and one way is simply to start from my only data set; well, I hope that in this kind of situation I had several images of the same experimental condition.
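Since everything that follows leans on it, here is what the Pearson coefficient actually computes: the correlation of the two channels' pixel intensities. A minimal pure-Python sketch on made-up intensity values (real images would simply be flattened to such lists):

```python
# Pearson correlation between two channels, treated as flat lists of
# pixel intensities (toy values, not real data)
from math import sqrt

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / sqrt(var_a * var_b)

channel_a = [10, 20, 30, 40, 50]
channel_b = [12, 19, 33, 38, 52]   # tracks channel A closely
print(round(pearson(channel_a, channel_b), 3))  # → 0.991
```

Values near +1 mean the intensities rise and fall together, 0 means no linear relationship, and negative values suggest exclusion; the mid-range values discussed next are precisely the hard cases.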
I will establish the Pearson coefficient between channel A and channel B, and I will get a certain value; and you see, this is typically the kind of mid-range value that I personally find hard to interpret. How do I get something to compare it to? Maybe I take exactly the same data set, at least for one channel, use a second channel that's rotated by 90 degrees, and look at the Pearson coefficient again. I see a huge drop, which means that, yeah, I surely had something there. So this is one way to do it. Of course, if the image is crowded with signal, just flipping one image won't be enough, because you will always have some random correlation between the two signals; but if you've got sparse objects, this is one way to generate a second data set from the original one. Then there are simple methods that have been engineered where you take the original data set but displace one channel: you translate one channel relative to the other. I don't know if this will be visible on the webcam, but the thing I like to do when explaining this Van Steensel approach is taking my two hands like this: channel A, channel B. When they are overlapping, I've got the maximum expectable correlation; now, if I translate one relative to the other, I progressively lose the correlation. I do it one way, then the other way, and each time I translate one image relative to the other I compute the Pearson coefficient; then I plot this series of Pearson coefficients against the translation. If I had colocalization at first, then by shifting I will lose the correlation; if, on the contrary, I started with exclusion, then by shifting I run the chance of creating an overlap. So depending on the shape of the curve you get at the end, you may say: yeah, I started with colocalization and then I lost it by shifting, so this is indeed colocalization.
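The hand-waving above translates into very few lines of code: compute the Pearson coefficient for a series of shifts of one channel relative to the other and look at the shape of the curve. A toy 1-D sketch (an actual implementation such as the one in JACoP shifts whole 2-D images; all values here are invented):

```python
# Van Steensel-style analysis: Pearson coefficient as a function of the
# shift applied to channel B relative to channel A (toy 1-D "image")
from math import sqrt

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / sqrt(va * vb)

# Two co-localized "spots" along one line of pixels
chan_a = [0, 0, 5, 9, 5, 0, 0, 4, 8, 4, 0, 0]
chan_b = [0, 0, 4, 8, 4, 0, 0, 5, 9, 5, 0, 0]

def shifted_pearson(a, b, dx):
    # Crop both signals to their overlapping region after the shift
    if dx >= 0:
        return pearson(a[dx:], b[:len(b) - dx])
    return pearson(a[:dx], b[-dx:])

curve = {dx: round(shifted_pearson(chan_a, chan_b, dx), 2)
         for dx in range(-3, 4)}
print(curve)  # peak expected at dx = 0
```

A bell-shaped curve peaking near zero shift argues for colocalization; a dip at zero would argue for exclusion.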
One additional thing which is nice with the Van Steensel approach: with chromatic aberration, the maximum won't be at zero, it will be slightly shifted. The width of the curve, of course, depends on the size of the structures: if I've got two fingers, it's easy to lose the correlation; if I've got one whole hand against another, I have to move quite a lot in order to lose the correlation, so this will make a bigger bell shape. Finally, a third strategy to generate something to compare your value to is to randomize. You take one of the two images and you cut it into small blocks, taking into consideration that, due to the way the image is formed, you've got a local correlation between intensities: this is due to the point spread function of your optical system. So the bricks you cut out of your image are a bit bigger than one pixel; in general, if you've been applying the right sampling, they will be three by three pixels. You cut the image into small pieces, you put all the pieces into a bag, and then you draw pieces and reassemble them into a new image. This gives you something that makes no biological sense, but it carries exactly the same information, just located differently. Now you compare this randomized green image to the red channel: you compute the Pearson coefficient, you do it once, twice, a lot of times, and you end up with a big distribution of Pearson coefficients that corresponds to situations where colocalization may have happened, but on a purely random basis. Then you take back the two original images and compare the Pearson coefficient you actually got to this distribution; if it lies far away from the distribution, it means you've got a good chance of having actual colocalization. So this is another way to work with the images when you've got only one experimental situation.
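The block-scrambling strategy just described can be sketched in a few lines. Everything here is a toy assumption: a 1-D signal, blocks of 3 pixels standing in for PSF-sized bricks, and 1000 shuffles; `p` then approximates how often a purely random rearrangement scores at least as high as the real data.

```python
# Costes-style randomization test on a toy 1-D signal: scramble one
# channel in PSF-sized blocks and build a null distribution of Pearson
# coefficients to compare the observed value against
import random
from math import sqrt

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / sqrt(va * vb)

random.seed(0)
chan_a = [0, 0, 0, 5, 9, 5, 0, 0, 0, 4, 8, 4, 0, 0, 0]
chan_b = [0, 0, 0, 4, 8, 4, 0, 0, 0, 5, 9, 5, 0, 0, 0]
block = 3  # roughly one PSF width, in pixels

observed = pearson(chan_a, chan_b)

# Cut channel B into blocks, shuffle the blocks, reassemble, re-measure
blocks = [chan_b[i:i + block] for i in range(0, len(chan_b), block)]
null = []
for _ in range(1000):
    random.shuffle(blocks)
    scrambled = [v for blk in blocks for v in blk]
    null.append(pearson(chan_a, scrambled))

# Fraction of random rearrangements scoring at least as high
p = sum(r >= observed for r in null) / len(null)
print(round(observed, 2), round(p, 3))
```

On this tiny example `p` stays fairly large because there are only a handful of distinct rearrangements; on a real, sparse image the observed coefficient typically sits far outside the null distribution.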
But be careful, because getting numbers out of the images is fine, but, like with the ring-and-circle paradigm I've been talking about, it's important to gather several pieces of information and to combine them before stating anything about colocalization. If you look at this slide, you will see that the percentage of colocalization in terms of surface is the same in each case, but it doesn't correspond to the same experimental situation: here you've got an overlap; here only part of the green signal is engaged in the colocalization process; here you've got a bigger colocalization area but also a bigger object, and so, measuring the Manders coefficient, you will find exactly the same value; and finally you've got the reverse situation. So it's important not to focus only on the indicator, on the metric of colocalization, but to widen the analysis a bit, so that you're sure you're comparing or describing exactly the same phenomenon. So, what questions should you address, knowing the metrics and the comparison and interpretation methods I've been presenting? What type of colocalization method is the most appropriate for your problem: are you working on intensities, or are you working on physical coincidence? Are the published methods adapted to your problem? Yes, this matters, because it's not because you find the Pearson and Manders coefficients in all the commercial software, in all the image-processing packages linked to the microscopes, and in all the publications, that these are the methods you should use; maybe a different metric will be better adapted to your specific problem. If the answer is no, you've got to be creative: build your own metric, characterize it, use it and publish it. And if the methods that have been published do work on your specific problem, then just go and find the tool that is most appropriate for your workflow.
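For reference, the Manders split coefficients mentioned above are simple intensity fractions: M1 is the fraction of channel A intensity found in B-positive pixels, and M2 the fraction of channel B intensity found in A-positive pixels. A minimal sketch on toy intensity lists (thresholding at zero here; real data needs a proper background threshold, which is exactly why the threshold question keeps coming back):

```python
# Manders split coefficients on toy intensity values
chan_a = [0, 10, 20, 30, 0, 0, 15, 0]
chan_b = [0,  0,  5, 25, 10, 0, 20, 0]

# M1: channel-A intensity landing on B-positive pixels, over total A
m1 = sum(a for a, b in zip(chan_a, chan_b) if b > 0) / sum(chan_a)
# M2: channel-B intensity landing on A-positive pixels, over total B
m2 = sum(b for a, b in zip(chan_a, chan_b) if a > 0) / sum(chan_b)
print(round(m1, 2), round(m2, 2))  # → 0.87 0.83
```

Note that, unlike Pearson, these coefficients ignore how the intensities co-vary; they only measure co-occurrence, which is why very different geometries can yield identical values.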
Especially if colocalization is only one part of the workflow, you've got to be able to chain all the steps, so that you can automate the analysis and remove the human bias. About the tools: we've been writing a review with Patrice Mascalchi, and at the end of it you will find a table listing a lot of the tools that exist, the kinds of colocalization analysis they allow, and where to find them. Do we have questions here? Again, yes, we do. One that I like: someone wants to do colocalization between three channels, or three objects; do you have recommendations? Well, most of the tools that exist are made for only two channels, but once more, if it's about finding the overlap between the three, then one of the workflows I will present after will help; it will just have to be generalized to the third channel. If you're working on intensities, I think the easiest way is to go two by two with the tools that exist; if you're willing to implement, for instance, the Pearson coefficient yourself, I'm sure you can extend it to three channels, but I'm not sure an implementation already exists. My answer would also be that it will be difficult to make it understandable to viewers, because 2D scatter plots are already challenging sometimes, and a 3D graph would be harder still. You know, for the 3D graphs, the thing I would do is use, there was a plugin, I can't remember the name, to do these scatter plots on RGB images; there is an old plugin for that, so this would be a starting point. We'll find it for the post on the forum; maybe you can continue for the moment, thanks Fabrice. OK, so this is the final section, and in fact it's just a few examples, really basic examples, to show you that a colocalization study does not have to be complex, and that by thinking about
the elements, the individual tools that you may put together, all the tools that already exist within ImageJ, you may already have your workflow set to do what you want to do. This is a true story, with a user who came to the facility with this kind of labelling: cells labelled in red and green, where the idea is to identify the ones carrying marker A only, marker B only, and both A and B. So the questions are: how to isolate each single cell, how to count each type of cell, and how to estimate the percentage of co-expressing cells. You see, this is not the sub-cellular colocalization most of us think about; colocalization is not even the right word to characterize this situation, it is called co-expression. So first, how to isolate the individual cells? Of course, on this kind of image, the thing I would like to do is set a threshold to identify the background and differentiate it from the objects, but how would I do it, especially on the yellow cells? There are methods; just open ImageJ and you will see that plenty of methods exist to set a threshold. Are they appropriate to this problem? I don't know, but this was a good opportunity to explain a method implemented in JACoP, which is the Costes automatic threshold. What the Costes automatic threshold does is first build a scatterplot of all the intensities; it puts the threshold as high as possible to start with, then looks at the Pearson coefficient of the pixels below it. If this Pearson coefficient is still non-zero, it means that down there, there is still a bit of correlation; so it moves the threshold down a bit, looks at the Pearson coefficient again, and if there is still a correlation it keeps moving the threshold down, until it reaches the point where the big remaining cloud is uncorrelated. Now we can set the threshold there, which maximizes the number of pixels having correlated intensities while keeping the uncorrelated population as low as possible.
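A simplified sketch of that loop, in plain Python on synthetic intensity pairs. Two loud simplifications: the real Costes method couples the thresholds of the two channels through a regression of channel B on channel A, while here a single shared threshold is assumed; and the data are invented (uncorrelated low-intensity "background" plus correlated high-intensity "signal").

```python
# Costes-style automatic threshold idea: start high and walk the
# threshold down until the pixels BELOW it show no more correlation
import random
from math import sqrt

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    if va == 0 or vb == 0:
        return 0.0  # flat subset: treat as uncorrelated
    return cov / sqrt(va * vb)

random.seed(1)
# 300 uncorrelated background pixels + 100 correlated signal pixels
pixels = [(random.randint(0, 40), random.randint(0, 40)) for _ in range(300)]
pixels += [(v, min(255, v + random.randint(-10, 10)))
           for v in (random.randint(60, 255) for _ in range(100))]

threshold = 255
while threshold > 0:
    below = [(a, b) for a, b in pixels if a < threshold and b < threshold]
    if len(below) <= 2 or pearson(*zip(*below)) <= 0:
        break  # sub-threshold pixels are no longer correlated: stop here
    threshold -= 1
print(threshold)
```

With data like these, the loop stops somewhere between the background range and the correlated signal, which is exactly the behaviour described above: the retained pixels are the ones whose intensities still correlate.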
So this is one way to set a threshold automatically, where you already twist the problem by maximizing the number of pixels whose intensities are correlated. I'm not sure this applies really well to this co-expression problem, because it's not certain that you've got a correlation of intensity between the two labels, but I thought this was a good moment to tell you about possible ways to set thresholds. On this image, the thing I would do is get two masks, the mask of the red cells and the mask of the green cells; then, simply by using binary combinations between red and green, I will get the double expressors, the yellow cells, the ones carrying both red and green intensity. Then I will take the double expressors and, in a way, remove them from the red mask to get the red-only cells; of course I will get some small crappy leftovers that I will have to deal with later on. I will do exactly the same with the green mask to get the green-only cells, and finally I will recombine the three. I will then be able to use, for instance, Analyze Particles in ImageJ to count the double expressors, the red-only cells and the green-only cells. And you see: no Manders coefficient, no fancy Pearson, no fancy randomization; with the regular tools you can find in ImageJ, you can build the colocalization workflow and have the numbers at the end. Another workflow, which will be the last one, is more related to super-resolution. Of course, with super-resolution microscopy you end up with a lot of detections, small crosses, and you see that something is happening here: the crosses don't overlap, but the distributions of the green and the red channels seem related. So how can I prove that there is some relationship between the two signals?
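Before moving on: the mask combinations in the co-expression workflow described above reduce to a few boolean operations plus connected-component counting (what Analyze Particles does in ImageJ). A toy 1-D sketch, where each run of consecutive 1s stands in for one cell; the masks are invented:

```python
# Toy red/green masks: 1 = pixel belongs to a cell in that channel
red   = [1, 1, 0, 0, 1, 1, 0, 0, 1]
green = [0, 1, 1, 0, 1, 1, 0, 1, 0]

both       = [int(r and g)     for r, g in zip(red, green)]  # double expressors
red_only   = [int(r and not g) for r, g in zip(red, green)]
green_only = [int(g and not r) for r, g in zip(red, green)]

def count_objects(mask):
    # 1-D stand-in for Analyze Particles: count runs of consecutive 1s
    return sum(1 for i, v in enumerate(mask)
               if v and (i == 0 or not mask[i - 1]))

print(count_objects(both), count_objects(red_only),
      count_objects(green_only))  # → 2 2 2
```

Note how the partially overlapping cells leave single-pixel fragments in `red_only`, the "small crappy leftovers" mentioned above that a real workflow would filter out by size.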
How to do colocalization on that? Well, you've got three choices. The first is to take a good old Gaussian blur, or the PSF, the point spread function, and convolve the image, to get a blurry image on which you can do all the measurements we've seen previously; but what is the point of doing super-resolution if, to quantify colocalization, you go back to a diffraction-limited image? So for sure you should adapt the method: maybe you need to define the zone of influence of each single detection, do some kind of tessellation, and then work on the tiles and how they overlap; this is one way to proceed. The other way is to work with spatial statistics and evaluate the relationship between the two distributions. I must say I'm not an expert in that field, but we know two experts, Florian and Thibault, who will give the seminar next week; so if you want to know precisely how they would process this type of images, join us for the next seminar. Finally, a word of advice: colocalization is dangerous, because if you don't use the appropriate method within the application field it was made for, you may end up with numbers that mean nothing, or that will wreck the whole story of your paper. But you can be inventive: you know the tools for colocalization, the tools you may find in ImageJ, so sit down, think, and try to invent your very own way of doing colocalization. Of course you will have to test it, to make sure the method you came up with behaves the way you're expecting. So that's the advice when doing colocalization: think, be creative, test, get help, and repeat until you end up with a good process. And that's the end of the talk; I'm pretty sure we still have a few questions. Thank you Fabrice, I think we do have some questions. I found the Color Inspector plugin, was it the one you were thinking about? That's it. There are questions regarding a creative way of doing
analysis, so I will try to read one out loud: can you assign objects to groups based on the degree of colocalization, e.g. object A is colocalized to objects 1 and 2, but with a higher volume of A? You see, it's going to be crazy. I think, as you mentioned, you can do whatever you want, as long as you document it, explain it, explain your choices and why you're doing it. Sure, sure, and you see, with the development of networks like NEUBIAS, we are training a lot of newcomers in image processing, we are training people to write their own macros and so on; for sure, creating new methods, and also assessing that they really work by generating synthetic datasets on which you can test them, is something that is becoming easy. And I'm quite surprised to see that we all keep going back to the same things: Pearson, Manders, I don't know which one to use, so I will open JACoP, tick all the boxes, and keep the one method that works for me, which is just stupid. Just have a look at your images and try to see what you want to describe from the biological point of view; then try to build the metric that reflects it, check that it responds the way you expect, and work on synthetic images so that you can vary all the parameters and verify that the metric you've built actually responds. And I would say, if you feel alone and lost in this colocalization world, do not hesitate to post on the image.sc forum; maybe look first whether a similar question exists, and then post, and maybe one of us will help you with it. I don't know, Marion or Anna or Thibaud, did you see a question that I forgot to ask but that would be very interesting to ask right now? So I would like to ask one, sorry, just to report a question from a user: you exposed how, from the facility point of view, you would do a colocalization analysis; the question was how you convince users who are maybe not aware of
these different tools of the risks of colocalization analysis. What is the suggestion, the trick you usually use: do you present them with papers they can find in the literature, or with opinion leaders in the field, or what do you do? Thank you. What do I do when I've got a user coming to the facility and asking for colocalization? I would say, first, I listen to the user about the biological problem; then I ask for images, to know the kind of structures he or she is working on: can I summarize the problem as dots, what is the shape, and so on. Then I assess the quality of the images; sometimes, even on an imaging facility that takes the full process into account, when the user stays longer in front of the microscope, the quality of the images degrades. Then, once everything is okay, we start to work on the metric, and if the user is not convinced, then this is a problem. In general, though, it is not the user who is not convinced, because the user will have read all the papers about colocalization before coming to me; it is, in general, a PI who might say: OK, you should do Pearson, when there is no correlation between the intensities, which is a bit stupid. So sometimes, yeah, I've got a set of papers that I send to the user to be forwarded to the PI; I haven't had to send them directly to a PI yet, except maybe in my former lab. Is that the kind of answer you were looking for? Yeah, I think it's enough, thank you. Ana, Marie-Anne, Thibault, do you see any questions to cover right now? Good; so with that, we'll do our best to fill the gaps on all the questions we didn't answer in the Q&A and prepare the post on the forum, and with that I would like to thank Fabrice again.
Thanks to you who have been typing answers to the questions, and thank you also to the CREC team, who solved the technical issue in no time so that we could start with only a few minutes' delay; that was impressive too. So thank you all, see you soon!