saying we are very happy to have you all back, or to have you today. Our speaker today is Fabrice Cordelières. He will present an introduction to colocalization. Fabrice works at the Bordeaux Imaging Center, a microscopy facility in France, and we, as panelists, are here to answer all your questions from the Q&A and to moderate the session. So don't be shy: ask your questions. We will also interrupt Fabrice if needed to ask him some questions, and he will make some breaks. So we are here together for a one-hour-thirty webinar, and the floor is yours.

Thank you. Thank you very much for giving me the opportunity to give you this really short introduction to colocalization. I won't go into fancy advanced topics; I will just try to show you the basic tools and the basic elements that you can use to build your own colocalization workflow. Of course there are a lot of methods, depending on the topic you're working on, but my idea with this first introductory seminar is really to revisit the generic methods, to make you understand what is behind them, how to use them, how to combine them, and how to get significant numbers out of your images. So let's dive into this black box that is colocalization. Everything always starts with an image, and I really like to use this image from the Fiji website.
This is a nice image with two channels, one red and one green, and everyone knows that when you're looking at colocalization, what you're expecting to see is some yellow dots, some yellow areas. Even if your monitor is not that good, and if the network compression is not that good, you may see that we've got indeed some yellowish areas here. So why should we bother, and why should we try to quantify this colocalization, since we can see it? Well, once more, depending on the means used to display the image and so on, you may be tricked into thinking that you've got colocalization. You may think this is an obvious case, but when zooming in a bit you will realize that the two signals are in fact separated. This is why it's important to look at the image: it is a starting point, especially to identify the areas where you may have colocalization. But then it will be important to find a way to measure and to assess that your diagnosis of colocalization is indeed true.

The problem is, and this is a problem that I come across quite a lot working at the facility, that "colocalization" is some kind of big word that encompasses several typical experimental situations. When a user crosses the door of the facility, sometimes he or she asks for colocalization when in fact what he or she is expecting is some kind of co-expression. In this example, with a single cell and two labelings, one in the nucleus and the other in the cytoplasm, you will never see these yellowish dots, and the user will ask for colocalization when the thing they are looking for is co-expression: two proteins located in the same structure, in the same cell. Already at this step you see that one thing matters: the resolution, and the topology of where the event is taking place. Here, what you are expecting is
two signals not at the same location, as you would expect when talking about colocalization, but the two proteins in the same structure or in the same cell. Then, what you may expect is some kind of co-occurrence, meaning that the two proteins of interest sit on the same spot, on the same structure, at the same location, but without a specific stoichiometry of association: you may have one green for two reds in one structure, two reds for four greens in another, and so on. In this case you are indeed expecting co-occurrence in the same structure, but no real relationship in terms of intensity. If you are expecting a certain stoichiometry of association to the structure, you will rather be looking for correlation: correlation of the intensities of the two wavelengths on the structures of interest. And finally, sometimes what you want to see is whether the patterns of distribution of the two proteins are somehow related, and in this case you will rather talk about co-distribution.

Of course, depending on what you want to do, you will use certain tools: co-occurrence uses tools that are not the ones you will use for correlation. This is why you've got a huge panel of tools to take benefit of. Then you will start your journey into the colocalization analysis world, and for the purpose of this talk I've separated this journey into five big steps: checking data integrity, then pre-processing, then choosing a reporter or metric, then comparing and interpreting, and finally assembling the workflow from the elements you've established.

First, let's have a look at how we check for data integrity, because already when preparing the sample you can totally ruin your colocalization analysis, and already when acquiring the data you can ruin it too. So there are three main
points that you should take care of, and we will see which important steps we should go through.

First, of course, you will have to choose dyes that you can image separately. If you're looking for colocalization and you're not choosing dyes that will be easy to separate during imaging, then you may come across trouble, like having a bit of one channel bleeding through into the other channel. This is quite easy to assess simply by using this representation (I hope you can see the pointer on the screen): you take the two intensities for the same pixel, the green intensity and the red intensity, use them as x and y coordinates, and plot them on this kind of diagram, which is called a cytofluorogram. If you're looking for colocalization, and the intensities are linked between the two channels, you will see a big, more or less compact cloud on the graph. If you have some bleed-through from one channel to the other, you will see some kind of relationship between the two sets of intensities lying quite close to one of the axes, and this should be a sign that maybe your acquisition parameters, or your choice of dyes, are not really appropriate; you may go back to the bench or to the microscope to do a better acquisition or a better prep.

The other thing, of course, is that you are imaging several dyes through your microscope, and the microscope has some defects. Even if you're using the best objectives and so on, you may have images where the channels you're imaging are not well registered. In this case, before starting to measure anything for colocalization, you must absolutely assess that your setup is really appropriate and that co-registration occurs. If it's not the case, then you may try to analyze this mis-registration and try to correct for it. One way to do this is to use some reference objects that you know are multi-labeled: you acquire them in the same conditions as your sample, and you check that everything is well aligned, well co-registered. Then maybe go back to the microscope and try to fix things, or simply characterize and correct, which is something I will talk about in the pre-processing step.

Finally, resolution is really important. You should know at which resolution you're working; you should know the limits of your microscope, because of course, if you're taking the crappiest objective on the facility, if you're using a simple magnifying glass, everything will colocalize. So, word of advice: if you want to prove that there is colocalization, just use a magnifying glass, it will be easier! But OK: when you're going for a diagnosis of colocalization, this diagnosis holds at a specific resolution, and this is a bit hard for a biologist to accept. It doesn't mean that, because you find colocalization, the two proteins are actually at the same place. In fact, the true conclusion of the colocalization test is: knowing the current resolution, I can't exclude that the two proteins are indeed at the same location, in the same surroundings. This is why it's important to use some reference slides, some reference beads, to measure this resolution, and, when publishing, to always state it: OK, I've been doing colocalization tests at that resolution. Of course, with the expansion of the new super-resolution methods, you may now see that two proteins that were colocalizing are not colocalizing anymore. It's not that the previous paper was wrong; it's just that the technology has evolved, the resolution has improved, and now you can separate your proteins of interest. So it's really important to always do the three steps: choose your dyes well, check for co-registration, and finally assess your resolution, so that you can state it when publishing.
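The cytofluorogram described a moment ago is easy to prototype outside of any dedicated plugin. Here is a minimal sketch in Python (NumPy only); the function name and the toy data are my own illustration, not part of any specific tool:

```python
import numpy as np

def cytofluorogram(red, green, bins=64):
    """Build a cytofluorogram: a 2-D histogram where each pixel contributes
    one count at (red intensity, green intensity). Correlated channels pile
    up along the diagonal; bleed-through shows as a line hugging one axis."""
    counts, red_edges, green_edges = np.histogram2d(
        red.ravel(), green.ravel(), bins=bins)
    return counts

# Toy check: a channel plotted against itself lands entirely on the diagonal.
rng = np.random.default_rng(0)
red = rng.integers(0, 256, size=(64, 64)).astype(float)
green = red.copy()
counts = cytofluorogram(red, green, bins=8)
```

In practice you would display `counts` (often log-scaled) as an image, with one axis per channel, exactly as on the slide.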
Sometimes I see users coming to the facility with images that are not really good for colocalization, and unfortunately, in most cases, there is not that much we can do. Image processing is not a magic wand: you can't really improve the images afterwards, or you can, but only to a limited extent. It's always best to have the best samples and the best acquisitions before jumping to the next step, which will deal with image processing: the pre-processing, and choosing and applying a metric.

So these are some of the commandments that we should apply: you shall choose your dyes well; you shall know your microscope and its limitations; you shall use appropriate sampling; you shall take benefit of the full dynamic range, especially if you want to work on coincidence or correlation between intensities; you shall avoid saturation; and of course, you shall go back to your bench and talk to your facility people.

Now, questions so far?

Yes, Fabrice. At least one I think is really interesting right now: how to check properly for bleed-through. Can it be done with control samples?

That's what you mean? Yeah, this is one way to do it. This is something that people tend to skip, but having mono-labeled samples is really important. It will allow you, first, to check if you've got bleed-through, and potentially to do, like in flow cytometry, some kind of gating and compensation. But well, if on the sample you already have evident bleed-through, maybe try to separate a bit more: choose dyes that are spectrally more separated. Sometimes you don't have the choice, though, so you will try to compensate.

OK, we won't go further on that question right now. But regarding the bleed-through, would you
recommend always working in, let's say, sequential acquisition mode? I see you nodding.

Yeah. Once more, the main answer I will give is: it depends. For instance, if you're working on fixed samples, you've got the time, so take the time and do some sequential acquisitions. If you're in live mode and you've got something that is moving fast within a cell, whatever it is, then of course this will be a trade-off.

OK, thanks. Maybe you can continue.

OK. About pre-processing. As I told you, we have at least four meanings for the single word "colocalization", and of course we may want to work on coincidence of objects or coincidence of intensities. This is the nature of the image: an image is a support carrying information, and you've got several types of information within a single image. An image is of course a collection of pixels, and you may want to work on the intensities, to see if there is some kind of coincidence of intensities, some kind of correlation of intensities. It's also a collection of frequencies; I won't go into detail about that, but it is one way to perceive an image. An image is also a collection of objects, and we can work on the coincidence of objects, but we must first define what an object is. And sometimes the objects the image is carrying are small enough, close to the resolution, that you can summarize each of them by extracting one point of interest, for instance the center of the structure. This is another way, and it opens some additional methods for assessing colocalization. And nowadays, now that we have access to super-resolution, an image is not an image anymore: you will extract a list of detections, a list of events, from which you can go back to a
collection of objects, and we will see some examples later of what I'm talking about.

Now let's go into the pre-processing. I've been talking about bleed-through and crosstalk, and if you don't have any other means, if you can't go back to the bench, if you can't go back to the microscope, or if you have optimized the full process and can't improve anymore, there are ways to try to correct for bleed-through and crosstalk, for one channel going into the other, or for a single light source exciting the two dyes at the same time. This is sometimes called spectral deconvolution, but I would rather say spectral unmixing. There are some nice plugins that do the job, but of course you will have to have some references to know how much one channel is bleeding into the other, and the other way around.

About the chromatic shift: if you have a mis-registration between the two channels, a simple translation can of course correct it. But once more, when correcting the image, keep in mind that you're correcting at the pixel level; you won't be able to correct at a sub-pixel level unless you make some kind of approximation about the distribution of intensities, and you will have to do some interpolation, which may impair the processing later on.

Still about corrections: sometimes you've got a low signal, a high background, a lot of noise in the image, and in that case using some kind of filtering or denoising could help. But be careful: when using filtering, linear or rank filtering, you will of course modify the intensities that you're working on, so if you want to assess a correlation between intensities, this will be a problem. And about denoising, once more, don't use it as a black box; try to understand what is behind it, so that you don't create additional artifacts.

I've been talking about resolution, and sometimes you're missing a bit of resolution.
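To make the spectral unmixing idea above concrete, here is a sketch of two-channel linear unmixing, assuming you have measured the bleed-through fractions on mono-labeled reference samples acquired in the same conditions. Function and variable names are illustrative, not from any specific plugin:

```python
import numpy as np

def unmix_two_channels(meas1, meas2, bleed_1to2, bleed_2to1):
    """Linear spectral unmixing for two channels.
    Model: meas1 = true1 + bleed_2to1 * true2
           meas2 = true2 + bleed_1to2 * true1
    The bleed fractions must come from mono-labeled reference acquisitions
    made under the same imaging conditions as the sample."""
    mixing = np.array([[1.0, bleed_2to1],
                       [bleed_1to2, 1.0]])
    unmix = np.linalg.inv(mixing)             # invert the mixing model
    stacked = np.stack([meas1.ravel(), meas2.ravel()]).astype(float)
    true1, true2 = unmix @ stacked
    return true1.reshape(meas1.shape), true2.reshape(meas2.shape)

# Toy check: mix two known signals, then recover them.
true_red = np.array([[10.0, 0.0], [5.0, 0.0]])
true_green = np.array([[0.0, 8.0], [0.0, 4.0]])
meas_red = true_red + 0.2 * true_green     # 20% green-to-red bleed
meas_green = true_green + 0.1 * true_red   # 10% red-to-green bleed
est_red, est_green = unmix_two_channels(meas_red, meas_green, 0.1, 0.2)
```

This is the same compensation logic as in flow cytometry, applied per pixel; real data adds noise and offsets that this sketch ignores.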
In this kind of situation, deconvolution could be a pre-processing method of choice, as long as you're sure that the PSF is not changing too much across the field of acquisition and in the depth of the acquisition. Be careful about the algorithm you're choosing: it should be conservative, keeping exactly the same intensities but distributed in a different way. So be careful; once more, deconvolution is not to be used as a black box, but it can restore a bit of your resolution. Nowadays machine learning is also a way to do a bit of restoration of the image, but keep in mind this question: is the processing changing the intensities? If so, be careful when using correlation between intensities. If you're missing a bit of resolution, you may of course ask for help from a friend, and this friend could be called super-resolution: if you're missing resolution, just try to push it. You've also got expansion microscopy; this could be another method to try to improve your resolution.

Finally, if you're planning to work on objects, you must, from the image, define what is an object and what is background, and group the pixels that are part of an object into the actual objects. But before that, the first step, differentiating the object pixels from the background pixels: how do you do that? Do you use a simple threshold? Do you use some adaptive or local threshold? Is it good to work on the very signal you're trying to quantify, trying to isolate the structures from this signal, running the risk of getting rid of part of the signal? These are questions that you should ask yourself, and I will invite you to go to the YouTube channel and see Kota Miura's talk, where he's talking about this kind of topic, and not only that. Then, of course, you will have to isolate and delineate the objects; you will have to group the pixels into objects, which is connectivity analysis. Maybe you
would like to work rather on contours, so you will have to work with snakes to try to isolate the contours. Once you have the objects, especially if you've got some really small objects relative to the resolution, you may want to work not on the objects directly, but to summarize the objects as points of interest, for instance their centers, and I will show you some methods that work from that. The opposite way around is true when you've got point detections, like in super-resolution: do you want to work directly on the detections, or maybe first build an image from them, to have densities that you can then work on? This is a question that should be addressed.

So, pre-processing: you've got corrections, to try to revert the problems you may have had when acquiring the data; you've got restoration, to try to push the resolution a bit further, to get a grasp of the resolution that is in the image but requires some processing to get at; and finally, if you're planning to work on objects, you've got to do some segmentation, but how? These are the questions to ask. Questions so far? Do you have questions, or should I jump ahead?

There are a couple of questions that maybe you can answer. I can't hear you, sorry. Sorry, I was on mute; I'm in an open office. So, there is a question regarding the unmixing, spectral unmixing: can it be used to help solve the bleed-through problem, or is that another problem?

It can be used to try to revert the bleed-through, but you've got to make sure that you've got references that were done in the same conditions. And once more, this shouldn't be something that you use first: first you should try to improve the sample prep and the way you acquire the images, and if, after asking help from the facility staff and so on, you are still stuck, then maybe try this.
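Coming back for a moment to the object-based branch described just before the questions: threshold, connectivity analysis, then summarizing each object by a point. A minimal sketch of those three steps using SciPy; the fixed threshold here is purely illustrative (in practice the choice of threshold is exactly the kind of question raised above):

```python
import numpy as np
from scipy import ndimage

def segment_and_centroids(img, threshold):
    """Three steps of the object-based branch:
    1. separate object pixels from background (simple global threshold),
    2. group object pixels into objects (connectivity analysis),
    3. summarize each object by its intensity-weighted center of mass."""
    mask = img > threshold
    labels, n_objects = ndimage.label(mask)
    centroids = ndimage.center_of_mass(img, labels, range(1, n_objects + 1))
    return labels, centroids

# Toy image: two separated square objects.
img = np.zeros((8, 8))
img[1:3, 1:3] = 10.0
img[5:7, 5:7] = 10.0
labels, centroids = segment_and_centroids(img, threshold=5.0)
```

The same centroids can then feed the center-based rules discussed later in the talk.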
But once more, this is personal preference: I do prefer to start a colocalization analysis with the data as raw as possible. If you're starting to process, and to over-process, then you must check that the processing you're putting into the workflow is not adding artifacts that will lead to an over-estimation of the colocalization.

Another interesting question, actually quite frequent among the questions: it's about how to use controls, how some people set thresholds, and so on. I guess that's what you will cover in the next minutes, so maybe it's not necessary to talk much about it. But there is one question I think is really interesting: what do you do with your control images? Do you have to show them, for example, in your manuscript?

Well, if you want my opinion, this should be part of the data set that you release together with the manuscript, that you link to it. But in practice, how many PIs care about having all the controls? How many PIs will agree to have an additional, let's say, two or three pages in the manuscript with controls? Because who cares about the controls, after all? It's a good question. This is why making the whole data set for a study publicly available, applying the FAIR principles, would be the best: this way, if it's not in the paper, at least you can find it somewhere, you can review the controls, you can review everything, and make sure that you agree with what has been published.

Are we good? I think we're good for the moment. There are more controversial questions that we may cover at the end, when you have covered your content.

OK. Now you've got your images: they have been acquired the best way possible, maybe you've processed them just a tiny bit to correct a few things, and now comes the moment where you've got to choose a reporter, a metric. With Susanne Bolte and others, we have been trying to categorize the reporters and metrics, and here is one way we present the thing; this might be a bit controversial, but OK. Let's assume that we've got a set of reporters and we want to split them into two groups. By looking at what exists, at what has been published, I would say that you've got two big families of reporters or metrics: the indicators on one side, and the quantifiers on the other side.

An indicator is a number and a scale: you put this number on the scale, with minimum colocalization, maximum colocalization, and values in between. This is just an indication, and it works really well when you've got several experimental conditions, because you will be able to compare them. But with only one value it will be complicated, unless you're at one end or the other of the scale, so you will need to find tricks, and we will see in the next part the tricks for when you've got only one single experimental condition. So these are the indicators.

About quantifiers: quantifiers are a bit easier to understand, because they rely on physical parameters. For instance, with quantifiers you will try to define the level of overlap between two structures: the percentage, let's say, of the area involved in the colocalization process, or the percentage of the volume involved in the colocalization process; or maybe the distances between sets of coordinates, to see the percentage of those coordinates that, from one channel to the other, lie below the resolution limit. So quantifiers rely more on physical parameters, physical descriptors, and you will have a direct readout; for the indicators, you have to have the scale, and you have to have several values to compare.

But enough categories; let's jump into the generic indicator that everyone is using. Let's talk a bit
about the Pearson correlation coefficient. OK, the formula might be scary, and you can skip it. You just have to keep in mind that the Pearson coefficient is simply an indicator of how well, on the cytofluorogram, where the intensities from channel one and channel two serve as coordinates, the cloud of dots looks like a line. If you take the same image twice and compute the Pearson coefficient, you will get something close to one. If you have two signals that are separated, then on your cytofluorogram you will see one cloud here and another cloud there, with no real relationship between the two: you will have an exclusion, and a Pearson coefficient close to minus one. If you don't have any correlation, the dot cloud will be spread everywhere, and the Pearson coefficient will be zero.

I think this has raised some controversy, but I personally find it hard to interpret a value alone that is far from one, minus one, or zero. Of course, if I've got several experimental situations, drug versus no drug, and I compare the two, I can position the two values and say: yes, I've got more colocalization here than there. But if I've got only one value, and this value is far from the three well-identified values, I find it hard to interpret.

Of course, when using the Pearson correlation coefficient you're comparing your dot cloud to a single line, which makes it hard to use when, for instance, let's imagine, you've got a receptor and a ligand for the receptor. There is a moment when you will have enough ligand bound to the receptor that the receptor will be saturated, so you will have colocalization up to a certain level, where you will reach a plateau. The shape of the curve won't be a line, and in this kind of situation the Pearson correlation coefficient won't be the appropriate one to use.
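Both the Pearson coefficient and the rank-based fix for saturating curves are only a few lines of code. A minimal sketch (NumPy only; note that the naive double-argsort ranking ignores ties, which a real implementation should handle):

```python
import numpy as np

def pearson(ch1, ch2):
    """Pearson correlation over all pixel pairs:
    ~ +1 linear correlation, ~ 0 no correlation, ~ -1 exclusion."""
    x = ch1.ravel().astype(float) - ch1.mean()
    y = ch2.ravel().astype(float) - ch2.mean()
    return float((x * y).sum() / np.sqrt((x * x).sum() * (y * y).sum()))

def rank_pearson(ch1, ch2):
    """Spearman-style coefficient: replace intensities by their ranks, then
    apply Pearson. This linearizes any monotonic (e.g. saturating) curve."""
    rank = lambda a: np.argsort(np.argsort(a.ravel())).astype(float)
    return pearson(rank(ch1), rank(ch2))

# A saturating, receptor/ligand-like relationship: monotonic but non-linear.
x = np.arange(1.0, 17.0).reshape(4, 4)
saturating = x / (x + 4.0)
```

On the saturating pair, plain Pearson drops below one even though the relationship is perfect, while the rank version recovers it, which is exactly the motivation for the coefficient introduced next.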
This is where Spearman's coefficient comes into play. You see, when you've got saturation, saturation at the biological level, not in the image, you will have this kind of curve, so comparing it to a linear distribution is not relevant. This is where Spearman's coefficient comes in: instead of keeping the single intensity values, you replace each value by its rank, and by doing that you linearize the curve and you are able to apply the Pearson coefficient that we've seen earlier. Of course, the condition to be able to apply this coefficient is that the curve does not make some kind of wavy pattern: it can go up, or it can go down, but it should go in only one direction.

Many other indicators exist. Some have been engineered so that instead of a range between minus one and plus one, you've got a range between zero and one, which makes things a bit confusing, because sometimes people will take one of the coefficients that lie between zero and one and relate it to a percentage of colocalization, which is not exactly the case, and not always the case.

So, quantifiers now. Of course, you all know about the Manders coefficients. I've been trying to summarize the principle here: you've got a structure that is labeled in red, another one in green, and you've got an overlap here. You will collect the intensities within the overlap region, and you will divide those intensities by the total intensity of the red channel. This will give you the M1 coefficient; in fact, the M1 coefficient is just the percentage of the material that is engaged in the process of colocalization. M2 is exactly the same, the other way around, for the other channel.
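In code, the classic Manders coefficients reduce to a couple of masked sums. A minimal sketch (thresholds at zero here; in practice the "presence" thresholds are a critical choice):

```python
import numpy as np

def manders(ch1, ch2, thr1=0.0, thr2=0.0):
    """Manders coefficients.
    M1: fraction of channel-1 intensity located where channel 2 is present.
    M2: fraction of channel-2 intensity located where channel 1 is present.
    'Present' means above the per-channel threshold."""
    ch1 = ch1.astype(float)
    ch2 = ch2.astype(float)
    m1 = ch1[ch2 > thr2].sum() / ch1.sum()
    m2 = ch2[ch1 > thr1].sum() / ch2.sum()
    return m1, m2

# Toy check: a red square of total intensity 40 and a green signal of
# total intensity 10, overlapping on a single pixel.
red = np.zeros((4, 4)); red[0:2, 0:2] = 10.0
green = np.zeros((4, 4)); green[1, 1] = 5.0; green[1, 2] = 5.0
m1, m2 = manders(red, green)
```

Here a quarter of the red intensity sits under green (M1 = 0.25), while half of the green intensity sits under red (M2 = 0.5), illustrating why the two coefficients must be read separately.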
So already here you've got a kind of measurement, a value that you can actually interpret alone. But of course, if you've got, and I hope you have, several experimental conditions, this is even better, because you can position them relative to one another. You've also got a variant of the Manders coefficient where, instead of looking at the percentage of signal engaged in the colocalization process, you look at the surface or the volume engaged in this process. Once more, having one value is OK, because you get an absolute value, but having multiple experimental conditions is even better.

I've been talking earlier about small objects, really small objects whose overall size is close to the optical resolution. If you've got two tiny dots close to the optical resolution and you're trying to measure an overlap between the two using the Manders coefficient, of course this won't make sense: you won't have enough pixels to define a precise percentage of overlap, and you will get extreme values. If you've got nine pixels on one side and nine pixels on the other, the overlap can take only a few of the possible values: one third, one sixth, etc. So it will be difficult to use this kind of method, the Manders coefficient, on really small objects. This is why it's reasonable to simplify the problem by isolating the small objects and summarizing each of them with one set of coordinates: the center of mass, or the geometrical center of the object.

Once we have this summarized version of the objects, we can do several things. We can define a rule, we can define our own metric for what colocalization is. Let's assume that we've got these two objects, and we just look at the center of the red object. We could set up a rule saying: I consider that there is colocalization when the center of the object in the red
channel falls onto the green object. So in that first case, of course, we don't have colocalization. In the situation where we have the plain red circle perfectly centered over the green object, its center of course falls onto the green object, and I consider that there is colocalization. And now comes a specific situation where I've got a red ring that is perfectly well centered over the green object: the center of the red ring will fall onto the green object, and I will conclude that there is colocalization. It's at this step that you realize that maybe, in the workflow, even if it's not appropriate to measure the overlap, having a look at how the overlap behaves in combination with this method will allow me to go even further in the conclusion: I don't see much overlap between the ring and the circle, which means they might be separated; but using the center-based strategy, I see that the center falls onto the green object. Together, this surely means that the red signal lies around, outside of, the green signal. So you see, and this is quite important, that by going into the colocalization black box and understanding all the methods, you may start to think about additional ways to analyze your data, maybe combining several methods that, together, will allow you to conclude about what's going on in your sample.

On the previous slide I was summarizing only one object by its center; here I will use the center for both channels, and now I will set up a simple rule: if the two crosses are separated by a distance that is less than the resolution, then I can't exclude that the two objects, the two centers, are colocalized. You see, this does some weird things, because if you've been applying the right sampling rate, you may have two crosses that are visibly separated from one another, but if the distance is below the optical resolution, the two crosses will actually colocalize. And it's weird: it goes against
the principle that to have colocalization you should see yellow when merging red and green. So: in this first situation we are above the resolution, so we don't have colocalization; here, even though we have only a bit of overlap, we still have colocalization; and once more, with the ring and the circle, we have colocalization. Do we have questions at this step?

Oh, we certainly have questions. Some of them are very specific, and since I don't know exactly what you will cover, it's a bit difficult to decide. But for example, I will take one about the Manders coefficients: what do you recommend on choosing an ROI, yes or no? And how to subtract background? What to use as a control? Manders seems highly sensitive to these choices. And the thing is, on each of the methods you presented briefly, I think we could make a NEUBIAS Academy event of at least one or two hours.

Yeah. The problem with this kind of question, and I'm not skipping the question, is that it all depends on the biological problem you're working on; it all depends on the nature of the noise you're working with. I don't have a precise answer, except, once more: go to your facility people and try to get advice from them, because if you've got noise, and you don't have a way to improve the sample and the acquisition, you can always try to filter, to denoise, and so on. About the region of interest: on the Manders coefficients, it all depends on the distribution of your structures, so I couldn't say.

On this region-of-interest point, I can comment with one project we had, where you have cells with different phenotypes: if you look at the overall image you might not see anything, but if you do the analysis cell per cell, then you see that you have a difference of phenotype between cells, and you end up with a difference in your results. But it's more tedious, because now you have to find the regions of interest.
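The center-to-center rule from just before the questions can be written down directly: summarize each small object by one set of coordinates, then declare that colocalization cannot be excluded whenever the distance falls below the optical resolution. A sketch, with the resolution estimated from the Rayleigh criterion (0.61 λ / NA); all names and numbers are illustrative:

```python
import numpy as np

def lateral_resolution_nm(emission_nm, na):
    """Rayleigh criterion for the lateral resolution of the objective."""
    return 0.61 * emission_nm / na

def centers_colocalize(center1, center2, resolution):
    """Object-based rule: if two centers are closer than the resolution,
    they cannot be told apart, so colocalization cannot be excluded.
    Coordinates and resolution must share the same physical unit."""
    c1 = np.asarray(center1, dtype=float)
    c2 = np.asarray(center2, dtype=float)
    return np.linalg.norm(c1 - c2) <= resolution

# GFP-like emission at 520 nm through a 1.4 NA oil objective: ~227 nm.
res = lateral_resolution_nm(520.0, 1.4)
```

This is also why the conclusion must always be stated together with the resolution at which it was obtained: change `res` (for instance after expansion or super-resolution) and the same pair of centers may stop "colocalizing".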
And at the same time, if you're thinking that maybe the Manders coefficient may differ from one region of your cell to another, the thing I would recommend is to measure object per object and then make a map where you colour-code the overlap between your objects; maybe you will see some regions that are homogeneous with a certain percentage of overlap, others with another one, et cetera. If you want to learn about that, because this will require a bit of automation, we've got an excellent set of videos introducing macros with Fiji on our YouTube channel. There are other questions you might answer later: about 3D versus 2D, and about the software that exists for colocalization. Okay, 2D versus 3D: of course, if you can go for 3D, go for 3D, because the biology is not 2D, it's 3D. Just try to imagine that you've got colocalization taking place along the z axis, red over green: if I slice just here, what will I conclude? That this object is alone. So don't stay in 2D unless you're forced to; please go 3D. About software: well, I'm the author of this plugin which is called JACoP, and I insist on the pronunciation because it was written with my German colleague, and 'ja' is yes in German; it's Just Another Colocalization Plugin, JACoP. It's not that I don't want to recommend it; it's just that it depends on what you want to do, how much you want to go into automation and so on, so if the method is there, go for it. If you're using Icy, you've got additional plugins, and if you're writing your full workflow in Icy, have a look at the tools there. I refuse to give you advice on specific tools, because there are many out there, and many that are really good; it all depends on your full workflow and how you can integrate and jump from one tool to another. Maybe you want to stay inside ImageJ and Fiji, or maybe you want to go to Icy; in that case, explore what is there, and have fun.
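The per-object colour map suggested above can be prototyped quickly. A sketch assuming you already have binary masks for the two channels, using SciPy's connected-component labelling (this is an illustration, not a JACoP feature):

```python
import numpy as np
from scipy import ndimage as ndi

def overlap_map(green_mask, red_mask):
    """Label each green object, compute the fraction of its area that
    the red mask overlaps, and paint that fraction back onto the
    object: the result can be displayed with any colour map."""
    labels, n = ndi.label(green_mask)
    out = np.zeros(green_mask.shape, dtype=float)
    fractions = {}
    for i in range(1, n + 1):
        obj = labels == i
        fractions[i] = (obj & red_mask).sum() / obj.sum()
        out[obj] = fractions[i]
    return out, fractions

green = np.zeros((4, 8), dtype=bool)
green[0:2, 0:2] = True   # object 1
green[0:2, 4:6] = True   # object 2
red = np.zeros((4, 8), dtype=bool)
red[0:2, 0:2] = True     # fully covers object 1
red[0:2, 4:5] = True     # covers half of object 2
_, fractions = overlap_map(green, red)
```

On the toy masks, object 1 comes out fully overlapped and object 2 half overlapped, which is exactly the per-region heterogeneity the colour map is meant to reveal.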
Yeah, I think we'll have to complete this kind of question on the image.sc forum; on the thread we'll have a complete list of the software we are often using or recommending. And don't despair if you see that we are not answering your question: in the Q&A window it is sometimes really difficult to write something that will be less than a thousand words. About the software, I will come back to it: I will do some self-promotion for one review chapter at the end, where we have a table of the kinds of software that exist, depending on what you want to do. Okay, great. About comparing and interpreting now: well, you will have to do a lot of statistics. You will have to first plan your experiments, to know how many cells, how many objects, et cetera. To be honest, I was trained as a biologist, I'm now a bio-image analyst, and statistics is not something that I really master, so the advice once more would be to get help from people who know how to do them. In France we have had a really nice training, called ImageStats, on statistics for images, where, thanks to the organizers, we have been able to look at what an experimental plan is from the statistical point of view, and I can tell you that it's better to go ask them before you start your full experiment, because these are the questions that they always get: how many cells, how many objects, which condition, which test, et cetera. This is not the kind of thing that I personally feel confident to explain to you, but I can show you all the tools, before or after this discussion, to give the statisticians material that they can use. I've been talking about comparing, and there is a problem that I personally found with indicators if I've got only one experimental
condition. So how, from one experimental condition, can I put some kind of diagnosis of colocalization when I've got only one value? Well, I need to generate additional values, additional metrics, and one way is simply to work from my only data set; I hope that, in this kind of situation, I had several images of the same experimental condition. I establish the Pearson coefficient between channel A and channel B and I get a certain value, and you see that this is typically the kind of mid-range value that I personally find hard to interpret. How do I get something to compare it to? Maybe I take exactly the same data set, at least for one channel, use a second channel that is rotated by 90 degrees, and look at what the Pearson correlation coefficient becomes. I see a huge drop, which means that, yeah, I surely had something there. So this is one way to do it. Of course, if the image is crowded with signal, just flipping one image won't be enough, because you will always have some random correlation between the two signals; but if you've got sparse objects, this is one way to generate a second data set from the original one. Then there are methods that have been engineered, simple methods, where you take the original data set but you displace one channel: you translate one channel relative to the other. I don't know if this will be visible on the webcam, but the thing I like to do when I'm explaining this Van Steensel approach is taking my two hands, like that: channel A, channel B. When they are overlapping, I've got the maximum expectable correlation, and now, if I translate one relative to the other, I progressively lose the correlation. I do it in one direction, then in the other, and each time I'm translating one image relative to the other, I'm computing the Pearson coefficient, and then I'm plotting this series of Pearson coefficients relative
to the translation. If I had colocalization at first, then by shifting I will lose the correlation; if, on the opposite, I started with exclusion, then when shifting I'm running the chance of creating an overlap. So, depending on the shape of the curve that you have at the end, you may say: yeah, I started with colocalization and then I've lost it, so this is indeed colocalization. One additional thing with the Van Steensel approach, which is nice: if you have some chromatic aberration, the maximum won't be at zero, it will be slightly shifted. The width of the curve, of course, depends on the size of the structures: if I've got two fingers, it's easy to lose the correlation; if I've got a finger like that against another one like that, I should move quite a lot in order to lose the correlation, and this is what makes the bigger bell shape. Finally, a third strategy to generate something to compare your value to is to randomize. You take one of the two images and you cut this image into small blocks, taking into consideration that, due to the way the image is formed, you've got a local correlation between intensities; this is due to the point spread function of your optical system. So the bricks that you make in your image are a bit bigger than one pixel; in general, if you've been applying the right sampling, they will be three by three pixels. You cut this image into small pieces, you put all the pieces in a bag, you shake the bag, and then you just draw pieces and reassemble them into a new image. This gives you something that doesn't make any biological sense, but at least it is exactly the same information, just located differently. And now the thing you do is compare this randomized green image to the red channel: you compute the Pearson coefficient, you do it once, twice, a lot of times, and then you end up with a big distribution of Pearson coefficients that corresponds to situations where colocalization may have happened, but on a random basis.
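Before moving on, the two controls described a moment ago, the 90-degree rotation and the Van Steensel translation scan, can both be sketched in a few lines of Python on synthetic data (everything below is an illustrative toy, not a validated implementation):

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between two images of the same shape."""
    a = a.ravel().astype(float) - a.mean()
    b = b.ravel().astype(float) - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

def van_steensel(red, green, max_shift=10):
    """Pearson coefficient as green is translated along x relative to
    red: a bell around zero suggests colocalization, a dip suggests
    exclusion, an off-centre peak hints at a chromatic shift."""
    return {dx: pearson(red, np.roll(green, dx, axis=1))
            for dx in range(-max_shift, max_shift + 1)}

rng = np.random.default_rng(1)
green = rng.poisson(5, size=(32, 32)).astype(float)
red = np.roll(green, 2, axis=1)  # same signal, 2-pixel chromatic shift

r_rotated = pearson(red, np.rot90(green))  # 90-degree rotation control
curve = van_steensel(red, green)
best_shift = max(curve, key=curve.get)     # position of the peak
```

Because the red channel is just the green one shifted by two pixels, the scan peaks at a shift of 2 instead of 0, reproducing the chromatic-aberration signature, while the rotated control collapses towards zero.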
And now the thing you do is take back the two original images, compute the Pearson coefficient, and compare it to this distribution; if it's far away from this distribution, this means that you've got a chance to have actual colocalization. So this is another way to work with the images when you've got only one experimental situation. But be careful, because getting numbers out of the images is fine, but, like with the ring-and-circle paradigm that I've been talking about, it's important to have several pieces of information and to combine them before stating anything about colocalization. If you look at this slide, you will see that the percentage of colocalization in terms of surface is the same in each case, but it doesn't correspond to the same experimental situation: here you've got an overlap; here you've got only part of the green signal engaged in the colocalization process; here you've got a bigger colocalization area but a bigger object, so that if you're measuring the Manders coefficient you will find exactly the same value; and finally you've got the reverse situation. So it's important not to focus only on the indicator, on the metrics of colocalization, but maybe to widen the analysis a bit, so that you're sure you are comparing or describing exactly the same phenomenon. So, what questions should you address, knowing the metrics and the comparison and interpretation methods that I've been presenting? What type of colocalization method is the most appropriate for your problem: are you working on the intensities, or on physical coincidence? Are the published methods adapted to your problem? Because it's not because you find the Pearson coefficient and the Manders coefficient in all commercial software packages, in all the image-processing software linked to the microscopes, and in all the publications, that these are the methods that you should use.
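Going back for a moment to the block-scrambling randomization: here is a toy Python version with 3×3 bricks, as suggested by the PSF-sized blocks mentioned above (a sketch under simplifying assumptions, not the exact Costes implementation):

```python
import numpy as np

def pearson(a, b):
    a = a.ravel().astype(float) - a.mean()
    b = b.ravel().astype(float) - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

def scramble_blocks(img, block=3, rng=None):
    """Cut the image into block x block bricks (about one PSF wide),
    shuffle the bricks, and reassemble: same intensities, random
    positions."""
    rng = np.random.default_rng() if rng is None else rng
    h = img.shape[0] - img.shape[0] % block
    w = img.shape[1] - img.shape[1] % block
    tiles = [img[i:i + block, j:j + block]
             for i in range(0, h, block) for j in range(0, w, block)]
    order = rng.permutation(len(tiles))
    out = img.copy()
    k = 0
    for i in range(0, h, block):
        for j in range(0, w, block):
            out[i:i + block, j:j + block] = tiles[order[k]]
            k += 1
    return out

rng = np.random.default_rng(2)
green = rng.poisson(5, size=(30, 30)).astype(float)
red = green + rng.normal(0, 1, size=green.shape)  # correlated channels
null = [pearson(red, scramble_blocks(green, 3, rng)) for _ in range(100)]
observed = pearson(red, green)
```

The observed coefficient sits far outside the null distribution built from the scrambled images, which is the comparison Fabrice describes next.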
Maybe, maybe, maybe a different metric will be more adapted to your specific problem. If the answer is no, you've got to be creative: build your own metric, characterize it, use it, and publish it. If the published methods do work on your specific problem, then fine: just go and find the tool that is the most appropriate for your workflow, especially if colocalization is only one part of the workflow; you've got to be able to chain all the steps so that you can automate them and remove the human bias from the analysis. About the tools: we've written a review with Patrice Mascalchi, and at the end of this review you will find a table listing a lot of the tools that exist, the kinds of colocalization they allow you to do, and where to find them. Do we have questions here? Again, I would say yes, we do have questions; for example, one that I like: someone wants to do colocalization between three channels, or three objects, do you have recommendations? Well, most of the tools that exist are built for only two channels, but if you're trying to find the overlap between the three, then one of the workflows that I will present later will help; it will just have to be generalized to the third channel. If you're working on the intensities, I think the easiest way is to go two by two with the tools that exist; but if you're willing to implement, for instance, the Pearson coefficient yourself, for sure you can extend it to three channels, though I'm not sure an implementation already exists. About that, my answer would also be that it will be difficult to make it understandable by viewers, because 2D scatter plots are sometimes already challenging.
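The "two by two" advice can be sketched as a pairwise Pearson matrix over three synthetic channels (an illustrative toy; the channel names are made up):

```python
import numpy as np

def pearson(a, b):
    a = a.ravel().astype(float) - a.mean()
    b = b.ravel().astype(float) - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

def pairwise_pearson(channels):
    """With more than two channels, one pragmatic route is to compute
    Pearson for every pair and inspect the resulting matrix."""
    names = list(channels)
    return {(x, y): pearson(channels[x], channels[y])
            for i, x in enumerate(names) for y in names[i + 1:]}

rng = np.random.default_rng(3)
a = rng.poisson(5, size=(32, 32)).astype(float)
b = a + rng.normal(0, 1, size=a.shape)           # correlated with A
c = rng.poisson(5, size=(32, 32)).astype(float)  # independent channel
mat = pairwise_pearson({"A": a, "B": b, "C": c})
```

On this toy data, only the A–B pair shows a strong coefficient, which is the kind of pattern a two-by-two inspection is meant to surface.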
Yeah, and if we make 3D graphs it will be even more difficult. You know, for the 3D graphs, the thing I would do is use, there was a plugin, I can't remember the name, to do these scatter plots on RGB images; there is an old plugin for that. So this could be a point we'll find for the post on the forum. Maybe you can continue for the moment; thanks for this. Okay, so this is the final section, and in fact it's just a few really basic examples, just to show you that a colocalization study does not have to be complex, and that by thinking about the elements, the individual tools that you may put together, all the tools that exist within ImageJ, you may already have your workflow set to do what you want to do. This is a true story, with a user who came to the facility with this kind of labeling: cells labeled in red and green, and the idea is to identify the ones that are carrying marker A only, B only, and both A and B. So the question is: how to isolate each single cell, how to count each type of cell, and how to estimate the percentage of co-expressing cells. You see, this is not the sub-cellular colocalization a lot of us think about; in fact, colocalization is not even the right word to characterize this situation: this is called co-expression. So first, how to isolate the individual cells? Of course, on this kind of image, the thing I would like to do is set a threshold to identify the background and differentiate it from the objects, but how would I do it, especially on the yellow cells? There are methods: just open ImageJ and you will see that there are plenty of methods to set a threshold. Are they appropriate to this problem? I don't know, but this was a good opportunity to explain a method that is implemented in JACoP, which is the Costes automatic
threshold. What the Costes automatic threshold does is first draw a scatter plot of all the intensities; it puts a threshold as high as possible to start with, and it looks at the Pearson coefficient of everything below. If this Pearson coefficient is still non-zero, it means that down there, there is still a bit of correlation, so it moves the threshold down a bit, looks at the Pearson coefficient again, and if there is still a correlation it keeps moving the threshold down, down, down, until it reaches the point where the big remaining cloud is uncorrelated. Now we can set the threshold here, which maximizes the number of pixels carrying correlated intensities while excluding as much as possible of the uncorrelated population. So this is one way to set a threshold automatically, where you already twist the problem towards maximizing the number of pixels whose intensities are correlated. I'm not sure this applies really well to this co-expression problem, because it's not said that you've got a correlation of intensity between the two markers, but I thought this was a good moment to tell you about possible ways to set thresholds. On this image, the thing I would do is get the two masks, the mask with the red cells and the mask with the green cells, and simply, by using a binary combination of red and green, I will have the double expressors, the yellow cells, the ones that carry both red and green intensity. Then I will take the double expressors and remove them, in a way, from the red mask to have the red-only cells; of course I will get small crappy little things that I will filter out later on. I will do exactly the same for the green mask to have the green-only cells, and finally I will recombine the three, and I will be able to use, for instance, Analyze Particles in ImageJ to count the double expressors, the red-only cells and the green-only cells.
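A deliberately simplified sketch of the Costes idea, walking a single threshold down until the sub-threshold cloud is uncorrelated; the published method couples the two channel thresholds through a regression, so treat this as an illustration of the principle only, on hand-made toy data:

```python
import numpy as np

def pearson(a, b):
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else (a * b).sum() / denom

def costes_threshold(red, green):
    """Walk the threshold down from the maximum intensity and stop as
    soon as the pixels *below* it are no longer positively correlated."""
    for t in range(int(max(red.max(), green.max())), 0, -1):
        below = (red < t) & (green < t)
        if below.sum() > 2 and pearson(red[below], green[below]) <= 0:
            return t
    return 0

# Toy image: anti-correlated background, correlated bright structures.
red = np.array([[0, 1, 2, 3],
                [1, 2, 3, 0],
                [60, 70, 80, 90],
                [90, 80, 70, 60]])
green = np.array([[3, 2, 1, 0],
                  [2, 1, 0, 3],
                  [60, 70, 80, 90],
                  [90, 80, 70, 60]])
t = costes_threshold(red, green)
```

On this toy image the walk stops at 60, exactly where the correlated bright cloud ends and only the uncorrelated background remains below.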
And you see: no Manders coefficient, no fancy Pearson, no fancy randomization; simply with the regular tools that you can find in ImageJ, you can build the whole colocalization workflow and have the numbers at the end. Another workflow, which will be the last one, is more related to super-resolution. Of course, with super-resolution microscopy you end up with a lot of detections, small crosses that each represent one detection, and you see that there is something happening here: you don't have crosses overlapping, but you seem to have a distribution that is related between the green and the red channel. So how can I prove that I've got some relationship between the two signals; how to do colocalization on that? Well, you've got three choices. The first one is to take a good old Gaussian blur, or to take the PSF, the point spread function, and convolve the image to get a blurry image where you can do all the measurements that we've seen previously; but what is the point of doing super-resolution if, to quantify colocalization, you have to go back? So for sure you should adapt the method you're using: maybe you need to define the zone of influence of each single detection, do some kind of tessellation, and then work on the tiles and how they overlap; this is one way to proceed. The other way is to work on spatial statistics and to evaluate the relationship between the two distributions, and I must say that I'm not an expert in that field, but we know two experts, Florian and Thibault, who will give the seminar next week; so if you want to know precisely how they would process this type of images, join us for the next seminar. Finally, a word of advice: colocalization is dangerous, because if you're not using the appropriate method in the application field it was made for, you may end up with numbers that won't mean anything, or that will wreck the whole story of your paper.
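The mask-combination counting described a moment ago, double expressors, red-only and green-only cells, fits in a few lines. A sketch assuming the binary masks are already in hand, using SciPy labelling; the size-filter value is an arbitrary stand-in for the "small crappy little things" filter:

```python
import numpy as np
from scipy import ndimage as ndi

MIN_AREA = 4  # arbitrary size filter for small leftover specks

def count_objects(mask):
    """Count connected components larger than MIN_AREA pixels."""
    labels, n = ndi.label(mask)
    if n == 0:
        return 0
    sizes = np.asarray(ndi.sum(mask, labels, index=range(1, n + 1)))
    return int((sizes >= MIN_AREA).sum())

red = np.zeros((12, 12), dtype=bool)
green = np.zeros((12, 12), dtype=bool)
red[1:4, 1:4] = True      # a red-only cell
green[1:4, 8:11] = True   # a green-only cell
red[7:10, 4:7] = True     # a double expressor...
green[7:10, 4:7] = True   # ...present in both masks

both = red & green               # double expressors
red_only = red & ~both           # may leave fragments -> size filter
green_only = green & ~both
counts = {"both": count_objects(both),
          "red_only": count_objects(red_only),
          "green_only": count_objects(green_only)}
pct_coexpressing = 100 * counts["both"] / sum(counts.values())
```

This mirrors the binary combination plus Analyze Particles route: one cell of each type in the toy masks, hence one third of the cells co-expressing.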
But you might be inventive: you might know the tools for colocalization that you can find in ImageJ, and sit down and try to invent your very own way to do colocalization. Of course, you will have to test it, to make sure that the method you came up with behaves the way you're expecting. So, one last piece of advice when doing colocalization: think, be creative, test, get help, and repeat until you end up with a good process. And that's the end of the talk; I'm pretty sure that we still have a few questions. Thank you, Fabrice. Yeah, I think we do have some questions. I found the Color Inspector plugin, was it the one you were thinking about? That's it. There are also questions regarding creative ways of doing analysis, so I will try to read one out loud: can you assign objects to groups based on the degree of colocalization, e.g. object A is colocalized with objects one and two, but with a higher volume of A colocalized? You see, it's going to be crazy. So I think you mentioned it: you can do whatever you want, as long as you document it, explain it, explain your choices and why you are doing it. Sure, sure; and you see, with the development of networks like NEUBIAS, where we are training a lot of newcomers in image processing and training people to write their own macros, for sure these things, creating new methods and also assessing that they work well, testing them by generating synthetic data sets on which you may validate the method, are becoming easy. And I'm quite surprised to see that we all keep going back to the same things: Pearson, Manders, I don't know which one to use, so I will open JACoP, tick all the boxes and keep the one method that works for me, which is just stupid. Just have a look at your images and try
to describe, to see what you want to describe from the biological point of view, and then try to build the metric that will reflect this; try to see if it responds the way you are expecting, and work on synthetic images to be able to vary all the parameters and check that the metric you've built actually responds. And I would say: if you feel alone and lost in this colocalization world, do not hesitate to post on the image.sc forum; maybe look first whether there is a similar question, then post, and maybe one of us will help you with that. I don't know, Marion or Anna or Thibault, did you see a question that I forgot to ask but that would be very interesting to ask right now? I'd like to relay a question, sorry, a question from a user. You exposed how, from the facility point of view, you would do an analysis of colocalization, and the question from the user was: how do you convince users who are maybe not aware of these different tools of the risks of colocalization analysis? What is the suggestion, the trick you usually use? Do you present them with papers, other things they can find in the literature, or opinion leaders in the field, or what do you do? Thank you. What do I do when I've got a user coming to the facility and asking for colocalization? I would say, first, I listen to the user about the biological problem; then I ask for images, to know the kind of structures he or she is working on, whether I can summarize the problem as dots, what the shapes are, and so on; and then I assess the quality of the images. Sometimes, even if we are an imaging facility that takes the full process into account, when the user becomes a loner in front of the microscope, the quality of the images degrades. Then, once everything is okay, we start to work on the metric, and if the user
is not convinced, then this is a problem. In general, it is not the user who is not convinced, because the user will have read all the papers about colocalization before coming to me; it is, in general, a PI who might say: okay, you should do Pearson. Yeah, well, you don't have any correlation between your intensities, so this is a bit stupid. So sometimes, yeah, I've got a set of papers that I send to the user to be forwarded to the PI; I haven't had to send them directly to a PI yet, but maybe one day. Is it the kind of answer you were looking for? Yeah, I think it's enough, thank you. Anna, Marion, Thibault, do you see any questions you would like to cover right now? Good. So, with that, we'll do our best to fill the gaps on all the questions we didn't answer in the Q&A and prepare the post on the forum. And with that, I would like to thank Fabrice again; thanks to you who have been typing answers to the questions, and thank you also to the creek team, who solved the technical issue in no time so we could start with only a few minutes' delay; that was impressive. So thank you guys, see you soon.