We're very happy to have the ZeroCostDL4Mic (zero-cost deep learning for microscopy) team here with us. We are NEUBIAS Academy; we started last year with these webinars about image analysis, and so far we've had more than 15,000 registrations and 53,000 views on YouTube, which is great. We're very excited to continue this. Today we have the ZeroCostDL4Mic team: Guillaume Jacquemet from Åbo Akademi University in Finland, and Romain Laine, an MRC research fellow at UCL, London. As moderator we have Lucas von Chamier, also from London, and Mafalda, Rocco and I from the NEUBIAS team will be moderating this webinar. So Romain, I give the floor to you.

Great. Thank you very much for the introduction, Elmas, and I want to thank all of the organizers and everyone in the NEUBIAS team for organizing this fantastic initiative; I think it's been great to keep up with everyone through these means. I'd also like to thank you for inviting us and giving us the chance to speak a little about ZeroCostDL4Mic. So let's get started. Can you all see my slides? All good? If I don't hear anything I'll assume it's all good. Excellent, thank you.

ZeroCostDL4Mic, essentially, is a platform that intends to make deep learning for microscopy available in a hassle-free way, without requiring any programming skills or even computational power, and I'll introduce it to you shortly. But I'd like to start with a little bit of context. As many of you will have seen, in recent years there has been a huge boom in the number of papers and publications, from both the bioimaging community and the computer science community, developing and using deep learning for image analysis, ranging from image segmentation to denoising and quite a wide range of other tasks, and showing really quite promising levels of performance.
But I would like to give you a very brief introduction to what I think was the deep learning revolution of recent years and where it came from, because in fact deep learning and the idea of the neural network are very old; it is not a recent idea. It goes back to the fifties, back in the days when a computer would fill a whole room with electronic equipment, but at that time neural networks remained a sort of mathematical oddity, because no one could really optimize them. It was only in the eighties and the early 2000s, with the invention of the backpropagation algorithm and the development and availability of GPU technology, that neural networks and deep learning became practically feasible. Then, in 2012, the presentation of the so-called AlexNet network really showed, in the context of image classification (given a specific image, get an algorithm that identifies what is in the image, for which you can imagine many applications), that convolutional neural networks could massively outperform many other approaches, and that's when attention was really drawn to deep learning for images and computer vision. Then in 2015 a new architecture called U-Net came about that went beyond image classification and made deep learning very suitable for image analysis, going from one image to another image; in particular the authors demonstrated that it is really powerful for image segmentation, as shown on electron microscopy data, as you can see here. The rest is effectively history: since 2018 there has been an enormous development of neural networks, their availability, platforms and so on, and what I'm showing you here is only a very small subset of what has been created by a fantastic community.

But what really is the fuss about deep learning? For this it's important to remember that for classical algorithms the effort is put into the design of mathematical functions that perform a specific task. Here, on the left, I show you a noisy image that needs to be denoised, and on the right the kind of target we are aiming at, so we would design some mathematical functions that go from the noisy image to the denoised one: think of Gaussian filters, median filters, a range of very simple classical algorithms that I'm sure most of you will have used for this. The effort is put into designing those, the input is fed into the algorithm, and the output is deterministic. In the case of deep learning the approach is really turned upside down, because we start from the data itself. We typically prepare a set of paired data: input data and the equivalent output data, where the input represents what we intend to feed into the network and the output what we intend to recover from it. We then feed those inputs and outputs into a neural network, which learns and tunes its own parameters in order to eventually provide a function that transforms the inputs into the outputs as faithfully as possible.
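To make this paired-training idea concrete, here is a minimal, hypothetical sketch in Python with Keras/TensorFlow on a toy denoising task. This is not the actual ZeroCostDL4Mic code: the synthetic data and the deliberately tiny network are illustrative (the real notebooks use a U-Net).

```python
# A minimal sketch of the paired-training idea: noisy inputs, clean targets.
# Synthetic data and a tiny stand-in network, for illustration only.
import numpy as np
import tensorflow as tf

# Paired training data: clean targets and their noisy equivalents (synthetic).
clean = np.random.rand(256, 64, 64, 1).astype("float32")
noisy = (clean + 0.1 * np.random.randn(256, 64, 64, 1)).astype("float32")

# A deliberately tiny convolutional network; real notebooks use a U-Net.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(1, 3, padding="same"),
])

# Training tunes the parameters so that model(noisy) approximates clean.
model.compile(optimizer="adam", loss="mse")
model.fit(noisy, clean, batch_size=16, epochs=5)

# Once trained, the parameters are fixed; new images are simply fed through.
denoised = model.predict(noisy[:1])
```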
That is the training stage, and it really is the essential step in deep learning for obtaining a model that performs well. Once training is finalized, we obtain a trained model whose parameters are fixed, and feeding in a new image now leads to an output that performs the task we intended for that neural network; in this particular case we obtain the equivalent denoised image from the input image. So what's really important in deep learning is the training step; in fact it is all about the training. The performance, and even the task performed by the neural network, depends on the training, and what is really important to know is that training leads to a form of embedding of structural priors within the function that is built inside the model.

I like to demonstrate this with a very elegant example from a paper from last year, where the authors built a set of neural network models capable of recovering very simple structures (squares, circles, rectangles and triangles) from sparsely sampled images: the networks try to recover the underlying ground-truth structure from those sparsely sampled images. What they elegantly showed is that if you take a neural network and train it only on rectangles, it will only ever have seen rectangles, so it will see rectangles all over the shop: it finds rectangles even when the underlying structure truly is a square, a circle or a triangle, which is obviously suboptimal. Similarly, if you train it only on triangles, it will find triangles. What this means is that the training dataset needs to contain all of the representative or important shapes that will be observed in the data subsequently: if you train a neural network on all four shapes, then regardless of what was fed into the sparsely sampled image, it recovers the appropriate structures. The take-home message from this slide is that it really is essential to train neural networks on your own data: training is essential.

If we go back to this slide for a second: what's great about a neural network is that it can approximate pretty much any transform, provided it can learn it. I've shown you an example of a denoising task, starting from a noisy image and ending with its denoised equivalent, but in fact you could feed in any equivalent image pairs, and if there is enough information within the training data to learn that transform, it will learn it. Deep learning can also massively outperform classical algorithms for a wide range of tasks, which is really quite important for bioimage analysis. So those are the good things about deep learning: it is performant, and it can also be fast. What is less good is that deep learning still requires quite a high level of computational expertise, often programming skills, and it needs us to be quite adventurous in how we set up deep learning architectures. Quite importantly, the training step requires high computational power, and in particular access to GPUs, for training sessions to complete within reasonable times.
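Since GPU access comes up repeatedly in what follows, here is a minimal, illustrative check for whether a Python session (for example a Colab session) actually has a GPU, using standard TensorFlow calls; this is not a cell from the ZeroCostDL4Mic notebooks.

```python
# Minimal, illustrative GPU-availability check using standard TensorFlow calls.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print("GPU(s) available:", gpus)
else:
    print("No GPU found; training would fall back to the much slower CPU.")
```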
Those are real hurdles to the use of deep learning for the bioimaging community and the biomedical research community, and that's what we saw as the main difficulty in using deep learning. I like to show this picture of Michael Hopkins, an ISS astronaut performing some basic maintenance on the ISS: that is a range of computer hardware and software handling that is not trivial at all, and I like to compare it to training a neural network, which in some cases really requires a difficult combination of hardware and software handling. Hence we wanted to build a platform that would make all of these things a lot easier, and hence our platform, ZeroCostDL4Mic.

It all starts from your computer, which does not need any specific computational power; all you need is the data you wish to use as training data, as well as the new data you then need to process through your newly trained model. The platform outsources all the heavy-duty computational expense to the cloud, using a range of resources made available by Google, in particular Google Drive and Google Colab. By uploading your data, all of the computational power is required only on the cloud, not on your own machine. What we provide within this platform is a range of notebooks, which are essentially methods containing user interfaces and all the individual parameters the user needs to think about and set in order to get a training session going. This enables the communication between Google Drive and Google Colab, and therefore the use of the GPUs made available in Colab, so the interaction between the data (now on the cloud), the notebook and Colab makes the network training happen entirely in the cloud. We've also incorporated the important step of validation, which I'll come back to a little later, allowing users to validate and explore the performance of a particular model on a validation dataset, and also to run predictions: that means using the trained and validated model directly on the cloud to process new data and obtain, as I showed you before, the denoised data from the noisy images. The final step is of course to download the trained model to the user's machine for further use, along with the results of the predictions obtained in step four. Again, all of this is performed in the cloud, and therefore at zero cost.

The platform provides a range of advantages. One is that it is completely free: access to GPUs is made available for free via Google Colab, and we take advantage of this. We've also built a simple user interface that requires no coding and is therefore very accessible to the wider biomedical research community. Quite importantly, it is a single platform that performs training, prediction and quality control through that validation step, which also builds trust in models built by deep learning. All of this is performed in the cloud and takes the shape of just a web page, so it only requires a web browser, which also makes it very versatile; we'll take a look at that when
we get around to the demo.

So what can the platform actually do? I should say here that we built the platform around the idea of incorporating many pre-existing networks built by state-of-the-art computer scientists and bioimage analysts; we did not build our own networks, and we interacted closely with all the developers to be able to do this. So what can you actually do? It performs image segmentation and object detection: we've incorporated U-Net for image segmentation, both in 2D and in 3D, as shown here in the segmentation of the mitochondrial network from electron microscopy data; we've also incorporated StarDist as a nuclear segmentation network for fluorescence and bright-field images, as well as fast object detection using YOLOv2, which is also capable of classifying each individual object detected within an image, and you can see how powerful that can be if you're looking at, say, a bright-field live-cell migration assay. Another type of task we've incorporated is image denoising: we've built in a CARE network, based on the work of Weigert et al., which is very powerful at denoising and even at removing a number of artifacts, such as those obtained in SIM microscopy, as you can see here; and we've also implemented self-supervised denoising methods such as Noise2Void 2D and 3D, and here you see live-cell mitochondrial network dynamics denoised in 3D. Another type of task we've implemented, and something very exciting to me, is image-to-image translation: the idea is to feed the network with, for instance, bright-field images as input and fluorescence images as output, which means the network learns to predict a pseudo-fluorescence output purely from the bright-field images, something that would be extremely difficult with classical algorithms. We've implemented the Ounkomol et al. approach here, and we've also implemented methods based on GAN architectures, such as pix2pix; pix2pix in this case allowed us to predict a fake nuclear signal, say a DAPI signal, purely from an actin labelling, and Guillaume will show you a little more data on this later on.

The bottom line is that you can do all of this, but if you cannot trust the model to give you something robust and reliable, what is the point? So, can you trust your model? We've incorporated the validation step I mentioned, and let me introduce a couple of concepts we've built into the platform. Imagine you have a source image, you want to denoise it, and you obtain this prediction: can you actually believe that this prediction robustly represents the underlying structure? The trick is to do a comparison on data for which you have acquired actual ground-truth target data. The concept is pretty simple: compare the two; do they match? If so, then yes, we can trust the model. One very intuitive way of doing this is simply to take the difference of the two images and compute the root mean square error (RMSE); a low error is a good thing in this case, and here
we can actually plot this as a map, where a map showing low values is a good thing when plotting errors. We can also incorporate structural similarity methods such as the SSIM metric, for which a high structural similarity, close to one, is a good thing; in this particular case we obtain a high structural similarity, which also highlights that this prediction is of good quality and can be trusted, and we can extend this to the whole dataset. Beyond building trust in the model, these metrics also let us start comparing different kinds of predictions. Here we have a noisy input and the ground truth to which we can compare it, and we've compared three denoising methods for this particular dataset, with the metrics to compare them; in this case, for this dataset, the CARE prediction performed best. But beyond the metrics themselves, the structural similarity maps also give us insight into how the individual networks perform with respect to artifacts and where artifacts tend to occur, which gives a really good understanding of how the model actually performs.

In the case of segmentation we use a very classical metric called intersection over union, the IoU metric. All this computes is the sum of all the pixels that overlap between the target and prediction masks, divided by the sum of all the pixels that belong to either of the two masks. So if we have the target mask from this image and the predicted mask from this image, we can compute the overlay between the target and the prediction: the white pixels here represent the overlapping pixels, the good ones, so the intersection is all of the white pixels, whereas the union is all the white pixels plus the false positives and false negatives, shown here in green and magenta. You can see that the union will always be greater than or equal to the intersection, so an IoU close to one is a good thing; here we obtain a very high IoU, which is great for this particular dataset. As before, we can use the IoU to do a number of things. In particular, I'm showing you here how the IoU metric can be used to check the evolution of network performance over the number of epochs, that is, over the time during which it trains and learns to perform the segmentation task: as the number of epochs increases, the IoU increases, and if we look at representative images, more epochs lead to better masks, highlighted by a higher number of white pixels in these images. And again, we can use the IoU to compare a range of different segmentation approaches; in this particular case, looking at the IoU metric, we found that a general nuclei model actually performed best for this dataset, but you can imagine using this kind of approach on a range of different datasets and networks.
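For concreteness, the three quality-control measures just described (RMSE, the SSIM map, and IoU) can be sketched in a few lines with NumPy and scikit-image. This is a hedged illustration, not the notebooks' actual quality-control code; the function names are our own.

```python
# Hedged sketch of the quality-control metrics described above.
import numpy as np
from skimage.metrics import structural_similarity

def rmse(prediction, target):
    """Root mean square error vs. ground truth: lower is better."""
    diff = prediction.astype(float) - target.astype(float)
    return np.sqrt(np.mean(diff ** 2))

def ssim_with_map(prediction, target):
    """Structural similarity score (close to 1 is better) plus the SSIM map."""
    score, ssim_map = structural_similarity(
        target, prediction,
        data_range=float(target.max() - target.min()), full=True)
    return score, ssim_map

def iou(predicted_mask, target_mask):
    """Intersection over union of two binary masks: close to 1 is better."""
    intersection = np.logical_and(predicted_mask, target_mask).sum()
    union = np.logical_or(predicted_mask, target_mask).sum()
    return intersection / union
```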
ZeroCostDL4Mic doesn't live on its own: it belongs to a wider community, the deep learning bioimage ecosystem, and talks to a range of other projects. In particular, it interfaces quite heavily with the CSBDeep team, for which ZeroCostDL4Mic can provide a training engine for networks like CARE, Noise2Void and DecoNoising; similarly, for StarDist 2D and 3D, ZeroCostDL4Mic can provide an efficient training engine, and those tools then talk to Fiji quite efficiently through their respective plugins. ZeroCostDL4Mic also interfaces with the DeepImageJ community, for which we also provide a training engine for a number of networks (U-Net 2D and 3D, as well as Deep-STORM), and DeepImageJ is of course implemented within Fiji. We can also integrate the ZeroCostDL4Mic output within larger image analysis pipelines, for instance interfacing with TrackMate, and again Guillaume will show you more on how this can be done. And we interface more and more with the bioimage.io community, a repository for pre-trained models, notebooks and also datasets, from which these different pipelines can be made available. All of these teams talk to each other very efficiently, and this interfacing makes the whole community more powerful and more efficient.

Just to wrap up, what I've talked to you about today is our platform, a platform that's great when you are forced to work from home. We've put quite a lot of effort into documenting it as best we can: we've got our GitHub page, which you can see here and which we'll look at during the demo, and we've implemented a range of different image analysis tasks, which makes it very versatile and a good place to compare a range of networks. To finish, I'd like to thank all of the people who contributed to this absolutely enormous effort; the number of teams and people that have contributed is unprecedented, at least in my experience: beta testers, developers, people from the bioimaging and deep learning communities and so on. It has been a huge effort, so I'd like to thank everyone, and I'd like to thank you all for your attention. Thank you very much.

All right, thanks a lot, Romain. I don't think we have any specific questions to ask you right away, so what we'll do now, while people start to ask questions in the Q&A, is that I'll give a quick demo of what our wiki looks like while Romain takes a breath, and after that Romain will take you through a full demo of one of our notebooks. I'm going to share my screen now. So hello everyone, by the way; my name is Guillaume Jacquemet, I'm a group leader at Åbo Akademi, and mine is one of the labs that's been involved in developing this ZeroCostDL4Mic platform. What I'd like to do now is take five minutes to show you what our website looks like, because at the moment most of our platform is based on GitHub, which may feel a little intimidating for some of you who just want to get started analyzing your images. So if you Google 'ZeroCostDL4Mic' or 'zero cost deep learning for microscopy', very likely you will find this GitHub page, which is hosted by Ricardo Henriques' group, and there you will find
ZeroCostDL4Mic. As with many GitHub pages, you will first see the code and so on, and then a quick explanation of what it is all about. Importantly, you will also see here quite a lot of video material to help you get started using the platform: talks that Romain and others have given over the past few months, but also video tutorials on how to use the platform, so it's a really good place to get started, and we will of course also put the NEUBIAS talk here when it becomes available on YouTube. What I wanted to show you is that from the GitHub page you can click through to our wiki, and that's where most of the exciting stuff actually is. In our wiki, as Romain said during his talk, we've put a lot of effort into adding documentation to help people get started with the platform as quickly as possible. On the side of the wiki you will see a lot of documentation: step-by-step guides, tips and tricks, and specific sections dedicated to some of the things Romain will explain during his presentation, such as data augmentation; we also have detailed descriptions of some of the networks available through the platform. If you scroll down, this is where you will see all the different deep learning networks available through our platform. Our platform is really a collection of Jupyter notebooks that can be run through Google Colab to do specific tasks, and this is where most of the useful information can be found: what kinds of deep learning networks others have created, what they can be used for, and what they have been shown to be used for. If you want to start using them, you can simply click here; for instance, for StarDist 2D, you can click here to open it directly in Google Colab, and it now opens in a different tab. I won't show you the notebooks in detail, because that's what Romain will do next, but one thing I wanted to highlight is that we worked really hard to make the platform as similar as possible across the different deep learning notebooks we provide. For instance, if you open StarDist 2D and Noise2Void 2D, one of which you might use to do segmentation, especially of nuclei, and the other for denoising, the overall structure of the two notebooks is exactly the same, with of course information and code dedicated to each specific network. Hopefully, once you've learned how to use one of these notebooks, you should very quickly be able to move to a different deep learning task, which will be really useful when you start wanting to use deep learning for a wide range of bioimage analysis. So that was a very brief introduction: once you reach our GitHub, don't be alarmed by the main GitHub page; travel to the wiki page, where you will find a lot of information and videos on how to get started and links to access the various notebooks. That was my brief introduction to our GitHub, and I will now leave Romain to go through one of our notebooks, so that you get an idea of what you actually need to do to use them.

Cool, thank you Guillaume. I will now share my
screen again. What we'll do today, as a very quick demo, is a little bit of image segmentation of nuclei using StarDist 2D. As Guillaume just showed you, here is the wiki page from our GitHub, and starting a notebook in a Colab session is as easy as clicking that Colab widget right here, which opens, there you go, a Colab web page on StarDist. Yours might look slightly different; I've used the sleek dark-background version because I find it easier to see. Now that we have the notebook open, let's take a quick look at another essential item we need for training: the dataset. In the chat we've shared a link to a Zenodo page from which you can download representative data, the data we used to demonstrate our StarDist notebook in our paper, made completely available for you to get started playing with this notebook. All that data is essentially a set of paired data: initial images, here fluorescence images of nuclei, which you can see right here, and the equivalent segmented images that correspond to each particular image; we've got these pairs for a range of data, which you can see right here. By downloading this you have a small training dataset. What I'm trying to highlight is that the burden of building the training dataset is typically on the user, and the emphasis should really be put on how to build and curate the training dataset for any deep learning model to work properly, or as intended.

I've already uploaded that dataset to my Google Drive, so we can simply go ahead and take a look at the StarDist notebook. What you see on the left is the table of contents of the notebook, and as Guillaume said, all of our notebooks follow roughly the same template: first a range of installation cells to install all the dependencies for the particular task, then a range of parameter settings, which mainly consist of telling the network where to find the training data, then a step that actually runs the training on the dataset, and then steps five and six, which are respectively the model validation (where you evaluate your model) and, once the validation has been performed, using the model we've just trained on new data. So let's get started. Before I begin, I should say that we've tried to document all of this as much as possible within the notebooks themselves, so simply following the explanations should already get you to the relevant places. We can start by getting GPU access and mounting the data, which means we can access the data on a specific Google Drive account; this is what I'm doing right here. All you need to do is get an access code that you paste in, and you thereby give that particular session access to your Google Drive. It's now mounted, and we can start the installation.
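For reference, the mounting step boils down to one standard google.colab call, which prompts for the access code just mentioned; the Drive paths below are hypothetical examples, not the notebook's actual variable names.

```python
# Mount Google Drive inside a Colab session (prompts for an access code).
from google.colab import drive

drive.mount('/content/gdrive')

# Once mounted, data uploaded to Drive is visible as ordinary folders,
# e.g. (illustrative paths):
training_source = '/content/gdrive/MyDrive/StarDist_2D/Training_images'
training_target = '/content/gdrive/MyDrive/StarDist_2D/Training_masks'
```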
While the installation is running, let me show you on the left how the Google Drive is now accessible within the session: what you find here is essentially a copy of my entire Google Drive. I'll just go into the folders where I've copied that StarDist 2D data; I've got a range of data, and you will recognize some of the folders I've just mentioned. If I open this, you see that the data I just showed you in Fiji is also there. Let's finish the installation: there's a second step that allows everything to be set up properly and loads all the key dependencies, and then we're good to go.

Now let's get to the stage of setting the parameters. If I open the relevant folder, I've got the training images, with the images and the masks in two different folders, and I can input them into the user interface right here. Let's give the model a name; I like to put today's date, so something like '15_StarDist_model_1', and for the model path I'll tell it to save the model into a folder I created previously, called 'models'. We'll run fewer epochs than set by default, just 10 epochs in the interest of time. All the other inputs are more subtle parameters for setting up the network, and we're not going to talk about them today, also in the interest of time, because the default parameters already work pretty well. We have now loaded the datasets; source and targets are available right here. We've got access to data augmentation; we'll initially not do any augmentation, but on the other hand we will load a pre-trained network, which is available directly from the StarDist repository, made available by the original authors of the network. We'll use this as a pre-trained model, which is a way to vastly speed up the training: instead of starting completely from scratch, it starts from a model that has already been trained on a range of data. So I'll set this up to use the pre-trained network, and we can now start initializing a range of variables, creating the model and the dataset objects, which might take a minute or two; after this we'll be able to start the training.

Are there any questions while this is happening, Guillaume and Lucas? (Both Lucas and I are answering various questions, but I don't think there are questions on the training yet.) Okay, excellent. This might take a little bit of time, just to load the individual datasets: what we have here is quite a large number of images, as you see, that we manually created, or I should say Guillaume manually created, and that makes this quite a powerful training dataset for what we're trying to do today. How are we doing with time, by the way? (We're fine with time; we have another good 15 minutes for this demo.) Brilliant, there you go. The time it took right here was essentially to step through the data and load the relevant dataset into RAM, which takes a little while due to data transfer, and that's what it was doing just now. Now all the variables are set, the training has started, and what we can see right here is, there you go, training is happening: epoch one out of ten is underway.
Each epoch goes through a number of steps, 23 steps per epoch in this particular case; it has just finished the first epoch and is moving along to the second one. While this is happening, which will just take a minute, I'll move back up to show you something I only brushed over very quickly: the initial installation at the top of the notebook actually told us which kind of GPU was made available to us through this particular Colab session. What we see here is that we've got a Tesla T4 GPU available in this session. If you try this at home you may obtain a different kind of GPU, which will affect how fast your training and your individual steps go, and you can also get an idea of how much RAM is available on the card, and so on; this gives us a little information about the virtual machine that Google Colab has made available for this session.

Let's go back to our training; we're now nearly done. What we'll do next, once this is finished, is a little bit of evaluation of this model, for which we'll inspect the loss function, and we'll also do a little bit of error mapping, exactly as I showed you in my talk. For this error mapping and quality control we'll be using a set of ground-truth images that we have available right here, and I can give those paths already to save a little time; I'll put that in there. Now all the epochs have been executed, and it's just completing a little bit of fine-tuning of the network, and there you go: the network has been trained. If I show you my very busy models folder, here is the folder that now contains all the information about the model we've trained, and since I used a path that points directly to my Google Drive, the model is already saved there. There's a range of information in there, including a PDF report, which I can show you; let's do that. Within that folder there's a PDF report which informs you of all the settings used for that particular training session: all the parameters that were set, but also information about the versions of the important libraries that were used and the location of the training datasets, which is quite important for keeping track of what happened for each individual model.

Now let's evaluate our model. Let's run this, telling it to evaluate the current model, so we can take a look at the loss function. Because we've only trained over 10 epochs, that is too few to visualize anything really useful in the loss curves, but you can see that, generally speaking, both the training and validation losses decrease, which is an indication that the performance of the model is improving over the epochs. Now let's do a little bit of error mapping on this dataset. I've given it access to a test dataset that was not present in the training dataset, which is important for the reliability of the validation step. What this step does is exactly what I showed you in my talk: it compares images to
the ground-truth images we've provided. What you see here is the provided ground truth and the predictions right here; the overlay is shown in dark magenta, whereas the light magenta and the green represent the false positives and false negatives. If we take a look at the intersection over union, it's 0.852, and that's already really quite good for such a short training session; the trick here was to use a pre-trained model, as I showed you earlier. If we wanted to improve on that performance, one thing we could do is go back up and train a second model: let's change the name of the model and run this, which will just take a second, but this time we'll enable data augmentation as well. Data augmentation will essentially flip and rotate the provided training dataset to increase the diversity of its structural content, which makes the model more robust. We could also add a further number of epochs, so train for longer, which would also improve the performance of the network, but let's just do data augmentation in this particular case, again with a pre-trained model as before. This will only take a couple of seconds now that everything is already in RAM, which is great, and we can now start the second training session, which will create a second model we'll take a look at.

I think that gives you a very good overview of how to run a training session. While this new training session is running, I'll very briefly show you what the last part of the notebook looks like, beyond quality control: how to use the trained model. Now that your trained model is sitting somewhere on your Google Drive, you can use this section to simply load new data that the model has never seen, to perform, in this particular case, the segmentation using the currently trained model. That's as simple as providing the right input folder and the folder where the masks should be saved; we're not going to cover this today in the interest of time, because it is also fairly straightforward. So Guillaume, I think I'll hand back over to you for the next step in the presentation.

Thanks. Maybe, to disrupt things a little, you could show the very last cell in the StarDist notebook: if you're interested in making predictions on very, very large images, we've also incorporated that into the StarDist notebook. There we're talking about gigapixel-size data: even if you trained on small images, you can still apply the model to very large datasets. (Do you want me to run a little bit of data on this?) No, no, you don't have to; I think it's just nice to mention it. Cool.

Okay, so I'm going to start sharing my screen and then we can go on to the rest of the program. What I would like to do now, in this presentation, is give you a few examples of how we use the ZeroCostDL4Mic platform in my lab to do research. But before I start, I just wanted to say briefly where I'm calling you from, because a webinar is of course a little impersonal and it's nice to learn a little about each other. My lab is located in Finland, in Turku, a small city right here in Finland's very south, next to the sea, and, as is to be
expected in Finland, we have warm summers and fairly cold winters; here is an example of the river that runs through Turku, frozen in the winter. One of the reasons I'm highlighting this is that I'm hoping to see many of you join us next year, because we are hosting the big ELMI meeting, one of the international European light microscopy meetings, and we will have sessions on image analysis, so I'm hoping many of you will be able to come and visit Turku and that we can also meet in person there.

Now, regarding the work we do in my lab, I thought I'd give you a very brief introduction so that you get an idea of where I'm coming from. I won't talk too much about the data we actually generate, but I wanted to let you know that in my lab we work on understanding cancer cell migration using microscopy technologies. Here is an example of an ovarian carcinoma cell that has been labelled to visualize the actin cytoskeleton and that is migrating in a complex 3D environment; this is the kind of thing we do. The reason we started using deep learning to analyze our microscopy images is that we really thought this kind of new technology could help us get more out of microscopy data. That's a very brief introduction to the kinds of things we're interested in, but hopefully it gives you an idea of what I'm going to show you next. The purpose of this talk is to show you how we use deep learning and what we use it for, and because of this I thought it would be nice to give a little overview of what you can actually use deep learning for. If you think about it, you can now use deep learning for almost any kind of image analysis task: many different deep learning networks have been published to do a lot of things, like object detection, segmentation, image classification and so on. The examples I want to show you during this talk are some of the things we are using in my lab for research: examples of segmentation, a little bit of deep-learning-enabled image registration, image restoration and denoising, and image-to-image translation. I want to go through a few of these examples to give you a flavour of how we use the ZeroCostDL4Mic platform to do research.

The first example I want to talk about is denoising. Why do we care about denoising? Mostly because it is very important for improving live-cell imaging. I showed you earlier a movie of a migrating cell, and you probably know that when we use fluorescence microscopy, lasers are very toxic for cells, so you need to use low laser power to image a sample; otherwise you will influence what the cells do, and you might even kill them. You also know that for fluorescence microscopy in living samples you need the molecules, proteins and so on to be fluorescently tagged to be able to see them. Very often you do that, especially for a protein, by expressing your protein of interest, and if you express it at too high a level, so that you can see it better, that very often really affects the biology: if you over-express a molecule of interest, you might affect the behaviour of the protein. So it's in your interest to use very low or endogenous expression levels of what you want to look at, and if you do those two things together,
then you will almost always get noisy images, which means it's important to have technologies you can use to restore your images and remove most of the noise, to keep only the data you want. Here is an example from our early days of using deep learning for denoising, using the fantastic tool developed by Krull et al., Noise2Void, which enables denoising of microscopy data. What you're looking at here are some of our noisy images: cancer cells expressing a molecule called paxillin, which forms plaques in the cells, but here paxillin has been endogenously tagged, so we are looking at the endogenous molecule. We are also imaging cells on soft hydrogels, not on a coverslip, which makes the imaging a bit more complicated, and because of this our images are quite noisy. So we used Noise2Void to denoise the data as much as we could and to better observe the structures the cell is making to attach to this environment. For us this was really important, because what we really wanted to understand was how the cell exerts forces on this environment using those structures, combining this live imaging with high-resolution traction force microscopy to understand how the cells interact with their surroundings. I'm not going to go into any detail; it's just to showcase how this kind of technology lets us better understand processes in living samples.

As part of ZeroCostDL4Mic we now provide three different denoising deep learning networks: one is CARE, one is Noise2Void, which I've just introduced, and the third, for which we've recently made a notebook, is DecoNoising. In this slide I just wanted to re-emphasize something Romain presented in his talk: because the platform now has different tools that denoise microscopy images in different ways, it's really useful to use the quality control metrics to identify the best denoising strategy for your data. The other thing I want to highlight is that deep learning is not the only way to denoise data: a lot of other algorithms have been developed over the years, and it's always good to compare how well those algorithms do against deep learning. So avoid the deep learning hype and always do a sanity check: is deep learning really the most optimal technique for what you want to do? Here's an example using PureDenoise, a Fiji plugin that's quite good at denoising data; in this particular example CARE performs best on these datasets. PureDenoise can denoise your data, but you get much better results with CARE.

Something else we use a lot in the lab for our research is structured illumination microscopy (SIM). Why do we use SIM? Because it's supposed to be relatively live-cell friendly and you can do multicolour imaging, two things we of course care about a lot when imaging our samples. However, when people say that SIM is live-cell friendly, they really mean it's live-cell
friendly compared to some of the other super-resolution microscopy technologies. If you try to do live imaging with SIM, one thing you realize very quickly is that you often get really noisy images, because you still need quite high laser power for live-cell SIM. This means that, especially if you use red fluorescent proteins, you will either kill your cells quickly or bleach your sample, or you need to use low laser power to prevent this, which means you get noisy images. Here's an example of breast cancer cells, looking at the actin at the cell-cell junctions between cancer cells, and you can see that our live imaging here is quite noisy; but using CARE, implemented in ZeroCostDL4Mic, we can really improve the quality of our microscopy images. If I play the movie, hopefully you can appreciate that we can get nice data using SIM on these really dynamic processes, and that we can then use CARE, implemented in ZeroCostDL4Mic, to really improve the quality of those images. The way we do it is to train CARE on dedicated training datasets, either using fixed samples, so that we can use high laser power and get really nice-looking images, or using a dedicated live sample that we sacrifice just to generate a training dataset. Here is an example where we generated high- and low-quality image pairs to train the CARE model, and we then get really high-quality predictions out of it; using this model, generated from a dedicated training dataset trained in ZeroCostDL4Mic, we can then denoise our live imaging data. That's something we now do quite a lot in the lab: producing dedicated training datasets to improve our live imaging data.

So that was denoising. I just want to mention some of the other things we're doing, one of which is image registration. I won't spend too much time on it, but one of the projects in the lab uses zebrafish embryos to study changes in morphological features, especially related to their vasculature. We end up with hundreds and hundreds of images of zebrafish embryos, and to be able to compare them nicely to each other we need to align the images, which has actually proven really challenging to do using classical registration techniques. So we've implemented a deep learning algorithm called
DRMIME, which specializes in the registration of 2D images; if you're interested in that kind of thing, we have a notebook to do that. But what I want to spend most of my time talking about is object tracking. In my introduction I was telling you that in my lab we're really interested in cell migration, which means we spend a lot of time looking at moving cells, and we want to track them in order to gain quantitative information on how specific molecules or environments regulate the way cancer cells move. For many years we basically did that using manual tracking, because there was not really anything that worked for us, but recently, using these deep learning strategies, we've been using, for instance, StarDist together with the tracking algorithm TrackMate to improve our automated tracking pipeline. Here is an example from a paper we are currently putting together, where we track collectively migrating cancer cells that have their nuclei labelled: we use StarDist to detect the nuclei and then TrackMate to do the automated tracking. In this particular case we're interested in a molecule called myosin-10, and we found that if we remove myosin-10 the cancer cells move a bit slower. If you're interested in this kind of application, last year we published a small protocol paper in F1000Research where we explain how we combine StarDist and TrackMate to do automated tracking. There are several ways to do it, but the way we've been doing it is to first train a StarDist model, the same way Romain just showed you in the demo using ZeroCostDL4Mic; as part of the ZeroCostDL4Mic notebook, one of the outputs you can choose is dedicated to tracking using TrackMate (it's called a tracking file in the notebook). With this we can do batch analysis of both the detection stage and the tracking stage using TrackMate, which is available in Fiji, and start tracking hundreds and hundreds of videos one after the other, to gain quantitative information on how cells move.

I've shown you an example using cancer cells with labelled nuclei, but you can also do the same using bright-field movies. Here is an example of T cells migrating on coverslips, where we trained, using ZeroCostDL4Mic, a dedicated StarDist model that can recognize the cells from bright-field images; the model was trained on the ZeroCostDL4Mic platform and the tracking was done with TrackMate in Fiji. These data were provided by Nathan Roy, and using this pipeline we were able to reproduce some of his data showing that the T cells migrate differently depending on whether they attach to ICAM or VCAM. Here is an example of the training dataset we used in ZeroCostDL4Mic to train the StarDist model: a combination of the bright-field images we are interested in and manually annotated masks, which are label images; you provide this kind of data to the StarDist notebook to train your model, and then you can do the tracking afterwards. We've been really excited about using StarDist and TrackMate because it has saved us a huge amount of time and allowed us to produce a lot of data related to cell migration, as sketched below.
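To give an idea of what the detection half of such a pipeline can look like in code, here is a hedged sketch that runs a hypothetical custom StarDist model frame by frame over a 2D movie and writes out a label-image stack, which a label-image-based tracker such as the new TrackMate can then consume. The model name and file names are illustrative, not ours.

```python
# Hedged sketch: frame-by-frame StarDist detection producing a label movie.
import numpy as np
from csbdeep.utils import normalize
from stardist.models import StarDist2D
from tifffile import imread, imwrite

# Load a previously trained model from disk (e.g. trained in ZeroCostDL4Mic).
model = StarDist2D(None, name='brightfield_tcells', basedir='models')

movie = imread('migration_movie.tif')              # shape: (time, y, x)
label_movie = np.zeros(movie.shape, dtype=np.uint16)
for t, frame in enumerate(movie):
    labels, _ = model.predict_instances(normalize(frame, 1, 99.8))
    label_movie[t] = labels.astype(np.uint16)

imwrite('label_movie.tif', label_movie)            # ready for tracking
```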
Because it seems to work so well, we've recently teamed up with Jean-Yves Tinevez's group, one of the main authors of TrackMate, to develop a new TrackMate that incorporates deep learning and machine learning elements directly within TrackMate in Fiji. It has been a real pleasure to work with Jean-Yves on this project, because he is a real TrackMate magician: he has been able to implement a lot of different things that will be coming out soon in TrackMate in Fiji. Among the new things you will be able to do, Jean-Yves has introduced, in the new version of TrackMate coming really soon, tracking directly from ilastik projects and from Weka: if you have data that can be segmented using a Weka model, you will be able to do that directly in TrackMate. You can use StarDist directly within TrackMate to do the detection, with TrackMate taking over the tracking; you can also use custom StarDist models, so if you train one using ZeroCostDL4Mic, you can import it directly into TrackMate and do your tracking directly in Fiji. TrackMate will also be able to do tracking directly from label images, in order to track instance segmentation results, for instance results acquired using Cellpose or any of the other deep learning strategies that produce label images. So a lot of different types of input will now be available in TrackMate, which is really exciting, at least for us. You will be able to track cells and other objects, and of course do lineage tracing and so on, but what's also really exciting is that in the new TrackMate you will be able to follow changes of shape over time, which allows a lot of new kinds of analysis of our migration data; you can also follow changes of intensity within regions of interest over time, and I will show you an example of that, and you can also do some 2D-to-3D label magic that I will show you in a later slide.

Here is what the interface looks like: if you've used TrackMate before, you will know that in one of the steps you need to choose which detector you want to use to identify the objects you want to track, and now you will have access to, for example, a StarDist detector, an ilastik detector, or a label image detector. This new version of TrackMate, which Jean-Yves' group and we have been working on, really completely changes how we can do tracking directly in Fiji. Here are some examples of biological applications showing how this can be useful and how you can combine ZeroCostDL4Mic with the new TrackMate. We trained a deep learning model in ZeroCostDL4Mic to detect nuclei: here you're looking at cancer cells that have their nuclei labelled, but that also express an ERK activity reporter, a fluorescent molecule that goes into the nucleus when ERK, which is an important kinase, gets activated. We can use StarDist, integrated in TrackMate, to identify those nuclei and then track over time the changes in intensity of the ERK reporter in the nucleus, all of this directly in the new version of TrackMate, and of course we can track the cells at the same time. This is what the video looks like: by eye you will not be able to see many changes in the video, and
This is roughly what the video looks like: by eye you will not be able to see many changes, but you can see that TrackMate is able to track the cells very nicely. And because it's all about extracting quantitative data from those movies, we are able to get information on the changes of ERK activity in the nucleus of the tracked cells over time, and to correlate that, or not, with the ability of the cells to move at that time. Here is a heatmap where each lane is a different cell and the changes in color highlight the changes of ERK activity in that particular cell: for instance, in these cells the ERK activity at the beginning of the movie was really low, and then suddenly you get a peak of ERK activation, and so on. With these abilities to follow shape and intensity over time in TrackMate, you can really gain a lot of different types of information in your tracking experiments, which is very exciting for us.

As I said, the new version of TrackMate will be able to track label images directly. For instance, if you want to use Cellpose, which is a very popular deep-learning-based cell segmentation algorithm, you could use our zero-cost deep learning for microscopy Cellpose notebook to predict your cells, download the label images, and input them directly into TrackMate to track those cells over time; we have a function in our zero-cost notebooks to do exactly that. Here is an example with collectively migrating cancer cells where the actin and the nuclei are labeled, and these are the predictions made by Cellpose, done in the zero-cost Cellpose notebook. We didn't retrain the network here, we just used the 'cyto' model provided by the Cellpose authors, and then fed the label images directly into TrackMate to do the tracking. This example uses Cellpose, but you can of course imagine using any other instance segmentation strategy that gives you a label image: think SplineDist, which is also very good for segmentation, or EmbedSeg.

The last thing I wanted to highlight regarding this new TrackMate is that, because the shapes are stored during tracking, you can now use TrackMate to segment objects in 2D and then track them across z in order to generate 3D labels. Here you have a 3D rendering of an acinus of cancer cells grown in 3D, and you can see nuclei everywhere. What we did is use StarDist 2D in TrackMate to identify the nuclei at each z-plane; TrackMate then tracked those structures across the stack and exported 3D labels out of those tracks, which allowed us to generate a 3D segmentation using only a 2D algorithm. This is really nice because training 3D algorithms is challenging: you need to annotate the data in 3D as well, which is really time consuming. Now, using TrackMate, you can apply a 2D segmentation algorithm and track the segmentations across your z-stack to reconstruct 3D labels, which can really accelerate the way we do 3D annotations.

Here I just want to acknowledge the fantastic TrackMate team, which is led by Jean-Yves Tinevez at the Pasteur Institute in Paris; Joanna in my lab has been pushing this forward, and of course other people are involved in the project. We don't yet have a release date for TrackMate version 7, which will contain all these new improvements, but hopefully it should be out in the next couple of months, so stay tuned if you are interested.
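As a rough sketch of the first half of that workflow, here is how a label-image movie might be generated with the pretrained Cellpose 'cyto' model in Python. This assumes the Cellpose 1.x/2.x API; file names are placeholders, and the actual zero-cost notebook wraps all of this up for you.

```python
# A minimal sketch (not the notebook's own code): segment a movie frame by
# frame with the pretrained Cellpose 'cyto' model and save the result as a
# label-image TIFF that TrackMate's label image detector can read.
import numpy as np
import tifffile
from cellpose import models

movie = tifffile.imread("cells_timelapse.tif")   # placeholder, shape (t, y, x)
model = models.Cellpose(model_type="cyto")       # pretrained 'cyto' model

labels = []
for frame in movie:
    # eval returns masks, flows, styles and estimated diameters in Cellpose 1.x/2.x
    masks, flows, styles, diams = model.eval(frame, diameter=None, channels=[0, 0])
    labels.append(masks.astype(np.uint16))

tifffile.imwrite("cells_timelapse_labels.tif", np.stack(labels))
```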
The last thing I wanted to talk about regarding the use of the zero-cost deep learning for microscopy platform is model chaining. Before I start on this, I just want to highlight one of the technologies that Romain talked about, which is image-to-image translation, for instance using pix2pix; that's essentially predicting one image from another. Here we have images of actin and images of the nuclei, and we can train pix2pix to predict what the nuclei will look like based on the actin. This is what the pix2pix prediction looks like on this particular data set, and you can see that the images are not only very realistic, which is what pix2pix is really good at, but also fairly accurate: the predictions are really good.

So why do we care about this? I will give you an example from our ongoing research in the lab, where we are interested in understanding how cancer cells interact with endothelial cells. In this particular context we are flowing cancer cells on top of a monolayer of endothelial cells, and what we want to understand, of course, is how this is regulated. We basically have this kind of raw data, low-resolution brightfield images, where you see endothelial cells in the background, flowing cancer cells, and here some cancer cells that are attached. What we currently do in the lab is first train a StarDist model to detect and specifically recognize only the cancer cells, and this works really nicely. We can then use TrackMate to specifically track the cells that are attached, because they are the ones we care about, or at least the cells that stay attached for a short period of time; the ones that are freely flowing on top we're not so interested in. By using tracking we can keep only the cells that are attached. This is already pretty neat, because we can now get a lot of quantitative information on this specific population of cancer cells just from those brightfield images.

But using image-to-image translation we can go even further. Here is again the same movie I've just shown you; we can train a specific pix2pix network to predict where the nuclei of all the cells present will be, and this is the kind of result we get, which is also really accurate. I'm not showing you the real data here, but this is a fake image of where the nuclei will be in this particular movie. You can also predict other structures, such as cell-cell junctions, which is one of the areas we are really interested in. So just from those brightfield images we can predict where the junctions and the nuclei are. Then we can use segmentation algorithms: here we used StarDist to detect where all the nuclei of the endothelial cells are located, and Cellpose to detect where all the individual endothelial cells are located and identify the junctions. We can then put all of these together to create our little 8-bit video game, where we identify the nuclei of the endothelial cells, the junctions of the endothelial cells, and the attached cancer cells in this system, all predicted from the brightfield data alone. We are now fine-tuning this pipeline to try to better understand how cancer cells interact with endothelial cells during this process, and what kind of parameters regulate this.
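A minimal sketch of the chaining idea in Python might look like the following. Here predict_nuclei() is a hypothetical stand-in for a trained pix2pix generator (the zero-cost pix2pix notebook produces such a model), while the StarDist calls use the package's own pretrained 2D model; paths are placeholders.

```python
# Sketch of model chaining: an image-to-image model predicts a fake nuclei
# channel from brightfield, and StarDist then segments that prediction.
import tifffile
from csbdeep.utils import normalize
from stardist.models import StarDist2D

brightfield = tifffile.imread("brightfield_frame.tif")  # placeholder path
fake_nuclei = predict_nuclei(brightfield)  # hypothetical pix2pix wrapper

# pretrained 2D StarDist model shipped with the stardist package
model = StarDist2D.from_pretrained("2D_versatile_fluo")
labels, details = model.predict_instances(normalize(fake_nuclei, 1, 99.8))
tifffile.imwrite("predicted_nuclei_labels.tif", labels)
```

The same pattern extends to the junction channel, with Cellpose taking the place of StarDist for the whole-cell segmentation.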
If you have more questions about this, I'm happy to chat about it later. Okay, these are all the applications I wanted to mention. I just want to highlight that our platform is always growing: we currently have 26 different notebooks for a variety of deep-learning-related tasks, and we have at least two to five new notebooks that should be released in the coming months, so stay tuned, some exciting things are coming. I also want to say that we are very interested in having people contribute, so if you're interested in generating your own zero-cost deep learning notebooks, don't hesitate to reach out; we have some guidelines for this. Don't hesitate to get in touch if there is something you'd like to contribute, and reach out to us via GitHub or the image.sc forum if you have any questions.

I now just want to acknowledge the whole zero-cost deep learning for microscopy team: as Romain said, it's really been a joint effort from many researchers around the world who have been helping us build this pipeline. Finally, I want to acknowledge my lab, since I've shown you some of their data, data from Gautier, Joanna and Susan, and say thank you to the funding bodies that support our work. On this, I think I've taken a little bit too much time, so I'm going to stop here, hopefully answer the questions you may have, and let Romain carry on with the final demo if we still have time for that.

There aren't any questions that need to be addressed immediately, so we could use the remaining six or seven minutes to go through one final notebook demo. What do the organizers think?

Yeah, I'll go ahead, I see some head nods. Okay, cool. I'm glad, because that's a notebook I'm really excited about, and it also addresses a range of issues that Guillaume mentioned in his presentation: essentially, the issue of creating training data sets for segmentation tasks. The notebook I will be showing you leverages a collaboration with Wei Ouyang, and in particular his fantastic work on Kaibu, which allows you to bring the human into the training loop.

Let me show you how this works through just one slide. Imagine you've got a data set that looks like this, you wish to segment all of it, and it's a very large data set, so deep learning will be great for you, but you still need to segment a subset of it to provide a training data set for the model. So you will take an annotator and start drawing things by hand, and that's exactly what the Kaibu interface does. What's really important is that once you have hand-drawn a handful of small patches of your data set, you can already start sending those to a background trainer, by which I mean a model that trains in the background while you keep segmenting by hand. So your little patch, which takes just a few seconds to segment, can be sent to the trainer and used directly to start a training session using ImJoy.
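One practical detail in such an annotation loop is turning hand-drawn polygons into label images the trainer can consume. The notebook handles this internally; a minimal sketch of that rasterization step, assuming each shape comes back as an (N, 2) array of (row, col) vertices, could be:

```python
import numpy as np
from skimage.draw import polygon

def polygons_to_labels(shapes, height, width):
    """Rasterize hand-drawn polygons into a label image (one id per object)."""
    labels = np.zeros((height, width), dtype=np.uint16)
    for i, verts in enumerate(shapes, start=1):
        # fill each polygon with its own integer id
        rr, cc = polygon(verts[:, 0], verts[:, 1], shape=(height, width))
        labels[rr, cc] = i
    return labels
```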
While this is being trained, we now have access to a model that is tuned to that particular data set, admittedly trained on very few patches, so it will not perform very well initially, but it will perform somewhat. What this means is that we now have a model that can be used to pre-segment any new small patch that you start labeling by hand. So we enter a virtuous cycle of creating more segmented data while improving the quality of a small model that will perform better and better over time. Within 10-15 minutes of manually segmenting a few things, then using the model prediction to pre-segment and improving on the segmentations offered by the model, you can build a very strong data set pretty quickly. You see how using the prediction and feeding it back into a user interface where it can be amended by hand is actually very powerful: we now have the human and the machine working in parallel towards building a better data set and a better model. This is what allowed us to build data sets such as the one Guillaume showed you earlier; in fact, what I'm showing you right here was also built using the 2D-to-3D TrackMate trick.

But without further ado, let me show you the notebook. It's called the interactive segmentation notebook, and it's also available from our wiki. I've already pre-installed things in the interest of time, so I will just load a little bit of data to show you a very quick example. I've got data from a fantastic colleague called Chantal Roubinet, who was happy to share with me some neuroblastoma cells that were labeled for lamin, so what we have here are nuclear images like this. I've loaded that data set, and it doesn't have any masks associated with it yet. We can now give a bit more information on where the model will be saved, that little model that trains in the background; similar to before, let's give it today's date, 'kaibu model one'. We can also use a pre-trained model in this particular case. So let's prepare the model, and let's start the important part, which is the user interface, coming up in just one second. As I was saying, that user interface leverages both Kaibu and ImJoy, which are the work of Wei Ouyang, so this work is in collaboration with him.

We now have the user interface, and we can full-screen it, but let me first load a small patch. You see, this one is just the edge of a cell; we've got a handful of nuclei in this particular patch. Let's move to full screen, there you go (sorry, just a little mistake; there you go, it's back). Now we can full-screen this and get an image: this one is a little bit of background, this one has quite a range of nuclei, and there you go, we've got a couple of nuclei in here. Let's already run a prediction from that pre-trained model I was telling you about, and we already have a handful of segmentations. But you see that a number of them are not done properly, so we can delete a handful of them, the ones that we think may well be a little bit dodgy.
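For those curious how such an interface is opened from Python, here is a rough sketch based on Kaibu's documented ImJoy API; the actual notebook adds patch management and the background trainer on top of this, and details may differ between versions.

```python
# Rough sketch of opening the Kaibu viewer via the ImJoy RPC, e.g. inside a
# notebook with the ImJoy Jupyter extension installed.
from imjoy_rpc import api

async def setup():
    viewer = await api.createWindow(src="https://kaibu.org/#/app")
    await viewer.view_image(image, name="nuclei patch")  # 'image': your 2D array
    # an empty polygon layer for drawing and correcting segmentations
    await viewer.add_shapes([], shape_type="polygon", name="annotation")

api.export({"setup": setup})
```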
What we can now do is just redraw the ones that were done poorly, there you go, and here's another example right here; there were a couple of examples in that corner too, so let's redraw this, and this too. You see, it's as simple as this: the red ones are the ones that were already segmented okay by the pre-trained model, and the green ones are the ones I have modified. We can now send that little patch for training; it's now being sent to the machine, as I was telling you. Now that we have one little patch sent for training, we can actually start the training, and if I take a quick look at what's going on over there, a training session has started in the background. Every time I send a new patch, it will take that new patch into account for training and improve over time. So now I've got a new patch; let's run a prediction, which is already using the new data I sent. It's relatively decent; we might want to modify or amend one or two of those examples. This one is not ideal, so I'll remove it. Let's draw a handful more by hand. You see that this will just take a minute, because the vast majority of those nuclei have already been segmented pretty well, so we can just re-segment one or two.

Romain, just one really quick question: somebody asks whether the brightness can be changed in Kaibu.

So there's a little bit of adjustment that can be made to the images, but not a whole lot at the moment; the patches are saved such that the brightness should be pre-optimized, or contrasted optimally, by default. But it's also a work in progress, and Wei, I should say, keeps adding new functionalities to Kaibu, including that kind of thing, so if the particular image transformation you're looking for isn't available quite yet, it likely will be very soon.

So I will send this for training. We have now sent a couple of patches, so let's try predicting on this, and there you go: it's already done a decent job at quite a number of them. It has missed some low-intensity ones, so maybe I would redraw one or two of those right here, but you get the idea. You see how very quickly, even in just a couple of minutes, you can build some very good quality segmented patches with their masks, with a view to building a training data set.

If we go back to the notebook now, we can check the annotated training images; I've only built two of them right here, so you see them there. What we can also do is stop the training and then evaluate the model, just like in the other notebook, and in particular look at the loss function. It will look pretty messy, because you can see the points at which I've added the individual additional patches to the training, and you see how the loss curves evolve over time: every time you add a specific patch, it will kind of spike. You could also do the error mapping on that model, and eventually use that trained model once the validation is performed.
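For readers who want to reproduce that inspection, a minimal sketch of plotting such a loss trace, with markers at the steps where new patches were added (variable names here are placeholders, not the notebook's own), could be:

```python
import matplotlib.pyplot as plt

def plot_loss(train_loss, patch_added_steps):
    """Plot a training-loss trace and mark the steps where patches were added."""
    plt.plot(train_loss, label="training loss")
    for step in patch_added_steps:
        plt.axvline(step, linestyle="--", alpha=0.4)  # expect a spike here
    plt.xlabel("training step")
    plt.ylabel("loss")
    plt.legend()
    plt.show()
```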
I should say that right here all of this works only with Cellpose models, and we can use some of the Cellpose pre-trained models initially to start running predictions even before any patches have been segmented, but Wei is also working on implementing a range of other types of segmentation networks in this. Still, I've shown you that even within just a couple of minutes I could already start building very good quality segmentations. In the interest of time I will stop right here, and I'll be happy to take any further questions in the forthcoming minutes. Otherwise, I'm really happy to have taken part in this, and I hope you all took something away from it. We're always happy to receive contributions, feedback, suggestions of networks to implement, and any other ideas on how to improve the platform altogether, so feel free to get in touch with us, and I'd like to thank the organizers again for having us today.

I'll just echo Romain: thanks a lot for inviting us, and thank you for joining the webinar; I hope you've found it interesting. Don't hesitate to reach out. I think we're going to spend the next few minutes answering all the questions that are left, and I also think that many of those answers will be put on the image.sc web pages, so if you can't find an answer just yet, bear with us; we will find time to put it online later on.

Thanks a lot guys, this was very well structured and very nice and easy to follow; it was great, thank you. Would you like to answer any of the questions live?

I've not kept up with the questions in the last few minutes, so did you guys see any burning questions that we could discuss live in the next five or ten minutes?

I can read some for you, and I think we can answer most of them probably better in written form. One question was: does Kaibu work in 3D?

It doesn't yet, but using the trick that Guillaume described earlier, of linking segmented z-planes across the z-axis, works really well in fact; the data I've shown you on the neuroblastoma was actually done this way. And again, to repeat the advantage of doing it this way: it's a lot easier to train a 2D model than a 3D model, so that's a good trick that a lot of people are starting to use.

Okay, thank you very much again; we'll answer the rest of the questions in the chat. All right, we will stop this webinar. The recording will be available on YouTube, and please remember to give feedback and suggest new topics so that we can organize new webinars for you. Thanks so much!
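To make the z-linking trick from that last answer concrete, here is a minimal numpy sketch: segment each z-plane in 2D first, then relabel objects so that segments overlapping across consecutive planes share one 3D label. The IoU threshold and the helper are an illustration only, not TrackMate's actual implementation, which uses its full tracking machinery.

```python
import numpy as np

def link_labels_across_z(planes, min_iou=0.3):
    """planes: list of 2D label images, one per z. Returns a 3D label stack."""
    out = [planes[0].astype(np.int32)]
    next_id = int(out[0].max()) + 1
    for plane in planes[1:]:
        linked = np.zeros_like(plane, dtype=np.int32)
        for lab in np.unique(plane)[1:]:               # skip background (0)
            mask = plane == lab
            prev = out[-1][mask]                       # labels underneath, one plane down
            cands, counts = np.unique(prev[prev > 0], return_counts=True)
            best = None
            if cands.size:
                c = cands[np.argmax(counts)]           # candidate with biggest overlap
                inter = counts.max()
                union = mask.sum() + (out[-1] == c).sum() - inter
                if float(inter) / union >= min_iou:
                    best = c                           # continue the existing 3D object
            if best is None:
                best, next_id = next_id, next_id + 1   # start a new 3D object
            linked[mask] = best
        out.append(linked)
    return np.stack(out)
```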