I'm Sharon Yates. I'm a researcher at the University of Oslo, and I'm going to present some of the analysis tools that have been developed as part of the Human Brain Project, specifically those that can be used in combination with, or that rely on, 3D brain atlas resources. The tool I'm going to focus on is ilastik. ilastik is a segmentation tool developed by a team in Heidelberg, so it's not actually developed in Oslo, but we collaborate closely with them. It can be used to segment any type of image, so you're not restricted to images of brain sections; you can segment anything, but we use it in the context of brain sections. In the second part of the talk you're going to hear about JuGEx, which is another tool that can be used to analyse human data.

First, a bit of background on tools for segmentation. The traditional approach is to threshold based on labelling intensity, for example with a tool such as ImageJ. Normally, when you do segmentation, what you're interested in is pulling out the labelling, or whatever it is in the image that interests you; in this context, that would be labelled cells, immunohistochemical labelling. The challenge with plain intensity thresholding is that it can be very difficult to remove non-specific labelling, and it's also difficult to batch analyse (there's a small code sketch of this approach at the end of this introduction). A new generation of tools and plugins has been developed to solve some of these problems, and ilastik is one of them: it uses a supervised machine learning approach. Here you can see an example. At the top is a brain section from a mouse that has been segmented by thresholding the intensity with ImageJ: you get a lot of speckles, non-specific labelling that you can't remove. This one here was done with ilastik: it's much cleaner, and we pulled out just the labelled amyloid plaques, which were what we were interested in in this case.

So what is ilastik? It's a toolbox for interactive image classification, segmentation and analysis based on supervised machine learning algorithms. Segmentation is only one of the things it can do; it has lots of different workflows, so what we're going to demonstrate here is not the only thing you can do with it, and I'd encourage you to have a look yourself on the ilastik website. It's free and open source, it allows batch analysis, and it's relatively user friendly.

That leads us to the question of why you would want to do segmentation in the first place. In this case I've taken an image of a brain section from a mouse and pulled out the labelling, here calbindin labelling. Once you have that segmentation, what can you do with it? At the University of Oslo we've developed a workflow we call the QUINT workflow, which allows you to quantify and spatially analyse labelling in the context of reference atlas space; there's an article coming out on this soon. You segment your brain section image, and this is then combined with customised atlas maps, which are produced with the QuickNII software that you just saw. The combination is done with an application called Nutil Quantifier, which is also available online.
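To make the earlier point about intensity thresholding concrete, here is a minimal sketch of the traditional approach, assuming scikit-image; the file names are placeholders, not part of the demo:

```python
# Traditional global intensity thresholding (the ImageJ-style approach
# described above), sketched with scikit-image. File names are hypothetical.
from skimage import io, filters
from skimage.color import rgb2gray

image = rgb2gray(io.imread("brain_section.png"))

# Otsu picks a single intensity cut-off for the whole image.
threshold = filters.threshold_otsu(image)
segmentation = image < threshold  # keep dark labelling on a lighter background

# Every pixel darker than the cut-off is kept, which is why speckles and
# non-specific labelling of similar intensity cannot be separated out.
io.imsave("segmentation.png", (segmentation * 255).astype("uint8"))
```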
Nutil you can download from NITRC.org, and it comes with a very extensive user manual. It basically does the quantification for you, with output per brain atlas region: you get reports, and you get coordinates, which allow you to visualise your objects in 3D atlas space. We have our own viewer that you can use to look at the objects in 3D, and you get these images, which are the segmentations overlaid on the atlas maps.

Going back to ilastik: for classification it has two main workflows. The first thing you would do is upload your images to the pixel classification workflow. That one classifies based on annotations that you place manually, using the pixel features intensity, colour and texture. The added value of ilastik over traditional intensity thresholding is that it also takes the colour and texture of your image into account. Often that's all you need to do, and then you can export segmentations. In some cases the segmentations you get out are not good enough for quantification, because a lot of non-specific labelling is pulled out as well. In that case it can be useful to follow the first step with an object classification step, where you filter out objects based on object-level features such as size and shape. Here's an example: an image of a section labelled for amyloid plaques, where I've first used the pixel classification workflow to extract all the labelling. There's non-specific labelling around the edges that I want to get rid of, so I've applied the object classification workflow and classified based on object size and shape, because the circular plaques are very different in size and shape from the elongated non-specific labelling.

[Question] Can I ask what the pixel classification based on texture is?

I'm going to explain a bit about that here. The texture features recognise textural patterns. The algorithm is built up so that it detects patterns within a 10x10 pixel window, which means that if you have light pixels, followed by somewhat darker pixels, and then very dark pixels, that is a pattern it can recognise. It recognises the edge not just from the intensity difference, but from that light-to-dark texture. That's why one of the considerations when you classify with ilastik is that the resolution of your image has quite a big impact on both the quality of the output and the processing speed: the bigger the image, the slower the processing. Ideally, it's best to resize your images so that the edges of objects fall within this 10x10 pixel window, because if they fall outside that range the program won't be able to detect them. My recommendation is usually to downscale as much as possible without losing the information you're interested in, because if you downscale too much, you might lose information, you might even lose objects (there's a small sketch of this below).

The practical session is a bit similar to the QuickNII one, in that you have access to some images, which are in the session 4 folder called 'images'. We're going to classify them and then extract segmentations with the pixel classification workflow.
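On the resizing advice, a minimal downscaling sketch with scikit-image; the scale factor and file names are placeholders you would choose for your own data:

```python
# Pre-downscale an image so that object edges fall within the ~10x10 pixel
# texture window discussed above. Factor and file names are examples only.
from skimage import io, transform

image = io.imread("section_fullres.png")

factor = 0.25  # downscale as far as possible without losing the objects of interest
small = transform.rescale(image, factor, anti_aliasing=True,
                          channel_axis=-1)  # channel_axis=-1 assumes RGB; drop it for grayscale

io.imsave("section_downscaled.png", (small * 255).astype("uint8"))
```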
One of the silly things about ilastik is that when you export the segmentations, the images are always black by default, which is very confusing for new users. This is because a black colour has been applied to all the classes; that's the default setting in ilastik. Even though the image has been classified, it doesn't look like it, which is why we apply the Glasbey lookup table to the segmentations, to apply different colours to the different classes so that we can see them. Once you have your segmentations, that's when you would take them into the Nutil software. We're not going to have time to cover that today, but we're going to explore some of the Nutil output, some of the coordinate output, which you can look at in our viewer. Then, of course, you can go and explore the workflow yourself if you're interested: read the article when it comes out, download the software and read the user manual. This is just a sneak preview of how the objects look in the 3D viewer, MeshView.

I think we're going to do this in an interactive way: I'll open ilastik on my computer and you can follow along live. Open the software first. Wait... I don't know what happened; I must have knocked something. Has everyone got it open?

[Comment] I was blocked by my system administrator. I downloaded it, but I didn't try to open it.

This is a session where you can work together as well, just to get a feel for it. As you can see, there are a lot of different things you can do with ilastik; segmentation is definitely not the only one. You can work with 3D images and 2D images, and you can also work with videos moving over time. We're interested in pixel classification, at the top. What you want to do first is save your project somewhere; you can save it into the folder with your images in the session 4 folder. I'm going to save it here, and then it should open. There. Then you want to upload your images: go to 'Add New' and 'Add separate images', then just highlight all the images in that folder. Is everyone at that stage, ready to move on?

To make the file more stable, a tip is to save all the images into the project file. To do that, highlight all the images and click 'Edit shared properties', then change the storage to 'Copy into project file' and press OK. It then says you have to save the file at this step, so go to 'Project' and 'Save Project'; I wouldn't use 'Save As' here, just 'Save Project'.

The next step is feature selection. I think mine is still saving, which is why it's being a bit slow; you can see the progress at the bottom. For the feature selection, you select the features you want to include in the classification algorithm. The advice I would give is to include everything; if you have very, very big images, a tip can be to select a subset of features, like this. These columns are the different scales. This is what I was talking about: it can recognise edges of objects, or textures, up to a scale of 10x10 pixels. The numbers at the top represent the scale: this is the 10x10 pixel window, whereas this would be a 1x1 pixel window, so it's recognising textures at different scales (the sketch below illustrates the idea). We'll just include everything and press OK.
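A rough illustration of what "features at different scales" means, assuming scikit-image; ilastik's actual feature bank is richer (intensity, colour, edge and texture filters), and these sigma values are examples only:

```python
# The same image smoothed at several Gaussian widths (sigmas), loosely
# corresponding to the 1x1 ... 10x10 windows in the feature table above.
# The classifier sees all of these (and more) as per-pixel features.
from skimage import io, filters
from skimage.color import rgb2gray

image = rgb2gray(io.imread("brain_section.png"))  # hypothetical input

for sigma in (0.7, 1.6, 3.5, 10.0):  # example scales
    smoothed = filters.gaussian(image, sigma=sigma)
    io.imsave(f"feature_sigma_{sigma}.png", (smoothed * 255).astype("uint8"))
```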
Then the next step is to start the training. Here you create some classes. In general you want to extract your labelling, which is the calbindin labelling here in black, so you would usually have 'background' and 'labelling' as your classes. I'm going to rename this one to 'background'. Is everyone following? If you have the handout, it tells you how to navigate; it's really useful to have a mouse. If you hold Ctrl and use the scroll wheel, you can zoom in and out; to pan, you press Shift and move around your image.

Usually the first step is to annotate your image, to place some labels of each class. I usually zoom in until I can see the individual pixels, and then place some labels: I can say, OK, this is background, and label some example pixels. It can be useful to label not just lines but also a blob or so, so that it starts to recognise any patterns. Have a go and place a few labels; you don't need to place many before you press the 'Live Update' button. When you press that, it makes automatic predictions on your image. By default it shows the probabilities, where it's not so easy to see the boundaries and how it's really classifying, so I usually turn that off. There are lots of different overlays that you can turn on and off to explore the segmentation; I'll turn on the segmentation overlay. When I do that, I see that it's pulling out a bit more than I wanted, so I'm going to place some more labels.

You can also turn on the uncertainty overlay. That identifies all the pixels in your image where the classifier is unsure of the class, and by specifically labelling those pixels, you very quickly improve the predictions (there's a sketch of the idea below). I'm going to zoom in; the uncertainty is always at the boundaries between background and labelling, and that helps you place more labels. I'm going to switch: I'm going to say this is labelling, and this is background. If you had dual-colour staining or something, you could also label the different colours; you can create as many classes as you want. The more classes you have, though, the more time-consuming the updates become; it gets heavier. I usually try to keep to as few classes as I can, but if you have two, three or four things that are interesting, you can certainly try to segment them in one image.

[Question] Can you also classify by probability: class number one very certain, class number two a little bit certain, and class number three uncertain?

You definitely could. It depends a bit on what you want to do with the image at the end. In my case I just want to extract the labelling, because I want to take it into the QUINT workflow for quantification, so there wouldn't be any reason to have an in-between uncertain class; you just have to make an educated judgment about where you want that boundary to be. But you might want to use it for something else, another workflow, and then you can do whatever you want.

I'm going to zoom out. The more zoomed out you are, the longer these updates take, because it has to run a prediction on everything in the view. It's very important to be a bit patient and wait until it's updated, because otherwise the program can crash; it's a live thing.

[Question] At the end, when you feel like you've got everything perfectly trained, would you zoom all the way out to do it?

What you do then is move on. If the image is very, very large, and you think it might be a bit too much for the program to handle, my recommendation is to train zoomed in and then scroll around to get a feel for how it's predicting in other parts of the image.
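On the uncertainty overlay: one common way to define per-pixel uncertainty from class probability maps is the margin between the two most probable classes. This is an illustration of the idea, not necessarily the exact formula ilastik uses:

```python
# Per-pixel uncertainty as "how close are the top two class probabilities":
# a small margin means the classifier can barely separate the classes there.
import numpy as np

def uncertainty(probabilities: np.ndarray) -> np.ndarray:
    """probabilities: shape (n_classes, height, width), summing to 1 per pixel."""
    top2 = np.sort(probabilities, axis=0)[-2:]  # two highest probabilities per pixel
    margin = top2[1] - top2[0]                  # best minus second best
    return 1.0 - margin                         # high value = uncertain pixel

# Toy example with two classes (background vs labelling) on a 4x4 image:
probs = np.random.dirichlet([1, 1], size=(4, 4)).transpose(2, 0, 1)
print(uncertainty(probs))
```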
When you're happy, you can select the next image in your series, because the whole point of training a classifier like this is that you want to apply it to a whole series of images; you then need to adjust the algorithm so that it fits not just your one image but the other images in the series too. The images I've uploaded are, in a way, training images: a subset of a whole series. Later you can run the classifier on the whole series; you can apply it to hundreds of images and it will export segmentations automatically. You can leave it running overnight, and without watching it you get the output for everything.

This is another image; I'm going to look at it a little. The difficult part is always deciding what is labelling and what isn't: you have to make your own judgement. It is, of course, easier to make those decisions with higher-resolution images, so that's always the difficult trade-off in deciding whether to work at a higher resolution, where the extraction is more time-consuming and computationally heavy. I might not get as good output here, but I think it's pretty good. It says 'current view': you can open any image in the series, and it automatically applies the algorithm to that image and makes a prediction. Everyone can play around with their images until they're happy; once mine has finished updating, I'll show you what to do when you're happy with your classifier.

Then you go to the prediction export applet here, which lets you choose your settings for export. You can export different formats, and what you want to export depends a lot on what you want to use the image for. In the context of our workflow, it's only compatible with PNG images, so I want to export a PNG, and it has to be unsigned 8-bit. You also have to change the export source to 'Simple Segmentation': I want a segmentation image, not a probability file. If you wanted to do an object classification afterwards to remove non-specific labelling, you would instead export the probabilities, because that's the input required for the object classification workflow. These things are all explained in the user manual on the ilastik website, so there's a lot of information there. Then you just click export.

If you want to apply the classifier to a whole series of images, you go to the batch export applet, where you can upload hundreds of images and apply the classifier, and it will do an automatic export of all your segmentations; we're just going to apply it to these training images (this step can also be scripted; see the sketch below). It takes time; for this number of images, I think it takes minutes. If you get that far, you can export them; otherwise, we have some example output that you can have a look at.

[Question] Is there a way to download what the classifier is, the actual algorithm?

I would have to ask the ilastik team.

[Comment] It probably is possible. They're quite active on GitHub, so if you have questions on the programming side, I would contact them directly. The classifier is already stored in the project file, and you can also explicitly write it to disk, version it and exchange it, then apply the classifier to other images.
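The batch step can also be driven from a script through ilastik's headless mode, which comes up in a moment. A sketch via Python's subprocess; the flags follow my reading of the ilastik documentation, so check it before relying on them, and the paths are placeholders:

```python
# Run a trained pixel classification project on a whole series of images
# without the GUI. Launcher name, project file and paths are hypothetical.
import glob
import subprocess

subprocess.run(
    [
        "./run_ilastik.sh",                      # ilastik launcher (Linux/macOS)
        "--headless",
        "--project=my_pixel_classifier.ilp",     # the trained project file
        "--export_source=Simple Segmentation",   # same choice as in the export applet
        "--output_format=png",
        *sorted(glob.glob("series/*.png")),      # the image series to segment
    ],
    check=True,
)
```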
I know that you can work in headless mode and do all sorts of things with it, but because I'm not a computer programmer, I stick to using the user interface. What we're trying to do, mainly, is make these tools accessible to people who don't have that programming background. We also get questions into our help email. [brief inaudible exchange]

[Question] If you have microscopy images with shading artifacts in the four corners, for example, how stable is the classifier? If you have a continuous shading shift towards the borders, and there is faint labelling there that you want to detect?

You wouldn't be able to use intensity thresholding, because that would pull things out in certain parts of the image and not others. The textural component, though, might be able to pull things out if it identifies that light-to-dark border, which would be present both in the dark part of the section and elsewhere, so it might still give you a good segmentation. You just have to have a go and see whether it works or not.

[Comment] In principle it's just a matter of training examples: in that case you would simply annotate both in the corners and in the centre of the image, and it should be able to do it. That's one of the advantages: it doesn't learn a threshold or anything like that. It's much more complicated: it chooses from a whole range of textural features, feature vectors, and then it learns a non-linear function of these features.

[Question] Which features? Fourier descriptors?

I'm not sure it has Fourier descriptors explicitly, but you can choose the features yourself in the feature selection applet. The features are texture, intensity and colour: first and second derivatives at different blur levels, and all sorts of things.

[Question] When you were training the classifier, you only did the pixel classification; you didn't do the object classification?

No. If I wanted to do object classification... This, for example, I think is non-specific labelling, some artifact.

[Comment] You could have circled some of those and it could have filtered them out for you.

If I wanted to do object classification on this image, I would export the probability maps and take them into the object classification workflow, which looks similar to this in that you also select your features: which features do I want to include in my classifier? But there you have object-level features such as object size, object shape, how many spines an object has, and where it is positioned in your image; it's a massive list of features you can select from (there's a sketch of the idea below).

[Comment] I can see that being useful if, for instance, people are trying to train a classifier for analysing behavioural videos of a mouse face or something like that, though it might not manage something really intricate like finding whiskers.

I think I've seen videos of ilastik being used to track mice moving around over time: it recognises distinct features of the mice and follows how they move around in 3D over time.
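This is not the ilastik object classification workflow itself, but a minimal sketch of the same idea with scikit-image: compute object-level features and filter out elongated non-specific labelling. The thresholds and file names are made up for illustration:

```python
# Filter a binary segmentation by object-level features (size and shape),
# keeping roughly circular, sufficiently large objects such as plaques.
import numpy as np
from skimage import io, measure

mask = io.imread("simple_segmentation.png") > 0   # hypothetical segmentation export
labelled = measure.label(mask)                    # connected components = objects

keep = np.zeros(mask.shape, dtype=np.uint8)
for region in measure.regionprops(labelled):
    roundish = region.eccentricity < 0.9          # plaques are roughly circular
    big_enough = region.area > 50                 # drop tiny speckles
    if roundish and big_enough:
        keep[labelled == region.label] = 255

io.imsave("filtered_segmentation.png", keep)
```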
You can do a lot of things with the software, so it's definitely not just what I'm showing here. [brief inaudible exchange about documenting tissue artifacts in the images]

Should we move on? We can go to session 4. This shows some typical outputs. This is after the Glasbey lookup table has been applied, because the output that you get from ilastik is black. I'm going to see if... It's the wrong format; that's silly. I've exported the wrong format here, so I can't actually open them. How's it going for everybody? It's pulled out a lot.

OK, I was planning to export PNG images, simple segmentations, but I exported the wrong format, so I haven't actually got examples of the black images. But if you do export PNG segmentations, you apply the colours in ImageJ, so I'm going to open an image in ImageJ. Obviously this one already has colours, but it would be black, and then I would go to Image > Lookup Tables and apply the Glasbey lookup table. You get out something like this: it's just a random table of colours, applying a different colour to each of your classes. You can change the colours by going to Image > Color > Edit LUT if you want it to look a bit nicer. This is all explained in the manual. It can be confusing when you're a new user and you start using the software and just get this black image out: "oh my goodness, it hasn't worked, what did I do wrong?" But that's all it is.

Then you're ready for the next step: if you have these segmentations, and you have atlas maps that you've exported from the QuickNII software, you can use the Nutil software and actually get quantitative data per atlas region.

For the next part of the session, I have some Nutil output that you can open. In the handout there's a link to our own 3D atlas viewer, called MeshView; the handout should also be included in the folder with session 4, so you can open the link. I'm going to open it from scratch. This is just to give you a feel for what you can do with the output of Nutil. What I would do is hide all the regions in the atlas and then just switch on the root region. Then you can navigate to some of the coordinate files, which are the output of Nutil; they're located in the session 4 folder, in the Nutil output folder, and they're JSON files. If you select this '3D combined' one, that covers the whole image series. Then you can reduce the point size so it actually shows each pixel, and you can see that these are literally all the objects that have been extracted through the segmentation: the segmentations have been anchored, or matched, to the atlas maps, the coordinates have been extracted, and you can see everything in 3D. You can switch on different regions to see where things are located. This is also something that you can just have a play with.
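The same coordinate files can also be explored outside MeshView. A sketch that assumes each JSON entry holds flat x,y,z point triplets; that schema is my assumption here, so check the Nutil user manual for the actual format:

```python
# Load a Nutil coordinate file and scatter the points in 3D with matplotlib.
# The path and the "triplets" field are assumptions about the output format.
import json
import matplotlib.pyplot as plt

with open("nutil_output/3D_combined.json") as f:
    data = json.load(f)

xs, ys, zs = [], [], []
for entry in data:                    # assumed: list of entries, each with a flat
    t = entry["triplets"]             # [x1, y1, z1, x2, y2, z2, ...] coordinate array
    xs += t[0::3]
    ys += t[1::3]
    zs += t[2::3]

ax = plt.figure().add_subplot(projection="3d")
ax.scatter(xs, ys, zs, s=1)           # small point size, as in the viewer
plt.show()
```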