Pete Bankhead will introduce the QuPath software today, and the fresh new version of it, I believe. And Melvin Gelbard is a developer who works with Pete, and Laura Murphy is a bio-image analyst, all of them at the University of Edinburgh. We are looking forward to hearing about QuPath. Thank you, thank you very much for the introduction and the opportunity to speak, and thanks for joining. So yeah, I'm happy to have the chance to introduce the latest updates in QuPath, and the latest really means today. I don't know if you noticed, but actually, I think within the last hour, if you go onto the QuPath website — I haven't quite managed to update the page yet — but if you search there you'll find the very latest release, made for this webinar. And that's what we're going to talk about. But I want to just begin by giving a little bit of background to the software: what it is, why it exists. I'm going to then spend most of the time during the webinar just showing it in action. So if you're familiar with the software, you're going to see the new things, the new version, and the new way of working — can you hear me? We can hear you, yes. Thank you, my phone lied to me in that case. Let's see — yes, you're going to understand why these changes have been made, and then we're going to move on to some questions and answers. So feel free to ask your questions, and they will be put to me as they come up. Okay, so just a few words then, briefly, about why QuPath exists. My background is in bio-image analysis, really from the computer science side. I did my PhD at Queen's University in Belfast, working on really noisy confocal microscopy images of retinal arterioles, and a little bit of retinal image analysis. But I really got into the bio-image analysis side whenever I joined Heidelberg University, working in a core microscopy facility there. And at that point, quite a large part of what I did was helping users with the analysis for their specific projects.
And I thought that, because lots of people have the same kinds of questions and the same kinds of problems, it would be useful to focus on teaching, to try and explain these concepts to as many people as possible. I wrote an image analysis handbook, which is freely available online if you want to see it. But throughout all of this, I found that if you really want to do bio-image analysis, you need to be able to communicate across disciplines. I come from the computer science side, but I've spent my career around people with different expertise from me. Clever algorithms can be part of analyzing these images, but those algorithms need to be implemented in software to make them accessible and user friendly. So whenever I was in Heidelberg, I got very much into the open-source image analysis world, particularly with ImageJ and Fiji, but there's a whole ecosystem of open-source tools widely used within the field. And NEUBIAS is one of the best things for really introducing people to these tools. That's why I'm quite excited about these new initiatives to give presentations on them, and it's nice to be part of it with my own software. So there are these fantastic open-source tools already out there — ImageJ and Fiji were the ones I focused on most of the time. But whenever I left Heidelberg at the end of 2012, I discovered that these tools didn't really suit every type of image, because at that point I started at Queen's again as a postdoc, and that's when I met the whole slide images found in digital pathology. If you aren't from this field, the briefest of introductions is that we have a digital scan of an entire glass slide: a single two-dimensional image that could be about 60 gigabytes in size, stored in a kind of pyramidal format. Things like Fiji can be very good at handling large datasets, but those tend to be large multi-dimensional datasets.
But if you have large two-dimensional images, you've got a whole unique set of problems, because there's no way you can read even a whole image plane in one go on most computers. So you need software which is specifically designed to handle these ultra-large two-dimensional images, and to pull out just the bits of the image that are relevant at any particular time for visualization or for analysis. Whenever I started working on this, I found that none of the existing open-source tools really handled it. I spent about two years trying to adapt them — trying to write ImageJ plugins, trying to get it to work — and I just couldn't get anything that worked very nicely. So after a while I started to think: would it be better to try and write something from scratch and see where it would end up? And that's what eventually became QuPath. I didn't set out to write it, but that's what I did throughout my postdoc. I would say that QuPath is still probably mostly used for pathology applications, but it can do more. It was always designed to do more, because once I started developing the software, I started to think about the things I found difficult with the other open-source tools that I knew. My goal with QuPath is that it should give an open-source platform for whole slide image analysis in particular, but it can also give new tools to address other bio-image analysis challenges. It doesn't try to do everything: if there's an existing open-source solution that can do something, then by all means use it — QuPath isn't interested in competing with that. The priorities for the functionality built into QuPath are really determined by the things that can't be solved easily with other software. And I think that, ideally, the hard part of image analysis should be defining the question.
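To make the pyramid idea concrete before going on: these formats store several precomputed, downsampled copies of the image, and a reader serves any request from the highest-resolution level that is at least as downsampled as requested, fetching only the tiles it needs. Here is a minimal Python sketch — the function name and the level layout are purely illustrative, loosely modeled on what whole-slide libraries such as OpenSlide expose:

```python
def best_level_for_downsample(level_downsamples, requested):
    """Index of the highest-resolution pyramid level whose downsample
    factor does not exceed the requested one (level 0 = full resolution)."""
    best = 0
    for i, ds in enumerate(level_downsamples):
        if ds <= requested:
            best = i
    return best

# A hypothetical pyramid: five levels, each half the size of the last.
levels = [1, 2, 4, 8, 16]

print(best_level_for_downsample(levels, 1))    # 0: full resolution
print(best_level_for_downsample(levels, 100))  # 4: coarsest level
print(best_level_for_downsample(levels, 5))    # 2: read at 4x, rescale to 5x
```

The viewer then reads only the tiles of that level which intersect the current field of view, which is why a 60 GB slide can still be browsed smoothly.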
You shouldn't have to wrestle your results out of the software that you use for analysis. But I think analysis needs to be verifiable: you could have a really fancy, really impressive algorithm, but you need to know that it's done the right thing. So I think visualization and usability are really important. And if you can do this, I think software can be written that solves many problems, because in the end, in bio-image analysis, everybody's project is diverse — everybody needs to do something slightly different, answer a different question, or work with different types of image. But if you can solve these in a really generic, general way, then the same kinds of solutions can become widely useful. So that's the philosophy underlying QuPath. Its approach to working with images is slightly different from other software that you might be familiar with, although it's designed to be incredibly general, so it can be applied to lots of applications. Basically, you start with the pixels in your image. The pixels are just numbers, but you might have billions of them within a single image. We go from our pixels to identifying objects within them, and an object might be a cell, it might be a structure, it might be a region of, say, tumor, or a gland, or a vessel — an object is basically like a region of interest. And then you want to query that. You might have 100,000 objects — 100,000 cells — and you want to look at what type of cell they are, where they are relative to one another, where they are relative to different objects, and so on. In the end, an awful lot of image analysis problems can be boiled down to these three steps: we go from our pixels, we identify the objects, and then we query the objects. And that's really what QuPath tries to streamline, giving lots of tools to be able to do this really effectively.
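Those three steps can be shown in miniature. The toy Python sketch below has nothing to do with QuPath's actual implementation — it uses only the standard library — but it follows the same pipeline: threshold the pixels, group them into connected components (the objects), then query the objects by a measurement:

```python
# Pixels -> objects -> query, on a toy grayscale image.
# Objects are 4-connected components above a threshold, found by flood fill.

def find_objects(pixels, threshold):
    h, w = len(pixels), len(pixels[0])
    labels = [[0] * w for _ in range(h)]
    objects = []  # one {"label": ..., "area": ...} per connected component
    next_label = 0
    for y in range(h):
        for x in range(w):
            if pixels[y][x] > threshold and labels[y][x] == 0:
                next_label += 1
                area = 0
                stack = [(y, x)]
                labels[y][x] = next_label
                while stack:
                    cy, cx = stack.pop()
                    area += 1
                    for ny, nx in ((cy-1,cx),(cy+1,cx),(cy,cx-1),(cy,cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and pixels[ny][nx] > threshold
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            stack.append((ny, nx))
                objects.append({"label": next_label, "area": area})
    return objects

image = [
    [0, 9, 9, 0, 0],
    [0, 9, 0, 0, 8],
    [0, 0, 0, 8, 8],
]
objs = find_objects(image, threshold=5)       # pixels -> objects
large = [o for o in objs if o["area"] >= 3]   # query the objects
print(len(objs), [o["area"] for o in objs])   # 2 [3, 3]
```

In QuPath the same shape of pipeline runs on billions of pixels, and the "query" step works over rich per-object measurements rather than just area.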
So if your application fits with this model, you might find QuPath a good choice — particularly if it's two-dimensional, though QuPath isn't entirely limited to 2D, as we'll see in a moment. And that's really it for the introduction; now I'd like to go on and show you a demonstration. As you can see if you look on GitHub, the software was changing up until about 11 or 12 o'clock today, so these are incredibly recent things that you're going to be seeing. I'm going to give, firstly, an overview of the software. If you don't know it, this will give you the background to why QuPath is now quite widely used and the kinds of things you can do with it. But if you are familiar with it, you're going to see the new functionality, the new tricks, and the new ways of working with it that you might not have seen before. After that, we're going to move a bit beyond the basics — the kind of stuff QuPath has been doing since the beginning — and look at the new functionality around pixel classification, how to handle multiplexed images, scripting, and so on. And then finally, I'm going to speak a little about some work in progress and some of the very latest changes. Part of that is because I made a fateful decision around Friday: I wanted to make a small change, which snowballed into rewriting quite a significant part. So there are some very new things that you won't have seen before. Whenever I left Queen's University in 2016, that's when QuPath was first released. I was in industry for a year, and I joined Edinburgh after that. So I'm still working towards the next release; Melvin is working with me on it, and it's going a lot better. The intention is that the next stable release of QuPath will be in the next couple of weeks — I had hoped it would be in time for this presentation.
But then I thought maybe the stable one should be the one that fixes all the bugs that get pointed out over the next week. So if it crashes during the presentation, I won't be too surprised, because it's incredibly recent — but that will be fixed for the stable release in the next week or two. After the presentation, if you want to download it and try it out, please let me know if you find any trouble, so it can be fixed as quickly as possible. I'm going to move on to the demo now, but in case I forget to return to the thanks slide later, I do just want to thank Melvin and Laura for helping with this, Julien and NEUBIAS for organising it, and the others on this slide as well — some for supplying images and others for supporting the work. Okay, so whenever you start QuPath for the first time, you're going to get this setup screen. All of this is documented online, but basically you can decide how much memory you want to give to QuPath — that's the main thing you get to choose here. You don't want to give it all the memory available on your computer, but you do want to give it quite a bit. And then, if you want, you can set the region that you're in. The real reason that's there is that you can potentially have localisation issues, depending on how a decimal separator is represented in different regions. For consistency, I recommend using English (United States) as the region — I'm not in the United States myself, but this helps make sure that everybody's working with the same behaviour of the software. Okay. So I've set up some images here. And you may notice, as I show this, if you're familiar with ImageJ, that I've been influenced by it quite a bit, because I'm still a big fan of ImageJ and I still use it an awful lot. One of the things I like most about ImageJ is that whenever I want to open an image, I just drag it on top of the window.
And that works the same with QuPath. So here I've chosen a file and dragged it on top, and immediately I'm confronted by this prompt to set the image type. This prompt didn't appear in the last version, although the concept was still there in the background. In this case I have a brightfield image, and the stains are hematoxylin and eosin — the pink and purple stains — so I'm going to choose brightfield H&E and click OK. And that is my image type. QuPath is going to use this image type for a few different things, as we'll see in a moment. Once I open the image, we have it here within the viewer, with an overview of the slide at the top. But if I zoom in, you can see the level of detail present within this image. If I move over on the left, I get some information about it — 'Project' we'll come to in a moment; 'Image' is specifically the image we have open. You get to see its size in pixels: it's about 46,000 pixels by around 30,000. I can zoom in and out, and up here in the corner we see the current field of view; I can also click on it to navigate. For the purposes of this demonstration — there are lots of shortcuts within QuPath — I'm going to choose this option to show the input display, which means that down here in the bottom left of the screen, you should be able to see whenever I click a mouse button or zoom in and out with the scroll wheel, and also if I press a key. So if I get carried away with my shortcuts, you'll still be able to see on screen what I'm doing. Okay. So I use the scroll wheel to zoom in and out, I click to move the image around, and I just drag the image to where I want to go. Or I can click up here and I get a navigation window. If I want to open another image, I can select it and just drag it on top of QuPath.
Now, it can become quite annoying to see this image type question every time you open an image. If you click on show details — QuPath tries to give you help along the way — you can see an explanation of why this setting exists. Actually, I'm going to go back here, because I do find that on a Mac, sometimes clicking on an image makes Finder go a little bit crazy as it tries to show a thumbnail for a whole slide image. So I'm just going to make sure that I select that. Not a QuPath issue, that one. Okay. So this is why we need to set the image type, but if you don't want to be bothered by that dialog every single time you open an image, it also tells you that you can go to the preferences and have the type set automatically. It's important to know that this setting exists, for reasons we'll see very shortly, but it's also worthwhile checking all the little help texts and prompts that QuPath gives you, because they can really make using the software a lot easier. So if I go up to the top — this is where the preferences are, this little cog wheel — I can click on it, search for 'image type', and choose to have QuPath automatically estimate it every time I open an image. That way I won't get the prompt, but I do still know the image type is there and that it's an important thing to be aware of. Having done that, the next time I open an image I won't be asked the same question: QuPath will automatically guess the image type, and you can see it here on the left. If it's wrong, I can double-click and change it if I need to. One of the reasons the image type is so important is that QuPath does a lot of work with these kinds of brightfield images, where we have particular stains that we need to separate. If you're familiar with fluorescence, often you would just split the channels, and that would give you your separate markers or stains.
But for these brightfield images, we need to digitally separate the stains, and that requires some kind of characterization of the stain colors. That's what we have here: hematoxylin and eosin are the stains we've got. We don't have a third stain, so we just have 'residual' — QuPath can separate up to three stains, and if there are only two, everything else goes into this extra, so-called residual channel. If I'd like to perform this digital stain separation, I can do it really quickly just by typing a number. Number one gives me the original image, number two gives me the first of the stains, three the next, and so on. And if I'd like to see the list that corresponds to these numbers, I can go up to the toolbar, click on this little brightness and contrast icon, and choose them there as well. If you're familiar with working on brightfield images in Fiji, you might have used the Colour Deconvolution plugin, which can definitely separate stains as well. Here in QuPath it's all built in, because the software was designed to handle these kinds of images right from the start. In a moment we're going to move along the toolbar and I'll introduce some of the other commands within the software. But first — whenever you're working with images, it's good to adopt good practice from the very beginning. And the good practice in QuPath is to manage your images in a project, because you can easily be working on a study with 10, 100, even a thousand images. Rather than having all of these different data files scattered across your computer, QuPath can manage them all in a project and look after them for you. So before we go any further, I'd like to create a project.
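Before the projects demo, a quick aside for the curious: the digital stain separation just described is colour deconvolution. Pixel values are converted to optical densities, which mix additively, and the per-stain amounts are recovered by inverting a matrix of reference stain colours. The sketch below uses the widely published Ruifrok–Johnston H&E vectors; QuPath's own defaults may differ slightly, and this is only an illustration of the maths, not QuPath's code:

```python
import math

H = (0.65, 0.70, 0.29)   # hematoxylin RGB optical-density vector
E = (0.07, 0.99, 0.11)   # eosin

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

H, E = normalize(H), normalize(E)
R = normalize(cross(H, E))   # "residual": third, orthogonal channel

def inverse_3x3(m):
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i-f*h) - b*(d*i-f*g) + c*(d*h-e*g)
    return [[(e*i-f*h)/det, (c*h-b*i)/det, (b*f-c*e)/det],
            [(f*g-d*i)/det, (a*i-c*g)/det, (c*d-a*f)/det],
            [(d*h-e*g)/det, (b*g-a*h)/det, (a*e-b*d)/det]]

M_inv = inverse_3x3([list(H), list(E), list(R)])

def separate(rgb):
    """RGB values in 0-255 -> (hematoxylin, eosin, residual) amounts."""
    od = [-math.log10(max(v, 1) / 255.0) for v in rgb]  # optical density
    return tuple(sum(od[j] * M_inv[j][i] for j in range(3))
                 for i in range(3))

print(separate((255, 255, 255)))   # unstained white pixel: all ~0
```

A pure white pixel has zero optical density and so zero of every stain, while a pixel coloured only by the hematoxylin vector comes back entirely in the first channel.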
So I'll go back to this Project tab and show that we have the option to create a project: I can choose a folder for it and set it all up that way. But there are lots of things in QuPath where, if you read the documentation, you'll learn that this is one way to do it — and there are lots of ways in which you can really speed up the process. A project is basically a folder on your computer. I can create a folder anywhere, drag it on top of QuPath, and it will figure out that what I probably want to do with that folder is create a project. Dragging things on top tends to be a much faster way to do things. So — I do not want to save the changes for that existing image. Okay, I drag the folder on top, and that created my project. I'll select a few images and drag them on top, and QuPath asks if I want to import them into the project. We can look at some of these options later, or they're explained in the documentation, but if I just click Import, they will all be brought in. And now you can easily click on one image and switch to the next. QuPath will automatically estimate the image type, because we've already changed that preference. This gives us an easy way to start working with images, and we're going to work with images from a few different sources — so I'm going to drag in some more images while we're here, which we'll probably use a bit later. So now we have a range of different images, all added to our project. And the method by which we do that is a lot more streamlined in QuPath version 0.2 than it was previously, where you had to do a lot of choosing your files explicitly in file choosers and so on. Okay, now that we've done that, we might want to manage these project entries in a slightly nicer way. So if I want, I can select a few and add some metadata to them.
And I know that these images I got from OpenSlide, a very useful image-reading library that QuPath uses, so I can set that as the source. Whereas these images I got from a very nice group at the La Jolla Institute, where I was teaching a QuPath workshop, so I'll add that in. By adding this additional information about our images, we're able to sort them and sub-categorize them as well. So even with this very basic metadata approach, we can create a nicer organization and structure for our data. We also — thanks to Melvin, actually, quite recently — have a little search option here at the bottom. So I really highly recommend, if you're working with QuPath, certainly with 0.2, that you create a project at an early stage. It will make managing your images much, much easier, and it will also help resolve issues if you move the location of your images, which was always a problem in the past. One of the things about QuPath is that it never changes the pixels of your image permanently — it never writes anything over the top of your original image file. The project does not contain your images, and the images are not duplicated; it just contains links to where they were whenever they were added. If I open up the project directory, you can see that for each of these images it creates a little sub-directory and stores some basic information, but it doesn't copy the images — it leaves them where they are. But the locations of these images are all managed within the project, so if they move and you open the project again, QuPath will help you track down where your images have gone. And this is a new feature as well — it was always a problem in the past if, for example, you closed your project and moved things around. I'm going to slightly rename this folder, so now the paths to these images are going to be slightly different, and I'll open the project again.
QuPath knows that it cannot find these images in their original locations, but I'm able to tell it roughly where they should be — if my Mac doesn't get too confused, because I believe it's having a bit of that trouble it sometimes has with Finder. Oops. Okay. I tell it the new locations, and it shows them in black, which indicates that it now finds the images. I can apply the changes, and the project has been updated. This makes it much, much easier to handle the fact that you've got your data files and your images in different places: the project links them all together and fixes the links if they get broken. Does anyone have any questions at this stage? Can you hear me? Okay. So there are a few questions. For example, this one: can you create new image types — say for a PAS stain, for instance? Currently, no. However, if you double-click on the image type, you can set it to be brightfield (other), and this allows you to specify your stains, whatever they might be. So I can double-click, and I'm not restricted to hematoxylin and eosin or hematoxylin and DAB — those just happen to be the most common ones I've worked with in the past. Everything else in the brightfield world falls under brightfield (other). If you have more than three stains, then these kinds of default stain separations are not necessarily going to be terribly useful for you anyway, but you should probably still set the image type to brightfield (other) to represent that. In future versions, I think it will be worthwhile to add additional common image types to this list, because I think it is a little bit too restrictive. But essentially you can't add new image types; the options you get right now are the common ones plus everything else. Okay, thank you. There's also another question: can you set a default type — if you always use H&E, for instance — rather than always auto-estimating it?
So — you cannot set a default type globally, but if I choose a few images here and drag them onto my project, you'll see that whenever I'm importing the images, I can set the default type for the ones I'm importing at that moment. Perfect, thank you very much. What you can also do is choose to dynamically rotate your images. This has been an issue I've seen sometimes with whole slide images which are scanned in an orientation that is not the orientation the user wants. The entire image could easily be gigabytes in size, and if you want to write out a new whole slide image that has been rotated, it's going to be very slow — it's really hard to write these pyramidal images, and it's an awful lot of work. But QuPath can perform the rotation dynamically as it reads the image. This is particularly useful for, say, tissue microarrays, which may be scanned in a different orientation from the one you would like. And so that's what this additional option here is. Okay, thank you very much. There are a lot of other questions — for the people who have questions, we also answer them in the Q&A window, so just have a look there; you may find the answer you're looking for without needing to ask again. Thank you very much. Okay. I'm not sure how many of you saw the talk yesterday, but QuPath was mentioned there a little, and one of the reasons it was mentioned was its annotation tools. And that's what I want to talk about now: these annotation tools, which we see here at the top. If we move along the toolbar, most of the tools shown here are for annotations. This first one is just to show and hide the middle panel; this is the move tool, which allows us to move around the image; and after that we have the tools that you might expect, to draw ellipses, rectangles, and so on.
All of this is documented online, so I don't want to speak very much about them, but there are a few annotation tools which are particularly nice. There are shortcuts as well: you can find under the Tools menu what these shortcuts are, and they're in the documentation too. And before I forget — if you know ImageJ, you might know that it has a command finder. QuPath has one as well, with the same shortcut: Ctrl+L, or Cmd+L on a Mac. You can search for anything you might want in there. Say you want to draw a rectangle: you can search for it, it'll tell you the shortcut key, and it will now also give you a little bit of help text — if you put your mouse over any command, you get help text describing what it does. So that's Ctrl+L — you can remember it as L for 'list' — it brings up a list, you can search for anything you might want to do, and it shows the shortcuts as well. But two of the most useful tools, if you want to annotate within QuPath, are the brush and the wand. So here we have a few pieces of tissue. If I press B for brush, I activate the brush tool, and if I want to annotate a certain region, I can start to draw. As I draw, I might decide that I actually want to annotate a larger region much faster. The easy way to do that in QuPath is simply to zoom out: if you zoom out, the brush draws a larger region, and if you zoom in, it draws a smaller region. You can even — at the risk of confusing your fingers or your mind a little — zoom out while you're still drawing. If you want to see a little more clearly what you've got, you can choose to fill in these annotations: that's Shift+F. And if you want to erase a bit, you can press the Alt key, and the brush becomes an eraser.
Using these combinations, you can annotate quite quickly, and you get used to the kind of resolution you want to annotate at. But you might think: okay, what I want to annotate there is pretty obvious — ideally QuPath should be able to identify it a little better than just drawing with a brush. And that's where the wand comes in. It's kind of like a slightly smarter brush that uses the intensity information within the image, but just like the brush, it has this magnification-dependent aspect: if I want to annotate a smaller region, I zoom in, and for a larger region I zoom out. And again like the brush, the Alt key turns it into an eraser. So let's say I want to annotate this piece of tissue but I annotate too far: I can hold down the Alt key and push the boundary back into place. By switching between the brush, the wand, and the Alt key, I can create detailed annotations. If I hold down Shift, I can add additional parts to these annotations, and so on. There are lots of different annotation tricks you can use, and if you want, you can even start to apply this in detail at, say, the nucleus level — there are additional tricks that can help you with that. So let's say you wanted to annotate two regions which are very close to one another, but you didn't want your annotations to overlap. Here I'm drawing one nucleus with the brush, and here I'm drawing another. If I hold down Ctrl+Shift (or Cmd+Shift), that prevents the annotations from overlapping. So this is really useful if you want to annotate dense regions. Very often I'd use the wand to get started, but it can sometimes lose control a little where there's poor contrast in the image; then I can use the brush to clean up, because what the brush is going to do is a lot more predictable.
And using these Ctrl+Shift or Cmd+Shift shortcuts, I can then start to remove regions that I might not want to have. So, as I said: with QuPath, you're basically interested in taking your pixels and converting them to objects, and these annotations are examples of objects within QuPath. Other examples of objects would be cells. So I press R for rectangle, which activates my rectangle tool, and I draw a little rectangle. And now I'm going to run cell detection. I don't want to have to find it in the menu, so I press Ctrl+L or Cmd+L, start to type 'cell detection' in the expectation that it will find the command for me, and then I can run it. We're going to see alternatives to this cell detection command later, but you can see that this gives us another way to generate objects automatically: by attempting to detect the nuclei and then expanding these nuclei outwards to approximate the cell areas. We'll look at that in a little more detail soon. So within QuPath we have the ability to go into our image and annotate regions of interest, or to automatically detect regions of interest as we go, and all of this information is stored along with the same image at the same time. This is a little bit like an overlay within Fiji, although it's not exactly the same, because each one of these cells immediately has measurements associated with it — we'll see that again soon. But I just wanted to show you at this point how we can take an image in QuPath and start to create objects within it which are somehow meaningful for the analysis. I don't want to spend too much time going through all of these options, because they're all well documented online, but the ones here are mostly to do with showing and hiding, and this is where we can generate measurement tables.
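Returning for a moment to the cell detection just shown: the "detect nuclei, then expand to approximate the cell" idea can be sketched as a multi-source breadth-first expansion, where each pixel is claimed by the nearest nucleus up to a maximum number of steps. QuPath's real implementation is considerably more sophisticated (it works in calibrated distances with smoother boundaries); this toy Python version only illustrates the core idea:

```python
from collections import deque

def expand_nuclei(shape, nuclei, max_steps):
    """nuclei: {label: [(y, x), ...]} seed pixels. Returns a label grid
    where each pixel within max_steps is claimed by its nearest nucleus."""
    h, w = shape
    labels = [[0] * w for _ in range(h)]
    queue = deque()
    for label, seeds in nuclei.items():
        for y, x in seeds:
            labels[y][x] = label
            queue.append((y, x, 0))
    while queue:
        y, x, d = queue.popleft()
        if d == max_steps:
            continue
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
            if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] == 0:
                labels[ny][nx] = labels[y][x]   # claimed by nearest nucleus
                queue.append((ny, nx, d + 1))
    return labels

# A 1x7 strip with two nuclei at either end, expanded by 2 steps:
cells = expand_nuclei((1, 7), {1: [(0, 0)], 2: [(0, 6)]}, max_steps=2)
print(cells[0])   # [1, 1, 1, 0, 2, 2, 2]
```

Note that neighbouring cells never overlap: once a pixel has a label, no other nucleus can claim it, and the middle pixel here stays background because it is beyond the expansion limit of both.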
And then again, this is to show and hide different components of the viewer; you can create a counting grid as well if you want. But the last thing I'd like to do before leaving this image — I'm going to delete all my objects for now — is this: because of the connection with NEUBIAS, and because of the profiles of quite a few of the people who are here, I suspect that a lot of you are familiar with ImageJ and Fiji. So at this point I just want to show you that you don't have to choose between QuPath and Fiji or ImageJ — you can use them both together. Because as I said before, I was never really interested in QuPath competing with what other software already does very well, and there are certain things within QuPath that let you link up with ImageJ quite nicely. So let's suppose you're looking at this image and you're pretty confident you could detect these pieces of tissue in ImageJ without too much trouble. There are other ways to do it in QuPath, which we'll meet later, but let's say you just want to set a threshold, detect your regions of interest, and that'll be it. You know how to do it in ImageJ, so you don't really want to mess around learning too much of the new software, but you're still using QuPath because it can handle these whole slide images. What we can do is this: I draw a region of interest — just an annotation around the entire image — click on the little ImageJ icon, and say that I'd like to send this region to ImageJ, choosing the resolution at which I'd like to send it. ImageJ can't by itself handle the kind of huge whole slide image we have within QuPath, but it can handle a small part of it. So this allows us to downsample by, let's say, a factor of 100, making a comparatively small image within ImageJ. And that's what we've done here: we sent our region to ImageJ.
It's opened up an ImageJ instance within QuPath, and there we have it; we can start working with it. So I will then start to apply my ImageJ processing. Firstly, I'll convert it to grayscale and smooth it a little bit. I will set a threshold, all standard ImageJ, and I will create a selection. And if I want to get that back into QuPath, I can do it all from ImageJ, and it will send it back. QuPath will take care of all of the translations, the additional rescaling and everything you might need in order to get these annotations from ImageJ back into QuPath. And so if you're familiar with ImageJ and Fiji and you want to get started with QuPath, but you already have a lot of things that you know how to do, this connection between the two pieces of software can allow you to use them together. And that also works the other way: let's suppose that you want to use QuPath's annotation tools, but potentially do everything else within ImageJ. So I'll create a couple of annotations within QuPath. I will select a region, say that I want to send this region to ImageJ, and, let's say, downsample it by a factor of 10. So I'm going to make a smaller region from it. QuPath will also send its own annotations to ImageJ in an ImageJ-friendly way. And so that's one of the ways in which you can use QuPath primarily as an annotation tool and link it up with other software; you might want to get your annotations out and then do further processing. But I hope that by the end of this webinar, you'll see that you often can do everything that you might need to do within QuPath without having to link it directly or explicitly too much with other software. If there are any urgent questions at this point, let me know. Otherwise, I'm just going to move on. Sorry? Two questions about OMERO: the way it's imported into QuPath, and how does it link with QuPath?
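The rescaling that the software handles behind the scenes when a selection travels between the full-resolution slide and the downsampled ImageJ image boils down to a coordinate mapping by the downsample factor. A minimal sketch, with hypothetical helper names (this is not the actual QuPath or ImageJ API):

```python
def to_imagej(x_full, y_full, downsample, origin=(0, 0)):
    """Map a full-resolution coordinate into the downsampled image sent to ImageJ."""
    ox, oy = origin                      # top-left of the region that was sent
    return ((x_full - ox) / downsample, (y_full - oy) / downsample)

def to_qupath(x_small, y_small, downsample, origin=(0, 0)):
    """Inverse mapping: an ImageJ ROI coordinate back into full-resolution space."""
    ox, oy = origin
    return (x_small * downsample + ox, y_small * downsample + oy)

# A point at (30000, 10000) in the full image lands at (300, 100)
# in an image downsampled by a factor of 100.
x, y = to_imagej(30_000, 10_000, downsample=100)
```

Applying the two functions in sequence should round-trip a coordinate, which is exactly what lets a threshold-derived selection made in ImageJ come back as a correctly placed annotation.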
So I'm just trying to figure out, because I have quite a few questions about it. Okay. I don't think I mentioned OMERO at this point, but let me introduce how QuPath and OMERO link, then. Yes, I will try and show that to you. Hopefully you can see my screen at the moment as I try to open an image in OMERO, but I think that my internet connection might be suffering a little bit. So this is going to IDR in order to open an image within the OMERO web viewer. I can just copy the URL from that, and if I go into QuPath, open URL, I can paste it in; it brings up this import dialog again, and it will add it to the project. And so that means I can begin to work in QuPath with the image from OMERO directly, without having to download the entire file. All right, great. There's also another question: do the key-value pairs from OMERO automatically import into QuPath? Not at this point. The OMERO integration is still at a fairly early stage. A lot of the stuff that's in QuPath is dictated primarily by what has been a priority for the research projects that I've been working on, and this hasn't been yet, but I would certainly like to see the OMERO integration improve. So at the moment, it doesn't have that direct connection. Okay, thank you. I think that's all for now. Okay, good. So this is one of my favorite examples that I often show, because it's a freely available image, and so if you want to try this out, you can easily download the image and follow along. In this case we have a tissue sample (I think it's, well, it's not human) stained for Ki-67. And so we can see some nuclei in brown, some in blue. What we would essentially like to do here is get the percentage of tumor nuclei which are brown. So we need to be able to identify the different cells, classify them as being tumor and non-tumor, and classify them as being brown or not.
So positive or not. I do need to stress that I'm not a pathologist, and so I might use the wrong terms or classify the wrong cells. I try to make the tools user-friendly enough that the pathologist, the person who can reliably identify the cells, can use them; for the purpose of this demo, I apologize in advance if I get some of this wrong. Okay. So what we want to do is select a region; I'm going to select a modest region for the purpose of this demo. I'm then going to run the cell detection, and you can see that I have a couple of different options. One of them is called cell detection, which I showed you before. But in this case, where we want to distinguish whether a cell is positive or negative, there is a command which will allow us to do that in one step, and that's called positive cell detection. As the help text explains, it's equivalent to cell detection, but it just has an extra threshold for positivity. So I will double click on that, and it opens up the dialog box. Again, it's documented online, but what you should know is that if you put your mouse over any option, you can see a little bit more information about what it is and how it can be optimized. For the purpose of this demo, I'm going to choose some settings which I think should be relatively fast, not necessarily maximizing accuracy, but good enough for what we need. Okay, so having done that, what you see is that QuPath has identified the cells within this region, just using these default parameters, and classified them as being positive or negative based upon where the threshold was placed. If I want to change the display of these cells, there's a few ways I can do it, again all documented: F for fill, or I can right click on the viewer and choose to see only the nuclei, or only the cell boundaries, or only the centroids, or both together.
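Conceptually, the "extra threshold for positivity" step is just a cut on a staining-intensity measurement for each detected cell. A minimal sketch, assuming made-up cell dictionaries and a hypothetical measurement key (QuPath's real measurement names and API differ):

```python
def classify_positive(cells, threshold, key="Nucleus: DAB OD mean"):
    """Split detected cells into Positive/Negative by a staining-intensity
    threshold, and return the percentage that are positive."""
    for cell in cells:
        cell["class"] = "Positive" if cell[key] > threshold else "Negative"
    positive = sum(1 for c in cells if c["class"] == "Positive")
    return 100.0 * positive / len(cells)

# Four toy cells with made-up mean DAB optical densities
cells = [{"Nucleus: DAB OD mean": v} for v in (0.05, 0.42, 0.61, 0.12)]
pct = classify_positive(cells, threshold=0.2)
```

With the threshold at 0.2, the two strongly stained cells come out positive, and the percentage readout is what the dialog reports per region.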
And so one of the parameters was the cell expansion. QuPath has not at this point done anything smart to try and estimate where the cell should end. All it does is try to detect the nucleus and then expand by a fixed distance, or until it hits a neighboring expanding nucleus. We're going to see later some ways in which that might need to be improved, and some potential ways in which we can start to improve it. But at this point it's going to be good enough, because all that we want to quantify is the staining within the nucleus; we just need the additional information of the size of the cell, or an estimate of it, to help us distinguish whether a cell is tumor or non-tumor. So I can double click on any of these to see the measurements for that cell. If I click over here on the annotations tab, it will then show me the measurements for that particular cell. And as you can see, it's made quite a lot of measurements along with the detection; quite a lot of measurements just happen automatically. If you want to look at all of these measurements in one go, there are two ways you can do it. I can click on this little table, click to show the detections, and I can even view histograms of the various different measurements in there. But one of the ways in which I like to visualize the measurements is in context, and I can do that with Measure, Show measurement maps. This basically gives us a color-coded view of each one of the measurements that's been made. So if I choose, for example, the DAB OD mean, this should correlate with the brown staining within the nucleus. What we can see within this kind of visualization is that high values are yellow and low values are blue. We get high values on the nuclei which are stained brown in the image, so that all makes perfect sense.
But more interestingly, if I look at the nucleus-to-cell area ratio, what you can find is that some of the cells tend to have a higher nucleus-to-cell area ratio, and these correspond with particular cell types. In this case, I would venture to say that these correspond with the tumor cells, which tend to have a higher nucleus-to-cell area ratio. They might also have other distinguishing measurements in addition to that; they might have a slightly larger area, and so on. So no single one of these measurements is going to be perfect for discriminating between a tumor and a non-tumor cell. With my non-pathologist view of this, I'm going to say these larger, relatively densely packed nuclei are the tumor ones here, and these are the non-tumor ones. And so you can see that there is a relationship between these measurements, but it's not perfect. What we would like to do is train QuPath to classify and distinguish between the different cell types, and do this as effectively and efficiently as possible. To do that, we can give QuPath additional information. So I've added smoothed features; basically, that will create a weighted sum over the neighboring cells as well. And so now we get a kind of smoother heat map, where you can really start to see that we get higher values in these tumor regions and lower values elsewhere. We can then begin to annotate some of these regions with the annotation tools that I showed you before. I'm going to say this is tumor. And I'm going to say this is not; but remember, I don't want to overlap, so I'm going to hold down the keys to prevent an overlap, and say this is stroma. And so based upon these few examples, I've drawn my annotations, right clicked, and assigned a classification to them, which I hadn't done before.
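The "weighted sum over the neighboring cells" behind smoothed features can be sketched as a Gaussian-weighted average of a feature over nearby cell centroids. This is an illustrative approximation of the idea, not QuPath's implementation; the dictionary keys are made up:

```python
import math

def smooth_feature(cells, key, radius):
    """Add a distance-weighted (Gaussian) average of a feature over all cells,
    so each cell's value reflects its neighborhood rather than itself alone."""
    out_key = f"{key} (smoothed, r={radius})"
    for c in cells:
        wsum = vsum = 0.0
        for other in cells:
            d2 = (c["x"] - other["x"]) ** 2 + (c["y"] - other["y"]) ** 2
            w = math.exp(-d2 / (2.0 * radius ** 2))   # nearby cells weigh more
            wsum += w
            vsum += w * other[key]
        c[out_key] = vsum / wsum
    return cells

# Two nearby toy cells with very different raw values
cells = [{"x": 0.0, "y": 0.0, "v": 1.0}, {"x": 1.0, "y": 0.0, "v": 0.0}]
smooth_feature(cells, "v", radius=1000.0)
```

With a radius much larger than the cell spacing, both smoothed values pull toward the local mean, which is why the resulting heat map looks smoother and highlights regions rather than individual cells.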
And so this is telling QuPath a little bit more about what the different parts of the image are. Then I can say that I would like to train a classifier to apply to these; I just press live updates. Then you can see that QuPath will recolor all of these cells based upon a random forest classifier trained only on these two annotations and all of the measurements that we made. So what you should see is that red indicates that QuPath thinks it's a tumor nucleus and positive, so basically brown in this case; blue means it thinks it's a tumor nucleus and negative; and then either light or dark green indicates it thinks it's a non-tumor nucleus. I said initially that what we wanted to do in this image was to distinguish between the tumor and the non-tumor nuclei and then further determine the percentage of the tumor nuclei which are positive. And you can see that now, within just a few steps, we've done that within QuPath. We detected the cells; we added these smoothed measurements just because they might help (we don't have to); and then with a couple of annotations we trained up a classifier, and we get the percentage of positive tumor cells within that region. If I'm not happy with it and I think it's made some mistakes, I can draw some other regions, assign classifications to them, and interactively refine the classifier that we've got. And so here we have that information. Furthermore, if I draw an annotation anywhere, QuPath will automatically give me, just within that particular region, the different cell types and their percentages. If you had worked with the earlier versions of QuPath, you might be familiar with a thing called the object hierarchy, which was a really strict way of arranging all of your objects; if I moved this here, because this annotation wasn't completely inside, it would no longer give me the measurements I want.
That's all gone now within the latest version of QuPath. This will be documented more fully online, but essentially, if a nucleus falls within this square, it will be counted, which wasn't the case in the earlier version, because it was a lot stricter and made you do things in a much more fixed way. Okay. And so you might be familiar with this kind of approach from the earlier QuPath tutorials that I gave, but they were all for the earlier version, and this is basically how it looks currently. The steps, I should say, have been logged in here. So if we want to go back and find out the settings that we used for cell detection, I can double click on them and reopen the dialog, or I can go to the bottom and create a script. This is like ImageJ's macro recorder: it will automatically generate a script that I can then apply across the project, say with Run for project. I could apply that script elsewhere. In some cases we would need to refine this and clean it up a little, but that's how we can begin to do batch processing. Pete, I have a question here from the Q&A window. Someone's asking if the training classification uses the GPU or the CPU? The CPU. Okay. And also another question: can you explore the heat maps or send them to ImageJ for further processing? The heat maps. So there's a couple of things you could do. Here, for example, I've created a measurement map. I can send a snapshot to ImageJ, which is purely a visualization of it. Or I could send the region itself; I'm going to select a small region, because ImageJ is going to have to do quite a lot of work to handle all of this. If I send this region to ImageJ, then I can send all of the objects there.
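The batch-processing idea behind "Run for project" is simply replaying the recorded steps on every image. QuPath's real scripts are Groovy, so this is only a language-neutral Python sketch of the control flow, with entirely hypothetical command names:

```python
def run_for_project(images, script_steps):
    """Replay a recorded list of (command, params) steps on every image in a
    project, in order. Returns a log of what was run where."""
    log = []
    for image in images:
        for command, params in script_steps:
            # In real use, each command would act on the image's data here.
            log.append(f"{image}: {command}{params!r}")
    return log

# Hypothetical recorded steps, loosely modeled on the workflow shown above
steps = [
    ("setImageType", ("BRIGHTFIELD_H_DAB",)),
    ("runCellDetection", ({"threshold": 0.1},)),
]
log = run_for_project(["slide_1.svs", "slide_2.svs"], steps)
```

Every image gets every step, which is why a script recorded on one image usually needs a little cleanup (hard-coded regions, image-specific settings) before it generalizes.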
But I don't think ImageJ really has quite the same kind of measurement map built in. QuPath tries to convert all of its own stuff into the most ImageJ-friendly form that it can before it sends it, but I don't think there's anything directly comparable to the measurement map visualization. So you can send the visualization, or you can send the original data if you want to do more with it. Okay. Thank you. If there's a bit of time, one last question: do you recommend using a small or large training sample to train the classifier? And is it best to limit the number of examples? I don't know that I can really give good general advice on that. The one thing I would say is: give it a varied training set. It can be really tempting, especially with the wand tool, to just choose a big region and give it thousands and thousands of examples, because it's really easy to annotate. But all of the cells in there look much the same, and then you completely dwarf the rare cells that you might also want to classify accurately. So I would say a smaller, diverse training set is important. Okay. Thank you, Pete. Okay. I'm going to switch now to something which is entirely new in 0.2. What if we don't want to go through creating cells? What if we want to measure, say, areas as opposed to cells? So here is another example. And let's say, again at the risk of being wrong, I'm going to say that these may be tumor regions. I'm going to annotate one of these, and close to it I'm going to annotate another region; I'm going to use that trick to make sure it doesn't overlap, and I'm going to say this is stroma. I would like to identify these regions, but not necessarily the cells inside, at this point. We can do that now with Train pixel classifier. I press live prediction, and QuPath is now training a pixel classifier across the image, doing it initially based upon the field of view.
But I can choose the resolution at which this classifier is trained, depending on the level of detail that I want, and I can choose the type of classifier as well. I'm going to choose an artificial neural network for now, because it tends to be faster and arguably more accurate for some of these images. And I'm going to say I would like it to find the background as well; so basically, I want it to find the tumor and the background, and everything else I'm putting into this stroma category. Then, with just a few annotations, I can start to identify these different components of the tissue. In addition, I might want to refine that classifier in some ways, and so I can adjust it and give it different features; I can give it smoothed features if I want, in addition to changing the resolution. But I'm going to show you another image which I think makes that a bit more interesting. Before I do, I will also show that once I've done this, I can take my classification, which is computed in real time across the image, and say that I would like to convert it into annotations. And so this gives me another way to annotate images very quickly in QuPath: by training a pixel classifier within a region and then converting the result into annotations, which I can then, if necessary, refine further. But what I would like to do now is show the pixel classifier on a completely different type of data. So I'm going to open a different image; this is from Yvonne Donbrowski. This is a brain slice, and what we want is to identify the myelinated axons, or the proportion of the axons which are myelinated. This is a confocal microscopy Z-stack, so it's not a typical pathology image at all; in this case we have a fluorescence image.
You might remember that I could digitally separate the stains previously in QuPath by pressing numbers. That works for fluorescence as well: if I want to toggle a channel on or off, I can just press the number of that channel. But let's say that I want to identify the axons, so everything in green. I'm going to find a resolution that allows me to annotate one. I want to give it specific classifications: if I right click, by default it gives me tumor, stroma, all of this kind of stuff. In this case I want something a little bit nicer, a little bit more correct, so I'll add Axon for now. I will select that, and I will select another little region close by; it's often quite good to choose regions which are nearby, so that it can try to learn the distinction between them. I'm going to mark that one as ignored, because I only really want it to determine the axons versus everything else. And now I'm going to train the pixel classifier. I'm going to go back to the neural network, say I would like it at the highest resolution, and press live prediction again. The C button at the top controls whether the prediction is displayed or not. Okay. And you can see that what's shown in red there is what has been detected. Actually, maybe I should change that color a little bit; I'm going to change it to yellow for now. And if I continue to train it, it should then hopefully update the color as it continues to train. Maybe it doesn't. I'm going to switch to the probability view. Okay. So what we're seeing here, these bits that are starting to come through in yellow, these are what QuPath has detected. It's going to be easier for us to see that if I convert to a grayscale visualization. And so it's not done terribly well. But because what we want to identify in this case is pretty characteristic in its structure,
we can be a lot more informative in terms of the features that we give it. What we're looking at is in channel two, so I can say that I don't want QuPath to use the features from channel one or channel three. I would like the features at different scales and at different levels of smoothing, and I would like it to have some features that I think are going to be informative. These Hessian features we're going to see in a minute; I often forget whether it's going to be the maximum or the minimum eigenvalues, so I'm going to choose both for now. And so basically now we have, I think, more informative features going into the classifier, and we can see immediately how that improves the identification that we've got. I can then find other regions within the image where I might want to add training, and I can move across the Z-stack. You can find that at the minute it's quite good for the thicker axons, or those more in focus, but depending on what I want to detect, I can adapt this. I can also see what the features look like: here we can see the underlying features that are being used to train the classifier. I can see that the maximum eigenvalue is probably not the one that I wanted; the minimum seems to be a lot more informative. So I can go back in here and remove the maximum, and then we can see that our classification starts to clean up a bit. And if we choose regions and perhaps tell it that we also want to detect these, you can see it immediately starts to identify more of what was previously fainter. And so we can begin to train this. All of these calculations, I should say, are being calculated in 2D. It used to also support 3D, but my changes over the weekend have removed that functionality, at least temporarily. It can still be applied across the Z-stack as you view it, though.
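Why the minimum Hessian eigenvalue tends to be the informative one for bright, thread-like structures such as axons: the eigenvalues of the 2x2 Hessian of image intensity describe curvature along and across a structure, and across a bright ridge the intensity curves sharply downward, giving a strongly negative minimum eigenvalue. A small sketch of the closed-form eigenvalues (this is just the standard 2x2 symmetric eigenvalue formula, not QuPath's feature code):

```python
def hessian_eigenvalues(hxx, hyy, hxy):
    """Eigenvalues (max, min) of the symmetric 2x2 Hessian [[hxx, hxy], [hxy, hyy]].
    For a bright ridge, the minimum eigenvalue is strongly negative."""
    mean = (hxx + hyy) / 2.0
    root = (((hxx - hyy) / 2.0) ** 2 + hxy ** 2) ** 0.5
    return mean + root, mean - root

# A bright horizontal ridge: little curvature along it (hxx ~ 0),
# strong downward curvature across it (hyy strongly negative).
eig_max, eig_min = hessian_eigenvalues(0.0, -4.0, 0.0)
```

Here the minimum eigenvalue carries the ridge signal while the maximum stays near zero, matching the observation in the demo that removing the maximum-eigenvalue feature cleans up the classification.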
And you can choose whether you want a hard classification as the output, or you can choose the probability, in which case how intense the pixel is depends upon how confident the prediction is. There are some advanced options, so if you wanted, you could start to do a kind of feature reduction as well. And if you want to actually visualize what the features are, you can press the show button and it will generate an ImageJ stack showing how the features look. So after PCA I've reduced it quite a bit; if I go back to the original, I get the full feature stack that we could visualize previously. And so we can start to generate this kind of information. I can also generate the probability map from here, and as I showed you before, I am able to convert this into annotations or other QuPath objects as needed. The last thing that I would like to show you when it comes to pixel classification is something I've been asked about for a while: the ability to apply these kinds of techniques using the stain separation. That's one of the changes of the weekend that's now possible: I can train a pixel classifier and give it the information about the separated stains. But in that case, all I might want to do in this image is distinguish between what's brown and what's not brown. I don't have to go through the entire pixel classification; I can also go to Create simple thresholder. Basically, this is a very simple kind of classification based purely upon applying a threshold. So I say I want to take the brown staining, set a threshold, and do a little bit of preprocessing to smooth it out. I'm going to say that in this case everything above is tumor and everything below is stroma.
So now I have essentially something that acts in QuPath exactly the same way as the pixel classifier, but generated purely based upon thresholds. I can adjust the preprocessing to see the impact that it has, and I can even adjust the type of filter that's applied, although Gaussian is probably the more normal one that I would choose in this case. I can adjust the threshold, and as I do this, QuPath is automatically giving me the proportions of the different tissue components within that region. It might be the case that I want to run this at quite high resolution; here you're able to immediately see the impact of the resolution upon the results that you get. So if I see the tumor percentage there, it's about 65%; it goes down to 62% as I increase the resolution. And so you get immediate feedback: you get to see when adjusting a parameter matters and when it doesn't, and you get to see exactly the impact of how it looks within the image. And if you need to run at high resolution but you don't want QuPath to classify the entire image, you can choose to limit it to the annotations, and then QuPath will only bother to calculate the prediction in the area that you're looking at. If your computer is a little bit old and tired, as mine is (I'm running this on, I think, about a six-year-old iMac, so it's not the most powerful computer), you're able to restrict the region that is actually being classified, and that can make things perform an awful lot better. And I know there are some alternatives: there's Weka and there's ilastik, and these are fantastic for the applications where you might want to use them. But I find that in pathology I very often wanted to create a quick classification at a resolution different from the whole slide image resolution.
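The simple thresholder pipeline (smooth, threshold, report class proportions) can be sketched in a few lines of pure Python. A box filter stands in here for the Gaussian prefilter, and the image is a plain nested list; this is an illustration of the idea, not QuPath's implementation:

```python
def mean_filter(img, radius=1):
    """Small box smoothing as a stand-in for the Gaussian preprocessing filter."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def classify_by_threshold(img, threshold):
    """Everything above the threshold counts as 'Tumor', everything below as
    'Stroma'; returns the tumor percentage, like the live readout in the dialog."""
    flat = [v for row in img for v in row]
    tumor = sum(1 for v in flat if v > threshold)
    return 100.0 * tumor / len(flat)

img = [[1, 1, 0],
       [1, 1, 0],
       [0, 0, 0]]
pct = classify_by_threshold(img, 0.5)     # tumor proportion of this toy image
```

Changing the smoothing radius or the threshold immediately changes the percentage, which is the feedback loop the demo highlights.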
I wanted to convert it into measurements, and so that's why in the end I integrated this into QuPath to make that possible. And so all this pixel classification stuff is new and becoming increasingly more powerful. Okay, because we're running out of time, I just want to show you a couple of things very quickly. So far you've only seen one image open in QuPath at a time; you can open more if you want. I can right click on the viewer, go to multi-view, and create a second viewer. In this case I'm going to run a very fast cell detection. This is one of the La Jolla Institute images that I'm showing you. What I'm interested in doing here is distinguishing areas of tumor based on the cytokeratin staining in the image on the left, transferring these over to the image on the right, and then getting the CD3-positive cells, where they are relative to the tumor, and their density. So we're starting to bring in our cell detection plus our pixel classification, and then add additional distance measurements to that as well. Okay, so I run the cell detection and I get the percentage of positive cells just on its own. I will go in here and very quickly generate a few annotations, train a pixel classifier at fairly low resolution and with quite a bit of smoothing, and press live prediction. I just want to get roughly these areas, and then I'm going to say I want to create annotations from these. I don't want any very small regions as part of the annotations, and I don't want to split it; you can see we have disconnected regions here, so QuPath can treat that as one annotation or as multiple annotations. So: remove anything very small, don't split, press OK. Now we have our annotation, and I will transfer it over onto the other image. I might have given it a little bit of work to do. Yep, okay. The way in which it transfers is with the shortcut Shift+E.
There's Transfer last annotation as the command, but the shortcut is the one that's chosen to match ImageJ; so if you know ImageJ, you can pick up QuPath more quickly. These are not exactly the same tissue section, and so the annotation doesn't immediately align, but I can choose this rotate annotation command and shift it into location, just roughly, in the interest of time. And now, without having to annotate the tumor in here or train a classifier to detect it, we're able to transfer it. This is a very simple shift and rotation, so it's not a full image registration or anything like that, but we can get that over into QuPath. And the entire reason why I wanted to do that is to show you a command which is called distance to annotations, and what this will do is measure all of our cells and give me the distance from those cells to whatever annotations we've got. So I choose measurement maps, and we have the distance to the closest tumor annotation. You can see this kind of heat map, because every one of our cells now has an additional measurement which tells you how far it is to the boundary of the tumor annotation. But let's say we wanted to do something else in addition to that: we don't just want to know where the CD3-positive cells are relative to the tumor, or how far away they are from it, but we want to know the density within the tumor and also within the margin around it. We have a command in QuPath to expand these annotations. We can choose how far we want to expand them; I'm going to say 200 microns, for example. And I'm going to say I want to remove the interior, so this is going to generate for me an annotation which is around the outside of the tumor but doesn't contain the tumor region.
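The distance-to-annotations measurement boils down to finding, for each cell centroid, the nearest point on an annotation's boundary. A very rough sketch, measuring distance only to polygon vertices rather than to the edges between them (QuPath's real computation is more careful than this):

```python
import math

def distance_to_annotation(cell_xy, boundary_pts):
    """Distance from a cell centroid to the nearest vertex of an annotation
    boundary, given as a list of (x, y) points. Vertex-only, as a rough sketch."""
    cx, cy = cell_xy
    return min(math.hypot(cx - x, cy - y) for x, y in boundary_pts)

# Toy square tumor annotation with corners at (0,0)..(10,10);
# a cell sitting 2 units to the right of the corner (10, 0).
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
d = distance_to_annotation((12, 0), square)
```

Attaching this number to every cell is what produces the distance heat map: the measurement exists per cell, so it can be visualized, thresholded, or exported like any other.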
Okay, so that's what we have here, and so now if I double click within that annotation, we have the percentage of positive cells, or the density of positive cells, within the margin around the outer area, and we can also separately get the percentage that are within the tumor region itself. So about 10 percent of the detected cells are positive inside, but around the margin it's a substantially higher density. I just want to show you a few more things very quickly; if there are any urgent questions, just stop me, but there are a couple more things that I would like to get to. That's fine, I think you can go through them and then we can ask a few questions about the pixel classifier and such. So I'll let you keep on with the demonstration now. Okay, right. So what I was going to show you at this point: here we have a multiplexed image, and this is actually from the Bio-Formats sample data, but originally from PerkinElmer. So this is a PerkinElmer image available under a Creative Commons license, and it has eight channels within it. As I said before, I can press a number to turn a channel on or off, which is quite useful, but what if I wanted to see all of the channels side by side? We have within QuPath a channel viewer that allows us to do that. So now, as I move the mouse over it, I can see not just a composite of the image but also the individual channels, and if I were to annotate, this can help me to see exactly what I'm annotating. And let's suppose I wanted to change the names of these channels, to shorten the channel names a little bit. One way is to just double click on a channel.
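The expand-then-remove-interior trick defines a ring: a cell belongs to the margin if it is outside the tumor but within the expansion distance of its boundary. Assuming distances to the tumor boundary have already been computed per cell (as in the previous step), membership and density can be sketched like this (made-up field names, not QuPath's):

```python
def in_margin(dist_to_tumor_boundary, inside_tumor, margin=200.0):
    """A cell falls in the expanded-minus-interior ring if it is outside the
    tumor but within `margin` (e.g. microns) of the tumor boundary."""
    return (not inside_tumor) and dist_to_tumor_boundary <= margin

def positive_density(cells, region_area_um2):
    """Positive cells per mm^2 within a region (1 mm^2 = 1e6 um^2)."""
    positives = sum(1 for c in cells if c["positive"])
    return positives / (region_area_um2 / 1e6)

# 5 positive toy cells found in a 0.5 mm^2 margin region
margin_cells = [{"positive": True}] * 5
density = positive_density(margin_cells, region_area_um2=5e5)
```

Comparing this density between the tumor interior and the 200-micron margin is exactly the kind of readout the demo produces by double clicking each annotation.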
I'm going to remove a little bit at the end. But if something is quite slow and laborious in QuPath, there's a strong chance that there is a shortcut for it, and in this case, if I just type out the names of the channels I would like, I can press Ctrl+V to paste them all in there. You don't need to do this, and there are other ways to do it, but there are very often shortcuts and nice ways to do things much more efficiently in QuPath. So now I've shortened all the channel names. Why would I bother to shorten all the channel names? Well, what we want to do in this image is identify the cells and then classify them according to the different markers, and I would like to take the name of the channel and use that as the name of my classification; it's going to be a lot easier to read and to understand if those names are short. So, I've shortened all the names of the channels. I can go in here, where these are all the classifications available to me, and say that I would like to populate this from the image channels; I don't need to keep what's there. This means that if I were to annotate a region, say the CK-positive region here, I could then assign it that classification. But my ultimate goal here is to distinguish, for each cell, what it is positive for, and that could be positive for multiple markers at the same time. So I will try, very quickly, to create a cell detection. The cell detection works for this kind of image in the same way as it works for brightfield. I can adjust the parameters; I'm just going to keep everything fairly close to the default. But you might remember that I expanded the nuclei to approximate the cell areas; I'm going to decrease that a bit, so I don't expand very much in this case, because I really don't want to start
quantifying markers from neighboring cells. I choose to perform the nucleus detection on the DAPI channel, and now we have all of our cell measurements, with measurements for each of the different channels per cell. What I want to do is somehow identify, for each of these cells, whether it's positive for a particular marker. This workflow is explained in a lot more detail online: if I go to Help and then to the documentation, in the interest of time I won't take you through all of the steps, but it shows how you can take this image, simplify the channel names as I did, detect the cells, visualize the measurements if you want to, create duplicate images, and then create independent classifiers for each of the different markers. Each classifier could be based upon a threshold, or it could be a machine learning classifier. The important thing is that once you've done that, you can combine these classifiers within QuPath: you can load up your six, seven, or more independent classifiers, select them all, apply them sequentially, and then you get a classification for each cell. You can then visualize that; I showed you that you can visualize cells by their centroids, and if you do, each little mark corresponds to a detected cell, the shape of the mark corresponds to the number of markers it's positive for, and the color is the exact combination. This makes it a lot easier to interpret what's happening, and you can even go from that to clustering the cells, querying and interrogating them, and looking at distributions, distances between them, and so on. In the interest of time I'm not going to show all of that now, but it is documented, and I can explain more about it later if you want. One of the very last things I would like to talk about, then, is
that, for this increasingly sophisticated analysis, QuPath's built-in cell detection was okay for its time. It's fairly generic and works all right on brightfield images across a lot of stains and samples, but you can see it can make mistakes. If I switch to look only at, say, the DAPI channel, you can see it's done a reasonable job on the nuclei, but it makes some strange mistakes; sometimes it gets things surprisingly right, though not necessarily with the most convincing contours in some cases. That might be good enough sometimes, but for multiplexed analysis, where you want to distinguish whether a cell is positive for multiple markers, you really need a good segmentation. That's why I was so interested in the StarDist presentation yesterday, and it's one of the reasons why so much changed over the weekend and the last day or two. Let me see if I have installed what I need; I hope I do. I have something within QuPath which is not in the public distribution yet, but I can give you instructions for building it if you want. Basically, within QuPath you can write scripts, as you've already seen, and one of the scripts I've written tries to apply StarDist using the pre-trained models. StarDist, if you weren't in the talk yesterday, is a really nice deep learning method for identifying nuclei; it uses TensorFlow, and there are some pre-trained models from the developers made available, which I've downloaded from their website. So if I've set this up right, I'm going to delete all of the other objects and select a small region, because it might take a little bit of time. I will then open up my StarDist fluorescence model script, which I've written in QuPath in order to call StarDist and run it. I will run my script, and I will have to wait
for a little bit, because we've had to load StarDist. There we go. So here, within QuPath, we have the StarDist nucleus detection, as opposed to the original QuPath model, and this can give much more convincing contours and much more accurate nuclear segmentations in a lot of cases. The intention with QuPath is that, as I described, you go from your image to your objects and then you query them; but for the actual step of creating your objects, say detecting cells, there might be different ways you can do it. You might integrate a completely different algorithm to run your nucleus detection, and StarDist is an example of an algorithm you might want to use for that, but you can still use QuPath to run it and then to do everything else afterwards, like the classification or, although we didn't have time to look at it, distribution analysis and that kind of thing. So the ability to link up QuPath with this kind of algorithm means that you can start to use some really nice work done by other researchers and then apply it with the tools within QuPath.
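The multiplexed classification step described a moment ago (independent single-marker classifiers, applied sequentially and combined per cell) can be sketched in a few lines. This is an illustration in Python, not QuPath's actual API; the marker names, measurement keys, and thresholds are invented for the example.

```python
# Sketch: combine independent single-marker classifiers per cell.
# Each classifier maps a cell's measurements to True/False for one marker;
# the combined class is the set of markers the cell is positive for.
# All names and thresholds below are hypothetical.

def make_threshold_classifier(measurement, threshold):
    """A trivial intensity-threshold 'classifier' for one marker."""
    return lambda cell: cell[measurement] > threshold

classifiers = {
    "CD8":  make_threshold_classifier("CD8: Cell mean", 10.0),
    "PDL1": make_threshold_classifier("PDL1: Cell mean", 5.0),
    "CK":   make_threshold_classifier("CK: Cell mean", 20.0),
}

def combined_class(cell):
    """Apply each classifier in turn; join the positive markers."""
    positives = [m for m, f in classifiers.items() if f(cell)]
    return ": ".join(positives) if positives else "Negative"

cell = {"CD8: Cell mean": 12.0, "PDL1: Cell mean": 7.5, "CK: Cell mean": 3.0}
print(combined_class(cell))  # positive for two of the three markers
```

The number of positive markers and the exact combination are what drive the centroid glyphs in the demo: the shape encodes how many markers a cell is positive for, and the color encodes which combination.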
One of the examples that I particularly liked, and showed on Twitter, is this especially challenging image. I'll just select a region; let's go down here, where you can see there are a lot of very dense nuclei. If I run the QuPath cell detection, it isn't great with the default settings, anyway. Oops, I'm going to need to set the image type, which was wrongly estimated in this case; that's important, and because the image is very dark it somehow didn't manage to estimate the image type correctly. You can see QuPath's nucleus detection is not terribly good here, whereas if I go across and drag in my script for StarDist (and I will make these scripts available) and run it, you should see that it's done something a lot more sensible with this image than QuPath did. I'll change the color of the objects so you can see them a bit more easily. So now I've run StarDist through QuPath, and it has done a much better job of the cell segmentation. But there are also some advantages to running StarDist through QuPath as opposed to other ways, which is that we can begin to do some additional things. For example, and it's at a very early stage, I've tried to write this script in such a way that it becomes easy to customize if you need to. Here I've said that I would like to not only detect the nuclei but also apply the cell expansion within StarDist, and also output the probability, that is, how confident StarDist is in each prediction. This means I can begin to interrogate exactly what my threshold ought to be within StarDist for this kind of image, then apply it to approximate the full cell areas rather than only detecting the nuclei, and interrogate that data in a lot more detail. So, I'm sorry, I've run out of time; I guess I'll stop there and ask if there are any questions at this point. Thank you.
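Since each StarDist detection carries a probability, filtering by a confidence threshold downstream is straightforward. A minimal sketch, with hypothetical detection records rather than QuPath's actual data model:

```python
# Sketch: keep only detections whose prediction probability clears a
# chosen threshold. Detection dicts are illustrative; in QuPath the
# probability would be stored as a per-object measurement.

def filter_by_probability(detections, min_prob):
    return [d for d in detections if d["prob"] >= min_prob]

detections = [
    {"name": "Nucleus 1", "prob": 0.95},
    {"name": "Nucleus 2", "prob": 0.40},
    {"name": "Nucleus 3", "prob": 0.72},
]
kept = filter_by_probability(detections, min_prob=0.5)
print([d["name"] for d in kept])  # the low-confidence detection is dropped
```

Interrogating the distribution of these probabilities is one way to decide what threshold "ought to be" for a given image type, as mentioned above.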
Okay, thank you, Pete. There are a few questions, mainly about the pixel classifier and the deep learning models implemented in QuPath. The first one: will it be possible for users or developers to implement new deep learning models and frameworks, a bit like StarDist? Would it be possible to integrate more in the future?

In the future, yes. The TensorFlow part is what I did on Monday and then fixed a little bit this morning, so it's incredibly recent, and managing TensorFlow setups on different computers is all a little bit painful. But I can give instructions for you to build QuPath yourself; it's one line you put in to say that you want TensorFlow, and it will build with TensorFlow support. Because this is so incredibly new, it isn't yet very well documented or standardized; it's going to take some more time and some more applications to work out the details so it becomes more widely usable. But certainly the plan is that it will be possible, and StarDist is really just the first demo of that in action.

Okay, great. A lot of people are asking whether it's possible to share the scripts that use StarDist, so that they can reproduce this.

I guess I'll get them online at some point.

Another question: is it possible to take the pixel classifier, save it, and import it into other platforms, like PyTorch or TensorFlow or other programs?

Not really. Well, if you're talking about the classification or the prediction results, then certainly: you can export the predictions, the probability maps, and so on. The pixel classifier itself uses a lot of OpenCV methods; you would have to reimplement the exact feature calculations to make them work elsewhere, so it would be possible, but it would be a lot of work. But because it uses the OpenCV methods, it serializes to JSON, so it is
an open format that you could potentially use, but you'd have to reimplement quite a lot of what's currently in QuPath to make it practical.

Okay. There are very few minutes left, so I'm just going to ask a couple more questions. A few of you were asking about image alignment: is it possible to align a few images within the same project, or, as an alternative, to transfer annotations from one image in a project to another image? Is that possible, and how would it be done?

Yes. There is a command for transferring the last annotation, so that can transfer one annotation. By scripting, many things are possible, and there are lots of methods within QuPath that try to make these things as easy as possible. If you want to transfer annotations over and your transformation is simply an affine transform, and you can figure out what that is, then you can do that very easily. But QuPath doesn't have whole-image registration or non-rigid transformations; it doesn't have any of that built in, because there are other tools that already do that, and they do it much better, so QuPath doesn't try to replicate it. If it's a case of simply bringing objects over and possibly shifting, translating, or rotating them, that can all be done relatively easily by a script, and there are multiple discussions about that on the forum already. And there are lots of things we write within the software now so that the scripts become a lot easier to write over time: something that needed a 20, 30, or 40-line script in the past can very often be done in a few lines now, which makes it all a lot more accessible.

Okay, well, thank you very much. I think, unfortunately, we don't really have time for another question, unless you're okay with one last one?

Sure.
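As an aside on the annotation-transfer answer above: for the simple shift/rotate/scale case, transferring an annotation between images amounts to applying a 2x3 affine transform to its vertex coordinates. A sketch in plain Python (not a QuPath script; the transform values are made-up examples):

```python
import math

# Sketch: apply an affine transform [[a, b, tx], [c, d, ty]] to polygon
# vertices, as you might when transferring an annotation between two
# images that differ by a known shift/rotation/scale.

def apply_affine(points, m):
    (a, b, tx), (c, d, ty) = m
    return [(a * x + b * y + tx, c * x + d * y + ty) for x, y in points]

# Example: a 90-degree rotation plus a (100, 50) translation.
theta = math.pi / 2
m = [
    [math.cos(theta), -math.sin(theta), 100.0],
    [math.sin(theta),  math.cos(theta),  50.0],
]
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print([(round(x, 6), round(y, 6)) for x, y in apply_affine(square, m)])
```

The hard part in practice is estimating the transform itself, which is where dedicated registration tools come in; once you have it, applying it to object coordinates is this simple.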
Okay, let me see if there's a quick one that we can answer. I think it's mainly about exporting annotations and other things for use in other platforms, so I think we've kind of answered those questions already.

Yes. On the new documentation website there are instructions for exporting annotations as JSON or as raster images, and there is a TileExporter class which, again, tries to make that really easy by scripting; you can export them as labeled images or as multi-channel binary images. The intention is that it should be as easy as it possibly can be to get your annotations out in whatever format you need. So, thank you very much; I got a little bit carried away, and there are lots of things I wanted to show you, but hopefully we can follow up in the forum and discuss things there as well. Thank you very much, Melvin, Julien, Laura, and everybody who managed to stay this long.
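To illustrate the export idea closing the session: writing polygon annotations out as an open, GeoJSON-style format is a small amount of code. This is a sketch, not QuPath's exporter; the annotation records are invented example data.

```python
import json

# Sketch: serialize polygon annotations to a GeoJSON-like structure,
# similar in spirit to exporting annotations as JSON
# (illustrative only, not QuPath's actual export code).

def annotations_to_geojson(annotations):
    features = []
    for ann in annotations:
        ring = list(ann["points"])
        if ring[0] != ring[-1]:          # GeoJSON rings must be closed
            ring.append(ring[0])
        features.append({
            "type": "Feature",
            "geometry": {"type": "Polygon", "coordinates": [ring]},
            "properties": {"classification": ann["name"]},
        })
    return {"type": "FeatureCollection", "features": features}

annotations = [{"name": "Tumor", "points": [[0, 0], [100, 0], [100, 80], [0, 80]]}]
print(json.dumps(annotations_to_geojson(annotations), indent=2))
```

An open, text-based interchange format like this is what makes the results usable in other platforms, which was the thrust of the question.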