All right, thank you very much to the organizers for having this webinar. As I said earlier, my name is Nicholas Sofroniew and I'll be talking about napari, a multi-dimensional image viewer for Python. I also want to say thank you to all of you for coming today. I know it's a hard time in the world right now for a lot of people, and I particularly want to extend a welcome and a message of support to anyone from the Black image analysis community in the US who has joined us. So with that, I want to give you a little bit of an outline for today. I'm going to spend about 20 minutes on welcome and introductory slides. Then we'll have about 10 minutes as we transition to installing napari and today's lessons. Then we've got three lessons that will be taught out of Jupyter notebooks: the first on image visualization, the second on manual annotation, and the third on interactive analysis. And then we'll have a wrap-up and conclusion. As was mentioned earlier, we're joined today by our moderators, Kevin and Talley, who are core developers on napari as well, and Rocco, who's one of the organizers of this event. So I think this audience will be well aware that over the past 10 years there's been an incredible improvement and increase in what we are capable of measuring with microscopes. This example of a developing mouse embryo from the Keller lab is a light sheet recording; tens of terabytes of data can come out of an experiment like this. And there's so much richness that someone wants to extract here: the position and location of all the cells. These are real analysis challenges and visualization challenges. Thinking more generally about what biologists really need to do with images: visualize, annotate, and extract results in workflows. There are a lot of amazing methods for this, but a lot of it is really quite challenging. There is a heterogeneity of quite incredible existing image analysis tools already out there that people are finding a lot of value from, Fiji and CellProfiler, really very important tools like that. Something that's happened in the past five years, though, that I think is also important, is the incredible rise of computer vision and machine learning techniques, things like PyTorch and TensorFlow, that really enable advanced segmentation and tracking algorithms. But a lot of biologists still lack access to advanced algorithms like this. If you go to a computer vision conference it can sometimes look like things are solved, but there's still definitely a problem disseminating these sorts of methods. And so the napari team got together maybe two years ago to start working on a fast, interactive, multi-dimensional image viewer for Python. The reason it was for Python, I think for a lot of us, is that we had switched our analysis over to Python to try to leverage those advanced machine learning algorithms, and the other virtues of the scientific Python stack. So napari, as I said, is designed for browsing, annotating, and analyzing large multi-dimensional images. It's built on top of Qt as a GUI framework, and it leverages a Python library, VisPy, for performant GPU rendering. Some of its key features are fast visualization and interactivity, three-dimensional rendering, and nD slicing.
So it's got a full n-dimensional data model: you can have time, channels, an arbitrary number of dimensions like that. And it's designed to scale to large data sets, so it's not constrained by what fits in RAM or even what fits on your computer; it can pull data from remote data sources as well. It supports a variety of fundamental data types that we expose as layers, and I'll say a little more about those in a moment. By virtue of being in Python, it's very easy to integrate with advanced machine learning and deep learning methods. And it's also customizable and extendable: we have the ability to add custom key bindings and mouse functions, and we're designing a plugin interface as well. So let me tell you a little bit about the napari viewer. You can see it has a couple of features. There are layer-specific controls in the top left-hand corner; these are GUI elements that allow you to control the properties of the particular images you're looking at. Below that is a layer list that contains a little representation of all the layers you've added into napari so far. These layers, as I said before, can be of different types: images; points, if you just want to mark particular locations; labels, if you want to do segmentation and annotate particular regions pixel-wise; shapes, for drawing polygons; and vectors. We also have support for multiple dimensions, as I mentioned earlier, so depending on how many additional dimensions your data has, you get additional dimension sliders at the bottom of the screen. There's a canvas, which is where the image gets rendered. And we also have an integrated console that allows you to interact in Python with all the napari objects. So that's the napari viewer. Let me tell you a tiny bit about the napari team. We have a steering council of three people: myself; Loïc Royer, an investigator at the Chan Zuckerberg Biohub; and Juan Nunez-Iglesias, an investigator at Monash University. We have a number of core developers, including Talley Lambert and Kevin, who are on the call helping answer questions today and who have really done an incredible job driving this project forward. We've also been lucky enough to receive contributions from a large number of people, and we're really open to contributions as well. All this development is being done on GitHub, in the open; everything's open source, and we really welcome contributors from newcomers through to experts. So with that, I'm going to switch to some live demos and talk a little bit about the viewer before we dive into the lessons later in the course. But are there any important questions, Talley, that have come up at this moment? OK. So let me switch over to some examples. Let me find the one I want to start with. OK, so I'm going to start with this one. I've loaded here a pathology image into napari. This is a whole slide image, about 100,000 by 200,000 pixels, and I can seamlessly zoom in and out. It's Google Maps style rendering, with image pyramids and tiling, and napari will dynamically fetch just the amount of the image that you want to look at at each moment in time. So we don't have to load this whole thing into memory in one go; we can pull in exactly what you need. And then around here, I've also got some annotations.
So here I've got tumors that had been previously annotated by a pathologist, which I loaded in. I can actually create more annotations. Maybe I'll zoom in here. I'm going to create a new layer, a new points layer, and I'm going to call this one cells. Then I can come in here, and in the points layer I have various different tools; this tool allows me to add little points at the location of each of the cells I'm interested in. What's nice is that I can then open up the napari console, programmatically grab access to that layer, and grab those points. And if I add some more and run this again, I have more points. Similarly, as we'll see in the lessons with the Jupyter notebooks, these sorts of interactions are also possible from a notebook. And if I were to edit this data here, that would actually update what I'm seeing on the screen. So that's an example of the points layer and some manual interactivity. I can also create a new shapes layer here. Maybe I'll zoom out. Say I'm interested in drawing a rectangle around this particular area; I can then go into the console and extract those parameters too. So in this way I can both visualize, in a lazy fashion, very, very large data sets and annotate them. Let me show another example here. This is one channel of a volumetric time series from a lattice light sheet data set. Again, this is, I think, tens of gigabytes, but I'm dynamically loading, as I move the slider, each particular chunk of data that I need to visualize what's on the screen. And that makes it very, very fast and very performant. On disk, this file is actually being stored as a Zarr file, but there are similar ways to do this if you have a directory of TIFFs as well; you can dynamically load each one too. Over here are some of the other controls that are relevant to images: I can adjust contrast limits, and really go in and do that in detail, and I can adjust different colormaps as well. So those are two examples of interactivity and visualization. But I think one of the things that's very exciting about napari is the potential to couple it to analysis and customization. While napari itself right now is really just a core viewer with these different layer types, because it is Python based it's quite easy to connect it to really incredible analysis routines. In this little example, I've done two things. One is I've added this custom GUI element, and we'll talk a little in the final lesson about what it means to add custom GUI elements. And I've also hooked it up to some analysis in the background. These are images coming from a high throughput screen done by Recursion Pharmaceuticals of cells infected with SARS-CoV-2. I can navigate around here and look at different examples from the screen, and each time I change field of view, it dynamically loads the data. It's five-color-channel data, and you can see it's quite nice: I can look at each color channel independently, adjust the contrast limits of each one, and turn them on and off separately. And what I can also do, let me turn the nuclei on, right, you can see the nuclei.
All right, so as I was saying before, one of the exciting things is connecting to analysis and executing that analysis on command. I've rigged up a button here that, in the background, calls out to the StarDist segmentation algorithm, a very exciting Python TensorFlow-based segmentation algorithm, and applies it to the current field of view I'm looking at, the nuclear channel. So this was applied to the nuclear channel, and you can see it's really done an incredible job; that's all credit to the StarDist developers. But what's nice is if I now change field of view and look at another one, I can ask, how does StarDist do on this one? Let's see. OK, it seems to do quite nicely. And these cells look really very different from the previous cells, so I can just see how the algorithm does; I can run it, and OK, there it is. It's kind of nice because there are, I think, easily 10,000 images in this data set, and I didn't have to run the algorithm on every single one. I could interactively browse, see what I wanted to see, then check the algorithm, do spot checks, things like that. So this kind of analysis, visualization, and interactivity all being done together is, I think, a really powerful combination. Another example, and this will be my final example, is here, where I'm now going to do a pixel-wise annotation. I'm going to use a paintbrush to label some cells into different classes. If people are familiar with the ilastik tool, this is a demo inspired by that tool. So I'm labeling here some background, some nuclei. Let me do a little bit of this one. And then I'm going to label some cytoplasm too. Again, the point of this example is really to see how this does by executing it, trying to get it to run on the rest of the field. OK, improvement; it generalized pretty well. Maybe I see here, OK, this looks a little funny; maybe if I come back and touch this up as actually being background and run it again. OK, so it improved a little bit. Here in this labels layer, I have different options like paintbrushes, fill buckets, erasers, color pickers, a lot of the functionality you might expect from a graphics editing tool, but now customized for the scientific use case. The way this particular example works is that there was a pre-trained featurizer based on a U-Net, and then a random forest, rather like ilastik, running on top for the pixel-wise segmentation. But I really mean it to be more illustrative of what this kind of interactive analysis with a tool like napari looks like. OK, so those were the demos; I just had some gifs of them. Actually, this is a really great gif as well, that Talley made, which is really incredible. This is some lattice light sheet data that I think he had collected. What's happening here is that every time the slider gets moved, the data gets loaded from disk, from a TIFF, I think, and then it gets deskewed, because with lattice light sheets you often collect the data skewed, so it gets deskewed and deconvolved, and then rendered.
And what's so exciting about that is that now, instead of having to store two copies of the data, a raw copy and then a deconvolved, deskewed copy, we can store just one copy but visualize the data exactly as we want to see it. And in many cases, when you do that process, the data volume actually goes up; it's less space-efficient to store the deskewed, deconvolved data. So there are many advantages to this sort of lazy computation approach, where you're combining visualization and computation in one. So those are some of the examples. We're really excited to see people out in the community building on top of napari. It's been great to see some people tweeting out examples of using the tool. We really love to see that, and we want to hear from all of you, particularly if you're running into problems. What's next for napari? We've got a couple of features planned. One is a standalone app. As I mentioned at the very beginning, a big goal of this tool is to become accessible to research biologists who maybe don't know how to do Python coding or use Jupyter notebooks. This session today will use Jupyter notebooks, so it's maybe a little more intermediate or advanced level, but we really want this tool to be accessible to people who don't have those skills as well. We're also going to be working on script and macro generation, so that if you did a series of steps in the viewer, you could record those as a script that you could replay, potentially in a headless mode; maybe you annotated your first 10 images and now want to run that analysis on the remaining 1,000. We're also thinking about multiple linked canvases, say if you want ortho views; improving our performance with remote data; and adding infrastructure for analysis plugins. We currently support some infrastructure for file IO plugins, so that people can read and write files. For more info, you can visit our website, napari.org. You can also reach out to us for help on the forum and on GitHub; we're on the image.sc forum, as I just said, and on GitHub as well. And our Twitter handle is napari imaging. So that was it for the slides. Maybe now is a good time to pause again and ask if there are any questions from the moderators that need answering. I would say the main thing coming in is just support for various file formats: does napari support this, does napari support that? You sort of set it up already, but maybe clarify that. Yeah, thanks, Talley. Okay, so as was just mentioned, one of the things that characterizes the image analysis space is the heterogeneity of file formats. In napari right now, we have actually done some work to make a plugin interface where you can write your own file loader if you want. We have built-in support for the standard, let's say non-proprietary, formats: TIFFs, JPEGs, PNGs, things like that. And for some of the more vendor-specific formats, people have been working on plugins. An incredible one right now has come from the Allen Institute for Cell Science, aicsimageio, for reading CZI files. I'd say right now we're in a place where, if there exists a way to get your data into Python, then it's pretty easy to turn that into a napari plugin.
And maybe that's something we'll provide links to when we post answers to these questions on the forum as well. Okay, so with that, I want to transition to installing napari and today's lessons. As was said, I'm going to be teaching today using Jupyter notebooks, which I think were introduced really nicely to this community by Guillaume Witz in one of his NEUBIAS presentations a few weeks ago. The training material is accessible on this repository: you can get it via the bit.ly link, neubias-napari-2020, but it's also at sofroniew, my GitHub handle, slash napari-training-course. There are some installation and setup instructions there that hopefully people received before the presentation. If this is looking very new and intimidating, then you're welcome to follow along interactively with a pre-built setup we have on Binder, which is a way to run remote notebooks. I was not planning to go through the instructions now, but if the moderators want to flag anything, Arjen, I can do that as well. So sorry, Nick, there is somebody reporting that the link for the Binder zip folder is not working, and Talley directed people to the right button there. Yeah. So if you can remind people how to run in Binder. Should I open it up? Should I see if the Binder works? Yeah. Okay, so I click the Binder link; I'll let it spin in the background. But yeah, we're doing well on time, so I'll spend a couple of minutes here. To get going with this notebook locally, you can clone this repository or download it as a zip. The zip download you can also get from, I think it's, gosh, I can't remember, one of those, clone or download here. Yeah, so you can also do download as a zip right here. Once you unzip, you can navigate into the main directory. The recommended approach is to create a conda environment using an environment.yml that we provide; you can run conda env create with that file and then conda activate the napari environment. Or, if you are comfortable with all these tools and used to managing your own environments, you can pip install napari, scikit-image, and tifffile from our requirements. We're also using a tool, magicgui, for one of the examples. And then if this is all working, you should be able to just type napari into the terminal and it should launch an empty napari window. You can also run python check_setup.py and it should show an image of some nuclei. Let's see the Binder. Okay, so the Binder has run. If you've launched the Binder like this, it's really important to first activate your desktop; you can see here, I turn on the desktop. What's happening here is that napari is basically more like a local application than a web-based application, so it needs a desktop to run in. Here we have a virtual desktop, and we're using noVNC to access it. So this has run. I'll show how to get going here before I switch over to my local copy, just to get people going who want to use the notebooks on Binder. You can see now I've navigated into the notebook. And again, there's actually a special command that you have to run if you're using the notebook with Binder, to connect it up.
And it's important to remember to wait a little bit after you've run this command, for things to be ready. Once it's run, I'm just going to run these first two cells to check that it's working for people. Let's see, so if I run this one. Okay, so I have napari over here. Okay, so that means the Binder link should be working. If people run into more problems with Binder, we can come back to that. I'm going to switch back to my local copy of this and go from there. So as I said earlier, napari is a tool we can use for visualization, and it supports a wide variety of layer types and interaction patterns. If you are following along now with this notebook locally, the first thing I want to call out is that we use Qt for our graphical user interface. When using napari from a Jupyter notebook or an interactive Python session, you have to first create the Qt application, and that is done using this %gui qt magic command. So I'll just do that now. If you are running this inside a script, a Python script rather than a notebook, then you can create the Qt application inside a context, using with napari.gui_qt(). But we won't be doing that today; we'll be using the notebooks. And now that that's been created, I can import napari and create a viewer, which is our basic object. If I run this, you can see that a viewer has popped up on my screen. It's empty; I'm going to make it full screen. Unlike tools like ipyvolume or Jupyter widgets, it's not embedded inside the Jupyter notebook; it's a separate window. The advantage of that is that it allows us to use native rendering technologies rather than web-based rendering technologies, which, when you are local, are more performant. And that's why we had to have this second window in the Binder context: there had to be a desktop somewhere. If I come back here: something that's helpful for teaching, but maybe less critical when you're just using napari for your own work, is that we have a notebook screenshot functionality, so that if you want to capture a screenshot of whatever you happen to be looking at in napari at any moment in time, it will appear in the notebook. Let's see. And here the napari viewer is empty, there's nothing in it, so the screenshot just has the empty viewer. And unlike the real napari, the screenshot is not interactive.
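To recap that setup in code, here's a minimal sketch; %gui qt is the IPython magic he runs, and napari.gui_qt() was the script-side pattern at the time of this recording:

    %gui qt  # in a notebook / IPython: start the Qt event loop first

    import napari
    viewer = napari.Viewer()  # an empty viewer window pops up

    # In a plain Python script (not a notebook), use the context manager instead:
    # with napari.gui_qt():
    #     viewer = napari.Viewer()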
So there are a couple of different ways to load images into the viewer. One really simple way is dragging and dropping files: you can just drag files from the finder and drop them onto the viewer. You can also select files from our open file menu; up here we have a bunch of open options. Then there is the viewer.open command, where you give a file path, which we can use from the notebook. Or, the one we'll look at first: we can load the image data into Python using standard Python loading functions and then pass that array using the viewer.add_image command. So there's really a range of expertise and sophistication with Python on display here: drag and drop is the most accessible, and this viewer.add_image command is maybe the most flexible. For the first three options, the path will actually pass through our plugin interface, so depending on what plugins you have available, you can support a wide variety of file types. And with this final one, it's really anything you can load into Python that you could potentially view in napari. So I'm going to use the very nice tifffile reader to read a TIFF file of nuclei data. This file was installed when you installed the repository and is at the data location inside the lessons. Hopefully, if people run this now, they're able to load in the data and see that we've got three-dimensional data: an array with a shape of 60 by 256 by 256. And now let's try adding it to the viewer, with this add command. If I come back to my viewer on the other screen, okay, it doesn't look like much has happened, but actually now we've got this slider here and I can slide through. This is 60 steps, so this is the full 60. And I can now zoom in, zoom out, move around, and reset where I was looking. I'm actually just going to do this in the Binder as well, just to see that it happens. So we come back to the Binder from before; I run that, that's the screenshot. Okay, I'm going to load the TIFF file again and then add it here. And now I've added it and it's in this window here. You can see that this is a lot slower; that's because it's going through noVNC. But I think this can again be good to give a sense that, if you just want to quickly and easily explore using napari, it can be a good method. Okay, so I'm going to go back to teaching now using just my local notebooks, but if you are following along on the Binder, hopefully that's enough to keep you going. So let's see. So sorry, Nicholas, to stop you: somebody in the audience asked if you can please go slightly slower while you show the different cell executions in Jupyter, so they can follow. Thanks. Thank you. Yeah, absolutely. So let me maybe even recap. The really important steps are that I've loaded in this data, and then I've used the viewer.add_image command; add_image is the most flexible way of adding data that's already been loaded into Python into napari. And then I've taken a screenshot. You can see a couple of cool things happened here. If I just focus on the screenshot, the non-interactive thing: it's created this layer that's actually called nuclei, and it's done that by looking at the name of the variable here, understanding what that name was, and giving the layer that same name. And if I go back to the viewer: there are a number of different control panels that we saw earlier, and everyone should maybe spend a tiny bit of time playing around with them. These are nuclei, so maybe we want to make them blue. We have contrast limits; if I right click on this slider, I get this expanded view here, which I can type into, so I can get finer control over what I'm looking at. I can type in contrast limits. I can adjust opacity, which right now, with only one image present, is less exciting. So let me go back to the notebook.
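A minimal sketch of the loading steps just shown; the file path here is illustrative, so adjust it to wherever the lesson data lives on your machine:

    from tifffile import imread
    from napari.utils import nbscreenshot

    # Load the stack into a plain numpy array: 60 z-slices of 256 x 256 pixels.
    nuclei = imread('data/nuclei.tif')
    print(nuclei.shape)  # (60, 256, 256)

    # napari names the new layer 'nuclei' automatically, from the variable name.
    viewer.add_image(nuclei)

    # Embed a static, non-interactive screenshot of the viewer in the notebook.
    nbscreenshot(viewer)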
Okay, so: color channels and blending. As I said, right clicking on the contrast limits slider brings up that elongated version of the slider, which I can type specific numbers into. And I was able to adjust the contrast limits and change the colormap to blue, and in general I can use any of those drop-down menus. Again, I can take that screenshot. For the people following along, the screenshot functionality is less critical now because you have the actual viewer there, but it is something you can use if you want to, say, create a frozen version of this notebook to share with someone who isn't going to install the tool. As I mentioned earlier, we have this concept of layers that get added to the viewer, and I can execute this code cell, which looks at what layers are inside the viewer. So I can run this print of viewer.layers, and right now we've only got one layer, this nuclei layer, which is an image layer. As I said, napari figured out that we should name this layer nuclei, because that was the name of the variable we loaded the data into, which is pretty cool. We can then go in and get that nuclei layer by referencing it with its string name, and look at some of its properties, the ones we can control from the GUI too. So if I run this, where I print the colormap, I see that the colormap is blue, and there's an object corresponding to it. I see that the contrast limits are 0.07, 0.35, the values I just edited in the GUI, and the opacity is still at one. And as I mentioned earlier, what's really nice about napari is that you have this bi-directional communication between the Jupyter notebook and the viewer. Before, we just saw how, if I change things in the viewer, they update in the notebook. Now let's go the other direction: let's change some properties in the notebook and see how they update in the viewer. I'm going to change the colormap to red, change the contrast limits to a tighter range, and reduce the opacity, just to see. I'm also going to rename the layer; I'm going to call this one division. If I run this and come back to napari in my other window, you can see that the colormap is now red, the contrast limits are now smaller, and the opacity has changed. So it really is that ability to go back and forth. And all of this is happening across all our slices, so you can look across slices. If I take a screenshot, just to capture it, you can see that it's now red too. Actually, stepping back, we could have passed these parameters as keyword arguments during the first add_image call. So let's do this again: let's add a new copy of these nuclei with those nice 0.07, 0.35 blue settings we found before, and let's also set the blending mode to additive, so that these color channels blend together. Let's see: I run the add_image command, and if I go over to the viewer, you can see now that I've got both images here and they're blended together. If I'd had translucent blending instead, I wouldn't have seen the other one. So again, these are different types of blending.
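Putting those property round-trips into one sketch; the specific values are just the ones from the walkthrough and are otherwise arbitrary:

    # Read layer properties from the notebook...
    layer = viewer.layers['nuclei']
    print(layer.colormap)         # blue, as set in the GUI
    print(layer.contrast_limits)  # e.g. [0.07, 0.35]
    print(layer.opacity)

    # ...or set them, and the viewer updates immediately.
    layer.colormap = 'red'
    layer.contrast_limits = [0.1, 0.3]
    layer.opacity = 0.8
    layer.name = 'division'

    # The same properties can be passed as keyword arguments up front.
    viewer.add_image(
        nuclei,
        colormap='blue',
        contrast_limits=[0.07, 0.35],
        blending='additive',
    )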
And again, all this is happening in three dimensions. I'm taking a screenshot just to capture it. So here, this is just two views of the same data. Let's make it maybe a little more exciting. We're going to use imread from tifffile again, and now we're going to load in more image data and look at its shape; these are going to be some membranes. Let's see: okay, the same shape as before. If we remember, the nuclei were 60 by 256 by 256, and this is the same. Now I'm going to add this image to the viewer as well, also with an additive blending mode. I'm going to set some contrast limits. I'm not going to set the name; the name will get set automatically from the variable. But I will set the colormap to green. So if I run here and come back to the viewer, you can see now that I've got the membranes, and it's all blended together, which is really quite nice. And I can take a screenshot of that too. So now we've seen how to add basic data to the viewer from the notebook. As I said, this data is really all present there in 3D, so I can scroll through now to a different slice, say at the bottom of the field of view, and if I take my screenshot now, it shows the bottom. Actually, one thing I'll mention, since I keep mentioning screenshots: if you just want to take a screenshot that saves out to disk, you can take one here, either including the viewer or not, depending on whether you just want the image. And we have a bunch of saving functionality as well, which I'll talk about. So this data, as it says here, is really a 3D volume, and we can look at different slices through that volume in different orientations. Here we can use the roll dimensions button in the viewer, so that now, effectively, I get an XZ slice. Now we've got 256 steps in the slider, and you can scroll through. I can do maybe one more; I can see that again. And if I go one more, then I'm back to where I started. So that can be quite nice as well. Unfortunately, we don't have the ability to display them simultaneously right now, but we will be working on it. I can also take a trans... (Nick, you're muted. No? Still?) Oh, okay. Am I back now? Yes. Yes. Sorry, what happened was, I took a transpose and the Zoom dialogue box popped up and muted me. So let's see if I try and transpose again. Now I transposed. Okay, that's good. I actually was worried; I was like, oh my gosh, was that a bug in napari and it didn't transpose properly? But instead I had just muted myself. So apologies. Okay, so if we go back to the notebook now, you can see me take a little screenshot here, again a little less critical now. Okay, so in addition to supporting 2D rendering, napari can also do full 3D rendering. To enable the 3D rendering mode, let me come back in here: it's actually this little wireframe button. I'm going to turn off the membranes for a moment, so here you can see just the nuclei in 3D. It's very nice; I can move around, and I can also do things like adjust the contrast limits of these as well. And we have different blending modes, so I can maybe come in here. Actually, I want this one: just different parameters of blending and 3D rendering that people might be interested in.
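As a sketch, the membranes call and programmatic equivalents of the dimension buttons look roughly like this; the contrast limit values are illustrative, and membranes is the array loaded above:

    # Second channel, blended additively with the nuclei.
    viewer.add_image(
        membranes,
        colormap='green',
        blending='additive',
        contrast_limits=[0.02, 0.6],
    )

    # The roll/transpose buttons have programmatic counterparts on viewer.dims.
    viewer.dims.order = (1, 0, 2)  # e.g. slice along a different axis

    # Toggle full 3D rendering (the little wireframe button in the GUI).
    viewer.dims.ndisplay = 3  # set back to 2 for 2D slicing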
And so with that, that concludes the first notebook. So now is maybe a good time to ask if there are any pressing questions on visualization. We learned how to visualize 3D images, look at 2D slices, and blend different color channels, and the next lesson will focus on manual annotation. So, moderators, anything? Okay. So let me go on to the second notebook. If you're using Binder, you should navigate back to the top directory and launch the second notebook. This one is going to focus on manual annotation, and we saw a little bit of this in the demo I gave right at the beginning of this presentation. Now we actually need to close this napari viewer, because it's a new notebook: this is going to create a new napari viewer, and we have to run the GUI magic again. So I'm going to run that, and then I'm also going to import napari and create an empty viewer like before. Again, that's popped up in a new window. This time we're going to load our data directly into napari using one of our built-in readers. Here I just have to pass the path to viewer.open, and I've said I want to use our builtins plugin. But maybe you have your own file type and you've written your own plugin, so you'd want to use your own custom plugin there. So let's see if I run this. That's fine, those are just some info messages, and let's see here, okay. And so we're back: I've got the nuclei loaded again, with a slider. If I take a screenshot of that, it looks the same as when I loaded it directly. What we're going to do in this example is think about annotating dividing and non-dividing cells using the points layer. We already saw an example of the points layer earlier, when I was looking at the pathology example. To get going with this, I'm going to first add the points layers from the notebook. Last time, in the pathology example, I added the points layer directly from the viewer, but here I'm going to add them from the notebook, and I'm going to name them. One points layer is going to be for dividing cells, one for non-dividing. I'm going to set the color of the dividing ones to red, and the non-dividing to blue. And I'm also going to set this n_dimensional property to true, because we're thinking about these points as living in three dimensions. If I run this and go to the viewer, you can see I've got two new layers here. I can use the up and down arrows to scroll between them. You can see, when I have this top one selected, it knows that its face color is blue; when I've got this one, it knows that the face color is red; and when I select the nuclei, I've got different controls up here for the image layer. Now there's one more thing I need to do before I can add points, which is to enter the add-points mode. I can do that by clicking in the GUI, but I can also do it programmatically. Let me take a little screenshot first, just to see that those layers got added. Okay, so now I'm going to enter add mode, and again I'm going to enter it programmatically, just reinforcing the theme that you can do it either from the GUI or from Python and the notebook.
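A sketch of those notebook-side calls; the layer names and path are from the lesson, and n_dimensional matches the napari API at the time of this recording (it has since been renamed in newer releases):

    # Open the file through napari's built-in reader plugin.
    viewer.open('data/nuclei.tif', plugin='builtins')

    # Two empty points layers, one per class, rendered as spheres in 3D.
    dividing = viewer.add_points(name='dividing', face_color='red',
                                 n_dimensional=True)
    non_dividing = viewer.add_points(name='non-dividing', face_color='blue',
                                     n_dimensional=True)

    # Enter add-points mode programmatically (same as clicking the button).
    non_dividing.mode = 'add'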
And now, if I look over in the GUI, you can see that this add mode has been activated. Okay, so now I'm going to do some clicking. I think this one is not dividing, this one's not dividing, this one, this one, this one. It's up to you what you decide to do with the ones on the edges; you can count them or not. This one's not dividing, and then I think this one is dividing. I just annotated them all in one plane now, but you can see the little spheres in Z. If I come back to the viewer and take a screenshot, it looks like this. I can also look at these things fully rendered in 3D: I can just enter the 3D rendering mode and look around. You can see that this one is dividing; I did not get the center of the cell. If I wanted to correct that, I could, but for now it's okay. And again, I can come back to the notebook here and take a screenshot. So let's say I want to get the number of cells in each class. I can just look at the length of the data property in each of my two layers, dividing and non-dividing. If I print that out right now, I can see, okay, I had... oh, I spelled this wrong, apologies... one dividing cell and 18 non-dividing cells. And if I want to get the coordinates out, I can see, okay, I've labeled everything in that 30th Z plane; I could have labeled them in different Z planes. What's cool now is that I can save a CSV file with this data too. So I'm going to save two CSV files, one for dividing, one for non-dividing, and I'm going to use our built-in plugin writers to save them. If I go to the Jupyter file browser, you can see I created these CSVs of dividing cells about a second ago, and I can load one up in Jupyter, so you can see what I just created; these are the non-dividing ones. This means that if you have a biologist collaborator who wants to load this data into Excel, they can very easily do an annotation in napari and then save the data in a way that gets into a tool like Excel very easily. Okay. So that is an example of how to use the points layer.
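Counting and exporting the annotations might look like this sketch; layer names as above, and .save dispatches to the built-in CSV writer by file extension:

    # Each points layer's data is an (N, 3) array of ZYX coordinates.
    print('dividing:', len(viewer.layers['dividing'].data))
    print('non-dividing:', len(viewer.layers['non-dividing'].data))

    # Write one CSV per class; collaborators can open these straight in Excel.
    viewer.layers['dividing'].save('dividing.csv')
    viewer.layers['non-dividing'].save('non-dividing.csv')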
Now I want to show another example: how to use the shapes layer to draw polygons. For this example, our polygons are always constrained to be 2D, although they can be placed at arbitrary Z slices. But for the sake of this example, I'm going to take a maximum intensity projection across the Z axis. I can do this using the data that's in the napari viewer already: I can just do a .max along the 0th axis, and now I have a maximum intensity projection here. I can then select and remove all the current data from the napari viewer, and just add in this maximum intensity projection. So now, if I look at the viewer, you can see the maximum intensity projection loaded in. We no longer have the slider at the bottom; this is just a single 2D plane. And so I took a screenshot here. I'm now going to add an empty shapes layer, which I can use for drawing polygons. I'm going to give it a name, nuclei outlines, and then a face color, maybe an edge color, and maybe an opacity. Before, I created an empty shapes layer in the GUI directly, but here I did it in the notebook. And so if I do a screenshot again... it didn't like that. Let's see. The computer is struggling. Why did it not like my shapes layer? Apologies. Let's see if my viewer... okay, my viewer has recovered. I hope that didn't cause too many problems for other people. I might actually just start this again rather than worry. Apologies for a little bit of a technical difficulty there. I'm going to hop down: I created my viewer again, I loaded in my data again, the full data, I'm going to take that maximum intensity projection again, and now I'm going to add the shapes. Okay, so here I'm back again. Apologies, I'm not quite sure why it didn't like that, but I'm now going to annotate some cells. You can see here that I'm clicking around with the polygon tool; I can use escape to finish drawing. And I can come around this one. Let's say I do this and I'm like, oh my god, this is really bad; I'll show you how to correct that in a moment. I can come in here: we've got a vertex selection mode, so I can rearrange things. We also have the ability to add and delete vertices, so I can add in vertices, or I can say, no, these are all terrible. So again, things you might expect from a graphics tool, but now in a scientific context where it's really easy to get the coordinates of all of these things as well. I can grab and reshape the entire shape too, and I can change it so that this one has a different color than the other one. Okay, so maybe I'll quickly draw one more; let's give it the green color too. All right, so if I look here, let's say I want to get the data from this shapes layer. I can index into the layers with the name of the layer, nuclei outlines, and I've got a length-three list of arrays, where each array has the vertices of one of the shapes: the first shape I drew, the second shape I drew, the third shape I drew, and these are the X, Y coordinates of each shape. I can save those if I want, using an SVG writer plugin that we provide. And now, let's see, that should have generated an SVG. It doesn't look like it's updated here; I think this is actually showing something I generated a while ago, but you can open that in your favorite Illustrator-like graphics program, and you should see the SVG there as well, which I think is pretty cool. Okay, so another thing that I think is very common to want to do after you've drawn polygons is to extract masks or segmentation information from them. To do that, we can use this to_labels method to convert the shapes into what we refer to as a labels layer. If I run this, I can see three labels have been found and generated, which is good, because I drew three shapes. And if I add this to the viewer here, you can see: so this was the vector representation of what I had before, the shapes I drew, like this; whereas here, this is now the pixel-wise representation as well.
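In code, that shapes round-trip is roughly the following sketch; projection stands in for the max-projected image variable, and the SVG save dispatches by extension to the napari-svg writer:

    outlines = viewer.layers['nuclei outlines']

    # A list with one (N, 2) array of vertices per polygon drawn.
    print(len(outlines.data))

    # Save the polygons as an SVG for use in vector graphics tools.
    outlines.save('outlines.svg')

    # Rasterize the polygons into an integer label image matching the projection.
    labels = outlines.to_labels(labels_shape=projection.shape)
    viewer.add_labels(labels, name='nuclei labels')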
And like we saw in the other example, in this layer type I have a paintbrush I can paint with. We have various hotkeys that you can use to quickly toggle options. There's an eraser functionality, and there's the ability to preserve any of the existing labels so that you only edit free space: if I go a little nuts here, I preserve the existing labels while editing the free space. But I can undo that, because that seems like a terrible drawing. That could be nice, though, say if I've edited this cell here and then want to edit this cell without painting into this one. You can also paint with the transparent background label to remove something. So that is where we are with the labels layer. If I now want to save out the labels, I can save them as a TIFF. So if I run this label saving, I've saved out a TIFF using a built-in writer, and I can actually just reload it. I think that file should have appeared here, the TIFF I just made, and I can reload it. Now I've got two copies: this was my original copy, and this is the copy I just loaded in. So it's very easy: if someone did some annotation, you can save that out, then load it back in. And maybe something we'd want to do with an annotation like this is iterate through all the labels we have, find the pixels that correspond to each label ID inside the data from the viewer, and add up how many pixels there are; that's the area. Maybe let's also extract how much signal there is in the original nuclei channel, and look at the ratio. With a simple loop like this, I can get how much signal was in each of my nuclei. The point here is just to illustrate that you can easily connect these sorts of annotations to analysis. And so with that, we have concluded the second lesson. We've seen how to use the points, shapes, and labels layers to produce manual annotations in napari and save those annotations in meaningful formats. The next lesson will be around a little bit of interactive analysis, but again, maybe now is a good point to pause and see if there are any pressing questions.
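The per-label measurement loop he describes is, in sketch form, with labels and projection as defined in the sketches above:

    import numpy as np

    # For each label ID, measure its area and the total nuclei signal under it.
    for label_id in np.unique(labels)[1:]:  # skip 0, the background
        mask = labels == label_id
        area = mask.sum()
        signal = projection[mask].sum()
        print(label_id, area, signal, signal / area)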
Hey, Nick, there was a question about the difference between adding an image and what the layers are. So maybe it'd be good to just review what the layers are and things like that, yeah. Thanks. Love it. Okay, so let's step back a little bit, and maybe we can even use this example. Layers can have different types, and a different type of layer really corresponds to a different type of data. I think our most basic layer is this image layer; you can see it's got a little image icon. If you have an image layer, that means the data you're looking at is really, at heart, an array of pixels, and things you might want to do with that array of pixels are adjust its contrast limits and adjust its colormap. Often in an imaging experiment, the image data is that fundamental thing, the thing that comes in first, that you get off your microscope. Other layer types often relate to more derived data. This layer type here, with the little shape icon, is shapes, and I can have multiple instances of it: I can create another shapes layer here, and another one if I want, and I can rename them. Here the data type is really lists of arrays of vertices, and it's useful for polygonal annotation: maybe I just want to draw an ellipse around every shape, or draw bounding boxes around each one. Then the next layer type was labels, where I have paintbrushes and fill buckets, and again I can have multiple copies and I can paint. We had points earlier, when I was adding in different points, and we can use them all together. And then we also support a surface layer type, if you have meshes, and a vectors layer type, say if you have a polarization experiment or something where you want to render vector fields. So that's a little review of the layer types, and I'm glad we did that review, because those are a fundamental, basic concept within napari. Nick, one more question. A common thing that comes up is performance and lazy loading. Can you just say a word or two on what napari is doing versus what someone has to do, and what the libraries Dask and zarr are that you work with? Yeah, okay, great; very important here. So generally, I think what napari itself does right now is kind of not get in the way. I know that's a funny way of putting it, but maybe let's say it's careful about requesting exactly what it needs to request. We know exactly, at each moment, what data needs to be present to create the view that you're seeing, and we can be very careful about requesting just that data. And then it really is, in some sense, the responsibility of the thing we're requesting the data from to fetch that data in the best way possible. So there is a slight separation of concerns between the visualization and viewer side of things, napari, and libraries like Dask and zarr, which I'll introduce for a moment, because I think they're important to understand in this ecosystem as well. Zarr is a very exciting, chunk-based file format that has an on-disk representation that's very easy to load into Python with an array-like syntax. What's nice about it is that, because it's chunked on disk, you can very easily access chunks independently: if you just want to see this little piece over here, you don't have to get all of it, you can just get that chunk. Dask is a library that pairs very nicely with both zarr and napari and handles lazy loading and distributed computation. It's capable of setting up a compute graph that understands exactly what data you're trying to request and then makes sure that, even if you have computations in that graph, you're just grabbing what you need. So that's a little bit of an intro to those two tools, and I think we can maybe put more links to them in the questions afterwards.
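A minimal sketch of that lazy-loading pattern, assuming a chunked Zarr array at a hypothetical path:

    import dask.array as da

    # Lazily open the chunked array; nothing is read from disk yet.
    stack = da.from_zarr('embryo.zarr')

    # Pass contrast limits explicitly so napari doesn't force a full read
    # to compute them; chunks are then fetched only as slices are viewed.
    viewer.add_image(stack, contrast_limits=[0, 2000])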
So I want to do a little bit of this interactive analysis example, and if we don't get through the whole notebook, that's just fine too. But let's get started. I again have to do the %gui qt, and I'm going to import napari and create an empty viewer as before. This time I'm going to use the tifffile imread reader again, and I'm going to take the maximum projection right away, because I just want to work with 2D data for this example. So here I have my data; its shape is 256 by 256. And I'm going to add that data to the viewer. I'm going to stop taking the screenshots now and just show the data in the viewer. Here we can see again that we've got our one image layer, and we might want to do some analysis on it. So I'm going to load in some filters from scikit-image. scikit-image is a very popular image processing library in Python, and one of the maintainers of scikit-image is also one of the founding members of napari, Juan Nunez-Iglesias, so we're really making sure that napari and scikit-image work together right from the get-go. I'm going to import some of these filters, and now I'm just going to go through and do five calls to add_image, applying a different filter to this image in each one, and see what it looks like. And so I've done that, and now you can see we've got many, many layers, all on top of each other, and I can turn their visibility on and off. This can be a really great way to explore what these different filters do. So let's look at them. Okay, so horizontal: that sort of looks like this. Vertical: okay, that sort of looks like that. So clearly there's a difference in orientation between these two things, which is quite interesting. This one seems to be pulling out contrast where the intensity transitions in the vertical direction, so horizontal edges, whereas this one is maybe looking a little more at vertical edges. So that's interesting. This filter has clearly done something else: wow, it's really popped out the edges here very, very clearly. This one, okay, looks relatively similar; well, it looks very, very much the same. So this can just be a fun way to explore. I can do things like adjust colormaps; I had another one in there, let me come in here. I can make this blend as well, so I can blend it into the original image. There's really a whole host of visualization things you might want to do here as you explore an analysis; again, this is really about exploration. So, if I go back to the notebook: all right, let me remove all those extra layers. I can just go through the list and remove them all, and now I'm back to the beginning, where I just have this one layer.
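The filter-exploration loop is roughly this sketch; the exact filters in the lesson may differ, these five are illustrative, and projection is the 2D image variable assumed from above:

    from skimage import filters

    # One layer per filter; layer names come from the name keyword.
    added = []
    for name, func in [
        ('prewitt_h', filters.prewitt_h),
        ('prewitt_v', filters.prewitt_v),
        ('sobel', filters.sobel),
        ('roberts', filters.roberts),
        ('scharr', filters.scharr),
    ]:
        added.append(viewer.add_image(func(projection), name=name, visible=False))

    # Afterwards, clear the experiments out again.
    for layer in added:
        viewer.layers.remove(layer)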
Okay, so now I'm going to explore a little bit of interactive segmentation. I'm going to import some processing utilities from scikit-image: some morphology things, some feature things, some measuring, some segmentation. And I'm going to do a filtering of the nuclei to get a foreground/background separation, and then add that to the viewer. If I run this, I can now see, all right, I did a foreground/background separation, and this is now a labels layer; it's a very simple labeling, just background and foreground. But we can see that there are some funny little patches around the edges, and there are some holes in here. So what I can do is use some scikit-image functions to remove some of these small holes and remove some of the small objects. I can take that data, and I'm actually just going to update the data in the foreground layer in place, because I only want to keep around this nicely processed data. If I run that, now you can see that those little small objects around the edges are gone, and the holes have been filled. This looks like quite a nice mask, where I've got background and then nuclei in the foreground. Okay, but now I want to segment it, so that I get a different region for each nucleus. The method I'm going to use here is called a marker-controlled watershed, and maybe I should link to some nice scikit-image tutorials on this approach. Suffice to say, what I'm going to do is try to end up with a marker, a point, located in each of these nuclei, and then use that point to seed an algorithm that can find the boundaries of the nuclei. To find those points, I'm going to take something called a distance transform, using the distance transform function from scipy.ndimage. If I load that into the viewer, you can see: what has the distance transform done? Basically, for everything in the foreground, it's asked how far away that pixel is from the boundary. So the dark pixels are either outside, in the background area, or close to a boundary; as we move in towards the center here, it gets brighter, because these pixels are further away. If you're really far away from the boundary, you're very bright. And what's kind of interesting here is that, because there's a bit of a choke point here, these pixels are a little closer to the boundaries than these over here. So already you can begin to see: oh, this is a hotspot, this is a hotspot, this is a hotspot. That's maybe going to help us find the centers of these cells. And if I do a little bit of smoothing, now you can see, okay, I've got really much clearer hotspots for the cells. So if I go in and find the peaks of that smoothed distance transform, those peaks correspond to points, so I can add them as a points layer with a red color. And I can see that, okay, it's done a good job here; it's found these points really well. That point looks a little weird, and this point looks a little weird. So what am I going to do? Okay, I can come in here: I'm going to select that one and delete it, I'm going to grab this one and just move it over here, maybe I'm going to add a point here, maybe I'll add a point there, and there, and there, and see what happens. Oh, look, there's a spurious point up there as well, so let me select that one and delete it too. And so now I've been able to clean up the points.
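In sketch form, with foreground standing in for the cleaned-up mask from above and the numeric parameters purely illustrative:

    from scipy import ndimage as ndi
    from skimage import feature, filters

    # Distance from each foreground pixel to the nearest background pixel.
    distance = ndi.distance_transform_edt(foreground)
    viewer.add_image(distance, name='distance')

    # Smooth so each nucleus contributes one clear peak.
    smoothed = filters.gaussian(distance, sigma=5)

    # Local maxima of the smoothed distance map become candidate markers.
    peaks = feature.peak_local_max(smoothed, min_distance=10)
    viewer.add_points(peaks, name='peaks', face_color='red', size=5)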
And so here I'm going to run this marker-controlled watershed segmentation, using the processed foreground data from before, the distance transform from before, and the markers that came from those points. And if I look at how it's done... oh, it's done a pretty good job. Maybe a little bit of funkiness going on there, maybe something that could be edited there, but I can take the paintbrush now and manually correct that if I want. So there you have a workflow where you did an automated segmentation on thousands of cells, and then did a quick manual touch-up as well. So that's an example. I can actually also...

Nick, sorry, can I stop you? Just to clarify, because we have some questions about layers. To summarize: what are layers? Are layers everything that comes out of some operation we do with a Python library? So they can be a segmentation, a channel, another dimension of the dataset, right?

Yeah, so the layers themselves correspond to these different data types: the points, the shapes, the labels, and so on. If you have a multi-dimensional dataset, say a four- or five-dimensional dataset, we would refer to that as one layer. Maybe you can think of a layer as data plus the key properties associated with that data. Here in this example I've got one, two, three, four, five layers, and they accumulate in this list. And it's not the case that every operation generates a layer; a layer appears only when I choose to make one. In this particular case you can think of them as critical steps in the analysis workflow, but I could have chosen not to save out the distance transform as an individual layer; I only did because I wanted to look at it.

That helps.

And I can save out this segmentation like before. Again, don't worry about those messages; that's all fine. Okay, so we have maybe five to ten minutes left, and there are two quick things I wanted to mention before wrapping up. These are definitely the most advanced parts of the tutorial, so no worries if this goes a little quickly or if something doesn't quite work. You might have to install magicgui; it was a bit of a question whether it was present in our requirements, depending on when you downloaded the package. What I want to show here is a tiny example of how you can extend the GUI, extend the viewer, with a custom GUI element. We're going to use a library called magicgui that's maintained by the Napari team; it has its own documentation page. It can be used to make simple GUIs really minimally from functions that you specify, so you don't actually have to write the GUI yourself, which is quite nice. So I'm going to import it. If you are unable to import magicgui right now, you might need to pip install magicgui, which you can do directly from the notebook if you add an exclamation mark before the pip install, but you will also need to restart your notebook. Again, no worries if you don't follow along with this bit, because we're almost at time. The concept here is that I want to do an interactive thresholding, where I have a slider that can go anywhere from zero to 100, and the slider value is going to compute a percentile.
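A sketch of where this is heading, assuming a recent magicgui, where the decorated function is itself a widget that napari can dock directly. The widget options and the "threshold" layer name are illustrative, not the notebook's exact code:

```python
import numpy as np
from magicgui import magicgui

# A percentile-based threshold exposed as a slider from 0 to 100.
# auto_call=True re-runs the function every time the slider moves.
@magicgui(auto_call=True,
          percentile={"widget_type": "Slider", "min": 0, "max": 100})
def threshold(percentile: int = 50) -> None:
    cutoff = np.percentile(image, percentile)
    viewer.layers["threshold"].data = (image > cutoff).astype(int)

# Create the layer the widget updates, then dock the widget.
viewer.add_labels((image > np.percentile(image, 50)).astype(int),
                  name="threshold")
viewer.window.add_dock_widget(threshold)
```

Because of auto_call, moving the slider immediately recomputes and redraws the threshold, which is what makes the parameter exploration feel interactive.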
That percentile value is then used as the cutoff for the thresholding. So if I run this, I decorate my little threshold function with the magicgui call, and I add the GUI element that comes out to the viewer, to the window actually, as a dock widget. And what does that look like? If I come here, you can see I've got this little thing down here; I can pick it up and move it around, even up here. So I've got my own little dock widget, and if I run it, you can see I've got this slider, and as I move the slider it changes the threshold. This could have been a much more complex piece of functionality, a much more complex analysis routine that takes in a parameter. But this is nice, because now I can explore the parameter interactively: okay, no, this is not so good; this maybe looks kind of good. So magicgui makes this sort of extension very easy.

The last bit of extension I want to show is what it means to add a custom key binding. Again using a decorator, we can bind a custom key on the viewer (a rough code sketch of these bindings follows below). In this case I'm going to bind Shift-P to do that filling of holes and removal of small objects, using the data from the threshold. If I add this and come to the viewer, nothing has actually changed; it looks the same. But if I press Shift-P, I get that fill. Okay, so that didn't go so well, so maybe I lower the threshold and press Shift-P again. Okay, that looks nicer. And again, I can use that keyboard shortcut to enhance my interactive exploration of the data. The final thing I'll show is that a keyboard shortcut can really be quite complex: I can bind that whole segmentation routine we saw before to Shift-S. So if I'm in here and press Shift-S, okay, it ran, and it's using the points from before. If I want to do it again, I can press Shift-P, then Shift-S. Maybe that's good, so I come in here. Okay, so those are just some really quick examples of what it's like to add features to the viewer.

I'm going to go back to my slides, close entirely, and then we can take a few questions overall. So after today's lessons, some possible next steps are to explore some of our more advanced tutorials. Let me go to the Napari website quickly. We have a homepage, napari.org. We're in the process of improving some of this, but we've got tutorials that cover the basic methods around the different layers; there were a lot of questions about layers today, so that's really great, and there's more information about the different layers there. There are also some more advanced application tutorials, including a really nice one that Kevin made about annotating points in videos with Napari. If you want to do video annotation, maybe annotation that you then feed to a tool like DeepLabCut, this could be a great tutorial for you. Tali mentioned some people asking about Dask; there's a really great tutorial that Tali wrote about using Dask to process and view large data sets. So that could be a great thing for people to look at next.
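And here is the promised sketch of the two key bindings, continuing from the objects defined in the earlier sketches. The key choices are from the demo; the layer names and size parameters are assumptions:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import morphology, segmentation

# Shift-P: clean up the current threshold layer by filling small
# holes and dropping small objects.
@viewer.bind_key("Shift-P")
def fill_holes(viewer):
    data = viewer.layers["threshold"].data.astype(bool)
    data = morphology.remove_small_holes(data, area_threshold=64)
    data = morphology.remove_small_objects(data, min_size=64)
    viewer.layers["threshold"].data = data.astype(int)

# Shift-S: re-run the whole marker-controlled watershed from the
# current points layer (napari uniquifies the layer name if it
# already exists).
@viewer.bind_key("Shift-S")
def segment(viewer):
    seeds = np.round(viewer.layers["peaks"].data).astype(int)
    markers = np.zeros(image.shape, dtype=int)
    markers[tuple(seeds.T)] = np.arange(1, len(seeds) + 1)
    mask = viewer.layers["threshold"].data.astype(bool)
    distance = ndi.distance_transform_edt(mask)
    labels = segmentation.watershed(-distance, markers, mask=mask)
    viewer.add_labels(labels, name="segmentation")
```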
In general, we also have our documentation page, which has a more comprehensive API reference. So if you want to look at the parameters of the image layer, you can go in there and find them. There are also some nice developer resources if you're interested in contributing: a code of conduct, contributing guides, and a little bit about our mission and values, which gives a sense of what we're doing and where we're going with this tool, plus our roadmap and what we're working on right now. You also might be interested in learning more about our plugins. There's material on how to create a Napari plugin, and we provide a cookiecutter repository to get you going if you want to make an IO plugin, say for a custom file type. And there's a bit more about advanced concepts like events and threading, for instance if you're working in an image acquisition context and are interested in our multi-threading API. So those are the more advanced features.

So with that: I mentioned the plugin interface and the IO plugin, and there's more info on napari.org. We're always monitoring the image.sc forum and the napari tag there, so please reach out to us. We're on GitHub, where you can raise an issue or file a bug report, and we're on Twitter; we'd love to see a tweet with a screenshot of any cool examples you're looking at, and we can retweet cool screenshots of your data. So I really want to say thank you now, on behalf of the whole Napari team. In particular I want to thank the moderators, Kevin and Tali, who have been so helpful today, and thanks again to the organizers. So I think maybe, Rocco, there are some questions now?

Yes, there are some technical questions, and as you remember, we will answer them in the forum. One that users are pushing on is something like: is Napari the ImageJ of Python? Where does Napari sit compared to other tools like ImageJ?

Yeah. So I think the concept of "the ImageJ for Python" is a reasonable way to look at it; we do want to be foundational in that sort of way. ImageJ has been absolutely incredible for the Java community, and Python has lacked a viewer like that; it's been harder to get an interactive viewer there. And particularly with the concept of plugins, we're definitely inspired by ImageJ as well. The other thing is that what's so great about ImageJ is how easy it is to get going with as a beginner, and there's a lot there that we aspire to. It's actually possible to use ImageJ and Napari together as well: there are cross-language bridges that let you go back and forth between them. We talk quite a bit with that community, and there's a draft tutorial on the tutorials website that explains how to use ImageJ and Napari together, which I think is very exciting too.

Okay, thanks a lot. I'll ask Kevin and Tali if they want to comment further. There was one question that came up a lot about 3D: can labels and shapes be in 3D? Can you annotate in 3D? Can you comment on how you would extend what you showed here to 3D?

Yeah, absolutely. So labels can be visualized in 3D, and shapes can also be looked at in 3D, but all our shapes right now are themselves 2D, if that makes sense.
They do have a concept of depth, though; I probably won't load up a quick example. Right now we don't support any interactivity in 3D, so you can't paint into the 3D view. While it's rendering in 3D you can't paint, but you can paint into the successive 2D slices, and you can actually extend your brush to have a larger volume than just the particular slice you're painting into. Right now, because we don't support ortho views, 3D painting means you have to go slice by slice, and similarly for 3D polygons. But if you do go slice by slice, it will work, and we want to make that easier and more user-friendly for people.

Okay, Kevin, do you have any other comments?

No, I think that's good. And I just want to say thank you.

Great. Okay, thank you all. Thank you for participating, and please fill in the survey and give us feedback about which topics you would like to see. And remember, the next webinar will be about a single-molecule localization microscopy analysis tool, so please register for that as well. And I thank the speaker again, and Julian Colombelli, who is helping a lot in the background. So thank you. Bye.