Oh, hello, everyone. I hope you can all hear me. Perfect. So welcome to this course on interactive bioimage analysis with Python and Jupyter, organized by NEUBIAS. I'm Guillaume Witz. I'm actually from Bern, but it's also in Switzerland, so it's not too far. I'm working at the Microscopy Imaging Center and the Science IT Support unit at the University of Bern. First, thanks to the organizers for setting up all these really interesting courses. I followed a few of the videos, and I think it's going to be a great resource for current and future bioimage analysts, so I hope this course will be up to that level. I have been using Python and Jupyter for several years now. I switched from MATLAB to Python, mainly for a research project combining microscopy and microbiology. I did all my research with these tools and published papers using them. Currently I still use them in my position as bioimage analyst at the University of Bern, where I support the local research community and write software for them, so I use these tools for medium to large scale projects. We also use Jupyter a lot in our Science IT Support unit to give courses: we give regular courses on Python for data science and image processing, we use Jupyter systematically, and we have had only good experiences with it. Of course, these tools are not appropriate for everything, and there is also a question of taste, so I hope I can show you how to use them so that you can decide for yourself where they are a good fit. They are good tools for both beginners and advanced users. As I was saying, we use them with real beginners in courses, and they give a very close feeling to the code; you will see that you have very good control over the code. We see now in the poll that many people have already used Python. That's a good thing. And it's also used by more advanced programmers.
So if you've written entire analysis pipelines, it's a very popular choice too, especially among people doing data science. I actually come mainly from there, because it's a tool that gives you a way to really look at your data. So it's used a lot in data-intensive domains, typically imaging as well. I have to say that I'm not a developer of these tools; I'm a heavy user of them. So this presentation is going to be a bit different from some of the previous ones, where you had the actual developers of the software presenting. This is more a user perspective that I will give you today. So what is the goal? There is what I call here the Python activation barrier. You usually start somewhere here: you see maybe a very cool Python tool that you would like to use, and you try to see how to install those things. The end goal would be to reach the point where you happily use Python ever after. But very often what happens is that people get stuck somewhere in the middle: they are really busy with Python, but not doing what they actually want. They are busy with things like installations, wondering how to install Jupyter, how to make sure they have the right Python version, how to install a package. A lot of people then give up at some point, because of this kind of activation barrier, which I see quite often. So the goal of this course is really to lower this barrier so that you can explore these tools very easily. You will see that we set up the entire course so that you can go through all the material on your own without having to install anything on your own computer. You will be able to work remotely and use these tools without installing anything. A few words about the course content: this is not a course on the theory of image processing.
It's really designed for people who already have knowledge of image processing, but not of Python, and who want to know how they could do image processing in Python. It's also not an exhaustive course. For the topics I cover in the material that you will explore yourself, I selected the ones that I use or that I think are of general interest, but there are many, many others that I don't talk about. You have to imagine this course a bit like a tasting menu in a fancy restaurant: you get twelve dishes, you discover something you like of which you get only one bite, and then you decide you are going to cook it for yourself at home. Then you will really have to explore it by yourself. So it's by no means complete: there are a lot of topics, and they are covered quite superficially, so don't expect to learn everything about image processing in Python during this course. We also spend quite a lot of time on the technical aspects of using Jupyter notebooks, because it's an interface that is quite unusual compared to the graphical interfaces of regular applications, and it works slightly differently. I want to make sure that everybody understands how it works and why it works the way it does. So we'll spend a lot of time on this and a bit of time on the actual image processing; the actual image processing part you will do mostly with the self-learning material. The course is organized like this: in this first webinar, I will introduce the Jupyter notebooks and the Python tools for bioimage analysis that we'll use, and I will introduce you to the self-learning material, telling you where you can find it and how you can run it. And of course, as explained, there is the Q&A, and I'm happy to have three people doing the moderation. Two of them are colleagues, Mikhail and Cedric, who also work at the University of Bern.
They both have backgrounds more on the engineering and physics side, and both now work on image processing in biology for live microscopy. And there is Dominik Kutra, who you have already seen in a previous talk, I think, on ilastik. He is one of the core developers of ilastik, which is written in Python, so he is very well placed to answer your questions. Then you will have one week to go through the self-learning material; again, no download, no installation required. In the second webinar, we will answer the questions that came up most often, and I will also go through some of the more advanced notebooks. The content will depend a bit on how many questions you have and how far people get. We'll do a second poll at the end of this seminar to know whether people plan to really go through the content. The course material is available on GitHub. I will talk about GitHub and what it is later, but you can use this link and you will find all the notebooks there. This presentation is also available, so you can look at it, and you will have all the links that I'm going to click through in this presentation; you have access to it here as well. So what is the Python ecosystem for bioimage analysis? Of course, the first part is Python itself. Python is a language: it gives you a syntax. And it's also a piece of software that executes code: if you write Python code, you use the Python software to execute the code you wrote. It's used in a lot of very different domains. Python already comes with a lot of functions and packages that you can use out of the box, and these are used in many other domains. We are not going to rely on very advanced features of Python, so if you have no idea about Python, you can just look at a very basic course. There are links in the course material to some online courses that you can follow to get an idea. There is also one notebook about the essential Python that you should know to be able to go through the course.
So we will mainly rely on additional packages. If you're familiar with Fiji, it's a bit like having plugins in Fiji: additional components. The main components we are going to use are NumPy, to handle images as matrices; scikit-image for the image processing functions (scikit-image is a very important part of this course and covers 90% of all the functions we are going to use); and matplotlib to plot the images. Matplotlib is a very extensive plotting library, and we are going to use only a very tiny part of it, so we will not dive too much into the details. Then there are additional packages for specific applications: visualization, image import, tracking, segmentation. They are demonstrated in some of the modules. Then we need an interface for all this software, and this is going to be Jupyter; we're going to spend quite a lot of time understanding what Jupyter is. And we need an infrastructure to run and install all of this. We are going to see two services, Binder and Google Colab, which provide a way to run notebooks. I'm going to briefly mention how to install packages in Python, but it's going to be very brief, because the point is that you can do this course without installing anything; I only give a few hints on how you should proceed if you really want to install this on your machine in the future. And I will also mention GitHub, which hosts all the data of this course. So the first part is Jupyter notebooks. I will present the Jupyter notebooks quite extensively, then we'll probably take a break for a few questions, and then I will go back to the bioimage processing part. But again, the bioimage processing part is going to be heavier in the next webinar. So how does classic software work compared to notebooks?
In classic software (this is of course a simplification, apologies to all professional software developers, but it serves to show the difference), you write some code in a file. Then you open a terminal window and execute your code: you type something like python followed by the name of your script. This runs the entire routine you have and results in a folder that contains your results. Your results can be different things: images, segmentations, actual plots that you generate, or outputs like a CSV file. Sometimes you re-import the CSV file into another piece of software, like you would do, for example, in Fiji. This is the classical way, and you see that there is a kind of distance between the software that you write and the end result. If you want to improve a pipeline, it's going to be a bit tedious with this approach. This is why notebooks are so popular in data science and also in image processing: they mix all these things together. Here I made a short video of how this works. You see a notebook: the notebook is this kind of white page on a gray background. You see that it runs in the browser; we're going to see a bit more about this in a moment. And you see that we mix different components: you have code, in these gray cells; you have text, where you can write comments about your code; and then you have rich output, so you can have images, plots, and graphs. So you can really have your entire analysis pipeline in one place. And you see here that I'm going to change one of the variables. This is just a very basic thresholding of an image: you can change this parameter, re-execute your code, see the results of your thresholding live, and remake a new histogram. This is the way you can use Jupyter to really explore your data.
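The demo in the video can be sketched in plain code. Here is a hedged, self-contained stand-in: the image is synthetic (the course demo uses a real microscopy image), and I use a plain NumPy comparison rather than whichever scikit-image thresholding function the demo actually calls. In a notebook you would follow this with plt.imshow(mask) and a histogram plot.

```python
import numpy as np

# Synthetic stand-in for the demo image: dark background plus a bright square
rng = np.random.default_rng(0)
image = rng.normal(50, 10, size=(128, 128))
image[40:90, 40:90] += 100  # bright region, mean intensity ~150

# The adjustable parameter from the demo: edit this value and re-run the cell
threshold = 100

mask = image > threshold                    # boolean segmentation
hist, edges = np.histogram(image, bins=50)  # data for the histogram plot

print(f"{mask.sum()} of {image.size} pixels above {threshold}")
```

Changing `threshold` and re-executing the cell immediately updates `mask` and the histogram, which is exactly the explore-and-tune loop the video shows.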
So it's really very nice for exploring data dynamically. What is a Jupyter notebook? In the end, it's just a text file. You can open any of these notebooks in your text editor, and you will see the content of the cells plus some formatting, so that you know this is one cell, this is a second cell, et cetera. What Jupyter is doing is just rendering the content of this notebook in your browser. There are different ways to do this rendering; Jupyter is one, and there are other solutions, but today we focus on Jupyter. But remember that it's just a text file: if you have a notebook, you can essentially just send it by email to someone else, and that person will be able to open it via Jupyter. As I was saying, there are different types of cells, and the content is split into these cells, each of which is executed separately. You can have code, in these grayed-out cells; you can have text, formatted text; and you can have rich output. So you can have different types of content. Now I will switch for a moment to an actual live demo to illustrate a bit how these notebooks work. So this is a notebook as it would appear in your browser. This one is running on my own computer; we will run these things remotely, and you will see that it looks exactly the same, whether it's running on your computer or remotely. In this cell here, I can start typing code. I will make it a bit bigger, maybe, so that everybody can see it well. I can write variables. So I defined a few variables and then combined them, and you see the output. Whenever you have just one variable at the end of a cell and execute it, you see the content of that variable. You see that the variables you define are defined for the whole notebook, not just per cell: whatever you write in a cell is shared across the notebook.
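Because a notebook really is just text, you can even build one with nothing but the standard library. This is a hedged sketch of the JSON structure, showing only the minimal fields of the version-4 format; a real notebook saved by Jupyter contains a bit more metadata.

```python
import json

# A notebook file is JSON: a list of cells plus some metadata
notebook = {
    "nbformat": 4,
    "nbformat_minor": 4,
    "metadata": {},
    "cells": [
        {"cell_type": "markdown", "metadata": {},
         "source": ["# My analysis\n", "Some explanatory text."]},
        {"cell_type": "code", "metadata": {},
         "execution_count": None, "outputs": [],
         "source": ["a = 2\n", "b = 3\n", "a + b"]},
    ],
}

text = json.dumps(notebook, indent=1)
# Writing `text` to a file named e.g. minimal.ipynb gives a file that
# Jupyter can open; here we just check that it round-trips as JSON
print(json.loads(text)["cells"][0]["cell_type"])  # markdown
```

This is also why emailing a notebook or putting it under version control works so naturally: there is nothing binary about it.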
Be aware that execution depends on the order in which you run the cells, not on their top-down position. So if I go back up here to A and give it another value, you see that this changes the value of C. I really encourage you to keep a top-down approach, so that your code runs from the first cell to the last cell, and to avoid these kinds of loops, because they make it really difficult to understand what the code is doing if you have to jump around your notebook. So really try to avoid doing that. You saw that to write in these cells, you just click in a cell and then you can edit it; you can do that in any cell. Whenever you execute the last cell, a new cell is automatically created where you can write. If you want to add cells in between two cells, you can use the menu: Insert, then insert cell above or below. But you're going to do that a lot, so you want to use some shortcuts that exist in Jupyter. If I am currently in a cell, there is a shortcut: if I type A, for above, it will add a cell above the current cell. But of course, if I type it here, it just types inside my cell; I first have to get out of the cell. To do that, you press Escape. If I press Escape, you see that the cell turns blue. It's green if I'm editing; with Escape, it turns blue, which means that I can do operations on the whole cell. If I now type A, this creates a cell above. You can do the same below: type B, and now I have a cell below. So, A and B. There are four or five of these shortcuts that you should remember. If you want to delete a cell, you press D twice quickly. So if I want to delete this cell, I press Escape and then D twice, and this suppresses the cell. You can do that with all cells. And what I didn't mention, which is quite important: if you want to execute the content of a cell, you can click here on Run, but you're going to do that a lot too.
So to execute this cell, you can just press Shift-Enter. If I press Shift-Enter, this executes the cell. You also have a menu bar here, like in other applications; mostly you can explore it by yourselves. You have a File menu, et cetera, and there are a few menus specific to notebooks, especially the Cell tab here. There you can, for example, run all cells: this will run the notebook from top to bottom for you, which is quite practical. You can also run all the cells above or below your current position, et cetera. Finally, you saw that we had some formatted text. If I just write text, I get an error. So you see that you also get error messages in Jupyter: it doesn't know what to do with this text, because it doesn't correspond to any Python code. The problem is that you need to tell Jupyter that this is not code, this should be text. The way you do that is to go to this tab here and select Markdown. Markdown is a language for basic formatting of text; this is what Jupyter uses for text cells. We're going to use only these two cell types, code and Markdown, and not the other two; you can switch back and forth between them. Now if I execute this cell with Shift-Enter, you see that this creates formatted text. You can Google Markdown for the details; I'll just show you a few examples. If you want to write a title, you use a hash; if you want a subtitle, you use two hashes, et cetera. And you can format text: you can use stars to create italics, and double stars to create bold. You can create lists and tables. There are lots of possibilities, but you see it's a very simple language that allows you to do 90% of what you have to do in terms of text formatting for this kind of work. It's a very nice language that you can use in other places too, for example on GitHub, and we'll see that later. Now I go back to the presentation. You have all the information here in the slides.
Often there is information in the slides that I don't show directly, because I'm showing it live, but if you go through the slides later, you should have all the information available. When you open Jupyter, the first thing you meet is not a notebook but a sort of file browser, which looks like this. You have files here; when you actually look at the content of the course, there will be many more files. You have folders, and you can just browse through them, click here to go one level up, and open any of the notebooks by clicking on them. You can select a notebook and do some operations on it; we'll see that later. This is your folder structure, and you can also move through it by clicking here, for example to go one level up. If I select a notebook which is not active, I get options to duplicate, rename, et cetera, or to trash the notebook. This is all pretty standard. There are just two more options. You can upload data; this will be more relevant in a moment, when we see that all of this can run online. And you can create new notebooks: here you can create a new Python 3 notebook, for example. You get several options; you can also create a text file or a folder, or open a terminal, but we won't look at the terminal today. So this is just a basic file browser; it reproduces the content of your regular file browser, integrated inside this notebook system. Finally, you have multiple tabs here; the only important one today is the one where you can see which notebooks are running. Okay, and this brings us to the next point, which is: how do these notebooks actually run? The notebook is displayed in the browser, as you have seen, but the browser is not doing any calculation, except rendering your content. That rendering can involve somewhat heavy calculations if you do 3D rendering, but if you do Gaussian filtering of an image, the filtering is not done by the browser, right?
It's done by what is called a kernel, and we use a Python kernel. You can use Jupyter with other languages; we just focus on Python today. What this means is that each browser session has a Python instance attached to it, and this instance does the calculations for you; it's just named a kernel in this context. The browser sends computations to your kernel, and the kernel does the computation. If you want to plot an image, for example, the browser tells the kernel: I need that image, can you send it to me? Then it can display the image, but the whole calculation is done on the kernel side. The interesting thing is that, thanks to this, it really doesn't matter where your kernel is running. You can run the kernel in different places: on your own laptop, in the cloud, or on a server. If your university has a cluster, for example, it really doesn't matter for you; the only thing you will see is the interface in the browser, and it will be identical wherever the kernel is running. I think this is a very attractive feature of Jupyter notebooks: if you have access to important computational resources, typically a cluster, and you can ask your IT people to install Jupyter on those resources, then you don't have to care about the details, but you have access to high-performance computing resources from an interface you are familiar with. You don't have to learn again how to access the resources. I think this is really an important feature. As I was saying, the kernel is attached to your notebook. We see that we have a running notebook here; it's shown green, which means that it's active. Active means that all the variables I defined here are in memory, and I can access them all the time. Whenever my kernel shuts down, either because I shut it down or because it shuts down by itself, those variables are lost.
The content of my notebook is not lost; that is there, that is just text. But the variables are not defined anymore. The kernel can be restarted in different ways. You can say Interrupt or Restart here. Interrupt will just interrupt an ongoing calculation; it should be used, for example, if you have a bug, like an infinite loop, and the calculation is not stopping. You can try to interrupt, but it doesn't work all the time; sometimes you have to restart the whole kernel. So you have these options, and you can just click on them. I will show you another option: you can also stop the kernel directly here. If you go here and say Shut Down, the notebook turns gray and the variables are lost. Now if I go back here and ask what A is... it's thinking for some reason; I'll just reload this. Okay, done. It gives me an error: it tells me that A is not defined. I shut down the kernel, so all the variables which were in memory are gone, but the content of my notebook is still there. This is quite an important aspect. Note that you can have multiple notebooks running, but each of them is going to keep variables in memory. If you work with images, that's going to be a lot of memory, so you should be careful not to have 20 notebooks running at the same time, because at some point the system will crash. It's a good policy to stop notebooks from time to time. One more point: let me define D here and save my notebook. You can save it with the usual controls, and you can also change its name here if you want. I don't have to keep this tab open; notebooks open in these tabs, and I can close a tab while my notebook stays active. If I reopen it and ask what D is, D is still defined: these things remain in memory. To shut it down, I really have to say Restart, or shut it down from here. So this is how these kernels work, and they work the same way everywhere.
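This statefulness is also why out-of-order execution bites. The hazard can be mimicked in a plain script: re-running an earlier cell after later ones is, from the kernel's point of view, equivalent to the reassignments below (the cell labels are just comments, not real notebook structure).

```python
# Cell 1
a = 2
# Cell 2
b = 3
# Cell 3
c = a + b
print(c)  # 5

# Going back up, editing Cell 1 to `a = 10`, and re-running only
# Cell 1 and then Cell 3 leaves the kernel in this state:
a = 10
c = a + b
print(c)  # 13, even though the notebook still displays its cells top-down
```

Restarting the kernel and running all cells top to bottom resets this hidden state, which is why that practice is recommended.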
Yeah, so it's a good practice to periodically restart your kernel and run the whole notebook, because then you will avoid the typical issue of having a variable that is not defined in the right, top-down order. The other thing that might happen is that you define a variable and then at some point decide you don't need it anymore, so you delete that line, and it seems to still work. It still works because you defined the variable previously, but as soon as you stop your kernel, it doesn't work anymore, because the variable is not defined. This happens quite a lot when you are developing code, because you are changing names of variables. So it's a good policy to restart the kernel sometimes and just run everything, to be sure that you didn't lose any variable while cleaning up the code. Okay, so far this was abstract; you couldn't do it yourself. Now I'll show you how you can access these notebooks yourselves. The whole content of the course, as I was saying, is available on GitHub. GitHub is a repository for code, based on Git. If you are familiar with data or image repositories, for example Zenodo or Figshare, it's similar, but for code. It's really designed for code: all programmers put code there, including big companies, and almost all the packages that I'm going to talk about today are on GitHub. You can have many versions of code, you can follow the history of code, and there are many, many tools for developers to do lots of operations that I will not describe here. We use it more like a repository, a place to store files. You can go to this link, which will open the repository in your browser, and visit it. There is a description of how the repository should be used, so when you want to run this, you can read a bit about it here. You see that there are different files here.
There are also folders, so you can really browse through the content. A code file, a .py file, will be shown like this, with some syntax highlighting. And a notebook can be rendered: usually GitHub renders notebooks, though sometimes it fails. This one works, and you see that it renders pretty much the same way as in the regular active browser we had before. You can read all of these documents like a book, essentially. All the notebooks I created for this course have lots of comments: everything that is done is commented, so in principle you should be able to follow the whole course. You see, however, that this is not interactive: I cannot click in cells, I cannot execute cells. This is a limitation, and we are going to see tools to go beyond it afterwards. If the notebooks don't render properly, which happens sometimes on GitHub, you can use another service called nbviewer. You just copy the address of the repository and go to this service, nbviewer; everything is explained in the slides and in the repository as well. Here you can just copy-paste the address of this GitHub repository, and this will give you a nicely rendered view of the content. For example, I made an index of all the content of this course, and you can browse through it: you can click on these links, and it opens the notebooks, nicely rendered. Again, this is still not interactive; it's just HTML. But it's nice, for example, if you have your own pipeline and you want to publish it or share it with people so they can see what you did: you can really use these notebooks as documents to share. This is another really nice feature of notebooks, I think. So this was about sharing notebooks and the viewer; you have all the links, and you can even create these links yourself if you want to send them to people. But now we want to make things interactive.
So how are we going to do that? Remember that I was saying you have these two independent things: your notebook is running in the browser, but the calculation is done by a kernel, which can be somewhere else. There are services that allow you to exploit this, and we're going to see two of them. One is Binder. It's an open-source project supported by a few foundations, which pay for the computational resources, and it's also supported by universities. They provide computational resources so that you can run things interactively without logging in and without paying anything; it's really free. There are of course limitations in terms of computational resources and data storage, but it's really a great service if you just want to explore the tools. The other one is Google Colab. Google Colab is like a clone of Jupyter developed by Google; you can also run notebooks there, on Google's infrastructure, and we'll see this a bit later. So what is this Binder service? It's a service that you access through a webpage, which explains what it does: it turns a GitHub repository, like the one we have here, into a collection of interactive notebooks. Interactive meaning that the same window we had here will be started remotely for you, without you having to do anything. There is a whole technology stack behind this that I'm not going to talk about; you can read the docs if you're interested in how it works. And it's really quite impressive software that is really nice to use for this kind of demo, or if you publish a paper, for example, and you want people to be able to run your code: you can put it on GitHub, and people can run it through that service, really interactively, without having to care about what resources to use. I'm going to give a short example. The service runs at different speeds depending on how complicated your repository is.
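For reference, Binder launch links follow a simple, documented URL scheme, so you can build one yourself in a line of code; the repository name below is hypothetical, not the course repository.

```python
# mybinder.org launch URLs have the form /v2/gh/<user>/<repo>/<ref>,
# optionally with ?filepath=... to open a specific notebook directly
user, repo, ref = "someuser", "somerepo", "master"  # hypothetical repo
notebook = "index.ipynb"

url = f"https://mybinder.org/v2/gh/{user}/{repo}/{ref}?filepath={notebook}"
print(url)
```

Pasting such a link in a browser is all a reader needs to get a live session, which is why these links are handy to put in a README or a paper.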
So I made an empty repository here, just to illustrate how this works, so that we don't have to wait too long. I will just copy this link here. There are some additional options: you can open specific notebooks, et cetera. Then we hit Launch. Notice that there is no login; you don't need an account or anything, and it's free for everyone. It has to build the environment and then launch it; I already built it, and this repository is very simple, so in this case it's really fast. You see that it opens a new webpage in my browser with a Jupyter session that is identical to what I had before. So we have this file browser again, and we can do interactive computation. We can create a new notebook, open it, and I can type. So it's really interactive: you can really run code. Here I created a notebook, but in the case of the course, you will have access to notebooks that are already made, and you can run them and modify them as you wish. So it's really a great way to give people access to these resources. The drawback is, of course, that this is not permanent: the session will stop. If you stop using it for 10 minutes, it will shut down, and all your work will be lost. If you want to keep the work you did, there are two ways of doing that. First, you can save your notebook, give it a name, say my notebook, and then download it. This downloads it to your computer, and you can reopen it later if you want. So here is my notebook; I can put it back here, and if I go here, it should be in here. You see that my notebook is here; it now runs on my own computer, and I can reuse whatever I did. So the first possibility is just to download it. There are also newer options where you can store your notebook in the browser. This works only if you use notebooks which are pre-made and that you modify.
So imagine you open a notebook and modify it: then you can use this button that says Save to browser storage, and it will save it. Now your session stops, and you have to restart it. When you restart, the original notebooks come back, without your modifications, and then you can use the other button, Restore from browser storage, and this will recover your changes, okay? So these are the options you have to keep your work. This is not designed for heavy users: if you decide you are going to use Python to develop pipelines, you should not use this service. You should then definitely install the entire system on your own computer, because handling the sessions that time out would be too complicated. But it's really a great resource. Now I will show you how to run the course itself. Let's go back to the repository; this is the course repository. How do you start it? You start it by clicking here: there is a button that says launch binder, so you don't even have to copy the GitHub address; you can just click on this button. Just as a note: don't all click at the same time right now. Maybe wait until after the course, or until you can also use Colab, because this service is open for everyone and there are limitations. So wait a bit, and if you start it and see that nothing happens, or you have to wait a long time, just try later; it just means that too many people are trying it out at once. So we have again this window that opens, and you get some information: it says that it found an already-built image and is launching my server. This step can take a bit longer depending on how many people are using the system at the moment. Okay, it's starting up, and we have all the material here. The material is in this folder called biapy, and you have all the notebooks. If I open one notebook, I can execute it with Shift-Enter. This imports a package, then imports another package.
You see that here we import an image, I will say a word about this later, and we can display that image, okay? This image is already stored on this service: there is a folder, if I go up here, called data, that contains all the images. So this is ready to use: the data are uploaded there, the notebooks are uploaded there, you can just use them. This is the no-installation, no-download solution, which I think is the preferred solution; it's the easiest one to use. If you look at a notebook and think, this is interesting, I want to try it on my own image, you can use the upload button: you can pick an image from your computer, upload it, and then process that image as a test. So everything will be available there. Now, if I close this and I close my session, the session is gone, okay? Everything is lost. I would have to save and download the notebook as I was showing before, okay? So this is an absolutely great tool to use. Special thanks to the mybinder.org team: I told them I would organize this and that many people might try the system out, and since they have limits on the system, they agreed to increase the limits for this course. So a big thanks to them. Yeah, so this is how it looks when the Binder session stops, okay? The connection fails and then you have to restart. You have all this information in the slides. The other solution is Google Colab. Google Colab is an alternative; it's Google's version of Jupyter. You see that they changed the layout a bit, how things look, but the principle is exactly the same. Code is separated in the notebook into these cells; you can have code, and you can have formatted text. It works exactly like a Jupyter notebook, they just named a few things slightly differently. These all run on Google infrastructure, so Google has servers where this is running.
And you can use Colab to create notebooks, to upload notebooks, and you can also run any notebook which is on GitHub or any other Git service; you can run them directly on Google Colab. So it works a bit the same as before. Let's go back to our repository; this is the course repository. Now you can click on Open in Colab and this will bring you to this page. It asks you to log in on GitHub, but you don't have to; this is a mistake, I don't know if it's because of me or their system. You just close this window and then you should see a window that looks like this. You see that this links to the repository, to a specific branch. For people who are familiar with GitHub, there is a specific branch for Colab, and you can open a notebook and it looks like this. So this is text again; you can double-click and edit. And these are code cells; it just warns you, to be sure that you really want to run it. And so this imports NumPy, for example. So it works a bit the same way. You can add code or text cells using these buttons; there are also shortcuts that are a bit awkward. You have information here about how much RAM and disk you're using, because this runs on their infrastructure. They guarantee you some limited amount of computation. The sessions also have a limit: it's maximally 12 hours per session, so it's much longer than on Binder, but it's not guaranteed to be 12 hours. You see that what was called the kernel before is called the runtime here. You can interrupt the execution, you can restart your runtime; this works the same way as in a classic Jupyter service. There are other options that you can explore on your own. One really important aspect of Google Colab is that you can choose the runtime type. If I click here, this opens a window and I can say whether I want to use a GPU or TPU, that is, graphics processing units or TPUs, the Google-specific version of GPUs.
So you can really use GPUs from Google with these notebooks. This is one of the main reasons these Colab notebooks have become such a popular resource: people doing machine learning often need GPUs, and this way they essentially get a GPU for free. So this is one of the main aspects of using Google Colab. There are still two differences with the Binder solution. First, some of the notebooks need additional packages installed. There are lots of things already installed on Colab that you don't have to take care of yourself, but some specific ones are not. So here I have a notebook where I look very quickly at how to use StarDist and Cellpose, which are machine-learning based, and you see that here you have to install them. At the top of those notebooks you always have a cell that looks like this that you have to execute, and it will install things for you. You don't have to do it yourself, just execute that cell. The other thing that you have to take care of is how you access the data: you should use Google Drive for that. The data, if we go back to the description where I explain where you can get them, are on a Zenodo repository. This is a data repository; I made it for this course and you can download this data.zip file. So download it to your computer, then upload it to your Google Drive and put it at the very top level of your Google Drive. Okay, so if I open Google Drive here, there is this data folder that contains all the data. I already did that, so it's already there. So you have this first step to do before you can access the data. Once you have done that, you execute this cell, which is always at the top of the notebook.
And you will see that it needs access to your Google Drive, so it wants to make sure that it's you. It gives you a link that brings you to a sign-in page from Google. You select the account you want to use, then you have to agree to share data with it, so maybe create a special account for this. Then you have to copy this code and paste it into the field here, press enter, and now you are connected to your Google Drive. Okay, so this is how notebooks work in Google Colab. There are a lot of resources from Colab itself explaining how all this works, in case something was not very clear. You see that it's a bit more complicated than using Binder, but you get longer sessions and GPUs. So depending on your needs, you might want to use different solutions. Finally, you saw that here we had to install some packages. We used this command called pip. I don't want to spend a lot of time on this, but there are two main ways of installing packages. One of them is pip, with the Python Package Index; the other one is conda. With pip, you use commands that always read pip install something. So if you want to install a package like scikit-image, you can Google "pip scikit-image", and it will bring you to the package's page on the Python Package Index, which tells you how to install it. You can run this command either from your command line, just like this, or directly from a notebook, okay? In a notebook, you have to type this exclamation point and then whatever you want to install. Here I made an example: this package is not available, it tells me "No module named skimage", which is scikit-image. So now I say pip install scikit-image, and it will install it for the notebook and, in general, on your computer. You can install multiple packages, and you can install specific versions.
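As a minimal sketch of what these pip commands look like (the package names and pinned version are just examples):

```shell
# From the command line: install one package, or pin a specific version
pip install scikit-image
pip install scikit-image==0.19.3

# Inside a Jupyter notebook cell, prefix the same command with "!":
# !pip install scikit-image trackpy
```

The same syntax works on Binder and Colab, since the notebook cell just forwards the command to the underlying shell.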
The other solution is conda, and I would say this is my preferred way of doing this, because conda has more features than pip. Again, you can Google the command that you have to type in your command line to install a package. Conda does a few things more. If you install several packages that depend on different packages, for example scikit-image depends on NumPy, and you might want to install another package that depends on another version of NumPy, conda is going to look for the best combination of versions for you. This is one of the main reasons to use conda. And it's not limited to Python software: if you need to install FFmpeg or CUDA libraries, you can do it through conda, which makes your life much easier. And you can create isolated environments. If you start installing things on your computer just with pip, without caring about environments, you will very quickly create a mess of versions and you will have version conflicts, which are difficult to handle. The way to avoid that is to create separate, virtual-environment-like environments on your computer where you install the specific series of packages that you need. Okay, it sounds complicated, but it's done in one command: you say conda create with an environment name and then activate it. This is also explained in a notebook in the repository, if you want to do a local install, and it will really make your life easier. So if you want to install this on your computer, really go for conda; it's the best solution. Again, there are more detailed instructions in the repository. An alternative is to use a graphical interface: there is a tool called Anaconda Navigator that allows you to do all the things I just said through an interface. You can create environments, you can install packages, you can even start Jupyter from here, and other software too.
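The environment workflow described above can be sketched as a setup fragment like this; the environment name "bioimage" and the Python version are illustrative choices, not something the course prescribes:

```shell
# Create an isolated environment (the name "bioimage" is up to you)
conda create -n bioimage python=3.9

# Activate it, then install packages into it
conda activate bioimage
conda install -c conda-forge scikit-image jupyter

# Non-Python tools can be installed the same way
conda install -c conda-forge ffmpeg
```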
So this is a great way of installing all the things you need and handling these environments. Again, I don't want to spend too much time on this, since for this course you can run everything without caring about it at all. But if you want to do a local install, go for conda or Anaconda. Okay, so the next part is about the libraries we are going to use. But we are probably going to take a few questions first, if there are any, and then we are going to spend the last half hour, twenty minutes, on it; we will see how far we get. What is not covered today will be covered next time. At least now you know how to access all the notebooks and how to run them via Binder or Colab. If something was too fast or unclear, you can just go back through those slides. You can ask your questions, and we will probably have a way for you to ask questions during the week, so that if you are entirely lost, we'll answer those questions at the next session. So, yes. Yeah, so we have a couple of questions here which we will forward to you. One is: is Jupyter better than Spyder? What are the main differences? Okay, so I'm not a Spyder user. I know that several people use it, even in the courses I give; sometimes people come and use Spyder instead of Jupyter, but I'm absolutely not an expert, so I really couldn't tell you. What I can say is that a lot of this is really a matter of taste. People will have big battles about the best tools to write software or even to do image processing, but a lot of it is, in the end, a question of taste. I don't know much more about Spyder, but you have seen that it is, for example, available in Anaconda: if you install Anaconda, you will have Spyder available. Well, if I may comment quickly on that, I would say that the main advantage of Jupyter notebooks is that you have your data and descriptions directly there, and it's stored there.
So if you generate plots, you can just save it all together and you will have some sort of report directly in your notebook, while to the best of my knowledge you cannot do that in Spyder. Yeah, I think that's one of the great advantages of Jupyter, which I also showed at the beginning: you go from image import to final plots that you can put in a publication, all in a single notebook. So yeah. Another question is: can Jupyter show the list of variables and functions defined by the user, like other IDEs such as MATLAB or Spyder do? Yeah, so I don't think you can do it with basic Jupyter. But what I didn't mention, for the sake of speed and to not confuse people, is that there are extensions to notebooks. You should Google them if you want to use these extensions. One that you might have seen here is that you can create a table of contents; this is an extension for Jupyter notebooks, for example, and there are several of them. I know that some of them allow you to have a list of the variables that you're currently using. I'm also not showing here that there is another version of these notebooks called JupyterLab, accessible if you install Anaconda, which is something closer to MATLAB, for example, and which also has extensions. And I think there you definitely have extensions to show variables. So if you want to use any extensions, you will have to look that up yourself; it's too much to cover all of it here. Okay, another very interesting question: if I want users, biologists with basic programming experience, to actively use such notebooks, what is the best way to do that, in your experience? So in my experience, you should, as far as possible, avoid the installation step.
In 99% of cases everything works well, but it also works well because people who are programming, probably like most of the audience here, know what the command line is and can deal with the little issues that can occur. In most cases it's not a problem, but people who have very basic or no programming experience will struggle with these things. So the best, in my opinion, is to create a remote resource. As I was saying, you can, for example, ask your IT department: if you have a cluster running at your university, you can ask them to install Jupyter there, to have something slightly similar to this Binder service but permanent. People can then log in with their credentials and work there; their work runs remotely but is displayed in the browser. This is called a JupyterHub. There are different versions of these JupyterHubs, but it's possible to install this. Or you can do it yourself: I'm doing it at our university. I set up one of these JupyterHubs using a service called Switch in Switzerland, which is a kind of equivalent of Amazon or Google services for remote computing. So you can set up one of these JupyterHubs and give people access to it, and they can just connect to it via the browser without having to care about installing anything. So I think that's the best solution. Otherwise, if you have a very close collaboration, you can just help people install this on their own computer; once it's set up, it works, not forever, but for a very long time. Okay. There is another question that I'm not familiar with: can you use Binder to push a version of a pipeline directly back to GitHub? I don't think so, but I'm not entirely sure. I don't think you can push things back to GitHub from Binder. You can definitely do this with some extensions in Jupyter, so you can handle Git directly from Jupyter.
You can push and pull and do these kinds of things with extensions, but I don't think you can do that in Binder directly. All right, probably for now we are fine. Okay, very good. Then I will go to the next part, which is really more about bioimage processing. You are collecting all the questions, I guess. Ah, yes. Okay. So this will focus more on the packages we are going to use. One of the main packages is NumPy. NumPy allows us to handle images. So this is an image made up of pixels; each pixel has a value that you see here in red. You can just imagine that this is a matrix, right? An image is just a big matrix of numbers, and you can do operations on these matrices, even forgetting that you are dealing with an actual image. This is a 2D image, but you can also handle higher-dimensional objects, and there is no simple way of doing that in basic Python. This is why people developed NumPy, which allows you to handle these arrays; they're called NumPy arrays. This is a 2D array: it has two axes, called rows and columns in this case. If you're familiar with MATLAB, it's very similar: MATLAB has been the pioneer, I think, in creating this environment where your calculations are at the matrix level, and NumPy somehow re-implemented this. So if you know MATLAB, for example, you will not be too lost in this world. Of course you don't only have to deal with 2D images; you can have higher-dimensional objects. This is one plane, for example, and you can have multiple planes, so you have a third axis in that case, which would be a stack, for example a microscopy Z-stack with multiple planes, okay? That would be a three-dimensional array. Your arrays can be n-dimensional; I think there is a limit somewhere, but we never reach it in microscopy. So you can have time, channels, Z-stacks.
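As a minimal sketch of these ideas (the values and shapes here are made up for illustration):

```python
import numpy as np

# A 2D array: one image plane, 2 rows x 3 columns
plane = np.array([[1, 2, 3],
                  [4, 5, 6]])
print(plane.shape)  # (2, 3)

# A 3D array: a small "Z-stack" of 4 identical planes
stack = np.stack([plane] * 4)
print(stack.shape)  # (4, 2, 3)

# Higher-dimensional arrays (e.g. time, channel, z, y, x) work the same way
movie = np.zeros((10, 2, 4, 2, 3))
print(movie.ndim)  # 5
```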
You can have, say, 5D arrays. And the really interesting thing is that the operations you do on these NumPy arrays don't depend on how many dimensions you have. If you want to add two arrays together, you don't have to care about how many dimensions they have; if you want to do an operation on them, you don't have to care about that. So it simplifies the work a lot. In many more basic languages you have to write for-loops to go through all the pixels to do operations; this is hidden away when you use NumPy. NumPy is used in almost any scientific context in Python, so it's a very, very important library. There is one notebook about NumPy. You can skim through the whole material, but if you really intend to use Python, you should try to go in detail through that notebook to understand what is done in the other notebooks. I'm just highlighting here three very important things you can do with NumPy arrays, which we do all the time in all these notebooks, so that you have a bit of an idea of what is coming. So you have an array, and an important part of NumPy is indexing, with the related operation called slicing. You basically use the indexes of your array. An array has, in this case, two dimensions, and you can specify: okay, I want to recover this number one here. It's row number zero, column number two, and it returns one. But you can do that in more complex cases. You can say: I want to recover row number one here. Remember that we count from zero in Python, so zero, one, two. So row number one, and this colon sign here says I want to recover all the columns. So row one and all the columns: when I type this, it returns this single row. Okay, so this is used a lot in all these notebooks, for example when you want to crop an image.
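A small sketch of indexing, slicing, and cropping, with a made-up array (the numbers are arbitrary):

```python
import numpy as np

arr = np.array([[4, 0, 1],
                [2, 7, 5],
                [9, 3, 6]])

# Single element: row 0, column 2
print(arr[0, 2])   # 1

# One full row: row 1, all columns (the ":" means "everything")
print(arr[1, :])   # [2 7 5]

# Slicing a block, which is what cropping an image amounts to:
# rows 0 up to (not including) 2, columns 1 up to 3
crop = arr[0:2, 1:3]
print(crop)        # [[0 1], [7 5]]
```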
So if you crop an image, you say from which row to which row and from which column to which column, using indices. These indices in NumPy are a very, very important feature. What we also do a lot when we do image processing is combining arrays, right? You can do operations on them. For example, you might want to mask an image: you have an image and a binary image, and you want to multiply them to mask out parts of your image. For that you would do, for example, a multiplication. And I think the really important thing to understand is that this is not a standard matrix operation. You can do matrix operations if you want, but if you just write it like this, it's not a matrix product: it's a pixel-by-pixel operation, right? You take the first numbers of the two arrays, two times three is six, one times one is one, and so you fill your entire matrix like this. It's really an element-wise, pixel-by-pixel operation; this is really important to remember. And if you want to mask an image, this is exactly what you want: you don't want the true matrix product that you learned at school. The other really important thing is that you can do mathematical operations on the arrays, and again this operates pixel by pixel. If I want to take the cosine of this matrix, I can just say cosine of my matrix; I just pass it the name of that array. The output is a new matrix with the same dimensions, with the cosine taken of each element, okay? We do that a lot, and many functions in this course work like that: they take a matrix as input and they output a matrix, an image of exactly the same size, having applied an operation to each pixel, okay? So we will see this kind of operation a lot. Again, try to become familiar with NumPy.
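A quick sketch of element-wise operations and masking, with made-up values:

```python
import numpy as np

image = np.array([[2, 1, 3],
                  [6, 4, 5]])
mask = np.array([[1, 0, 1],
                 [0, 1, 1]])

# "*" is element-wise, NOT a matrix product:
# each pixel is multiplied by the corresponding mask pixel,
# so pixels where mask == 0 are set to zero
masked = image * mask
print(masked)          # [[2 0 3], [0 4 5]]

# Mathematical functions also operate pixel by pixel
# and return an array of the same shape
cos_img = np.cos(image)
print(cos_img.shape)   # (2, 3)
```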
It will serve you not just in imaging, but in the whole scientific Python world. Okay, so then we want to do actual image processing, and for this we're going to use scikit-image, which is a library that implements a lot of the classical functions that you need, and even more complex ones. I copied here some text that you can find in the publication that describes this software; it was published a few years ago. You see maybe a few names here that come up, for example, on the image.sc forum: Juan Nunez-Iglesias or Stéfan van der Walt, who are some of the main developers. And the project's stated aim is to provide high-quality, well-documented and easy-to-use implementations of common image processing algorithms. I put this here because I have been using it for several years now, and I can only say that this is very, very true. It's very easy to use, and the functions are very consistent: when you know how to use one function, you will not be surprised by how another one that you never used works. It's also very well documented; I will go quickly through the documentation to show you how that works. I will just show you a few examples of the types of functions that are available, and I only picked very simple ones; there are much more advanced functions, but simple ones will do for illustration. First of all, you need to import your image. All the functions are organized in that way: you import the scikit-image package, and then there are sub-modules for each class of operations. There is an io module for import and export, and it operates like this: you call the function, you give it a filename or a path, and it returns a NumPy array. Almost all functions take a NumPy array and output a NumPy array. Okay, so this object here is now my image, a NumPy array. Now you can do a Gaussian filtering of your image.
So there is a specific module called filters in which you find all the filters, among them the Gaussian filter. It takes a NumPy array as argument, plus some options; several of these functions have options. Here you can set how wide your Gaussian should be, and then there are additional ones, for example here I don't want to rescale my output. This is exactly what you do in Fiji when you do a Gaussian filtering: a window pops up and asks you what sigma you want. This does the same thing, and the result is again a NumPy array. NumPy array in, NumPy array out. Then you have lots of other modules. You can do lots of different transformations, like rotating an image by a certain angle. You have the whole set of morphological operations, where you pass a mask, a binary image, plus the structuring element you want to use for the filtering. If you don't know about morphological operations, you should have a look at the course that was about MorphoLibJ, which is a great library in Fiji for these kinds of operations. Then you can analyze regions: you can measure properties of segmented objects if you have a labeled image, for example the area. This is what you would do in Fiji with Analyze Particles. There are multiple features to analyze your image: you can do template matching, with an image and a template, again NumPy array in, NumPy array out. And you can do segmentation: there are various sub-modules for segmentation, some of them really complex and advanced, among them active contours. Active contours are implemented directly in scikit-image, and you pass an image and an initial contour. So this is just a tiny, tiny fraction of all the things that are available in scikit-image. You can follow this link to the documentation, the API reference.
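To give a feel for the "NumPy array in, NumPy array out" pattern, here is a small sketch chaining a Gaussian filter, thresholding, labeling, and region measurement on a synthetic image (the image, sigma, and threshold are made up):

```python
import numpy as np
from skimage import filters, measure

# A synthetic image with two bright rectangular objects
image = np.zeros((64, 64))
image[10:20, 10:20] = 1.0
image[40:55, 30:45] = 1.0

# Smooth, threshold, then label connected objects
smooth = filters.gaussian(image, sigma=1)
mask = smooth > 0.5
labels = measure.label(mask)

# Measure properties of each labeled object,
# similar to Analyze Particles in Fiji
props = measure.regionprops(labels)
print(labels.max())   # 2 objects found
for p in props:
    print(p.label, p.area)
```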
And so if we go here to the filters module, where I picked the Gaussian, you see that you have a whole list of other filters that you can use, and they all work in the same way: the first thing you pass is an image, and sometimes you also need to pass a mask. And regarding documentation, if you click here, you see that you have extensive documentation telling you, for each of these parameters, what it is and how it should be used. Then you have examples, and even more complete examples. So you will never really be lost, trying things out and having them not work. When you are lost, of course, you can always ask questions, either on their own repository or on the image.sc forum, where people also answer questions about scikit-image. Okay, so this is a very, very important package; it will be used throughout all the notebooks and you will see it used in different ways, so you will become familiar with it if you go through the material. Then we need a package to plot images. As I was saying, we use matplotlib. It's one of the oldest plotting libraries and it's very widely used. It works again with NumPy arrays: we can just pass a NumPy array and plot it; this image here is a NumPy array and it displays this image. In this course it's used in an extremely minimal way: we only plot images, sometimes two images or a histogram, but in a very, very minimalistic way. There is a notebook about matplotlib in the course; you should have a look at it if you don't understand what is done in the notebooks. Sometimes I create subplots, for example, and you might wonder how that works, so you can consult the plotting notebook that I made, which shows how matplotlib is used within this course. There are multiple other plotting libraries, more oriented towards data plotting, which are easier to use than matplotlib.
The advantage of matplotlib is that it gives you full control over how your figures look. If you need to make an image for a publication and conform to certain settings, you will be able to implement virtually anything. But again, our use of it will be very superficial. Okay, so these are the three main libraries that you're going to see in the material, and then there is a series of other ones that are used mostly in specific notebooks where I illustrate how to use them. You have a full list here with the links; I will just go through a few slides illustrating how they work. The first one is napari. napari is a very recent software that has been developed to do rendering, 3D rendering, volumetric rendering. It's very powerful and very user-friendly. You can create this kind of very advanced rendering of multi-dimensional data, and you have this interface that allows you to interact with your data. It's very easy to add custom interactions to this window, and I think lots of people are very enthusiastic about this software, because a good, complete 3D renderer was a bit missing in the Python world. There is one drawback: you see that this opens a separate window, and it relies on a specific package to do the rendering. So this will only work if you install everything locally: the napari part will not work on either Binder or Google Colab. If you want to test it out, you should install it on your computer. There is also, I think, a standalone application that you can open separately, so you can use it even without notebooks. I really encourage you to discover it; it's a great addition to the Python world. Then there is trackpy. There is one notebook where I show how to do tracking of cells in a time-lapse movie.
It was originally developed for particle tracking, but it can be adapted to many other problems. It's very user-friendly: the functions are easy to use and intuitive. It's based on pandas, so if you're not familiar with pandas, you might have to read about it; I give explanations in a notebook about what pandas is. pandas implements data frames; if you know R, for example, it's very similar to data frames in R, and it's very simple to use at this level. If you explore that notebook, you will discover how to use a minimal amount of pandas. trackpy also implements spot detection, so if you have a spot detection problem, you can do everything in trackpy: the spot detection and the tracking. It also has numerous features to clean up tracks; you can remove tracks, and there are lots of options for the tracking in terms of the algorithms that you want to use. So it's great software, and I think more and more people are using it, as it has grown out of its original world, which was more physics and biophysics. Then there is another 3D renderer, which is ipyvolume. This is also a great renderer and it works purely in the browser; it exploits browser technology. So you can really use it in your browser, and it will work on Binder. It still doesn't work on Google Colab, I think, because Google has some limitations on what you're allowed to display, but if you try it on Binder, it will definitely work. You can plot volumes, scatter plots, meshes, lots of things; you can even make movies. You can interface it with another library called ipywidgets, which allows you to create interactive controls. And finally, you can export a view as an HTML file: you can just save a figure as HTML, and you end up with an HTML file on your computer that you can embed in a website or anywhere else, as a demo or as an illustration.
So this is really a great feature of ipyvolume. To customize, it's a bit more complicated than napari: napari is very user-friendly and gives you all these controls to change colors and so on, while this requires more programming and a deeper understanding of how ipyvolume works. But you can do basic plotting very easily; only if you want to customize it is it going to require a bit more work. ipywidgets is the library that allows you to create this kind of interactive application. There is a tiny bit about it, I think, in the notebook where I explain how to import images and how to display them. You can create buttons, lists and sliders; you can really create these kinds of applications, and they will live in a notebook. But if you want, you can also render them independently of the notebook with another service called Voilà. This is beyond this course, but I really like this application; I think it's great. You can really create interactive applications. You see that this was also running on MyBinder; the demo is also on GitHub if you want to have a look at it. So you can create an app that people can then use without having to touch any notebook or know anything about programming. It's really a web app, a bit like a Shiny app in R. And yes, ipywidgets is also used by other packages like ipyvolume. Finally, we are going to briefly show, in one notebook, two segmentation applications that have become popular. One of them is very, very recent: it's called Cellpose. I think there is not even a final publication on it yet; you can find it on bioRxiv, I think. But they have a GitHub repository, and they even have a webpage, which you see here, where you can upload an image and test it on your own data before installing anything. It's a detection and segmentation algorithm that segments cells and nuclei.
And it has been trained on a very large dataset of very different kinds of images, so it's very versatile. I continuously see people posting on Twitter about how amazing it is — it worked out of the box on an image completely different from what Cellpose expects. I have tried it myself, and it's honestly very impressive. You can run it as an application on your computer, or you can include it in your own Python code or in a Jupyter notebook, and that's what I illustrate briefly in this notebook: you can run Cellpose inside a notebook if you want to integrate it into your pipeline. Another one, very similar but a bit older, is StarDist. You heard about StarDist in one of the previous lectures, from the developers themselves. Like Cellpose, it implements an algorithm that is a bit smarter than what standard deep learning approaches do: they add specific features to detect objects. In this case, they segment what they call star-convex objects. If you imagine a star, it's an object where all contour points are reachable from a single point within the object. Exploiting some mathematics around this, they combine it with deep learning and provide software that can segment not just nuclei but any kind of star-convex object. There is a pretrained version of the algorithm, trained on a database of nuclei, so if you have images with fluorescent nuclei you can try it directly on your dataset. If you have a very different dataset, you can retrain it — there are instructions on their website for the retraining. If you go through the notebook and think this looks interesting — again, the notebook is very basic — and you want to know more, I strongly encourage you to go and watch the course that they gave.
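The star-convexity notion is easy to check for a simple polygon: seen from a suitable interior point, the boundary must be hit exactly once by every ray, which for a counter-clockwise polygon means the vertex angles wind around the point exactly once. This is a hand-rolled illustration of the geometric idea only — it has nothing to do with StarDist's actual API or its deep-learning part.

```python
# Illustration of star-convexity (the geometry StarDist exploits), not StarDist code.
# Assumes a simple polygon with counter-clockwise vertex order and p strictly inside.
import math

def is_star_convex(vertices, p):
    """True if every ray from p crosses the polygon boundary exactly once,
    i.e. the vertex angles around p wind around exactly once."""
    angles = [math.atan2(y - p[1], x - p[0]) for x, y in vertices]
    total = 0.0
    n = len(angles)
    for i in range(n):
        # each unsigned angular step, in [0, 2*pi)
        total += (angles[(i + 1) % n] - angles[i]) % (2 * math.pi)
    # any backtracking vertex adds an extra full turn to the total
    return abs(total - 2 * math.pi) < 1e-9
```

A convex diamond passes; a polygon with a notch that folds back behind the centre fails, because its angular steps sum to more than one full turn.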
I put the link to the course here; it's available now on YouTube. They also have material in their GitHub repository showing how the system works and how you can run it, and they actually use Google Colab notebooks too. So if you watched that training and had no idea what a Google Colab notebook was, maybe this course will help you understand it a bit better. Finally, the last package is PyImageJ, a package that allows you to mix Fiji and Python. For example, if you wrote a macro in Fiji and want to combine it with some further data analysis, you can integrate it all in a notebook. Fiji is not, I think, that great for the data analysis and plotting part — I don't think many people use it for that — but it's absolutely great for the image processing part, while notebooks and Python (or R, if you prefer) really shine in data analysis. So you can basically paste a macro inside a notebook, execute it, recover the output, and create a plot, for example — and then you have again integrated your whole analysis pipeline in a single notebook. When you run this, it downloads Fiji for you, or you can use your local Fiji — there are different ways of doing it. You can also use plugins installed in your local Fiji version, and as I said, you can reuse macros. There are lots of things you can do; again, I put links in that notebook to the original repository, and the PyImageJ developers provide many notebooks themselves, so you can really explore it. Just know that you can also use notebooks to do purely Fiji/Java work — you can program your routines from notebooks if that's an environment that suits you. So there is a lot of interaction there.
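The hand-off described above — run a macro in Fiji, pull its results table back into Python for analysis — looks roughly like this. The `imagej.init(...)` / `ij.py.run_macro(...)` calls shown in the comment are PyImageJ's real entry points, but since they download Fiji, the macro output is mocked here as a CSV string (with made-up measurements) so that the Python analysis half runs anywhere.

```python
# Sketch of the Fiji -> Python hand-off. With PyImageJ you would obtain
# the results table via something like:
#     import imagej
#     ij = imagej.init('sc.fiji:fiji')
#     ij.py.run_macro(macro_text)
# Here the macro output is mocked as a CSV string (hypothetical values).
import csv
import io
import statistics

RESULTS_CSV = """Label,Area,Mean
cell_1,120,45.2
cell_2,340,51.8
cell_3,210,39.5
"""

def summarize(csv_text):
    """Parse a Fiji-style results table and compute simple statistics."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    areas = [float(r["Area"]) for r in rows]
    return {"n": len(rows), "mean_area": statistics.mean(areas)}
```

From here you would typically hand the rows to pandas and matplotlib — the part of the pipeline where notebooks shine.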
And I think it's a good thing that these worlds get a bit mixed — a bit like what CLIJ does, for example, which is available for different languages but works everywhere the same. I think we should mix these different worlds as best we can. And if you saw the presentation about CLIJ, you may also know that there is clijpy, a version of CLIJ that works from Python. With that, you have a tour of all the contents you might want to look at. So again, at the end of this webinar you can go to the repository and browse through it: you can just read it as you would read a book, or you can run it interactively. Binder is the easiest solution, because it doesn't require an install — it works out of the box. Colab has all the features described, but don't forget to first download the data and put it in your Drive. I hope you will be able to at least try out some of the material. I would really encourage you to go through the first set of notebooks, up to number seven — this covers most of what you would do in a very simple Fiji macro, for example: filtering, detecting objects, et cetera. The other notebooks are on specific topics, so if one appeals to you, you can go into it in detail; the others you can just read to get an idea of what they contain. With that, I thank you for your attention — see you in a week. I really want to thank all the developers of these open-source tools: many, many people invest countless hours in them so that we can use them. One important point: don't forget to cite these software packages in your articles. It also helps these people find funding for the development of their tools. A big thank you as well to all the people who release their data publicly — it makes courses like this easier, because you can use the data as examples.
It's also, in general, a very good way of sharing science — so a big thank you to everyone who takes the time to do this; it's sometimes painful, but thanks to these people too. With that, we can maybe take one or two questions if there is time; otherwise we'll see each other next week. There is also a poll, which you may have seen, asking whether you plan to go through the material — just for me to know whether I should really walk through the material next time or just answer questions, depending on how people feel. So, is there any question? We can take one or two minutes. Well, you have more than one or two. Okay, sorry. Alexis is asking: what are the main reasons you would recommend Python to an intensive Fiji user? At some point it's a question of taste, right? If you are very used to Fiji, then go for Fiji and write everything in Fiji. But if you're not really a programmer, you would have to learn Java and how to write an entire plugin, so there is quite a lot of overhead if you want to do these things in Fiji. Here you just need to import the packages, and you can write your short pipeline in a function and come a long way. There is an example notebook where I show a full pipeline, from imports to exporting an actual image plot, and that would be a bit more difficult, I think, to do in Fiji. I'm not a Fiji expert, so I don't know in detail, but the particular reason I would use notebooks is this feature of integrating everything: Python is very good for the data science part, including plots through pandas. If you really need that, it's a very nice, integrated way of working. But as I was saying, the best thing is to mix the two — you can use PyImageJ, for example, and use the pieces of Fiji that you really like from Python.
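The kind of short pipeline meant here — threshold, find objects, measure them — fits in a handful of lines of Python. The sketch below hand-rolls the connected-component labeling so it only needs numpy; in practice you would use `scipy.ndimage.label` or scikit-image instead, and the image here is synthetic.

```python
# Sketch of a minimal "Fiji-macro-style" pipeline in Python: threshold,
# label connected components, measure sizes. Hand-rolled labeling for
# illustration; in practice use scipy.ndimage.label or scikit-image.
import numpy as np

def label_components(mask):
    """4-connected labeling of a boolean mask via flood fill."""
    lbl = np.zeros(mask.shape, dtype=int)
    n = 0
    for seed in zip(*np.nonzero(mask)):
        if lbl[seed]:
            continue  # already part of an earlier object
        n += 1
        stack = [seed]
        lbl[seed] = n
        while stack:
            y, x = stack.pop()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not lbl[ny, nx]):
                    lbl[ny, nx] = n
                    stack.append((ny, nx))
    return lbl, n

def count_objects(img, threshold):
    """Threshold an image, then return (object count, array of object sizes)."""
    lbl, n = label_components(img > threshold)
    sizes = np.bincount(lbl.ravel())[1:]  # drop background label 0
    return n, sizes
```

From the `sizes` array you are one line away from a pandas table or a matplotlib histogram — the "exporting a plot" end of the pipeline.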
Okay, we have some specific questions, like: which package would you recommend to select and work with ROIs on an image? Or: is it possible to do cytometry analysis in Python? For ROIs — everything interactive was, I would say, for a long time the weak part of Python, and the reason lots of people use Fiji. I use Fiji a lot myself when I have to do very interactive things, because drawing on images and so on was for a long time very complicated in Python. Today, I would definitely tell people to use napari. If you think you want to use Python, install napari: there you can draw things, click, select regions, and create all kinds of objects, shapes, and labels — it's very, very flexible. So I would really encourage people to use napari. There might be a course coming up about napari, maybe in the next session — just check what is going to be offered. There is also a question: does Cellpose run in 3D on very large datasets, like 50-gigabyte datasets, with an appropriate amount of hardware? I didn't understand the first part of the question — can you run it on huge 3D datasets? Is this about napari or Cellpose? Cellpose, okay. So, Jupyter is just an interface, and as I was saying, you can run it on a kernel that sits anywhere. If you have a cluster with 24 cores and enough RAM, you can run anything via Jupyter — you don't need anything special. For Cellpose, you would create a loop and go through all your images. But Cellpose and StarDist are machine-learning based, so if you don't use a GPU, you will wait until the end of the universe to process all your images.
So if you really want to make intensive use of Cellpose and StarDist, you run them on a GPU — that's the only thing that really matters. One more note: napari has also been designed to handle very large images. There is a technology behind napari called Dask that you can use, which allows you to open files as large as you want. Next question: as far as you know, can most Java plugins available in Fiji be called from Python? This I haven't explored enough to tell you. I know there are limitations depending on how the plugins are written and what requirements the authors impose. I know it's possible for at least some of them, but I haven't explored it in detail myself, so I can't give you many details — but I'm happy to look into it if people have specific questions. That would be a great thing to show next time, for example. Do you know specific packages for 3D point cloud rendering? For this you can use ipyvolume, for example, or napari — either of the two will render scatter points, with shapes you can choose, and this works without problems. And about cytometry — do you know some libraries for that kind of analysis in Python? Not specifically for cytometry, no, nothing comes to mind. If anybody else has an idea of a specific package, please share. There is a question I'm not familiar with: for video, is it only possible to work using "NackPy"? How do you spell it? N-A-C-K-P-Y? No, I don't know that package, I have no idea. Do you know how robust PyImageJ is? Because ImageJ-MATLAB is sometimes buggy. As far as I tested, it works. What I noticed is that on some versions of Fiji there are some issues — I pinned a specific version, for example in Binder, to be sure it would work. I think ImageJ and Python are quite stable, so I don't see why it should be particularly unstable.
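The trick Dask uses for larger-than-memory images can be shown in miniature: process the array block by block, so only one block is ever "hot" at a time. This is a hand-rolled sketch of the chunking idea only — Dask additionally makes the blocks lazy and runs them in parallel, and the `blockwise` helper below is my own name, not a Dask function.

```python
# Miniature of the chunked-processing idea behind Dask (not Dask's API).
# fn is assumed to preserve each block's shape and dtype.
import numpy as np

def blockwise(arr, block, fn):
    """Apply fn to block x block tiles of a 2D array, one tile at a time."""
    out = np.empty_like(arr)
    for i in range(0, arr.shape[0], block):
        for j in range(0, arr.shape[1], block):
            out[i:i + block, j:j + block] = fn(arr[i:i + block, j:j + block])
    return out
```

With Dask proper, the equivalent is building a chunked array (e.g. with `dask.array`) and letting the scheduler decide which blocks to compute and when — which is what lets napari browse files far larger than RAM.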
Of course, PyImageJ is, I think, still under development, so there might be some changes. The only thing that might be complicated is installation, because you need the right Java — if you want to install it, just go to the PyImageJ repository and check what they suggest. Julien should just say when we should stop overrunning the time. Well, I think there are some more specific questions that we will probably answer offline — offline in writing, exactly — and we can compile all these questions and look at them in the next session too. So for the sake of time, I think we can close the question session here. We'll be back next Wednesday, I think — same time, same place. Looking forward to seeing all of you, hopefully, again there. Thanks for your attention.