So the team today is composed of Raphaël, Marie and Grégoire, and we will give you an introduction to the Cytomine software that we have been developing for more than 10 years now. The history of Cytomine started back in 2009, when we had a dream of designing a web-based platform to ease collaborative research in the biomedical fields. At that time, we thought it would be amazing to have a web platform that would allow different stakeholders and researchers with different backgrounds to share imaging data, to share annotations, and to work together on analyzing these images. So basically, we thought it would be nice to have a common place where, for example, biomedical researchers who have access to sample cohorts or biobanks could share their samples with people who have powerful scanners or microscopes, and work together with computer scientists to design new algorithms that perform different computer vision tasks through the web, eventually using some kind of computing environment like clusters. We also had the idea that it would be nice to allow external people, for example remote experts, remote pathologists, remote biomedical researchers, to annotate these images and help machine learning developers build new algorithms to ease these computer vision tasks. So what we basically did was start this project when we got some local funding from Wallonia in Belgium. We started this software development motivated by an application in lung cancer research, where people had to analyze large numbers of histology slides. But we quickly realized that it is not only these researchers who have this kind of need: many researchers in the biomedical field do, and in education as well. So we have continuously developed this software with user requirements from very different communities of biomedical researchers.
And then six years later, we decided to release this software under a permissive open-source license, together with our first publication in Bioinformatics. Since 2017, we have been proposing two versions of the Cytomine software. The official version is maintained by the Cytomine company, and you have here the links to the website of the company, as well as the links to the GitHub and documentation servers for this version. At my research institute, we are still developing this software by adding new experimental modules, new user interfaces, new data models, or new AI algorithms. So this version includes the features of the official version plus these additional modules, and you also have the links at the bottom of the slides. Since we deployed this software and released it under an open-source license, many different types of users have used it, either through collaboration with us or by asking the company to install the software in their own institutes or companies. We have here examples of many different universities around the world that are using it on a regular basis, either in biomedical research settings or in education settings, where teachers provide histology courses through Cytomine to medical or veterinary medicine students. Since the beginning of this year, we are also involved in a very big European project with more than 40 partners, whose aim is to develop a kind of NCBI for digital pathology, with the idea to collect millions of slides and annotations and connect AI algorithms. The idea is to use Cytomine as the core platform for web-based annotation of these large databases. So here we show very simple examples of Cytomine applications. Different researchers come with different types of images; as I said, we started with histology images.
But for 10 years now, we have also been working with very different kinds of microscopy images in the biomedical field, and also in other fields where large images have to be annotated in a collaborative way. So we have applications in lung cancer and breast cancer research, in zebrafish development research, in morphometric studies like cephalometry, in the art domain, and even in other domains. If you are interested to know more about these applications, you can have a look at our website, where we try to collect papers that have been published by people using the software. The main idea of Cytomine is to enable collaboration through the web. That means that once you have installed the Cytomine software on a server in your institute, you can start sharing almost everything, including the images, for example the very large images produced by digital pathology scanners. But you can also share the annotations, which you can basically see as regions of interest in these images to which you associate some metadata, like ontology terms, properties, or textual descriptions; we will describe this to you today. And you can also share algorithms and their results, like quantifications produced by these algorithms. Basically, Cytomine gives each entity, like a project, an image, a user, an annotation, or the result of an algorithm, a unique identifier that allows you to share this data around the web. So here what you see is a typical workflow: you have images that you upload to a Cytomine server; these images are then directly visible through the web interface using a regular web browser; multiple people can access these images and annotate them manually; and these manual annotations can be used to train machine learning or deep learning algorithms that can be executed through the web interface and produce some quantification, like tumor delineations.
And these results produced by the algorithms can be proofread on the web interface, for example by correcting segmentation contours, so that at the end the biomedical experts can generate statistics that are useful for a specific analysis. This is a typical workflow; there are many different use cases of Cytomine. Please note also that Cytomine can be used as a desktop application, because you can install it on your laptop or desktop Linux-based machine, but of course you then lose the collaborative features, as it is not installed on a server that is accessible to your collaborators. So in this two-session seminar, we will first, today, discuss the main concepts of Cytomine, like images, annotations, et cetera, and we will focus on interacting with Cytomine through the web application using a regular web browser. That means we will show you how to visualize these images, annotate them, and at the end execute algorithms through the web interface. Next week, we will go more into the details of how to interact with a Cytomine server using its API, and using the Python or Java clients that can talk to the server, so that as a computer scientist or data scientist you can basically import and export all the data from a Cytomine server: for example, export the annotations that the experts have made manually into Python code that you have developed, then train a deep learning model, apply it to new images, and communicate with the Cytomine server to visualize the results directly on the web interface. But this second part will be presented next week; today we will mostly focus on the web UI, starting with the online organization of your imaging datasets into projects. We will briefly discuss how to configure these projects with access rights and user roles. We will show you how to visualize these very large images; Grégoire will mostly show you examples on digital pathology images.
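To give a flavor of next week's topic already, the export step just described can be sketched with the Cytomine Python client. This is a sketch, not the exact code that will be shown next week: the host, keys, and project id are placeholders, and the `AnnotationCollection` call pattern follows the client's documented usage. The small WKT parser is a simplified helper for plain polygons only.

```python
def parse_polygon_wkt(wkt):
    """Turn a simple WKT polygon as exported by the API, e.g.
    'POLYGON ((10 10, 20 10, 20 20, 10 10))', into a list of (x, y) floats.
    Simplified helper: handles single-ring polygons without holes only."""
    inner = wkt[wkt.index("((") + 2 : wkt.rindex("))")]
    return [tuple(float(c) for c in pt.split()) for pt in inner.split(",")]

def export_annotations(host, public_key, private_key, project_id):
    """Fetch every annotation of a project with its geometry.
    Needs a live Cytomine server and the cytomine-python-client package;
    all four arguments are placeholders you must fill in."""
    from cytomine import Cytomine
    from cytomine.models import AnnotationCollection
    with Cytomine(host=host, public_key=public_key, private_key=private_key):
        annotations = AnnotationCollection()
        annotations.project = project_id
        annotations.showWKT = True  # ask the server to include the geometry
        annotations.fetch()
        return [parse_polygon_wkt(a.location) for a in annotations]
```

The returned vertex lists can then be rasterized into training masks for a deep learning model, which is the workflow described above.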
But at the end, I will quickly show you that we can also deal with other types of images, including 2D-plus-channel, Z and time images, and hyperspectral images. Grégoire will also show you how to create and share these annotations and metadata, and all the types of information that you can associate with your annotations. We will show you how to configure the user interface so that it is more or less complex, with more or fewer features. And at the end, we will briefly show you some applications of AI and computer vision algorithms, just to give you an overview of what we will present next week. So without further delay, I will give the floor to Grégoire, who will give you a live demo of Cytomine.

OK. Hi, everybody. Do you hear me loud and clear? Yes. Perfect. So welcome, everybody. Thanks to the NEUBIAS Academy for having invited us to present Cytomine in such a detailed manner, and thanks, Raphaël, friends and colleagues, for the introduction. So today we will present Cytomine as it is in the web application. I will present all the main concepts that drive the Cytomine experience and give you some examples of how to create some content. In Cytomine, everything starts on what we call the dashboard, where you have some messages, the list of the projects you have opened previously, and some metrics. But first of all, I need to introduce what a project is. Inside Cytomine, a project is a space to work. If I want to share some images with some colleagues to do some work, or even to work on them myself, I need to create what we call a project: a space where I can put my images and do some work on them. I will detail what a project is precisely when I create a new one for you to see. But to put some data inside the project, I first need to upload some images into my storage.
So the concept here in Cytomine is that every user has his own images that belong to him. When I get connected to Cytomine, I have my account; there is absolutely nothing available in Cytomine without being authenticated, it is an authentication-restricted application. On my account, and I will change this data afterwards, I have some data which are important, like my email, which is fairly public, so it's not a problem to show it. The data I will change after this demonstration are the public and private keys, because these are keys that you will see several times across these two webinars: the main strength of Cytomine is that I can go here on the web application using my login and password, but I can also make requests through the API using my public and private keys, to get authenticated without having to give my login and password. So on this page I can manage my different data, including these keys, and the keys can be regenerated; I will do that afterwards, so it's not a problem to show them in the video. So this is for the main account. In my storage, I will have only my own images, not the images that have been uploaded by my colleagues or friends: only mine. In the main bar above here, you have the main concepts: the projects, the spaces where I put my images; my storage, the collection of images I can put inside my projects; the collections of terms that we call ontologies, which I will present later; and a collection of algorithms, which I will also present later. Let's go deeper inside Cytomine, first by creating a project. To create a project, I need to go to the projects page. The logic is always the same: in the top right corner of a table, I have a button to create some data. So I will create a project here, with a not very creative name, Demo NEUBIAS Academy. I will create an ontology using the project name, to give some examples later. I save, and directly the configuration of my project opens.
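The point of the public/private key pair is that API requests can be authenticated without ever sending the password, or even the private key, over the wire. A minimal sketch of how such keyed request signing typically works: the client computes an HMAC over a description of the request using the private key, and sends only the public key plus the resulting signature. Note that the exact string-to-sign used by the Cytomine API is an assumption here; this illustrates the mechanism, not the precise protocol.

```python
import base64
import hashlib
import hmac

def sign_request(private_key, method, path, date, content_type="", content_md5=""):
    """HMAC-SHA1 signature over a canonical description of the request.
    The field list and ordering are illustrative assumptions, not the
    authoritative Cytomine canonical form."""
    string_to_sign = "\n".join([method, content_md5, content_type, date, path])
    digest = hmac.new(private_key.encode(), string_to_sign.encode(), hashlib.sha1)
    return base64.b64encode(digest.digest()).decode()

def authorization_header(public_key, private_key, method, path, date):
    """Only the public key and the signature travel with the request;
    the private key stays on the client."""
    signature = sign_request(private_key, method, path, date)
    return "CYTOMINE %s:%s" % (public_key, signature)
```

The server, which knows the private key associated with the public key, recomputes the same HMAC and accepts the request only if the signatures match.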
I will come back to this configuration tool to present all the different possibilities that we have. But first, we will see that my project has a list of images, which is still empty, a list of annotations that I will show later, and so on. So first I need an image. I already have one image in my storage, and you can see that the list of images available to add to the project is the same as the list in my storage: the list of images that belong to me. You can also see that when you leave a project to do something else and check, for example, another job, you can go to your workspace and directly go back to the project where you were working before. So here I will add my image by clicking on Add. Then I have some information, such as the magnification at which the image was scanned. And if I want more detailed information, I click on this button at the start of the line to open the box, and there you can see a lot of information which might be useful for you, like a description, tags, and properties that we will explain during this webinar, the slide preview made by the scanner, the vendor or trademark of the scanner, the size of the image, and the resolution fetched from the metadata of the image to calibrate it. These are quite useful pieces of information, and I will demonstrate how to fill in some more. But first, let's open this image. You can see that using the web interface I am able to navigate in the image by dragging with my mouse, and with the plus and minus buttons I can zoom in and out. So this is the viewer inside Cytomine that lets you look at the different structures in the different images.
So you can see here in the legend box that as soon as I zoom too much, the box turns red to inform me that I am going beyond the maximum resolution of the slide. I have seen in my image tab here that it was scanned at 20× magnification, so it's quite logical: if I go further, for example to 40×, it's a digital zoom. You can also see that when I leave my image while staying in the same project and click on it again, I will still be at my previous location and at the right zoom. If I don't want this option of zooming beyond the maximum resolution, I just have to go here in the zoom settings and disable the digital zoom, and then I will not be able to over-zoom on my images. So this is the main interface of the Cytomine viewer. On the left, you have a column with all the tabs that are specific to the project; we will see the images, annotations, activity, information and, if you are a manager of the project, configuration. At the top of the viewer, you have all the annotation tools, and I will show some examples. And on the right, you have all the tools available for this image. I have an information box with the image size, resolution, magnification, and so on, and the name of the image. I have the digital zoom to enable or disable. I have a box to manipulate colors and change them live, with saturation, hue, and so on, based on what you need to see the different structures; you can of course reset it afterwards. You have the concept of annotation layers, which is really important in Cytomine. The spirit of Cytomine is that, to enhance collaboration between users, each user inside a project has what we call an annotation layer: a transparent layer, just for you, where you can draw your annotations. For example, here, all my annotations will be drawn on my own layer.
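The relationship between zoom level and magnification just described can be sketched numerically. In a tiled image pyramid each zoom level doubles the resolution of the previous one, the deepest level corresponds to the scan magnification (20× here), and any level beyond it is purely digital zoom; this is a minimal model of the behavior the red legend box signals, with the function name and level convention being illustrative assumptions.

```python
def effective_magnification(scan_magnification, zoom_level, max_zoom_level):
    """Return (magnification, is_digital_zoom) for a pyramid viewer.

    `max_zoom_level` is the deepest level of the pyramid, where one screen
    pixel equals one scanned pixel; each level below it halves the
    magnification, and each level above it only interpolates pixels.
    """
    magnification = scan_magnification * 2 ** (zoom_level - max_zoom_level)
    return magnification, zoom_level > max_zoom_level
```

For a slide scanned at 20× whose pyramid bottoms out at level 8, level 8 shows true 20×, level 7 shows 10×, and level 9 would be a 40× digital zoom, which is exactly when the viewer warns you.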
In my project I am still alone, so I cannot see the annotation layers of other users; I will show that later when I add more users. We can also see the properties, which I will show you later, and so on. So it is very important to remember that on the left are the project-specific tabs, and on the right, the image-specific ones. With that explained, I will now show you the different possibilities that we have for annotation inside Cytomine. So here I can select the different annotation tools. The first one is the point. The point is classically used to spot a structure: for example, this cell, and this cell, and this cell. You can see that as long as the tool is selected, I can continue to annotate, and then I have some points on the screen. A point is just a location given by an X and a Y, so it has absolutely no metrics: in the current selection box, which gives all the information about this annotation, you will see nothing regarding metrics. If I draw a line, I will try here, for example, to measure the distance between these two: I click, and I click again here twice to close the line. A line is an annotation which has a length, so it has a metric; there I have a length indicated in my current selection box, and so on. All the different annotation types will have metrics according to their nature. So I can draw a single line, but I can also draw a broken line by continuing to click with single clicks, and I just indicate the end of my annotation with a double click. When I make the double click, it marks the end of the line, and I have here the length of all the segments together. So the principle is always the same: I can close this box, and if I go and select an annotation again, it will pop up and give me all the metrics and information I have about this annotation. So for lines, we have the straight line.
We also have the freehand one, which is quite similar, but drawn freehand. I will give an example here, which is not very creative, but just for you to see; and there we also have the length only. We also have a collection of what we call closed annotations: rectangle, circle, polygon, and freehand polygon, for you to enrich your data. The first one is the rectangle. I will move a little bit in this image. For the rectangle, you just click on one corner and double-click on the other, and you have a rectangle created here. Then you have a perimeter and an area, and these are of course calculated based on the resolution of your image. So this is for the rectangle. For the circle, you click where the circle starts, and when you release at the end, the circle is created. And so on for the polygon: it's the same principle as the line, you just click, and each click makes a node, and at the end you close it and your polygon is drawn. For the freehand one, you hold your mouse button down, and until you release it, it continues to draw. So those are all the different types of annotations that can be made inside Cytomine. And you will certainly agree with me that if I look at the screen, the closed annotations are more visible than the open ones, simply because there is a fill in the middle. If you want to see them without this fill, you can play with what we call the layer opacity: you can change it here for your layer, or for all the layers that have been loaded, in the annotation layers box. Another important point: the points are not very visible at high resolution, but as you can see, if I zoom out they become much more visible, because they stay the same size independently of the zoom level.
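The metrics shown in the current selection box (nothing for points, a length for open lines, a perimeter and area for closed shapes, all scaled by the image calibration) can be sketched with standard geometry. This is an illustrative reimplementation, not Cytomine's actual code; `resolution` stands for the pixel size read from the image metadata, e.g. in micrometers per pixel.

```python
import math

def polyline_length(points, resolution=1.0):
    """Length of an open (poly)line given its vertices in pixel coordinates,
    scaled to physical units by the pixel size."""
    return resolution * sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def polygon_metrics(points, resolution=1.0):
    """Perimeter and area of a closed annotation.

    `points` are the unclosed vertices in pixel coordinates; the area uses
    the shoelace formula and scales with the square of the pixel size.
    """
    closed = points + points[:1]  # close the ring
    perimeter = polyline_length(closed, resolution)
    twice_area = sum(x1 * y2 - x2 * y1
                     for (x1, y1), (x2, y2) in zip(closed, closed[1:]))
    return perimeter, abs(twice_area) / 2 * resolution ** 2
```

A point contributes no entry here at all, which matches the empty metrics box you see when selecting a point annotation.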
It's a feature that makes sure you can find them: when you have created a point at the very maximum resolution, it's easier to find it again. So here I have one I can select. So this is for the creation of all the annotations. If you have made some mistake while creating an annotation, for example this drawing here or this rectangle must be moved, you can first select the annotation and then select the move option, and then you will be able to drag and drop it to place it somewhere else on the screen, and so on for any other annotation you want to redo. If it's not a question of moving but of rotation, for example with this one, you can use the rotate tool, and you will be able to rotate the annotation around its center. If you want this annotation at the center of your screen, you just have to click here on "center view on this annotation" and it will be perfectly placed in the viewport. If you want to change it, you have the opportunity to add some nodes or delete some nodes using the tool here. You can also use the modify tool: when you click on modify, you will be able to see all the nodes (sorry, I will center the view) and move them to make corrections to the shape of your annotation. This is for the polygon that I've made. For the circle, I will center it as well: you will see that an annotation like a circle has been transformed into a series of nodes. So after it has been created, it can be updated in terms of shape, and it is then considered a free annotation that can be edited in almost any manner you want. You might also want to complete an area in situations where moving some nodes is not sufficient.
So you will have to add a freehand area, for example: here, when I click on the pen with a plus, I draw an annotation overlapping the previously drawn one, and the two parts are merged. I can do it once again here, and I will have my... sorry, I must select it first; if I don't, I will create a new annotation instead. So here I have now an annotation made of two regions. To show it to you, I will move them independently. So yes, if I want to add an area to this one, I have to select it first and proceed like this, and it's the same for subtracting some regions. And if I select it here and do that, I can make a hole in my annotation. Yes, which is quite common when you are running AI scripts doing segmentation, for example, and that's why we have the option of filling all the holes: I select the annotation and I just fill it, to be sure that all the holes in the annotation are not considered anymore. So that's it for all the different tools that allow you to create, update, and possibly delete annotations, using this one here; obviously you have to confirm before deleting. So now I want a new image, because I've made some annotations here, and I want a new image to make some other annotations on. So I will show you how to add a new image. Here I only have one SVS image, CMU-1. I just have to click on "add file". For the demonstration I will obviously choose a very lightweight image, so the upload is not too long. Note that you can add several images and start the upload for both at the same time, or one after the other. If you just click like that, the image is uploaded, and then it is checked in terms of format and so on.
And then it will be added to your storage. As soon as it's there, I can go back to my Demo NEUBIAS Academy project, click on "add image", and have this one added with the other one. If I want to do it with a list of images and I know which project I want to use, I can also pre-select my project here, and the next images will be uploaded and directly added to my project. This one will take more time, so we will check afterwards whether it's complete, and when the upload is finished we will have it in our project. Meanwhile, here I have my new image with all its information and so on, and I am able to open it. If you have some images like that that you want to open together, because now I have two images, I can open one and, for example, ask to go to the next image or the previous image; that's OK, but you might want to have both images open together. You just have to click here on the big plus at the bottom of the viewer, and then you will have all the images you have in your project, and you will be able to see both images in two independent viewers at the same time. For example, normally this image is a crop of a section which is here; perfect, so I have two portions which are almost identical. And you have another box that popped up, which was not there before when there was only one image: the chain here, which allows me to link both images, and now if I navigate and zoom in one, I will also navigate and zoom in the other one. You can do this with almost as many images as you want. The problem is not the number of images you are working on, it's more the size of your screen, to be able to keep the open viewers readable. If you don't have enough space, by clicking on this small bar you can collapse the project sidebar to gain some more space.
So now I have these two images that can be used in a synchronized way, even though they don't have the same size; as I explained, the small one is a crop of the big one. When you don't want this link anymore, you can unlink them, and when you don't want to work with the two images at the same time, you can close one. But if you first want to check your upload: you can see that in my workspace I still have my NEUBIAS project proposed, but I now have two different viewers, one group with the two images and the previous one where I only have my single image. So I can change my active viewer here to go back to the several viewers that remain open as long as I work inside Cytomine. If I really don't want them to be open at the same time anymore, I can close one of them by clicking on the red cross at the top of the left bar of the viewer, and then I have two independent viewers and I can close them directly from this small box here. So now, my image is still uploading there; I will come back to it later. I will zoom and go back to my annotations, to give some examples of how we can add data to these annotations. First, we can add a description, which is quite common: I want to write a text here to say what is inside this annotation, as a reminder for me, if I work alone, of why I made this annotation and what it means, and so on. So I just have to write some explanation here: "I made this annotation during the NEUBIAS webinar as an example", for example. And then I will have this text attached here. Just so you know, the text can be enriched with bold and so on and special characters as needed, and you can have access to the source code of the text and so on.
You can add some links, images, and videos, so you can enrich your comments. You can also control how the text is previewed, because if I copy-paste this text several times, you will agree with me that if I create too long a text, it might be difficult to read in the small current selection box. So we crop the text after a certain number of characters, and I can decide where to cut it. If I put the stop-preview marker here and I save, I will only have the first sentence, and when I ask to see the full text, I will have the whole content. So as soon as the text is too long for the box, you will have a "see full text" button to open it in a more readable format, and if it's a very long text you will have a way to scroll inside this box, and you can even open it at a bigger size to be able to read it, if it's a really long text, maybe with images and videos and so on. So using this description tool, you can add a description to your annotation, and it's the same if you want to add a description to your images, and also the same if you want to add a description to your project. Inside Cytomine, we try to use the exact same tools for the same function across the different objects. So a description can also be added to a project, to an image inside the project, or to an annotation inside this image. This is for the description. The other tool which might be useful is the tag. A tag is just a common keyword that is publicly available for all users of the platform, and it's useful to find content when you want to group some projects or images inside Cytomine. For example, I can go here to my project information, I will open this box for you to see, and I have here the tag I will create, which is "NEUBIAS", and then Add. So I now have a tag, NEUBIAS, that I can use, for example, to also tag the images I have here, and so on.
And as soon as these tags exist, they are available for you to retrieve some information. Here in the search form, I just have a search on the names of the objects, projects and images. You can see that if I write NEUBIAS, I will only get my project, but if I click on the plus, I can also select tags, and then I get my project and my images that have been found by the advanced search form, which is available in the top bar at all times. So you can retrieve your projects and images using the tags that you have created, or the tags that other users have created on the content you have access to. That is what the tags are for. The same goes for annotations: I will be able, sorry, it's here, to use the NEUBIAS tag, for example; but by doing this it will not be in the main web search system, it will only be available in the annotations tab of the project, and I will explain its use later. Another system that we have is what we call a property. A property is also a keyword, but based on a key-value system, because you might want a way to give different values to objects that share the same key. A tag is just a word, but here I have a dual system, where I can play with the key and the value to share information within the same group. For example, to give a quite simple example, if you want your users to explore your annotations in a particular order: I will create the key "N" and the value "1" for this annotation, for example; for this one I will create a property with the same key, N, and the value "2"; and for the third one here, I will again create the same key, N, and then the value "3".
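The key/value lookup just demonstrated, finding every annotation that carries the key "N" and reading its value, is also how you would consume properties in a script. Here is a minimal pure-Python sketch; the dict shape used for each annotation is a simplified stand-in for what the API returns, not the real response schema.

```python
def properties_by_key(annotations, key):
    """Map annotation id -> value for every annotation carrying `key`,
    mirroring the viewer's 'show properties with this key' button.

    Each annotation is assumed to be a dict with an 'id' and a
    'properties' list of {'key': ..., 'value': ...} entries (illustrative
    shape, simplified from the API)."""
    found = {}
    for annotation in annotations:
        for prop in annotation["properties"]:
            if prop["key"] == key:
                found[annotation["id"]] = prop["value"]
    return found
```

Since values are free text, the same helper works whether the values are the ordering numbers of this demo, letters, or whole sentences.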
So they all have properties which share the same key but not the same value, and now I can go here, on this button, to open the property box, and I can ask it to show all the properties that share the key N, and you will see your annotations here labeled 1, 2, 3. It might be letters, it might be sentences: it's a free text field. So you can see with this small example based on annotations that you can have structures grouped by the same key. We use properties mainly to add metadata, information that might be considered as metadata, to annotations, projects, and images, which you can then also use when you are doing some scripting, for example. On the viewer, you can see the properties of the image, and the same feature exists for the project, so you can also use the property system to group your projects or your images, and you will be able to access this information from your scripts afterwards. If I go back to my annotation: I have presented the description, the tags, and the properties, and we have a third system of keywords, which is the terms. The terms are part of the ontologies. For now, I do not have any terms in my collection here. To edit an ontology, I can edit it directly where I want to add a term, but I can also edit it in this tool, which is quite general. As you can see, you can add a term and associate a color with it. I will not be very creative today: I will just create the term A with one color and the term B with another one, and I let the system choose; eventually a red one, for example; too dark, so let's take this one, for example, to have red and green. So I can go back to my image, and now we will be able to classify all the annotations using the
properties using for example, excuse me, the ontologies and the terms of ontology so for example here in this one now I have a selection of term A and this one I can choose term B for example and you directly see the consequences visually is that the annotations the feeling of the annotations gets the color of the classifications in terms of terms so I can select my terms one by one with this one but if I knew, if I know that I will create some collection of some annotations that share the same terms I can pre-select for example the team A and as soon as this term A is selected here every kind of annotation I will create will be pre-associated with the term A as you can see, if I want to change it I will switch it to term B and I will create now some other annotations that will belongs to B for example and these are all the manner that we can associate some classifications to annotation and this classification using the ontology will be very useful when you will do some scripting and AI algorithms and Raphael will go really deeper in the advantages and the use of these ontologies on his own part today and on the second webinar. 
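The term-classification behaviour just described, including pre-selecting a term so that new annotations inherit it, can be sketched in plain Python. This is an illustrative model only, not the Cytomine client API; all names are hypothetical.

```python
# Hypothetical sketch: an annotation layer where a pre-selected
# ontology term is automatically attached to each new annotation.

ONTOLOGY = {"term A": "#00ff00", "term B": "#ff0000"}  # term -> display color

class AnnotationLayer:
    def __init__(self):
        self.annotations = []
        self.preselected_term = None  # new annotations inherit this term

    def create_annotation(self, geometry):
        ann = {"geometry": geometry, "term": self.preselected_term}
        self.annotations.append(ann)
        return ann

    def by_term(self, term):
        """Filter the layer by term, like the term checkboxes in the viewer."""
        return [a for a in self.annotations if a["term"] == term]

layer = AnnotationLayer()
layer.preselected_term = "term A"
layer.create_annotation("POLYGON(...)")   # pre-associated with term A
layer.preselected_term = "term B"
layer.create_annotation("POLYGON(...)")   # now belongs to term B
```

Filtering with `by_term` mirrors the viewer behaviour of hiding or showing all annotations associated with one term, and the color lookup in `ONTOLOGY` mirrors how the fill color follows the classification.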
I will not go too deep into it, but just to show you that we also have this box, which is available now: you can sort all the annotations you have on screen according to the terms they are associated with, just by toggling the checkboxes here. So you can hide or show the term A annotations, hide or show term B, or hide or show all the annotations that have no term. And if the colors are too intense and give you a false impression of the underlying structure, you can change the opacity of these specific annotations; I have here a green and a red one so that you can see better that it applies to only one term and not the other. Don't forget that you can also reset the opacity here, and that you have the main global opacity of your layer in the layer panel here. So these are all the tools to attach information to your images and annotations. To recap: I have the description, which is free text; the tags, which are common keywords, mostly there to help you retrieve your content when you log in; the properties, which let you add values under the same key on different objects; and the ontology terms, which let you classify objects. This was for annotations. If you go now to the images, you will see that you have the description and the properties, but you do not have the ontologies: ontologies are really restricted to annotations; they are made to drive your classification of annotations when that is useful for your algorithms and so on. For the project, you have the description, the tags and the properties, but no way to select terms; it only informs you that this project is linked to the "Demo NEUBIAS Academy" ontology, simply because when I created my project I asked it to create an ontology with the same name. This is just to tell you that each project can be linked to only one ontology,
not a collection of them, because that would be too confusing in terms of terms, especially if you have the same term in two different ontologies; but one ontology can be used in as many projects as you want, so it is a one-to-many relationship. If you want to change it, we can go to the configuration of your project, and it is quite useful for you to see what is possible to do here. First of all, I will add some colleagues, because I am quite alone here: I will add some members, select Raphael and Renaud to be my colleagues, and just add them to the list. You can see that their roles are not the same, so it is time now to talk about the roles of users inside Cytomine. On the global platform you have three different roles. As you can see on my account, I am a user: the user is the standard user inside Cytomine, the one who can create projects and add images; to summarize, he can create projects and images and invite other users. We have a lower role, which is the guest: guests are users who can neither create projects nor upload images, so they will have a project tab to see the projects they are invited to work in, but they will not have any storage, because they are not allowed to upload images. And we have an upper role, which is admin, which can do the administration of the platform; this is my case, I am an admin, but to have access to all the administration tools I must open an admin session. If I open my admin session, I get the admin tab here, which gives me access to more administration features; we will not spend too much time on it for now, so I close it. So on the platform I have three roles: users, guests and admins. But inside a project, all these users can receive roles that are specific to that project, where they can be a manager or a contributor. The manager, with the black icon here, is the one
who created the project or was invited to manage it. He can add images, invite other users, and access the configuration box of the project with all the different features that we will go through. For example, if I want to make Raphael a manager of this project, I just have to click on his role and he is updated directly. The "representative" flag is only there to say that this manager is the one in the team best able to answer questions about this project; it is just a way to point to one person and say: if you have questions, please ask him before asking the others. In terms of general settings of the project, I now have some contributors, and there are different settings I may want for the project, different liberties and ways of working depending on the status of the project. If I go into the images, I can now access the layers of my colleagues Renaud and Raphael and see whether they have made annotations, and they have not for the moment. If I leave my project like this, it is in the classic editing mode: it means that every contributor is able to add, edit or delete project data on their own layer and on the layers of the other colleagues. So if I go here, I can open my image and decide, for example, to go to Raphael's annotation layer: I add his layer, I say I will not draw on mine anymore, I will draw on Raphael's, and I create an annotation here that is on Raphael's layer only. I deselect here: it has been created on Raphael's layer, and if I hide it, I have mine and I have his. So in the fully collaborative mode I can edit or even delete information on the layers of the other users. If I switch to the restricted mode, because I do not want my colleagues to be able to change the content of each other's layers, it says that contributors will not be able to add, edit or delete project data except annotations,
properties and descriptions, and on their own layer only. Using this mode, all the contributors are able to write data on their own layers, and they can see the data on the layers of their colleagues but not edit it. And if I put them in read-only, "just for your eyes", then even the contributors, even on their own layers, will never be able to add, edit or delete data. But this applies only to contributors: the project managers, as written here, are still able to see, add, edit or delete data on any layer in this project. So it is really important to know whom you trust enough to give the role of project manager, and whom you leave as project contributor. We have a lot of usages and users in Cytomine: Cytomine is used in teaching, in research, in diagnostics, second opinion and so on. For example, in teaching we generally have the teachers as managers and the students as contributors. In research it can be the lead of the project who is the manager and who invites some colleagues just to add data in a blind mode; he does not want them to see each other's layers, so he leaves them in the contributor role. And the same for second opinion: the pathologist the case belongs to stays the manager, and the other pathologists who are invited to give their own opinion are left in the contributor role, for example. This was for the editing mode; it is really important to set it correctly for the way you want your colleagues to work. We have a more complex way to transform the project, into what we call the blind mode. Using the blind mode, when I go to my list of images, you can see that the name of the image has been replaced by "Blind" and an ID. I am a manager, so I am still able to see the real names of the images, but if I were a contributor I would only see this ID for the image. This is important when you want some colleagues to see some images without
any information about the staining, the tissue and so on, information that may be encoded in the file name. This is sometimes useful in education, for example, or in a double-blind setup for second opinion in research and diagnostics. So this was for the blind mode. We can also decide to hide the manager layers from all the contributors, so that, for example, the students cannot see the layer of the teacher; or you may want just the inverse and hide the contributor layers: you do not want a student to be able to log into Cytomine and see the layers of the other students, while still seeing the layer of the teacher. So we have these two checkboxes, plus a selection of which annotation layers are available to contributors. You can also add some complexity to the way people are allowed to work inside Cytomine. That was for showing layers based on the contributor or manager role, but you might also want that, as soon as someone opens the images, they directly see the information of one of the layers. Here I can deselect the layers, for example, and I may want Raphael's layer to be automatically selected when any user opens the images. So now, normally, I open these images and I see my layer and Raphael's: mine should normally always be selected, but Raphael's has been added to the selection because I asked for it to be selected in the default layers, and I can make a multi-selection of layers; these will be the layers opened by default in addition to mine. I can also say that the default property I selected before, to make the values of the properties visible, here I had to select it manually, but I might want it to be directly selected when I come here. And then if I open my, I don't have an annotation here, why doesn't it work, the usual demo effect, there it has been
automatically selected. Once again, another thing here: do you allow your contributors to download the images? Because as a manager, when you open the box of an image, you have access to the download tool here to download the image to your own disk; this feature must be authorized for the contributors, otherwise they will not have this button. And the last part of the configuration is to rename your project or to change the ontology it is associated with. I deselect here. So, in terms of access to the content: you can change the editing mode, to read-only, "just for your eyes"; to restricted, where you can only edit your own data; or to classic, where you can edit any data of anyone, on the layers that are available, depending on whether you have decided to hide the contributor or manager layers; and you can blind the names to hide all the sensitive information if you want your users to work in a blind situation. That was for access to the data. You might also want your contributors not to have access to all the tools, or all the boxes, or all the tabs on the pages inside this project. This can be configured using the custom UI. For example, here as a manager I have access to the images here on the left, the annotations; I can toggle the analysis tab by clicking on it so it turns green, the activity, and so on. So as a manager I have access to all the available pages here, but maybe not my contributors: the contributors will have a selection like this one, with access to the images, annotations and information, but not the analysis nor the activity. And you can do this for any box inside the Cytomine viewer: if you do not want your users to use the review mode, nor the broadcast, nor the properties, ontologies, annotation layers and so on, nor the linked images, you can hide all these boxes, or leave them; and they will have a lighter, let's say a lighter viewer interface here
with all these buttons removed. I will show you by doing the same for the managers: now if I go here, I have my viewer, which has not loaded the boxes I deselected, so I only have these specific boxes available. I can do this for all the boxes, but I can also do it for all the information inside the annotation details, the box that opens when, I don't have layers anymore, which does not simplify my work, for the examples here. So I have my annotation box, and here you can also decide which information your contributor users will have access to: for example, I can decide that the descriptions make no sense here, nor the terms, nor the attached files, nor the properties. Now I go back, and I hope it is correct: my current selection has been adapted. So with the custom UI you can really manage all the information that you give access to. It also has an effect on the draw tools: as a manager, if I want my contributors to make annotations only with the freehand tool, I disable all the other annotation tools, and I check here, and then only the freehand polygon is available, no points, no lines and so on. I have selected the same for the manager, because I am connected as a manager and I do not want to switch accounts every time just to show you the different features. So you can see that all the viewer objects, on the project tab list, on the viewer's right tab list, all the annotation boxes and the annotation tools, can be made accessible or not depending on the configuration of the project. And I just noticed that I did not show you that, if your slide is not at the correct angle, you can apply rotations, in preset steps, or reset them, and so on. This was for the viewer. The main other page here is the annotations tab: it is a collection of all the annotations that have been made by all the users in your
project. It is sorted by terms by default, so if you do not use any terms, you will only have the section "no terms"; but as soon as you use some terms, since Cytomine was originally developed for digital pathology and is term-based, you get a pre-selection by term. You are also able to adjust how this annotation collection is displayed: here you can ask for the annotation previews to be large or small, decide how many annotations per page, and choose the color of the outline, because we outline all the annotation borders, and depending on the staining of the tissue you may want to change how the borders of the annotation previews are drawn. In the filters, you can select the manual annotations, those made by humans, as opposed to the analysis annotations and the reviewed ones. I will not spend time on the review process, because it is in Raphael's part, but just to mention it here: manual annotations are made by humans; analysis annotations are made by an automatic process, like algorithms; and reviewed annotations are annotations that have been fetched from both collections and validated by a human afterwards, so to build validated datasets you would rather use the reviewed annotations. For the moment I have only created manual annotations, but I am sure Raphael in his own part will focus on the analysis and the reviewed ones. I can also select the annotations of only one image rather than all the images of the project; I can filter by term; I can filter by who made the annotation, for the manual annotations of course; and I can filter by a range of dates in which the annotations were created. Using all these filters you can sort your annotations, and at the end you can download a PDF, CSV or Excel file with all the information related to these annotations. Let's look at the PDF, for
example: here you have a table with all the information, like the ID of the annotation, its area and perimeter, the coordinates of its center, the image ID, the image file name, the users, and so on. You can have this table as PDF, CSV, or Excel if necessary. That was for the annotations tab. The analysis tab is about algorithm runs, and I will let Raphael present it; just so you know, there are some algorithms available there if you have activated them in your admin panel, as Raphael will show you, and if you have also activated them in your project. The whole list of available algorithms is there; they are disabled here, so I will not spend time on this, but when you need them you can have them inside your project and launch them from here. I will spend more time on the activity tab, which gathers all the activity of the members inside the project. So I have activity in terms of project connections, image consultations, and annotation selections mostly, which is summarized here in numbers of annotations, split between manual, analysis and reviewed, and by term, which term is the most used in the project, and so on. So you have a lot of activity charts you can look at here, but you can also look at a member's activity: for example, if I take my own details, you will see all the different things I have done today, which images I have opened, how many times I have opened them, which browser I have used. This helps you understand: if you are a manager and a contributor tells you that he has some problems, you can see which system he works on and maybe advise him to change browser, for example. These are also my own activity charts, and the list of images I have consulted and how many times I have opened them. This tab is available if you have allowed your users to see it; for example, you might want your
managers to see the activity but not the contributors, especially if the information is sensitive. As a manager, you are also able to see it in the "view activity" entry here for each user: I will check whether Raphael has done something here; yes, it is here, he has opened an image, and so on. So this was for the activity and the information about the project. For the other part, I guess I will let Raphael explain more about the algorithms and the administration of Cytomine. The team can tell me if my time is up, and if so I can give you back the mic. "Yes, I think we still have 20 minutes left, so you can take the lead." Thank you very much; I drop the mic. Okay, I am sharing my screen now. So, what we have seen is that Cytomine is a complex database system, I would say, but it is more than that: with Cytomine it is also possible to apply algorithms. But just before going into that, I would like to quickly show you some other features of Cytomine, also because we had some questions about them. First, let me tell you that Cytomine can of course be used to visualize other types of images than histology images. If I go to the Cytomine server here, I am opening a project, and I will show you that we can, for example, look at hyperspectral images. In this case you have hyperspectral images here; these are very big images, not just a single plane, and you have these sliders here that correspond to the different channels, which are in fact the different spectral bands of the image. We can do the same thing for Z-stacks or time points, etc., so with Cytomine you can also visualize these types of images. When you go here, depending on the type of image, so Grégoire showed you that with histology images you can adjust brightness, contrast, etc., but with this kind of image you have other types of operations that you can apply on your images, slice by slice, etc. And that means that we
can also have some studies where you combine different image types within the same viewer, so that you can look at your samples acquired using different microscopes or scanners and compare the information coming from these different instruments. What is also useful, since these are hyperspectral images, as I said, is that you can look at the spectral profile of your image: for example, if I click here and create a point, I get access to the spectral information of this specific pixel over the different channels, so you have the plot here of the spectral intensities over the different bands. What is also interesting is that you can create what we call image groups, and also links between annotations. This is a very new feature that we use when we have to analyze multimodal data, for example in histology studies where different staining protocols are used on your images. If I go to this project here, as Grégoire showed you, you have the list of images in the project, but with this new feature you can create image groups: basically, you link the different images that correspond to the same sample, the same patient, and you can open all the images associated with this sample at the same time. Or you can create what we call annotation groups: within an image group, you have here a group of linked annotations that correspond to the same area of the sample, but in the different images. I open this one, and you see that by simply clicking here you go to the corresponding annotation in the next image, and again you can click here and see all these annotations at once, which helps you compare the information coming from the different modalities. So now it is time to very briefly present the application of algorithms. We will go into more detail next week, but just to show
you that with Cytomine you can apply different algorithms, and next week we will show you that you can integrate your own algorithms and apply them to your images. Let me show you a simple example with this project, where you have only two images, but what we did before this demo was to annotate some regions of interest corresponding to tumor and non-tumor. So here we have drawn the annotations, as Grégoire explained, using the drawing tools, and associated terms with them: in red you see the annotations corresponding to tumors, and in green the annotations corresponding to non-tumor regions. In this application, the goal is to have a model that automatically detects the tumor regions; then you can quantify, for example, the ratio between the sample size and the tumor size, to study the onset of lung tumors in your samples. To do this, we have this analysis tab that allows you to execute algorithms. For the sake of time I will not execute such an algorithm, I will not train a model; I will just quickly show you the result of applying a model. But basically the idea is that your training algorithm will use the annotations that have been made manually by the expert to train a recognition model, and then you can apply this model to new images and visualize the results. Here I selected one of these executions, which was done a few months ago, so you see that we are visualizing the annotations created by this specific execution of this algorithm. If I go back here, you see the list: it was this run of an algorithm; you can get information about the exact parameter values that were used by this algorithm, and you can see the annotations it has produced. If I click here, it shows the annotation within the image; this is the demo effect; I should see, here it is, the image, and you have the layer here corresponding not to a human but to the algorithm, and what
you can do with Cytomine is approve these detections. We will show you a bit more next week, but basically you start reviewing the image, and either you accept all these annotations at once, like this, and you will see the contours of these annotations switch to green, which means they have been validated; or, if you are not happy with some of the detections here, you can edit them, as Grégoire showed you. When you are happy, for the sake of time I just select these annotations, you click on "validate my review", so that at the end the system has stored the final annotations that you consider correct for this task. And at the end, what you can do is execute an algorithm here; it is not exactly an algorithm, it is basically just some statistics that you can generate, to end up with some statistics in a CSV file. I will do this very quickly; in fact I already did it, so it is here, and you can view this CSV file, and of course open it in Excel or anything else for a better visualization of it. What you can also do, for example, is apply a StarDist algorithm to one image. Let's say you want to apply StarDist in a specific region of interest: you first draw this region of interest and associate the term ROI with it; then you go back to the analysis tab and say, okay, I want to execute StarDist in image number one, I want to apply it in all the regions of interest that have been labeled with the term ROI, and what I want to detect is cells, so I want to associate the term "cell" with all the objects detected by StarDist. Here you have default parameter values, which I will not modify, and you execute the algorithm; in the background it will run in Docker containers. It is running here; I already launched a similar job a few hours ago, so we can click here on the results of this detection, and what you see here are the StarDist detections of cells. Oops, sorry. And something that Grégoire did not show you, but which is for example
possible: let's say you want to collaborate with your biomedical collaborator and you want to ask him, okay, is this algorithm working well on the data? You can easily send a comment to this collaborator directly from the platform; it will send an email, and the user will get this email in his mailbox, maybe after a few seconds; yes, it is here. So you get an email with the comment and a direct link to the annotation: if I click here, it goes back to the exact position of the annotation that was detected by this algorithm, so you can interact with your collaborators in this way. Let's go back to the presentation; I think we are almost done. So I showed you a few examples of algorithms applied to your image data. Next week we will show you the application of other algorithms, to let you understand how to apply them and how to integrate new algorithms: for example landmark detection here, and here another example of segmentation, using a U-Net model to segment the operculum in zebrafish images. We have several examples, but the idea is to let you understand how to develop and integrate your algorithms into the platform, so that you can easily let your users and collaborators apply them, interact with them, and review the results on many different types of images. Just to let you know: I mentioned some U-Net models, etc., but the Cytomine architecture is very flexible, so that you can execute algorithms that are implemented in very different languages or libraries. We will also show you next week, together with Sebastien Tosi, how to integrate workflows coming from the most popular platforms, like ImageJ, Icy, CellProfiler, Ilastik, machine learning libraries, etc. The idea is to use container technology, so that these algorithms are packed in a standardized software environment, which lets you reproduce and trace your results, because all these algorithms
are versioned, so you have direct access to the source code of these algorithms, and you have direct access on the platform to the parameter values that were used to run them, etc. We will show you next week, for example, how to detect cells and compare the results provided by CellProfiler, Ilastik, etc. So we are coming to the end of this first webinar. We have seen the main concepts of the Cytomine web user interface, and we just showed you how to apply algorithms, but we will go further into the details next week. We will also talk about the internal data models of Cytomine, so that if you are a computer scientist or data scientist you better understand how to manipulate the data: it is possible to manipulate the data through the REST API, through the Python client, the Java client, or even a JavaScript client if you want to develop your own web interface on top of the Cytomine core server. We will discuss interoperability and reproducibility, and Sebastien Tosi will present BIAFLOWS, which is essentially based on Cytomine and provides an adapted user interface for benchmarking: it allows you to execute algorithms from very different platforms and to compare them quantitatively with metrics, all of this again through the web interface, and we will show you that several tens of workflows have been integrated into this platform for very different tasks, such as cell detection, cell tracking, etc. With this I would like to thank my colleagues, of course, and the NEUBIAS network and the NEUBIAS Academy, and all the funding that we have received since 2010. I don't know if there are some live questions to answer; otherwise I think the idea is that we will continue to answer these questions on the image.sc forum, etc., but maybe Julien can tell you more about this. "Okay, so somehow this ends the webinar; thanks a lot Raphael and Grégoire. We can let a few questions arrive live if you wish, so I can pass one to you that I see now. It says: thank you for the nice
presentation, I wonder whether it is possible to save the annotated delineations as separate files, maybe as binary masks for example; thank you in advance." Yes, it is something we will explain next week, but basically everything is stored in a database inside Cytomine, a spatial database, and the idea is that everything can also be exported. We have an API that allows you to export the data, for example into a JSON file, and we also have API endpoints that allow you to extract crops of annotations, with binary masks or alpha masks, etc. It is indeed possible to do so. I don't know if I will be able to show this live today; yes, maybe I can, I will share my screen again if I manage to find it. Okay, so basically what I did here is use the API: you have the Cytomine server API, then "/annotation" and then an identifier; as we said previously, each object in Cytomine has a unique identifier, and then you get a description of this annotation. It is a bit messy, but it is basically all the coordinates of the contour of this annotation, and then, for example, here you have a crop of this annotation, and you can also have the mask, etc. So yes, it is possible; we will spend some time next week describing the API and how to manipulate the data. "Okay, here is another question: thanks for the great presentation; is it possible to apply a deep model, rather than for example CellProfiler, on histopathology images in real time?" So yes, it is possible to plug any model or algorithm into Cytomine: you have to respect some conventions that we will describe next week, to describe your algorithm, what the input values are, what the required libraries are, like TensorFlow, Keras, etc., and then you can indeed apply your workflow to your images. Of course, when you say "in real time", applying a deep model to a very large histopathology image might take some time, but you can run it in the background, do other tasks in the meantime, and then you get the
results. "Okay, here is another question: in your API your output is a JSON string; how many significant figures do you support?" I am not sure exactly what significant figures means here. So, we have all these kinds of annotations and geometries, like rectangles, circles, freehand polygons, etc., and as soon as it can be converted into WKT format we support it; we will also show, using BIAFLOWS, that we support tracks, annotations in 3D stacks, etc. But I am not sure I got the question right. "There is a clarification from the same person: how many decimal points do you support?" I am not able to answer this one directly right now, so we will double-check this and let you know. "Okay, so maybe, if there is no final question, for a matter of time, thanks again to the entire team, and we hope to see all of you."