Time to get this party started. Okay, so this is our first virtual meeting at Montreal Python, and I'm really excited, as one of the founding members of Montreal Python who could not attend for many years, to be your master of ceremony tonight. It's going to be about 90 minutes of exciting content, and this is a dynamic combo of both this live meeting and a hackathon that we're going to have in two weeks, so stay tuned, because we're going to tell you about this hackathon to fight this annoying COVID-19 crisis. Please join us on Slack if you can, and if you can't, we're going to figure out how to get you on Slack. If you're on Slack, remember we have a code of conduct: we ask you to be excellent to one another, and if something is not working, you have the list of people to contact in the topic of the Slack channel; tell us what's going wrong and we're going to do our best to solve it. So good evening to all of you. We're going to start in a very short moment, so Emmanuelle, if you want to get ready, I'll pass the word to you in an instant. Without further ado, our first presenter tonight, live from Paris, is Emmanuelle Gouillart. Emmanuelle is a core contributor to SciPy and scikit-image, she is also one of the organizers of EuroSciPy, and she's going to talk to us about interactive image processing with scikit-image. Emmanuelle, I'll pass you the screen. Hi everyone, so thanks a lot to the organizers for keeping the momentum of the community going.
It's a really great idea to have this virtual meeting, and it's a great pleasure for me to talk about image processing with the open source tools I contribute to, namely Dash and scikit-image. I am a Python developer here at Plotly in Montreal, and I'm also a member of the scikit-image core developer team. Some of the work I'm going to present here is funded by CZI, the Chan Zuckerberg Initiative, and I would like to thank CZI for their support of open source software in science. So let me start by saying a few words about how I got into software development. Before I came to Montreal I was running a materials science research lab: I was studying glass at high temperature and, for this, doing a lot of X-ray imaging studies at high temperature. This resulted in large and noisy data sets, and image processing was required to transform this data into scientific results, and this is how I got into being a developer of scikit-image, roughly at the time when it was created. For several years I mostly considered open source software a hobby and a way to meet great people and friends, but at some point I realized that turning a hobby into a job could be a great idea, and this is how I joined Plotly one year ago. At Plotly I work mostly on making image processing more interactive thanks to data visualization and interactive apps with the Plotly and Dash tools. So we're going to talk about image processing, and images are a very widespread source of data both in science and business.
In research you want to transform images into numbers by carefully measuring some properties of objects in images. It can be in life sciences, where you have cells or organs, but also in physics or in astronomy. In other applications, like in autonomous cars, you want to be able to detect objects and distances in real time and with really good reliability. You have other sources of data such as satellite imaging, which you can use to study traffic congestion or climate change, and so on and so forth. Nowadays a lot of image processing is done using machine learning, and in particular deep neural nets, and this figure illustrates different kinds of image processing tasks, from image classification, where you want to attribute a label, here "balloons", to an image (this is what you have in Google Images, for example), to instance segmentation, where you want to be able to label all the pixels corresponding to different objects. For these deep learning algorithms to work well you need to have a really good training set. A training set is a set of images with ground truth data, that is, already-labeled images with, for example, all the pixels of each object marked, and for this you need to do what's called image annotation. Image annotation means that a user needs to label some of the pixels, for example to draw a rough outline of some objects, or to draw bounding boxes around some objects, or it can also be to draw a very accurate contour of some objects.
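To make this concrete: an annotated outline is ultimately just a set of pixel labels, and a rough outline can be rasterized into a label mask in a couple of lines. Here is a hedged sketch using scikit-image's draw module; the polygon coordinates and array size are made up for illustration:

```python
# Rasterize an annotator's polygon outline into a per-pixel label mask.
import numpy as np
from skimage.draw import polygon

# Hypothetical outline vertices (row, column coordinates of the drawn shape).
rows = np.array([5, 5, 25, 25])
cols = np.array([5, 25, 25, 5])

mask = np.zeros((32, 32), dtype=np.uint8)
rr, cc = polygon(rows, cols, shape=mask.shape)  # pixel coords inside the outline
mask[rr, cc] = 1                                # pixels inside get the object label
```

The same rasterization idea applies to bounding boxes and freehand contours; only the way the vertices are collected changes.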
This is for machine learning, but also for more classical image processing algorithms it can be useful to annotate images, to mark some pixels as seeds for region-growing algorithms or as landmarks for image registration algorithms. What I'm going to show you is how to use an interactive web application for image annotation. Let's say that we want to build an image annotation pipeline: we will want to base it on web technologies, because you will need to share the application between different people, all the people who are going to label the images (it represents a huge amount of work), but also people doing quality control on the annotations, etc. So let me now introduce Dash. What is Dash? Dash is a framework for building web applications, in particular analytical, graphical web applications, in pure Python. Dash is open source; it's developed mostly by Plotly, but we also have great contributions from the community of users, and the promise of Dash is that you will write your web application in pure Python, with no JavaScript required. Here is a small code snippet to show you a Hello World web app written in Dash; this is a whole app.
It's a Python script, and when you run it with Python it will start a Dash server to run the app. To define the app, you see here that in the layout you define several components, one of which is an interactive component, namely a text input, and the other one is a simple div. Below, the slide shows how to define a callback, which is a function called when one component, here the input, is changed, and it means that the output component, here the div, will change when the input is changed. So you see it's only a few lines of Python code to have a fully interactive app. This is of course only a Hello World app, but if you go to the gallery of Dash applications, you will see a lot of apps which are highly customizable, with a lot of different components, and apps that are styled with some CSS. In a typical Dash app you have classical components like sliders (let me go to fullscreen), but interactive graphs, like here, are also first-class components of Dash apps. Interactive graphs can change when some other components are changed, like here a slider, but it's also possible for the graphs to emit events, like here this selection, which will change this other graph. The graph component of Dash apps usually uses the Plotly graphing library as a back end, but it can use other graphing libraries too. Let me go back to the slides: here is another example of a styled app. I also want to mention, since this is a Python meetup I will only talk about Python, but you can also use Dash with R, and now the Julia language as well. Inside Dash apps you can use a lot of off-the-shelf components: in the dash-html-components package you have the usual HTML elements, like titles for example, and you also have the dash-core-components package, which is for reactive components; by reactive I mean components the user can interact with, so sliders, drop-downs, radio buttons and so on. We have seen the Plotly graphs, and we will talk more about them later on. There is also a first-class data table component for visualizing data in a table, and some specialized libraries. The component factory used by Dash to build components is the React JavaScript framework, and what's very interesting is that you have thousands of existing React.js components available on NPM, which is the JavaScript equivalent of PyPI, so it's possible to package one of these components and use it directly with Dash. When we decided to build image annotation apps with Dash, we started by writing a prototype component for image annotation called DashCanvas, based on existing React and JavaScript libraries, Fabric.js and react-sketch, and I will show you a small demo of this DashCanvas component. Here is a Dash app for image annotation: you have this image which is displayed, plus a toolbar with different tools for image annotation, and you can see that here I'm drawing on the image. I think I also have the code somewhere, which is here; this is where I defined my DashCanvas component, an object which is part of the layout of the app. What I'm going to do now is press this button, and when I press it, I change the JSON data, which is a string with all the annotations, and it calls a callback using some image processing algorithm (I will talk more about it later) to segment the two cells here in this biology image from the annotations. So here is a first example of DashCanvas. You can install it from PyPI; there is also a gallery of examples, and I will come back to the example to show you that with DashCanvas you can have different kinds of annotations: you can draw freehand forms, but also rectangles, lines, and so on and so forth, and you can modify the annotations, rotate them, etc. You can define some parameters of the annotations, like colors, widths and so on, either in the constructor of the DashCanvas object or in callbacks; all these properties can be modified
in callbacks. So with the DashCanvas package you have this Dash component for image annotation, and you also have some utility functions to manipulate these annotations using the packages of the Python ecosystem, like NumPy and scikit-image; we will talk more about that later on. Here are some snippets of code using DashCanvas: you see that it's a question of defining the DashCanvas object with an image file and some optional parameters, and you can take a look at the gallery of examples if you want to know more. Now that we have used this prototype for a while, what we are doing is trying to integrate these drawing tools into the Plotly graphing library, which is a widely used graphing library in data science; Plotly is actually the most widely downloaded web-based plotting library in Python these days. With Plotly you get a lot of different traces which you can plot, like scatter plots, bar plots, pie plots, and also more advanced plots, for example this scatter matrix here, which is very interactive: if you select some part of the data, it will update the other subplots. Plotly is in fact a JavaScript library; with Plotly and Dash you don't have to write any JavaScript, but the devs write a lot of JavaScript for you to get all the interactivity in the browser. It's the same with the sliders and the drop-downs which we saw: there is of course a lot of JavaScript executed behind the scenes so that you get the interactivity in the browser. Since we have a very large number of users already using the Plotly graphing library, we thought: can we integrate the drawing tools into Plotly itself, and not only in our new prototype DashCanvas component? With Plotly, in the layout of Plotly figures (like here, where I define a Plotly figure) you can add what is called a shape, and a shape is really a shape, like a rectangle, a circle or a line, and in the current versions of Plotly, which are already released, you can edit these shapes, like in this example. Here is a Dash app with a Plotly figure, and here I have a line shape; I can modify it, and when I do this, it triggers inside my Dash app a relayoutData event, meaning that something changed in the layout of my figure, and this is what is used to plot the profiles of the different channels, the red, the green and the blue, along the line, if you want to inspect the intensity values of your image. So this is already available with the current versions of Plotly, but in this example you cannot draw a new shape. However, in the next versions of Plotly this will be possible, as in the DashCanvas demo I showed you a few minutes ago, and I cannot resist showing you how it will look using a development version of Plotly. What you will get in your figure is a mode bar (you may be familiar already with the mode bar of Plotly figures), but with additional buttons for drawing rectangles, circles, closed paths and so on. Let's say that here I want to annotate some cars to build an autonomous-car app: I can use, let's say, a different color for different objects, etc., and I can move to another image and go back, and you see the annotations come back, which means that all the annotations are wired to other components, here a database, thanks to callbacks. So you can draw the annotations, but you can also get them back to do something in your app. Thanks to my colleague, who is implementing these shape drawing tools at the moment, this feature will be included in the next release of Plotly, Plotly 4.7; we have a tentative release date of May 1st, so stay tuned, it should come really soon. Now that you have the geometry of your annotations, either in DashCanvas or, in a few days, in Plotly figures, what can you do with them? In a few cases you are able to plug these annotations directly into a machine learning or deep learning pipeline, but in a lot of cases you need some pre-processing, some data cleaning, and some further processing on the annotations. What's fortunate is that Dash apps are written in Python, and with Python you get the whole PyData ecosystem, which is really batteries-included, meaning that you have a lot of packages for performing different tasks. Here I show an example from the Dash gallery which uses scikit-learn, the machine learning Python package, to perform a classification of images of digits and to represent this classification in a low-dimensional space. scikit-learn is part of this scientific Python ecosystem, which relies on the NumPy numerical array, the object used for numerical computations, and for image processing we will now talk about scikit-image, which is like a sister package to scikit-learn, but for image processing. scikit-image is a toolbox for scientific image processing in Python. It's open source, and it's a library, meaning that it's not an end-user application: it's meant to be used in your own scripts or in third-party libraries or applications. It focuses on scientific images, and therefore it's able to process both 2D and 3D images, like in MRI or CT. From the visit statistics of the documentation website, scikit-image.org, we estimate that we have around 20,000 unique visitors per month, let's say active users of scikit-image. The number of core developers and maintainers is quite small, but we are happy to have a large community of contributors, and we are always happy to have people helping along; we are actively looking for a diversity of contributors, so if you're interested in contributing, either to scikit-image or Dash or Plotly, it would be absolutely great, and we can also talk about this on Slack if you're interested.
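As a hedged sketch of what this functional, toolbox style looks like in practice (a synthetic NumPy array stands in for an image file read with `io.imread`, and the particular function choices are illustrative):

```python
# A minimal scikit-image pipeline: binarize, then label connected components.
import numpy as np
from skimage import filters, measure

# In a real script you would start from a file: img = io.imread("cells.png").
# Here, a synthetic image with two bright rectangles stands in for it.
img = np.zeros((60, 60))
img[10:20, 10:20] = 1.0
img[40:55, 30:50] = 1.0

thresh = filters.threshold_otsu(img)   # global Otsu threshold
binary = img > thresh                  # binarized image: another NumPy array
labels = measure.label(binary)         # integer label per connected component

num_objects = labels.max()             # two separate rectangles -> 2 objects
```

Each function takes an array and returns an array, which is what makes these pipelines easy to chain.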
Here is a short example of first steps with scikit-image, showing typical code. What you usually do is start by opening an image array from an image file, so you get a NumPy array, and then you use the functional API of scikit-image, meaning that scikit-image consists of functions: you call a function, like this thresholding (binarizing) function, on the array, which returns another NumPy array, another image, on which you can call another function, like here this labeling function to label the different connected components, and so on and so forth, to build your image processing pipeline. Most people actually use scikit-image for only, let's say, two or three functions which are very useful for them; they use scikit-image really as a toolbox, and unsurprisingly the most widely used function of scikit-image is an I/O function, imread, which opens an image file name to produce a NumPy array. We also have functions for manipulating color channels, and functions for performing geometric transforms: for example, imagine that in your Dash app you want to correct a slanted horizon thanks to a user annotation; going from the annotation to the corrected horizon is just calling a single function from scikit-image's transform module. And in the previous example, where I was drawing a line to display a line profile, I used one of the numerous measurement tools shipped with scikit-image, this draw.line function; here "draw" is not in the sense of actual annotation, but in the sense of returning pixel coordinates, from which you can get the pixel values and plot the profile. On top of these, let's say, boilerplate utility functions, you also have more advanced algorithms, for example for image filtering, if you want to remove noise inside images; we will talk a little more about feature extraction, and also about image segmentation, and I think I have another app I want to show you. Let's say I'm a doctor and I want to measure things on this organ, but I don't want to spend a lot of time just delineating the contour here. What I'm calling here is one of the segmentation algorithms of scikit-image, called active contours; it's a kind of magic-scissors algorithm. Once again it's really a one-liner, just one single function called on the geometry of your annotation, so it's really easy to plug in Dash and Plotly for the annotations and scikit-image for processing them. scikit-image is not a deep learning package at all, because it doesn't do any GPU computation, but it can be used for pre-processing and post-processing, and I want to show you a last example performing, this time, classical machine learning using scikit-image and scikit-learn. You have a full sub-module of scikit-image which is dedicated to extracting features out of images; these features can be either points of interest or features characteristic of some patches of pixels, etc. Let me show you this example where, let's say, I want to remove the background of this image. This time it's not very scientific, but it could be used in retail, for example: I just draw a quick squiggle on this lady. This demo, by the way, is hosted on the DashCanvas gallery; it's not in Plotly yet, because we haven't released that yet. I guess there is some transfer over the network; if it doesn't work, I will come back to my movie. You see here that the way it works is that scikit-image computes features of the patches which are under the drawing, and then (oh, you see, it worked, it just took some time) it calls a scikit-learn random forest classifier to classify patches as belonging either to the object of interest or to the background, and that is how you get the segmentation, which you can correct: let's say here I didn't get everything, I would like a little more, so I can start again until I'm happy with the result. So to conclude on scikit-image, Plotly and Dash, I would like to say that all these packages come with very extensive example-based documentation: for scikit-image you have in particular a gallery of examples based on the Sphinx-Gallery package, but it's also the case for Plotly and Dash, which include a lot of tutorials based on examples, so it's really meant so that you can get started quickly and find a lot of examples helping you to build your own apps. As a conclusion, I hope I have convinced you that it can be very powerful to plug together some Python packages, like Dash and Plotly for the visualization and the interactivity, and some computational libraries, like scikit-learn or scikit-image, for running algorithms performing heavy-duty computations; you can get really advanced apps in a very short time. And since I know that in two weeks you're having a hackathon about COVID-19 data, I would like to say that there is actually a very large number of existing dashboards for visualizing COVID data with Dash. Here I have one example, but if you want to see a lot of them, you can go to the community forum of Plotly and Dash (this is, by the way, where you can ask for help about Plotly and Dash), and in the Show and Tell section a lot of people have advertised COVID-19 dashboards, many of them including the source code on GitHub, so if you want to give it a try it can be interesting. I contributed to one of these dashboards, which is here, where you can visualize the number of active cases for different countries, either by clicking on some countries or by clicking in a table (let's say I don't want Italy), and you can use radio buttons to switch between linear and log scale, and between confirmed cases and fatalities. My talk was mostly focused on image processing, but of course there is a large number of other potential applications: in this dashboard we use the statsmodels Python package to make some statistical analysis of past data and to predict a forecast of what would happen in
the next days, for example. So this is another example of what's possible with the PyData ecosystem. I think I'm out of time now, so thank you very much for listening. Oh, I had a challenge, okay. I hope you enjoyed the talk, and I will be very happy to answer questions, in English or in French, as you wish. Thank you. Thank you so much, Emmanuelle. Everyone, you can ask questions on Slack or on the YouTube stream as a comment, and I'm going to invite Isabelle, our moderator, to relay these questions. Emmanuelle, you're free to just keep your screen on the examples if that helps to answer the questions, and everyone, please ask freely on Slack or on YouTube. Simon and Izzit, you are going to be next, so please get ready. I leave it to Isabelle. Hi, Emma, thanks for the meeting. We had one question during your presentation; you might already have answered it, I'm not sure, and the question was: how is any of this possible without JavaScript? Thank you very much, Isabelle, it's a really good question, and maybe I went too fast on this part. So of course there is JavaScript behind the scenes: when I move this slider, for example, some JavaScript code is executed to update this figure, which is by the way a Plotly figure (Plotly is a JavaScript library), so a lot of JavaScript is happening. But when you write the Dash app, or when you write the code for a Plotly figure, you write only Python code, which itself calls JavaScript code. The difference is between the libraries, which contain a lot of JavaScript code, and what developers do with them, which uses only Python: the developers of Plotly and Dash write a lot of JavaScript code so that you, Dash users, don't have to write that JavaScript. I hope that was clear. Yeah, very clear, thank you for answering that. That's all we have for questions. Okay, maybe I have a question: if people want to ask you questions later on, is it possible to do it on the Montreal Python Slack, and if yes, on which channel? Yeah, Nick, do you want to take that? Yes,
certainly. So, channel #meeting is perfect; it's very low volume right now, so I don't think there's going to be too much noise if people want to ask on #meeting on the Montreal Python Slack, that's mtlpy.slack.com. If you want to join, you can follow the link that we posted as a comment on the YouTube stream, which is mtlpy.org/fr/slackin; it's a little bit long, so go on YouTube, on the stream, and find the Montreal Python comment with the link to join Slack. Channel #meeting, very low volume right now, so I think it's the best place to ask questions, and I think Emmanuelle can answer your questions maybe right now (it's getting a little bit late for her) or perhaps tomorrow, hopefully. We have another question that just popped up. Yeah, Nick, do we have time to take it? We have time for one more question. Okay, it's from Colleen, and the question is: I've seen similarities between skimage and OpenCV, how are they different? Thank you for the question. So skimage and OpenCV have some similarities, but OpenCV is really for computer vision, for example object detection or processing of video streams, whereas skimage focuses more on scientific image processing, images from biology or from astronomy. In particular, with scikit-image it's possible to process images in 3D, sometimes even in more than three dimensions, for example if you have hyperspectral images from satellites, and so on, so it's maybe a little bit more versatile in the types of images which you can process. Also, scikit-image is natively a Python library, which is really well integrated with the rest of the Python ecosystem, and it has a strong focus on documentation, so I think the learning curve of scikit-image is a bit gentler than that of OpenCV. Those are the advantages of scikit-image; on the other hand, OpenCV also has some advantages, because it's very fast, it uses some very optimized C++ code, but we are working at the moment to try to bridge this gap and to
accelerate scikit-image functions, so that you will have the very fast speed together with the good documentation and the 3D possibilities. Awesome, thank you, Emmanuelle. Thank you. Thank you so much. If people have more questions, you can ask on our Slack channel, and here is how you join: mtlpy.org/fr/slackin. Our next presentation is actually an interview: Simon Saint-Germain, head of go-to-market at PixMob, is going to tell us about being flexible and adaptable in this time of crisis, and he's going to be interviewed by Izzit from Montreal Python, who is a student at Université de Montréal and also a contributor to the MUTEK AI Art Lab. I'll leave it up to you guys. We're going to do the interview in both French and English; we'll try to cover every question in both languages. We might also forget some information, so if you ever feel that we didn't say exactly the same thing or that something was missing, please feel free to tell us and to ask your questions in the #meeting channel on Slack. So that's the housekeeping for the interview. My objective with this interview is to prepare you for the hackathon that we are organizing, which will take place in two weeks, from the first to the third of May, and for which I'm going to do a full presentation in a few minutes; then it's a bit of exploratory questions on how we can react as an organization or as an entrepreneur to adapt and face the crisis. So my interview will be about how to adapt oneself, and how to adapt a business or an organization, in times of crisis. First, let's begin with a short introduction of Simon Saint-Germain, who works at PixMob. What is PixMob, and what is your role in the business? So first of all, thank you for welcoming me. PixMob is a company generally known for the LED bracelets that light up crowds during shows.
Few people will admit it, but many have seen them, either on a Taylor Swift tour or in a show like Chantemandaise, something like that. But we have obviously become very well known this year for having done the Super Bowl halftime show. So we make bracelets that light up during these shows, but we also have a connected-object platform called Klik, and this platform is especially known in the Montreal region for being the platform of C2 Montréal. So we are obviously very much in touch with the event industry. Okay, excellent. Being very much in touch with the event industry, I guess you were among the first to be affected by the crisis. My question is: what was your initial reaction to the crisis? Right, so yes, we've been impacted from the get-go, and I just realized that I haven't answered your first question, which is what I do at PixMob. I'm responsible for marketing there, mostly on the Klik side, the smart wearable platform, but I've been responsible for both brands, Klik and PixMob, for the past three years now. Going back to what we did first: as you said, because we are in the event industry, we were affected by the crisis extremely early on. I would say at the beginning of February we started to see a lot of cancellations, a lot of events being pushed into the future but without any dates, so we saw this wave of cancellations coming in very quickly, and we realized very fast that we would most likely have to change our business model, or whatever we do. So what we did is we broke the whole company, about 100 people, down extremely quickly into five little groups to work on what we call innovation projects, and those groups were really set up as startups.
Come up with a solution very quickly, pitch it to the client very quickly, fail very quickly, be successful very quickly, and don't get attached to the idea. The objective was to find a new way, either using our technology or something new, to make it through this crisis. I see. So you could say you answered my second question, which was: how did the business plan its reaction to the crisis? How did you plan your response? Exactly. We knew that we had a relatively short period in front of us to adapt. As I said, we have a hundred people; we didn't know how fast things would go, or whether things would magically go back to normal the next day. But we knew that our window to adapt was short-term, and today what we're looking at is more the long term: the new entities that we created, the new ideas that we developed, how are we going to make those happen in the long term? That's interesting. The question of the long term is one of the questions I would like to bring up too: a lot of initiatives have emerged recently to face the crisis, and it would be interesting to keep them alive in the long term. To continue a little bit on the subject: what would help a project become a reality in times of crisis? You know, I'm a marketing person, so I will have to say that you have to have a market for your idea. A good idea is great, but if you don't have anyone to buy it, to buy in, to fund you, to keep it alive, your idea will go nowhere, even though it's a good idea. And for those who think that a good idea will always find someone to buy in, that's kind of hard to prove. So you have to work on ideas that have potential for being commercialized, unless you are an NGO and you're not doing this for the money, which is good for you, but most of the time you need to have a market. And that's what we've done.
We started out with five projects. I would say that right now we are concentrating on two that have a longer-term perspective and one that has a mid-term perspective, which we know will fade away because of the market. But we're trying to have some sort of vision to see whether these could prevail throughout the crisis. Great. Just to come back to the innovation groups, my question is: what are the different profiles in these innovation groups? What makes an innovation group perform and quickly arrive at concrete, interesting projects? I think that's what makes PixMob's strength: basically, we are not just an events company. We evolve in the event industry, but we have a huge amount of internal talent. We have marketing people, we have sales people, we have software engineers, we have programmers; we have everything. And what we decided to do is to put each of these talents in the teams. So, someone from marketing, someone from sales, someone from accounting (well, not really accounting; honestly, we had other things to manage during that time), but someone from hardware and someone from software in the same team, and the idea was to come up with new ideas. The other thing we did is that we eliminated the roles and hierarchies. There are no longer really hierarchies in the company, which is sometimes a bit weird, and puts us in situations that are a bit weird, but we eliminated several hierarchical levels in the company, and the roles have become completely fluid. So someone who was in production in the past can now be the head of product for a project. Okay, it's really interesting how you dealt with the situation. Do we have questions from the public at the moment? I don't see any. Maybe on YouTube? No. No, okay. Well, I guess that means that all of this is clear and interesting.
Before we wrap up, maybe: is there a word of advice for people who want to participate in the hackathon and start projects in this time of crisis? I would say two things, depending on what kind of project you're working on. Again, being the marketing guy, I'll go back to the fact that you need to be doing this for someone. So validate the need, validate the challenge you're trying to meet, validate whatever you're working on. Make sure you talk to the people it's for. The second thing I'd say is: fail fast. That's what we've done, and we are a business with cash flow, some revenue, some expenses, and so on. In the situation you'll be facing, I think failing fast is extremely important. Don't be afraid to fail and don't be afraid of negative feedback. We worked on some projects during this crisis where we got to the point of sitting down with clients and they said, no, we're not going to buy into it, this is not worth it, we're not going to put our money behind it. And we went, all right; even though we loved the idea, you have to scrap it extremely quickly and move on to something else. So don't be afraid to fail fast, and go get feedback from the market. Great. Thank you. That was the interview. If anybody wants to join us on Slack, again, it's in the meeting channel, and we'll take questions through the evening. Thank you very much, Simon. Excellent. And we're keeping Edith online here; Edith is going to tell us about the virtual hackathon we're organizing in two weeks. I'll pass you the floor, Edith. Thanks. I'll just get my slides up. Yes, excellent. Okay. Hello everyone. As a coordinator of Glacétonique, I'm going to tell you about the hackathon, which is the second part of the virtual combo for edition 76 of Montréal-Python.
Here is our magnificent banner. So, Glacétonique, May 1st to 3rd, 2020. It's a virtual, online hackathon: we get together on Slack, we form teams, we pitch ideas. The objective is to bring out projects that will help us face the crisis, projects that are relevant in the context of the COVID-19 public health crisis. So, the Glacétonique hackathon is a virtual hackathon. We get together on Slack. It's from May 1st to May 3rd, so it's the weekend after next. Our objective is to bring out projects relevant to the COVID-19 public health crisis. What to expect? We've adapted the in-person hackathon concept a bit to a virtual context. First, on Friday, we pitch projects: participants are invited to join us on Slack to network, exchange ideas, put their projects forward, and form teams, one team per project. On Saturday, we build the prototypes: a demonstration of a product or service, basically realizing the project, a sort of MVP, minimum viable product, to demonstrate the idea. On Sunday, the project presentations. It's not a competition and there are no prizes to give out, but we want to highlight the projects that were built or continued during the hackathon. So, hackathon: what to expect? We adapted the in-person hackathon model to a virtual context. On Friday, we do project pitches: participants coming on Slack present their projects, do some networking, and group themselves into teams around a project. On Saturday, there's a version of a prototype or a demonstration of the product or service that is the actual project; we work on our project and try to come up with an MVP, a minimum viable product. And on Sunday, it's presentation time.
Everybody is invited to present their project and to promote its future. So participate, join us. It's open to all relevant projects: no need to use Python. It's open to the general public: no need to be a developer. It's open to all relevant projects; you don't have to use Python, and it's open to everyone even if you're not a developer. All the information is on the Montréal-Python website; the address will be posted in the meeting channel in a few moments. Join us on Slack, in the hackathon channel. That was all I had to say for the moment about Glacétonique. Our goal is to bring people together. If you already have ideas or projects in progress, come and write to us if you need anything; we're going to do our best to gather as many people as possible around projects linked to the crisis we're facing right now. We really want people to join and gather around projects that are relevant to the crisis, so if you have specific needs, please let us know and we'll try to gather the right people to move the projects forward. Thank you. That was all I had to say. I'm afraid we've already lost Simon, but you can still ask us questions on Slack and we'll relay them. We invite everyone to fill out our little poll to find out if we did a good job with an online meeting. So please, everyone, fill out this little survey, bit.ly-np-76-survey, and tell us whether we did a good job of holding an online meeting for Montreal Python. And on that note, it wouldn't be Montreal Python without a glasses throw, so I'll do that just for you. Thank you and see you at the hackathon. Bye, everyone.