I'm very happy to have our guest here today, because it has been a long time coming: a year or two ago I invited her, then a student, to give a talk in Toronto, but it was cancelled for various reasons. At the time she was still a PhD student at the University of Nantes, working on the history and philosophy of artificial intelligence, but she has now finished her PhD and is at the University of Technology of Compiègne. So it is very exciting to hear what you have for us. Thank you, you have the floor.

Very good. So, hello everyone, and thanks for the invitation. To briefly introduce myself, I am a postdoctoral researcher in a project funded by the Institut National du Cancer. The project examines epistemological, ethical and legal issues related to artificial intelligence in oncology, and these issues are studied through the lens of real-life artificial intelligence developments in three French cancer centres and in a medical informatics lab. The project also aims at formalising a co-design method, the purpose of which is to make it easier to integrate artificial-intelligence-based devices into clinical practice.

My research fields are the philosophy of technology and the philosophy and history of medicine. I study decision support systems in medicine, from the first algorithms used by physicians to contemporary artificial intelligence, with a special interest in their integration into medicine as well as in their impact on medical practice, reasoning and knowledge. My research methodology combines a historical approach and an empirical approach.

I will quickly present my doctoral research, to add a bit of context for the work I will present today. My doctoral dissertation is entitled "Les algorithmes avant l'intelligence artificielle : enjeux, pratiques et contextes de l'automatisation de la décision médicale à partir de deux cas d'étude" (Algorithms before artificial intelligence: issues, practices and contexts of the automation of medical decision-making, based on two case studies). Its main aim was to shed light on the issues raised by the current developments of artificial intelligence in medicine, and I chose to shed light on the conditions for the integration of artificial intelligence into medicine by studying the development and use of medical decision support algorithms. I focused on the decision support algorithms used by physicians since the 1990s, namely decision trees, clinical decision rules and psychometric scales. I will give you more examples later, but here you have, for example, a clinical decision rule which is used to assess the pretest probability of pulmonary embolism: in short, it tells the doctor the probability that his patient has a pulmonary embolism, based on symptoms and risk factors.
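To make the mechanics of such a rule concrete, here is a minimal sketch in Python. The items, weights and thresholds are Wells-like and purely illustrative; the specific rule shown on the slide is not identified in the talk, so everything numeric below is an assumption.

```python
# Minimal sketch of a clinical decision rule for pulmonary embolism (PE).
# Items and weights are Wells-like, for illustration only; they are not
# the exact rule shown on the speaker's slide.

WELLS_LIKE_ITEMS = {
    "clinical_signs_of_dvt": 3.0,
    "pe_most_likely_diagnosis": 3.0,
    "heart_rate_over_100": 1.5,
    "recent_immobilization_or_surgery": 1.5,
    "previous_dvt_or_pe": 1.5,
    "hemoptysis": 1.0,
    "active_malignancy": 1.0,
}

def pretest_probability(findings: dict[str, bool]) -> str:
    """Sum the weights of the positive findings, then map the score
    to a pretest probability category."""
    score = sum(w for item, w in WELLS_LIKE_ITEMS.items() if findings.get(item))
    if score < 2:
        return f"score={score}: low pretest probability"
    if score <= 6:
        return f"score={score}: moderate pretest probability"
    return f"score={score}: high pretest probability"

print(pretest_probability({"heart_rate_over_100": True, "hemoptysis": True}))
# -> score=2.5: moderate pretest probability
```

The point of such a rule is that the weighting is explicit: the doctor can see exactly which findings produced the probability estimate.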
Thus, my dissertation's research program was to determine how physicians received these algorithms in the past, in order to identify the conditions for the appropriation of artificial-intelligence-based devices today.

My dissertation takes the form of two case studies on two different objects, which allowed me to study the effects of algorithms on medical practice from two complementary perspectives: first, from the perspective of how they appeared in medicine, and second, from the perspective of how they are used by physicians.

The first case study focuses on the Hamilton depression rating scale in the context of general practice. The Hamilton depression rating scale is a psychometric scale that has been promoted by French medical authorities to general practitioners as a decision support tool for the diagnosis of depression. However, it is primarily a research instrument, used in psychiatry since the 1960s to assess the effects of antidepressant treatments. This case study therefore examines the reasons for the transfer of this scale from psychiatric research to general practice. I related this transfer to an institutional desire to increase the quality of care and reduce costs, and I then drew on the philosophy of technology to explain how an algorithm can relay social norms intended to supplant the norms of medical practice.

The second case study focuses on a decision tree promoted by a medical society, intended for the diagnostic management of acute pulmonary embolism. The aim of this case study was to determine how algorithms are used by physicians. To do this, I relied on an empirical investigation in emergency medicine, which allowed me to examine how physicians adapt decision support algorithms to their practice contexts. In the process, they reinterpret the social norms these devices convey in light of their own practice norms, and they can do so because the use of these devices calls on their knowledge and skills.

Thus, the hypothesis I drew from these two case studies is that the future success of artificial-intelligence-based devices in medicine will likely depend on the ability of physicians to confer usefulness and relevance on them, by integrating them into their knowledge, skills and value systems. My results suggest that, in order to be successfully deployed in medicine, AI-based devices should be subjected to a complex contextualization effort.

This hypothesis is emerging in the literature on artificial intelligence in medicine. As a matter of fact, artificial intelligence in medicine often appears to be motivated by the desire to replace physicians in certain tasks, rather than simply to assist them. Many comparative studies of physician versus artificial intelligence performance, often if not always, point to better performance by machines. As a result, physicians rarely have a voice in the design of these devices. However, past experience with technologies designed for physicians, such as the expert systems of the 1960s to 1980s, shows that they will not adopt technologies that do not take their actual practices into account. And there are currently very few studies on the contextualization work necessary for the deployment and use of artificial intelligence in medicine.

My postdoctoral research starts from the hypothesis that, in order to be successfully deployed in medicine, artificial-intelligence-based devices need to be contextualized. It intends to fill this gap in the literature by studying various devices developed for the management of breast cancer. Today I will present my current research around an artificial intelligence project led by a pathologist and an engineer in Toulouse. This project forms part of a professional strategy to enhance the pathologist's expertise. The fact is that this expertise has been undermined by genomics-based approaches; however, the latter have limitations, which I will study through the example of Oncotype DX, a gene expression assay for breast cancer. I have been working on this for about a month, so the research I will present is still a work in progress.

Pathologists have been central to the management of cancer for more than a century. Morphological assessment of tissue samples, both macroscopic and microscopic, and the identification of particular tumor markers through immunohistochemistry are at the core of cancer diagnosis, prognosis and treatment: surgical specimens are used for macroscopic morphological assessment, hematoxylin and eosin (H&E) staining is used for microscopic assessment, and immunohistochemistry is used to highlight certain markers relevant for prognosis or for predicting treatment effects.
However, this central role is being challenged by the rise of precision medicine, which is based on tumor genetics. While genomic approaches do not threaten to replace the pathologist's expertise, it is clear that the knowledge they provide is already guiding some clinical decisions. This includes therapeutic decisions in some hormone-dependent breast cancers that do not overexpress human epidermal growth factor receptor 2 (HER2) and have no or limited lymph node involvement. These cases are usually treated with hormone therapy alone; however, some patients with a significant risk of cancer recurrence are given additional adjuvant chemotherapy. The risk of recurrence is related to the presence of undetected micrometastatic deposits after primary treatment, usually surgery, and adjuvant chemotherapy is meant to eradicate these deposits. The risk of recurrence is assessed on the basis of clinical and pathology criteria. However, at times the risk criteria are discordant, or do not allow for an assessment of recurrence risk in some complex cases. In such cases, on the basis of guidelines, the oncologist attending the multidisciplinary consultation meeting will request a molecular assay, and this assay will assess the patient's risk and determine whether adjuvant chemotherapy should be given. Thus, with the emergence of molecular assays, the pathologist has seen his techniques, and therefore his whole role, challenged by tests that do not belong to his epistemic equipment.

On the basis of a case study, I hypothesize that artificial intelligence could enable the pathologist to improve his analysis skills, and that this would enable him to regain a central role in situations where his traditional techniques have proved insufficient. In the literature, two arguments are commonly put forward to support the use of artificial intelligence in pathology: it is either to relieve the pathologist of repetitive tasks with low added value, or to provide him with decision support. In this presentation, I will focus on a case study illustrating the second perspective. This is a project conducted at the Oncopole of Toulouse, in France. The project's name is Apriorix. It aims at creating a dataset of annotated images of breast cancer. This dataset will later be used to train artificial neural networks to identify tumor components in whole slide images. Here you have an example of a whole slide image, which is basically a scanned slide, a pretty big image, and here you have what they intend to do on whole slide images with artificial intelligence.

At first glance, this project simply aims at providing the pathologist with a decision support tool. This tool would assist him in his histological examination, but would also let him do what he did before, in the way he did it. For example, pathologists assess tumor aggressiveness by counting mitotic cells. Since they do not have time to count them across the entire glass slide, they do this at the highest magnification, in an area of about 3 square millimeters which seems to be the one with the highest cell-division activity. According to the pathologist who initiated the project, if an AI-based device were to inventory all mitotic cells on the slide, it would allow pathologists to see things that they are not able to detect with their current means: "If the machine is able to tell us that there are areas of high mitotic activity and others that are weaker, and that all of this may have a meaning we have never been able to assess, it is towards such things that we want to go."
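The current practice is essentially a sliding-window maximum over the slide: count mitoses only in the densest few square millimeters. A minimal sketch, assuming the mitotic cells have already been detected and are given as (x, y) coordinates in millimeters (the detection step itself is what the project's neural networks are for):

```python
import numpy as np

def hotspot_mitotic_count(coords_mm, slide_w_mm, slide_h_mm, window_mm=1.73):
    """Return the highest number of mitotic cells found in any square
    window of ~3 mm^2 (side ~1.73 mm), approximating the pathologist's
    practice of counting only in the densest region of the slide.

    coords_mm: array of shape (n, 2) with (x, y) positions of detected
    mitotic cells, in millimeters. The detection itself is assumed done.
    """
    # Bin detections on a coarse grid, then take the best window of bins.
    bin_mm = window_mm / 4
    nx = int(np.ceil(slide_w_mm / bin_mm))
    ny = int(np.ceil(slide_h_mm / bin_mm))
    grid = np.zeros((nx, ny), dtype=int)
    for x, y in coords_mm:
        grid[int(x // bin_mm), int(y // bin_mm)] += 1
    k = 4  # window = k x k bins
    best = 0
    for i in range(nx - k + 1):
        for j in range(ny - k + 1):
            best = max(best, int(grid[i:i + k, j:j + k].sum()))
    return best

rng = np.random.default_rng(0)
cells = rng.uniform(0, [40.0, 25.0], size=(200, 2))  # toy 40 x 25 mm slide
print(hotspot_mitotic_count(cells, 40.0, 25.0))
```

What the pathologist describes is exactly what an exhaustive inventory would change: instead of one hotspot count, the whole spatial distribution of mitotic activity becomes available.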
Through the study of this project, I intend to show that this dataset has an aim that goes beyond simple decision support. It is nothing less than enhancing the pathologist's expertise when faced with decision-making tools, such as gene expression assays, that lie outside his epistemic equipment. Indeed, several papers have shown that the pathologist's expertise has been undermined by genomics-based approaches. Therefore, I will examine and compare the purpose and implementation of a widely used gene expression assay, Oncotype DX, with those of the creation process of this dataset.

Oncotype DX is a test that was developed by the American biotech firm Genomic Health in the early 2000s. It measures the expression of 21 genes in tumor tissue, in order to estimate the risk of breast cancer recurrence in patients with early-stage, hormone-dependent breast cancer that does not overexpress human epidermal growth factor receptor 2 and has no or limited lymph node involvement, which is in fact the most common breast cancer subtype. Of the 21 genes tested, 16 are cancer-related genes, mostly proliferation-related genes, and 5 are reference genes used to normalize the expression of the others. Other tests exist on the market with the same indication; however, Oncotype DX is currently the test with the highest level of evidence according to the European Society for Medical Oncology, and it is therefore the test most widely used by oncologists.

Oncotype DX is performed in a central laboratory in California, and it works as follows. When an oncologist prescribes it after a multidisciplinary consultation meeting, the pathology lab prepares what is called a formalin-fixed, paraffin-embedded tumor sample, which they in fact call a paraffin block, and sends it to the central lab in California. There, tumor gene expression is measured using RT-PCR, with a method developed by Genomic Health for paraffin blocks, and a proprietary algorithm is then used to calculate a recurrence score, on a scale of 0 to 100, based on the gene expression measurements. The oncologist who prescribed the test then receives back the analyzed paraffin block, the patient's recurrence score, and two additional pieces of information which are based on the retrospective and prospective validation studies of Oncotype DX: the patient's nine-year recurrence risk after five years of hormone therapy, and the average benefit of adjuvant chemotherapy in the group to which she belongs. Indeed, prospective studies, notably the TAILORx trial, established the indications for adjuvant chemotherapy in the different risk groups. Adjuvant chemotherapy is indicated for high-risk patients, with a recurrence score of 26 or above; it is not indicated for low-risk patients, with a recurrence score of 10 or under, nor for intermediate-risk patients over the age of 50, with a recurrence score between 11 and 25. Adjuvant chemotherapy has a moderate benefit in intermediate-risk patients aged 50 or less, so in this group treatment options are discussed between the oncologist and the patient.
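Genomic Health's exact coefficients are proprietary, but the overall shape of such a score (reference-gene normalization, weighted gene-group combination, rescaling to 0 to 100, then thresholding as in TAILORx) can be sketched. Every gene name, grouping, weight and rescaling constant below is invented for illustration; only the thresholds come from the talk.

```python
import numpy as np

# Illustrative sketch of a recurrence-score-style computation.
# Gene groupings and weights are invented; the real Oncotype DX
# coefficients are proprietary to Genomic Health.

REFERENCE_GENES = ["ref1", "ref2", "ref3", "ref4", "ref5"]
GROUPS = {                      # subsets of the 16 cancer-related genes
    "proliferation": (["ki67", "mybl2", "ccnb1"], 1.0),  # weights hypothetical
    "her2_group":    (["grb7", "her2"], 0.5),
    "estrogen":      (["er", "pgr"], -0.3),              # protective group
}

def recurrence_score(expr: dict[str, float]) -> tuple[float, str]:
    """expr: RT-PCR expression measurement per gene. Each gene is first
    normalized against the mean of the reference genes, then group
    averages are combined linearly and rescaled to a 0-100 score."""
    ref = np.mean([expr[g] for g in REFERENCE_GENES])
    unscaled = sum(
        w * np.mean([expr[g] - ref for g in genes])
        for genes, w in GROUPS.values()
    )
    score = float(np.clip(20 * unscaled + 10, 0, 100))  # rescaling: hypothetical
    if score >= 26:
        risk = "high risk: chemotherapy indicated"
    elif score > 10:
        risk = "intermediate risk: discuss (benefit mainly if aged 50 or less)"
    else:
        risk = "low risk: chemotherapy not indicated"
    return score, risk
```

The structural point matters for what follows: unlike the clinical decision rule shown earlier, here the weights and the rescaling are hidden from the physicians who receive the score.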
It appears that, in some breast cancer cases, the introduction of Oncotype DX has minimized pathologists' involvement in medical decision-making. This is consistent with the analysis made by Pascale Bourret, Peter Keating and Alberto Cambrosio in a paper published in 2011. In this paper, the authors argue that Oncotype DX decentralizes clinical reasoning and judgment, which means that these processes no longer take place within the hospital, between different health professionals, but in the central laboratory that performs the test. In fact, Oncotype DX provides physicians with prognostic information, the risk of recurrence, and predictive information, the benefit of adjuvant chemotherapy, from a process that is made opaque by Genomic Health: the test cannot be replicated, and it cannot be interpreted by physicians in the way conventional paraclinical tests can be. As a result, medical decisions are no longer based on medical knowledge and skills but on the test results alone, and the pathologists I interviewed confirmed this: "We assess proliferation through the microscope in two different ways, by counting mitotic cells and by looking at a marker called Ki-67. Ki-67 is an immunohistochemistry marker which is not very reliable; this assessment has many limitations. That is what molecular assays do well: the majority of genes in molecular assays are proliferation-related, so it is a kind of super Ki-67, and sometimes, yes, this is what tips the balance."

This quote highlights an equally crucial point: Oncotype DX guides medical decisions because it has more weight than pathology criteria. Indeed, the emergence of the test was followed by studies comparing its prognostic and predictive performance to that of traditional pathology and clinical criteria, with results often unfavorable to the latter. Thus, Oncotype DX does not only minimize the role of the pathologist in clinical decision-making; it also questions the robustness of the criteria on which his expertise is based.

However, Oncotype DX is criticized by physicians. These critiques relate to its prognostic performance when compared to other molecular assays, to its non-suitability for some medical settings, and to its inadequate financing system, at least in France.

First, studies comparing different molecular assays on the same samples have shown that Oncotype DX does not stratify patients in the same way as other tests with similar indications, for example a test called Prosigna, or PAM50. According to the pathologist, this means that they do not run two tests: "If Oncotype DX told me low risk, and if I did Prosigna PAM50 and got intermediate risk, that would be embarrassing, and that is what would probably happen quite often. So we don't do it." We don't do it, meaning:
we don't conduct two tests on the same patient. Some studies have even shown that other tests sometimes perform better than Oncotype DX. There are biological and technical explanations for these discordant classifications and performances. Firstly, the patient populations used to develop the tests are not identical: Oncotype DX, for example, was developed on a larger and lower-risk population than Prosigna. Secondly, the tests do not target the same genes: Oncotype DX targets 21 genes, while Prosigna PAM50 targets 50 genes plus some traditional pathology criteria. Thirdly, different RT-PCR methods are used to measure gene expression. Still, from a clinical point of view, these inconsistencies are problematic: medical decisions may differ for the same patient depending on the test performed, which is a problem.

Secondly, although Oncotype DX is promoted as a therapeutic de-escalation tool, as the pathologist points out, it is likely to cause therapeutic escalation in some settings: "In our setting, which is not a chemotherapy-oriented setting, the molecular assay will more often lead to escalation than to de-escalation. The coverage of these assays is based on attitudes which are not necessarily ours." Oncotype DX was designed specifically to address the issue of over-treatment of early-stage breast cancer in the United States. Indeed, in the early 2000s, adjuvant chemotherapy was widely prescribed there, following National Institutes of Health guidelines. These guidelines were based on the observation that the benefit of adjuvant chemotherapy remains stable across different patient profiles. However, studies quickly contradicted this observation by showing that most early-stage breast cancer patients do not really benefit from this treatment. Hence, Oncotype DX was designed to stratify patients according to their risk of recurrence, in order to avoid adjuvant chemotherapy in patients who have a favorable prognosis. In France, however, patients are much less likely to be offered this treatment than they are in the United States. As a result, patients classified as high risk by Oncotype DX, and therefore eligible for adjuvant chemotherapy, would not necessarily have been offered this treatment on the basis of the criteria used by physicians in France. This is due mainly to differences in guidelines and in local clinical cultures.

Thirdly, in France, Oncotype DX testing costs are currently covered by a system which is much criticized by physicians: the Référentiel des actes innovants hors nomenclature (RIHN). It is a nomenclature associated with a budget, created for assessing medical innovations under real-life conditions, and it allows coverage by the French public health insurance system to be accelerated. The main difficulty with this system is that its budget is fixed, which means that it does not adapt to changes in the demand for innovative tests. The addition of new tests to the nomenclature should therefore be followed by the removal of older ones. In practice, however, this is not what happens: tests are not removed from the nomenclature, resulting in a decrease in funding for each test. In this situation, there is a risk of inequities in access to medical innovation.
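The funding problem is simple arithmetic: a fixed envelope divided across an ever-growing list of tests. A toy illustration of the dilution effect just described (all numbers invented):

```python
# Toy illustration of the RIHN's fixed-budget dilution (numbers invented).
BUDGET = 1_000_000.0  # fixed annual envelope, in euros

def funding_per_act(n_tests_on_list: int, demand_per_test: int) -> float:
    """With a fixed budget, each act's reimbursement shrinks as tests
    accumulate on the list without older ones being removed."""
    total_acts = n_tests_on_list * demand_per_test
    return BUDGET / total_acts

for n in (10, 20, 40):
    print(n, "tests ->", round(funding_per_act(n, 500), 2), "euros per act")
# 10 tests -> 200.0 euros per act
# 20 tests -> 100.0 euros per act
# 40 tests -> 50.0 euros per act
```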
So, in response to molecular assays challenging pathologists' expertise, and to the limitations of these assays, the project I am studying aims at developing local tools that meet pathologists' professional needs and epistemic requirements.

The project is led by a pathologist and an engineer. As I previously stated, it seeks to create a dataset of annotated images of breast cancer tumors, and this dataset will later be used to train artificial neural networks to detect tumor components in whole slide images. The stated goal of the project is to enable the pathologist to explore more effectively the information present in his main study material, namely histology slides. Indeed, the information extracted from slides during the histology examination is limited to about fifteen criteria, which make it possible to characterize the tumor within a limited time, about a quarter of an hour for complex cases, and to choose the right treatment. However, the information contained in the slide is much richer than what is revealed by these fifteen criteria, which essentially relate to tumor cells. According to the pathologist: "On a glass slide with a breast tumor on it, we often have more than a million cells before our eyes, and a tumor is complex. It is not just a bunch of tumor cells; there is an interaction between the tumor cells and the surrounding environment: immunity, structures, vessels, and so on." Here you have, in fact, the criteria used by pathologists during the histology examination.

Image segmentation algorithms could make all the information contained in slides accessible to the pathologist. Image segmentation is a method that appeared in computer vision in the 1970s, and its accuracy has recently been increased by deep learning. It is used to find objects in digital images by labeling each pixel, so that pixels with the same label share certain characteristics. An input image is basically transformed into an output image, usually a segmentation map that contains an outline of the objects of interest. Here you have an example of the segmentation model used. In the project, image segmentation was preferred to other computer vision methods such as object detection, because segmentation yields more accurate information: it outlines objects in the image, whereas object detection algorithms only draw boxes, called bounding boxes, around these objects.
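As a sketch of what "labeling each pixel" means in practice, here is a minimal encoder-decoder segmentation network in PyTorch. The architecture is a toy stand-in, assuming small patches rather than full gigapixel slides; the project's actual model is only described later in the talk as an autoencoder-style segmenter.

```python
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    """Toy encoder-decoder: image in, per-pixel class map out.
    Stand-in for the project's (unspecified) segmentation model."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # downsample
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),           # one logit per pixel per class
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# A 256x256 RGB patch in, a 2-class segmentation map out.
patch = torch.randn(1, 3, 256, 256)
logits = TinySegmenter()(patch)              # shape: (1, 2, 256, 256)
mask = logits.argmax(dim=1)                  # per-pixel label, 0 or 1
print(mask.shape)                            # torch.Size([1, 256, 256])
```

The output is a map, not a box: this is the difference with object detection that motivated the project's choice.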
However, when applied to histologic images, image segmentation raises some challenges. For instance, for the pathologist, the mathematical operations applied to images by segmentation algorithms are likely to make the extracted information biologically meaningless: "The extracted information must be intelligible to a biologist, to a pathologist and to an oncologist. Extracting information the way neural networks do, that is, by doing lots of convolutions and stuff, and fetching patterns that we do not understand at the end, that was not the intention." The point is that, in order to identify objects in images, algorithms do something that is quite different from what a pathologist does to accomplish the same task. In particular, the operations performed do not involve any biological or histological knowledge. Thus, in new situations, the pathologist will not be able to know whether the objects segmented by the algorithm have a real biological meaning; in fact, they could simply stem from the presence of artifacts in the training set, and histopathology images are prone to artifacts, mainly deformations introduced during the preparation of the slides and during their digitization.

To teach the algorithm to identify objects that have a biological meaning, it would be necessary to manually annotate the training images down to cell level, that is, to label the million objects present on each slide. This task is tedious, and it is complicated by the fact that the analysis of scanned tissues does not benefit from the refinements offered by the microscope: "I think that all pathologists who have ever been involved in image analysis agree that it is a terribly complicated and time-consuming task. On top of that, there are many occasions where we are not really sure about what we are seeing." As a result, the available datasets contain either a large number of roughly annotated whole slide images (this one, for example, has annotations of the tumor region: it just tells you that the tumor is here, and this one is a bit more precise), or a small number of patches, that is, parts of whole slide images, with cell-level annotations. In either case, these datasets are insufficient when it comes to teaching a model to segment biological objects.

To create a large dataset of annotated whole slide images, the solution devised by the pathologists who initiated the project was to delegate the annotation task to immunohistochemistry. Immunohistochemistry is a technique that has been used in clinical practice since approximately the 1980s, and it can be done on the same slide as H&E staining, which is the standard staining. It allows for the detection, at cell level, of proteins specific to a cell type, structure or function, by means of an antigen-antibody reaction, the complex formed appearing as brown markings on a blue counterstain. Immunohistochemistry is used for diagnostic, prognostic and predictive purposes, for treatment selection and so on; for example, it enables the pathologist to determine the level of HER2 expression, and thus to predict the response to anti-HER2 targeted therapy. Here, however, immunohistochemistry is used to create binary segmentation masks that will act as labels for whole slide images.

The strategy for obtaining these masks is as follows. Nine histology slides are cut from a single paraffin block and scanned, and immunohistochemistry is used to mark a specific biological element on each of the histological sections: nine tumor components in all, which is the number of components used by pathologists during the histology examination. The immunohistochemistry slides are also scanned, and the images obtained are then processed to extract binary segmentation masks. These masks are aligned with the histological images and cover the biological objects that the model will have to learn to segment. After that, the H&E images and their masks are fed to a neural network.
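The mask-extraction step can be sketched with standard tools: stain separation isolates the brown DAB signal of the antigen-antibody complex from the blue counterstain, and thresholding turns it into a binary mask. A minimal sketch using scikit-image; the project's actual processing pipeline is not detailed in the talk, so this is one plausible way to do it.

```python
import numpy as np
from skimage.color import rgb2hed
from skimage.filters import threshold_otsu

def ihc_to_binary_mask(rgb_image: np.ndarray) -> np.ndarray:
    """Turn an IHC image (brown DAB staining on a blue counterstain)
    into a binary segmentation mask usable as a pixel-level label.

    rgb_image: float array of shape (h, w, 3), values in [0, 1].
    """
    # Color deconvolution: separate Hematoxylin / Eosin / DAB channels.
    hed = rgb2hed(rgb_image)
    dab = hed[:, :, 2]                       # DAB channel = brown markings
    # Otsu's threshold splits stained from unstained pixels.
    mask = dab > threshold_otsu(dab)
    return mask.astype(np.uint8)             # 1 where the marker is present

# Usage sketch: mask = ihc_to_binary_mask(scanned_ihc_patch)
# The mask, registered to the matching H&E image, becomes the label
# the segmentation network is trained against.
```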
However, the first results obtained with the dataset suggest that what the pathologist does when he locates biological objects on a slide is not entirely reproducible by a machine that identifies combinations of pixels. In particular, the engineer had a problem with a model that segments far too many mitotic cells, which are in fact very infrequent on a histology slide: "In theory, my masks do this, so my model must locate these mitotic cells, and it does. However, it also finds others, which it does not identify well in terms of shape or texture." There were in fact nineteen mitotic cells in the image; that is the problem.

A solution considered to reduce the number of false positives is to partially modify the model architecture and provide it with the criteria that define mitotic cells. According to the engineer: "In fact, we have an artificial neural network that does segmentation. It's an autoencoder: we encode the image and decode it to make a segmentation map. We wonder if we could add a classification branch at the bottom of the encoder that will simply ask whether there are mitotic cells in this patch." Thus, from the encoder, the model will have to both train itself to segment mitotic cells and learn to say whether there are mitotic cells in the patch.
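A sketch of that proposal: the same encoder feeds both the decoder (segmentation) and a small classification head that answers "does this patch contain a mitotic cell?", with the two losses trained jointly. Layer sizes and the loss weighting are illustrative assumptions, not the project's actual values.

```python
import torch
import torch.nn as nn

class SegmenterWithMitosisHead(nn.Module):
    """Autoencoder-style segmenter with an auxiliary classification
    branch at the encoder bottleneck, as suggested by the engineer:
    the branch asks whether the patch contains any mitotic cell."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),                    # per-pixel mitosis logit
        )
        self.classifier = nn.Sequential(            # patch-level branch
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),                       # "any mitosis here?" logit
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z)

model = SegmenterWithMitosisHead()
seg_logits, patch_logit = model(torch.randn(8, 3, 128, 128))
seg_loss = nn.functional.binary_cross_entropy_with_logits(
    seg_logits, torch.zeros(8, 1, 128, 128))
cls_loss = nn.functional.binary_cross_entropy_with_logits(
    patch_logit, torch.zeros(8, 1))
loss = seg_loss + 0.5 * cls_loss    # joint training; 0.5 is an arbitrary weight
```

The design intuition is that the patch-level task, for which mitoses are rare, pushes the shared encoder to be far more conservative about what counts as a mitotic cell.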
The difficulty is then to retrieve the pathologist's knowledge of mitotic cells. Some false positives seem to include part of a nucleus; however, for the pathologist, from a biological point of view, during mitosis there is no longer a nuclear membrane, and so there is no longer a nucleus, so this criterion should be added. Other false positives have a fragmented shape, rather than the rounded shape of a mitotic cell; these other morphological criteria should also be integrated. In short, the engineer's task here is to gather, and to translate into technical specifications, the knowledge and expertise on which the pathologist relies to identify mitotic cells.

So the project I studied is a good example of how artificial-intelligence-based devices, even deep-learning-based ones, can be developed, and should be developed, in a contextualized manner, that is, in close interaction between the pathologist and the engineer. In this project, the tools are developed to fit the pathologist's existing practices and to meet his professional needs as well as his epistemic requirements. In the literature, a few papers point out that taking the specific use context into account from the development stage onward will be critical for the future success of artificial intelligence, and my research work, which forms part of a research project, intends to fill this gap in the existing literature.

That said, the research I presented is a work in progress, as I previously stated, and it requires further clarification; at this point I have more questions than conclusions. In particular, I have two questions related to the highly contextualized nature of the development process of this dataset and of these neural networks.

The first question is: how does one integrate domain knowledge into an artificial neural network? In my last example, the pathologist suggests to the engineer that he integrate into the algorithm the criteria he uses to identify mitotic cells; this would improve the segmentation. While this may be a simple task in the case of symbolic systems such as expert systems, it could be much more complicated with an artificial neural network. Expert systems are programs developed from the 1960s and 1970s onward that mimic an expert's reasoning process using knowledge-based rules, and integrating domain knowledge into an expert system is, I think, simply a matter of adding the appropriate decision rules. In contrast, neural networks are very complex mathematical functions with many parameters, and from what I can see, their development process looks a bit like that. In this sense, how does one get a neural network to implement a process that makes sense for a pathologist, without using a set of rules?
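For contrast, here is a minimal sketch of what "adding the appropriate rules" looks like in a symbolic system: the pathologist's criteria become explicit, inspectable conditions. The specific features are invented for illustration; only the nuclear-membrane and shape criteria come from the talk.

```python
# Sketch of the expert-system route: domain knowledge as explicit rules.
# Features are invented stand-ins; real criteria would come from the
# pathologist (e.g., "no nuclear membrane during mitosis").

def looks_like_mitosis(candidate: dict) -> bool:
    rules = [
        not candidate["has_nuclear_membrane"],   # membrane dissolves in mitosis
        candidate["shape"] == "rounded",         # not fragmented
        candidate["chromatin_condensed"],
    ]
    return all(rules)

# Adding a new piece of domain knowledge = adding one more rule.
# In a neural network there is no such slot: the knowledge has to be
# injected indirectly, through the data, the architecture (e.g., the
# classification branch sketched above), or the loss function.

print(looks_like_mitosis({"has_nuclear_membrane": False,
                          "shape": "rounded",
                          "chromatin_condensed": True}))   # True
```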
And the second question I have is: what about the external validity of these tools? These tools will indeed be highly dependent on their development context, and it is not obvious that they can be transposed elsewhere, if only because the methods used to prepare histology slides, and the scanners used to digitize them, are not exactly the same everywhere. However, the pathologist and the engineer do not seem concerned about this.

Question: I'm not sure why you would integrate the specific knowledge into the algorithm itself. If the problem is that the algorithm can't replicate the specific knowledge, why do you want to change the algorithm and not the dataset? We know that the algorithm reproduces its dataset; if the person's specific knowledge can be put into the dataset, the algorithm can learn it.

Answer: Yes, it's possible. When I went to see them at the end of last month they had started working on the problem, and they had a lot of leads to explore to improve their dataset; in particular, there may be some problems with the immunohistochemistry markers, which perhaps highlight nuclear elements that are breaking down but remain visible. So there may be some work to be done on the dataset. But I think there are two levels.

Question: Why doesn't the pathologist create the dataset himself?

Answer: Even if he wanted to, he doesn't have the time. Even with two, three or four pathologists building a dataset by marking slides by hand, it doesn't work: a pathologist with thirty years' experience could not annotate fifty whole images; it is a dreadful task. That is why they looked for a more subtle technique. And even then they would make mistakes, because on virtual images they can't zoom and move through the thickness of the tissue like with their microscope, so sometimes they are not sure about certain objects. That is really why they tried to automate the creation of the dataset, and I think they found something rather pertinent.

Question: Is there a real reason to automate that, or is it just a social problem, because pathologists don't do that kind of work? Couldn't you imagine a game, like Foldit, to crowdsource the annotation?

Answer: Yes, but you would have to be a pathologist to play it.

Question: I'm slightly unsure about the pathologist saying "we don't want a neural network step that we don't understand", because in the end that is what they do. He didn't want to end up with something that was not intelligible for him, for pathologists and biologists, but they end up with a neural network; it's a bit confusing.

Answer: Yes, indeed, the machine learning models used for segmentation are opaque; the artificial neural network is opaque for him. But on the other hand, I have the impression that, since they are absolutely certain that their dataset has a biological reality, that the masks are real, well-segmented, well-annotated biological objects, and that they train the algorithm on real biological objects, this lifts part of the opacity for them. It's like a kind of verification window, where the pathologist can say: the model was trained on images annotated by a technique that I know. And then there is also the fact that the engineer was recruited onto the project and they work together; the pathologist and the engineer share an office of two square meters, practically one on top of the other, and they are in exchange all the time. The engineer manages to translate what he does in machine learning into the pathologist's language. For example, for the pathologist to verify the results of the algorithm (the raw outputs are pixel values that the pathologist would not understand), they put in place a visual interface, using tools that already existed and that they modified a little, so that the pathologist can visualize the results and understand why the algorithm fails or not. And he comes to discuss it, and I feel that this opens the black box a little. At the beginning he was afraid of ending up with something unintelligible, but finally there is a dialogue taking place, and it always rests on things that he knows, on a technique that is
very proven and that everyone still uses.

Question: So you start with an a priori, and the project is called Apriorix; I don't know why it's called that.

Answer: I don't know either; maybe it helped to finance the project. I haven't asked, but I could.

Question: I have a very basic question, which goes a bit in the other direction. What is the general feeling among the medical community, and the pathology community in France in general, regarding AI in medicine? Is it something that they welcome themselves, or something that is being offered to them and pushed from the AI community? And in that regard, what struck me the most is the level of centralization you described, and the level of opacity: having such a proprietary piece of software as the central piece of all these systems makes things very dependent on those companies. So I wonder if there is any kind of resistance, not only to the more systemic problems of AI, but to this privatization of AI.

Answer: To answer the first part of the question: my study is very biased, the postdoc in which I work is very biased, because we essentially go to see doctors who are really interested in AI, and we work in the centres de lutte contre le cancer, which are at the cutting edge of research on cancer. It is normal that AI arrives there at one time or another, because it is really integrated into cancer research, which has pushed the development of AI in oncology. So we have seen a lot of people who are very interested, and in fact the majority of people are favorable to the development of AI, for various reasons: radiologists, for example, to gain time or to improve their performance in mammography screening, because there are a lot of false negatives. In public hospitals, I think AI is something else again, something they don't even think about. Apart from that, I have read a lot of surveys about how pathologists envisage AI in their speciality, and it's always the same: to improve efficiency, and to allow doctors to focus on more complex tasks by letting the machine do simple repetitive tasks, like writing reports. I don't know if that answers the first question. For the second part, I didn't completely understand; did you have a specific question?

Question: I was just not sure about the level at which private companies and tech startups are involved in AI in medicine, so I wanted to hear if you had any thoughts on how this is perceived within the medical community. It is not a community that is used to practices that are so privately based, so I don't know if that is perceived as a problem or if it's part of the game.

Answer: In the centres de lutte contre le cancer, they develop artificial intelligence with startups, and I did not feel any animosity against startups; they work, for example, with a big one called Owkin, in Paris. These centres are already soaked in money from the private sector, so it is not shocking for them to send their samples to California. About what is sent to California, I had asked about personal data, because
nothing prevents the company from doing more than the test, from doing a sequencing from a paraffin block if it suits them, and doing research with it; and maybe in the United States they don't have the same legislation on personal data, I don't know. So when samples travel like that, I don't know; it may be a serious issue.

Question: I have a question regarding the patients, in the case of Oncotype. You have discussed a little the reactions of the doctors regarding the opacity of these tests; have you collected testimonies of the patients? Because if I arrived at my doctor and he told me, "in this test, you have a 32", and I asked, "so what does 32 mean?", the answer would be, "a 32 is a 32, we do not know exactly". I understand opacity, but that is really opaque.

Answer: Yes, that's true. Unfortunately, I did not see the patients' side; I focused precisely on the reactions within the disciplines in these cases. It would be interesting to see how the doctors present the result: do they interpret it, or do they just rely on the number, "she has a 32, so no chemotherapy"? But unfortunately, we have not yet seen the patients. And anyway, it is difficult for all doctors: every doctor will tell you that they do not interpret these scores case by case. It is just a different level of complexity.

Question: Do you think this problem of interpretability is specific to this case, or is it similar to what happens in medicine in general? Because in the end, a doctor is constantly given results he cannot fully interpret: an MRI, for example. Or take paracetamol: its pharmacodynamic properties are poorly understood, it makes the patient feel better and the doctors don't know exactly why, but it is used because it is validated.

Answer: That's what I said; the difference is that in the case of Oncotype, the algorithm is opaque voluntarily, because of industrial property, I think. But you are right that even if it were not protected, even if it were an open-access thing, you would not necessarily be able to interpret it. What I understand is that it is like an addition of risk factors: there is a set of genes, a recipe that combines their expression levels, and the contribution of each gene to the result remains hidden. Still, giving a score, "you have a 32", makes less sense to a doctor than a marker does: with a marker, the doctor can say, if your value is between this and that, then it probably means this. With a marker, it is fairly simple to explain to the patient; it is not all that complicated.
With immunohistochemistry, for example, you have information you can relate to something known: we know what an antibody is, and you can explain the relation, even if only briefly. With the score of 32, you cannot explain how the 32 is built; you can only say, after the fact, what it is supposed to mean. I think that is the difference.

Question: Also, if I understood correctly, the Oncotype software gives them the number, the 32 or whatever it is, based on the RT-PCR data, right? On gene expression?

Answer: Yes, it's RT-PCR: they measure gene expression levels with RT-PCR, and then they use the proprietary algorithm, which weights different groups of genes, with different weights for each group.

Question: Maybe I misunderstood, but since all of this, the RT-PCR and the expression analysis for the Oncotype score, happens in the central lab in California: do the doctors and pathologists get the intermediate steps? Do they get the results of the RT-PCR and the gene expression, or just the final product?

Answer: No, they don't get anything in between, nothing of what happens before. In fact, the problem, too, is that pathologists are not at all involved in the implementation of this test; it is essentially the oncologists who request it and interpret it. So the pathologists get nothing back, except the paraffin block, which they prepared beforehand. The oncologists, for their part, receive the result of the test: the recurrence score, then a recurrence risk, and a benefit of chemotherapy, estimated for the group to which the patient belongs.

Question: Why a scale of 0 to 100, if patients are considered at high risk starting from 26?

Answer: I don't know; it is something to dig into. I was planning to read the literature on how risks are translated and explained.

Question: I'm going back to the same thing: how difficult would it be to make open software based on this? RT-PCR and gene expression analysis are relatively commonplace, so it's just the last step, the proprietary software, that is the difficult part. How many years did it take to establish this sort of software, in the history of the company? And how difficult would it be to have open-source software, maybe not from the company itself? It's a bit naive, of course.

Answer: I think it's a bit like asking whether we could nationalize the pharmaceutical industry. As for the algorithm: it was already present in the first publication on the test, the proof of concept, where they announced that they had invented a revolutionary test.
And after that, I don't know, because the people who tried to replicate the test from that paper, where the methodology and the algorithm were presented, never succeeded. So, how long did it take? I don't know how to answer that.

Question: And again on the same theme: why subcontract by default? If your multidisciplinary board includes a competent geneticist, and you order a complete genome sequencing of the tumor, it's done; you don't have to order an Oncotype DX, no? Geneticists should be reading these genes anyway, if you are aware of the literature and the risks. That is how hard it is to get from the gene expression to the 32.

Answer: I guess it's partly because we don't have the geneticists. There are platforms for precision sequencing, but...

Question: New targeted treatments come out every year for specific genetic mutations. It's not exactly your topic, but under which conditions is this kind of test done, instead of doing something else?

Answer: They use it when the people present at the multidisciplinary consultation meeting in charge of the cancer, who are obliged to come to an agreement because the meeting is what decides the protocol, do not agree. For example, one will say, no, she should have hormone therapy alone because the prognosis is favorable, and the surgeon will answer, no. It is for situations where everyone does not agree, or which are really complex. I would have liked to know more, but for now I have not had time to go further; I will have to see other anapaths, other pathologists, and other oncologists who were not available at the time.

Question: That's the paradox: you have a procedure that is supposed to be multidisciplinary, and it ends up resting on a score that no one on the board understands, which is not multidisciplinary at all.

Answer: Yes. I would like to analyse that; I intend to, but I have not had time.

Question: How do the people who work on the Toulouse project see its future use? Is the goal a formative tool that will give the pathologist a better judgment and a better representation of the situation?

Answer: At this stage, they see their project and their tools as things that will allow pathologists, for example, to discover correlations with things that were not necessarily seen before. So there is a side that is very exploratory, on the knowledge side, and then there is a side that would be, "if the artificial neural network counted all the mitoses in my place, I would not have to do it." There are the two.

Question: It's all a question of judgment, then; people are looking for a consensus about what action to take.
The question is whether the aim is to replace the doctor's decision. Based on this data, can we assume that there is always a need for someone behind it to interpret the data, or is the idea that the data will decide?

Answer: There is always supervision, in the sense that what will decide the treatment of the patient is the discussion she has with the oncologist, and the oncologist will not necessarily follow the test, depending on what the patient prefers. As for their deep learning project, it is not a thing that is intended to work without supervision; technically, for now, that is not possible.

Question: Regarding specifically the Toulouse project, I'm not sure I understand the validation procedure. What is it? How do they determine that they have succeeded in producing a good algorithm? Is it that the two of them, the pathologist and the engineer, look at the images themselves and see whether some patches are missing?

Answer: On the engineer's side, he has metrics. I have not fully understood everything he does yet, so I will have to ask him to explain more, but he has pixel-value metrics that he is able to interpret. The thing is that, for those metrics to mean anything, he has to have good masks behind them. With immunohistochemistry, they get good masks: the markers show the components of the tissue and of the cells, depending on what they want to highlight, and they obtain images that are easy to process, as they say, because it is a brown marker on a blue background, so it is easier for image processing. But there is still a verification after that. As I said earlier, the pathologist does not understand all the metrics used by the engineer, and the engineer, from his metrics alone, is not able to judge by himself whether what his algorithm segments is the right biological object. So they put in place a kind of visual interface: he runs his model, the model segments objects, the segmentations are overlaid on the image, and the pathologist can see these outlined objects and check whether they are right or not, from his point of view, from what he knows. I don't know if that answers your question.
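The verification interface described here is essentially a mask overlay. A minimal sketch of such a viewer, assuming the model's binary mask and the H&E patch are already in memory (the project's actual interface is built on existing tools they adapted):

```python
import numpy as np
import matplotlib.pyplot as plt

def show_overlay(he_patch: np.ndarray, pred_mask: np.ndarray) -> None:
    """Display the model's segmentation on top of the H&E patch, so a
    pathologist can judge the outlined objects visually rather than
    reading pixel-level metrics.

    he_patch: (h, w, 3) RGB image in [0, 1]; pred_mask: (h, w) in {0, 1}.
    """
    plt.figure(figsize=(6, 6))
    plt.imshow(he_patch)
    # Red, semi-transparent layer wherever the model predicts an object.
    overlay = np.zeros((*pred_mask.shape, 4))
    overlay[pred_mask == 1] = [1.0, 0.0, 0.0, 0.4]
    plt.imshow(overlay)
    plt.axis("off")
    plt.title("Model segmentation (red) over H&E patch")
    plt.show()
```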
Question: So it rests on the pathologist's judgment only; there is no clinical output, no clinical validation endpoint, for example in relation to tumor management, no validation against biomarkers, to see whether the pathologist does better with the tool.

Answer: That would be a clinical validation, done prospectively; yes, prospectively. But they are not at that point; they are far from it.

Question: I have a follow-up. Isn't there a tension, on the pathologist's side, between the fact that he says, "often we find things and we are not too sure, we are not certain", and the fact that the validation process ultimately rests on him, on his experience? From the point of view of the uncertainty of his own judgment, how does he deal with putting himself at the heart of the validation process?

Answer: That's a good point; I don't know. For the moment I have nothing on that, because I did not look for it, but it is something I would like to know. It is true that the annotation and the verification are done on virtual images like that, which, in addition, have been processed.

Question: Are these patients being recruited at the moment, or is it a bank?

Answer: They have put together a cohort of 1,000 patients from their own centre who accepted to participate. In fact it is really a little more than 1,000, because they allowed for a margin of error, which has to do with the fact that tumors are very heterogeneous. So there are 1,000 patients, one block for each patient, and nine slides cut from each block. And they tried, of course, to make the cohort representative, that is to say, with cases that were untreated or treated, not serious or more serious.

Question: But that is the same question again, about validation. Are there questions of consensus between the experts involved? It comes back to your interrogation about external validation: in general, when we talk about this kind of test, several people, in different countries and different hospitals, should get the same results. Here it is one person; it is still very little.

Answer: Yes, there is that. But there is also the fact that the criteria they use for their histological examination are almost standardized; there has been a huge amount of standardization work, precisely because pathologists did not always get the same results. So I don't know whether that can play in its favor, whether another expert, in another hospital or in a different country, would produce the same annotations.

Question: So it is not a small problem of external validation. Why did you say at the end that it didn't seem to bother them?

Answer: No, not at all. In fact, from everything they said, the solution would be a prospective validation.
There are these clinical trials where you test an algorithm on new data that does not belong to you, in a different centre or in several other centres; for them, that would be the solution.

Question: But then you have to integrate the expertise of other centres into a kind of collective space. Maybe they have a vision of expertise that is very individualistic? Can they explain why they work like that?

Answer: Perhaps; there are things I still have to dig into. It's the most classic philosopher's move, though: "why are you not worried right now? You scientists, you doctors, you should be worried."

Question: And validated against what? The results in the images? In terms of pathology, in what database, the clinical database? Why not look at the clinical data of the patient, the outcomes, instead of just looking at the images?

Answer: I think they want a prospective validation in other centres; at least they are thinking about it, though I place it in a very distant future, because they are far from having something that holds the road. And it would be in other centres, on data other than what was used to train the model, that is, other patients, other centres. People who do machine learning keep part of the training data to test the performance of the model afterwards and validate it, but that would not be a very strong validation, because it comes from the same source. Validating on other patients, from other centres, is more convincing.
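The weak-versus-strong validation point can be made concrete: a random split leaks patient-level characteristics between train and test, so the honest internal baseline is to split by patient (and, ideally, by centre). A sketch with scikit-learn's GroupKFold, where the groups are patient identifiers; the variable names and sizes are invented for the example.

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Sketch: internal validation that at least never puts patches from the
# same patient in both train and test. External validation (a different
# centre, different scanners and staining) would still be the stronger test.

n_patches = 1000
rng = np.random.default_rng(0)
X = rng.normal(size=(n_patches, 64))               # stand-in patch features
y = rng.integers(0, 2, size=n_patches)             # stand-in labels
patient_id = rng.integers(0, 100, size=n_patches)  # 100 patients

for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups=patient_id):
    assert set(patient_id[train_idx]).isdisjoint(patient_id[test_idx])
    # train on X[train_idx], evaluate on X[test_idx] ...
```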
Question: And what about adding other types of data, looking at the genetics as well as the images, at the patients and the images together? That would already be more consistent.

Answer: The project is financed through the Health Data Hub which, in terms of French health data, is a big platform that aims to share health data, with easy access for researchers, drawing on all the public services imaginable: social security, taxes, and so on. Since the project is financed that way, there must be an idea of that kind at the Toulouse Oncopole, but paradoxically, for now, they do not think about it too much. As for the addition of other data, I am actually not sure that you can train one model on several types of data at the same time; here, they already process images. Maybe it is possible to build an interface between models, but for now they are interested in the images, in what the anapath produces, because the anapath makes very little use of molecular approaches.

Question: So it is a model of the anapath's gaze that you record?

Answer: Yes, that's it.

Question: A model of a gaze; it's a bit weird. And in addition, it is only a model: how do you verify your model in relation to reality, if your reality is one person's judgment? It's personalized medicine... I am thinking of what they call precision medicine, like IBM's Watson...