Okay, so thank you for being here. It's our pleasure to introduce Tina Marison. Tina works at the FDA, where she is Deputy Director of the Division of Applied Mechanics, and she chairs the new FDA working group on the use of simulation for the assessment of biomedical devices. She has a PhD in theoretical and applied mechanics from Cornell University in New York, and she worked as a postdoctoral fellow at Stanford University in cardiovascular biomechanics. Since 2011 she has been working as an advisor on the regulatory aspects of the use of simulation in biomedical devices. So Tina, thank you very much, and the floor is yours.

Thank you so much. I'm really excited to talk to you all. I'm sorry I can't be there, especially for the yummy food and beer tasting. I'm going to turn off the camera and just switch to the slides so there are no distractions. I'm excited to talk with you about our work at the Food and Drug Administration. Let me just make sure you can see the slides in presentation mode. Okay, excellent.

So FDA's mission is to protect and to promote public health. This is our website here, and we have a number of different ways in which we do that. First, let me tell you a little bit about the products that we regulate. FDA regulates human and veterinary medicines, vaccines and biologic products like cell therapies, medical devices, the food supply, cosmetics, and radiation-emitting devices like CT scanners and X-ray machines. And a couple of years ago, we started regulating tobacco products in the United States. This is a quick org chart of the Food and Drug Administration: we have seven product centers. What I want to point out is that in the United States, we have one organization that regulates all of the products under these categories for the US.
This is a very different model than the one in the EU, which I've had some chance to learn about, where you have the European Medicines Agency, which focuses on pharmaceuticals for all of Europe, and then the notified bodies, which work on, for example, medical devices, and are overseen by each of the member states. So you have a very different system for doing regulation in the EU than in the US. We have an entire organization of 8,000 people who work together to evaluate these different products.

In addition to protecting and promoting public health, we're also charged with advancing and helping to speed innovation. Our goal is not to hinder products getting to the market and to patients; our job is to support the development of those products. The way we do that is through something called regulatory science: the science of developing tools, standards and approaches to help FDA staff better assess the safety, efficacy and quality of the products we regulate. Some people call this the "safer, faster, cheaper" motto. In 2011, FDA put together an extensive report on regulatory science, and in that document they identified areas in which the agency needed to focus to be fully prepared for the challenges and opportunities of tomorrow. That report named eight science priority areas, four of which had a specific call for computational modeling and simulation; those are the four listed here. The one I'm most interested in is the idea of stimulating innovation in clinical evaluations and personalized medicine. And I should have said this at the beginning: I'm going to make these slides available shortly after the talk so that you can download them for your own use and get access to all of the hyperlinks that I'll share with you.
In that document, they identify different methods and approaches for ways in which computational modeling, or in silico methods, can be used. One of those, for example, is computer models of cells, organs and systems to better predict product safety and effectiveness: virtual physiological patients for testing medical products. And then there are clinical trial simulations: being able to simulate and understand therapeutic effects, or how patient characteristics might affect outcomes in clinical studies, to more fully leverage modeling in a way that can actually impact clinical trials as well. I'm going to talk a little more about that later.

So let's make sure we're on the same page. A model is a representation of a system, a process or a phenomenon, and we think about four key models that we use to evaluate products at the agency: the in vitro or laboratory test; in vivo animal studies; human studies, where clinical trials are just a model of the real world too, with their own restrictions based on the assumptions under which a trial is run; and then in silico, the computer model. If we think about all of these as sources of evidence that help us evaluate the products we regulate, today we might leverage human models, animal models and laboratory models a lot more than, say, computer modeling. Where we hope to go as an agency is not to change the amount of evidence that's needed, but to change the mix from the different sources: we might rely less on animal, human and laboratory models and more on computer models. And part of that computer model might not be of the actual device; maybe it's of some aspect of a patient, or of virtual patients.
I'll talk more about this in a few slides, but the idea is that we really want to leverage more fully the capabilities of computer simulation. In fact, last year, as part of an effort to start a new working group at FDA, we launched a survey, and 2.6% of FDA's workforce responded, which was about 400 people. That was pretty astounding to me, because I only expected about 100 people to respond. We asked people to tell us about their modeling and simulation experience, but also how they actually see modeling in their day-to-day work. These are the three key roles that were identified, and I thought it was pretty fascinating that about a third of the respondents said each one of these was something they had come across: reviewing and assessing modeling and simulation submitted by companies in their regulatory submissions; internal scientists at FDA developing modeling and simulation for internal stakeholders to make decisions; and using modeling and simulation developed by others. I know there's extensive work through the Virtual Physiological Human Institute on computational modeling and simulation, and if there are models there that are ready to be utilized, we would certainly look forward to using those in our regulatory decisions as well. How we get there is another key challenge that I'll speak about.

So here are some of the areas in which I know that modeling and simulation is happening at FDA, broken down into five buckets: chemical-based models, physics-based models, statistical models, mechanistic models, and big data. Some might say big data is not a model, except that when we think about big data, what we're thinking about is how we can leverage the data to develop something called data-driven models.
And how can we use that knowledge to inform other models? So in that sense, we include it under modeling and simulation. What you'll notice under there is a large variety of different modeling disciplines. To give you a better flavor of those different types of models, I'm going to walk you through a couple of examples of modeling and simulation actually being used and developed by the agency. We have active research programs at FDA, and I'll share with you some of the work we're doing there.

The first one is called the Virtual Population. This is in collaboration with an organization in Switzerland called the IT'IS Foundation. There are 10 different models of representative humans, from an elderly male to an overweight male; we have a female adult, we have children. These models each have a representation of about 200 tissues and organs, and we use this virtual population for simulations regarding a number of medical devices. So let's say you, or someone you know, are going to get an MRI scan. To evaluate an MRI scanner, because that's a medical device, we test how much energy your body absorbs when you're inside that machine due to radio-frequency absorption: there's radio-frequency energy around an MRI machine, your body absorbs it, and the tissue in your body heats up. So before we can release more MRI machines on the market with higher and higher power, we use these virtual patient models to simulate a person inside an MRI environment, to compute how much energy the body actually absorbs and what increase in tissue temperature a patient might experience. We also virtually implant medical devices in this virtual family and see what happens to that medical device inside an MRI scanner.
Does it create another source of heating and potential damage to a patient? These models have been used over 200 times in regulatory submissions. So when you get back to your laptops and you want to Google virtual patients, you'll find a lot more information about them there.

Another area in which we are playing a very key and active role is the development of computational models for cardiac electrophysiology. The models we're developing are multiscale in the sense that we start with models of the ion channel, embed those into cell models where we can apply a stimulus and solve for the action potential of each individual cell, and then embed that into a computational mesh of a heart to simulate electrical activity, or the propagation of arrhythmias across the heart. I'm going to show you this video on the next slide; hopefully you're able to see it. This video is also available on YouTube. What you're looking at is the electrical activity being propagated across the surface of the heart. It's really important from a fundamental standpoint that we can actually simulate these different types of arrhythmias, because that then allows us to understand how therapies might impact them. There is a very strong effort, in collaboration between the Center for Devices and the Center for Drugs, to develop something called the Comprehensive in vitro Proarrhythmia Assay (CiPA). The goal is to better predict abnormal rhythms in the heart, like what you just saw here, where the waves propagate over the heart in an abnormal fashion. Those arrhythmias can cause a lot of challenges for patients' hearts, and there are a number of ways in which we can understand how drugs might affect them.
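The cell-level step in that multiscale chain, applying a stimulus and solving for the action potential, can be sketched with a generic two-variable excitable-cell model. The sketch below uses the classic FitzHugh-Nagumo equations as a deliberately simplified stand-in for the detailed ion-channel models described in the talk; all parameter values are illustrative, not any model FDA actually uses:

```python
import numpy as np

def fitzhugh_nagumo(t_end=100.0, dt=0.01, a=0.7, b=0.8, tau=12.5,
                    stim_amplitude=0.5, stim_start=10.0, stim_end=15.0):
    """Integrate the FitzHugh-Nagumo equations with forward Euler.

    v is a fast membrane-potential-like variable; w is a slow recovery
    variable. A brief stimulus current pushes the cell over threshold,
    producing a single action-potential-like excursion.
    """
    n = int(t_end / dt)
    v, w = -1.2, -0.6            # approximate resting state
    trace = np.empty(n)
    for i in range(n):
        t = i * dt
        I = stim_amplitude if stim_start <= t < stim_end else 0.0
        dv = v - v**3 / 3.0 - w + I
        dw = (v + a - b * w) / tau
        v += dt * dv
        w += dt * dw
        trace[i] = v
    return trace

trace = fitzhugh_nagumo()
print(f"peak membrane variable: {trace.max():.2f}")  # upstroke well above rest
```

In a full cardiac model, thousands of such cell models are coupled on a mesh of the heart so that an excitation in one cell propagates to its neighbors, which is what produces the traveling waves seen in the video.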
So one of the things these researchers are doing is linking together actual data from testing these ion channels and putting all of that information into an in silico reconstruction of a ventricular cardiomyocyte, to actually simulate the ion channels you just saw on the previous slide. Then, if I deliver a drug and I know the pharmacokinetics and pharmacodynamics (PK/PD) of that drug, I can study how the drug impacts arrhythmias in the heart. The goal is to anticipate and understand how those drugs might impact real patients in a clinical trial. This is really groundbreaking work that folks at our agency are leading.

Another area in the drug space that's really important is not a computational simulation, if you will, but more the leveraging of complex statistical methods. When drugs are developed, they're manufactured just like any other product, a car or an airplane: there's a manufacturing protocol. The challenge with manufacturing can be keeping up with quality and quality processes. So one of the things the drug center is working towards is a concept called quality by design: we're not just designing the pharmaceutical, the drug itself, we're also designing quality into the way the pharmaceutical is manufactured. That way you can understand how different aspects of the manufacturing process affect the safety or effectiveness of how a drug may perform, or how manufacturing by-products might be incorporated into those drugs and, of course, delivered to a patient. In this sense, we're not just evaluating the drug itself, we're also evaluating the manufacturing processes, and we're using analyses like multivariate analysis to understand those kinds of complex relationships.
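To make the multivariate-analysis idea concrete: a common first step is to take measurements of several correlated process parameters across manufacturing batches and use principal component analysis to find the few directions that explain most of the batch-to-batch variation. This is a generic sketch on synthetic data, not any actual FDA or industry method; the parameter names and numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic batch records: temperature, pressure and mixing time are all
# driven by one shared latent factor plus noise, so they are correlated.
n_batches = 200
latent = rng.normal(size=n_batches)
X = np.column_stack([
    70.0 + 2.0 * latent + rng.normal(scale=0.5, size=n_batches),   # temperature
    1.2 + 0.1 * latent + rng.normal(scale=0.02, size=n_batches),   # pressure
    30.0 + 3.0 * latent + rng.normal(scale=1.0, size=n_batches),   # mixing time
])

# PCA via the covariance matrix of the standardized variables.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
cov = np.cov(Z, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)[::-1]      # eigenvalues, descending
explained = eigvals / eigvals.sum()
print(f"variance explained by first component: {explained[0]:.0%}")
```

When one component dominates like this, it suggests a single underlying process driver; monitoring that component is a simple way to watch for quality drift across batches.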
Another area in which we're very active is physiologically based pharmacokinetic (PBPK) modeling. I previously mentioned pharmacokinetic-pharmacodynamic modeling, where we look at a particular drug's chemistry and molecules and simulate those. In this case, we're trying to link together the physiology of a patient's particular anatomy, whether that's cardiac physiology or another organ or tissue, with the pharmacokinetics of a drug, to understand how the drug might interact with the patient's body and to better understand in vivo performance. This type of approach is actually being used to design and evaluate something called bioequivalence between generic drugs and drugs that are already on the market.

There's another type of modeling called QSAR modeling. QSAR stands for Quantitative Structure-Activity Relationship; these are classification models used in chemistry and the biological sciences. If we get a new molecule for a drug, or a new molecule used in some kind of medical device product, we want to know if that molecule is toxic. Typically we might do that by incorporating the molecule into the device or the drug, giving it to an animal, and studying it in an animal setting. But there can be a lot of challenges with animal models, so one of the ways we can assess toxicity is through QSAR models. This type of modeling helps identify associations between the attributes of different chemical structures and their biological activity. There's a general assumption when we develop these kinds of models that similar molecules exhibit similar chemical and biological properties, so the activity of a molecule can then be explained by its chemical structure.
So the three components of a QSAR model are: a mathematical representation of the chemical structure; some representation of its activity levels; and an algorithm that connects the activity to the chemical structure. That's the QSAR model. The models then learn from actual laboratory testing of some of those molecules, and the model can be used to predict a new chemical's toxicity based on its similar chemical structure. So let's say, for example, there are synthetic intermediates in some starting material, or a reagent, or degradants that may be present in a finished drug, something that could be a by-product of manufacturing. The QSAR model can be used to evaluate those impurities for, say, mutagenic potential in place of actually doing some kind of in vitro assay. QSAR models can also be used to interpret organ toxicity in animal studies: if we get results from an animal study, we can use the QSAR model to better understand how toxicity that took place in one organ might translate to another organ, or even how it might correspond to outcomes or toxicity in a human patient. We actually use publicly available QSAR models, and our scientists have been able to predict the carcinogenic potential of different color additives for a wide range of structures. We're working on an open-source QSAR toxicology model to be released later this year. So if you're someone working in the color additive space and you want to know if your color additive is toxic, you'll be able to go to this database, plug in your chemical structure, and out will come FDA's assessment of that chemical structure.

Another interesting model, and I hope what you're getting from these examples is the broad range of modeling types.
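A quick aside before the next example: the three QSAR ingredients just listed, a numeric encoding of structure, measured activity labels, and an algorithm linking the two, can be sketched with toy binary "fingerprints" and a nearest-neighbor rule. The fingerprints and labels here are invented purely for illustration; real QSAR models use curated chemical descriptors and assay data:

```python
# Toy QSAR sketch: structures encoded as binary substructure fingerprints,
# activity predicted from the most similar labeled training molecule.

def tanimoto(a, b):
    """Tanimoto similarity between two same-length binary fingerprints."""
    both = sum(1 for x, y in zip(a, b) if x and y)
    either = sum(1 for x, y in zip(a, b) if x or y)
    return both / either if either else 0.0

# Hypothetical training set: (fingerprint, is_mutagenic) pairs from lab assays.
training = [
    ([1, 1, 0, 0, 1, 0], True),
    ([1, 1, 1, 0, 1, 0], True),
    ([0, 0, 1, 1, 0, 1], False),
    ([0, 1, 0, 1, 0, 1], False),
]

def predict(fingerprint):
    """Label a new molecule with the activity of its most similar neighbor
    (the 'similar structure, similar activity' assumption)."""
    sim, label = max((tanimoto(fingerprint, fp), act) for fp, act in training)
    return label, sim

label, sim = predict([1, 1, 0, 1, 1, 0])
print(f"predicted mutagenic: {label} (nearest-neighbor similarity {sim:.2f})")
```

The key design choice is exactly the assumption stated in the talk: prediction is justified only insofar as structurally similar molecules really do behave similarly, which is why the similarity score is reported alongside the label.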
We started with the Virtual Population, which was a physics-based model; we moved into an electrophysiology, or mechanistic, model; then I shared some complex statistical models and the chemical-based quantitative structure-activity relationship models. This next type of model is a very simple one in the sense that it just looks at stock and flow, essentially supply and demand. The National Blood Collection and Utilization Survey is where we assess how many blood donations we get annually in the United States, how much blood those donations actually send to different hospitals, and what hospitals report in terms of transfusions. Annual donations are about 15 million units and transfusions are about 14 million units per year, which really only leaves about one million in reserve. And if you start taking into account the expiration date of the blood supply, or how much more rapid the depletion might be if there's a pandemic or some kind of emergency, it's really important to understand, with all these moving parts, what our blood supply actually looks like at any given time. So some scientists in the Center for Biologics put together a stock-and-flow model to better understand the dynamic relationships between supply and demand. They held a workshop in 2012, and the discussions from that workshop informed policy for our center on emergency medical countermeasures: for example, trying to promote and increase donations at certain times of the year when we anticipate shortages, like when the flu breaks out. This model is very active in evaluating the U.S. blood supply currently, and it's also used at local, state and federal levels.
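A stock-and-flow model of this kind can be sketched in a few lines: one stock (units in inventory), an inflow (donations), and outflows for transfusions and expiration, stepped week by week. The rates below just spread the rough annual figures from the talk uniformly over the year, and the shelf-life and shock numbers are assumptions for illustration; the actual Center for Biologics model is far more detailed:

```python
# Toy stock-and-flow model of a blood supply, on weekly time steps.
# Rough rates from the talk: ~15M units donated, ~14M transfused per year.
WEEKS = 52
donations_per_week = 15_000_000 / 52
transfusions_per_week = 14_000_000 / 52
shelf_life_weeks = 6          # assumed: red cells store for roughly 42 days

def simulate(initial_stock=1_000_000, demand_shock_week=None, shock_factor=1.5):
    """Step the inventory forward; an optional demand shock (e.g. an
    emergency) multiplies transfusion demand from that week onward."""
    stock = initial_stock
    history = []
    for week in range(WEEKS):
        demand = transfusions_per_week
        if demand_shock_week is not None and week >= demand_shock_week:
            demand *= shock_factor
        expired = stock / shelf_life_weeks   # crude first-order expiration
        stock = max(0.0, stock + donations_per_week - demand - expired)
        history.append(stock)
    return history

baseline = simulate()
shocked = simulate(demand_shock_week=26)
print(f"year-end stock, baseline: {baseline[-1]:,.0f}")
print(f"year-end stock, mid-year demand shock: {shocked[-1]:,.0f}")
```

Even this crude version shows the qualitative behavior the workshop discussed: with expiration included, a one-million-unit reserve is much thinner than it sounds, and a sustained demand spike can exhaust it well before year end.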
So that was just a glimpse, literally just a few examples, of how modeling is being developed by scientists at FDA and how it's being used to help decision-making about the different products we regulate. Last year, after getting a lot more exposure to all of these modeling activities, some colleagues and I decided it would be important to start a working group with members from across the agency to focus on modeling and simulation. That's exactly what we did, and just this past fall our charter was finalized by the chief scientist. Our main objectives: first, we need a place for all of these scientists to come together, collaborate, and communicate about cross-cutting modeling efforts at the agency. We also want to raise awareness of the success stories with modeling and simulation, the challenges we face, and how we can overcome them together as an agency, and to highlight opportunities for modeling that we're not even leveraging today, despite the fact that we're doing a lot of work in modeling and simulation.

Something else that's really important, and I'll touch on it in a few slides, is the concept of credibility. If we're developing these models internally, or if we're willing to borrow models from people developing them outside the agency, how do we determine that a model is good enough to be used in making regulatory decisions? One of the goals of our working group is to establish something called credibility principles that can be used and adopted for evaluating medical products. We also want to liaise with national and, especially, international organizations; I wanted to mention the Avicenna Alliance, which I'll come to in a few moments. The idea is that we can't do all of this work alone; we need to collaborate with other people.
So we're trying to reach out to other groups that are interested in modeling and simulation as well. The structure of our working group is unique: we have about 150 members across the agency, all divided into interest groups based on the different modeling disciplines, but we have a leadership circle with two members from each of the seven centers representing the leadership, the drive and the vision of the working group. And we have a number of implementation teams working on a communications plan, a white paper on modeling and simulation at FDA, and getting started on the credibility principles.

I want to take a few minutes now and share my perspective from the Center for Devices and Radiological Health. I've been at FDA for almost 10 years, and I've spent all of that time at CDRH. More recently, in the last year and a half, I've been very active in engaging people from other centers, and the reason is that we have been very active at CDRH in promoting the use of modeling and simulation in devices. You'll see from a number of initiatives I'm going to share in the next few slides that we're pretty busy, and some of the other centers want to engage with us to leverage the work we're doing at CDRH, so I think it's important to share some of that with you. Again, in addition to protecting and promoting public health, our charge is to facilitate medical device innovation by advancing regulatory science; that's our charge as scientists at FDA. One of the ways we promote modeling and simulation is through our regulatory science priorities: every year, FDA publishes a top-10 list of science priorities, and for the last three years computational modeling has been on that list.
The goal is that computational modeling needs to be developed to help better inform regulatory decision-making. So what does modeling and simulation look like in medical devices? I like this slide because it puts together a number of examples showing that we can simulate the device, we can simulate the anatomy, and we can simulate physiology; you saw the electrophysiology slides. We have a number of companies using simulation to develop products that can be 3D printed. We have simulation that is itself a medical device, which I'll talk about on the next slide. We have modeling and simulation embedded in a device: this is a device for delivering insulin to a patient who has diabetes. Inside is an algorithm that computes the insulin dose; it measures the patient's glucose and has predictive models of what that patient's insulin needs might be, so that the therapy can be delivered in a timely manner. We also have the ability to use complex statistical models to predict treatment effects. And in my opening remarks I mentioned big data; big data is really key for how we're going to inform modeling and simulation in all these different aspects.

There's something I want to take a minute to clarify. The two key areas where modeling is really present are modeling and simulation as a medical device, and computational modeling and simulation used to support marketing applications. The latter kind of modeling, the one that's scientific evidence, is what most of our activities have been focusing on: if a company wants to demonstrate some safety or performance aspect of their device, one of the ways they might do that is with a computational model.
Computational modeling or simulation as a medical device is a lot more complex, because now you're not just looking at verification and validation of the simulations, but also at how it impacts patient and clinical decisions. There's a term for this: software as a medical device. I know there are a lot of groups in Europe very active in this space, so I wanted to make a specific call-out to where you can get more information about how software as a medical device is regulated internationally. There is a group called the International Medical Device Regulators Forum that is developing guidance, to be adopted by 12 countries across the globe, on how to regulate software as a medical device. I hope you find that a useful resource. The things I'll talk about in the coming slides are really focused on modeling and simulation as scientific evidence, although some of the verification and validation aspects can be applied when you're evaluating the software as well.

What I'm showing on this slide are a number of initiatives we're actively involved in to advance regulatory science and modeling and simulation, and I'm going to walk you through a few of them. First, the Frontiers of Medical Devices Conference. For the last few years, we've been working with the Biomedical Engineering Society to put on a conference that brings together people in the medical device industry, FDA, and academics who are developing computational modeling for medical devices. Every year we have about 250 people come, including 40 FDA people. There is no other conference in the world where you will get 40 scientists from FDA who know about modeling and simulation.
So we're very proud of this meeting. It was just held last week, and members of the Avicenna Alliance came, supported the conference, and met with a number of people. It was very exciting. We haven't published our dates for next year, but we'll share that information once it becomes available.

Moving on to partnerships: I mentioned the Avicenna Alliance. Last fall I came out to Brussels and met with a number of folks from the Avicenna Alliance, in addition to giving a presentation to a couple of members of parliament; there was a representative from the EU Commission there as well. I got to give them a taste of the kind of work we're doing at FDA on modeling and simulation. And when the Avicenna Alliance came out to visit FDA last week, they got to meet with a lot more of our staff. They had a tour of the facilities, they met with our chief scientist, and we signed a memorandum of understanding formalizing a partnership to harmonize between the EU and the US on in silico medicine. We went and met a senator who is writing a bill in the United States to pass legislation about in silico medicine, and they came to the conference. So it was a very exciting week for us, and I'm happy to share some of that with all of you.

Another group we engage with is the Medical Device Innovation Consortium (MDIC). This is also a nonprofit group, like the Avicenna Alliance. The Avicenna Alliance is really interesting in that it brings together industry people with academics and regulators; the Medical Device Innovation Consortium does that as well, except it's focused on medical devices, whereas the Avicenna Alliance is focused more broadly on in silico medicine, bringing together drugs and devices. It's through the MDIC that we've developed a program called the Virtual Patient. And actually, I just realized these slides don't have the link to that project.
So I'll be sure to update the slides before I send them to you. What's really exciting about the Virtual Patient, and why I want to take a couple of minutes to tell you about it, is that we've actually figured out a pathway to augment a clinical study with a computational model. What does that mean? Let's say you have a medical device; in this case we used the example of a pacemaker. The pacemaker has a lead that gets implanted in the heart, the wire that actually carries the electrical activity to the heart. One of the things we look for in a clinical study is whether, and when, that lead will fracture, so it's really important to understand the fracture rate of these pacemaker leads. One of the things we can do is build a computational model of a pacemaker lead, simulate fracture and fatigue, test that on the bench, and develop a complex model that actually predicts fracture. And that's what we did: we developed a computational model to do that. When you do a clinical study for pacemakers, a clinical trial for a new device is typically about 1,500 patients; when you're changing a device and putting it back on the market, the clinical study is about, let's say, 500 patients. What we proposed with this framework was to bring together Bayesian statistical methods with a computational model to lower the sample size, the number of patients needed in that clinical trial. So instead of having 500 patients, we asked for a clinical study of 450 patients, meaning that we could reliably augment the study with 50, or however many virtual patients we wanted. What you can see in this picture is the actual clinical trial data in red.
All of our simulation predictions are these blue lines, with 95 percent confidence intervals, and we were actually able to predict what the fracture rate would be for that patient group. We ran thousands and thousands of simulations, and we were able to reliably predict the clinical outcomes for that particular endpoint. So the idea is that there's now a possibility of augmenting a clinical trial with computational modeling data. We're not trying to replace the clinical trial, but maybe it's possible to replace one endpoint, one endpoint that a physician wants to know about: instead of having 50 more patients in the study, we can use virtual patients. This is a very exciting project that we completed last year; there's now a publication, and a lot of publicly available information about how this was done.

Okay, something else that's very near and dear to my heart: the American Society of Mechanical Engineers (ASME) V&V 40 group. Maybe some of you are familiar with this. ASME has a verification and validation standards committee that develops standards providing methodologies for how to demonstrate the accuracy and credibility of computational models and simulations. There are a number of subcommittees under that committee, and one of them is for medical devices. Some of you may be aware of the one on solid mechanics, known as V&V 10, or the one on fluid dynamics, which is V&V 20. The V&V 40 standard is actually going to be coming out later this year, which we are very excited about.
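As an aside, the kind of Bayesian augmentation used in the pacemaker-lead study can be sketched, in highly simplified form, as a Beta-Binomial update in which simulated virtual-patient outcomes contribute a down-weighted prior that is combined with the observed clinical data. The counts and the discount factor below are invented; the actual submission used a far more elaborate Bayesian fatigue-fracture model:

```python
# Toy Bayesian augmentation: virtual patients inform the prior on a
# fracture rate; real enrolled patients supply the likelihood.

def augmented_posterior(virtual_n, virtual_fractures,
                        clinical_n, clinical_fractures,
                        borrow_weight=0.5):
    """Beta posterior for the fracture probability.

    Virtual-patient results enter as prior pseudo-counts, discounted by
    borrow_weight so that simulations never dominate the observed
    clinical data. Returns (alpha, beta) of the posterior Beta.
    """
    alpha = 1.0 + borrow_weight * virtual_fractures
    beta = 1.0 + borrow_weight * (virtual_n - virtual_fractures)
    alpha += clinical_fractures
    beta += clinical_n - clinical_fractures
    return alpha, beta

# Hypothetical numbers: 50 virtual patients with 2 simulated fractures,
# plus a 450-patient clinical study that observed 18 fractures.
a, b = augmented_posterior(50, 2, 450, 18)
mean = a / (a + b)
print(f"posterior mean fracture rate: {mean:.3f}")
```

The design choice to notice is the borrowing weight: it encodes how much trust the simulations have earned, which is exactly where the credibility assessment discussed next comes in.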
So in order to more fully leverage modeling and simulation for medical products and clinical care, we need to establish credibility, which we define as trust in the predictive capability of the computational model. The way we do that is through reporting and through verification, validation, and uncertainty quantification; these are the methodologies that help us demonstrate credibility. When you take reality, develop some mathematical model, then solve that mathematical model and use it for decision-making, it's really important to understand all the different aspects along the way. Unfortunately I don't have enough time today to walk you through that entire methodology, so I'm going to give you just a high-level description of what we're focusing on. Verification asks the question: have you solved the computational model correctly? Meaning, did you actually solve it in the right way, did you solve the mathematics correctly? Validation asks the question: are you solving the right equations? In other words, how well does your computational model represent reality for how you're going to use it?
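The verification question, "did you solve the mathematics correctly?", can be made concrete with a classic code-verification exercise. This is a generic textbook sketch, not anything specific to FDA submissions: manufacture an exact solution for a simple boundary-value problem, solve it with a second-order finite-difference scheme, and check that the error shrinks at the expected rate when the grid is refined.

```python
import numpy as np

def solve_poisson(n):
    """Solve -u'' = f on (0,1), u(0)=u(1)=0, with 2nd-order finite
    differences; return the max error against the manufactured exact
    solution u(x) = sin(pi x)."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)          # interior grid points
    f = np.pi**2 * np.sin(np.pi * x)        # forcing chosen so u = sin(pi x)
    A = (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    u = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * x)))

# Halving the grid spacing should cut the error ~4x for a 2nd-order scheme.
e_coarse = solve_poisson(31)   # h = 1/32
e_fine = solve_poisson(63)     # h = 1/64
observed_order = np.log2(e_coarse / e_fine)
```

If `observed_order` comes out near 2, the code is converging at the rate the discretization promises; a lower observed order is exactly the kind of red flag verification is meant to catch.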
And then uncertainty quantification is really getting at the uncertainty in the parameters that affect the simulation, and not just the parameters in your model but also the parameters in reality, because at the end of the day you're going to do some kind of comparison between an experiment and a computational model, and it's important to understand all the aspects of the model, but it's equally important to understand all the different aspects of your experiment. In medical devices your experiment could be an animal study, a clinical trial, an imaging study, or a laboratory bench study, any of those models we talked about earlier, so it's really important to bring all of that together. The document that's coming out from the V&V 40 subcommittee is on assessing the credibility of computational modeling through verification and validation, and we present a framework not for how to do the V&V work, but for how to actually establish credibility. I'll make a note here that we're going to move our meeting to Brussels in 2018 to try to broaden participation with our European colleagues. Most of the members of the committee are from a broad range of companies with a global presence, but most of our meetings have taken place in the United States, so I wanted to point that out in case any of you are interested. I just want to reiterate that the goal of our document is not to present how to do V&V, but how to determine the appropriate level of rigor you need to support using your computational model in a specific context of use. We present this new concept of the context of use: defining the specific role and scope of your computational model, defining model risk, and defining credibility goals. We also talk about how it's really important to establish credibility goals, and the way you do that is by using model risk, because the risk of your model actually dictates
the rigor of your V&V activity. So if you're someone doing research at a university and you're developing hypotheses for bench testing, or you're trying to use simulation to develop a prototype, you're not going to need rigorous verification and validation, right? You might need some kind of checks and balances to make sure you trust your computations, but you don't need rigorous validation where you do, say, 100 experiments to check your model. But if you're going to use your model to predict the safety of a drug in a patient, or how a device might fracture when it's implanted in a patient, and you're going to use that model to inform that decision, then you had better have some good V&V that backs up your predictions. So this document describes, for different contexts of use and different model risks, the different levels of rigor you might need, and I've provided you with some resources here, because the standard is not published yet, but we have a lot of literature in the public domain for you to look at. While I'm here talking about verification, validation, and uncertainty quantification, I want to point out that we actually have our own journal through ASME, and this journal focuses on publishing work on VVUQ. Probably most of you in the audience are developing some form of computational model that can be used in, I'm going to guess, a clinical setting of some sort, or that affects the biology or treatment of some patient, and when you publish a computational modeling study, you're going to want to publish it in the relevant journal for your community, whether that's an orthopedic journal or some cardiovascular journal. So why would you publish your work in JVVUQ? Well, our proposal is the following: if you're doing blood flow analysis in cerebral aneurysms and you're developing a risk profile for cerebral aneurysms, or you've developed a computational model of bone degradation due to some chemical or drug, publish the V&V work, your
verification and validation studies, in our journal. Then, when you publish the relevant findings that are important to your community, publish that study in the clinically relevant journal and reference the paper you've published in JVVUQ. Now you can say you have not just a validated model, but a model whose validation has been peer reviewed by your colleagues, and that gives a lot of credibility to the work you're doing. Okay, moving forward: I've hit the 35-minute mark, I'm keeping time here, and I know we started about 15 minutes late, so I just wanted to check in and see how everybody's doing. I've got just a few more slides, and then I'll open it up for some Q&A. Another important area to focus on is something called applicability analysis. I briefly talked to you about verification, validation, and uncertainty quantification and what they mean, and maybe some of that isn't something you've really dived into yet; maybe many of you are still at the university and haven't actually developed models that are ready for clinical use or adoption by some company. But here's something to keep in mind. A typical approach to model validation, I would say in an academic setting, is to develop a computational model and then to perform validation. And what might that mean? It might mean: I'm going to go to the literature and find another study that I can recreate. Maybe it's not an experiment that I've developed, but somebody else has, and I'm going to use my model to predict the results of that experiment, so I'm performing quote-unquote validation. Let's say that I get what I think is reasonable agreement between my predictions and that validation study. Now I'm going to change the model in some way, because I want to make predictions in a different kind of environment, and so I make those
changes to the model, run my simulations, and say: well, my model is quote-unquote validated, so I believe the results of this new study. Well, that's not a rigorous way to do validation. Typically we want to design an experiment that's really appropriate for how you're going to use your model, but sometimes that can cost a lot of money. So one of the things we've been developing at FDA is an actual methodology for demonstrating how the validation study you performed is appropriate for using your computational model in a new context of use, and we're hoping this, what we're calling applicability analysis, comes out soon. So what does that mean? Let me show you a schematic. Let's say these black circles are your validation points; this is where you actually performed validation. Now, I'm not going to say anything about how good that comparison was. What I am going to say is that if the comparison was good, then my highest level of confidence in my predictions would be around those points; that's where I would be most confident in making predictions, because that's where I did my measurements, right? So if my context of use were way out here, I might have very low confidence in my ability to predict out here, and as I get closer and closer to those validation points, my confidence increases. So the idea of applicability is to demonstrate how close, if you will, your context of use is to the validation points you actually conducted, and to say whether that validation evidence is still relevant, or applicable. That's what we're trying to get at with the concept of applicability analysis. Lastly, what I want to point out, let's see if I can, oops, I lost my pointer, what I want to move towards is talking about FDA guidance. I mentioned that we're working on a standard to help establish verification, validation, and credibility. One of the other key things that's really important
for the FDA is that if we want to accept a modeling study in a regulatory submission, we need to understand that modeling study, because it's submitted to us as a report. We don't actually get the simulations; companies don't give us their simulations, they just write a report about what they've done. So to help improve the consistency of the information a company provides to us, and also to help FDA staff figure out what kind of information is needed, we wrote a guidance document that focuses on describing what's needed for you to provide your report to FDA. What this document does not tell you is whether your model is adequate: even if you provide your computational modeling study in this reporting format, it doesn't mean your model is adequate. This document also doesn't describe the level of evidence needed to say your model is good enough; those types of information we're presenting in the ASME V&V 40 standard. So this document is solely about reporting, but reporting is extremely important, and not just for FDA and regulatory submissions. For many of you in the academic setting, where you're working on your modeling studies and publishing them in journals, it's really important that you adequately describe all the different aspects of your model, all the assumptions you made, and the limitations you have. This way we can appropriately interpret the predictions you are making and make sure they're within the scope of what you actually did in your study and of the validation that you have. And I'll tell you this, and I'm going to leave the slide up for a minute in case any of you are eager to write down this URL: last week at the BMES/FDA conference I gave a seminar, a very intensive training, on the reporting guidance which I just talked about, and also on the ASME V&V 40 standard, which I didn't even get to dive into with you yet, and maybe I can do that as a follow-up webinar. I put all of those slides together into one slide set, and
those are all available on my figshare site. For those of you who know about figshare, you can go to figshare and search for my name and you'll see all of my presentations; this one is specifically the training slides on the reporting guidance and the V&V 40 standard. So for medical devices, and I'll say generically for medical products, we envision a future where we have digital patients: somebody can download anatomic and physiological computer models, maybe hundreds or thousands, that represent different patients with different types of diseases, so that we can perform virtual clinical trials where new products can be deployed or tested virtually, and where we can discover soft failures before we actually put those treatments or therapies into a patient. I know there are different types of tools that are allowing physicians to make better decisions about surgical planning and things like that; these are the so-called virtual clinical trials. The concept of personalized medicine, where we can actually use simulation to help us find the best path, the best therapy, for a specific patient, is becoming more fully realized because of computational modeling. But I will caution you that we still have lots to do, particularly in this community. We need to continue to raise awareness about the capabilities, about what modeling and simulation can really do and what it can't do; we can't tell people we're going to eliminate clinical studies, because that's not going to happen in the near future. What we need to do is make sure our perspective on how it's going to impact clinical trials is really pointed and focused on augmenting trials or designing better clinical trials. We need to raise awareness about how to report studies and how to do verification, validation, and uncertainty quantification. We need methodologies across the continents, for different regulatory bodies, not just for the U.S. but for the E.U.,
for Japan, for China; this is a global marketplace, and we need to make sure that those methodologies for assessing credibility are harmonized and well adopted by a lot of stakeholders. We need a way to actually determine how much evidence is really needed. People say the cost of clinical trials is outpacing the revenue that companies are making; well, if we just tell everybody to go and get lots of validation evidence, that's going to cost a lot of money too. So we have to find the right balance, the right balance of evidence, for really using modeling and simulation to support regulatory decision-making. And we really need to better understand the consequences if the model is not correct. We need to be able to understand model risk in the sense that, if I use this model to make a decision, what happens if I make an incorrect decision? I want to be able to take that into account in addition to all of the other evidence in front of me, so model risk is really important to understand. The other bullet, which I realize I either accidentally deleted or just forgot to put here, is the idea of convincing the skeptics and the non-believers. I don't know how many of you come across this in your day to day when you talk to people about computational modeling or in silico medicine, but I know at FDA there are still a number of managers, other decision makers, and some clinicians who have to help regulate these products and who are skeptical of computational modeling. They're skeptical because they don't understand it; they're not familiar with it, with what can go wrong, or with what can go right. So it's really important for us as scientists to do a good job of talking with them about bullet number one here, not just the capabilities but also the limitations, and keeping that perspective, that reality, in check. And with that, I'll just let you know that this is my last slide. There's a lot of other things I didn't get to talk
about, but I've provided those materials and links for you here, and I'll also work with the organizers to make sure that you all get these slides, so you have all these different references. So thank you so much for listening; I think I've left enough time for some questions, and I look forward to hearing from you. Thank you, thank you very much for this comprehensive overview; I think it was really, really interesting. Maybe we can start off with some questions, and let me first ask you: you mentioned that you're developing tools, and one of the problems we see, at least what we find, is that many of the simulators, many of the solvers, are actually commercial, and universities and researchers have no access to them, or no easy access. It's a bit different for companies, although you also hear many companies saying, we don't want to buy a license from this or that company because we need to develop our own solvers. So what's your point of view there? Are you as FDA going to provide or give stamps to solvers which are openly available, or should we really go and buy the commercial ones because those are the only ones which have been proven to be useful? So, thank you for your question. I can hear the feedback, I don't know if there's a way that, you're awesome, thank you so much. There's something important to keep in mind about verification. When you're talking about software, whether it's software you can buy commercially or software you develop on your own, the way that we assess the software is through verification, and there are two key pieces to verification. There's verifying the code, and in a way that is the responsibility of the code developer, whether that's the commercial code developer or the academic code developer; at FDA we don't view them any differently. We still ask the person who is using off-the-shelf software to provide us with evidence that you're using the
code in the way in which the code was designed and verified. And if you're someone developing your own software, you should be doing the same thing: you have some verification evidence, and it's your responsibility as the developer and the user of that code to provide that information. Now, I'm not saying that you have to provide us with stacks and stacks of paperwork that demonstrates that, but you need to provide us with some evidence that you've checked that the software you're using has been appropriately verified. The other aspect of verification, I mentioned code verification, is calculation or solution verification, and this is where you actually demonstrate that you've implemented the numerics correctly: that you did your mesh studies, for temporal discretization or spatial discretization, and that you understand the default settings in your solvers and what those convergence criteria mean. Those are the kinds of questions we ask regarding solution, or calculation, verification. So regardless of off-the-shelf or homegrown software, we still ask those important questions; it's just that the amount of evidence you provide is going to be different for a commercial code versus software developed in-house. And if you have more specific questions about that, we can talk about it, but I hope that answers your question. Any other questions? Do you think that modeling could also become a compulsory test, as tests on humans are, or animal tests, in the future? Can you say what you mean, what do you mean by a test, for example? Well, human in vivo tests are compulsory for a medical device; how do you think simulation can eventually be considered a compulsory test? Sure. So it's interesting that you ask that question, thank you. What I recently learned is that two things have happened in the EU recently. One is that the regulations for medical devices have changed to actually
state in the law that people who regulate medical devices have to accept computer simulations as potential evidence. So if a company submits computer evidence, the agency has to review it. At FDA we have a similar rule: whatever evidence you provide to us, we have to review. But nowhere is it stipulated that you have to use simulations in the way that animal studies and clinical studies are stipulated, and I think there are good reasons for that; I think we're not there yet. I will say that from the standpoint of device performance, for anything in the cardiovascular space that's an implant, you have to provide finite element analysis, and that is spelled out in all of our guidance documents. So it's not quote-unquote law, but it's actually part of our policy: if you're developing a stent or a pacemaker or a heart valve, you have to provide stress and strain analysis. So while it's not a law, it's part of our policy and our regulations. The other thing I'll add, I was talking about two things about EU regulations: for the European Medicines Agency, part of their new legislation is that the agency is going to start developing programs to support and enhance in silico medicine, and one of the things that the folks from the Avicenna Alliance were doing last week was speaking to Congress in the United States to pass similar regulation, so that FDA also develops a program geared towards in silico medicine, meaning we're not just saying that we'll accept it, but that we're developing it, promoting it, and coming up with new policies around in silico medicine. So I don't think we're there yet in terms of it being a stipulation, but as the technology improves and becomes more credible, yes, I think that will happen in the future. Now, another thing that you mentioned is that we should at some point do studies with virtual populations and test things there. Who should be responsible for providing these populations, or
which types of populations would be valid to use? Will you as a regulatory agency say, you have to test in this population, or would you stamp it as, this is an okay population? Who should take care of this? Really good questions. So I'll first say that we don't tell anyone what types of populations to use; anyone can use the different types of populations that are available. I mentioned the Virtual Family as one that's widely used, because FDA was a collaborator on it, and a lot of evidence has been presented on that virtual population, but there are a lot of databases out there with lots of data. One of the things I didn't get to talk about, in the interest of time, is something in the second bullet here: FDA's Medical Device Development Tools program. Let's say you're someone doing research and you're developing a computational tool, say a virtual population for musculoskeletal modeling. What you can do, instead of getting your medical device or your pharmaceutical regulated by the FDA, is actually have your tool evaluated and qualified by FDA through a regulatory process, meaning that after we review the evidence and determine that the tool is suitable for use in regulatory submissions, we will give you a seal of approval that says this tool can be used by others in regulatory submissions as well. This will be one way for people developing in silico methods to get their tools evaluated by FDA and then be able to market them to other companies: hey, you can use my tool, because FDA has already reviewed it, and you can use it in regulatory submissions. So that's one way in which we would point to a specific model. The other mechanism we might do that through is consensus standards. Typically our standards are really focused on test methods and different types of frameworks, but it's possible that the test method, right, not a physical bench test, could be a
computational model. You know, we could develop a protocol for how to build a computational model to do a certain activity, and if that becomes the case and FDA adopts that standard, that would be another way in which we would point industry to a computational model and say, that's one you should use, because it was developed in a consensus-standards fashion. So I think those are some of the ways in which we might identify particular software or tools. And you also mentioned computational tools as medical devices in themselves, and you said that one type of model is big-data or machine-learning models. So how do you deal with the black-box models currently being developed as devices? How do you regulate them, and how do you think one has to provide evidence that they're not harming patients, for example? Sure. So I think anything that's a medical device has to demonstrate safety and effectiveness, and the first thing that has to be clearly defined is how that medical device is going to be used: what's the context of use, what are the indications for use for that device? Is that simulation that's a device, or that big data or whatever you want to call it, going to help a physician make a decision? Is it going to actually interact with a patient? Is it going to diagnose a particular disease or give a physician specific therapy or treatment options? There's a spectrum of ways in which a computational simulation that is a medical device can impact a patient. There are a lot of tools and software out there that are used to give a physician more information, but if at the end of the day that physician is going to make the decision on his own, then that would be a relatively low-risk device. Now let's say we're talking about a device, a software, that actually simulates something about the patient and tells the physician, this is the best treatment for that
patient. Then something like that is much higher risk, because now the physician is relying on that simulation to make a decision. So I briefly showed you the picture of the coronary arteries: that software from HeartFlow is a computational simulation of a patient's specific anatomy, with boundary conditions that represent the patient's physiology, and they do a simulation to predict how dangerous the lesion in that coronary artery is, how much blockage there is for the patient. If the patient's blockage is below a certain value, the patient doesn't have any therapy, but if the blockage is above a certain value, they can get a stent or some other treatment. So that tool is actually used to tell the physician what to do. That tool had to undergo a clinical study; it had to do rigorous software verification and validation, meaning using quality-systems processes to make sure that every time the software is used, it's used in the right way; and it also had to be tested in the clinical setting to make sure that when it makes its prediction, the prediction is comparable to some gold standard of care today. So it really depends on how that software is going to be used: to make a decision, to inform a patient, to inform a physician, or to interact with the patient; that's going to dictate the level of evidence that's needed. And because that area is still rapidly evolving, there is an international body of regulators developing the standard which I mentioned, Software as a Medical Device. That's a really key document to take a look at if you're working in this space, because it provides definitions, terminology, and classification for different levels of risk, and then it lays out the clinical validity that's needed for different contexts of use, different ways that software is going to be used. I hope that answers your question a little bit.
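The clinical-validation comparison described above can be sketched in miniature. This is a purely hypothetical illustration, not HeartFlow's actual validation protocol or data: simulated fractional flow reserve (FFR) values are compared against invasive gold-standard measurements at the commonly cited 0.80 treatment threshold, and the figure of merit is how often the simulation would lead to the same treat/defer decision.

```python
# All patient values below are invented for illustration only.
measured  = [0.92, 0.75, 0.81, 0.64, 0.88, 0.79, 0.95, 0.70]  # invasive FFR
simulated = [0.90, 0.78, 0.78, 0.66, 0.85, 0.76, 0.93, 0.73]  # model-predicted FFR

THRESHOLD = 0.80  # FFR below this value suggests the lesion should be treated

def decision(ffr):
    """Map an FFR value to the clinical decision it implies."""
    return "treat" if ffr < THRESHOLD else "defer"

# Diagnostic agreement: fraction of patients where the simulation would
# have led the physician to the same decision as the gold standard.
agreements = sum(decision(m) == decision(s) for m, s in zip(measured, simulated))
diagnostic_agreement = agreements / len(measured)
```

Note how the third patient (measured 0.81, simulated 0.78) flips the decision even though the raw values are close; that is why validation for decision-making software is framed around the decision threshold, not just raw prediction error.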
It partially does, but to put a more direct question: you say, okay, when you have a simulator, you have to show that it solves the physics; when you have, for example, a deep learning algorithm, you have absolutely no clue what it solves. So how do you deal with that, for example? Yep, still evolving. I don't have any specific answers for you yet, only from the standpoint that I don't think we've had any devices come in like that. I know there are a lot of groups working in that space; you may have seen in the press last week that FDA announced its digital health program, and part of that program involves the development of a department to look at things like big data and machine learning. So I don't have a good answer for you right now on that, I'm sorry. But that's a fine answer, thank you very much. Any other questions? In front. Hi, I was very curious about how many applications you have had like HeartFlow; that's, I would say, the best business case we have in the simulation community, and I think they have done a really good job. But I was wondering, as the FDA, how many success cases like that one have there been already, if you can disclose the information? Sure, well, I can give you some insight. I will tell you that there are two others; the only reason I can say this is that it's public knowledge. If you search for cardiovascular flow and simulation, there are two other companies: one in the UK, I forget if it's through the University of Sheffield or not, but I met those folks last October when I visited Brussels, and I actually met the physician who's using the tool; they've developed a different algorithm to compute a very similar thing as HeartFlow. And then there's another company in Japan that's doing the same thing. I can't say where they are in the regulatory process, but I know that those clinical studies are underway. The other thing I'll add is that there
are a lot of software-as-a-medical-device products. There are a lot of tools for surgical planning, for surgical treatment, for sort of virtual testing of devices; we have a number for neurological diseases, for orthopedic devices, for cardiovascular implants. But all of those are more about helping the surgeon do some treatment planning; at the end of the day, they're not influencing the physician's decision on what he's actually going to do for the patient, what kind of treatment. Actual tools like HeartFlow, we still don't have very many, because that clinical burden of demonstrating safety and effectiveness is really challenging, and it's not because it's a simulation; the simulation part itself has its own sort of verification and validation. It's because it's a medical device making decisions for physicians on patient treatment, and it's really challenging to find that comparator. When we're talking about validation, I mentioned that the experiment can come from a number of sources, and for these types of software as a medical device, finding that gold standard, that validation comparator, the thing that I can actually measure and then say my software is predicting, is really challenging. HeartFlow was lucky, because fractional flow reserve is a very clean experiment; it's something that's done regularly by physicians, it's trusted in the clinical community, and so it was a really great experiment to develop their model against. And I think some of the other companies that are developing software that has to have that clinical validation are having trouble finding a gold standard to compare to, so I think we haven't gotten a lot more examples like that because the clinical validation is really challenging. I was quite surprised to hear that surgical planning is usually the easier case, because it doesn't change the decision too much; but how you plan your surgery is a really hard
decision, and in that domain I would say it's even more challenging to prove that the decision is going to have an impact, because surgeries are usually very multifactorial. Ah, but that's the thing: they're not actually predicting impact to the patient. What they're allowing the physician to do is to virtually deploy a device, or to simulate how something might be implanted or how a procedure might take place, but they're not predicting outcomes. If they were predicting outcomes, that would be much higher risk, because then the physician would be influenced in his decision-making by a simulation that's predicting treatment outcome, and that's why it's so hard. There's actually a group in Europe, my colleague Matthieu De Beule, he's at FEops, and they're trying to develop a model to predict heart valve regurgitation. What I want to say about that is this: they've talked with us and been very open, they've come to our conference and presented before, and they have a number of cases where physicians are using their tool to size the device, to find the best size for a patient. Well, the physician is going to size the device based on the instructions for use, so having the simulation help with sizing is not outside the intended use of the device; that's a very low-risk application. But the minute they want to start predicting valve regurgitation, and saying that this is going to affect how the valve performs in the patient, that becomes a much more challenging thing to validate, which is why I think they're still working on their clinical studies to develop that evidence. Predicting the treatment outcome, as opposed to surgical planning, is very different. I don't know if that helps to clarify the distinction. So far we've been talking about the use of models to predict patient outcomes, or to predict the efficacy of the
device in the patient. Here I agree that the validation process is extremely critical, because we're taking a prognostic point of view. However, many times validating a model, as you say, is extremely challenging. Yet when you perturb a model that has been validated in certain respects, the mechanisms that the perturbed model predicts, relative to the initial validated model, still make full sense. Traditionally we call this indirect validation. When you think about it this way, models can be extremely useful along the chain of experimental development: from, for example, the cellular level in the lab up to the individual level, passing through small animals, large animals, and eventually humans. Do you think that at some point this concept of indirect validation could become mandatory, to allow a study to evolve through the different steps of development of a drug or a device?

I would say I don't have a good sense of what's going to become mandatory or not, but I will say it's interesting that you use the term indirect validation. In the engineering sciences, for example where ASME is developing its standards, they talk about something called hierarchical validation, or validation roll-up. You start at some sub-level, some sub-component, maybe in this case the cell, and you demonstrate that you can replicate the appropriate cell function. Then you move that up: you predict how that cell function impacts tissue, then how that tissue impacts organs, and so on. That would be that kind of hierarchy. I actually don't see how that would be indirect validation, unless you're saying that you only validate at the cellular level and then you make predictions at the organ level. Is that what you mean?

Regarding that it
was indirect validation, I mean that I'm predicting a mechanism that, according to general knowledge and pieces of evidence, seems extremely logical, but experiments don't exist that can prove that what I'm predicting is correct. Sometimes establishing those experiments raises ethical issues. So if you can at least robustly, reasonably predict a mechanism, and you have a certain level of confidence, then you should be allowed to proceed to those experiments; and unless you have that confidence, maybe you shouldn't.

I understand, yes. Thank you for clarifying. In fact, I'm really glad you brought that up, because there were a lot of different things I would have loved to dive into here. A key part is the applicability analysis. I mentioned a little earlier this idea of demonstrating how your validation evidence is relevant to a context of use. That whole applicability analysis framework was developed because in the medical device space, or the medical product space more broadly, it's really challenging to do direct validation, if you will. To actually go in and take a measurement of something in a human could be unethical, never mind impossible. So we need computational models to help us get to those endpoints, and we have to do something called indirect validation, as you so nicely described. But the challenge is: how do we demonstrate the credibility of that? How do we explain to others that our model is reliably predictive? We have a whole methodological framework for walking through and describing the evidence you have for the model and the setting in which you're going to use it, and then to literally list, one, two, three, four, all the different ways in which your validation evidence differs from your clinical context of use, and all the things that are the same, and how you
provide the rationale, the good engineering judgment, needed to make that decision. At this point we have no other means of providing validation evidence for places where we can't take measurements, so we've really tried to address that issue for biomedical models. I would really look forward to sharing that with you once the paper comes out, and maybe having a follow-up conversation about it, because I'm actually hopeful that, while we can't get to a quantitative predictive value, there's a chance for us to get to a place where we can help scientists come up with good rationales and justifications.

So, clearly to be continued; let's hope so, that would be great. With this, I want to really thank you for the time you made for us. I think it's very complementary to what we've been trying to do in the summer school so far. So thanks again.

And thanks for having me. I'm pretty sure we will hear more of this in the near future.

I think so. Thank you very much.

Thank you so much, and I'll send you an email with the slides to share with the students.

Perfect, thank you. Thanks a lot, have fun everyone, thanks for being here. Outside in the courtyard you will find the next stage.
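[Editor's note] The applicability analysis described in the discussion above — literally listing, one by one, the ways the validation evidence differs from the clinical context of use — can be sketched as a small program. This is a minimal illustrative sketch, not FDA or ASME code; the `ModelSetting` class, its fields, and the example values are all hypothetical assumptions chosen for illustration.

```python
# Hypothetical sketch of an applicability analysis: enumerate, field by field,
# where the validation setting matches the intended context of use and where it
# differs. Each difference is a gap that needs an engineering rationale.
from dataclasses import dataclass


@dataclass(frozen=True)
class ModelSetting:
    """Key attributes of either a validation setting or a context of use.

    The four fields below are illustrative, not an official checklist.
    """
    quantity_of_interest: str  # what the model predicts, e.g. peak wall stress
    anatomy: str               # geometry the model is applied to
    loading: str               # boundary/loading conditions
    comparator: str            # measurement the prediction is checked against


def applicability_analysis(validation, context_of_use):
    """Return (similarities, differences) between the two settings as lists
    of (field, validation_value, context_of_use_value) tuples."""
    similarities, differences = [], []
    for field in ModelSetting.__dataclass_fields__:
        v = getattr(validation, field)
        c = getattr(context_of_use, field)
        (similarities if v == c else differences).append((field, v, c))
    return similarities, differences


# Hypothetical example: bench-top validation vs. clinical use of the model.
bench = ModelSetting("peak wall stress", "idealized vessel",
                     "static bench pressure", "strain-gauge measurement")
clinic = ModelSetting("peak wall stress", "patient-specific aorta",
                      "pulsatile in vivo", "none available")

same, gaps = applicability_analysis(bench, clinic)
for field, v, c in gaps:
    print(f"difference in {field}: validation='{v}' vs context of use='{c}'")
```

The point of the sketch is only the bookkeeping: the hard part in practice, as the talk emphasizes, is the engineering rationale that argues each listed gap does not undermine the prediction in the context of use.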