Hello and welcome to the sixth webinar of the Engineering Rising to the Challenge initiative from Purdue Engineering. My name is Arvind Raman. I'm the executive associate dean for faculty and staff here in the college, and I'm also a professor of mechanical and materials engineering. Now, this initiative started in May of this year in response to the National Academy of Engineering's call to action for engineers to really tackle some of the challenges posed by the COVID-19 crisis. But our initiative also looks to the longer-term future: to rethink and reimagine how the various systems that we have come to depend on in our modern society need to be changed so that they might be more resilient to such shocks in the future, while also serving society better. Now, part of the initiative involves webinars with distinguished panelists, where these panelists really unpack some of these challenges and provide us a glimpse into what the future might look like. Today's webinar is about the future of robotics in healthcare, and it is my honor to introduce the moderator for today's discussion, David Cappelleri. David is an associate professor in the School of Mechanical Engineering at Purdue. Professor Cappelleri founded the Multi-Scale Robotics and Automation Lab, which performs cutting-edge research on robotic and automation systems at various length scales. His research interests include mobile microrobotics for biomedical and manufacturing applications, surgical robotics, automated manipulation and assembly, and unmanned aerial and ground robots designed for agricultural applications. Professor Cappelleri is currently co-leading a Purdue Engineering initiative on autonomous and connected systems. He's also the Purdue site director for a new NSF Engineering Research Center on the Internet of Things for Precision Agriculture (IoT4Ag). Professor Cappelleri has received various awards, such as the NSF CAREER Award, the Harvey Davis Distinguished Assistant Professor
Teaching Award, and the Association for Laboratory Automation Young Scientist Award. He's also the co-founder of C2 Medical Robotics, a startup company spun out of intellectual property developed in his lab that is developing the world's first robotic lumbar discectomy surgical system. Over to you, Dave. Thanks, Arvind, for the kind introduction. I'm going to share my screen here. Okay, so welcome to our webinar on the future of robotics in healthcare. The 2020 pandemic has challenged our healthcare systems in many ways and put a big strain on our frontline workers. However, it has also opened the door to a greater acceptance of robots and automation. So what will the future of robotics in healthcare look like? One can imagine a not-too-distant future where, in an operating room, there's a robot nurse assisting a surgeon; in another operating room, a robot is performing the surgery teleoperatively; and maybe another surgeon has just introduced a swarm of microrobots into a patient to eradicate a disease. After the surgery is done, a cleaning robot enters to quickly disinfect the operating room and get it set up for the next procedure. And then down the hall, there's a post-op patient who is learning how to strap on a soft exoskeleton robot to guide their customized rehabilitation program, which they can do from the comfort of their own home. Down in the clinic, patient samples for testing exposure to a potentially deadly airborne virus have been sent off to an automated processing facility, which is capable of processing thousands of tests a day. Nearby, a patient is waiting to receive a newly prescribed drug. Little do they know that a drug discovery lab was able to use robotic automation to rapidly screen millions of cells with different compounds to discover this new treatment. In this webinar on the future of robotics and healthcare, which is organized by the Autonomous and Connected Systems initiative here at the Purdue College of Engineering,
we've assembled a distinguished panel of experts with perspectives from academia and industry to discuss these topics and see how far they are from becoming reality. You can see the list of speakers that we've lined up for you. We asked each of them to speak for about five to ten minutes on a particular topic like those I just described, which hopefully will generate some interest and questions, and then the rest of the time will be dedicated to Q&A from all the attendees. First up we have Professor Juan Wachs. Professor Wachs is a professor in the School of Industrial Engineering at Purdue University, holds an adjunct position in biomedical engineering, and is also a professor of surgery at the IU School of Medicine. He is the director of the Intelligent Systems and Assistive Technologies Lab at Purdue and is also affiliated with the Regenstrief Center for Healthcare Engineering. He has an undergraduate degree from the Hebrew University, and master's and PhD degrees in industrial engineering from Ben-Gurion University in Israel. He is the recipient of many awards, including the 2013 Air Force Young Investigator Award, the 2015 Helmsley Senior Scientist Fellowship, and a 2016 Fulbright U.S. Scholar Award, as well as being named the James A. and Sharon M. Tompkins Rising Star Associate Professor in 2017. He's also an editor of IEEE Transactions on Human-Machine Systems and Frontiers in Robotics and AI. He's going to talk to us about some new challenges for surgical robots. Juan, over to you. Thank you very much.
Let me see if I can share the screen. There we go. All right, so the focus of my talk has to do with the use of robotics for clinical applications, specifically in the operating room, not only as extenders of the surgeons but also as entities that can collaborate very effectively with humans. And what I want to do is introduce a few projects that cover a number of works that we have been doing in my lab. So let's start with Gestonurse. As Dave mentioned, this idea of having a robotic assistant in the operating room is very compelling. So we looked into the dynamics of the operating room, and we realized that there are certain operations that are pretty automatic; they're very repetitive. One of them is the delivery of surgical instruments. And we thought that maybe it's a good place to start using robotics, by understanding the explicit communications, the explicit requests for surgical instruments, and delivering the instruments on time. We started several years ago with this idea. We took an industrial robot, we put it in the field, and this is how Gestonurse was born. This robot understands hand signals and explicit verbal communication, speech basically. But when you think about it, surgical assistants do much more than that. And in addition to all that, people have two arms. This industrial robot is too shaky. We wanted to see if we could match the speed of the surgical assistant, and we can match the speed, but still, a bulky industrial robot like that is not going to be really safe in the operating room.
We really need the appearance of a human with two arms. So we moved to a different prototype that I want to introduce through this little video. We've all seen the sci-fi movies where robots are ready to take over the world, but that's fiction. Here at Purdue University, they're working on real-life scenarios, like Baxter, the human-like robot. Engineering and technology students are working to put Baxter in the OR to help during surgery. What we have here is the robot that will act as a surgical assistant to the main surgeon. It's not meant to replace the surgical assistant, but it's meant to complement them. The idea here is to train Baxter for menial surgical tasks so that human hands can be doing more important duties. Baxter could be performing in real-life surgery settings in a couple of years. But it turns out that while this worked well, surgical assistants and nurses do much more than understanding explicit commands or requests. They understand behavior, they understand body language, they can predict what's going to happen. In fact, oftentimes we refer to surgical assistants as mind readers. Can we come up with the concept of a robot that can read our minds? Well, maybe we can instrument a surgeon with several physiological sensors, sensors for EEG to capture brain signals, combine all this information, and be able to predict the needs of the surgeon. In this way, what ended up happening is that while the robot is working and operating, the assistant at the same time is delivering surgical instruments, arranging the Mayo tray, making sure that the right instruments are where they should be, and assisting with all the other tasks associated with the procedure. So we moved into a different setting, which involves taking the robots outside the operating room. And that could mean a rural area, that could be rural America, a small clinic, where they may need to perform a surgical procedure urgently.
There is no possibility, perhaps, to transfer the patient to a different location. Or this could be the battlefield, where a medic needs to perform a procedure and the expert is not around. So the question is how we can convey this expertise remotely. So we thought about integrating augmented reality, virtual reality, artificial intelligence, and robotics, and bringing this expertise remotely. So think of this; this is how the system works. We have a UAV taking off, going to the location where the patient or casualty is, capturing views from above the patient, and sending those views to one expert, or a team of experts that collaborate together, creating annotations, which may involve surgical instruments. Those instruments and annotations are displayed on a see-through display worn by the medic, for example using the HoloLens. So the medic mimics or replicates the annotations, and in this way he can actually complete the whole procedure. And this is quite an exciting project. But you can see that one of the issues of using this in the military is that having a robot flying or hovering on top of the patient may flag, may give an indication of, where the positions of our forces are. So maybe we can use the camera that is embedded in the display, in the HoloLens. And that was the subject of the second question that we addressed here. So we came up with an algorithm that can stabilize the image of the patient and deliver that image to the expert. So rather than having an unstabilized view, which is captured through the goggles, the see-through display, we have a stable image, which allows the experts to work in a steady manner. And of course, the motions that we can see are happening because the medic is scanning the area around, and everything that the camera captures is usually displayed or sent to the mentor. So we needed to fix that situation, and this is how we solved this problem, to make sure that the experts don't get motion sickness
from this situation. So we talked about surgical assistance in the operating room, and we talked about the rural setting and the field setting and the use of robots. What's next, then? Well, we looked into ideas to create an intelligent operating room: not just having one robot, but maybe having multiple robots working with the surgical team completely; having ways of assisting the user, the surgeon, not replacing the surgeon or replacing the assistant, but actually assisting them. And how can we do that? Well, we can have cognitive-aware assistance, where based on the cognitive load of the surgeons and nurses, the robots may actually step in; or monitor the operating room status, understanding the dynamics in the operating room, the interactions in the operating room; or detect anomalies from the images that we get from the robotic systems, and maybe, according to those anomalies, deliver smart notifications to the users. So let me show you what we have been doing in this context. For the cognitive-aware assistance, what we have been doing is working with the da Vinci system that we have here at Purdue. We monitor the surgeon, we instrument the surgeon with a bunch of sensors, and among others we can get signals from the brain. We use deep learning architectures, artificial intelligence, to detect and classify in real time whether the user, the surgeon, is under high cognitive load or low cognitive load. And according to the cognitive load, we present assistance. So here you see a robotic vein patch, a suture, and in one case there is low cognitive load: the surgeon is suturing the vein. In the second scenario there is high cognitive load.
Why? Because you see that there is blood circulating in this vein and the bleeding is significant, which causes the surgeon to get into a situation of high cognitive load. And in this context is where we want to present assistance. Now, how do we present assistance? Well, one way, for example, is a robot that can perform laparoscopic surgical skills. In this case we are not doing something very simple; we are doing the peg transfer task, but using autonomy. Now, the interesting part of this is that our system is robust to clutter and bleeding. Of course, detecting shapes and colors in an area with bleeding is extremely difficult, and we came up with algorithms and AI to be able to solve this challenge. Thank you very much. That's it. All right, thank you, Juan. Okay, so next I'm happy to introduce Andy Molnar. Andy is the vice president of business development, Americas, for UVD Robots. He received his bachelor's degree in biochemistry from the University of Reading and an MBA from the University of Bradford. Prior to this position, he was president at Belimed, a leading global supplier of infection control solutions. He also held positions at the Hu-Friedy Manufacturing Company and the Straumann Group, where he was executive vice president and regional head for North America. At Straumann, he essentially orchestrated and supervised the international launch of Straumann MembraGel and the North America launch of Straumann Allograft, in addition to driving growth in the company's oral tissue regeneration products. So today he's going to talk about some really key, critical work that his company, UVD Robots, has been doing recently in light of the COVID-19 pandemic. So, Andy, over to you. Thank you very much, Dave.
I'm just going to organize my screen. So it's a great pleasure to be here with you today on this esteemed panel. I am very much coming from an industry perspective. So let's start with this: the World Health Organization has identified antimicrobial resistance as one of the three most important problems facing human health. They've actually listed 12 priority organisms that are resistant to antibiotics and that cause significant morbidity and mortality, and excessive cost to the healthcare system along the way. The situation is expected to get worse unless new antibiotics can be found, or new methods are introduced within infection prevention to reduce the risk associated with these organisms, which lead to hospital-acquired infections. Now, that is the mission of UVD Robots, and let's start off by talking about the method of efficacy here, which is ultraviolet light, in the form of 254-nanometer ultraviolet-C light very specifically, delivered from the lamps you see on the robot here. This ultraviolet-C light has been known for quite some time to be a very highly effective disinfectant and has been used in many different industrial and laboratory applications, but it is particularly prevalent in hospitals, specifically in the U.S., and so it's widely used in U.S. hospitals. The process is that there needs to be a cleaning step to start with, to clear organic material from surfaces: from the floor, from the sides of the bed and the foot of the bed, and from door handles. So there's either a manual cleaning or an automated cleaning, followed by the disinfection step. And we can achieve extremely high levels of reduction in microorganisms, actually exceeding 99.99 percent, or a log 4 reduction as the microbiologists would describe it. And obviously, in doing that, we're reducing bacterial colonies, and in reducing bacterial colonies, we're reducing the risk of hospital-acquired infections. However, there are some significant enemies of the efficacy of UV-C light: the laws of physics, for example. So we
have two principles here. The first one, at the top, is the principle of shadow. We have a light source on the left-hand side, and we have an object which on the front is receiving a dose of UV-C energy, and on the rear side, in shadow, is receiving no disinfection. Therefore, the light source needs to move in order to be efficacious in relation to objects in a room. The second image, at the bottom, shows a UV-C light source and the impact of distance, because the intensity of UV-C light diminishes over distance according to the inverse square law. So at one meter you've got 100 percent efficacy, at two meters only 25 percent, and at three meters 11 percent. So therefore, the light source needs to get near. Just to recap those two points: to overcome those disadvantages associated with the laws of physics, we need the light source to move and we need the light source to get close. And that is what led my company, UVD Robots, based out of Denmark in Europe, to get the wheels turning and actually invent a new category of autonomous robot with disinfection lamps, or UV-C lamps, which you see in this picture here. So what has been the process so far to date? Because, as I've mentioned already, UV-C has been in the market in the U.S. and in other countries for over 10 years, and it's essentially been a static manual system, like a trolley on wheels with UV lamps, and this causes huge labor inefficiencies, as I'll tell from this story and this image here.
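The two quantitative claims above, the log 4 reduction and the inverse-square falloff of intensity, can be sketched in a few lines. This is a minimal illustration of the arithmetic in the talk, not anything from UVD Robots' software:

```python
import math

def log_reduction(fraction_killed):
    """Log10 reduction for a fractional kill: 99.99% killed -> log 4."""
    return math.log10(1.0 / (1.0 - fraction_killed))

def relative_intensity(distance_m, reference_m=1.0):
    """Fraction of the reference UV-C intensity remaining at a distance,
    per the inverse square law: intensity falls off as 1/d^2."""
    return (reference_m / distance_m) ** 2

print(round(log_reduction(0.9999)))  # 4, i.e. a "log 4" reduction
for d in (1.0, 2.0, 3.0):
    # matches the talk: 100% at 1 m, 25% at 2 m, 11% at 3 m
    print(f"{d:.0f} m: {relative_intensity(d):.0%}")
```

The same functions make it easy to check, for instance, that halving the distance to a surface quadruples the delivered intensity.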
This is the map of one of the largest hospital trusts in the UK, and it's an oncology department. You can see the map of the outer walls of the department, and the blue circles are the disinfection points needed in order to get the right kind of dose at all of the touch points in that space. There are about 45 of them in total. They were using a static system, which you wheel in; you can see it in the picture at the bottom left. You wheel it in, you plug it in to the mains power, you leave the room, it charges up, it does its work, and then you go back in, you unplug it, you reposition it to another one of those circles, and then do the same again, repeatedly. And of course, that's very labor intensive. In this particular oncology department, it would take them 20 hours of manual labor time to do one disinfection, spread over two days. They called us in, and we showed them that we could map the space using the robot to create the map and then set the disinfection points. We chose the same disinfection points, the same number of them, and it did the same job in 90 minutes. And the manual labor time aspect of that was three minutes only, just to set up, and then they could go away whilst the robot was autonomously doing its work, and do something else productive.
So there's a bit of a theme emerging here about helping people to be productive and taking on work which can be menial and labor intensive. I'm going to run this video now; it's an animation of what the workflow is like for the operator. The robot starts from its docking station and makes its way down the corridor autonomously, stopping when people need to pass, until it arrives at the room to be disinfected. The operator opens the doors, and then there's a series of very important steps for safety: confirming to the robot that the room is clear, that there are no obstacles in the pathway that's been set for the robot, and that no people or animals are inside the room. Then the operator presses the button on top of the robot and leaves the room. You put the tablet on the door on a suction cup, so that if the door vibrates because someone tries to enter the room while the lamps are on, it will shut down the lamps. It will automatically tell you when the room has been disinfected. You press OK, and it'll come out of the room, and either it will go to the next room, if you're doing a series of rooms or a whole department, or you put the tablet back in its slot and it'll go back to its docking station. In the meantime, it prepares a report on its successfully completed disinfection, which is made available in the cloud to view. So the robot's back at its docking station and the task is complete.
So I'm just going to skip forward to my last slide, which is actually another video, because I think you really need to see the robot in operation to appreciate it. This answers a question from a proactive theatre manager at the same hospital we looked at earlier: what would be the impact on surgical site infections if we could disinfect between OR operations, and could we do it quickly enough for that? This is a routine theatre disinfection. The robot's been put into the room as I previously showed you. You can see a map on the top right-hand side showing you the layout of the room, and the lamps are warming up. These are low-pressure mercury lamps; there are eight of them, at 180 watts of power each. When it completes that warm-up, it'll move to its first disinfection point, and I'll just show it moving autonomously to that location, which is just in the doorway of the anesthetic room. I want to point out to you a little yellow sticker in the middle, towards the lower end of the screen, on the end of the OR table. That's a photochromatic dosimeter, which is a way to show the effectiveness of the UV disinfection process. As the robot moves past the head of that table, it'll turn that small disc purple, which indicates the highest dose that it can indicate, greater than 100 millijoules per square centimeter, more than enough to kill or totally deactivate almost all microorganisms and bacteria that you might expect in that room. I'm going to fast forward, just for the sake of time here. There's a little counter on the right-hand side that shows you the time that the robot has actually taken. It's now moved around the end of the table and into the scrub room; I'll forward it further. It's coming back now, and as it comes across the head of the table you can see that the sticker that was previously yellow is now purple, as it's done its work.
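As a rough sketch of the dosimetry being described: at constant irradiance, UV-C dose is simply irradiance multiplied by exposure time. The irradiance figure below is made up purely for illustration; only the 100 mJ/cm² dosimeter threshold comes from the talk:

```python
def uv_dose_mj_per_cm2(irradiance_mw_per_cm2, exposure_s):
    """UV-C dose at constant irradiance: 1 mW/cm^2 for 1 s = 1 mJ/cm^2."""
    return irradiance_mw_per_cm2 * exposure_s

# Highest level the photochromatic disc indicates, per the talk (mJ/cm^2).
DOSIMETER_THRESHOLD = 100.0

# Hypothetical irradiance of 0.5 mW/cm^2 at the tabletop: the dosimeter
# threshold would be crossed after 200 seconds of exposure at that spot.
dose = uv_dose_mj_per_cm2(0.5, 200)
print(dose, dose >= DOSIMETER_THRESHOLD)  # 100.0 True
```

In practice the robot moves, so the irradiance at any surface point varies over time and the dose would be an integral rather than a single product, but the unit arithmetic is the same.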
Now it's disinfecting surfaces and it's disinfecting the air, and it's surprising: there's a lot of dust and dead skin cells in the air that get shed frequently, and bacteria do live on those, and it completely deactivates those as well. So I'll forward to the very end here: eight minutes and 12 seconds, and a completely sanitized and disinfected room for the next procedure. Thank you very much. Thank you, Andy. Great talk. Okay, so next I'm happy to introduce Laura Blumenschein. Laura is an assistant professor of mechanical engineering here at Purdue University. She received her PhD in mechanical engineering from Stanford in 2019. Her research focuses on creating effective and robust soft robotic systems by bridging the gaps between theoretical modeling, experimental verification, and application, with a particular focus on wearable soft robots and soft robots inspired by plants. And she's going to talk about human movement and wearable robots for some interesting healthcare applications. Thank you, Laura. Thank you, Dave. Let me just share my screen here. Okay, so yeah, thank you for that introduction. As Dave said, I will be talking about a kind of different application for robotics in the context of healthcare, focusing more on the collaboration directly between robots and patients, and on how wearable robotics can help with human movement. So rehabilitation is a huge sector of our healthcare system, and people who need motor rehabilitation specifically are affected by a wide range of different diseases and conditions. So what can we say about this population? Every year there are almost 800,000 people in the United States alone who have a stroke, and about 600,000 of these are first-time strokes. Stroke leads to the highest reduction of mobility in the population 65 and over and is the leading cause of serious long-term disability. So why does this matter?
These are conditions that affect how well people can do activities in their daily life, and they affect not just elderly populations. Other major causes span the age ranges: traumatic brain injury often affects younger populations, with about 280,000 traumatic brain injuries resulting in hospitalizations per year, and spinal cord injury is also a leading cause of motor impairment. So motor impairment comes in many different forms, but oftentimes, in the cases of stroke or other causes, the important thing here is loss of either lower-limb function or upper-limb function, which are really the areas these impaired populations are most interested in restoring in order to resume everyday living. So how are we addressing this right now? Which is, I think, the starting place for understanding how we can get robotics into this problem. For existing rehab solutions, the biggest segment is obviously physical therapy, and this involves personal interaction between physical or occupational therapists and the impaired patient. It's primarily interventional, meaning it's most highly focused in the days immediately following a stroke. This means that oftentimes these therapies are not focused on the higher goal of fully regaining function for the patient, but on getting them to the point where they have regained some of their function, and usually the most function they're going to regain comes in those first few days. It's not really focused on long-term care. And as I said, the availability tapers off over time, so there's little availability outside the hospital setting for these therapies. Another option that has come into prevalence recently, and has been a big focus of research in a lot of labs, has been exoskeleton technologies.
The idea here is that these technologies are able to replicate the high force or torque control capabilities that you would see with a physical therapist physically moving the patient through a set of stretches or movements that allow them to complete their rehab. The difficulty has been that while a ton of exoskeletons have been developed, there is a non-intuitive link between the robot's kinematics and natural human kinematics. And so these robots often end up constraining natural human kinematics in ways that can be uncomfortable or hard to adapt to different users. Similarly, they need expert supervision to use. This means both that it is difficult to train physical therapists in the use of these devices, and that you often need engineers on hand in case something breaks down at this point. And these are not devices, in any sense of the term, that patients could take and use in their home for personalized therapy. The dream of a robotic solution to this physical therapy and rehab problem is not necessarily to get rid of the physical therapist, but to allow physical therapy to take place in these other settings that aren't being covered right now. And so this is where some of my work comes in, which is the development of soft wearable exosuits that let us target the part of rehab that is really not being leveraged right now: the everyday motions that these patients are going through, helping to encourage patients to work rehab into their everyday lives. And so I like to think of soft wearable exosuits as maybe not bringing the same level of rehab care that you would see from a personal interaction with a physical therapist, but bringing distributed and personalized devices that can be taken anywhere the user is. So, soft: why do we want to talk about soft?
Soft specifically means that we are looking at compliance in non-actuated degrees of freedom, which makes these devices inherently less constraining on the user. The forces of these devices are also all being routed through the human skeleton, which means that the forces will always stay at or below human capabilities, for safety's sake. The result, if we can design these exosuits well, is more mobile and less expensive rehabilitation solutions, though the compliance can create challenges in terms of sensing and control, which I'll talk about in the next few slides. What I've shown here on the right is an exosuit that I collaborated with NASA on, called the Armstrong, which is a soft wearable shoulder exoskeleton aimed at helping rehabilitation after traumatic brain injury. You can see that it is essentially in the form factor of a shirt, with a few rigid components to help support forces, and a cable drive across the joints in order to provide the support forces. So the interesting thing here is understanding what we should be focusing on in terms of our control and sensing strategies for these devices. So I wanted to dig down into a few problems that I think highlight some of the interesting solutions that we have to come up with when we're talking about rehabilitation devices, and distributed rehabilitation devices in particular. One of these is the question of what kind of control strategies we are really looking at. To start to answer that question, we want to ask: what is the human body already doing? Because we can take a lot of inspiration from the capabilities of the human upper limb in order to understand what we should target. For example, the maximum joint stiffness of the human arm at the elbow is about 15 newton-meters per radian. It's not really important to understand what this means exactly.
But what is important is that this gives us a target for our passive device compliance in the Armstrong device, in order to figure out what it should feel like in its operational situation. To target this in the Armstrong device, we use a technique called series elastic actuation, which puts a literal compliant element, in this case a spring, in line with the actuation cable, between the motor and whatever joint is at the user interface, in order to provide a baseline compliance which can then be controlled around. In this case, just to show some results since we're low on time, we can use this spring in series with the cable-conduit drive in order to change the apparent stiffness of the system, between a much higher stiffness on the bottom here and a lower stiffness on the top. There are some existing challenges that remain, based on the friction in this cable drive, in that the actual rendered stiffness on the user's side has a wide friction band in it. But we are getting the apparent stiffness that we are aiming for, with this friction on top of it. So to finish up this talk, I want to talk about a few future areas that I am interested in exploring, and that I think are interesting in this space of distributed wearable technologies. The first is some other work that is not directly in line with rehabilitation robots, but is in line with wearable robots. I always preface this video by saying it is a little creepy looking, but this is in my realm of plant-inspired robots. It is called HapWRAP, and it is a wearable haptic device. The interesting thing here is the deployability of the device. Playing that again, you can see that the device starts in a rather small form factor and then it grows to wrap around the arm and conform to the arm shape of the user.
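Returning to series elastic actuation for a moment: one simple way to reason about it is as two stiffnesses in series, the physical spring and whatever stiffness the motor controller renders, so the spring caps the maximum stiffness the user can ever feel. Here is a minimal sketch under that idealized series model; the numbers are illustrative, not Armstrong parameters, and only the ~15 N·m/rad elbow figure comes from the talk:

```python
def series_stiffness(k_spring, k_controller):
    """Effective stiffness (N*m/rad) of a physical spring in series with a
    controller-rendered stiffness; the physical spring bounds it from above."""
    return (k_spring * k_controller) / (k_spring + k_controller)

# Spring chosen to match the human elbow stiffness target from the talk.
K_SPRING = 15.0  # N*m/rad

# The controller varies its rendered stiffness to change what the user feels.
print(round(series_stiffness(K_SPRING, 60.0), 1))   # 12.0, a softer setting
print(round(series_stiffness(K_SPRING, 600.0), 1))  # 14.6, a stiffer setting
```

Note that even a very stiff controller setting only approaches, and never exceeds, the 15 N·m/rad of the physical spring, which is one reason this architecture is attractive for safety around people.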
The device provides haptic feedback, haptic being the sense of touch, through these pressure cues on demand. In the context of a wearable assistive system, you obviously don't want to be in the rehab mindset all the time, but if we can create a deployable wearable device for rehab, perhaps this is a device that a user can wear in their everyday life and only turn on when they need assistance, or when they have time to commit to exercises for their rehabilitation. To end the talk, I want to mention a few future directions that I think are interesting in this really new field of wearable and rehabilitation robots, and especially soft wearable rehabilitation robots. There are a lot of existing challenges in design. As I said, deployability is an interesting potential feature that we could add to these wearable robots. But beyond that, there are even more basic design challenges in the effective interfacing of compliant materials with the human body. When you're dealing with a rigid robot, it's rather easy to think about just clamping onto the human body to directly transfer those forces. With compliant systems, it can be more difficult to get a comfortable interface. But the compliance is potentially also to our benefit here: you don't often think of a rigid body suit as comfortable; most of us are wearing nice, compliant clothing. And that's the realm we want to aim at for our wearable robotic systems. The other challenge is low-profile actuation for all-day wearability and low encumbrance. Here, again, we want a device that someone could take with them wherever they're going, both for assistance and for rehab on the go. As I said, control strategies for everyday function can also be a challenge in the realm of compliant systems.
One of these specifically is sensing and determining human intent for non-repetitive behaviors. It's rather easy to sense human intent for something as repetitive as the gait cycle in walking. It is much more difficult when you have to distinguish whether the user is going to brush their hair or is attempting to move a spoon or something else to their mouth. Attempting to help with everyday activities like that requires us to figure out control strategies that determine, and assist with, different human behaviors based on human intent. And last, this was not really the focus of my previous work, but I think it is an interesting problem in the realm of health care as well: prevention as well as intervention. Injury reduction is also an area where exosuits or other supportive assistive devices could help a lot. In the realm of manual labor, something like an active back support or active arm support could help reduce injuries that also lead to limited mobility. So thanks for your time, and I just want to end with this question: how do we support, rehabilitate and assist activities of daily life in a way that meets the needs of physical therapists and impaired populations? Thanks. Thanks, Laura. Great talk. Okay, so next we have Ryan Bernhardt and Paul Anderson from the Eli Lilly Company. Ryan will speak first. He's been at Eli Lilly in Indianapolis for the past six years, working as part of the Discovery Automation Research and Technologies Group. He's been leading a team of automation engineers and scientists on the design, implementation and operation of a variety of innovative automation projects across the research and development labs. He's contributed quite a bit as integration leader for Lilly's Life Sciences Studio and has received a couple of awards for his work there.
Most recently, Ryan has led the Discovery Automation Group in the rapid implementation of robotics for Lilly's COVID-19 testing effort, enabling the automated processing of thousands of samples per day for detection. That's going to be the topic of his talk today. Ryan received his BS in Chemistry from Marian University, with a focus on green and sustainable science in industry. Paul will follow after Ryan. Paul Anderson received his BS in Chemical Engineering from UC Santa Barbara and then went on to Case Western Reserve University for his master's in Biomedical Engineering. From 2005 to 2018, he worked in the Advanced Automation Technologies Group at the Genomics Institute of the Novartis Research Foundation, better known as GNF. He joined Eli Lilly in San Diego in the summer of 2018, and he has been helping support and build out the automated platforms for Lilly's biologics discovery efforts. He'll be talking about robotics and automation for drug discovery. So, Ryan and Paul, over to you. Thank you very much. As Dave said, today we're going to be talking about the role of automation in Lilly's COVID-19 response. I'll be talking about Lilly's COVID-19 testing effort, which some in the state of Indiana may have heard of, and Paul Anderson will be talking about Lilly's COVID-19 antibody treatment effort. At the onset of the pandemic in the United States in March, many people were hearing about the coronavirus, or COVID-19, and it quickly became known to Lilly that there was going to be a shortage of testing. Our president of research asked us to leverage one of the unique positions we had in the industry, our own clinical diagnostic lab, to do some sort of testing for our employees and families. At the same time, the Indiana State Department of Health also recognized that there was a clear shortage of testing.
So they came to Lilly asking if there was a way to collaborate and make testing readily available for patients in the state of Indiana and even beyond. What were we testing? We were testing for the presence of the SARS-CoV-2 virus. There are a couple of different ways you can test for this. One is known as PCR, which stands for polymerase chain reaction, and it's essentially focused on the detection of a certain organism's RNA. It involves RNA extraction and then amplification of that RNA to a detectable level, above a threshold. Another mechanism you may have heard of is called an ELISA, which stands for enzyme-linked immunosorbent assay. An ELISA is based on a blood sample, plasma or serum, and it's really a detection of your body's response: the generation of an antibody in response to the presence of a virus. So the ELISA-based antibody test works well for historical detection, whereas PCR is a test you would do to detect the active presence of the virus. And fortunately, it works very well with samples that are a little easier to get than blood: saliva, nasal swabs, throat swabs, nasopharyngeal swabs. So how were we testing? We set out to test for active presence through PCR testing. We tried things like saliva testing, spitting into a tube. We tried nasal swabs at just the very opening of the nose. We ultimately landed on the nasopharyngeal swab, essentially a nasal swab that's a little longer and goes farther back into the nasal cavity. The reason was that it was really the most accurate form of testing. And with this type of PCR test, where we're detecting the presence of RNA, it lends itself very well to a drive-through testing mode.
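The amplification idea behind PCR can be made concrete with a little arithmetic: each thermal cycle roughly doubles the target sequence, so even a handful of starting copies crosses a detection threshold after enough cycles. The threshold and copy numbers below are illustrative assumptions, not values from Lilly's assay.

```python
# Sketch of why PCR detection works: each cycle roughly doubles the
# target, so the cycle at which the signal crosses a threshold reflects
# how much material you started with. All numbers are assumptions.
import math

DETECTION_THRESHOLD = 1e10  # copies needed for a detectable signal (assumed)

def cycles_to_detect(starting_copies: float) -> int:
    """Smallest number of doubling cycles that crosses the threshold."""
    return math.ceil(math.log2(DETECTION_THRESHOLD / starting_copies))
```

With these assumed numbers, a swab carrying 100 viral RNA copies would need 27 cycles to become detectable, while a million copies would need only 14, which is why the crossing cycle doubles as a rough measure of viral load.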
So Lilly set up drive-through testing that was open to employees, family members, healthcare workers, essential workers, those most vulnerable in the community, as well as first responders, to really open this drive-through up to the state of Indiana. In addition, Lilly also began running testing for other testing sites, for the Indiana State Department of Health and for local hospitals. March 14th is when the automation group was really called into this. This is a look at the process that we mapped out. First and foremost, there was a shortage of viral transport media. That's the medium into which, after you do the swab in the nasal cavity, you put the swab to keep it safe until testing. We thought we could just go out and buy this. Unfortunately, there was a shortage of this media. There was also a shortage of pre-filled tubes. So we needed a way to make our own viral transport media in a sterile environment and then fill it into test tubes. Many of those were turned into testing kits that went to Lilly's drive-through, and others were sent as tubes to testing sites around the greater state of Indiana. Once a patient had been tested, those inoculated tubes would come back to Lilly's clinical diagnostic lab by the hundreds and thousands. Here's a look at what they looked like as they were coming back. Once those patient tubes came back, we needed to transfer from the sample tubes into plates, perform a cell lysis and then extract the RNA from the sample. We needed to set up a qPCR reaction plate against controls and known positives. Then we needed to perform the qPCR analysis. And finally, we needed to be able to review and provide patient results back to the patient as well as the state of Indiana. The timing of all this is critical.
As many people are aware, with the quarantine periods and the potential infection rate, the sooner you know, the better you can react and handle things moving forward. Our projected number early on was a few hundred samples. When we partnered with the ISDH, it went to 1,000, and then very quickly after that to 5,000, and then to potentially needing to run over 15,000 patient samples a day. The clinical diagnostic lab at Lilly before this was almost exclusively a manual, lab-based operation, used to accommodating maybe 50 samples on a very productive day. As you can see, these numbers could not be handled at that scale, so my group got involved, and in a matter of four days we automated this entire process, with a total of 27 robots that we put in place. In yellow, you can see the number of robots for each step of the process. But I want to take a few minutes to walk you through what this looked like. I want to start with the automated viral transport media filling solution. We took automated liquid handlers that were already being leveraged across Lilly's research and development labs, and we converted them into completely sterile environments by adding HEPA hoods to blow clean air. We had UV sterilization to sterilize the deck, much like the cleaning robot in the hospitals that we just saw. And then we used the automated liquid handler with a media pumping solution to pump bulk media in a sterile manner up to the deck, providing the ability to automate the filling of tens of thousands of tubes in a day with three milliliters of media each. Here's a video that shows the operation. The media fill pump was automated; it uses optic sensors to detect when the media runs out.
Overall, in the VTM production process, we filled over 415,000 VTM tubes and made over 250,000 test kits, across five different tube types. We've also been able to handle Lilly's clinical trials on some of the antibody treatment as well. Our average daily capacity was about 10,000 tubes filled on a daily basis, and this really came down to making tubes faster than the state could provide testing. As part of this whole operation, we've had a 100% pass rate through our quality control: every single batch of media has been successfully made in a sterile manner. We've also leveraged our robotic expertise and set up VTM filling operations for many other sites around the state of Indiana, through both public and private partnerships. Now I just want to spend the last couple of minutes highlighting the robotic solutions on the back end. Once the patients were tested, the samples would come back, and this is really where we had to run the testing. The first stop in patient testing was successfully transferring patient samples from those tubes containing swabs to the extraction plates. The picture on the right highlights March 18th, when we were running the first live samples for the Indiana State Department of Health. We literally got this solution up and running, with repurposed robots and newly programmed systems, in a matter of four days. We didn't get much sleep. One of the real critical parts is that everyone typically thinks about the physical automation, the robots and that sort of thing, but automation also includes digital automation. In this case, we were using automated barcode scanners to scan in every patient sample, and essentially that created the accessioning chain, the chain of custody for that sample, all the way through the entire process to the point where the results came out the back end of the analyzer.
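The digital accessioning chain Ryan describes can be pictured as a barcode-keyed, timestamped log. The sketch below is a hypothetical structure for illustration, not Lilly's actual laboratory information system; the class, step names, and barcode format are all assumptions.

```python
# Minimal sketch of a barcode-driven sample accessioning log, showing
# the "digital automation" idea: every scan appends a timestamped step,
# giving a chain of custody from receipt to result. Names are hypothetical.
from datetime import datetime, timezone

class AccessionLog:
    def __init__(self):
        self._chain = {}  # barcode -> list of (step, timestamp)

    def scan(self, barcode: str, step: str) -> None:
        """Record that a sample passed a process step."""
        self._chain.setdefault(barcode, []).append(
            (step, datetime.now(timezone.utc)))

    def history(self, barcode: str) -> list:
        """Ordered steps a sample has passed through."""
        return [step for step, _ in self._chain.get(barcode, [])]

# One sample flowing through the pipeline described in the talk:
log = AccessionLog()
for step in ("received", "tube_to_plate", "rna_extraction",
             "qpcr_setup", "qpcr_analysis", "result_reported"):
    log.scan("PAT-0001", step)
```

The point of such a log is that any result coming off the analyzer can be traced back, scan by scan, to the tube it came from.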
The other thing we had to do was use intelligent liquid handlers to successfully pipette these patient samples, where in some cases you had one, two, sometimes even three swabs put into these tubes, so we had to have intelligent robots that could pipette around that. This is a look at the next stop, RNA extraction. We used two different mechanisms: a filter-based RNA extraction chemistry setup, and magnetic-bead-based RNA extraction solutions. In the latter case, we're capturing the RNA on magnetic beads, we're able to purify it, and then we can provide an elution reagent to elute the RNA off of those beads for final collection. The next part of the process was setting up the qPCR reaction plate. In this case, we would take each sample from a 96-well purified RNA plate, and we would pipette it out against three different genes. We have one control, or negative, gene, and then two positive genes. In order for something to be a clear positive, it had to cross the threshold on both positive genes, and that provided an extra layer of accuracy to our testing process. The final step, once this plate was prepared: we set up eight QuantStudio analyzers and automated the PCR analysis. What came out of the machine at the end looked like this. What you see here is a negative, where only the control shows, and in the case of a positive, both the control and the two positives are there. All in all, since March 18th, Lilly's COVID-19 testing effort has run over 82,000 patient samples, 6,000 of those for our clinical studies. And a remarkable thing is that we have an average turnaround time of under 20 hours.
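The two-positive-gene rule described above maps to simple decision logic. The sketch below is illustrative only: the Ct cutoff, the treatment of a failed control as invalid, and the "inconclusive" category for a single detected gene are assumptions, not Lilly's validated assay rules.

```python
# Sketch of the "both positive genes must cross threshold" call logic.
# Ct (cycle threshold) cutoff and categories are hypothetical.

CT_CUTOFF = 37.0  # assumed: amplification at or below this Ct = detected

def call_sample(ct_control: float, ct_gene1: float, ct_gene2: float) -> str:
    """Classify one well. The control must amplify for the run to be
    valid, and BOTH target genes must cross threshold for a clear positive."""
    if ct_control > CT_CUTOFF:
        return "invalid"          # control failed; rerun the sample
    g1 = ct_gene1 <= CT_CUTOFF
    g2 = ct_gene2 <= CT_CUTOFF
    if g1 and g2:
        return "positive"
    if g1 or g2:
        return "inconclusive"     # only one target detected
    return "negative"
```

Requiring both targets, rather than either one, is what gives the extra layer of accuracy Ryan mentions: a spurious amplification on a single gene does not produce a positive call.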
In comparison, some of the other leading diagnostic companies around the country were, at this time in early March, at eight, nine, ten, eleven days of turnaround. So we're really proud of that. And with the 27 robots, we had a daily automated capacity to run up to 10,000 samples per day. With that, I'll turn it over to Paul for Lilly's COVID-19 treatment effort. Thanks, Ryan. So you're going to click through these for me. Yeah. Okay. Just a little bit of background to set the stage for the automation that we've built out in San Diego for our antibody discovery efforts. We start with the immune system. The first line of defense is the innate immune system: skin, mucus, tears, things like that provide your first line of defense against pathogens and infections. If you click to the next screen, please. If something makes it past that and actually invades your body, the adaptive immune response kicks into action. That's several different cell types working in concert with one another to fight infections. Click through one more time. What we're going to talk about mainly is the B cell. The B cell is unique in that each B cell generates a single antibody. It creates a large amount of that antibody, but it provides one antibody to fight an infection. So when you talk about COVID-19 patients and convalescent serum, what you're really transferring from a recovered patient into an infected patient is the antibodies produced by those B cells, which can go into a new patient and help fight an infection. What we're going to start with is taking those B cells and using them as our starting point for drug discovery. Click on the next, thanks. So we isolate those B cells from whatever host we have; we have several types of animal models.
Our COVID-19 work was actually done with our collaborator, AbCellera, who had access to one of the early recovered patients in North America. The process was a little bit different for that collaboration, but I'll describe what we typically do here. We would typically sort single B cells into wells of a microtiter plate, typically 96-well plates, so you have 96 different B cells in that plate, and we'll run dozens of plates through this process. We have several automated platforms that we use to generate our lead molecules. The first platform is a molecular cloning system, where we're taking that B cell and using a PCR reaction similar to the one Ryan talked about, where they were looking for the presence of viral RNA. What we're doing is basically chopping out the DNA for the antibody from that B cell. That's done in the molecular cloning system. Then we take that snippet of DNA and put it into some E. coli cells, which replicate and make large quantities of DNA. We take those plates to a DNA purification system, where we purify a large amount of the DNA that codes for the antibody we're after. We take that DNA and put it into our protein expression system, which you can really think of as an antibody expression system. The DNA is added to some cells along with some other reagents, and the cells basically take up the DNA and translate it into antibody. They go through that process over about a week, and through that week they're essentially churning out a good quantity of antibody. The next system in the chain is a quantification cherry-pick, where we spin those cells down and basically throw the cells away, because the cells were just used as a little factory to make the antibody, and the antibody is secreted, so it's in the liquid surrounding the cells. The cells are of no more use to us.
We measure how much antibody we've produced, and then we cherry-pick the ones that have a decent amount to move forward. The next system is an assay-ready dose-response system. Basically, once we have a decent amount of antibody to test, we use this one to make plates that have different concentrations of that antibody for testing. And the final one we'll talk about is our assay system. The first five systems on that list are largely set-and-forget: they run a fixed process, doing the same kind of thing regardless of the project, and you can leave them running. The assay system is a significantly larger system with a lot of instrumentation that allows us to run all kinds of different screens, or assays. We can run binding assays. We can run cell-based functional assays. We can run some very complex multi-parametric screens with the integrated flow cytometer. We have everything on here to fit the needs of any specific project. What you see in the video on the right is actual footage from running some of these screens a few weeks back. We leverage these robots on rails to move plates around the system, to add liquid, to wash plates and cells, and to run different types of readers depending on the type of experiment. This is something we built in November of last year, and we started using it really in December and January, so we had just a few runs under our belt by the time COVID-19 hit. I would say the lion's share of the work run on the system since we built it late last year has been for our COVID-19 discovery efforts. So it was timely that we got the system in place a little less than a year ago. You can go to the next slide. One of the types of projects that we run generates results that look like this. You can click through one more time.
These are 384-well plates. When I talked about making dose responses and running at different concentrations: every two-by-two block of samples here is one sample run at four different concentrations, and the size of the square is essentially how strongly it's binding. As you go down to lower doses, you want to see the square stay large; that means you have something with very high affinity for the target of interest. Up in plate A2, it's a little bit small, but you can see there's a quite strong hit that stays high affinity all the way down the dose-response curve. So we're able to generate results like this, process the data, and really turn around and pick hits for our COVID-19 effort. It was kind of remarkable. We'd have samples coming off, we'd run the screen, we'd have a late call at night to process those hits, get someone in there in the evening to start cherry-picking samples, and get them onto the next device late that night, so those systems would run overnight and we could meet early the next morning to talk about results from the next step. It was a flat-out, brute-force effort to push these projects through over the last several months. You can click through one more time. So that's, again, just a high-level overview of the several systems we have in place. Any questions, you can put in the Q&A. And that's all we have. Okay, great. Thank you. Thanks everyone for all the great talks. So now we're going to open up the Q&A. Again, please type your questions in the Q&A box instead of the chat box; it's easier to manage in the Q&A section. It looks like we have a couple already, so I'm just going to start off. The first one would be for Andy. The question is: what happens if the UV light breaks as it goes around the room? Is there potential for gases to contaminate the area?
Yeah, there are a couple of good questions there for me to pick up on. The first thing to say is that we don't have the UV lamps break as the robot goes around the room, for some important reasons. We have a lot of safety mechanisms in place on the robot: three 3D cameras at the top of the tower, and LiDAR sensors at the bottom. That enables it to avoid objects, which would typically be an overhang of a surface or something sticking out. When I showed the animation, it showed the user going through a checklist, and one of the checklist items was to ensure that there are no obstacles in the way of the robot's chosen path. That's an important step to ensure it has the freedom to move in the area it will be navigating. So we don't have that happen. But if it were to happen, and it's typically not in operation, more likely in transportation than any other time, we'd follow the guidelines and recommendations of the manufacturer, which are to aerate the room, typically five to ten minutes with windows or doors open, to allow mercury vapor to escape, and then to carefully clean up and dispose of the broken glass. And these bulbs contain the industry's lowest amount of mercury, so it isn't a big deal. It hasn't happened, but if it were to happen, there's a good safety procedure to follow. Great, thanks, Andy. And I guess a follow-up to that was: there's evidence that coronavirus can stay on walls. What about the areas of the office wall covered by instruments? Yeah, I think it's fair to say that UV light, whether from our robot or any other source, will reach surfaces that are in direct line of sight. The thing with our autonomous robot is that it creates the best opportunity to get to every surface, from behind and in front, because it's moving in so many different directions.
And obviously the lamps are in a 360-degree orientation as well, so it will reach the walls. I maybe didn't explicitly say it, but COVID-19 is completely deactivated by our lamps at 22 millijoules per square centimeter; that was a study done at Boston University with these lamps. And that's actually a very low dose, much lower than what it takes to deactivate bacteria, including antibiotic-resistant bacteria. And we can do it at a drive-by speed, which is 10 centimeters a second. It's quite a slow speed, but even at drive-by speed we're deactivating COVID in the air and on surfaces. So it will treat surfaces, and obviously what we're trying to do is deactivate it in high-touch areas and on surfaces where people might be. It will also reach quite high up: the lamps are about six feet off the ground, so at an angle they will reach about eight feet up on surfaces. As for something obstructing that, and what's behind that surface: if an instrument sits some distance away from the wall, then as the robot moves around, it will get behind those instruments. If the instrument is fixed to the wall, it obviously doesn't need to get behind the mounting point, just around the base of the instrument and the wall itself. Wherever there's not a shadow behind it, the light is going to reach it. Okay. Thanks, Andy. The next question is a more general one for the panel: how might the innovation of robotics in the medical industry flow over into other industries that have not yet seen robotic implementation? Anyone want to take a stab at that? I can go first. A lot of this robotics, at least what Laura and the other panelists were talking about, and Andy too, is robots and humans actually working together and collaborating. And I think a huge part of the business of other industries can benefit from this human-robot collaboration.
You've seen that in manufacturing now, more and more, and I would expect it to happen in other industries as well. Anybody else want to comment on that? I'd say we're seeing a lot of spillover. I talked about a bunch of islands of automation in the lab automation space, and companies are coming out with something similar to what Andy's talking about: mobile robots that have arms, alongside mobile workstations, incubators, hotels and different devices. It's something we're looking at in my group in particular, exploring how we can connect all these devices, possibly with mobile robots in the lab, in the next few years. So we see a lot of stuff trickle over to us from the Amazons and from the manufacturing side of the business. As far as going over to other industries, I think the more exposure and the more case studies people see, and the more cross-pollination of disciplines, the more we'll see these things starting to get picked up in different areas. Yeah. Yeah, and I think this pandemic, with people having to be at home, not able to be in the labs in our industry, or in other industries not able to physically be somewhere, has really expanded and opened minds about how we can get work done if we can't physically be somewhere. One of the things that has continued to work, despite people having to quarantine or social distance, is the robots: they've continued to do work without fear of catching a virus. And so we're seeing more of that. As Paul mentioned, even in our own industry, where we like to think automation is already deployed readily, we're still seeing more and more researchers and areas of the business, now more than ever, looking at how they can implement more automation so that we're more productive.
You know, we still have restrictions right now on how many people can be in the lab at any given time; it's alternating shifts. So it's all about productivity: how can we still get our work done if we can't physically be there to do it? Yeah, thanks. Okay, next question: for surgical applications, and particularly in rehab, please speak to the issue of safety and reliability. Hardware can fail, and algorithms often have unexpected behavior despite the best efforts of the implementer. AI for driverless cars has limitations in anticipating the behavior of humans in its path; I would think surgical assistants would have similar challenges. I can speak to this briefly from the rehab aspect, and I guess Juan can speak about the surgical aspect. From the rehab point of view, this is a wonderful question, and it's something that sits in that intermediary stage between the design and research, and the actual adoption of these technologies by the users who can benefit from them. That question is: how much do humans trust these systems, and how safe are they? On the issue of safety and reliability, I think we are making much bigger leaps forward, especially with soft wearable systems and compliant systems; issues of actually creating safety problems are not as big as you might think, but the perception of safety is still a very big issue, and that can lead to a lot of difficulty with acceptance of these devices. From my point of view, what that often means is that robots essentially have to be transparent in their intentions to humans, and easily readable, because whether you intend it or not, humans are going to read intention into your robot. So designing with that in mind is really important for anything that directly interacts with a person. Thanks, Laura. Did you want to comment? Thank you. Yes.
I want to second Laura's opinion on readability and transparency. When you look at a surgical team working together, there is this issue of transparency: because we understand body language, we can anticipate behaviors, and we understand the dynamics in the team. The moment you add robots to the team, this is not so clear. By definition, robots are not transparent; they are unpredictable in a way, or at least from the operator's point of view. When the robots are fully teleoperated, there is not much of a problem. But when you start granting autonomy, the question is what to expect. So there is a lot of work and research in directions that grant this transparency to autonomous robots, for example by using augmented reality to show you what is going to happen and how the robots are going to use the surgical space. And you can match that expectation, or you can reject it and keep your mode of work, and the robot will have to adapt to you. But in some way, having this idea of giving you cues about what's going to happen, from the point of view of the robot, is essential for these dynamics to be fluent and natural. Great. Thanks, Juan. Okay, I'm going to go to the next question here. This one is for any panelist: how is the concept of ethics being woven into the conception of these systems, like ensuring materials are environmentally friendly, putting emphasis on access for low-income patients, et cetera? Anyone have any thoughts on that? I guess one of the things we've been doing, especially in the research and development space, and this may also tie into the last question a bit: Paul and I really have a big push on the next generation of robots, more collaborative robots that are able to work hand in hand with people.
And from a safety standpoint, that's one of the things we're thinking about when it comes to the ethics of putting these robots in a place where they could potentially harm someone. So we're doing more and more where we truly want the experience between the researcher and the robot to be one of collaboration, both from a productivity standpoint and from that safety standpoint. Yeah. And I would also say, the faster you can automate the systems, the cheaper they become, right? So these drug developments, and also the testing, can then be very low-cost systems. As to the environmentally friendly side of things, it's kind of challenging, because with a lot of the samples we work with, we typically can't use tips or other consumables multiple times. But there are companies coming out, Grenova is one that we've explored, that are built around washing and reusing tips and plastics, and their whole mantra is reducing waste streams. We've explored that but haven't implemented it yet at our site in San Diego, though I know several companies are starting to take it up, and there are areas where it really is effective. When you do things like what Ryan and I were describing, with PCR, those are amplification reactions, so they're extremely sensitive to contamination. So there are other areas in which it probably makes a lot more sense to be washing tips and reusing plastics like that. We divert waste streams, but we don't do too much on the tip recycling front yet. Yeah. And in regards to making these available in low-cost scenarios and to the underprivileged: as part of Lilly's COVID-19 testing effort, we didn't charge for any of that testing. That was something we did to give back to the community.
And in addition to the ISDH, we've also worked with the Marion County Health Department to ensure we're getting testing to some of the harder-to-reach places as well. So we were able to deploy automation in that type of setting. Great. Thanks. Okay. Next question: how close are we to having a robot provide some basic operational support, like intraoperative suction and irrigation? What's the major hurdle? Maybe Juan? I think that we're very close. There is a question regarding the science, and there is a question regarding the market, right? In terms of the science, I think the science is there. In our group in particular, we have a system that can do suction in an autonomous manner, and it can help based on the surgeon's cognitive load as well, working with the da Vinci robotic system autonomously. Irrigation is pretty similar in its level of complexity. Now, the problem is more related to the market and to standards and approvals, right? It's very difficult to get a robot accepted in the operating room setting. Intuitive Surgical, for example, has done a huge amount of work over several years getting these approvals, and in the end they have them, but getting a robot into the OR is very, very difficult. So as I said, in addition to the science itself, which is an interesting question, I think those tasks we can do autonomously; there is the question of FDA approval and medical needs. And then there are of course the societal and work-environment-related questions. Right now, some of the tasks you mention are done by an assistant to the da Vinci robot. So what's going to happen if we have another robot that is going to do that task? What is the surgical assistant going to do?
How are the dynamics going to flow? Is there going to be concern about robots taking over? Those are more cultural, or so to say sociological, questions that we need to address, and they are important as well. And then there are of course the ethical questions related to that. I hope that answers it; it was a long answer. Yeah, thanks, Juan. Okay. Another question for Andy. These are two similar questions, so I'm going to merge them: is there any potential harm from the UV light to fixed operating room medical equipment or other materials that are commonly found there? That's a good question. As much as we are concerned to ensure high efficacy, we also clearly want to avoid material degradation. With UVC light used repeatedly, in close proximity, for long periods of time, the kind of material degradation you can see is discoloration and the potential for micro-fractures on plastics, et cetera. So with our robot, whenever we go into any hospital environment, the first thing you find out is what target organisms you are trying to deactivate, and we know what doses are required for that. We then plan the disinfection points: where the robot pauses, how long it pauses, and how far it is from different surfaces, in accordance with the target organisms we are trying to deactivate. Let's take COVID-19 outside the hospital environment for a second. I didn't answer the earlier question about other applications, but since COVID-19 came about, literally every industry is asking us about robots, whether it be hotels, offices, schools, you name it. There it's really more of a COVID-type scenario, which requires a lower dose, which means you move the robot faster and have it stop at fewer points in order to hit the target dose.
In that case, 22 millijoules per square centimeter for COVID-19. For hospital environments, however, you require a higher dose. So it's really a way of assessing just how much dose we require in any given room before we move on to the next room. If you think about what hospitals are actually used to doing, they're used to putting a static system, in many cases in the center of a room, and letting it run for 45 minutes. With a stationary system you get an overdose on surfaces that are close by and an underdose on surfaces that are farther away, because the falloff with distance is a principle you can't get around. So we try to deliver an even dose across the room: just enough to do its job, and not so long as to have those material effects. There is quite a lot of research out there, and we've done some of our own on plastics, et cetera. We're very confident that you won't see material effects with our dosage and our system for many, many years; it's not a question of a short period of time, it's a long period of time it would take. Thanks, Andy. Okay, next question. Laura, can the exoskeletons be configured to different body make-ups of individuals? Are measurements taken of the individual for a good fit? This is a great question, and I think it touches a different realm of this research, which is how to create exosuits that scale to different body proportions. It's not an area I'm currently working in, and I'm not aware of anyone who's specifically targeting that problem. But as we move from solutions that are well targeted to the needs of rehab and start to move into product development, that will be an open area of research: how do we adjust these solutions to different body proportions, or can we create solutions that do not require as much adjustment? That's a great question. Okay, thanks, Laura. Okay, next one is for Ryan.
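As a side note on the dose planning Andy described above: the dwell time at each disinfection point follows from the target dose and the irradiance at the surface, which falls off roughly with the square of distance. A minimal sketch, under the assumption of ideal inverse-square falloff; only the 22 mJ/cm² COVID-19 target comes from the discussion, while the reference irradiance and all names are illustrative, not taken from any actual robot:

```python
# Hypothetical UVC dose-planning sketch. Assumes irradiance scales with
# the inverse square of distance from a single reference measurement.

def irradiance_at(distance_m: float,
                  ref_irradiance_mw_cm2: float = 1.0,
                  ref_distance_m: float = 1.0) -> float:
    """Estimate irradiance (mW/cm^2) at a given distance, scaling a
    reference measurement by the inverse-square law."""
    return ref_irradiance_mw_cm2 * (ref_distance_m / distance_m) ** 2

def pause_time_s(target_dose_mj_cm2: float, distance_m: float) -> float:
    """Seconds the robot must pause so a surface at distance_m receives
    the target dose (dose = irradiance x time; mW*s/cm^2 = mJ/cm^2)."""
    return target_dose_mj_cm2 / irradiance_at(distance_m)

# COVID-19 target dose from the discussion: 22 mJ/cm^2.
# A surface twice as far away sees 1/4 the irradiance, so it needs
# four times the dwell time -- which is why a static system in the
# center of a room overdoses nearby surfaces and underdoses far ones.
print(pause_time_s(22.0, 1.0))  # 22.0 s at the reference distance
print(pause_time_s(22.0, 2.0))  # 88.0 s at twice the distance
```

Moving the robot and pausing at multiple planned points is what lets the dose stay roughly even across the room instead of following this steep falloff.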
Would you consider automation to be feasible from an industry standpoint after the demand that the pandemic generated? And if so, why do you think the company did not adopt the automation earlier? Yeah, absolutely. I think sometimes we're all focused on the way we've done things and been successful to date, but sometimes it takes an emergency for us to really take a step back and ask: is there a better way to do this? Is this the only way we're going to be able to do this? In the case of Lilly's clinical diagnostic lab, we're one of the only pharmaceutical companies that even has its own, and it has really specialized in assay development. So they had never been focused on scale, but in this case, needing to run thousands of samples a day, there was no way to achieve that with a manual process. It was really necessary to think about how we could get this done, and it was through robotics. I will say that now this area has been automated, they've said many times, "I don't see how we could ever go back to the way we did things before." The benefits they've achieved are just too remarkable, and it starts to open the door to new ways of doing things, new projects that can be taken on, new timelines for achieving things. Once you've gone down a path, tried something, and experienced the benefits for yourself, it opens the door to leveraging these robots for many other applications. We hope COVID-19 is not here to stay, at least not at this pandemic level, so clearly we want to redeploy these robots to other areas of our R&D process where they can be beneficial. And that's what I think many other labs will do.
They'll find additional ways to leverage the robots that they've put into place due to COVID and make them extremely useful moving forward. Thanks. Next question. What types of regulatory barriers exist in implementing surgical and disinfecting robots in the clinic or hospital setting? For instance, do the requirements of both OSHA and FDA need to be met? I can speak to the UV disinfection robots in that sense. Right now UV light is not governed by the FDA or OSHA; it's actually governed by the EPA, in the sense that it's considered a pesticide, which is a little bit strange, but it has kind of fallen between the two organizations. That's not to say that at some point in the future the FDA won't bring it under their regulations, but it's actually not considered a medical device; it's considered a disinfection device. Okay, thanks, Andy. For the surgical robots themselves, FDA approval is needed. I can't really comment on OSHA requirements, but I believe the FDA captures that in some of their regulations. Okay, thanks. Okay, Juan, this is for you. Do you think in the future that medical robots are going to be able to perform surgery as assistants, or is there interest in a shift toward them potentially performing some surgeries as well? I think that in the future we're going to see robots that can perform surgery completely. When you look at the dynamics of a surgical team, you can see that oftentimes the expert surgeon is performing, for example, an orthopedic procedure, and the prep or the closing phase of the surgery is done by the resident. Why? Because those phases of the surgery are known to be simpler in a way, more mechanistic, more repetitive, so the chances of complications in those stages are very, very minimal. So, you know, I do think that we're going to see more and more autonomy in surgical robots.
You know, the big companies like Intuitive Surgical, Mazor, and others are very cautious about how to go about it, so they focus mostly on teleoperation and telerobotics, where the surgeon is the person controlling what's going on. But we know from research and current experiments and prototypes that robots using autonomy and artificial intelligence are quite successful in several tasks, like incisions, closure, and knot tying. There are multiple procedures that are very simple, where robots are quite successful, maybe even faster than humans. So I think that is going to happen. But in the same way that right now people may be scared to get into an airplane that is fully autonomous, even though autonomous cars, for example, technically exist, and people may be willing to step into an autonomous car, when you talk about an airplane, people are concerned; they are really scared. In the same way, going into a surgery knowing that the robot is fully autonomous is going to create a lot of reluctance. Until people get over those initial barriers, I think it's going to take quite some time, which again is not necessarily an issue of technological development. It's more an issue of getting used to these ideas and making sure that the people expressing these concerns are being heard. Thanks, Juan. Okay. The next question is for Ryan. Can you speak to the accuracy of Lilly's testing? How and when do you know you're not getting accurate test results? What is impeding your organization from testing at even higher rates? Yeah, I'm going to preface this by saying that I'm not a medical doctor, so I would rather leave that to the doctors who are overseeing this process.
I will say that we did a number of clinical tests on saliva, the nasal swab, and the nasopharyngeal swab, and what we found was that the NP, the nasopharyngeal, seemed to have the most presence of the RNA. We felt that was the most accurate way of testing, so we stuck with it, although it's not as comfortable and easy to do as a saliva sample. I will also say that while our process was fully automated, we had a medical pathologist who literally reviewed every single sample and result that came off. So not only does the PCR software tell you it's positive or negative, but, since we didn't have a cloud-based software that checks that, it then goes to the medical pathologist to do the final review. And if he sees anything that he doesn't like, he can ask for a repeat, and it can repeat from the RNA extraction, or it can go all the way back to the initial sample. That has really provided an extra layer of checking in the process, and we feel it makes it a much more accurate test. Right. Thanks, Ryan. So we're getting close on time, so I want to give everybody a chance to make some closing remarks and talk about what's next. What do you see in your area of robotics and healthcare specialization, maybe short term, like five years down the line, and maybe ten years down the line? Maybe we can just go in the order of the speakers, so Juan, you can start. Well, as you were talking, I was thinking that, first, we're going to see more robotics in hospitals, more robotics in the operating room, more cybernetics. We're going to see teams of robots and humans working together, rather than robots simply being controlled. We're going to start seeing some independence, real autonomy, in robots working on teams.
It's going to be some hybrid form of future work. That's what I expect in the next five years or so. Looking into the next ten or fifteen years, I start to see small, simple procedures that people will be able to get done where the main players in the operating room are actually going to be robots. So yeah, that's my view. Thanks, Juan. Andy. I would speak more to the shorter term than the longer term, because I think that's where my thoughts are focused. Given the autonomous navigation of robots in hospital-type environments, I can see more robots being deployed to remove the manual handling challenges of humans pushing and pulling. An obvious one is a hospital bed: a lot of porters push or pull people on trolleys, wheelchairs, or beds, and I could see robots doing that task fairly easily, allowing humans to do work which might be deemed of higher value in the workplace. I could foresee robots delivering things within the hospital environment, whether it be medicines or food into wards. Those are not so far away from where we are now; they're on the horizon. And likewise patient handling: lifting hoists, which are currently partly controlled by humans and partly mechanical, could be robotic, so that somebody is lifted and moved robotically. Those would be my fairly short-term thoughts. Great. Thanks, Andy. Laura. Yeah.
So before getting into my thoughts on the future, I'd like to tie what I talked about into the situation we're in right now. I think that the pandemic, and especially the response of social distancing and isolation, has highlighted more than ever a problem that we knew existed, which is the scalability of personal, person-to-person assistance for elderly and impaired populations. This situation is going to highlight the need for these robotic assistive devices and active prosthetic or orthotic devices. So in the short term, we are very close to having devices that can offer some level of rehabilitation and assistive exercises in the at-home environment. The longer-term challenge is how to scale those devices to be affordable enough to be widely available to the large segment of the population that could benefit from them, and also how to improve the control of these devices so that they can help with more than simply moving your limbs through traditional rehab exercises and start to actively help with activities of daily living, bringing more functionality and more freedom to people who have limited mobility due to a variety of different causes. Thanks, Laura. Ryan. Yeah, I see this as the start of a new wave of robotics. I think the prioritization is going to be leveraging robotics to allow humans to be goal-oriented rather than task-oriented. Robots give us a great opportunity to take on task-based work and free up the human to utilize the power of the mind. And that's where we're going to see a lot of robots being deployed; the cleaning and sterilization is a great example, where it takes humans 20 hours over a two-day period.
You know, let a robot do it and free those humans up to do something of much higher value. There have always been opportunities for that, but I think people will now start looking for those opportunities more often because they need to be productive. Maybe it's making up for lost time from the COVID-19 pandemic; maybe it's the new normal of social distancing and those types of things. But I think it's about trying to get more done with less, and that's where I think we're headed. Great. Thanks, Ryan. Paul. Yeah, in my little corner of the world, the drug discovery side of things, I see where we're headed. It takes a long time to build these big robotic platforms to fit any one purpose in our pipeline, so I think we're likely headed toward a deconvoluted model where you essentially have everything out in the open and accessible, and mobile robots basically pick the components and devices they need for any process. On any of those systems, half the instruments are the same: you have things that take seals or lids off and put them back on, you have centrifuges, things like that. They all have the usual suspects of equipment, but each one has a couple of unique devices to fit its process. So a distributed model, where you essentially have banks of all these instruments and mobile robots to just pick and place, would be really nice. As your process evolves and new instruments come up, you can bolt them on the end and allow access to them. I think that would be a whole different type of lab-automation model that would open up a lot of doors and make things faster as new technologies come into service. Right. Thanks, Paul. Okay, so that's the end of our webinar. Thank you, everybody, for attending.
We had a great number of questions; apologies if we couldn't get to all of them. This webinar is being recorded, and you'll be able to view it or share the link in about one to two days using the website that's there in the chat window. Again, thank you to everybody for attending. Hope you enjoyed it. Enjoy the rest of your evening. Thank you, everyone. Thanks, Dave. Thank you very much.