Welcome to the Intel AI Lounge. Today we're very excited to share with you the precision medicine panel discussion. I'll be moderating the session. My name is Kay Aaron; I'm the general manager of health and life sciences at Intel. And I'm excited to introduce the three panelists we have here. First is John Mattison. He is the chief medical information officer at Kaiser Permanente. We're very excited to have you here. Thank you, John. We also have Naveen Rao. He is the VP and general manager for artificial intelligence solutions at Intel. He's also the former CEO of Nervana, which was acquired by Intel. And we also have Bob Rogers, who's the chief data scientist in our AI solutions group. So why don't we get started? I'm going to ask each of the panelists to introduce themselves, as well as talk about how they got started with AI. So why don't we start with John?

Sure. Can you hear me OK in the back? Can you hear? OK, cool. So I am a recovering evolutionary biologist, a recovering physician, and a recovering geek. I implemented the health record system for the first and largest region of Kaiser Permanente. And it's pretty obvious that most of the useful data in a health record lies in free text. So about a dozen years ago I started up an internal natural language processing team to mine that free text. With that, we can do things you can't otherwise get out of health information. I'll give you an example. About four years ago I read an article online from the New England Journal of Medicine that said over half of all people who have had their spleen taken out were not appropriately vaccinated for a common form of pneumonia. And when your spleen is missing, you must have that vaccine or you risk a very sudden death from sepsis. In fact, the father of our medical director in Northern California died in exactly that scenario.
So when I read the article, I went to my structured data analytics team and to my natural language processing team and said, please show me everybody who has had their spleen taken out and hasn't been appropriately vaccinated. The NLP team ran through about 20 million records in about three hours. It took about three weeks with the structured data analytics team. That sounds counterintuitive, but it actually happened that way. And it's not a competition for time only; it's a competition for quality, sensitivity, and specificity. So we were able to identify all of our members who had their spleen taken out and should have had a pneumococcal vaccine, and we vaccinated them. There are a number of people alive today who otherwise would have died absent that capability. People don't commonly associate natural language processing with machine learning, but in fact natural language processing relies heavily on machine learning and is its first really highly successful example. And so we've done dozens of similar projects, mining free-text data in millions of records very efficiently and very effectively, in ways that have really helped advance the quality of care and reduce the cost of care. It's a natural step forward to go into the world of personalized medicine with the arrival of the $100 genome, which is actually what it costs today to do a full genome sequence. Then there's microbiomics: the ecosystem of bacteria in our gut, and in every organ of the body, actually. We know now that what's in our gut has a profound influence on how we metabolize drugs and what diseases we get. You can tell in a five-year-old whether they were born by vaginal delivery or C-section by virtue of the bacteria in their gut five years later.
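To make the splenectomy screen concrete, here is a minimal sketch of the kind of free-text-plus-structured-data check involved. The record fields, terms, and sample data are invented for illustration; this is not Kaiser Permanente's actual pipeline.

```python
# Hypothetical sketch: flag members whose notes mention a splenectomy
# but whose structured record shows no pneumococcal vaccination.
import re

SPLENECTOMY_TERMS = re.compile(r"splenectomy|spleen (removed|taken out)", re.I)

def needs_vaccine(record):
    """True if free text implies an absent spleen and the structured
    immunization list lacks a pneumococcal vaccine."""
    had_splenectomy = any(SPLENECTOMY_TERMS.search(note) for note in record["notes"])
    vaccinated = "pneumococcal" in record["immunizations"]
    return had_splenectomy and not vaccinated

records = [
    {"notes": ["s/p splenectomy 2009 after trauma"], "immunizations": set()},
    {"notes": ["spleen removed in childhood"], "immunizations": {"pneumococcal"}},
    {"notes": ["routine physical, no complaints"], "immunizations": set()},
]
flagged = [i for i, r in enumerate(records) if needs_vaccine(r)]
print(flagged)  # [0]: only the first record needs outreach
```

A production system would of course need negation handling, clinical vocabularies, and far more robust NLP, but the shape of the query is this simple.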
So if you look at the complexity of the data that exists in the genome, in the microbiome, in the health record with free text, and you look at all the other sources of data, like the streaming data from the wearable monitor I'm wearing as part of a research study on precision medicine out of Stanford, there is a vast amount of disparate data, not to mention all the imaging, that really can collectively produce much more useful information to advance our understanding of science and of every individual. And then we can do the mashup of a much broader range of science in healthcare with a much deeper set of data from an individual. To do that with structured questions and structured data is very yesterday. The only way we're going to be able to disambiguate those data, operate on them in concert, and generate really useful answers from the broad array of data types and the massive quantity of data is to let machine learning loose on all of those data substrates. So my team is moving down that pathway, and we're very excited about the future prospects for doing that.

Well, yeah, great. I think that's actually some of what I'm most excited about for the future with some of the technologies we're developing. As for my background, I started being fascinated with computation in biological forms when I was nine. I was reading lots of sci-fi; I was kind of a big dork, which I pretty much still am, so that hasn't really changed a whole lot. And just basically seeing that machines really aren't all that different from biological entities. We are biological machines, and understanding how a computer works, how we engineer those things, and then trying to pull in concepts we learn from biology has always been a fascination of mine. As an undergrad I was in the EE/CS world, and even then I did some research projects around that.
I worked in industry for about 10 years designing chips: microprocessors, various kinds of ASICs. Then I quit my job, went back to school, and got a PhD in neuroscience, computational neuroscience, specifically to understand the state of the art: what do we really understand about the brain, and are there concepts there we can take and bring back? It's always been about inspiration. We watch birds fly around, we want to figure out how to make something that flies, so we extract those principles and then build a plane. We don't necessarily want to build a bird. And Nervana was really the culmination of all those experiences, bringing them together, trying to push computation in a new direction. And now, as part of Intel, we can really add a lot of fuel to that fire. I'm super excited to be part of Intel in that the technologies we were developing can really proliferate and be applied to healthcare, to the internet, to every facet of our lives. Some of the examples that John mentioned are extremely exciting right now. I mean, these are things we can do today. And the generality of these solutions is really going to hit every part of healthcare. From a personal viewpoint, my whole family are MDs. I'm sort of the black sheep of the family; I don't have an MD. And it's always been kind of funny to me that knowledge is concentrated in a few individuals. If you have a rare tumor or something like that, you need the one guy who knows how to read this MRI. Why is it like that? Can't we encapsulate that knowledge into a computer or into an algorithm and democratize it? The reason we couldn't do it is that we just didn't know how. But now we really are getting to a point where we know how to do that. And so I want that capability to go to everybody, to bring the cost of healthcare down. It'll make all of us healthier, and that affects everything about our society.
So that's really what's exciting about it to me.

That's great. So, as you heard, I'm Bob Rogers. I'm chief data scientist for analytics and artificial intelligence solutions at Intel. My mission is to put powerful analytics in the hands of every decision maker. And when I think about precision medicine, decision makers are not just doctors and surgeons and nurses; they're also case managers and care coordinators and, probably most of all, patients. So the mission is really to put powerful analytics and AI capabilities in the hands of everyone in healthcare. It's a very complex world, and we need tools to help us navigate it. As for my background, I started with a PhD in physics, doing computer modeling of stuff falling into supermassive black holes. And there's a lot of applications for that in the world. No. There will be, I'm sure. Yeah, one of these days. As soon as we have time travel. OK, so around 1991, when I was working on my postdoctoral research, I heard about neural networks, these things that could compute the way the brain computes. So I started doing some research on that and wrote some papers, and there's actually an interesting story about the problem we solved that got me really excited about neural networks, which have since become deep learning. My office mate, a young guy who was about to go off to grad school, would come in every morning saying, "I hate my project." Finally, after two weeks, I asked: what's your project? What's the problem? It turned out he had to circle little fuzzy spots on images from a telescope. They were looking for the interesting things in a sky survey, and he had to circle them and write down their coordinates all summer. Anyone want to volunteer to do that? Yeah, he was very unhappy.
So we took the first two weeks of data he had created doing his work by hand, and we trained an artificial neural network to do his summer project, and finished it in about eight hours of computing. And he was like, yeah, this is amazing, I'm so happy. We wrote a paper. I was first author, of course, because I was the senior guy at age 24, and he was second author on his first paper ever. He was very, very excited. Now fast forward about 20 years. His name popped up on the internet, so it caught my attention. He had just won the Nobel Prize in Physics. So that's what artificial intelligence will get you.

So thanks, thanks, Naveen. You know, fast-forwarding, I also developed some time-series forecasting capabilities that allowed me to create a hedge fund, which I ran for 12 years. After that, I got into healthcare, which really is the center of my passion: figuring out how to get all the data from all those siloed sources, put it into the cloud in a secure way, and analyze it so you could actually understand cases like the ones John was just talking about. How do you know that a person has had a splenectomy and needs to get that Pneumovax? You need to be able to search all the data. So we used AI, natural language processing, and machine learning to do that. And then two years ago I was lucky enough to join Intel. In the intervening time, people like Naveen had thawed the AI winter, and we're really in a spring of amazing opportunities with AI, not just in healthcare but everywhere. But of course the healthcare applications are incredibly life-saving and empowering. So I'm excited to be here on this stage with you guys.

I just want to key off your comment about the role of physics in AI and healthcare. So, the field of microbiomics that I referred to earlier: back here in our gut, there are more bacteria than there are cells in our body.
There's a hundred times more DNA in those bacteria than there is in the human genome. And we're now discovering a couple hundred species of bacteria a year that have never been identified under a microscope, just by their DNA. It turns out the person who really catapulted the study and the science of microbiomics forward was an astrophysicist who did his PhD in Stephen Hawking's lab on the collision of black holes, then subsequently put together a team working in virtual reality and developed a supercomputing center. So how did he get an interest in microbiomics? He had the capacity to do high-performance computing and the kind of advanced analytics required to look at a hundred times the volume of the 3.2 billion base pairs of the human genome that are represented in the bacteria in our gut. And that has unleashed the whole science of microbiomics, which is going to really turn a lot of our assumptions about health and healthcare upside down.

That's great. I mean, that's real transformation, and also a lot of data. Right.

So I just wanted to let the audience know that we want to make this an interactive session, so I'll be asking for questions in a little bit. But I will start off with one question so you can think about it. It looks like you've all been thinking about AI for years, so, even though AI is just really starting in healthcare, what are some of the new trends or changes you've seen in the last few years that will impact how AI is used going forward?

So I'll start off. There was a paper published by Max Tegmark at MIT last summer that, for the first time, explained why neural networks are efficient beyond what any mathematical model would predict. The title of the paper is fun; it's about deep learning versus cheap learning. And there were two punchlines to the paper.
The reason mathematics doesn't explain the efficiency of neural networks is that there's a higher order of mathematics called physics, and the physics of the underlying data structures determines how efficiently you can mine those data using machine learning tools, much more so than any mathematical modeling. The second thing revealed in that paper is that the substrate of the data you're operating on, and the natural physics of those data, have inherent levels of complexity that determine whether or not a 12-layer neural net will get you where you want to go really fast. Because when you do the modeling, for the math geeks in the audience, you get a factorial: if there are 12 layers, there are 12-factorial permutations of different ways you could sequence the learning through those data. When you have 140 layers in a neural net, it's a much, much, much bigger number of permutations, and so you end up being hardware-bound. So what Max Tegmark basically said is that you can determine whether to do deep learning or cheap learning based upon the underlying physics of the data substrates you're operating on, and get good insight into how to optimize your hardware and software approach to that problem.

So another way to put that is that neural networks represent the world the way the world is actually built. Exactly. It's kind of hierarchical. Exactly.

It's funny, because in retrospect it's "oh yeah, that kind of makes sense," but when you were thinking about it mathematically, it was: any one-layer or two-layer neural network can represent any mathematical function, therefore it's fully general. That's the way we used to look at it, right? And now we're saying, well, actually, decomposing the world into different types of features that are layered upon each other is a much more efficient, compact representation of the world. Exactly.
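The combinatorial point is easy to check numerically. The factorial framing below follows the panel's heuristic, not a formula from Tegmark's paper:

```python
# Orderings of n layers grow factorially, so any search or
# credit-assignment cost that scales with those permutations
# explodes as networks deepen.
import math

print(math.factorial(12))                       # 479001600 orderings for 12 layers
print(round(math.log10(math.factorial(140))))   # 140 layers: about 10^241 orderings
```

Half a billion orderings versus roughly 10^241: that gap is the sense in which deeper nets become hardware-bound.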
And I think this is actually precisely the point of what you're getting at. What's really exciting now is that what we were doing before was building bespoke solutions for different kinds of data. NLP, natural language processing: there's a whole field, 25-plus years of people devoted to figuring out features, figuring out what structures make sense in that particular context. Those didn't carry over at all to computer vision, and didn't carry over at all to time-series analysis. Now, with neural networks, we've seen it at Nervana, and now as part of Intel, solving customers' problems: we apply a very similar set of techniques across all these different data domains and solve them. All data in the real world seems to be hierarchical; you can decompose it into a hierarchy, and it works really well. Our brains are actually general structures. As a neuroscientist, you can look at different parts of the brain and there are differences; something that takes in visual information versus auditory information is slightly different, but they're much more similar than they are different. So there is some invariance, something very common, across all of these different modalities, and we're starting to learn that. And this is what is extremely exciting to me, trying to understand the biological machine as a computer, right? We're figuring it out; we're getting those principles.

And one of the really fun things that Ray Kurzweil likes to talk about falls in the genre of biomimicry: how we actually replicate biological evolution in our technical solutions.
So if you look at it, and we're beginning to understand more and more how real neural nets work in our cerebral cortex, there's a sort of pyramid structure: the first pass over a broad base of analytics gets constrained to the next pass, which gets constrained to the next pass, which is how information is processed in the brain. We're discovering increasingly that what we've been evolving toward in terms of neural net architectures approximates the architecture of the human cortex. And the more we understand the human cortex, the more insight we get into how to optimize neural nets. When you think about the millions of years of evolution behind how the cortex is structured, it shouldn't be a surprise that the optimization protocols, if you will, in our genetic code are profoundly efficient in how they operate. So there's a real role for looking at biological evolutionary solutions vis-a-vis technical solutions. A friend of mine worked with George Church at Harvard and actually published a book on biomimicry, and they wrote the book completely in DNA. So if all of you have your home DNA decoder, you can actually read the book on your DNA reader. Just kidding.

Well, there's actually a startup I just saw in the read/write DNA space. Yeah. Actually, it's Hewick-something. What was it? Yeah, they're basically encoding information in DNA as a storage medium. And the same friend of mine who co-authored that biomimicry book in DNA also did the estimate of the density of information storage: a cubic centimeter of DNA can store an exabyte of data. I mean, that's mind-blowing. Highly dense, yeah. Yeah, that's amazing.
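The exabyte-per-cubic-centimeter figure actually survives a back-of-envelope check. The constants below (roughly 650 g/mol per base pair, density near 1.1 g/cm³, 2 bits per base pair) are rough textbook approximations, not the calculation referenced on stage:

```python
# Rough storage capacity of DNA per cubic centimeter.
AVOGADRO = 6.022e23          # molecules per mole
GRAMS_PER_CC = 1.1           # approximate density of DNA, g/cm^3
GRAMS_PER_MOL_BP = 650.0     # approximate molar mass of one base pair

bp_per_cc = GRAMS_PER_CC / GRAMS_PER_MOL_BP * AVOGADRO  # base pairs per cm^3
bytes_per_cc = bp_per_cc * 2 / 8   # 2 bits per base pair, 8 bits per byte
print(f"{bytes_per_cc:.1e} bytes per cc")  # on the order of 1e20, i.e. hundreds of exabytes
```

So an exabyte per cubic centimeter is, if anything, a conservative statement of DNA's theoretical density.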
Well, you hit upon a really important point there. There are two major things that have changed, in my perception, from five to ten years ago. Back then, using machine learning, you could use data to train models and make predictions to understand complex phenomena, but they had limited utility. The challenge was that if I was trying to build one of these things, I had to do a lot of work up front, called feature engineering: a lot of work to figure out the key attributes of the data. What are the 10 or 20 or 100 pieces of information I should pull out of the data to feed the model, so the model can turn them into a predictive machine? What's really exciting about the new generation of machine learning technology, and particularly deep learning, is that it can actually learn those features from example data without you having to do any pre-programming. That's why, as Naveen is saying, you can take the same overall approach and apply it to a bunch of different problems: you're not having to fine-tune those features. So at the end of the day, the two things that have changed to really enable this revolution are, first, access to more data, and I'd be curious to hear from you where you're seeing data come from and what the strategies are around that. And I'm talking millions of examples; ten thousand examples most times isn't going to cut it, but millions of examples will do it. The other piece is the computing capability to actually take millions of examples and optimize this algorithm in a single lifetime. Back in '91, when I started, we literally would have thousands of examples, and it would take overnight to run the thing. So now, in the world of millions, when you're putting together all these combinations, the computing has changed a lot. I know you've made some revolutionary advances in that, but I'm curious about the data.
Where are you seeing interesting sources of data for analytics?

So, I do some work in the genomic space, and there are more viable permutations of the human genome than there are people who have ever walked the face of the earth. The polygenic determination of phenotypic expression, that is, what our genome does to us in our physical experience of health and disease, is determined by many, many genes, the interaction of many, many genes, and how they're up- and down-regulated. Then there's the complexity of disambiguating which 27 genes are affecting your diabetes and how they're up- and down-regulated by different interventions. It's going to be different for him, and different for him, and we already know there are four or five distinct genetic subtypes of type 2 diabetes. Physicians still think there's one disease called type 2 diabetes; there are actually at least four or five genetic variants that have been identified. So when you start thinking about disambiguating the underlying cause, particularly when we still don't know what 95% of DNA does, it will require this massive capability of developing these feature vectors, sometimes intuiting them from the data itself, and other times taking what is already known to develop some of those feature vectors, to really understand the interaction of the genome, the microbiome, and the phenotypic data. So the complexity is high, and because the variation complexity is high, you do need these massive numbers.

Now I'm going to make a very personal pitch here, so forgive me, but if any of you have any role in policy at all, let me tell you what's happening right now. The Genetic Information Nondiscrimination Act, so-called GINA, passed and written by friends of mine a number of years ago, says that no one can be discriminated against for health insurance based upon their genomic information.
That's cool; that should allow all of you to feel comfortable donating your DNA to science, right? Wrong. You are 100% unprotected from discrimination for life insurance, long-term care, and disability, and that discrimination is being practiced legally today. And there's legislation in the House, in markup right now, to completely undermine the existing GINA legislation and say that whenever there's another applicable statute, like HIPAA, GINA is irrelevant and none of its fines and penalties are applicable at all. So: we need a ton of data to operate on. We will not be getting a ton of data to operate on until we have the kind of protection we need to tell people, you can trust us, you can give us your data, you will not be subject to discrimination. That is not the case today, and it's being further undermined. So I want to make a plea to any of you who have any policy influence to go after that, because we need this data to help the understanding of human health and disease, and we're not going to get it when people look behind the curtain and see that discrimination is occurring today based upon genetic information.

Well, I don't like the idea of being discriminated against based on my DNA. Right. And especially given how little we actually know, there's so much complexity in how these things unfold in our own bodies that I think anything being done today is probably childishly immature and oversimplifying. So it's pretty rough. I guess the translation here is that we're all unique. It's not just a Disney movie, right? We really are. And I think one of the strengths of these new techniques, going back to the original point, is that working across different data types will actually allow us to learn more about the uniqueness of an individual. It's not going to come from just one data source; we're collecting data from many different modalities. We're collecting behavioral data from wearables.
We're collecting things from scans, from blood tests, from the genome, from many different sources, and the important thing we're getting toward now is the ability to integrate those into a unified picture. That's what I think is going to be super exciting here. Think about it: every one of us can visualize a coin. And not only can you visualize it, you also know what it feels like and how heavy it is. You have a mental model of it from many different perspectives, and if I take away one of those senses, you can still identify the coin. If I tell you to put your hand in your pocket and pick out a coin, you can probably do that with 100% reliability. That's because we have this generalized capability to build a model of something in the world. And that's what we need to do for individuals: take all these different data sources and come up with a model for an individual, so you can actually say, what drug works best for this person? What treatment works best? It's going to get better with time. It's not going to be perfect, but this is what a doctor does, right? An experienced doctor. You're a practicing physician, right? Back me up here. That's what you're doing: you have some categories, you take information from the patient when you talk with them, you build a mental model, and you apply what you know could work on that patient, right?

I don't have clinic hours anymore, but I do take care of many friends and family. You used to, you used to. I practiced for many years before I became a full-time geek. OK, great. I thought you were a recovering geek. I know. I do more policy now. He's off the wagon.

I just want to take a moment and see if there's anyone in the audience who would like to ask a question. Oh, go ahead. You've got a mic there. Hang on a second. So, I have tons of questions. Sorry. Yes.
So first of all, the microbiome and the genome are really complex; you already hit on that. Yet most of the studies we do are small-scale, and we have difficulty replicating them from study to study. How are we going to reconcile all that, and what are the technical hurdles to get to the vision that you want?

So primarily it's been the cost of sequencing. Up until a year ago it was $1,000, true cost; now it's $100, true cost. That barrier coming down is going to enable fairly pervasive testing. It's not a really competitive market yet, because there's one sequencer that is way ahead of everybody else, so the price is not $100 yet; the cost is below $100. As soon as there's competition to drive the price down, and hopefully as soon as we all have the protection we need against discrimination, as I mentioned earlier, then we will have large enough sample sizes, and it is our expectation that we will be able to pool data from multiple sources. I chair the eHealth Work Group of the Global Alliance for Genomics and Health, which has worked on this very issue. Rather than pooling all the data into a single common repository, the strategy, and we're developing our five-year plan in a month in London, is to have a federation of essentially credentialed data enclaves. That's a formal method; HHS already does this. You can get credentialed to search all the data Medicare has on people, de-identified according to HIPAA. So we want to provide the same kind of service, with appropriate consent, at an international scale. And there are a lot of nations talking very seriously about data nationality, so that you can't export data. So this approach of a federated model, to get at data from all the countries, is important. The other thing is that blockchain technology is going to be profoundly useful in this context.
So David Haussler of UC Santa Cruz is right now working on a protocol using an open, public blockchain ledger. For any typical cancer you may have a half dozen of what are called somatic variants. Cancer is a genetic disease: something has mutated to cause cells to behave like a cancer. If we take those biologically active somatic variants and publish them on a public blockchain, there's not enough data there to re-identify the patient. But if I'm a physician treating women with breast cancer, rather than asking, what's the protocol for treating a 50-year-old woman with this cell type of cancer, I can say: show me all the people in the world who have had this cancer at the age of 50 with these exact six somatic variants. Find the 200 people worldwide with that profile. Ask them, through a secondary mechanism, for consent to donate everything in their medical records. Pool that information into a cohort of 200 that exactly resembles the woman sitting in front of me, and find out which of the 200 ways they were treated got the best result. That's the kind of future where a distributed, federated architecture will allow us to query and obtain a very, very relevant cohort, so we can treat the exact patient sitting right in front of us. The same thing applies to establishing research cohorts. So there's some very exciting stuff at the convergence of big data analytics, machine learning, and blockchain.

And this is an area that I'm really excited about, and that I think we're excited about generally at Intel. We actually have something called the Collaborative Cancer Cloud, which is this kind of federated model. We have three different academic research centers, and each of them has a very sizable and valuable collection of genomic data with phenotypic annotations: pancreatic cancer, colon cancer, et cetera.
And we've actually built a secure computing architecture that allows a person who's been given the right permissions by those organizations to ask a specific question of specific data without ever sharing the data. The idea is: my data is really important to me, it's valuable, and I want us to be able to do a study that gets the number up from the 20 pancreatic cancer patients in my cohort to the 80 we have in the whole group. But I can't do that if I'm just going to spill my data all over the world. There are HIPAA and compliance reasons for that; there are business reasons for that. So what we've built at Intel is a platform that allows you to run different kinds of queries on this genetic data, reaching out to these different sources without sharing it. And then the work I'm really involved in right now, and am extremely excited about, touches on something both of you said: it's not sufficient to just get the genome sequences. You also have to have the phenotypic data. You have to know what cancer they've had, that they've been treated with this drug and survived for three months, or that they had this side effect. That clinical data also needs to be put together, and it's owned by other organizations, other hospitals. So the broader generalization of the Collaborative Cancer Cloud is something we call the data exchange. It's a misnomer in the sense that we're not actually exchanging data; we're doing analytics on aggregated data sets without sharing them. But it really opens up a world where we can have huge populations and big enough amounts of data to actually train these models and draw the thread through them. And of course that really then hits home for the techniques that Nervana is bringing to the table, and of course for the...

Stanford's one of your academic medical centers? Not for that Collaborative Cancer Cloud.
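A toy version of the federated pattern behind the Collaborative Cancer Cloud and the data exchange, combined with the somatic-variant cohort idea above: each site evaluates the query locally and returns only an aggregate, so patient-level rows never leave the site. The site names, records, and variants below are invented placeholders, not the actual system.

```python
# Federated aggregate query: the coordinator sees counts, never records.
QUERY_VARIANTS = frozenset({"TP53:R175H", "PIK3CA:H1047R"})

def local_count(site_records, variants):
    """Runs inside a site's own boundary; returns only an aggregate."""
    return sum(1 for r in site_records if variants <= r["somatic_variants"])

sites = {
    "center_1": [
        {"somatic_variants": frozenset({"TP53:R175H", "PIK3CA:H1047R"})},
        {"somatic_variants": frozenset({"KRAS:G12D"})},
    ],
    "center_2": [
        {"somatic_variants": frozenset({"TP53:R175H", "PIK3CA:H1047R", "PTEN:R130G"})},
    ],
    "center_3": [
        {"somatic_variants": frozenset({"TP53:R175H"})},
    ],
}

# Only these integers cross organizational boundaries.
counts = {name: local_count(recs, QUERY_VARIANTS) for name, recs in sites.items()}
print(counts, sum(counts.values()))  # two matching patients across three sites
```

A real deployment would add credentialing, consent checks, and safeguards against re-identification from small aggregates, but the shape of the protocol is the same.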
The reason I mention Stanford, and the reason I'm wearing this Fitbit, is that I'm a research subject of Mike Snyder, the chair of genetics at Stanford, in iPOP, the integrative personal omics profile study. So I was fully sequenced five years ago, and I give four full microbiomes, my gut, my mouth, my nose, my ears, every three months, and I've done that for four years now, along with about a pint of blood. And so to your question about the density of data: a lot of the problem with applying these techniques to healthcare data is that it's basically a sparse matrix, and there's a lot of discontinuities in what you can find and operate on. So what Mike is doing with the iPOP study is much the same as you described, creating a highly dense longitudinal set of data that will help us mitigate the sparse matrix problem. What's that? Pardon me? What's that? That's a box of stool samples. Right, okay. A box of stool samples, that's gotta be a new one, I've heard it all now. Okay, well thank you so much. That was a great question, so I'm going to stop there and ask if there's another question. Do you want to go ahead? Hi, thanks. So I'm a journalist and I report a lot on these neural networks, like a system that's better at reading mammograms than human radiologists, or a system that's better at predicting which patients in the ICU will get sepsis. These are fascinating academic studies, but I don't really see them being translated very quickly into actual hospitals or clinical practice. It seems like a lot of the problems are regulatory or liability or human factors, but how do you get past that and really make this stuff practical? Well, so I think there's a few things that we can do there, and I think the proof points in the technology are really important to start with in this specific space, right? In other places sometimes you can start with other things, but here there's a real confidence problem, right? When it comes to healthcare, for good reason, right?
We have doctors trained for many, many years, you know, school and then residencies and other kinds of training, because we are really, really conservative with healthcare, right? So we need to make sure that technology is well beyond just a paper, right? These papers are proof points, they get people interested, they even fuel entire grant cycles sometimes, right? And that's what we need to happen, and it's just an inherent problem that it's going to take a while to get those things to a point where it's like, well, I really do trust what this is saying and I really think it's okay to now start integrating that into our standard of care. I think that's where you're seeing it. It's frustrating for all of us, believe me. I mean, like I said, I think personally one of the biggest impacts I want to have, like when I go to my grave, is that we use machine learning to improve healthcare. I really do feel that way, but it's just not something we can do very quickly. And as a business person, I don't actually look at those use cases right away because I know the cycle is just going to be long. So to your point, the FDA for about four years now has understood that the process that has been given to them by their board of directors, otherwise known as Congress, is broken. And so they've been very actively seeking new models of regulation. What's really forcing their hand is regulation of devices and software, because in many cases there are black box aspects of those, and there's historically been a black box aspect to machine learning. Intel and others are making inroads, providing some sort of traceability and transparency into what happens in that black box, rather than saying, well, overall we get better results, but once in a while we kill somebody, right? So there is progress being made on that front. And there's a concept that I like to use. Everyone knows Ray Kurzweil's book, The Singularity Is Near?
Well, I like to think of the Diadarity Is Near, and the Diadarity is where you have human transparency into what goes on in the black box. And so maybe, Bob, you wanna speak a little bit about this; you mentioned in a prior discussion that there's some work going on at Intel there. Yeah, no, absolutely. So we're working with a number of groups to really build tools, and in fact Naveen probably can talk in even more detail than I can, but they're tools that allow us to actually interrogate machine learning and deep learning systems to understand not only how they respond to a wide variety of situations but also where their biases are. I mean, one of the things that's shocking is that if you look at the clinical studies that our drug safety rules are based on, 50 year old white guys are the peak of that distribution. Which I don't see any problem with, but some of you out there might not like that if you're taking a drug, sorry. So yeah, we wanna understand what the biases in the data are, right? And so there's some new technologies. There's actually some very interesting data generative technologies, and this is something I'm also curious what Naveen has to say about: you can generate from small sets of observed data much broader sets of varied data that help probe and fill in your training for some of these systems that are very data dependent. So that takes us to a place where we're gonna start to see deep learning systems generating data to train other deep learning systems, and they start to sort of go back and forth, and you start to have some very nice ways to at least expose the weaknesses of these underlying technologies. And that feeds back to your question about regulatory oversight of this. And there's a fascinating but little known origin of why very few women are in clinical studies. Thalidomide caused birth defects.
So rather than say pregnant women can't be enrolled in drug trials, they said any woman who is at risk of getting pregnant cannot be enrolled. So there was actually a scientifically meritorious argument back in the day, when they really didn't know what was gonna happen post-thalidomide. But it turns out that the adverse unintended consequence of that decision was that we don't have data on women. And we know in certain drugs, like Xanax, that the metabolism is so much slower that the typical dosing of Xanax in women should be less than half of that for men, and a lot of women have had very serious adverse effects by virtue of the fact that they weren't studied. So the point I want to illustrate with that is this: people have known for a long time that that was a bad way of doing regulation and that it should be changed, and it's only recently getting changed in any meaningful way. Regulatory cycles and legislative cycles are incredibly slow, while the rate of growth in technology is exponential. And so there's this impedance mismatch between the cycle time for regulation and the cycle time for innovation. And what we need to do, and I'm working with the FDA, I've done four workshops with them on this very issue, is recognize that they need to completely revitalize their process. They're very interested in doing it. They're not resisting it. People think, oh, that bad FDA, they're resisting. Trust me, there's nobody on the planet who wants to revise these review processes more than the FDA itself. And so they're looking at models, and what I've recommended is global crowdsourcing, where the FDA could shift from a regulatory role to one of doing two things: assuring that the people who do their reviews are competent, and assuring that their conflicts of interest are managed. Because if you don't have a conflict of interest in this very interconnected space, you probably don't know enough to be a reviewer. So there has to be a way to manage the conflicts of interest.
And I think those are some of the key points that the FDA is wrestling with, because there are type one and type two errors. If you under-regulate, you end up with another thalidomide and people born without fingers. If you over-regulate, you prevent life-saving drugs from coming to market. So striking that balance across all these different technologies is extraordinarily difficult. If it were easy, the FDA would have done it four years ago. It's very complicated. Jumping off that question: all three of you are in some way entrepreneurs, whether within your organization or having started companies. And I think it'd be good to talk a little bit about the business opportunity here, where there's a huge ecosystem in healthcare. There are different segments, biotech, pharma, insurance payers, et cetera. Where do you see the ripe opportunity, or the industry ready to really take this on and make AI the competitive advantage? Well, the last question also included, why aren't you using the results of the sepsis detection? We do. There were six or seven published ways of doing it. We took our own data and looked at it. We found a way that was superior to all the published methods, and we apply that today. So we are actually using that technology to change clinical outcomes. As far as where the opportunities are, it's interesting, because if you look at what's gonna be here in three years, we're not gonna be using those big data analytics models for sepsis that we are deploying today, because we're just gonna be getting a tiny aliquot of blood and looking for the DNA or RNA of any potential infection. And we won't have to infer that there's a bacterial infection from all these other ancillary, secondary phenomena. We'll see if the DNA is in the blood. So things are changing so fast that the opportunities people need to look for are the generalizable and sustainable kinds of wins that are gonna lead to a revenue cycle that will justify the venture capital world investing.
So there's a lot of interesting opportunities in the space, but I think some of the biggest opportunities relate to what Bob has talked about: bringing many different disparate data sources together and really looking for things that are not comprehensible by the human brain or by traditional analytic models. And I think we also gotta look a little bit beyond direct care, right? We're talking about policy and how we set up standards, these kinds of things. That's one area that's gonna drive innovation forward, and I completely agree with that. Direct care is one piece. How do we scale out many of the knowledge kinds of things that are embedded in one person's head and get them out to the world and democratize that? Then there's also development of the underlying technologies of medicine, right? Pharmaceuticals. The traditional way that pharmaceuticals are developed is actually kind of funny, right? A lot of it started just by chance. Penicillin is a very famous story, right? It's not that different today, unfortunately. It's conceptually very similar, and now we've got more science behind it. We talk about domains and interactions, these kinds of things, but fundamentally the problem is what computer science calls NP-hard. It's too difficult to model. You can't solve it analytically, right? And this is true for all of these kinds of natural problems, by the way. And so there's a whole field around this, molecular dynamics and modeling these sorts of things, that actually is being driven forward by these AI techniques, because it turns out our brain doesn't do magic. It actually doesn't solve these problems. It approximates them very well, and experience allows you to approximate them better and better. Actually, it goes a little bit to what you were saying before, like simulations informing neural networks and training off each other.
There are these emergent dynamics where you can simulate steps of physics and come up with a system that's much too complicated to ever solve. Three pool balls on a table is one such system. Seems pretty simple. You know how to model that, but it actually turns out you can't predict where a ball's gonna be once you inject some energy into that table. So something that simple is already too complex. So neural network techniques actually allow us to start making those tractable, right? These NP-hard problems, things like molecular dynamics and actually understanding how different medications and genetics will interact with each other, are something we're seeing today. And so I think there's a huge opportunity there. We've actually worked with some customers in this space, and I'm seeing it: Roche is acquiring a few different companies in the space. They really want to drive it forward, using big data to drive drug development. Kind of counterintuitive. I never would have thought it had I not seen it myself. And there's a big related challenge, because in personalized medicine there are smaller and smaller cohorts of people that will benefit from a drug that still takes $2 billion on average to develop. That is unsustainable. So there's an economic imperative of overcoming the cost and the cycle time for drug development. I want to go at this question a little bit differently, thinking not so much about which industry segments can benefit from AI, but about what kinds of applications I think are most impactful. So if this is what a skilled surgeon needs to know at a particular time to care properly for a patient, this area here is where most surgeons are. That is, they're as close to the maximum knowledge and ability to assimilate as they can be.
So it's possible to build complex AI that can pick up on that one little thing and move them up to here, but it's not a gigantic accelerator or amplifier of capability. But think about other actors in healthcare, and I mentioned a couple of them earlier. Who do you think the least trained actor in healthcare is? Patients. Yes, the patients. The patients are really very poorly trained, including me. I'm abysmal at figuring out who to call and where to go. So that's one of the big opportunities. You know as much as the doctor, right? Yeah. That's right. My doctor friends always hate that. Yeah. Patients coming in knowing a diagnosis. Yeah, Dr. Google knows. So the opportunities that I see that are really, really exciting are when you take an AI agent, what I sometimes like to call a contextually intelligent agent, or a CIA, and apply it to a problem where a patient has a complex future ahead of them that they need help navigating, and you use the AI to help them work through it. Post-operative care: you've got PT, you've got drugs, you've got to be looking for side effects. An agent can actually help you navigate. It's like your own personal GPS for healthcare. So it's giving you the information that you need about you, for your care. That's my definition of precision medicine. And it can include genomics, of course, but it's much bigger. It's that broader picture. And I think that sort of agent way of thinking about things, and filling in the gaps where there's less training and more opportunity, is very exciting. And I had a great startup idea right there, by the way. Oh yes, right. We'll meet you all out back for the next startup. Yeah. Yeah, I had a conversation with the head of the American Association of Medical Specialties just a couple days ago. And what she was saying, and I'm aware of this phenomenon, is that all of the medical specialists are saying, you're killing us with these stupid board recertification trivia tests that you're giving us.
So if you're a cardiologist, you have to remember something that happens in one in 10 million people, right? And they're saying, that's irrelevant anymore, because we've got advanced decision support coming, we have these kinds of analytics coming, precisely what you're saying. So it's human augmentation with decision support that is coming at blazing speed toward healthcare. And in that context, it's much more important that you have a basic foundation: you know how to think, you know how to learn, and you know where to look. So we're gonna be humans augmented by learning systems, much more so than in the past, and so the whole certification process is being revised right now. Yeah, speak up. Yeah. I'll just talk very loudly. Sorry, you guys were talking about certain things being implemented past their peak learning curve. When you get complex systems, and the brain is about as complex as it gets, maybe the microbiome is more complex than the brain, are our brains big enough to understand our brains? Literally, do we need some layer of learning beyond that to completely fathom all the connections and all kinds of problems? What makes it fathomable is that you can decom... Sure. She was saying that our brain is really complex and large, and even our brains don't know how our brains work, so are there ways to... what hope do we have, kind of thing? It's a metaphysical question. It's turtles all the way down. Exactly, that's a great quote. I mean, basically, every complicated system can be decomposed into simpler emergent properties. You lose something, perhaps, with each of those, but you get enough to actually understand most of the behavior. And that's really how we understand the world, right?
And that's what we've learned in the last few years about what neural network techniques can allow us to do. And that's why our brain can understand our brain. Yeah, actually, I'd recommend Ray Kurzweil's last book, because he addresses that issue very elegantly. Yeah. Yeah, we're seeing some really interesting technologies emerging right now where neural network systems are actually connecting to other neural network systems in networks. And you can see some very compelling behavior there, because one of the ways I like to distinguish AI from traditional analytics is this: we used to have question answering systems. I used to query a database and create a report to find out how many widgets I sold. Then I started using regression or machine learning to classify complex situations, you know, this is one of these and that's one of those. And then as we've moved along more recently, we've got these AI-like capabilities, like being able to recognize that there's a kitty in a photograph. But if you think about it, if I were to show you a photograph that happened to have a cat in it and I said, what's the answer? You'd look at me like, what are you talking about? I have to know the question. So where we're cresting with these connected sets of neural systems, and with AI in general, is that the systems are starting to be able to understand from the context what the question is. Why would I be asking about this picture? I'm a marketing guy and I'm curious about what logos are in the picture, or what kind of cat it is. So it's being able to ask a question and then take these question answering systems and actually apply them. It's this ability to understand context and ask questions that we're starting to see emerge from these more complex hierarchical neural systems. There's a person dying to ask you questions. Sorry, you have hit on several different topics that all coalesce together. You mentioned personalized models.
You mentioned the AI agents that can help you as you're going through a transitionary period. You mentioned data sources, especially across long time periods. Who today has access to enough data to make meaningful progress on that? Not just when you're dealing with an issue, but for day-to-day improvement of your life and your health. Right, great question. That is a great question, and I don't think we have a good answer to it. Well, I think every large healthcare organization and various healthcare consortiums are working very hard to achieve that goal. The problem remains in creating semantic interoperability. So I've spent a lot of my career working on semantic interoperability, and the problem is that if you don't have well-defined or self-defined data, and if you don't have well-defined and documented metadata, and you start operating on it, it's really easy to reach false conclusions. And I can give you a classic example. It's well known, with hundreds of studies looking at it, when you should give an antibiotic before surgery and how effective it is in preventing a post-op infection. Simple question, right? Most of the literature done prospectively was done in institutions with small sample sizes. So if you pool that, you get a little bit more noise, but you get a more confirming answer. What was done at a very large institution, not my own, and I won't name them for obvious reasons, was to pool lots of data from lots of different hospitals where the data definitions and the metadata were different. Two examples. When did they indicate the antibiotic was given? Was it when it was ordered, dispensed from the pharmacy, delivered to the floor, brought to the bedside, put in the IV, or when the IV starts flowing? Different hospitals used a different metric of when it started. When did surgery occur? When the patient was wheeled into the OR? When they were prepped and draped? When the first incision occurred?
All different. And they concluded, quite dramatically, that it didn't matter when you gave the pre-op antibiotic and whether or not you get a post-op infection. And everybody who was intimate with the prior studies just completely ignored and discounted that study. It was wrong, and it was wrong because of the lack of commonality and normalization of data definitions and metadata definitions. So this problem is much more challenging than you would think. If it were as easy as putting all this data together, normalizing it, and operating on it, we would have done all that a long time ago. Semantic interoperability remains a big problem, and we have a lot of heavy lifting ahead of us in that space. I'm working with the Global Alliance for Genomics and Health, for example. There are like 30 different major ontologies for how you represent genetic information, and different institutions are using different ones in different ways, and different versions over different periods of time. That's a mess. What about an N of 1 data set versus a population? Well, N of 1 studies, single subject research, is an emerging field of statistics. There are some really interesting new models, like stepped wedge analytics, for doing that on small sample sizes, recruiting people asynchronously. There are single subject research statistics where you compare yourself with yourself at a different point in time, in a different context. So there are emerging statistics to do that, and as long as you use the same sensor, you won't have a problem. But people are changing their remote sensors, and you're getting different data measured in different ways, with different sensors, different normalization, and different calibration. So yes, the problem even persists in the N of 1 environment. Yeah, you have to start with a large N that you can apply to the N of 1. I'm gonna actually attack your question from a different perspective.
So who has the data, the millions of examples, to train a deep learning system from scratch? It's a very limited set right now. Technologies such as the Collaborative Cancer Cloud and the data exchange are definitely impacting that and creating larger and larger sets of critical mass, notwithstanding the very challenging semantic interoperability questions. But there's another opportunity. Kay asked about what's changed recently, and one of the things that's changed in deep learning is that we now have modules that have been trained on massive data sets that are actually very smart at certain kinds of problems. So for instance, you can go online and find deep learning systems that can actually recognize better than humans whether there's a cat, dog, motorcycle, or house in a photograph. From Intel open source. Yes, from Intel open source. So here's what happens next. Most of that deep learning system is very expressive, that combinatorial mixture of features that Naveen was talking about when you have all these layers. There are a lot of features in there that are actually very general to images, not just for finding cats, dogs, and trees. So what happens is you can do something called transfer learning, where you take a small or modest data set and actually re-optimize the system for your specific problem very, very quickly. And so we're starting to see a place where, on one end of the spectrum, we're getting access to the computing capabilities and the data to build these incredibly expressive deep learning systems. And over here on the right, we're able to start using those deep learning systems to solve custom versions of problems.
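The transfer learning recipe described here, reusing a big pretrained network's general features and quickly re-optimizing only a small head for a new problem, can be sketched in miniature. Everything below is a toy stand-in, not Intel's actual tooling: the "pretrained backbone" is just a frozen random ReLU projection, and the two-class data set is synthetic.

```python
import math
import random

random.seed(0)

# Stand-in for a pretrained backbone: a fixed ("frozen") feature extractor.
# In practice this would be a deep network trained on a massive image set;
# here it's a random projection plus ReLU that we never update.
N_FEATURES = 16
W_frozen = [[random.gauss(0, 1) for _ in range(2)] for _ in range(N_FEATURES)]

def features(x):
    """Frozen 'pretrained' features: ReLU of a fixed projection of the input."""
    return [max(0.0, w[0] * x[0] + w[1] * x[1]) for w in W_frozen]

# Tiny task-specific data set (say, two flower species): the only new data.
data = [([random.gauss(2, 1), random.gauss(0, 1)], 1) for _ in range(20)] + \
       [([random.gauss(-2, 1), random.gauss(0, 1)], 0) for _ in range(20)]

# Transfer learning step: train ONLY a small logistic-regression head on top
# of the frozen features, by plain stochastic gradient descent on log loss.
head = [0.0] * N_FEATURES
bias = 0.0
feats = [(features(x), y) for x, y in data]
for _ in range(500):
    for f, y in feats:
        logit = sum(wi * v for wi, v in zip(head, f)) + bias
        p = 1.0 / (1.0 + math.exp(-logit))       # sigmoid
        err = p - y                              # gradient of log loss w.r.t. logit
        head = [wi - 0.05 * err * v for wi, v in zip(head, f)]
        bias -= 0.05 * err

def predict(x):
    f = features(x)
    return 1 if sum(wi * v for wi, v in zip(head, f)) + bias > 0 else 0

accuracy = sum(predict(x) == y for x, y in data) / len(data)
```

The shape is the point: all the expressive capacity sits in the frozen features, and only the tiny head is re-fit to the new data, which is why this kind of re-optimization can take minutes rather than weeks of training from scratch.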
And just a weekend or two ago, in 20 minutes, I was able to take one of those general systems and create one that could recognize all different kinds of flowers, very subtle distinctions that I would never be able to make on my own. But I happened to be able to get the data set, and literally it took 20 minutes, and I had this vision system that I could now use for a specific problem. I think that's incredibly profound, and I think we're gonna see, across this spectrum of wherever you are in your ability to get data, to define problems, and to put hardware in place, really neat customizations and a proliferation of applications of this kind of technology. So there's one other trend I'm very hopeful about. This is a hard problem, clearly, right? I mean, getting data together and formatting it from many different sources, it's one of these things that's probably never gonna happen perfectly, right? But one trend that is extremely hopeful to me is the fact that the cost of gathering data has precipitously dropped. Building that kind of thing is almost free these days, right? And I can write software and put it on 100 million cell phones in an instant. You couldn't do that five years ago, even. And the amount of information we can gain from a cell phone today has gone up, right? We have more sensors, and we're bringing online more sensors. People have Apple Watches, and they're even sending blood data back to the phone. So once we can start gathering more data, and doing it cheaper and cheaper, it actually doesn't matter where the data is. I can write my own app, I can gather that data, and I can start driving the correct inferences, or useful inferences, back to you. So that is a positive trend, I think. And personally, I think that's how we're gonna solve it: by gathering data from many different sources cheaply. Hi, my name's Pete.
I very much enjoyed the conversation so far, but I was hoping, perhaps, to bring a little more focus onto precision medicine and ask two questions. Number one, how have you applied the AI technologies, as they're emerging so rapidly, to your natural language processing? I'm particularly interested in things like Amazon Echo or Siri or the other voice recognition systems that are based on AI; they've just become incredibly accurate, and I'm interested in specifics about how I might use technology like that in medicine. So where would I find a medical nomenclature, and perhaps some reference to a back end that works that way? And the second thing is, what specifically is Intel doing or making available? You mentioned some open source stuff on cats and dogs and such, but I'm a doc, so I'm looking at the medical side of that. What are you guys providing that would allow those of us who are kind of geeks on the software side, as well as being docs, to experiment a little more thoroughly with AI technologies? Google has a free AI toolkit, and several other people have put out free AI toolkits to accelerate that. There's special hardware now, with graphics co-processors hitting amazing speeds. And so I was wondering, where do I go in Intel to find some of those tools, and perhaps learn a bit about the fantastic work that you guys are already doing at Kaiser? So let me take that first part, and then we'll be able to talk about the MD part. In terms of the technology, this is what is extremely exciting now about what Intel is focusing on. We are providing those pieces so you can actually assemble them and build the applications. How you build that application specifically for MDs and their use cases is up to you, or to whoever's building the application. But we're going to power that technology from multiple perspectives. Intel already is the main force behind the data center, right? Cloud computing, all of this is already Intel.
We're making that extremely amenable to AI and setting the standard for AI in the future. We can do that through a number of different mechanisms. For somebody who wants to develop an application quickly, we have hosted solutions; Intel Nervana is kind of the brand for these kinds of things. Hosted solutions get you going very quickly. Once you get to a certain level of scale, where the costs start making more sense, things can be brought on premise. We're supplying that, and we're also supplying software that makes that transition essentially free, right? Then there's taking those solutions that you develop in the cloud or in the data center and actually deploying them on a device, like you want to run something on your smartphone or PC or whatever. We're actually providing those hooks as well. So we want to make it very easy for developers to take these pieces and actually build solutions out of them quickly. You probably don't even care what hardware it's running on. You're like, here's my data set, this is what I want to do. I want to train it, make it work, go fast, right? Make my developers efficient. That's all you care about, right? And that's what we're doing. We're taking it from that point: how do we best do that? We're going to provide those technologies. In the next couple of years there's going to be a lot of new stuff coming from Intel. Do you want to talk about the AI Academy as well? Yeah, that's a great segue. So in addition to this, we have an entire set of tutorials and other online resources and things we're going to be bringing into the academic world for people to get going quickly. So that's not just enablement on our tools but also just general concepts. What is a neural network? How does it work? How does it train? All of these things are available now, and we have them in a nice, digestible class format that you can actually go and play with. Let me give a couple of quick answers in addition to the great answers already.
So you're asking, why can't we use medical terminology and do what Alexa does? Well, no, no, no. You may not be aware of this, but Andrew Ng, who was the AI guy at Google, was recruited to Baidu, and they have a medical chatbot in China today. I haven't been able to use it yet; I don't speak Chinese. There are two similar initiatives in this country that I know of, and there are probably a dozen more in stealth mode, but Lumiata and HealthTap are doing chatbots for healthcare today using medical terminology. You have the compound problem of semantic normalization within a language, compounded across languages. I've done a lot of work at an international organization called SNOMED, which standardizes medical terminology, so I'm aware of that. We could talk offline if you want, because I'm pretty deep into the semantic space. Go Google Intel Nervana and you'll see all the websites there: intel.com/ai or intelnervana.com. Okay, great. Well, this has been fantastic. I want to first of all thank all the people here for coming and asking great questions. I also want to thank our fantastic panelists today. And lastly, I just want to share one bit of information. We will have more discussions on AI next Tuesday at 9:30 AM. Diane Bryant, who's our general manager of the Data Center Group, will be here to do a keynote, so I hope you can all join that. Thanks for coming.