All right, well, it looks like we have our quorum, so let's go ahead and get started. Good afternoon, everyone, or morning or evening, wherever you are as you're tuning in. It's great to see everyone, and I'm very much looking forward to our conversation today. My name is Andrew Bremer. I'm an associate program officer on the Board on Life Sciences here at the National Academies of Sciences, Engineering, and Medicine. It's my pleasure to welcome you to our session today, and for many of you, welcome back to our series of workshop discussions for our Standing Committee on Biotechnology Capabilities and National Security Needs. Today's session is the final planned discussion for our Cutting-Edge Scientific Capabilities for Biological Detection workshop series, following on from our discussions last week. On the next slide, there's a little bit more info for today, as well as a couple of notes that I wanted to make before getting started. As I mentioned last week, this workshop is similar to others at the National Academies in that the discussions do not result in any formal recommendations. However, public proceedings will be published in the next couple of months if you're interested in returning to the discussions from the workshop. Those proceedings will chronicle the presentations and discussions that took place both last week and today. Please note that the discussion is being recorded and participation may be included in those proceedings. If you would prefer not to have your participation included, we encourage you to only watch and listen in on the discussion. However, if you are interested, we very much encourage use of the chat function and the Q&A function.
If you don't have the Q&A function popped up, you can click on the three dots in the lower right-hand corner of the WebEx window and you'll see the Q&A function there. With that, for today, I first want to welcome and introduce the chair of our standing committee, Dr. Todd Coleman, for some brief remarks. Dr. Coleman is an associate professor in the Department of Bioengineering at Stanford University. His multidisciplinary research expertise spans applied probability, bioelectronics, physiology, synthetic biology, and more. It's been such a delight working with him these last couple of months as we started up the standing committee. He holds a bachelor's in electrical engineering and computer engineering from the University of Michigan and a master's and PhD in electrical engineering from MIT. Dr. Coleman, Todd, welcome, and I'll pass things over to you. Good morning slash afternoon to everyone. I'd first like to welcome everyone to this exciting workshop session today. I'm personally very enthusiastic; this doesn't feel like work because, to be honest, this is fun, turning science fiction into reality. The Standing Committee on Biotechnology Capabilities and National Security Needs was established to facilitate active cross-sector engagement among stakeholders as we explore new and emerging biotechnologies with the potential to enable national security-relevant capabilities. We had some workshop sessions last week, four of them actually, and they were planned with that goal in mind. We hope that the active engagement fostered by those sessions continues as our standing committee explores and develops future events such as this, where we try to examine additional biotechnologies and the implications of their research, development, and deployment.
And so, in short, I would just like to say, please don't hesitate to reach out and get in touch with our committee or staff. So, with that, I hand it back to Andrew. Thanks so much, Todd, and we'll dive right into it. I want to introduce now our vice chair for the standing committee and the moderator for our session today, Dr. Diane DiEuliis. Dr. DiEuliis is a senior research fellow at National Defense University. Her research areas include emerging biological technologies, biodefense, and preparedness for biothreats, with specific areas of expertise including dual-use life sciences research, synthetic biology, the U.S. bioeconomy, and more. Prior to NDU, she held roles at the NIH, as well as at the Office of Science and Technology Policy in the White House, serving as the assistant director for life sciences. She holds a PhD in biology from the University of Delaware. Diane, Dr. DiEuliis, welcome, and I'll turn things over to you. Okay, great. Thanks so much, Andrew, and thanks to Todd; thank you both for kicking this off. I'm really excited about this session. As Todd mentioned, this is sort of a follow-on from last week's really interesting sessions on capabilities overall related to biological detection. So today we're gonna discuss the impending impact of scaled and accessible brain-sensing technologies. One of the things that struck me from last week's sessions was, number one, how rapidly many of these technologies are advancing, and Todd alluded to this as well. And I think in the brain sciences, many of us who are used to things edging forward on a very slow scale of development when it comes to the human brain are seeing leaps and bounds being made in this space. So as was discussed last week, we've been seeing this dramatic rise in our ability to detect, capture, and make sense of vast amounts of data originating from the brain and its physiology. One of my research collaborators, Dr.
Jim Giordano from Georgetown University, and I have called this and referred to it as neuro data, distinguishing it from other types of biological data, because when it has to do with the brain, it always has to do with behavior and human beings when we're talking about the human brain. So today we're gonna hear about advancements in neuro detection technologies and their implications, and really continue this discussion. It's my pleasure to welcome our guest speakers, Jonathan Berent and Dr. Alan Levy. First, Jonathan, who I believe goes by JB, is the CEO and founder of NextSense, which is a spin-off of X, Google's moonshot factory. While he was at X, he built a team to develop brain-sensing technologies and grew this project into a funded pipeline project as recently as 2020. Working with prominent sleep and dream researchers, he published a groundbreaking study revealing a mechanism to study altered states of consciousness during sleep. He has a BA in philosophy and religious studies from Stanford. Dr. Alan Levy is the Goizueta Foundation Endowed Chair for Alzheimer's Disease Research (Kavita actually pronounced that for me; hopefully I pronounced it correctly, and apologies if I have not) at Emory, where he is the founding director of the Institute of Brain Health for personalized medicine. He's a neurologist and neuroscientist working on mechanisms and biomarkers for neurodegenerative disease, an area that I spent a lot of time in while I was at the NIH, so I'm very excited to hear these updates. Dr. Levy is an internationally recognized leader in this space and a member of the National Academy of Medicine. He holds a BS from the University of Michigan and an MD and PhD in immunology from the University of Chicago. So JB and Alan, thank you so much. We're excited to hear what you have to say, and I'm going to turn it over to you for your remarks.
Thank you so much, Diane and Todd, Andrew and Kavita, for inviting us, and we hope we'll have a really good discussion today. Alan and I like this title, Look Up. It comes from a discussion I was having with one of Alan's colleagues, Dr. David Rye, who said, JB, I guess you're the crowd that looks up, and I didn't know if that was a good thing or not. So I had to watch the movie that maybe many of you have watched, which is entitled Don't Look Up. If you haven't seen it, you want to look up; you want to be part of that crowd. So we thought that was a fitting title, because we are going to be talking about something that is exciting. There's a lot that can happen with ubiquitous brain technology, but there are some things that we need to think about, and one of the things that we haven't seen enough of in terms of Silicon Valley tech leadership is thoughtful introduction of new tech. So we hope to have more of a balanced view of this exciting area. With that, I'm going to turn it over to Alan to give us a bit of a primer on EEG so we can understand this new way of sensing brain waves; that will hopefully set the stage. So Alan, I'll go ahead and advance the slides for you. Great, thanks, JB, and thanks, everybody, for the introduction. I feel a little bit like a fish out of water here; my expertise in neurology and neuroscience is not EEG, so I'm a perfect person to give you a primer. This is a diverse group, so let me just walk you through my perspective on EEG as a neurologist. This is a technology that's been around a long time. Electroencephalography was discovered in 1924; Hans Berger in Germany recorded with one little electrode, placing it on the scalp of his family and friends and himself, and was able to detect electrical activity in the brain. And it was about 10 years later that people started really replicating it.
But what you're looking at is his first publication, which came five years later. So this has been around almost a hundred years now, and you can see that the trace on the upper part is the brain activity, the EEG. This has been commonplace in neurology really since the 1930s and 1940s, and in psychiatry; Berger was actually a psychiatrist, and the fields of neurology and psychiatry were much more unified than they are today. And people were using EEG and exploring it for all sorts of things. JB and I were joking earlier that I think the motivation for Hans Berger to do this was his interest in mental telepathy, that maybe brainwaves could be transmitted to communicate nonverbally over remote distances. So maybe that's where we'll go later in the discussion. We use EEG routinely in medicine for much more mundane purposes. We use it commonly for detecting seizures. Very importantly, it's used to detect states of consciousness. So it's not uncommon, if somebody's confused, for them to get an EEG so the neurologist can determine if there's a metabolic change and their brain cells are just not firing well, or whether they might be having subclinical seizures. And for a person in a coma, we used to use this to determine whether somebody was brain dead or not. EEG brain activity is also the basis for how we monitor sleep in sleep medicine. And over the last couple of decades, there's been much more robust use of EEG for event-related potentials, which we will also talk a little bit more about; JB will give you some examples. Down in the lower right there, you see a cartoon of what an EEG really looks like, the way it's used in medicine and, for the most part, in research. There's an array of electrodes that are pasted onto the scalp of a patient.
Some companies now have higher-density caps that you can put on, but you're measuring electrical activity across the surface of the brain. And on the lower left panel, you can see that the electrical activity is really a manifestation of the way brain cells communicate. You can imagine there's electrical activity that you could measure in different parts of the brain that reflects the different brain structures, which are involved in different brain activities. And it's long been known that there are different rhythms of the electrical dipoles that are getting measured on the scalp surface. In fact, Hans Berger himself gave us the alpha rhythm, as it's called, which is the frequency of waves on that third tracing down in the lower left. The frequency of those rhythms is about eight to ten hertz, so it used to be called the Berger rhythm. And in fact, for many years, the EEG filtered out high-frequency things, but these different frequencies at which these waves of electricity are propagated reflect different brain states. The very bottom is what we don't wanna be in right now: if anybody is in delta rhythm, that means you're sleeping or comatose. Not good. And up at the higher frequencies, a lot of the most important cognitive and intellectual processing is occurring, at high frequencies that weren't appreciated until recently. So there's a lot of information you can gain just from measuring at the surface of the scalp. But there are some real problems, and it hasn't really been widely used. I'm not sure what the committee has talked about earlier, but I think about how the ways we assess brain function are often very static: imaging, brain imaging, MRI, even functional MRI. EEG, by contrast, works at millisecond, microsecond timescales, which is how the brain is really communicating in real time. If you go to the next slide, JB.
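To make the frequency bands described above concrete, here is a minimal sketch of how one might classify the dominant rhythm in a trace. The band edges are the usual textbook conventions, and the `dominant_band` helper and synthetic test signal are illustrative assumptions, not anything from the talk.

```python
# A minimal sketch (illustrative only): estimate which classic EEG band
# dominates a trace by comparing mean power spectral density per band.
import numpy as np
from scipy.signal import welch

BANDS = {  # approximate conventional band edges, in Hz
    "delta": (0.5, 4),
    "theta": (4, 8),
    "alpha": (8, 13),
    "beta": (13, 30),
    "gamma": (30, 80),
}

def dominant_band(eeg, fs):
    """Return the band name with the highest mean power in the trace."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    power = {name: psd[(freqs >= lo) & (freqs < hi)].mean()
             for name, (lo, hi) in BANDS.items()}
    return max(power, key=power.get)

# An idealized 10 Hz "Berger rhythm" plus a little noise should land in alpha.
fs = 250.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
print(dominant_band(signal, fs))  # alpha
```

A 2 Hz test tone would land in delta by the same rule, which is the sense in which "being in delta while awake is not good."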
So until recently, there have been a lot of challenges that have really limited the use of, and I think creative thinking about, how the EEG might be tapped for innovative purposes, medical, research, and otherwise. Typically, the way it's used, you can see some pictures of people with EEGs. That's our friend JB up there in the upper left; he shaved his hair to make sure there was good connectivity for that EEG. And down on the bottom is a typical setup for a sleep study, which happens in clinical labs all the time. You can see these electrodes are pasted on. It's not easy; it's a pain to get these set up. And typically what's done is you just capture a small snapshot of brain activity across those regions that are getting monitored, typically 20 minutes or so in a clinical evaluation; with a sleep study, it's overnight. Some of the really important limitations, though, are that what's getting monitored is actually just the very top surface of the brain, around the top edges. So some of the most elegant areas of the brain that are responsible for our highest behavioral functions are not tapped into with that EEG. And most importantly, I think, one of the things I've gotten very interested in is that we're in this whole new era of digital biomarkers: how we can phenotype people and individuals as we go towards personalized medicine. Digital biomarkers hold great promise in part because we can think about collecting information in real time, over a long time period, and at scale, which is what the technology JB is gonna talk about in a minute really opens up. And on the next slide: JB, as you heard, has founded this company, NextSense, which got spun out of X. In the lower left there, you see an MRI picture of the brain, and the little red dot under the temporal lobe is where the EEG electrode sits in these earbuds that JB and his team have developed.
So these are earbuds that are placed in the ear canal, and you can see it sits right under the temporal lobe. That temporal lobe is one of the most difficult areas for us to access, electrically or otherwise, when it comes to brain function for clinical research purposes. Yet that temporal lobe structure is where the hippocampus is. That's the part of the brain that is responsible for forming memories; it's essential for forming new memories. It's also the part of the brain that's deeply involved in language and personality. And it's the area of the brain that most often causes seizures, so it's the most epileptogenic part of the brain. These earbuds can sit in people, and what we did together with JB and his team was first a proof of concept: are we able to measure brain activity? We brought this into our epilepsy monitoring unit at Emory and monitored 20 patients. You're looking at the electroencephalogram. The very top channel there is labeled ear; there are two earbuds, one on the left and one on the right. You can see the cursor. That's the signal, the electrical signal, which is identical to the signal below it, which is the surface EEG. In fact, these were done in people with intracranial electrodes. It's pretty horrific if you're not aware, but we stick dozens of electrodes deep into the brain in individuals with epilepsy to determine which part of the brain is irritable. So this was the absolute ground truth. We could compare the intracranial recordings with the ear EEG and determine that this is very high fidelity, that we were able to recapitulate the signal, predominantly in the temporal lobe, which is where we would expect. And also, as JB mentioned, my colleague Dr. David Rye, a sleep expert at Emory: it's already clear that you can use these earbuds to monitor sleep in a way that really is unprecedented.
And so I've become convinced in this process that this is a terrific opportunity for us to monitor brainwave activity, at least from the temporal lobe region and the surround, including the adjacent frontal lobe, and do it chronically at scale. So I'm gonna turn it over to JB now, who's really the expert. Thanks so much, Alan. To start off, there have been a lot of researchers doing some fantastic research in this area, and I have a few slides where I can hopefully give credit to some of the pioneers, but we did build this early prototype. It looks like this. I am recording brainwaves right now; who knows what we'll mine from this conversation, and hopefully I was paying attention to what you were just saying, but maybe you can hold me accountable and we'll look at it together afterwards. It's a prototype. It records about 50 hours, and it can do it both on device as well as across a Bluetooth connection, much like your AirPods might connect to the phone; we have that transmitting to a phone, and it records data which then goes up into the cloud. We design these custom, so as you can see, they fit really nicely into my ear, and they have a very basic design without any electronics inside, just passive recording with the conductive material that allows for the biopotential acquisition, which then travels to the device itself. One of the challenges: I mean, Hans Berger himself waited a couple of decades before his research was taken seriously and replicated. So we wondered, how are we gonna convince people that this is a valid location? Even before we did the clinical work that Alan's talking about, at X we had to come up with some way of validating it, so I'll just briefly take you through that. We created this ballistic gel phantom with dipoles that would be generating signal, and then we could put the earbuds into this non-human model to make sure that we were picking up signals from a known generated signal source. And we passed that.
Then you heard about the Berger rhythm, or the alpha rhythm. It was a good day when we were able to close our eyes and see that alpha rhythm, which generates from the occipital region. It wasn't obvious that we'd be able to detect that in ear, but we can see it, and that was again a boost in our confidence. We did do the ECoG correlation that Alan mentioned. This is kind of an eye chart, but essentially what you can do is look at the correlations by frequency. Do you see correlation in the beta frequency from the ground truth to the ear? Do you see it in the theta? Do you see it in the alpha, and even in the gamma? We looked at that both sleeping and waking across the patient population at Emory, and we found some really interesting things. What was, I think, most interesting is that the correlations weren't spurious. How do we know that they weren't spurious? Well, you can see on this graph that high correlation is in the yellow and darker yellow, and no correlation is this light blue. You do get some inverse correlation depending on dipole orientation. But when you look at the places where we had the highest correlation, it made anatomical sense: it was right there in that temporal region. So this gave us more confidence that what we're seeing is not just random noise and we're not making false conclusions. We saw this at the theta rhythm. We saw this at the alpha rhythm. And we saw this, surprisingly, at gamma. I don't think we have enough data to be comfortable confirming that we see gamma reliably, but there is a way that it makes sense. In that picture that Alan showed you, there is less bone material separating the acquisition site from the neural source, and the skull is thicker as you go up to the scalp. People are skeptical of gamma at the scalp surface; however, perhaps as we get closer inside, with less attenuation of the signal, we are picking up gamma.
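The per-band correlation analysis described here can be sketched roughly as follows. This is a toy reconstruction under assumed band edges and synthetic signals, not NextSense's actual pipeline; `band_correlations` is a hypothetical helper name.

```python
# Illustrative sketch: band-pass an ear-EEG channel and an intracranial
# "ground truth" channel into the classic bands, then compute the Pearson
# correlation within each band. Band edges and signals are toy assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def band_correlations(ear, depth, fs):
    bands = {"theta": (4, 8), "alpha": (8, 13),
             "beta": (13, 30), "gamma": (30, 80)}
    return {name: float(np.corrcoef(bandpass(ear, lo, hi, fs),
                                    bandpass(depth, lo, hi, fs))[0, 1])
            for name, (lo, hi) in bands.items()}

# A shared 10 Hz source seen by both channels shows up as a high alpha-band
# correlation, while bands containing only independent noise do not correlate.
fs = 250.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
source = np.sin(2 * np.pi * 10 * t)
ear = 0.3 * source + 0.1 * rng.standard_normal(t.size)    # attenuated at the ear
depth = source + 0.1 * rng.standard_normal(t.size)        # near the generator
print(band_correlations(ear, depth, fs)["alpha"])         # close to 1.0
```

This is also why a high correlation only in anatomically sensible bands and channels argues against a spurious result: independent noise, filtered the same way, stays uncorrelated.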
And at some point, Alan, people might be interested in the 40 hertz work that's happening right now in Alzheimer's, and we can talk about that if that's of interest, Diane. So that's pretty exciting. And then, of course, as Alan mentioned, we did pick up seizures. These are some of the original slides that we used, and it was during sleep. That was something that was new for me, learning that people have these subclinical seizures that they don't even know they have, many times a night. So it's nice that we'd be able to detect that. And we talked a little bit about sleep staging. We're picking up all the characteristic waveforms, which pretty much look the same as they do on scalp; there's not that much difference in terms of the morphology of the waves. And here's where I just wanted to give a little bit of credit: Preben Kidmose at Aarhus, one of the first pioneers about a decade ago; Danilo Mandic of Imperial College London; and Tam Vu at the University of Colorado. I've spent time with all of them, I've read their publications, and if you're really interested in ear EEG, I highly recommend you check out anything that's published by these folks. Now, we did some interesting experiments. Dr. Ken Paller and Dr. Phyllis Zee were the researchers on this, and we looked to see if we could enhance slow wave sleep. It's really interesting that if you phase-lock on the slow wave pattern itself and add an auditory stimulus, when we did a double-blind controlled crossover study, we did see some impact on slow wave activity. And so Alan and I have been talking for several years about how perhaps there's something we can do to intervene, because there is a known correlation between fragmentation of slow wave sleep and cognitive decline, and perhaps there's a way to have some intervening activity. So we looked at that. It was interesting that there were responders and non-responders; we don't really understand why some people responded very favorably.
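The phase-locked auditory stimulation described here can be sketched, offline, roughly as follows. This is a toy illustration, not the published protocol: the slow-oscillation band, target phase, and the `stimulation_times` helper are all assumptions, and real closed-loop systems track phase in real time on streaming data rather than on a recorded trace.

```python
# Toy offline sketch of phase-locked slow-wave stimulation: estimate the
# slow-oscillation phase with a band-pass filter plus Hilbert transform,
# then mark the samples where a tone would be triggered near a target phase.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def stimulation_times(eeg, fs, band=(0.5, 1.25), target_phase=0.0, tol=0.05):
    """Return sample indices where the slow-oscillation phase crosses target_phase."""
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    slow = filtfilt(b, a, eeg)
    phase = np.angle(hilbert(slow))          # -pi..pi; 0 is near the wave peak
    hits = np.abs(phase - target_phase) < tol
    return np.flatnonzero(hits & ~np.roll(hits, 1))  # first sample of each crossing

fs = 100.0
t = np.arange(0, 30, 1 / fs)
eeg = np.sin(2 * np.pi * 0.8 * t)  # idealized 0.8 Hz slow oscillation
idx = stimulation_times(eeg, fs)
print(len(idx))  # roughly one trigger per slow-wave cycle over the 30 s
```

Delivering the tone at a consistent phase of the slow oscillation, rather than at random times, is the design choice that makes this an "enhancement" protocol rather than simple noise exposure.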
When we broke it out by that category, we did see some people doing better on memory tests. Dr. Paller designed declarative memory tests, and we looked at that across the various cohorts, and those that had more slow wave activity did better on the memory test. So that's interesting research. We also did some novel work around auditory attention decoding. The reference I made to recording my brain waves: if you match the auditory track with the brain waves, you will see some similarities if the person is paying attention. At X, we were wondering if this could be a novel way to control interfaces. But then, as I got to know Alan and got more interested in the medical domain, I thought maybe strength of attention could become a biomarker for brain health. Malcolm Slaney at Google is one of the top researchers in the world in this space of auditory attention, and we actually did this: we had this very difficult task where, with electrodes around the ear, two different audiobooks played at the same volume. People had to look straight ahead and attend left versus right, one minute left, one minute right. And with the exception of subject number two, we were able to really tell which one they were attending to. So that's quite interesting, and there might be some interesting implications of that. And then lastly, and maybe this is almost a little creepy, to be honest: we did a session to see if the ERP could be indicative of whether you like something or dislike something. With Android phones, they do a photo burst, so we showed a range of photos, and we had people subjectively like or dislike the various photos. And indeed, we saw some correlation with a dislike. When the brain sees something that it doesn't like, there is just this involuntary response. And there are studies of that that are kind of putting into question free will itself, like what happens first: the response or the conscious experience of it.
But we did find some interesting research there. And then we did some very interesting research where, again, the credit goes to Karen Konkoly and Ken Paller, who pioneered this work; I just happened to be subject zero. That was leveraging the work on targeted memory reactivation, where you pair a tone with conscious activity and then play that tone at the right phase of sleep, and it can trigger a memory. So that can be a way to enhance memories if you're learning a new language, or, in our case, we wanted to induce a special hybrid state of consciousness where you actually are still asleep, but you can do things like math problems. That was published in Current Biology, and I think Ken may be in attendance on the call; he even told me that it made a late-night show. So his research is quite interesting, and I was very happy to be able to participate in it in some way. And I think that's it. Diane, we'll pass it over to you for questions. Okay, wow. There's a lot going through my mind right now; that was really a fascinating presentation. One of the first things I wanna mention is that I was involved in a recent set of DoD workshops where we've been looking at some of these different neurotechnologies and brain-machine interfaces. One of the ways that we've talked about describing that space is in terms of technologies for assessment versus technologies for real interventions, and then how we use data as a force multiplier to expand both of those categories. And so I was watching during your comments, and I'm like, wow, you've hit on all three of these with one product here in these earbuds. You can use these earbuds for assessments, as you're examining people's brainwaves when they sleep or do other kinds of activities, but then you could advance these to an actual intervention, where you're creating a sound that then creates an intervention in the patient.
And then you can use these reams of data to make observations about how people behave when looking at images and things like that. So what's fascinating to me, again, is the same kind of experience we were having at last week's sessions: how rapidly this technology is traversing all three of those categories at once. So that's just a comment. Other people may have questions, and I'm gonna try to look for people's hands up. But in the meantime, let's talk about the earbuds and their general use. Right now you're using them for this exploratory research. Do you see them as products? What kind of applications do you see for this as a product, and what kind of applications do you see in real-time use going forward? Sure, great question, Diane. So we are pursuing FDA clearance of the device. The data that we have supports patients with epilepsy as one of our first primary targets, because the signal is very big. Sometimes it is focal; not all seizures manifest in any kind of tonic-clonic motor activity. So being able to capture accurate seizure burden could, one, speed diagnosis for some. Some people are on this very long, multi-month, even multi-year odyssey to really determine what they're having. And then, for others, medicine titration. Another great colleague of Alan's is Dan Wiegel at Emory, and he said, JB, I'm a scientist and I wanna make decisions based on data, but right now, when I see patients, I have to base it on a diary. And I know that it's been shown that these diaries are not accurate. So imagine the conflict I feel in lowering a medicine dose: even though I know the side effects are tremendous for AEDs, I can't in good conscience lower it, because I don't know if their seizure diary is accurate. Compare that with an accurate seizure burden.
Then you could imagine doctors being able to holistically treat for quality of life and minimize those side effects while keeping people seizure-free. So we see that as our immediate use case to bring to the world. I would just add, JB, that the second obvious one to us for medical applications is really gonna be sleep, right? People probably know that overnight sleep centers are rapidly vanishing because insurers will pay only for in-home sleep studies, and we really don't have good ways to assess sleep, to be honest with you. So this could be a real game changer, because you could also imagine a microphone, so we could hear snoring (we didn't get into it), and you can pick up eye movements as well, so that really enables the staging of sleep. Sleep disorders underlie so much morbidity themselves, but also the risk for other diseases, cardiovascular disease and the like. So I think there's a big market for that. Certainly a big one. Yeah, absolutely. And I think one of the hurdles and barriers that we're getting over in this space, with neuro detection in general, is invasiveness: the fact that you can use these non-invasive methods of assessing the brain and then make these observations, and then interventions on top of them. It's just amazing. Okay, so I see Charles has his hand up, so let's go there first. I wanna remind everyone to go ahead and put questions and comments into the chat, and Andrew and I will do our best to field those. So Charles, go ahead. Yeah, so I was wondering about how well localized the signal is. You mentioned that there's a strong correlation when you do ECoG correlation with the earbuds, but what if the epileptic focus is far from the temporal lobe? How effective is the technology at detecting remote sites? Because people are often sent home with a whole scalp full of electrodes to do longitudinal monitoring.
So the question is what the relative effectiveness of those two technologies would be. Yeah, great question, Charles. That's part of the primary area of research right now. Because we have the stereotactic electrodes, the depth electrodes, and we have the MRI, we've been able to do a reconstruction of those and basically see the gradient of how far we can go into the brain. Obviously, for the lateral temporal lobe, we have good coverage. As you get into the more mesial temporal lobe, the hippocampus, we have missed some seizures in that area. So I don't think we would ever claim that we're gonna have full coverage from the ear. In fact, in some of our analysis, if we look at different seizure types and how many generalize, we think about 85% is probably what we'll capture. Although you'll find, and Alan can confirm this from his perspective as a neurologist, that a lot of the seizures, especially the focal seizures, are temporal lobe. So even if we can just capture those very well; and there's a high prevalence that's undiagnosed in the Alzheimer's and cognitive impairment population. Sometimes as high as 30% of those folks are actually having these subclinical seizures in that temporal lobe. So we feel like that would also be a good way for us to capture. We'll certainly be publishing the results of this reconstruction. If I may add to that, JB: I was very surprised, actually, to see that there was some focal activity in the frontal lobe that also got picked up on occasion. But certainly most seizures spread, and that's when it becomes a concern, when it leaves one part of the brain and spreads. And once it spreads, then it's pretty easy to detect. Okay, great. Thank you. I wanna go over to Rocco Casagrande, who had his hand up. He also put this question into the chat. Rocco, can you unmute? Sure, do you just want me to read my question out? Sure, go ahead, just in case people on the phone can't see the chat.
Sure, this is a fascinating talk. Thank you. I noticed the experiments where you provided a stimulus to the subjects were done on sleeping patients. I was wondering what you think the feasibility is of providing a stimulus to a subject who's awake to influence their opinions. So you mentioned liking or disliking a photo. Do you think it'd be feasible to add a stimulus to make someone like a photo more? And if so, what's the timeframe before that kind of research might be in hand? And so, by the question, do you mean actually influence the experience of liking, or just the neural correlate of liking? That they report liking it. Let's just make it a simpler readout versus whether they truly like it or not. You know, Alan, what would you think about the feasibility of something where you could stimulate a particular center for like-dislike? Yeah, that's a great question. I think it gets more into Diane's, you know, suggestion of where you're intervening, right? So there's gotta be a way to modulate. One of the things, we haven't talked about this, JB, but my mind went to some experiments that were done at NIH with a collaborator there, Kareem Zaghloul, who also has intracranial electrodes. And, you know, we've been also using digital technologies to monitor memory. And Kareem was putting electrodes really at deep depths throughout the temporal lobe. And from the waveforms, you can predict, basically using machine learning, whether someone's gonna remember what they see or not. So we developed this eye-tracking task, right? And so we can predict with pretty good accuracy just from the brainwave activity. So that led us to start thinking, well, if you could do that, you could potentially intervene and block a memory, or perhaps enhance a memory, which is sort of the first step in whether you might like or dislike something, because to like something you probably have to have some experience with it previously. That's right.
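To make the idea Alan describes concrete, here is a toy sketch of predicting "will remember" versus "won't remember" from EEG epochs with machine learning. Everything in it is a stand-in assumption, not the actual NIH pipeline: synthetic one-second epochs where "remembered" trials carry extra 6 Hz theta power, a crude band-power feature, and a nearest-centroid classifier.

```python
import math
import random

random.seed(0)
FS = 256        # hypothetical sampling rate in Hz
EPOCH_S = 1.0   # one-second epochs

def synth_epoch(remembered):
    """Synthetic EEG epoch: 'remembered' trials get extra 6 Hz theta power."""
    theta_amp = 2.0 if remembered else 0.8
    return [theta_amp * math.sin(2 * math.pi * 6 * t / FS) + random.gauss(0, 1.0)
            for t in range(int(FS * EPOCH_S))]

def theta_power(epoch):
    """Crude band-power feature: project the epoch onto a 6 Hz sine/cosine pair."""
    c = sum(x * math.sin(2 * math.pi * 6 * t / FS) for t, x in enumerate(epoch))
    s = sum(x * math.cos(2 * math.pi * 6 * t / FS) for t, x in enumerate(epoch))
    return (c * c + s * s) / len(epoch)

# "Training": estimate the mean feature per class, classify by nearest centroid.
train = [(theta_power(synth_epoch(r)), r) for r in [True, False] * 100]
mu_rem = sum(f for f, r in train if r) / 100
mu_forg = sum(f for f, r in train if not r) / 100

def predict(epoch):
    f = theta_power(epoch)
    return abs(f - mu_rem) < abs(f - mu_forg)

test = [(synth_epoch(r), r) for r in [True, False] * 50]
acc = sum(predict(e) == r for e, r in test) / len(test)
print(f"held-out accuracy: {acc:.2f}")
```

On this synthetic data the classes separate cleanly; real intracranial recordings are far noisier, which is why the discussion emphasizes "pretty good accuracy" rather than certainty.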
Yeah, I think that for some of these things that give some cause for concern, I would say we probably don't have to be too concerned. One thing that we haven't really talked about is that EEG is a very noisy signal. And so some of the neuroscience companies that are out there will promise a lot in terms of the BCI. But based on my research on that, we're still pretty far away. So I think some of these scenarios we don't have to be concerned about in the short term. Great, thank you. I want to go over now to Jorge Santiago Ortiz, who has his hand up with a question. Good afternoon, everyone. And thank you so much for this presentation. It's very exciting to see this technology. Have you thought about the applications of this technology to education, in terms of assessing learning and better understanding the efficacy of different teaching techniques and learning environments, with the goal of improving educational experiences? Yes, great question, Jorge. I do think that there are some applications of that. And no doubt you'll probably see some companies that leverage in-ear or around-the-ear technology to do just that. So you think about the ADHD population and kids' attention wandering. It is possible to decode both auditory attention and even visual attention. And so you could imagine some sort of haptic feedback, maybe a little buzz on a wrist when somebody's attention is wandering, to help them focus. You could imagine, and I don't know if this would necessarily be something that I'd recommend, but you could imagine where a teacher might have access to this information and have a sense of who's paying attention, and perhaps a little bit of real-time dialing in of content to be sure that you're grabbing it. So I do think attention decoding itself is probably the area where you could get the most profitable venture into the education space. I think you could potentially even extend it. I totally agree with you, JB, about attention.
And then the next step would be, if you attend to it, are you encoding it? And I think because we're in that temporal lobe, in the memory circuitry of the brain, you may be able to predict in the future whether somebody's actually going to remember or learn what they're reading. So that would be a great application, if the student knew they had to reread a paragraph. So I want to just tease out one thing that I just heard JB say about how there's a certain amount of messiness in these EEGs. Can you identify particular individuals on the basis of their EEGs? And could you say more about that? Yeah, we had a discussion about this, I think a week ago, and so there are some researchers. In fact, one woman, I don't know if you've talked to her as part of one of your panels, but Sarah Laszlo did some interesting research in that area. And I think it's possible. Obviously, there is some initial evidence that it's there. I think one thing that's interesting about ear EEG is that ear morphology is very unique. We had 5,000 ear scans and we did a principal component analysis, and it's so unique, it's like a fingerprint. So you can imagine certainly devices going into an ear almost like a lock and a key, and then the signal that comes up is most likely very unique, because of where the electrodes are always placed based on that location. So I think in some ways, ear EEG might make that even more likely to be successful, as opposed to scalp EEG, because, as you may know, with scalp you have to measure: you start at this location, and it's called a 10-20 montage based on 10% and 20% distances, and it's hard to get that exact. Versus the ear: every time I put these in, they're always in the same place. They're always recording from the same location of brain. And ERPs are an interesting approach too. So you might imagine a tone that's played. There's gonna be some sort of involuntary response, and you'd be able to capture that over time.
One of the nice things about ERPs is, if you do them enough, you can average out the noise. And so imagine that you have just sort of this subaudible ping that's happening on some sort of basis throughout the day. It's really capturing: what is JB's ERP to this sound? What is Alan's ERP to this sound? And so then I think you could imagine, with a high amount of feasibility, you could get that sort of brain authentication. Alan, any additional thoughts on that? I guess the related question is not just identification of particular individuals, but certain patterns or themes that appear that say, okay, this person is pre-AD or has early cognitive impairment or some other kind of condition. Is that a feasible thing? That's a great question. Yeah, I was thinking probably, especially with stimuli, right? So I think at resting state, I have a hard time imagining how we could pick up personal identifiers, except, as JB mentioned, through the anatomy. But certainly we're all responding differently to our environment, right? And boy, how individualized can you get? Your environment, and how your brain processes it? So you definitely could think about that. We're very interested in exploring the role of EEG in the ear as a biomarker for Alzheimer's disease and mild cognitive impairment. One of the things that JB and I are dreaming about is getting this into large numbers of healthy elderly individuals so we can understand the long, 20-year preclinical prodrome of Alzheimer's disease. The pathology first starts in the brain in the temporal lobes, as many of you may know, and that's exactly where we're picking up. So this could potentially be a very sensitive biomarker. Diane, I did start to say... Sorry. Oh, go ahead. I saw an interesting question pop up in the chat about the risks. I was just putting that in, and it's directed to you. I'll read it just in case there's people on the phone.
Do you envision a commercial version or future products where HIPAA may not apply? And what is your philosophy on data privacy and security? And then, tied to this, Michael also asked, what's the risk of misuse of this technology? And I believe Kavita also asked a similar question. Okay, so right back over to you, JB. Yeah, no, I think, as I mentioned at the very beginning, this is an area where we need to be really responsible and we need to be really thoughtful. And so I think the chief concern is around data ownership and privacy. One of the studies that I think is kind of jarring, actually published in the Proceedings of the National Academy of Sciences in 2015, was when Kosinski and Stillwell showed, with just Facebook likes, not very many, the ability of AI to predict personality. So with 70 likes, the AI can predict your personality better than a friend; with 150, better than a family member; and with 300 likes, AI can predict personality better than even your spouse. And so imagine now, with brainwaves, knowing what you are actually responding to and liking. That just cannot go into the hands of big tech. That is just not what we wanna see. And I feel very strongly about making sure that we take advantage of the trend that's happening, which is the web three trend. I'm not sure how much people are familiar with what's happening with web three, but web one was all about publishing, web two was all about user-generated content, such as social media and things like that. But web three is taking advantage of blockchain technology, so those that are creating value own it and they get the value. So you don't need the middlemen, the big tech companies, to take that profit. So I think we're finding a way for people, when they're using our earbuds, to make certain that they're gonna own that data.
And if they choose to monetize it, if they want to participate in pharmaceutical trials, they can certainly do that and get compensated for it. And they should be able to donate it to science if they want to. I mean, we're gonna develop a version that's gonna look something like this, where you could do a call. I'm doing this call, but the audio is coming through the speakers; you can imagine doing the audio through a pair of earbuds, as well as the sensing and capturing of those brainwaves, and people should be able to donate that to science. That's part of our vision. So I think this is here before we know it. You know, again, I'm using the product today. We have lots of people in the clinic using it. There are other companies, great entrepreneurs in the space, that are also doing really amazing things. And so it's coming. And so I think we have to be really thoughtful and make sure that we have safeguards around the data and that users are in control of what they do with it. Those are really good points. And I have a question from a committee member here that I wanna make sure to get to, but before I do that, just following along with how we do some good due diligence around that space: how do you envision the data being protected? I mean, are these like Bluetooth devices involved in passing data to where it gets stored and analyzed? And are they sort of unique programs? I have a vision in my mind of the future where there'll be DIY people out there getting unique earbuds created so they can measure their EEGs and then, you know, using programs to analyze their own EEGs. We saw this with transcranial electrical stimulation. Remember when people were going on eBay and buying stuff they could use to stimulate their own brains? With earbuds, how much more would they think that was cool, right?
So could you say what you think about that space? Just briefly. So, a couple of things. One, I'll go back to the fact that EEG is very noisy. So if somebody were to take this recording that I had today, would they be able to make anything meaningful out of it? It's just not that type of resolution. I think it's EEG combined with other data, right? It's EEG in response to a stimulus where you might be able to start making those pattern matches. So that's one contextual thing to keep in mind. I do think that some of the research that's going on with homomorphic encryption and zero-knowledge proofs allows us to keep data local, right? Google, I think, was one of the first pioneers with the federated learning model. So, as much as possible, keep the data local, such as the EEG, on the device, and then only update the model in the cloud. So we kind of imagine a future, and we have a project, Project Wisdom, that we'll be launching later this year, where people will be able to buy our earbuds if they want to contribute to science while they're watching Netflix; we believe that's a great thing, right? And so they will be able to do that, but we're looking at models to keep that data local, and we just abstract it. And so we can do the learning from across a large population, so that we can take the healthy control data and have that help us develop our pathological algorithms to help with epilepsy and extreme sleep disorders. So there are some breakthrough algorithms out there that encourage that privacy, which is really good. And then, again, the second point being that it is a signal that just by itself wouldn't be that damaging, but in context there could be some problems. Okay, great, thank you.
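The "keep the data local, only update the model in the cloud" pattern JB describes is the core of federated averaging. As a toy sketch (synthetic data, a one-parameter model, illustrative numbers; nothing here resembles any actual NextSense or Google system), each device runs a few gradient steps on its private data and returns only a model weight, which the server averages:

```python
import random

random.seed(1)

# Each "device" holds private samples; the raw samples never leave the device.
TRUE_MEAN = 10.0
devices = [[random.gauss(TRUE_MEAN, 2.0) for _ in range(50)] for _ in range(20)]

def local_update(weight, data, lr=0.1, steps=10):
    """Run a few on-device gradient steps for the loss (w - x)^2,
    then return only the updated weight, never the data."""
    w = weight
    for _ in range(steps):
        for x in data:
            w -= lr * 2 * (w - x) / len(data)
    return w

# Server loop: broadcast the global weight, average the returned local weights.
w_global = 0.0
for _ in range(5):
    local_weights = [local_update(w_global, d) for d in devices]
    w_global = sum(local_weights) / len(local_weights)

print(f"learned: {w_global:.2f}, true mean: {TRUE_MEAN}")
```

The design point is that the server sees only per-device weights, not EEG samples; in real deployments this is typically combined with secure aggregation or differential privacy, since model updates alone can still leak information.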
So the question here is from Kavita. And Andrew, let me know if anyone's hand is up that I haven't seen. I don't see any, but I just want to make sure. How unique is in-ear EEG, and what does the research landscape look like for that? Sure. Well, I mentioned earlier that Danilo Mandic and Preben Kidmose were probably the first two. So if you look at some of their publications, they go back about 10 years. They didn't have the sophisticated ear-scanning technologies that later got developed, and so you'll see a lot of crude prototypes, but they were some of the first to show that there was real brain signal there. And they looked at it with sleep patterns. There's a really nice publication of about 70 subjects, I think by Kidmose, showing sleep can be recorded in-ear. And I would say I owe my own confidence to their research and to meeting them in person. And then Alan, who I met about five years ago, and we talked about, if we built something like this, what would be the clinical use cases? And I've always been more drawn towards the clinical and the medical, as opposed to more of the human-computer interface. So there's a great amount of literature out there, and I would say every month there are new publications. And so that, to me, is very exciting. Again, I don't want to claim to represent the whole field on this call in any sense. I think there are going to be a lot of interesting companies and a lot of interesting products. I just hope that they're thoughtfully designed and that we're keeping data privacy in mind. Alan, anything you wanted to add to that?
You know, I spend my time trying to think about how we can cure Alzheimer's disease, and all the good foundations, you know. But one of the reasons I've loved working with JB is because there's a lot of conscience there. They're trying to do the right thing, and that came through loudly. I think it would be interesting to talk a little bit more about the point you raised, Diane, about the ability to modulate. David had suggested that maybe we talk a little bit about the potential to use this as a neuromodulation device. One of the things we're really excited about: there's been a terrific team at MIT that discovered that you can modulate these high-frequency gamma rhythms, and you can do it with light flickering at 40 times per second or sounds pulsing at 40 times per second, right? And so what we realized early on is that since we can record the frequency of the rhythm, we could use that as a way to detect whether brains are getting entrained. But the most interesting thing is not just the sleep modulation: if you introduce this 40 Hertz modulation, you activate the brain's neuroimmune system, and you can actually clear out amyloid pathology in mouse models of Alzheimer's disease. This is rapidly getting translated into clinical trials in humans because it's so safe and non-invasive. So the thought that you could deliver this non-invasively through an earbud, perhaps even as a subthreshold delivery... Yeah, that's absolutely fascinating. And on your first slide, where you were showing and explaining the EEG, going back to the original observations, you had all the different waveforms shown there. One of the things that was going through my mind is that in my course I always get students who ask me about this; they listen to things that are meant to change their brainwaves, right? So they go on YouTube and they listen.
They ask me how legit these things are. And so what you're telling me now is absolutely fascinating. And I did not know that about the immune system. I mean, that is just an amazing possibility, if it could happen. I'm trying to remember the terminology for this audio that you can listen to on YouTube that's meant to try to... Yeah. Yeah, can you say what it is? Well, I think the popular term is binaural beats. Binaural beats, gosh, I don't know why, sorry about that, I was having a senior moment, I couldn't remember. Binaural beats. And so people believe that if they listen to these, they can get their brain entrained into these different modes. And of course they don't know, because they've got the headset on and they're just listening. But for example, some people with post-traumatic stress disorder can listen to these and it actually alleviates some of their anxiety, or even depression. And so in the military context, people ask me frequently how relevant these potential technologies are, or whether they really work. They seem to work, but we don't have much good clinical evidence in that space. But anyway, you're right, having it via an earbud that's directly controlled is fascinating. Okay, so we've got just three minutes left. I wanna check with Andrew and Kavita: did we get everybody's questions? Did I miss anyone who wanted to ask a question and didn't get the opportunity? I don't see any, but I'm also looking now for hands if there is an additional question or two. Okay. And of course, Todd, my chair. Todd, any remarks from you before I pass over? A question, actually. So being in the ear, you're kind of close to the vagus nerve area. Could there be some interesting ways to explore the vagus nerve as well, in terms of modulation? Absolutely, yes. Well, Todd, you know this very well. I think one of the things that we're interested in, and other people too, is auricular stimulation.
And so I think that because peripheral branches of the vagus nerve are in the ear, it's very interesting to think about how you could modulate that. And so modulation is something that NextSense has been, I guess, a little cautious to go into, because, again, there's only so much you can do as a small company. But as we grow, I'm certainly keeping a keen eye on that research, because if you could modulate the vagus nerve (and Todd, as you know, you've done a lot of research in this area), you might be able to help with digestion, and you might imagine almost digital interventions in the moment. So, you know, some people take a beta blocker before a presentation like this, and what if you just had digital vagus nerve stimulation to calm the nervous system, or things like that? So I think there's a lot of very interesting research in ear modulation. I do think we have to be careful. Diane, you mentioned people going on Amazon and hooking up nine-volt batteries. I really hope that people aren't doing that, because if you just excite a bunch of neurons in the frontal lobe, who knows, maybe you're just agitating it, and the brain kicks into gear to repair things that were done, and it feels like you get this temporary boost in focus. So I think all modulation needs to be very thoughtfully done, and I'll look to the academics, like I did for ear EEG, to see well-published results in peer-reviewed journals before NextSense goes too far in that space. JB, it's a treatment for epilepsy already. Not in the ear though, right? Vagal nerve stimulation, though. Yeah, direct vagal nerve stimulation, correct, yeah. Wow, cool, very cool. Well, I think I'm gonna turn it back to Andrew, but I just wanna say thank you so much for this session. I'm just fascinated by this.
I'm gonna have a lot of remedial reading to do after this revelation of some of these studies. I'm fascinated by it, and thank you so much for sharing it all with us. Andrew, Kavita, I will turn it back over and let you tell us what's gonna happen next. Yes, and I'll echo the thanks, Diane, not just to Alan and JB but to you and Todd and the rest of our standing committee as well. There is a closing slide that I have today, just as a reminder for everyone tuning in that, as I said at the front end, a proceedings-in-brief document will be published in the next handful of months that will outline and chronicle the discussions that took place, not just today but in the sessions last week as well. You are welcome to follow along with the standing committee; our website is listed there. And if you're interested in being pinged, not just when those proceedings come out but for other activities that come up within the Board on Life Sciences, feel free to visit our website that's listed there and sign up for our listserv, where the notifications for that will come out. With that, thank you all for tuning in today. Thank you, JB and Alan, again for joining us. I'm very much looking forward to continued discussions and conversations around this and plenty of other topics. I'll mention, for our standing committee members: we'll be migrating over to our Zoom call that's within your calendar invitation. But with that, I will close out today, and thank you so much all again for the great conversation.