All right, welcome, everyone. We'll wait a second as everyone joins the room, and then we'll get started. People are still being allowed in. Wait about 30 seconds. Good. Okay, welcome, everyone. We are here to talk about this amazing new book, The Battle for Your Brain. My name is Francis Shen. I'm a faculty member at the Harvard Medical School Center for Bioethics, an affiliated professor at Harvard Law School, and a faculty member at the Center for Law, Brain and Behavior at MGH. In a moment, I'm gonna turn it over to Professor Carmel Shachar, who will introduce Professor Farahany. But before I do that, a few housekeeping items. First, thank you to the Petrie-Flom Center and the HMS Center for Bioethics for co-sponsoring this event, and to staff at both centers for getting the word out and supporting it. Second, this event is being recorded, so if you like what you see, you'll be able to pass it on to your friends afterwards. We'll share the link with everyone who's attending. Third, we invite you to submit your questions for Dr. Farahany. There is a Q&A button that you will see in your Zoom webinar screen; you click that and you can submit a question. Given time, we won't be able to get to all of them, but we will certainly be able to get to some of them. So those are my housekeeping notes. And with that, let me introduce Carmel Shachar, who is currently the Executive Director of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School. And she will soon be taking on the new role of Assistant Clinical Professor of Law and Faculty Director of the Health Law and Policy Clinic at Harvard Law School. So congratulations on that new role. Thanks for being here, thank you for the Petrie-Flom co-sponsorship, and let me hand it over to you.

Thanks, Francis. I am really delighted to have the privilege to introduce Professor Nita Farahany. She is the Robinson O. Everett Distinguished Professor of Law and Philosophy at Duke Law School, as well as the Founding Director of Duke Science and Society, the Faculty Chair of the Duke MA in Bioethics and Science Policy, and the Principal Investigator of SLAP Lab, which is quite possibly the best acronym. We are very privileged at the Petrie-Flom Center to work with Nita because she is also co-editor-in-chief and co-founder of the Journal of Law and the Biosciences, in addition to the many other groups she contributes her time to, including being on the Board of Advisors for Scientific American. Nita is somebody whom I find really admirable as a bioethicist and a health lawyer. I will say one short story and then move on so that you can actually hear from her. We were lucky enough to co-edit a volume on consumer genetic technology with Nita several years ago, and at the writers' dinner for the conference that led into the book, we were all discussing consumer genetic technologies, such as 23andMe, but we were all discussing it in a very bloodless way: like, oh, somebody could use 23andMe, but not I; I, of course, as a bioethicist, am a robot with no DNA and I'm not interested in it at all. And Nita, I think, was the first person from the group to really talk about it as: do I wanna do this? And actually, this is amazing that we can peer into our DNA, and this is a really great thing.
And I think her just discussing it in such a lovely, candid, and open way shifted the conversation and made me comfortable saying: yeah, I actually have used 23andMe, and it taught me some really valuable things about my genetics that I'm really happy for. And so when I heard that Nita was writing a book on ethics and neuroscience, I got very excited as a bioethicist married to a neuroscientist, because, as I've told her, I now have an easy gift for our wedding anniversary coming up. But I think even if I were not married to a neuroscientist, this book would be one of my top choices for my summer reading. I think this conversation is going to be fantastic. And Francis, thank you for pulling this event together. And Nita, thank you for giving us your time.

Thanks for those lovely comments, Carmel. It's such a delight to join you guys today.

Yeah, this is wonderful. Thanks, Carmel. So I'm gonna get us started. First of all, I encourage everyone to read the book. I echo everything Carmel said, whether it's summer reading or anniversary presents or for your students. It's a really wonderful book. As I was telling Professor Farahany before we got on, I was really struck by how personal the book is. If you read it, you'll learn of Nita's family, her children, her mother, conversations with her father about her mother's health, about Nita's own brain challenges and how she's tackled them. So thank you for writing a book that is both intellectual and intimate. The core thesis of the book, at least as I take it, is that we're at this crossroads with neurotechnology. There's one version of the future that's really awesome; there's another version that's really bad; and we have some tough choices to make. And the core of the emphasis in the book, I think, is that one of the things we need to do is put in place legal protections for our cognitive liberty at the individual level. So I'd like to ask you maybe to help us and our audience think about some context for this argument and for the book. And Nita, you've got talent across all these different areas, you know so much. When you sat down and just kind of thought about what you're gonna write about, with so many different options and so many ways that neuroscience is affecting society, how did you come to the decision to write this book at this time?

Oh, that's a big question, Francis. So first, I would say it seems like the book is well timed in the moment that people are incredibly anxious about what's happening with AI. And the book is in part that, right? It's in part a book about the extraordinary advances in AI that now enable the decoding of the human brain as well. But that's luck in having published the book in the moment at which a lot of people are anxious about it already. I've been thinking about these issues for a very long time, right? For more than a decade, I've been writing about the different ways in which neuroscience and neurotechnology are used in the legal system and give us a lens by which we can understand some of our preexisting commitments within law. But in each of those earlier endeavors, what I discovered was that there was really kind of a missing hole. There was a way in which we haven't been thinking about the ability to actually peer into, manipulate, and change the human brain much more directly than we can otherwise. And in recognizing that hole, I started writing early on about the role of cognitive liberty in our lives.
And by cognitive liberty, I really mean the right to self-determination over our brains and mental experiences, as a right both to access and use technologies, but also a right from interference with our mental privacy and freedom of thought. And this book was, for me, very much an exploration of the emerging technologies that enable that access and decoding, the ways in which that will become widespread and is already becoming widespread, and fleshing out the legal and normative case for cognitive liberty. And the process of writing the book was my own journey. And that's a lot of the reason why there's so much that's personal in the book. It was my personal journey in both understanding and grappling with these technologies, but also in developing this bigger framework of cognitive liberty. And so it just came very naturally to inject myself into it, because I was part of the story, right? Whether it was watching a movie with my daughter and puzzling over, like, oh my gosh, the parallel between the science fiction and the realities, and just writing about that experience to help people kind of get entrance into it. And I'll say this, which is, to me, it's a very natural evolution of technology: people are very accustomed to sensors in their smartwatches and in their rings that track their heartbeats and footsteps and breaths and body temperatures. And so the idea that brain sensors can now detect and decode our brain activity is a natural progression of where technology and the quantified-self movement are. But it was only when I started to hear from technologists themselves about moving from what had been a category of technology that had really been niche products with very limited applications to embedding brain sensors into our everyday devices, like earbuds and headphones and watches and wearable tattoos, and how that not only will decode and give us access to our brain activity but enable us to have a new way of interfacing with technology, that I really understood how urgent the conversation was: this was about to be a transformational technology. And unlike AI, where everybody is talking about it, nobody seems to be talking about this transformational technology that's happening. So that made it incredibly urgent for me, finally, with all of these ideas percolating for a very long time, to write this book and to define this category of cognitive liberty in law and in our lives.

I am not hearing you. I pressed the wrong button, that was me. That was great. And that leads to a couple more questions. So I think at least some of the attendees today are people who work, or might wanna work, in this area, thinking about: how do you anticipate the future and then account for it? And I was wondering if you could talk a little bit about that. You mentioned the word urgency, and that certainly comes across in the book. And you mentioned, and through some stories illustrated, at least my sense was, that you thought this might happen, and then you see a certain presentation or you have a couple of conversations and you walk away thinking: this is actually happening already, now. But how do you, as someone who is working on things that haven't happened yet, how do you get the timing right? That is, you definitely don't wanna wait till it's too late. On the other hand, I think of the Year 2000 hullabaloo, where we got all ready for this thing that never happened, right? It was always the next big thing, and it didn't happen. So how do you do that?
Like, get ready for something, and in particular think about making laws for a world that sort of exists but doesn't quite yet exist.

So a lot of us have read Shoshana Zuboff's book The Age of Surveillance Capitalism, and I think it's a tremendous book, and it's also a tremendous book about a phenomenon that has already happened, right? Which is that our personal data has already been commodified. We have become the product for companies. The business model has been built around the commodification of data and, in many cases, the misuse of data. And that doesn't have to be the story with neurotechnology, right? It can be a different story that we can write. But you can't write that story decades in advance, and there isn't the political will or the interest in writing it decades in advance, but there is a moment to get it right, and that moment, I believe, is this one. And here's why. So first, you've read the book, Francis, so you know very little of what I've described is about a future, right? A lot of what I'm describing is about what's already here and what is already happening, just not at widespread scale across society, right? I mean, 5,000 companies worldwide are already using SmartCap technology to track employees' fatigue levels. You know, I gave a presentation at the World Economic Forum in Davos about the use of brain wearables in the workplace, and a major global corporation's CEO came up to me afterwards and said, that was such an interesting presentation; you can include us as a use case, because we've already used the technology that you're describing on thousands of employees, and, you know, we have some really interesting data that we'd love to share with you about all of the other metrics that we're measuring beyond attention and fatigue, including, you know, engagement and boredom and a bunch of other aspects of the workplace. When I talk with the technologists in the space, and I did a tremendous number of interviews for the book, both with the technologists and deep dives with the scientists and researchers, they all think we're at the tipping point, right? None of them think this is technology that's decades away. Now, some of the applications are a decade or more away. Like, right now EEG isn't fast enough to be a very reliable controller for interfacing with other technology; there's a latency in how quickly you can turn something on and off or use it in place of a mouse, for example. And so the way that brain sensors will first become widespread is really to track everyday brain activity, from focus and attention to everyday metrics of brain health; as an interface for all of the rest of our technology, those are applications that will come in time. But in kind of getting the moment right, I think getting the moment right for something that is as transformational as neurotechnology, that is as transformational as direct access to be able to alter and detect what's happening in the brain, the moment before is the right time to get the safeguards into place, especially if they're the kinds of safeguards that I'm talking about, which is to have the terms of service be fundamentally different. Commodification of brain data has already begun; it isn't at scale yet.
Brain sensors are already being sold worldwide; it isn't everybody who's using them yet. When it becomes an everyday part of our everyday lives, that's the moment at which you hope that the safeguards are already in place, and so that's why I think now is the right moment to do so. But you have to do a lot of futurism; you can't be writing about science fiction decades in advance if you're writing a book like this. I think what most people have said they found startling about the book is not that I am writing about a future, but that I am writing about so many real-world examples that are already in use today. And that's, I think, what motivates people to recognize the urgency of the conversation.

Yeah, and there are lots of these real-world examples, and it leads to where I wanna go next, which is that throughout the book you really, and I commend you for this, take a balanced approach. That is, at least some of the headlines covering your book, I think, have picked up on the idea that we need to protect the brain, bad things are gonna happen, we need to go back to being yeoman farmers, like an anti-technology movement. It's not your argument at all, as I'm gonna point out. You actually come out, and we'll get to this later, thinking traditionalists are on the wrong side of history with respect to sort of paternalism. But even at the end, you write that with neurotechnology, it's not too late to protect against the same fate for our brains; we stand at a fork in the road where the coming dawn of neurotechnology could change our lives for the better or lead to this dystopian future. But I wanna focus on that better part, because I feel like, especially in, say, the bioethics and neuroethics community, you get more credit for finding all the bad things that are gonna happen and then trying to stop them. But one bad thing is that people couldn't see the benefits here and don't take up the technology. In fact, it could lead to inequities, where those who can afford and understand this stuff start using it while there'd be fear of using it among others; vaccines are the place where, boy, it would have been really good if more people had used them. Could you help us paint a picture of what the better future looks like if we get it right, if we get the laws in place? What does flourishing look like alongside this neurotech that's already here?
Yeah, thanks for pointing that out, Francis. So first I will say it's hard to write a book in this space that really is balanced. And I really believe there's a lot of good here and there are some really frightening aspects of it, and it is about getting it right. And even working with my editor, this was one of the hardest parts. He was like, nuance is not something that is easy to get across, and this kind of nuanced position of, it's good, it's bad, it's both, he's like, it's much easier to sell a book that says, look at the dystopian future that's coming. And some of the headlines have done that, and that's been a little frustrating for me, to be honest, when you have these super dystopian clickbait headlines, especially when I'm writing about clickbait as a bad thing in the book, right? So that's unfortunate. Some of the most well-written pieces, the ones that I've really treasured, have been the ones that have done the nuance, and those I think have advanced the conversation in really meaningful ways. There was a Guardian piece that I thought was phenomenal in this regard, that really set and reset the tone of the conversation that had already started around the book. But on the positive, so why do I take that approach? So first, cognitive liberty as I define it in the book is again the right to and the right from. And the right to, as I see it, is a powerful right to self-determination over our brains and mental experiences, which includes the right to access what's happening in our own brains and the right to change it, whether that's to enhance it or diminish it. And let's just start with access, because that's where some really positive things could occur. We know and track virtually nothing that's happening in our own brains right now, which is really stunning if you think about it, by comparison to, like, tracking our heartbeats and our footsteps. People can tell you their cholesterol levels, their blood pressure, but they only have their internal software to access what's happening in their own brains, even though most of us identify our sense of self most closely with our own brains. And so from the basics, like little things which are big things for many of us: when do you focus the best? Do you focus best in the morning? Do you focus best in the afternoon? Do you really work better from home? What is the switching cost for you between going from platform to platform when you're on social media and then you go back into writing; how much time does that actually cost you? What does the arc of cognitive decline look like over time? If you're an epileptic, can you detect epileptic seizures minutes to up to an hour before they occur? Tracking basic brain metrics, everything from your focus and attention and boredom and fatigue to your biases, like actually being able to detect your own biases, and not just through implicit association tests but through actual targeting and identification of your own biases. There's a lot we can learn through the process of self-discovery about ourselves, about our brains, about our brain health. There's a lot we can do to enhance and to even ease our own suffering, and I talk about some very personal experiences of using, for example, neurofeedback in my own life, drugs and devices that I've used, and my own journey with chronic migraines and trying absolutely everything under the sun within the realm of neurotechnology to do so. So, yes, it's very scary to make your brain transparent to others.
There are very significant downsides to doing so. But the reason we're gonna go into this new era isn't just so that governments and corporations can hack into our brains. The reason we're gonna go into this new era is because it can be transformational for our health, for our wellbeing, and even for what it means to be human. I mean, brain-to-brain communication and brain-to-text communication: there's so much that may happen in this coming age with neurotechnology that will transform humanity in ways that will be positive, that I think we will willingly go into this era, and I want us, if we do so, to do so recognizing how to do it in ways that will be beneficial rather than harmful to society.

Yeah, well, I appreciate you saying that, and I'm glad, it's interesting to hear that you had that conversation with your editor, because it did come through that you do see this balance. And I understand you've gotta sell the book and it's gotta be positioned a certain way, but I commend you for doing that balancing. I wanna contrast the benefits version with the harms version, the sort of dystopia that you also raise, with one thread that I didn't see discussed as much in the book, and that has to do with the intersection with race and scientific racism and racial inequity. As context for this, there's been a group here at Harvard that started to focus on neurotechnology justice, and it's been a lot of fun, and we had some support from the Dana Foundation to do a neurotechnology justice summit in February. We asked the audience, and actually we had eight Dana Fellows, amazing people, we asked them basically the fork-in-the-road question that you ask: is neurotech gonna be co-opted by the rich and the powerful and the mean-spirited, or is there some beneficial role like the one you just mentioned? I gotta say, people were split, with good reason, like, on each side. But one of our fellows, Dr. Jasmine Kwasa, who is at Carnegie Mellon, has done some work, and I wanna raise it here, about the sort of systematic bias of electroencephalography, these devices that, for those who don't know, work by measuring electrical activity through the scalp, and you describe it well in the book. And the problem that Dr. Kwasa and others pointed out is that for almost the entire time these technologies have existed, it was sort of overlooked that they were systematically excluding certain groups because of hair type. And so I wanna ask about who's included in this neurotechnology future and how we address this concern, which is clearly there, because it's shown up almost every time a new technology, whether it's neuro or not, has shown up, but especially in neuroscience, with our history of phrenology and its justification of a racial hierarchy. How do we, and I guess who's the 'we' also, but how do we address this? And are you concerned about it? How does that sort of racial inequity, or concern about racial injustice, play into this neurotechnology future or present?

Yeah, it's a great question. So I hear two different questions within what you asked, Francis. One of them is the validity of the data itself, given the populations that have been included within the datasets that have analyzed electroencephalography and fNIRS. And the second is questions about who benefits from the technology, who's at risk from the technology, what are the kind of distributional concerns. Is that fair, kind of?

Yeah, I think that's right.
Okay, so let's start with the first one, which was with respect to data validity. I mean, interestingly, every brain is a little bit different, and every brain requires calibration when you use these devices. There is some universality, in terms of being able to, like, turn switches on and off with your brain, or concentration, or alpha or beta kinds of brain activity levels, but calibration is actually unique to each person when you use the devices. And so questions about the validity of the data, as to whether or not alpha really represents a particular thing, or gamma or delta brainwaves and frequencies reflect different things: those, I think, have been ecologically validated across a lot of different populations, in terms of, is that really correlated with stress levels, is that really correlated with attention and focus levels? That doesn't seem to vary based on who's included or excluded within the dataset, because it has been replicated widely, across now many different scalps and many different skin colors, and it's also been calibrated against other modalities, and of course against clinical EEG, which has been much broader than the consumer-based EEG datasets. I'll also say that generative AI adds a really interesting dimension to this, because most of the neurotechnologists who I've been talking with have talked about how generative AI will really enable true customization and calibration to your own brain activity. So I think, at least from an analytic or kind of clinical validity standpoint, who's included is anybody who wants to be included, in the kind of absolute sense. So then we need to talk about it from the distributive justice sense, which is who will have access to the technologies. I don't think this is that different from any other conversation, meaning I don't think that there's something unique or special or different about the distributive justice concerns when it comes to neurotechnologies versus other technologies, in that the distributional concerns about the maldistribution of technologies, the maldistribution of benefits across society, the structural problems that plays into, the hierarchies that perpetuates, the informational asymmetries in the workplace, all of that, I think, exists across technologies. And so the reason it's not in this book uniquely is not because I don't think it's an important part of the conversation; it's because, in fashioning cognitive liberty, cognitive liberty exists within other, already existing liberties and interests that are at play. I'm trying to kind of carve out what's unique in this space, and I'm not sure that I see this as a unique issue within the space, but rather a pervasive issue that we have to address across society. And that's gonna be true for AI and the power of AI and the tools and the availability of AI across society. The only way in which I think it starts to become unique is if you think that particular technologies are so transformational as to be critical to human flourishing. And to the extent that we believe that, if we believe, like, AI is crucial and access to it is crucial to human flourishing, more so than other technologies, or neurotechnologies and access to them are crucial to human flourishing, more so than other technologies, then the urgency of addressing the distributional concerns within that technology becomes much more important.
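To make the per-person calibration just described concrete: below is a minimal sketch, not drawn from the talk or the book, of how a consumer device might compute the canonical EEG frequency-band powers and normalize a crude focus metric against an individual's own resting baseline. The band ranges are the conventional delta, theta, alpha, and beta definitions mentioned above; the focus formula and function names are illustrative assumptions, not any vendor's actual algorithm.

```python
# Illustrative sketch only: per-user calibration of an EEG band-power metric.
# Assumes a single-channel EEG trace in microvolts, sampled at fs Hz.
import numpy as np
from scipy.signal import welch

# Conventional frequency bands (Hz), as discussed above.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg: np.ndarray, fs: float) -> dict[str, float]:
    """Mean spectral power in each canonical band, via Welch's method."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2))
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].mean())
            for name, (lo, hi) in BANDS.items()}

def calibrate(resting_eeg: np.ndarray, fs: float) -> dict[str, float]:
    """Record one user's resting-state band powers as a personal baseline."""
    return band_powers(resting_eeg, fs)

def focus_index(eeg: np.ndarray, fs: float, baseline: dict[str, float]) -> float:
    """Hypothetical engagement proxy: beta power relative to alpha plus theta,
    normalized to the individual's baseline so one threshold can apply
    across very different brains and scalps."""
    p = band_powers(eeg, fs)
    raw = p["beta"] / (p["alpha"] + p["theta"])
    base = baseline["beta"] / (baseline["alpha"] + baseline["theta"])
    return raw / base  # ~1.0 at rest; persistently higher may suggest focus
```

The division by the personal baseline is exactly the calibration point made above: absolute band power varies widely across individuals, electrode placements, and scalps, so a usable metric has to be computed relative to each person's own signal.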
And I think you could make that case for neurotech, which is to say that because of its power, and because of the fact that if it really does become the way in which people interface with other technologies, and it substantially decreases the friction and increases the ease, or the enhancement potentials are substantially more powerful with neurotechnologies than in other areas, then it becomes urgent that we address what is a pervasive issue in this context, because of the unique benefits that it would provide to people.

Yeah, that makes sense. Let me ask about another pervasive issue that shows up here, and that has to do with the balance that you want to strike between allowing individuals to flourish and protecting them from harms. And in particular, I want to think about adolescents. You argue that, in general, respecting people's right to self-determination, including their right to enhance or diminish their brains, will further enable human flourishing. And broadly, with respect to giving consumers information, you argue that traditionalists are on the wrong side of history; the view that consumers are too health-illiterate to justify self-access to their brains belittles the average person and denies them the opportunity to become more educated. So I want to think about a hypothetical of 13-year-olds, because they are...

I really came out strongly on that one, didn't I, Francis? There are many where I'm, like, a little bit more hesitant, but on that one, that's a pretty strong statement.

Yeah, no, and I get it. I mean, it makes a lot of sense in a lot of ways, but then I thought, okay, but I'm thinking about it like me, you know, or 45-year-old me. But what about, and yours, is your oldest a little younger? Ours is 11; we've all been through there, but age 13. So I picked 13 because it's PG-13, it's when Apple and others begin to let you do certain things, but you're also, like, five years away from 18, right? So, you know, what about that? It seems to me it's an example, and there are many others, but it's one where there's this tension in trying to follow the sort of path of self-determination. And if I reread this, sort of, many of us think that 13-year-olds are too health-illiterate. I wouldn't...

Yeah, but, I mean, so I would say kids are different, right? I mean, I'm fine with paternalism when it comes to children, because I think that's where paternalism plays its role. You're supposed to parent, and so you make choices for children. Like, I don't think that full self-determination at age 13, to do anything you want, or have any kind of screen time you want, or make any choices you want with respect to the use or misuse of technologies, is within your hands. I don't think you have fully formed judgment to be making those choices yet. It's not about health literacy or illiteracy; it's about the fact that you are still being parented. And parenting, I think, is helping children to develop the capacity for self-determination over time as their judgment and their perspective develop. So if you ask me whether, at 13, I think that a child is too health-illiterate to get direct access to information about their most serious medical conditions or what's happening in their own brains, and whether parents can exercise choices in that instance, I think, yes, absolutely. I'm fine with that.
If you want to talk to me about an adult making those choices, and, like, somebody standing in the place of making a decision about what access to information they have about their own brains, then I think we're on the wrong side of history.

Okay, so I want to ask you about the middle, because I thought that might be your answer, and of course, you know, I thought of this. So there's all this neuroscience on the young adult brain, which you know really well, right? It's arguably the place where the research has been most active. And there are a lot who argue about the 18-year-old, and let's say 18 because 18 is now a magical legal number. It's certainly a place where these technology companies are aware; these are the consumers they're targeting. How do we balance that in the law? That is, could we justify broader protections, ones that would also limit what an 18-year-old could do, roughly on the argument that with these technologies, which are specific to your neurons and the way they're operating and connecting together, we've got to be really careful, because we've learned a lot about how you're not fully wired yet? What do we do with that? I mean, TikTok or something, that's not the best example, but, you know, to the firm who wants to put whatever the latest, greatest neurotech is on the market for 18-year-olds: could we prevent that? Should we wait until they're 25? Or do we let cognitive liberty, you know, protect their right to self-determination even at age 18? And then we'll go to audience questions; this will be my last one.

No, that's a great question, especially given that, you know, all of the neuroscience shows that the brain continues developing all the way through your mid-20s at least. And what I would say is this: we have drawn a rather arbitrary line in law around the age of 18 for a whole bunch of things, right? For voting, for, you know, being able to fight in wars, for, you know, what we determine to be a legal adult. And if we're gonna maintain that line, which it seems like we are, then, as the kind of legal fiction of when adulthood begins, I don't think that we ought to carve this category out uniquely, to say, except for when it comes to information about your own brain, or except when it comes to choices around self-determination with respect to your own brain. And, you know, does that mean that we can't adopt different nudges or informational, you know, kind of campaigns to help a child understand that their brain continues to develop, and that the impact, for example, as we learn about it, of, you know, screen addictions, or of the use of social media, which shapes and reshapes their brains in ways that can be devastating for self-esteem or can make them more likely to suffer from mental health issues, shouldn't be part of our education campaigns? Or that we shouldn't go after companies, for example, who are intentionally trying to addict people to their platforms, recognizing that those have disproportionate effects on children? I think all of that is fair game, right? Meaning, increase information-forcing functions, and try to get companies, or create incentives or regulations around their, you know, continued deployment of manipulative and addictive features that are embedded within technologies and that disproportionately affect the youth.
Whether we should then also take the next step, which is to say you can't have access to your own brain data, or you can't have access to the technologies, until you're beyond 18: I would not favor that, largely because of where we've drawn the line on other kinds of legal adulthood versus non-adulthood. Would I favor changing that line at some point, given what we know about the developing brain? Maybe. You know, I think the more we learn about the developing brain, the more it seems like we really have picked a pretty arbitrary point in time to carve out a bunch of different rights and regulations.

Thank you for that. So we're gonna turn to audience questions now. If you weren't here at the beginning, you're welcome to put your questions in the Q&A. One question was the name of the book: The Battle for Your Brain, available everywhere books are sold. It's great. So one of the questions here is a good one, about opting out. And the question is: what are the options for opting out of neurotechnologies? The context the questioner gives is that sometimes, as with our Fitbits, we can opt out of them, but with many of our phone's data collections, we tacitly accept the use agreement and it's difficult to opt out. So how much data will be secretly collected by these neurotechnologies?

So first I should say a lot of data is already being secretly collected, not through brain sensors, right? But, you know, much of the profiles that are being created about people through their digital activities are really designed to try to infer brains and mental experiences, right? People are trying to figure out what the emotional and kind of cognitive landscape of individuals is, and then to use that for micro-targeting of advertisements, or to develop profiles and everything else about them. The extent to which you can opt out of a digital life is the extent to which you can opt out of all of that information being collected about you, which is to say you don't really have opt-out options. Brain sensors, for the foreseeable future, I think will be opt-outable. That is, you can buy, you know, AirPods that will have brain sensors and you can buy AirPods that will not have brain sensors, and maybe not AirPods in particular, but earbuds, right? So you might choose to go with the company that does or doesn't have brain sensors in its earbuds, or you might choose headphones that do or don't have brain sensors in them, or the watch that does or doesn't. When it will become much more difficult to opt in and opt out is when this becomes the way in which we interface with all of the rest of our technology. And that's a future that many of these companies are really focused on building. So Meta plans to launch, in 2025, its first watch for neural interface with its AR glasses. Now, that's a timeframe that has been pushed a couple of times; originally it was 2022, and then 2023, and now it's early 2025, so we'll see. But that's their current estimate: the first iteration of that neural interface will arrive in early 2025. By the end of the decade, I think, we can expect that it'll be increasingly more difficult to interface with virtual reality and augmented reality without sensors as the way in which you interface with those technologies. And so, whether it's AR, VR, or, like, I heard a presentation at TED last week around Humane, which is screenless technology, your AI that senses your everyday activity.
I think increasingly all of that will be integrated with sensors that detect brain and other biometric activity, which will become very hard to opt out of; to the extent that you wanna have a digital life, that'll be harder and harder. Which is why I think it's so important that we really get out ahead of that moment of ubiquity, to give people that option to be able to opt out. And I'll just say, chapter two of the book focuses on your brain at work. And I think that's gonna become a place where it'll become harder and harder to opt out as well, to the extent that this is integrated into the everyday workplace for cognitive ergonomics, that is, the workplace adapting to the brain, or for self-tracking of focus, or tracking that is not self-tracking but mandated by employers.

Thanks for that. I wanna combine a couple of questions. In the final chapter, towards the end of the book, you call for a universal human right as part of the solution. Two of our questioners are raising other legal avenues, which could be complementary or substitutes. One, from Adam Steiner, asks about data ownership; some, as you know, have argued on both sides of whether property law and ownership of brain data might be one option. Danielle Torino, who also loves your book, asks what your suggestions are around writing data use agreements or informed consent, almost a contractual mechanism to deal with this. So could you talk a little bit about the different legal avenues that might be available and how you see them working independently or together?

Yeah, so I'm advocating for cognitive liberty as a universal human right, but it's not the only approach, right? And I'm not the only person who's either writing about cognitive liberty or kind of developing the normative frameworks and theories around it. This is my take on how we ought to recognize it, how we ought to integrate it, and what the normative case for it ought to be. The reason I locate it within international human rights law is because, first, I think it's an international, global right that we need to recognize, which I think has significant legal power when recognized internationally, but also has significant norm potential as well. So setting it as both a legal and a societal norm, recognizing the importance of cognitive liberty, I think is foundational. And the mechanism by which I suggest we do that is by recognizing that it has three different components, which exist and are reflected within international human rights law. That is: to update privacy to include mental privacy, and I describe which aspects of our cognitive and affective experiences would be covered by that; to update freedom of thought to go beyond religious freedom and belief and to include freedom from interception, manipulation, and punishment of our thoughts, which builds on the work of Dr. Ahmed Shaheed, who previously held the Special Rapporteur mandate and reported on freedom of thought; and, last, self-determination, which is the kind of positive aspect of it. But to your question specifically about other mechanisms by which that would be implemented: I see it both as an international right and as then requiring national legislation and context-specific legislation.
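Before the context-specific discussion that follows, here is one way to make the contractual mechanism raised in Danielle Torino's question concrete: a hypothetical sketch of a machine-readable consent record that a brain-data use agreement could attach to a user's data, with every form of sharing off by default. The field names and purpose labels are illustrative assumptions, not an existing schema from any company mentioned in this conversation.

```python
# Hypothetical sketch: a consent record a brain-data use agreement might
# attach to a user's data before any sharing. Illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BrainDataConsent:
    user_id: str
    raw_signal_sharing: bool = False       # raw EEG stays on-device by default
    derived_metrics_sharing: bool = False  # e.g., focus or fatigue scores
    research_aggregation: bool = False     # de-identified pooling for science
    commercial_use: bool = False           # advertising, profiling, resale
    revocable: bool = True                 # user can withdraw at any time
    expires: Optional[str] = None          # ISO date, or None for open-ended

def may_share(consent: BrainDataConsent, purpose: str) -> bool:
    """Gate every export of brain data on the user's recorded choices,
    defaulting to 'no' for any purpose the user never opted into."""
    permissions = {
        "raw": consent.raw_signal_sharing,
        "metrics": consent.derived_metrics_sharing,
        "research": consent.research_aggregation,
        "commercial": consent.commercial_use,
    }
    return permissions.get(purpose, False)

# Example: a user who opts into research aggregation only.
consent = BrainDataConsent(user_id="u123", research_aggregation=True)
assert may_share(consent, "research") and not may_share(consent, "commercial")
```

The design choice doing the work here is the default: any purpose the user never affirmatively opted into resolves to no sharing, roughly the inversion of today's accept-everything terms of service discussed earlier.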
So context-specific could include things like employment law, which needs to specifically address the collection of biometric data, and particularly brain biometric data, within the workplace: what that means, what rights individuals have, and when mental privacy might yield, for example, to societal interests in certain contexts. The other aspect of it is things like data use and data agreements. And so the GDPR addresses some aspects of data use, and there are different laws within the United States, in different jurisdictions, that address biometric collection and use. And the question is whether we treat this as being part of those bigger biometric laws, whether we carve out something as uniquely sensitive with respect to brain data, or whether we focus instead on the inferences that can be drawn from different kinds of data to get at brains and mental experiences. I favor this last category as the way to think about it, which is the inferences that can be drawn. But I also think that there's something unique about directly accessing brain activity and the chilling effect that that has on people, and so I talk about that as one of the aspects we need to think about. And I'll just put this on the table: I was talking with the founder of OpenBCI last week, and we were talking about what data use agreements companies could adopt might look like, which would give people personal control over their data but still enable them to share with researchers and scientists for the aggregation of data, for being able to discover a lot more about our brains and to be able to solve many of the issues that arise from neurological disease and suffering. And I think a lot of people within the neurotech industry are exploring that option, which is to figure out what we could do to give people kind of personal control over brain data, whether it's through Web3 or blockchain or other kinds of options, that they could then share through interoperability and standards that might be set, to enable those additional insights.

Yeah, I think, I mean, my view is that a combination, as you say, of these things is going to be needed, and of course country-specific, I mean, culture-specific, and the like. Speaking of different cultures, there are two questions here that I think I'm going to pair together. The first question, which comes from Anushka Dadlani, asks: could you talk about how advances in neurotech could impact or benefit advances in other scientific fields, specifically anti-aging and human longevity? And there was an additional question from an anonymous attendee, which asked: have you run these, Nita, have you run these ideas past older adults? And I'm curious about both of them. Like, they're not the exact same question, but they're kind of in the same bucket. I asked you the question about the developing brain, so let's look at...

Wait, so what was the first question? The second question is around senior citizens, but...

Yeah, and the first question here is about the implications of neurotech for, really, anti-aging and human longevity.

Right, right, right, yeah. So that stuff's interesting, right? In the last or second-to-last chapter of the book, Beyond Human, I do talk about some of the possibilities of human longevity, whether that's through things like brain uploading that people have described, or some of the really innovative work, like Nenad Sestan and his team's work on BrainEx, to be able to figure out if there are aspects of the human brain, and damage that occurs over time, that could be reversed.
And even questions about different aspects of the brain where we start to see deterioration over time, whether it's through neurodegenerative diseases or things like glutathione and its depletion over time, and whether, the more we learn about the brain, and the more brain sensors, for example, are used in everyday life, we might not gain insights that would enable us to address a lot of these questions with respect to human longevity: the causes, potential treatments, and the reversibility of it. I have talked to seniors, and it's interesting, because I think the most common response I get from them is: thank goodness I will not be alive to see some of these changes. And so, I think people quickly see the dystopian side of it, and it's harder for people to imagine that the promise really will outweigh the rest. And that is a possibility, right? I mean, it is possible that it will become so dystopian that it is hard for us to actually achieve a vision of the world where we've realized the benefits of it. I'm hopeful that we can get out ahead of it so that that's not the case. I'm hopeful that the benefits to brain health and wellness, and to addressing and understanding the burden and toll of neurological disease and suffering, that we could discover through this technology are promising enough that we can realize that future. But that's the most common response: thank goodness I will not be alive for it.

Well, you may be. I mean, you consider yourself, are you a futurist? I've seen that in your bio sometimes. You are, yeah. So, I mean, with radical life extension, you might be.

Well, I mean, not in that way. I mean, I'm a futurist, and I sort of realized finally that that's what I am, which is: the first thing that I did when I graduated from college is I worked at a strategy consulting company. And for three years, what they taught me were tools of forecasting, from trend analysis to mathematical modeling, to being able to estimate and understand the size of industries and the direction of a field and how to track the different metrics of progress in it. And I've been applying that throughout my career to technological development and scientific development, across genomics and AI and neurotech. And I finally realized: oh, that's futurism, that kind of, you know, methodological and analytical forecasting of where things are going. It's just that I combine that futurism with trying to then understand the legal and ethical implications and how we might get out ahead of it. But that's what I'm really drawn to, the directionality of change in technologies, and forecasting, more so than, you know, technologies that are already here and spending a lot of time thinking about those. I think there's kind of a place for each of us in that landscape. And mine is, I like thinking about where things are going and trying to help assess it.

If I could pick up on that thread, a couple of questions from students have basically asked some version of: okay, so what might careers look like in the new world you're painting? Slash, what I would ask is: how do we train the next generation of, and I don't even know what the fields are exactly and how you would fill that in, to help us take the right fork and the right path as we approach this fork? And maybe you can talk about your own background, which is really interdisciplinary. I mean, you're ostensibly a law professor, but not like most law professors; you're doing all these amazing things.
You run a lab; you know, you run all of Duke Science and Society. So maybe there's a part-bio question in there too, but it just feels like, for these new challenges, we need to rethink a little bit how we're preparing our next generation. I wonder if you can talk about that, and any concrete advice you'd have for students who are on this webinar now, kind of getting excited about what you're saying and wanting to do something.

So I'd say you and I are both weird in that regard, Francis, in our kind of careers. That's why they glued us together here. Great. So I think this is one that we could take together, talking about different ways to be able to shape our career paths, because, you know, we both, I think, have developed labs and work with different government entities, and, you know, I work with companies, and I suspect you do as well, kind of helping to advise them about the ethical pathway forward. And also, I'm a law professor who runs and founded Duke Science and Society. I think there are a lot of different ways to come at the problem, and I'd be interested in your reflections on the same. I've always been interested in the intersection between the legal, the kind of normative, and the science and the technology. And so that's really my background, scientific, legal, and philosophical, bringing those different disciplines together and then looking at the issues at the intersection of them. But, you know, there are a huge number of questions to be asked from the technological side, from the privacy scholar side, from the cybersecurity side, from the, you know, kind of sociology of all of this, from cultural anthropology, from history and being able to use history to inform the future of it. I think there isn't one right pathway forward. It's a uniquely interdisciplinary space where a lot of different voices need to come together, you know, to help understand the clinical side, the physician side, the patient advocacy side, the democratic deliberation and policymaker side. So I think it's finding your passion. And for me, my passion was trying to address the intersection of these questions in a particular way, and that's what drove me to the degrees that I got and the opportunities that I've taken on. But it's figuring out what drives you, what questions drive you, what impact you wanna have, what kind of contribution you wanna make as a result, and from what angle you wanna do it. But, I don't know, you also are weird like me. So, like, what do you think of that?

I concur with everything you said. I would, again, really point to something I mentioned at the outset, that your book is not, I mean, you're a futurist; you're not attacking the technology as a bad thing. And you have multiple examples where, through conversations, you're working with these companies, you know, and helping them, I think, to see things that maybe they hadn't. There's one exchange, right, where you basically say, well, what about these things, and they'd never thought of that, right? And so you're helping them to think about these things. And I think going into it in a collaborative tone is just so, so important. But collaborative while you're also standing up for what you would want us to have as a universal human right.
So we might have time for a couple more audience questions, but I want to get one more in, and you knew I'd ask about the criminal justice system, I'm sure. So I want to read, this is from the end of the chapter on the brain as the last fortress, where you argue that mental privacy is a critical aspect of cognitive liberty, but like all privacy interests, it is not absolute. People can and should have the right to give others access to their brain activity. There will be times when we want to do so, to promote research or in exchange for goods and services. And here's what I want to get to: there will be times that society will demand tracking of our brain activity, when the lives of others are at risk. So I see this playing out already, actually, in the criminal justice system. Some people have called it digital prisons; others, probably me, see it as a much more humane future than locking people up. But I'm wondering what you think about how we balance this right to cognitive liberty for someone who has done some act of interpersonal violence, who has some history of serious mental illness, who's now being released back into society. And we have the opportunity, for the first time ever, not to rely on, like, check-ins with the parole officer every two weeks, but to get 24/7 data on this individual; we might be able to see the volcano before it erupts. But of course that sounds, is that dystopian? Is that utopian? Like, how do you think about this balancing of rights, specifically in the criminal justice domain?

Yeah, it's both, right? It's dystopian and utopian. And, you know, it's always important with emerging tech to look at it compared to the status quo, right? And I think a lot of times people look at technology in a vacuum rather than against what we're already doing, right? I mean, automated vehicles, you know, kind of self-driving vehicles, are a good comparison to that, which is, like, yeah, they sometimes get into accidents, and so do humans. And so then the question is, you know, are they worse? Are they better? They're better than the status quo. And, you know, right now we do really dystopian things to prisoners: solitary confinement, and prison and prison conditions, and people who are mentally ill not getting any of the resources and help that they need, and people who are, you know, deeply addicted to drugs and other, you know, vices, or children who are being sent away for life. And all of that, to me, is horrific in the way in which we actually currently approach justice. And so the question is, can neurotech help? And in some ways it can. It can both help us understand what we're doing, right? The research itself can help us understand: what is the impact of solitary confinement? What is the impact of prison? What is the impact of the different choices that we're making? And then there's this kind of idea of tracking, right? Whether it's ankle bracelets or, you know, other forms of brain-sensing tracking, or even moral rehabilitation: could that be done well? It could be. Could it be done in horrific ways that overwrite people's brains, override their autonomy, violate their cognitive liberty, and become deeply problematic? Yes. You do lose some liberty rights when you commit a crime. And society, I think, has a right to deprive you of some aspects of your liberty when you have committed a crime, especially a violent crime against another person.
And since cognitive liberty is not an absolute right but a relative right, we're going to have to figure out where to strike that balance: between the liberty, you know, that you have given up as a result of committing a crime, versus how much we're justified in taking from you, and how intrusive the mechanisms we imagine in digital prisons or otherwise are. We're gonna have to figure out both what the technology can do and when it's justified. But sometimes, by comparison to the status quo, it may be better than what we're currently doing.

Thank you for that. Time for one more audience question; I'll ask this question from Federico Fernandez-Kepka, and I think it reflects probably some questions that others have. So the question is: do you think that accepting these neurotechnologies would create an issue of over-medicalizing the brain, and begin a path to create an ideal brain, or incentivize a certain type of brain activity and stigmatize other types of brain activity? Which, I guess, is another way of asking about neurodiversity as well. And if we can't all agree on what the good life is, how can we all agree on what the good brain is? How do you approach, how do you answer that question?

Yeah, there is a risk of becoming deeply reductionistic about the brain. And I talk, I don't remember in which chapter, but I think it's the chapter on mental manipulation, about the seductive allure of neuroscience and kind of reducing people to their brains and their puzzle pieces. I wrote an op-ed recently for the LA Times about the use of cognitive and personality testing in the workplace and how we've already started to try to reduce people to their cognitive and affective functionings, and how that's discriminating against people with neuroatypical brains, even though that may have nothing to do with who makes a good employee or what a good fit is. There's definitely a risk that we go down that path, the more we understand about the brain, of reducing it to bits and bytes rather than taking into account the whole person. There's also a possibility on the other side, which is that we all discover that we're on a spectrum of neuroatypicality, and that the kind of idea most of us have about our own brains and mental experiences, or our lack of biases, or our neurotypicality, turns out not to be accurate, and that with better data, we start to recognize the broader diversity of human brains and mental experiences and cognitive and affective functions. So it's a bit of an empirical question, I think, what we find, and it's a bit of a normative question, what we choose to do with it. Both of those things, I think, will play out over time, and we just have to be attentive to the risk of the kind of reductionist tendencies that we have when it comes to explanations about the brain and behavior.

Yeah, that makes sense. In our last minute and a half or so, maybe you'll want to not answer this, but let's say we're at the crossroads. I read the book as, I don't know, 51-49, thinking that we're going to go the right way; my reading of the book was that Professor Farahany thinks that we can achieve this brighter future, and that the book is kind of a call to arms so that we actually will. You know, I referenced the summit that we had; there was some really deep pessimism there, that we're not at the fork in the road, that the fork was, like, 50 or 100 years back. Like, this is just, the story ain't changing. Did I read that right?
Are you ultimately optimistic here, or are you really agnostic, and it's gonna depend on, collectively, whatever the 'we' is, what we do? Maybe you can end on that note of where this is all headed.

I am more optimistic now than even when I wrote and finished the book. And that's because of the collective awakening, I think, that's happening in the face of the rapid changes with AI, and the fact that you see so many people in a younger generation, for example, choosing to unplug from devices and read books. And, you know, I see a kind of pendulum swinging in the opposite direction. I think there is a very dystopian path, and so for you to say 51-49 is probably right. Like, it's not like I'm saying this is all gonna turn out great if we just make some good choices right now, it's all gonna be fine, right? I think there is a very Orwellian vision of this future that's ahead of us if we don't make some radical changes now. And, you know, I am optimistic in this moment that those kinds of radical changes are possible, and that we have the political will and we have the collective consciousness to do so. But if we don't make those choices, you know, it won't be good.

Oh, that's a great way to end. Professor Farahany, thank you for joining us. The book, The Battle for Your Brain, available everywhere. @NitaFarahany on Twitter. And thank you, Nita, for being here. This was a lot of fun. We learned a lot, and we'll be in touch.

Thanks for the rich conversation, and thanks, everybody, for joining.

Thanks, everyone. Take care. Bye-bye. I'm gonna end the webinar.