I've known Adam for probably 10, 12 years at this point. We've been loosely working together across areas of science and technology acceleration. We bonded early on in realizing that the way science was proceeding, and the way that science was getting translated into devices and technologies and eventually into products and mass-market adoption, was completely and utterly broken and going extremely slowly. There are some extremely difficult incentive questions there, both within the academic environment, where academic credit and reputation are awarded for, say, conceptual results, but not really for getting those results close to development or any kind of production; and separately on the market side, where there's an almost long-term-irrational, borderline crazy (or actually maybe definitely crazy) way of valuing things, in which certain developments critical for humanity are totally uninvested in, or totally undervalued relative to really glitzy, shiny short-term things. That leads to a massive macro misallocation of resources, where the macro environment funds large-scale short-term things instead of extremely valuable long-term-oriented projects. So Adam and I have, over many years, been working in many different areas, helping many different groups and projects. I've been thoroughly amazed to see Adam's work come to fruition in so many different fields: at the heart of everything in neurotech, everything related to BCI, but also neuroimaging and many other areas at the forefront of computing. You can see Adam's work touching and helping so many teams out there. And one of the things that I admire greatly about Adam is that he's been supporting and connecting many groups for a decade and a half now.
And many groups either exist because Adam connected the people who then started them, or they're supported because Adam helped them find support. This is now kicking off to a whole new scale of success with the recent invention of the new model of FROs, Focused Research Organizations, which we'll hear a little bit about. That's working and scaling, which is super exciting to hear and see. And I really think that if we end up producing solid neurotech in the next decade or two, it will be in great part thanks to you, Adam, and all the work you've been chipping away at for a couple of decades. So thank you so much for the work that you do, and thank you for joining us and speaking with us tonight. I have a bunch of questions around BCI, the field, road mapping, FROs, and how you're thinking this might develop and what we should be focusing on now. But yeah, thank you so much for joining us.

Thank you so much for having me. Yeah, super happy to be here.

Great, so let's dive in. Early on, we aligned on the concept of road mapping and tech trees: helping humanity think ahead about a field, what the roadblocks in that field are, how to identify them, and so on. How has your thinking evolved on this? Is it still really important, or do you see it getting better? Maybe start by helping us understand what you mean by road maps and tech trees.

Yeah, it's a slightly complicated set of things, because in some ways the idea of a road map, and, when we come back to it, the idea of FROs, these are very obvious things. If aliens came and started doing science on Earth, these would just be so obvious. And there have been periods of time and fields, aerospace in the 1960s for example, where this idea of road mapping was completely obvious to everyone. So there's not really any rocket science, I guess, to the idea of road mapping.
But one thing that is worth pointing out, and that I'll come back to, is that the way science is structured is not necessarily structured to optimize for any given long-range outcome, like "how do we achieve a brain-computer interface that's really scalable?" It's optimized as a self-organizing system of various actors, and within that system certain activities are more or less incentivized. In particular, in the company setting you may have roadmaps at some level within one company, around what a corporation needs to do to develop products and capture value, but less so a field-wide dependency graph of the different pieces of technology, and of science as well, that are needed to unlock some goal. So if you think about working backwards from some goal, looking at all possible technological pathways: what are the scientific risk points? What are the right ways to address each of those? How do they depend on each other? That's the idea of a roadmap. Very much related is the idea of a tech tree: there are certain technologies that would unlock many other capabilities. There are very, very root-level capabilities; lithography, for computer chips, unlocks everything we're talking about here, and you can think about what it was that unlocked lithography, and so on. But in the industry setting, you go and pitch VCs and you have a certain product roadmap, but you don't necessarily have that field-wide roadmap. And in the academic setting, Michael Nielsen has a really great paper on this; I think the idea of roadmaps is a subset of this idea of vision papers or strategy papers. And that type of activity doesn't fit the novelty incentives.
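The "working backwards from a goal" idea can be made concrete with a toy dependency graph. The node names and edges below are illustrative assumptions (not a real field roadmap); the point is that once the dependencies are written down, a standard-library topological sort tells you which root-level capabilities have to be unlocked before the downstream ones.

```python
from graphlib import TopologicalSorter

# A toy "tech tree" for a hypothetical scalable-BCI roadmap.
# Each node maps to the capabilities it depends on (its predecessors).
tech_tree = {
    "scalable BCI":          {"nanotransducers", "noninvasive readout"},
    "nanotransducers":       {"gene delivery"},
    "noninvasive readout":   {"ultrasound chips", "optics through tissue"},
    "gene delivery":         set(),
    "ultrasound chips":      set(),
    "optics through tissue": set(),
}

# A topological order lists root capabilities (no dependencies) before
# the things that depend on them; the goal comes out last.
order = list(TopologicalSorter(tech_tree).static_order())
print(order)
```

Real roadmapping is of course about finding the right nodes and edges in the first place, but even a sketch like this makes the "root-level unlock" structure (the lithography role) explicit.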
As fields evolve, it's very natural that people get rewarded not just for having some general idea of how things should go and what strategy other people should take, but for making some definite incremental contribution that can be verified immediately by reviewers, who can say, yes, you did this and it advances something in a certain way. So there are certain kinds of papers and activities that are bite-sized enough for an individual person to get credit for, that pay off in a certain time frame, and that are verifiable and reviewable in a way that counts as a research contribution. Making the tech tree is not necessarily a research contribution in that setting, even though it's very important at the system level.

We've talked in the past about how so much of this knowledge exists latent in the minds of scientists and engineers and developers in a bunch of different places. Do you think that's getting better now that the number of papers is scaling basically exponentially, it's getting so difficult to follow fields, and there's so much convergent or simultaneous invention in different places? Is it still the case that a lot of this knowledge is trapped, or do you think it's getting closer to being diffused onto the internet, maybe just needing to be interlinked and organized?

I think these things relate, in the sense that as we unblock people, if you imagine giving someone a fellowship to do road mapping in a certain field (Tom Kalil and other people have started using the term "field strategist" for that road mapping role), then, because there is so much on the internet: the types of things that I've done are wholly reliant on Google Scholar and the internet, the ability to do search, and the ability to rapidly fire off an email so that, you know, the world expert on something can give a critique of something.
So I think in many ways it is getting much more possible, and the types of things we're doing, road mapping BCI technologies, would have been very, very hard to do before. Some of the serendipitous discoveries and merging of fields are very much enabled by that too. I still think we're not taking advantage of it enough, because again it's often done along the way, on the side. You have researchers who are searching Google Scholar, finding things, understanding what's out there. The example Milan gave of optogenetics is also one we've talked about in some of these questions of what digital systems could support this. Optogenetics really relied on understanding that there are certain light-sensitive proteins in microbes; that if you could bring them into neuroscience, you could express them in neurons; that that would potentially be really transformative for neuroscience, for particular reasons, because light would be better than electrodes in certain ways; here is what you would need to do to set up the experiment to test that; here is who you would contact, and what the team would look like that could do that experiment. They stitched together very rapidly how to do that, but it took an extraordinary focus, and many of the people were in a very special configuration: they had special fellowships that let them do that, they had been trained in a special way, and they had a very special orientation. There isn't really something that deliberately tries to optimize the probability of such events happening. It's a very special case to have that kind of cross-disciplinary synthesis, that moving of tools from one discipline to another.
I think there might be ways we could accelerate that with better tools for capturing tacit knowledge, and certainly that's true at the level of things like experimental protocols and the details of experimental reproducibility. Again, the system hasn't had a goal-directed effort to optimize for that; these are tools being used on the side of people's day-to-day jobs.

Can you tell us a bit about how the FROs are going? How has the reception been so far? Is the model working? What's working about the model? What scale are you hitting, and so on?

Yeah. Well, we have three that exist right now, that have been funded, have labs, and are doing operations. Two of those are within our meta-nonprofit incubator structure, Convergent Research, so those have two labs operating. We basically started this over about the past year. We now have a separate nonprofit that is the incubator for these, and we have several more that are passing a review phase and likely to launch in early 2023. Those are still in the biomedical area, in various parts of bio. And I'm very excited. Even just in the last six months, I think we've started to see, at the ideas level, people endogenously picking up this concept and coming to us with things that fit the FRO model. Much as in the brain sciences, there are a lot of these scalable measurement problems, or scalable interfacing, measurement slash control or perturbation problems, that look like engineering. They look like large-team, integrated-systems hardware engineering at scale, not just individualized discovery. Those are needed even to do the very fundamental things in neuroscience, but there's not necessarily a mechanism to unlock that. We're starting to see examples of that.
In climate, we have three climate-related proposals right now that have come in just in the last six months, all around this idea of scalable, integrated measurement of climate variables: what if we could put all the different types of sensing in one platform, so that you can actually have a high-dimensional measurement as opposed to lots of disconnected low-dimensional measurements? So we're starting to see traction at the ideas level. And we're trying to be very stringent about it, right? Anything that can really be a traditional VC-backed company, or even a long-range VC-backed company, and anything that can be done with a traditional granting mechanism to universities or other types of institutions, we're really trying to rigorously avoid. There are a lot of things just in that climate example. Fusion reactors: that's a big, difficult research thing, but it's something where the VC world is now really driving it, so we're not looking at fusion reactors. A fusion reactor is ultimately a kind of private good; as a product, you can capture value from it. But something like the underlying materials science data that supports fusion reactors, or the underlying measurement of the actual dynamics of the climate system, which tells us whether we're going to need different types of interventions: those are things that look more like systems engineering, but in service of public goods. And we've been seeing traction on that with the FROs. Another thing that's been exciting is that the first projects have had significant success in hiring really fantastic scientists. One of the first questions that always comes up is: well, these things don't offer equity in a high-growth company, and they also don't offer the security of the academic path, with the possibility of tenure and so on. And they raise a lot of questions.
You know, the team is going to have to design for themselves what comes next. They have to work with us to put guardrails around that and understand what the best public-benefit path is, but they're basically going to have to design, de novo, a career path: what comes out of the FRO for them, and what comes out of the technology for impact. So a big question about FROs has been: can we hire amazing people? And the answer is yes, we can hire amazing people. So I'm really excited about that. The first two teams have done a really great job recruiting, and we're starting to see both of them doing serious work on the ground.

What's the scale of the teams and the funding and so on, if you can say?

Yeah. The mental model is that an FRO is modeled on something like a Series A biotech startup. It's not meant to be thousands and thousands of people and billions of dollars, at the scale where an entire field has to agree on something. It's meant to be relatively agile teams. But what might surprise you is that even getting three or four or five people in the traditional academic setting to be truly rowing in the same direction, with the kind of specialization of labor and tight integration you'd see in a product development team at a company: you can't do that in other settings. So even a team of 15 or 20 people, which is where I think these first two projects are probably going to asymptote (maybe around 20-something people each over five years), that tight-knit integration is still a totally different method of project management, of how people coordinate, how you hire, how you manage the team.
What that works out to, if you add lab space and equipment, is on the order of five to ten million dollars a year for five years. So roughly 30 to 50 million dollars per project.

And how many such projects do you think a field needs? Something on the order of 30 to 50? Or do you think even 10 projects could make a massive difference?

Yeah, it's a really great question. This again comes back to these FRO-shaped bottlenecks: what are the particular bottlenecks, given that there are so many things you can do as companies, where it will make sense to use different structures? My guess is it's more like three to five FROs per major macro field. In neuroscience, we should have one on human brain interfacing, as you were just talking about with Milan. We should have one, and we do have one, E11 Bio, on structural molecular circuit mapping. We should have one on activity mapping. There are probably a couple of others, but it's something like three to five core data-generating discovery platforms, if you like, that can then spin off lots and lots of things once we understand some science from them and the science is accelerated, and that also accelerate the thousands of researchers working in the more academic structures.

But is that number just coming from the constraints on the funding side? Meaning, hey, you can only afford to fund a certain number of these, so you kind of round-robin it across fields. What if you could organize all of the R&D funding of the U.S., or, I don't know...

I actually don't think you need thousands and thousands of FROs, to be honest. I think we're still far from having five FROs per field, in terms of available funding and the speed at which we can allocate and access that funding. So we definitely need a lot more funding.
But I actually really think we need to be very careful: the FRO shouldn't be everything, right? It is a kind of special leap. And there is something maybe inherently more scalable about funding lots of fellowships and lots of trainees and individual groups in a more distributed fashion. You don't necessarily need everything to be an FRO. There's this small subset of core, engineering-intensive platforms that will make those public goods, accelerate what everybody else is doing, and then enable a lot of for-profit-driven innovation downstream. In the example of neuroscience, there are several core platforms; I think you could add two or three others, but we're not talking about 100 neuroscience projects. And in climate, again, we have something like three core climate measurement problems; maybe we'll have four or five. In many ways there is so much amazing work going on, and the question is just: if there's a structural bottleneck where a particular type of work is missing, then there's a lot of leverage in unlocking that, but it doesn't mean everything needs to be that type of work.

Yeah, let's get into BCI and neurotech. A while back you wrote this awesome paper thinking about the physical limits of imaging, and I'm sure you're thinking that way about current BCIs as well. When you think about approaches, which do you think are the most promising at the moment, the ones that might be able to close some of the gaps between where the current technology is and where the physical limits are, meaning there's room to grow? What approaches are you most excited about?

There's a huge amount of room to grow.
Maybe to convey this a little, I can speculate even a bit more than Milan was willing to about what I think this likely looks like long-term. Some of that is inspired by, but not identical to, the question we asked in the 2013 paper, which was about how you do complete coverage of mouse brain activity. The case of human interfacing is related, though. Basically, the big problem with the mouse brain is that it's a three-dimensional, centimeter-or-so-thick object, and you have to interrogate that object, even though it's opaque, without heating it up and without destroying it. It's real-time sensing of a three-dimensional, light-sensitive and heat-sensitive hunk of material. The human brain problem is somewhat related; the skull is a slightly different thing than the tissue, but basically the constraint is the same: three-dimensional sensing of an opaque biological object. That's basically the question. And the way I think that ultimately goes: I personally think it's very unlikely that it goes by physical wiring being inserted, ultimately. That's a nice thing you can do right now, because we understand how to work with physical wires, but I don't think it scales incredibly well. So I think the optimal, long-term version uses non-invasive forms of radiation: light, sound, magnetic fields, things that can penetrate through tissue. Light doesn't exactly penetrate, because it bounces around and scatters, but it can actually go quite a long distance, just in a randomized way, and there's a question of what you can do with that. So: light, sound, and other non-invasive modalities like magnetics, in combination with each other.
But then the problem is that neurons don't naturally couple to those modalities very well. So you need to add in a kind of transducer; DARPA, in a fairly recent program, called these nanotransducers. That's the thing that couples the neurons to that form of radiation so they can talk to each other, with optogenetics as an example. So I think there are a couple of big areas you would push on, where at the beginning you don't directly make the BCI system that has all those things together. This is the technological road mapping part: you might need to make big technical pushes on some of the components. How do you deliver the radiation? How do you steer the radiation in this scattering material? And then how do you deliver these nanotransducers, which could be done with viruses, with nanoparticles, or with something else, but is essentially gene or cell therapy, gene or cell delivery, to the brain? Initially, the noninvasive modality you would use for some medical application without the nanotransducer might be a big, clunky, MRI-like thing. It's not a portable consumer product that Facebook immediately wants to put in its AR glasses; it's going to use a lot of power and so on. But if you then added the nanotransducer later, that would make it much easier to couple to the detailed information. So the nanotransducer makes it easier, rather than harder, to get the information. But there's this other path, right? You can't necessarily do all of that in one company or one effort. That's why I think there are these FRO-like, moonshotty projects: on the noninvasive radiation, and then on fundamental biological delivery and access with things like genes and cells. Not to mention the connectomics and the brain mapping, and understanding how you target those genes and cells to circuits.
So with all that said, this is why I see it partitioning into a set of FRO-like projects. But each of those things, to some degree, has applications on its own. One of the ones I think is most exciting, where there is a real application, is one that Milan just mentioned in passing: ultrasound. There's a group out of Caltech that's really pushing it, and a few other groups. There are several key things about ultrasound. One is that you can create it and detect it using tiny microchips; these are microelectromechanical systems. You can make a chip that vibrates to emit ultrasound. It doesn't have to be a big device; you can have very miniaturized delivery and sensing of the ultrasound on a chip. Two, ultrasound can naturally both stimulate the brain and do things like blood-brain barrier opening, without any exogenous genes or cells having to be part of the picture. And it can also see brain activity in a way that's probably better than many of the things we're talking about, like near-infrared spectroscopy or fMRI. So the question is basically: how do you make an ultrasound platform for the brain today? The problem there is that sending ultrasound through the skull is difficult. You can do it, but it requires a pretty long wavelength of ultrasound, which means your spatial precision is low. If you can instead put the ultrasound emitter-detector array, the chip, above the brain, so not touching the brain, which is really good for safety, but inside the skull, then, just from a fundamental technology platform perspective, that would be how you could use ultrasound to see the whole brain at intermediate resolution. Eventually you'll have things like ultrasound-sensitive proteins or cells that would be the full interface. But it's really powerful even now.
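The wavelength-versus-precision tradeoff here is easy to put rough numbers on. As a back-of-envelope sketch (the specific frequencies below are illustrative assumptions, not values from the conversation): achievable spatial resolution scales with the wavelength, lambda = c / f, and the speed of sound in soft tissue is roughly 1540 m/s.

```python
# Back-of-envelope: ultrasound wavelength in tissue, lambda = c / f.
C_TISSUE = 1540.0  # m/s, approximate speed of sound in soft tissue

def wavelength_mm(freq_hz: float) -> float:
    """Wavelength in millimetres for a given ultrasound frequency."""
    return C_TISSUE / freq_hz * 1000.0

# Sending ultrasound through the skull pushes you to low frequencies
# (illustrative value), so the wavelength, and hence resolution, is coarse:
print(f"0.5 MHz: ~{wavelength_mm(0.5e6):.1f} mm")   # ~3.1 mm

# A chip under the skull could run at much higher frequency
# (illustrative value), giving sub-millimetre wavelengths:
print(f"10 MHz:  ~{wavelength_mm(10e6):.2f} mm")    # ~0.15 mm
```

That order-of-magnitude gap in wavelength is the quantitative version of why putting the emitter-detector chip inside the skull, but above the brain, is such an attractive middle ground.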
But that's another example where it has certain applications, epilepsy and some others, that are very immediate. If you want to ask how we're going to solve depression or OCD with it, though, you actually need the thing to be in the brain for a while, and you need to do some scientific studies to understand how those conditions work, before you can say: here's a sure shot at exactly how to use ultrasound to stimulate for that. So it's still a discovery-platform kind of play, which again is not the traditional product-market-fit orientation, where medical device companies essentially require you to have product-market fit before you start the company.

Yeah. As you think about maybe five, 10, 15, 20 years out, how do you see the field and the tech, and especially the devices, working at scale, or the problems being solved? How do you see this progressing? How long do you think certain things are going to take?

Well, it does depend on what happens. Some of these ideas, like implantable, semi-invasive ultrasound, have been ideas that people have been mentioning and discussing for five-plus years, six, seven years; I remember discussing them in 2016. And in the current ecosystem, that does seem to have a pretty long time constant of development. I would say that for the last six or seven years there have been a bunch of scientific improvements in the functional ultrasound and focused ultrasound fields, but the biggest bottleneck, honestly, in my opinion, has been the funding and structuring of the institutions, because there were proposals five or six years ago that are not qualitatively different in scope from the ones now. Some of the other areas, like endovascular electronics going in through the bloodstream: again, something a bunch of people were talking about.
There was an NIH grant for that in 2014 or 2015. We now have Synchron, a company that has made progress on this, which is really exciting; they actually have something in patients. This is the thing Milan was mentioning that was originally prototyped in Australia. They've gone a relatively traditional medical device path and are having success with it, which is super exciting. But then you can imagine outfitting that with a huge amount of electronics. That thing could have a camera on it. That thing could be in a research setting, talking to cells and doing all sorts of other things. And that kind of engineering play space, and also the clinical, observational, human-observatory type play space of actually discovering the applications that need devices in the brain in order to even be discovered, and thus driving the markets forward: that has still been pretty slow. Neuralink, I think, is getting close to doing stuff in humans, but again, there's a relatively niche set of applications; it's the same application space you would have been talking about five years ago. So how do you really open that up? My hope is that DARPA-like models and FRO-like models will contribute, and also that maybe you all can come up with clever ways of getting around, as you said, the short-sightedness of how valuations and value capture work. Because ideally, whether you call it an FRO or a DARPA program or whatever, you would have an ecosystem of technology development that feeds into a larger roadmap or vision. And that is a struggle in the current ecosystem. Sometimes you even see very perverse things, like a medical device company makes something much better, and then a much bigger company just eats it and shelves it, which is the exact opposite of what's supposed to happen in terms of the long term.
So there can be quite perverse incentives in the short term. Meanwhile, I see just tremendous progress in the fundamental enablers of this stuff. Optics and genetic engineering and so on are getting much better. And there will be lots of good reasons to solve some of these questions, like viral delivery and cell delivery. There was a great little article online by José Luis Ricón about Alzheimer's, where he pointed out that there are genes that are protective against Alzheimer's, so if you could have CRISPR delivery to a large fraction of cells in the brain, that's one possible way you could do this kind of protective thing against Alzheimer's. I think that will, on its own, actually drive a huge amount of the molecular biology side.

Yeah, and I very much agree that new financial instruments, and new ways of coordinating public goods funding, will really help. And that genetic result is amazing, really exciting.

Yeah, I'm not 100% sure how it will work, but it's an example of the kind of thing: there is now an economic driver to push forward gene delivery into cells and brains.

But all of this still feels like, I don't know, 10, 20, 30 years out. And when you compare it with the AGI timelines people are thinking about and worrying about: the most aggressive of those is five, seven years; the probability mass is in the seven-to-15-year range now, which used to be 25 to 30 years and has shrunk; and maybe the more conservative people in the field are still at 30 to 80 years out. That's a very huge mismatch, right?
So we have this potentially transcendent technology that may be totally misaligned, or somewhat misaligned and problematic; and even if it's not misaligned, even in the tool scenario, an AGI tool could be massively destabilizing for the world. Now, a lot of communities see BCI this way: if you can accelerate BCI and get to some of these important results faster, then you could either get to fully connecting humans with computing ahead of any kind of AGI, so you level the playing field, or at the very least get to an enhancing mode, where you have an exocortex or something like that, where humans are able to use and interconnect with the AI systems. But the timelines here seem way off. So in this multiverse timeline it's looking kind of bleak, compared to what we could have if...

And if neuroscience has any hope of actually contributing positively to the AI safety questions as well, which I think there's at least a chance of, it's also frustrating that it's so far behind in that way, because it is, as Milan pointed out. I mean, it's not necessarily that the human value function scales to superintelligence, right? But basically we have aligned a very complicated learning system along some relatively predictable lines: we're both sitting here having this conversation; I'm not just going off making paper clips or something. So there is something to be learned from the brain in terms of what AI could or should look like, not to mention some of these direct BCI applications. So yeah, it is a bit disturbing. It's not clear to me what's going to happen. I have a pretty wide range of AGI definitions and timelines simultaneously in my mind, but it is...
I mean, I think the one possibility, though, is that to the degree this is really true, it acts as a kind of general accelerant of lots of other things too. It's indirect, but people basically start to realize that we need to get serious. I would say, you know, some of the things that DeepMind has done with protein folding, that kind of thing comes directly out of AI and is very inspirational to what we're doing with FROs. Yeah, if you were thinking about biological research as well, yeah. If you were able to direct the, I don't know, research agenda of DeepMind or OpenAI or Anthropic, these groups, where you're able to say, hey, direct everything towards BCI development, what are the kinds of problems that you would solve with ML? Because, you know, AlphaFold was just this amazing result that sped up chemistry and biology tremendously, right? And so that has saved an enormous amount of time in many areas. How would you direct these projects to orient our use of advanced AI to speed up BCI? Yeah, I mean, so on one hand, the thing I take the most from DeepMind is really organizational innovation, as opposed to really fantastic AI. I mean, it is really fantastic AI too, but it's more the organizational innovation of centralizing that research talent, being able to direct it towards problems, you know, the AlphaFold daily standup kind of meetings as opposed to your typical academic style of working. So a lot of it would be applying that towards things where I don't think that, barring really extraordinary leaps in intelligence, you can get around just gathering certain kinds of data, right? So I don't think that AI can really help us figure out that much about how to cure depression in the brain or something unless we basically have this data access, right? Sort of physical access to data.
So a lot of it would be around the concept of self-driving labs, or getting into closed loops with data generation, where the data quality is high enough that AI can then start to show benefit, maybe potentially doing kind of closed-loop hypothesis generation or scientific studies. You could certainly imagine an AI-in-the-loop, upload-C.-elegans-type project, or for even small organisms, could you do very comprehensive interfacing where you're trying to train a model that recapitulates what a nervous system does? But then that thing is also saying, well, here's the next experiment that you need to do to perturb that system so that we can update my model of the brain. That would certainly be one thing I would push on, much more as kind of modeling brains with AI. But so many of these things are dependent on the data generation. So again, it's this kind of engineering of systems for data generation, where you could reasonably get into a closed loop with AI, less so than the AI algorithm itself, yeah. Yeah, well, there's hope yet. So let's open up for questions from the audience and questions from Twitter. If you're on the live stream, you can use the hashtag PLbreakthroughs and I'll be looking at that and seeing questions come through. I'll start with questions from the audience here. Raise your hand, the mic's coming around. Shy audience, I'll start with one. So maybe, with the imaging projects and BCI further along: how far away do you think we are from some important results in being able to model full brains, right? So I think this has been a long trajectory of research for a lot of groups, being able to build full neural models of C. elegans and so on, and the hope of eventually getting to a mouse and whatnot.
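The closed loop described above, where a model of a nervous system proposes the next perturbation experiment and updates on the result, can be sketched very loosely as a toy active-learning loop. Everything here is illustrative: the "nervous system" is a hidden linear map, the "lab" is a function call, and uncertainty is read off as disagreement within a small model ensemble.

```python
import random

random.seed(0)

# Hidden "true" nervous-system response: a linear map we pretend not to know.
TRUE_W = [0.8, -0.3, 0.5]

def run_experiment(stimulus):
    """Stand-in for the wet lab: measure the response to a perturbation."""
    return sum(w * s for w, s in zip(TRUE_W, stimulus))

def predict(w, stimulus):
    return sum(wi * si for wi, si in zip(w, stimulus))

def disagreement(models, stimulus):
    """Ensemble variance on one stimulus: a crude proxy for model uncertainty."""
    preds = [predict(w, stimulus) for w in models]
    mean = sum(preds) / len(preds)
    return sum((p - mean) ** 2 for p in preds) / len(preds)

# An ensemble of candidate models; where they disagree, we are uncertain.
ensemble = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(8)]
candidates = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]

initial_dis = max(disagreement(ensemble, s) for s in candidates)
for step in range(60):
    # The model side of the loop proposes the most informative experiment...
    stim = max(candidates, key=lambda s: disagreement(ensemble, s))
    y = run_experiment(stim)              # ...the "lab" runs it...
    for w in ensemble:                    # ...and every model updates (LMS rule).
        err = predict(w, stim) - y
        for i in range(3):
            w[i] -= 0.3 * err * stim[i]
final_dis = max(disagreement(ensemble, s) for s in candidates)

print(f"max ensemble disagreement: {initial_dis:.3f} -> {final_dis:.6f}")
```

The point of the sketch is only the shape of the loop: experiment selection driven by model uncertainty, with measurement quality, not the learning algorithm, as the limiting factor.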
How has the progress been in that area, and how far away do you think fully emulating a whole brain is? It depends a lot on what level of emulation you want and what other properties it has to have, because I think you could do a kind of FRO-style project with some very small nervous systems, like C. elegans, in a five-to-ten-year timeframe, maybe less, where you could imagine that the emulation is basically just kind of GPT-3 on neural activity data, if that makes sense. And then the question is how valuable, how useful is that to you? Because it doesn't necessarily tell you how other nervous systems work. It doesn't tell you how the cortex works. It doesn't tell you how you want your BCI to interact with different parts of the brain, or how to cure diseases, and so on. But in terms of just, could you get the throughput and comprehensiveness of data such that you're really capturing the relevant dimensions, and then train a model that just predicts that? I don't think we're necessarily that far from it, but I'm also not sure exactly how valuable it is in the grander scheme. Like, if we could emulate a zebrafish at that level of just kind of GPT-3-esque prediction of zebrafish neural patterns or something. Let me bring up the diagram of the different models of how you might do emulation. Let's see if we can get that up on screen, that would be great. So I'm showing a diagram here: there have been many attempts in the past at modeling out how complex this would be, that is, if you were to simulate a brain, how much computational power would different types of models at different layers of complexity and fidelity require? And this is mapped to a human brain.
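"GPT-3 on neural activity data" can be made concrete with a deliberately minimal stand-in: the same next-frame prediction objective, but with a plain linear model in place of a transformer, trained on synthetic "recordings". All of the data below is fabricated for illustration; a real project would swap in comprehensive recordings and a far more expressive sequence model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "recording": 3 observed channels driven by a hidden oscillator,
# standing in for whatever a comprehensive small-organism recording would give.
T = 500
theta = np.linspace(0, 20 * np.pi, T)
latent = np.stack([np.sin(theta), np.cos(theta)])            # hidden state
readout = rng.normal(size=(3, 2))
activity = (readout @ latent).T + 0.05 * rng.normal(size=(T, 3))

# The next-token objective, reduced to its essence: predict frame t from a
# context window of the k previous frames via least squares.
k = 5
X = np.stack([activity[t - k:t].ravel() for t in range(k, T)])
Y = activity[k:]
split = 400                                                  # train/test split
W, *_ = np.linalg.lstsq(X[:split], Y[:split], rcond=None)

pred = X[split:] @ W
mse = float(np.mean((pred - Y[split:]) ** 2))
baseline = float(np.mean((Y[split:] - Y[split:].mean(0)) ** 2))
print(f"next-frame MSE {mse:.4f} vs variance baseline {baseline:.4f}")
```

This captures the worry in the answer above, too: such a model can predict the recorded dynamics well without telling you anything about cortex, disease, or other nervous systems.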
Now, you could run the same exercise with, say, a mouse brain or something much less complex, to get a very high fidelity model much sooner, which might start giving you clues as to which model might actually be the accurate one before you could scale it up. If you were to guess right now, which of these models do you think is critical for brains? And of course that's the massive question there. Yeah, I somewhat question the premise of this kind of direct emulation. I think this may be semi-right if you were trying to do direct simulation, but again, the way that I see this is that we will gain in understanding, and we will gain in abstraction of models, as we progress. If you had to start now, knowing no priors essentially, then I think honestly it would be very hard to just stop at electrophysiology, right? Because we know there are lots of interesting things going on: synapses can switch which kinds of neurotransmitters they use, or they can move receptors in and out of the synapse, and electrophysiology alone doesn't really tell you how plasticity works. Well, let's start at the end and then work our way back, right? So surely, if you capture the stochastic behavior of single molecules, you're... Yeah, in principle, absolutely. In principle, that's absolutely fine, I think, even at the end... Where's the first level where you start saying maybe not? So, distribution of complexes, probably yes. I don't quite understand what the order of metabolome and proteome and so on is there, but I think there's a decent chance that you do need what you're calling states of protein complexes, or something like that, in some meaningful way, if you don't have other priors, right?
But if you've actually understood something about how these circuits function, then you may realize that you only need a few states of certain protein complexes, or only the ones that matter, not all of them, right? And that can be abstracted in a certain way. And so I see it much more as a kind of fluid continuum between neuromorphic AI, understanding neuroscience, and uploading brains. I see those as all a continuum, and it basically just has to do with the fact that we can't do any of them right now, because we don't have anywhere near comprehensiveness in terms of observation, or theories that are well linked to it. But if we were actually seeing enough progress in these things, then... I wouldn't be too worried about the raw computation anymore. So I guess what you're saying, let me know if this is correct or not: you're saying for parts of the brain, you might be able to get away with some analog model or some structure that you can simulate in lower fidelity, or still high fidelity, but you know what the model is, so you can now start to work in abstraction. Right. In principle, to really understand smell or something, you'd have to understand olfactory receptor binding to the individual proteins and stuff like that. But if you can just understand, well, we have this distribution of different smell receptors, and it creates a space of a certain dimensionality, that kind of thing, then we can just work with that, and we don't have to worry about small molecules binding to olfactory receptors. There's always gonna be enormous complexity, right? So I don't think that biology just cleanly parcellates into these abstraction layers perfectly, and there's Michael Levin's stuff with ion channels and electrical communication.
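The smell example can be illustrated with a toy dimensionality argument: if receptor responses are secretly driven by a few latent axes, you can recover that low-dimensional "abstract" description without modeling any individual binding event. All the numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# 100 odorants hitting 50 receptor types, but the responses are secretly
# driven by only 3 latent "smell axes" plus noise -- the kind of structure
# that lets you abstract away individual receptor binding.
n_odors, n_receptors, n_latent = 100, 50, 3
latent = rng.normal(size=(n_odors, n_latent))
mixing = rng.normal(size=(n_latent, n_receptors))
responses = latent @ mixing + 0.1 * rng.normal(size=(n_odors, n_receptors))

# PCA via SVD: how many dimensions do you actually need to describe this?
centered = responses - responses.mean(0)
s = np.linalg.svd(centered, compute_uv=False)
explained = (s ** 2) / (s ** 2).sum()
print("variance captured by top 3 components:", round(float(explained[:3].sum()), 3))
```

When the top few components carry almost all the variance, the low-dimensional description is the abstraction layer; whether real biology parcellates this cleanly is exactly the open question raised above.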
There are lots of weird things, and if you look at how the immune system is interacting with the brain, with small molecular fragments, there's a huge amount going on. But in terms of a description of what parts of this matter for intelligent behavior, or for what distinguishes one person from another functionally, it's not so much that I'm worried about raw computation. I'm worried about the whole line of progress of how we understand any of this and get to some reasonable, prior-informed, structured description of it. What's useful for me in this chart is that it maps out, well, we'll at least have evidence of one model for intelligence that works at some level of computation. So then you can back into... well, certainly it's probably not perfectly efficient, especially once you get into the higher models, which are extremely inefficient. And so it gives us a blueprint of how well we could be modeling these things, and how we could get to understand some fractions of the problem to attack those, following the approach you suggested of figuring out how some of these parts might work. Yeah. Well, I think machine learning is gonna play a key role, and even in C. elegans or something like that, I think we'll end up using learned transfer functions of various kinds, so we don't have to know every molecule. Do you think it's time for the kind of NemaLoad-style C. elegans virtualization projects to scale up, doing zebrafish and so on? I don't know, I haven't kept up with that side of things. Potentially. I think it could be a great time for small-organism, very comprehensive measurement projects, all the more so than it was a couple of years ago.
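A "learned transfer function" in the sense mentioned above can be as simple as fitting a cheap surrogate to an expensive model's input-output behavior, so the detailed molecular machinery never has to be simulated directly. The "detailed" model here is a made-up one-liner, and the surrogate is a polynomial fit standing in for a small neural network.

```python
import numpy as np

rng = np.random.default_rng(1)

def detailed_synapse(v):
    """Stand-in for an expensive molecular-level model: all we care about
    is its input-output behavior, not its internals."""
    return 1.2 * np.tanh(2.0 * v) + 0.1 * v

# Sample the detailed model, then fit a cheap learned transfer function.
v = rng.uniform(-1, 1, 200)
out = detailed_synapse(v)
coeffs = np.polyfit(v, out, deg=5)
surrogate = np.poly1d(coeffs)

# Check the surrogate on held-out inputs inside the sampled range.
v_test = np.linspace(-0.9, 0.9, 50)
err = float(np.max(np.abs(surrogate(v_test) - detailed_synapse(v_test))))
print(f"max surrogate error on held-out inputs: {err:.4f}")
```

The design point is that once the transfer function is learned, downstream simulation only evaluates the cheap surrogate, which is what would let a small-organism emulation avoid knowing every molecule.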
Still super challenging, but I think it's worth ideating about whether some of the challenges look more like engineering, which would be amenable to the sort of FRO-type model, versus the kind of conceptual breakthroughs that would be needed for a kind of small-organism, GPT-zebrafish project. Yeah. We have some questions in the audience. Thanks for this, this is great. Are there ways in which the brain and the computer may have fundamentally different constructions, where they may not be able to interface or communicate in certain aspects, like, I don't know, emotions, love, things like that? Yeah, I'm not sure. I mean, I think there's a chance that some of the things the brain is structured to do don't look all that different from pretty significant elaborations on the reinforcement learning neural agents that we see in AI, and are not necessarily so qualitatively different that there's something totally fundamental about them. Emotions could partly be kind of cost functions; they could partly be state modulations. So I'm not sure there's something fundamental there. But the thing I do worry about, certainly with the kind of, if you want, naive stick-lots-of-electrodes-into-the-brain-to-make-a-BCI thing, is that I think the brain does have a huge amount of substructure. So if you imagine just trying to wire up to a computer in a totally random way, so you just have a kind of net of wires, the brain would be even harder than that, right? Computers have certain types of transistors and capacitors and a few kind of canonical structures, but in some cortical circuit, let's say, there might be well-defined kinds of input signals, output signals, learning signals, attention modulation signals, other types of state modulation signals, things that are meant more for memory storage, things that are meant more for biological cleanup or homeostasis, and other kinds of wires that are used for special purposes.
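The "emotions as cost functions or state modulations" framing above can be sketched with a fabricated toy agent: an internal fear variable that reweights the reward the agent learns from, rather than anything qualitatively outside a reinforcement learning framing. All names and numbers are illustrative.

```python
import random

random.seed(0)

class Agent:
    """Toy agent whose 'emotion' is just a state variable modulating its cost function."""

    def __init__(self):
        self.fear = 0.0                            # internal state, decays over time
        self.q = {"explore": 0.0, "hide": 0.0}     # running value estimates

    def reward(self, action, threat):
        base = 1.0 if action == "explore" else 0.1
        penalty = 5.0 if (action == "explore" and threat) else 0.0
        # State modulation: the same bad outcome costs more when afraid.
        return base - penalty * (1.0 + self.fear)

    def step(self, threat):
        if random.random() < 0.2:                  # occasional random action
            action = random.choice(list(self.q))
        else:                                      # otherwise act greedily
            action = max(self.q, key=self.q.get)
        r = self.reward(action, threat)
        self.q[action] += 0.1 * (r - self.q[action])
        # Threatening experiences raise fear; otherwise it decays.
        self.fear = min(1.0, self.fear + 0.5) if threat else self.fear * 0.9
        return action

agent = Agent()
for t in range(200):
    agent.step(threat=(t % 5 == 0))                # periodic threat
print(agent.q, round(agent.fear, 3))
```

Nothing here claims this is how brains implement emotion; it only shows that "emotion" can be cashed out as a cost-function modulation inside an ordinary learning loop, which is the sense in which it need not be a fundamental barrier to interfacing.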
And so if you just wire up randomly and go, oh, we connected to some neurons, that's kind of missing the point, right? This is part of why I'm so excited about connectomics and mapping the structural, molecular circuitry: if you're trying to train something, why are you hooking up to the output wire? You want to be hooking up to the cost function wire. And we don't really know the translation of exactly which of the thousands of different types of cells, and how they're arranged, correspond to inputs, outputs, cost functions, attention modulation and so on. If we knew that, this whole thing would in some ways be much easier to think about. So I'm very worried in a practical sense that we'll oversimplify the brain as just some kind of hunk of neurons, but I'm less worried that there's some fundamental thing like emotion that makes it impossible to communicate. You could just say, well, a decent model, I don't know, but a decent model might be: take the most interesting RL agent or something that exists in AI and ask, how would you propose to intervene in its computations or read out its state? It doesn't necessarily have to be something that profound, yeah. And probably it greatly depends on the fidelity level, as we were seeing in that chart. If you pick the wrong level and you emulate too little, then you might miss critical components. So this is where you'd want to definitely emulate the whole way through. Yeah, and I think it's certainly dangerous to oversimplify. You could say, well, the hippocampus is just a memory storage system, but it's also very involved in emotions and all that. So yeah, another question over there? Yeah. Hi, Adam. Thanks a lot.
So one thing that I would like to ask is, how long do you think until we have some sort of smart devices that provide feedback from our brain to our smartphones? And this feedback is gonna be like, okay, you need to sleep more, or it will learn how our brain works and then provide feedback that actually improves our brain activity and efficiency. Yeah, the thing that I find a little hard about this is that I think there's a lot you can do just with behavior, right? There's some continuum between that and just wearable computing and AR and stuff like that. At some level, you can see pupil dilation, or just tone of voice, or a huge number of other things. Even how you type, you know: what you say and how you type and all of that is gonna be reflecting a bunch of states of your brain. And so the question to me is, when does a brain interface provide some kind of real comparative advantage, a real justification versus the actual difficulty of doing it, relative to truly, completely non-invasive human-computer interaction stuff? And so this is why I think it's a little hard to say, well, maybe this field will just be driven by user interfaces. I don't think it's quite that simple, right? There are things like CTRL-labs, which works on the wrist; you can probably do some stuff with the brain, but there's so much that you can get without the brain for a human-computer interaction purpose. If you think about it, GPT-3 is just text, right? And you can still do so much. So I don't think you need the brain in the near term.
And that's why, to me, it has to be really qualitative improvements in the technology, and in the discoveries you make using the technology, of what you can really do with it, as opposed to just this kind of consumer feedback application or interface application. It would be kind of cool if that was wrong, but I just don't see something coming along that replaces the smartphone for that same use case, at least not in the short term. Other questions? Well, yeah, one more. I guess maybe just a quick follow-up on my earlier one: if we think that the brain can be mapped kind of similarly to a computer, does that mean that we can then program a computer to do what humans can, in the sense of emotion and consciousness and all of those things? Is that a logical extension? Yeah, it's a pretty big question. I don't think any of that necessarily follows particularly from anything that I said. I think that's something that follows from where we are, or where at least a lot of people are, in the brain sciences or cognitive science overall. In short: yes, probably. And some of it may not even require neuroscience insight to functionally do similar things. Because we're making paintings now to some degree, so I think there are levels of fidelity. Consciousness, I don't know, I'm pretty confused about certain aspects of that. But writing poetry or having this conversation or whatever, I'm pretty sure. Yeah, I'm pretty sure that means you could program computers to do that, in a certain very weird programming modality. Yeah, I'm gonna disagree with you and say that it does follow from what you said, in that if you can emulate a human fully, then you have at least one way in which you can make a computer that is capable of doing that.
Now the question is, once you have that as an existence proof, what other approaches can you take to reach that level of sophistication in those kinds of models? We have extremely complex emotions. Now, you can approach this from the other direction and say, hey, what has happened historically here? What have people made already that is similar to this? And there are a lot of research-experiment-type things, but more frequently games, that imbue all kinds of agents and NPCs with significant emotion-type behavior in their reaction and response to stimuli in the environment, right? So you can think of the characters in many games as very basic agents that have almost zero intelligence and zero consciousness, definitively: they have no actual neural structure that could ever give rise to something like that, at least based on everything we know now. But they have something close to emotional structures, where many different types of inputs come together to shift and alter their decision-making process. Now, there are deep ethical questions there. This goes back to the chat we were having with Nick about the ethics of digital minds: as you make minds of higher and higher complexity, you start having questions around where exactly consciousness emerges, and what the ethics are of making these systems at different degrees of complexity that might give rise to some of these emotions, right? So in the kind of Buddhist sense of everything being samsara and suffering, if you're now making a bunch of agents and you imbue them with the ability to suffer, then you're making the problem worse. And so there are some deep ethical questions we'll have to contend with in the coming decades, especially as the intelligence increases.
It's another reason, I think, for neuroscience, honestly, to have some role: what is actually the key circuitry of that, and what's the parameter space? There are people who can respond to pain but don't feel pain consciously, for example, right? These types of things; how does that function? What is the thing we call pain or pleasure? What is the actual set of brainstem and other types of signals involved, and what's the actual dimensionality of that space? We don't know. So, you know. Well, I think one more question right there, yeah. This is a follow-up question, sort of, to what was just asked. And this is very speculative, but would an AGI, in the platonic-form sense, display emotions or some kind of human behaviors as an emergent phenomenon? And in practice, if we were ever to actually get an AGI, would it do that, or would the human influence be so strong that it's almost inevitable that it would display kind of human characteristics? Yeah, I don't know how to answer that in any full way. I mean, this is also an interesting question for Nick and other people, but there are many different kinds of motivational structures or things that one could potentially give an AI. And I don't think we understand exactly what in the makeup of humans is causing these different parts of behavior and emotion, and how much of that is naturally optimizing for some other end goal, right? How much of it is instrumental versus how much of it is built in. But I certainly think in neuromorphic AI, in certain ways, you would expect those things; but exactly what that means, what is neuromorphic AI?
We don't really know, again, because we don't really understand what level of detailed structure the brain has versus what it learns, and so on. I think scientifically, things like connectomics could add a lot to understanding those things. Well, thank you so much, Adam, for speaking with us. It's been extremely enlightening for everyone. And we're looking forward to working together on these things in the next years. Thanks for spending the time with us. Thanks so much for having me. All right. Agreed.