Alright, thank you very much. Now it's my pleasure to introduce our panelists, and before I do that I would like to welcome and thank Professor Anand Raghunathan, who is going to join us here on stage. He is the organizer and moderator of the panel and Professor of Electrical and Computer Engineering, and Professor Raghunathan is also the Associate Director of the Center for Brain-Inspired Computing here on our campus. As for our panelists: we have, of course, Professor Jim DiCarlo, whom you already heard from, Professor of Neuroscience and head of the Department of Brain and Cognitive Sciences at MIT. Thank you again, Jim. Professor Jennifer Neville, there we go: Professor Neville is Professor of Computer Science and Statistics here at Purdue. And the third panelist is Professor Kaushik Roy, Professor of Electrical and Computer Engineering and Director of the Center for Brain-Inspired Computing at Purdue. So I would like all of us to give a warm welcome to our moderator and our panelists.

Thank you, Dimitri. It's a pleasure to have the opportunity to discuss this very exciting topic with a set of very distinguished and really well-qualified panelists. Following up on Jim's talk, we thought we would try to address this question in the panel, and I think it's an apt one because it did come up in the questions the audience asked. The way we'll run this is like a typical panel: I'll give a very brief introduction, and the panelists will have an opportunity to make opening statements, but the true key to the success of any panel lies in audience participation. So please jump in and ask; don't shy away from questions that might be tough or controversial. We hope to get a lively discussion going. Thank you all for staying back and participating as well.

As Dimitri said, we have three panelists here. Let me quickly give you a little bit of background to this question. As you all know, stepping back a bit from AI and looking at computing at large, human brains, biological brains, have been an enduring source of inspiration throughout the history of computing. This is George Boole and his book published in 1854, titled An Investigation of the Laws of Thought, which led to Boolean algebra, which, as we all know, is the foundation of modern digital computers. Moving forward by about a century, this is von Neumann and his book The Computer and the Brain; it's a very interesting read, available online, as is the previous book. And Alan Turing as well, with his article titled "Computing Machinery and Intelligence," and a rather funny quote associated with him. He is supposed to have shouted this out loud in the cafeteria of Bell Labs: "I'm not interested in developing a powerful brain. All I'm after is just a mediocre brain, something like the president of the American Telephone and Telegraph Company." And fast-forwarding to 2019: the winners of the Turing Award, named after Alan Turing, this year were Yann LeCun, Geoffrey Hinton and Yoshua Bengio. Coincidentally, Geoffrey Hinton is a descendant of George Boole. Their award was for conceptual and engineering breakthroughs that have made deep neural networks, brain-inspired networks, a critical component of computing. So clearly we've had a lot of inspiration from biology in computing from time to time. Now, focusing specifically on AI and neuroscience: Jim referred to this virtuous cycle.
So we certainly hope that AI and neuroscience can form this virtuous cycle, where on the one hand neuroscience informs advances in AI, and Jim addressed that very well in his talk. But a virtuous cycle needs two links; you need to complete the loop, right? So certainly advances in AI should also help us better understand the brain, and the later part of Jim's talk addressed that as well. Now, this is a broad topic. While I'd welcome audience questions on any aspect of it, for the preset questions I have for the panelists I focused on one of those links: going from neuroscience to AI, or neuroscience-inspired AI. This is a recent series of articles in Nature titled "The Brain: A Source of Inspiration," and in particular, if you're interested, I refer you to the article from Demis Hassabis of Google DeepMind and others. They list some key ideas from neuroscience that have inspired AI, starting with the very concept of artificial neurons and artificial neural networks in the first place; moving on to deep neural networks, where you have more than two layers, with hidden layers; reward-driven learning, or reinforcement learning; the notion of attention, paying attention to different parts of a visual scene, for example; the notion of episodic memory, remembering and replaying things you experience, perhaps while you sleep; the notion of working memory; continual learning; and so on and so forth. These are some ideas, just to name a few, where if you look at the papers being published, you will see references made back to the literature in neuroscience. Now, of course, this is not to say that these ideas could not be explained in other ways, without referring to neuroscience, and hopefully that's something we'll talk about in the panel. But this is the starting point.

So with that, I'd like to start the panel by laying out these questions to the panelists. The first question looks back and the other three look forward. First: what are the major success stories of neuroscience having driven meaningful advances in AI? And a subplot to that would be: which of those have perfectly good explanations without a neuroscience basis? The second question, looking forward: what benefits can we expect from neuroscience-driven AI going forward? In some sense, artificial intelligence is already superhuman in its capabilities, in a very narrow sense, for sure, at specific tasks. So can we really still gain by being neuroscience-inspired? The third question: how do we take valuable lessons from the brain while avoiding blind biomimicry? I think Jim has used this phrase a lot, so I'll borrow it: feathered wings. We don't want to build planes with feathered wings. But how do we strike that balance? And last, but not least, how do we reflect the differing substrates? The brain is built on wetware, biological cells, essentially a vat of chemicals. But we don't build our artificial intelligence systems on the same substrate; we use hardware, or software, or hardware and software, on which artificial intelligence is realized. So how does that affect the approach to borrowing ideas from neuroscience? Those are some of the questions, but I hope they're not all of the questions; I really hope we'll have more interesting questions coming from you, the audience.
So with that, I will give Jim a break, because he's been speaking for 45 minutes. I thought I'd start the proceedings with Jennifer; she'll go first, then Jim, and then Kaushik.

Okay, so I'm going to stand over here. Could we switch to Jennifer's presentation? Great. I'm going to stand over here because I haven't presented these slides before, so I need to be able to look at them to tell you what I'm going to say. I am not a neuroscience researcher; I am an AI and machine learning researcher from the Computer Science Department. So I will give you my view of what's going on in the AI space and how that connects to neuroscience, in a limited way. Let me go back. Let me start by framing the question: what we have focused on in the area of AI is really how to represent knowledge and reason with it inside an algorithm. And we have taken a lot of insights from how we do this in our brains. There is a nice feedback loop back and forth between the computational algorithm people and the neuroscientists or psychologists who study it in the brain. From a history-of-AI perspective, there are two main camps of AI. The first main camp is called symbolic AI, and you can see this quote (I can't even see it at the bottom here) from Newell and Simon in 1976: they claim that a physical symbol system has the necessary and sufficient means for general intelligent action. Newell and Simon were really two of the fathers of the field of AI, and they thought that the way we reason and decide how to act in the world is to have symbols that we think about. So here it's showing you their symbols relating to people and objects in the world and the relationships between them, and that is how we decide how to plan and act and behave in the world. Another thread of AI you could call connectionist AI, which is really the set of areas of AI that have focused on representing information in these neural networks in the brain. In this view, information is represented in the weights inside these neural networks: how the units connect to each other, how they are activated, and the memory that is used in those neural network structures in the brain. For a long time in the field of AI these were two very separate threads, and I'm glad that Anand put up the Turing Award winners, because I tried to find a quote with respect to connectionist AI, and I think this is actually the best one, from Geoff Hinton, where he said that now that they've gotten the Turing Award, he guesses neural networks are now respectable computer science. For a long time it was sort of looked down on, and most of the field of AI thought symbolic methods were really the way to go.
And even though these are two threads of research, they were really being pursued jointly, in parallel, over the course of... this is one of my slides that shows the history of machine learning (oh, it didn't transfer very well to the Windows machine), but it shows you that at the top, in machine learning, we had a bunch of symbolic methods that included rules and decision trees and graphical models, and then on the bottom we had more continuous methods that started with linear models (I guess I didn't put artificial neural networks here, but I should have, with the word "artificial") and kernel methods, and finally deep learning. Along the way, what we were learning with respect to both of these families of models was how to put information into the algorithms, or into the representations, in order to learn better, more accurate models in the end. And what I want to point to here is the focus on inductive bias in the 1970s and early 80s, because that's what I'm going to come back to when I talk about the relationship with neuroscience.

To give you a quick example, I had one main idea to talk about for the connection between neuroscience and machine learning, and that is in the area of reinforcement learning, which is a subfield of machine learning that focuses on how to learn in cases where you get delayed rewards. For example, when you're playing a game you don't get immediate feedback after you make a move about whether that was the right move or not; eventually you just win or lose the game, and you have to figure out, based on that delayed reward, how to back up value to each of the moves you made along the way, to learn what good strategies are over time. People have been studying this kind of process both in biological systems and in computational, algorithmic procedures in machine learning, and there has been a really nice correspondence between what the theoretical algorithms do in machine learning and what's been found in neuroscience about how this works in the brain. In particular, they found that if you look at the signals of the dopamine neurons in the brain, those really encode what we call reward prediction errors, which is what we need in the algorithms: the difference between the expected reward you think you're going to get (for example, you expect to win the game if you make a certain move) and what actually happens. You then learn, over time, that what you expected either did or did not happen, and adjust your strategy accordingly. That really shows a physical manifestation of the exact mathematical procedure encoded in temporal difference learning in reinforcement learning, an algorithm that was invented by Rich Sutton in 1988. So this really shows you the connections back and forth between things we explore in the machine learning world algorithmically, just to see if we can get these algorithms to mimic the behavior we want to see (for example, that they can learn how to win the game, like AlphaGo, or play Atari games, which is being worked on right now), and what is actually happening in the substrate of our physical brains, and whether we can show that the same kinds of processes are happening. So, in terms of moving forward, what I wanted to point to is that as we try to merge these two fields together, it is really going to help us do what I think is the next thing we need to do in machine learning, which is to merge, or unify, these symbolic and connectionist views.
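[Editor's note: as an illustration of the temporal difference learning idea described above, here is a minimal sketch of tabular TD(0) in Python. The toy episode and parameter values are invented for illustration; the line to notice is the reward prediction error `delta`, the quantity that dopamine neuron firing appears to encode.]

```python
# Tabular TD(0): learn state values V from delayed rewards (Sutton, 1988).
# The toy episode below is invented purely for illustration.

gamma = 0.9   # discount factor: how much future reward is worth now
alpha = 0.1   # learning rate

V = {}  # state -> estimated value, default 0

def td0_update(state, reward, next_state):
    """One temporal-difference update."""
    v, v_next = V.get(state, 0.0), V.get(next_state, 0.0)
    # Reward prediction error: (what happened) - (what we expected).
    # This is the signal dopamine neurons appear to encode.
    delta = reward + gamma * v_next - v
    V[state] = v + alpha * delta
    return delta

# A toy episode: no feedback on intermediate moves, reward only at the end.
episode = [("s0", 0.0, "s1"), ("s1", 0.0, "s2"), ("s2", 1.0, "end")]
for _ in range(100):  # replaying the episode propagates value backward
    for s, r, s_next in episode:
        td0_update(s, r, s_next)
print(V)  # earlier states acquire the discounted value of the final win
```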
That is really what a lot of the current work in machine learning is about right now: how to take the more complicated abstractions we reason with in symbolic systems and push them down into a neural network formulation, by reasoning over more complex sets, for example graphs, or sequences with special kinds of invariances in them. We're learning how to do that mathematically and algorithmically right now, but what we would hope to find is that we can see the same sorts of signals in neuroscience investigations, which could help drive that synergy forward. I think in particular the area of social neuroscience is looking at very interesting questions right now about how you process information when you're in social situations and have to interact with people. For example (I think maybe I've picked the wrong figure here), some of the recent work by Emily Falk from the University of Pennsylvania and her colleagues has looked at whether how you are positioned in your social network actually affects how the neurons in your brain activate when you're processing certain information. They've shown that you actually have different firing patterns when you see someone, based on their position in your social network structure (there are way too many "network" words in use here). For example, if you're tied to someone very tightly in your social network, your brain will fire in a different way than when you see somebody you're more loosely connected to, which is exactly the type of thing our machine learning algorithms are trying to use to make more accurate predictions about people and their behaviors in social networks. Okay, so that's it; that's all I had.

Thank you, Jen. And I guess next, Jim; you already showed one slide, so...

Yeah, I'd rather move to the discussion; I sort of gave you my opening views on things in my talk. So in the interest of time, maybe let Kaushik go. Is that okay?

I've got one, sure, and then you can ask questions. Okay. All right, so I'm going to get closer to you guys so that I can see better. Certainly there has been a huge amount of progress in deep learning; we know that. But let's look at where we are today in terms of energy consumption. I'm going to take the engineering point of view and try to look at energy consumption. This is a plot of AI compute demand, especially for training: the y-axis basically shows the number of operations required to train for, let's say, image classification, language processing, or optimization approaches. As an example, if you take one of those networks implementing one of the optimization problems, like neural architecture search, the results are actually quite amazing: the number of operations required for training are zetta-operations, on the order of about 10 to the 21 or so. But if I were to really look at the carbon footprint of training that particular network, this is quite amazing: the estimated carbon emission for NAS (neural architecture search) on the Transformer network is about 315x higher than air travel from New York to San Francisco per passenger, about 17x higher than the average American's footprint over one year, and about 5x higher than a car over its lifetime. Those are really amazing numbers, a huge amount of energy consumption, and that's just for training.
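[Editor's note: the ratios quoted above appear to track the widely cited estimates of Strubell et al. (2019), "Energy and Policy Considerations for Deep Learning in NLP." The absolute figures below are from that paper; attributing them to this particular slide is an assumption. A quick arithmetic check:]

```python
# Sanity check of the carbon-footprint ratios, assuming the figures come
# from Strubell et al. (2019); all values in lbs of CO2-equivalent.
NAS_TRANSFORMER = 626_155   # training a Transformer with neural arch. search
FLIGHT_NY_SF    = 1_984     # one passenger, New York <-> San Francisco
AMERICAN_YEAR   = 36_156    # average American, one year
CAR_LIFETIME    = 126_000   # average car over its lifetime, incl. fuel

for name, ref in [("flight", FLIGHT_NY_SF),
                  ("American-year", AMERICAN_YEAR),
                  ("car-lifetime", CAR_LIFETIME)]:
    print(f"{name}: {NAS_TRANSFORMER / ref:.0f}x")
# Prints roughly 316x, 17x, and 5x, matching the ratios quoted in the talk.
```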
Now, on the other side of it, let's really look at inference. This is something you're all familiar with: Google's AlphaGo beating Lee Sedol back in 2016. But the question, of course, is at what cost, in terms of the power consumption or energy required. It turns out that back in 1997, IBM's Deep Blue, beating Kasparov, ran at about 15,000 watts (Kasparov was very, very upset about it at the time). More recently, in 2016, Google's AlphaGo: about 300,000 watts. How efficient is the brain? The number might not be exact, but it's probably of the order of about 20 watts, which means these systems are about four orders of magnitude higher than what the brain does. And that's the high-performance scenario. Now, if I were to look at the efficiency gap in a wearable: say you have a smart glass, and you try to implement some of these AI algorithms on it. As an example, if I were to take a smart glass like this, implement a Google Edge TPU on it for inference, and analyze it, we would find that the battery lifetime is going to be of the order of about an hour or so. If you would like a battery life of about eight to ten hours, it turns out you really have to do something different. So where do these inefficiencies come from? Partly we don't know how the brain does it, but the inefficiencies really come from the fact that we don't know the good algorithms, we don't have the right kind of hardware architecture, and we certainly don't have the right kind of neural circuits and devices. With that in mind, the question, of course, is: can we do better? Can we really come up with the next generation of AI systems, looking at the right kinds of algorithms, architectures, and circuits? To do that, I believe we can take some cues from the brain, because we have an existence proof that the brain does well. Whether it does the best or not, I don't know, but it certainly does well. To that effect, can we look at neuroscience and try to find the kinds of network topologies used in the brain? Jim talked about the CORnet kind of architecture, which is a brain-like architecture with a good Brain-Score. Would that architecture be suitable? Would it give me the kind of energy improvement I would like to have? How about the representation of information in the brain? It is usually in terms of spikes, and it turns out that if I'm a circuit designer, spikes are good. Why are spikes good? Because they let me do event-driven computation. So if I can learn with spikes, and if I can do event-driven computation, one can potentially get a large improvement in energy consumption.
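[Editor's note: to make the event-driven argument concrete, here is a toy sketch of a leaky integrate-and-fire neuron that performs work only when an input spike arrives; between events the leak is applied analytically, so silent periods cost nothing, in contrast to a dense matrix multiply that computes on every input regardless of activity. All parameter values are illustrative assumptions, not from the talk.]

```python
import math

# Event-driven leaky integrate-and-fire (LIF) neuron: a toy sketch of why
# spikes can save energy. Work is done only when an input event arrives.

TAU = 20.0        # membrane time constant (ms); illustrative value
V_THRESH = 1.0    # firing threshold
V_RESET = 0.0     # potential after a spike

class LIFNeuron:
    def __init__(self):
        self.v = 0.0          # membrane potential
        self.t_last = 0.0     # time of the last processed event (ms)

    def on_spike(self, t, weight):
        """Process one input spike arriving at time t with weight `weight`."""
        # Decay the potential over the silent interval since the last event.
        self.v *= math.exp(-(t - self.t_last) / TAU)
        self.t_last = t
        self.v += weight      # integrate the incoming spike
        if self.v >= V_THRESH:
            self.v = V_RESET
            return True       # output spike emitted
        return False

# Sparse input: three events in 100 ms, so only three updates are computed.
neuron = LIFNeuron()
for t, w in [(5.0, 0.6), (12.0, 0.3), (14.0, 0.4)]:
    if neuron.on_spike(t, w):
        print(f"output spike at t={t} ms")
```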
Can we do better learning? We have to see; we can also look at different kinds of learning. But whatever we design today, these kinds of networks are still black boxes: we don't really know why they work, and a lot of the time we don't know why they don't work. To that effect, there's a need for understanding these networks; a theoretical understanding is required. Again, we can go to communication theory, computer science theory, and the mathematical foundations, and ask: can we come up with a better theory of learning, and better network optimization? At the end of it, that might lead to better safety and robustness in these systems; today they may not be as robust as we want them to be. And finally, at the end of it, what happens is that we end up implementing everything with CMOS circuits. What are these CMOS circuits? They're basically good on-off switches. Is a good on-off switch good for neurons and synapses? Probably not. So the question then is: is it possible to think of devices that can in some way mimic neuronal and synaptic functionality, at different levels of bio-fidelity? That might lead to better neuromimetic devices and better neural computing fabrics. Those are the kinds of things one can potentially look at. By the way, these are some of the things we're looking at as part of our center, the Center for Brain-Inspired Computing, and hopefully, all working together as part of that center, we can get a quantum improvement in the near future. Thanks.

Thanks, Kaushik. So maybe what we could do is go back to my presentation and put up the questions on the screen. I'll get it started with the first question, looking back. Among the many of these advances that undoubtedly were inspired by neuroscience (the pioneers referred to inspiration from the brain), which of them do you think that inspiration was crucial for? To give you some examples: if you think of the notion of deep neural networks, having many layers, that could also, for instance, be justified perfectly well from hardware. When we design logic circuits, we all know that designing circuits with two levels of logic is not scalable; when the function gets more complex, you need a multi-level circuit. This also forms the basis of circuit-complexity-based analysis in computer science theory, for example. So that's an alternative way of looking at the rationale for deep, or multi-layer, networks. Or convolution: we have Professor Reibman sitting here, who works in signal and image processing. The notion of convolution, if I'm not mistaken, in electrical engineering far predates the point at which it was actually discovered that the brain has these visual fields that show spatial invariance. So there are alternative explanations for some of these phenomena, but at the same time, it is clear that neuroscience has been a source of inspiration. So maybe the first question I might set forth to the panelists is: from your perspectives, which of these do you think are the crucial inspirations that AI has benefited from, coming from neuroscience?

Well, just briefly, in the interest of time: I think it's very hard to say what came from what in these fields, and I think the fact that that's hard to say is exactly what we want going forward. A lot of the original backprop work was published in cognitive science journals; does that mean it came from cognitive scientists or from engineers? The folks working in these intersections, you mentioned some of the Turing Award winners, Geoff among them, were inspired by the brain and working in these spaces of ideas.
And am I an engineer or a scientist? I'm not actually sure. In my lab I try to put these people together, and rather than debate which idea came from one field or another, the notion is that you put people together and say: this is at least an existence proof, so let's try to model it. But it's a question of resource allocation. I don't think, if you want to do AI, you should invest all your money in figuring out how the brain works and then do AI. I also don't think you should spend everything on straight-up forward engineering and hope that it will be like the brain. Some balance between those two, I think, is the spirit of what's going on. But assigning credit for these crucial ideas, I think you're just asking for speculation; it would be hard to trace back. Anything I'd say, you'd say somebody else had thought of in a different way. So I don't think it's a fair question. Sorry, not to give you a hard time.

So you're questioning the question, in other words, in effect. Any thoughts, Jen or Kaushik?

I kind of agree with Jim, but if you really think about it, the inspirations certainly came from some sort of brain-guided approach. The very fact that we're using neural networks possibly came from the brain: the brain has neurons and synapses connected in some interesting ways, so can you do some interesting functionality with that? Is that from neuroscience? Possibly, at one time. But at the end of it, when it comes to engineering, we certainly have to implement those in effective, interesting ways. We certainly don't have biological neurons to play with, at least not for my circuits; I still have to use an artificial neuron, which is going to consist mostly of transistors connected in some interesting ways. That's where the engineering comes into play. Do I have the synapses? Probably not; we have to come up with solutions for some of these synapses, again using CMOS transistors in some way. That's certainly engineering, that's certainly devices, that's certainly circuits.

I guess I would say that the field of AI has often tried to mimic human behavior, which would mean trying to see what's going on in the brain and actually doing the same thing. But a second thrust would be to just have algorithms that behave rationally, and one thing we could discuss is whether we want our algorithms to behave like humans, who are often irrational in their decisions, in what information they pay attention to, or in what strategies they take, or whether we want them to be more predictable and deterministic going forward. So that's one thing to think about. I would also point out, and I brought this up at lunch, that while you might think the recent advances in machine learning are all from deep learning, given the current hype in the news, actually a lot of progress has been made on models from machine learning that were not neural-network-based, and they have proven to scale to very large environments and to be very accurate and very predictive. Of course, neural networks are helping us push the boundaries even further, but I would say we have made real progress just by being engineers and trying to build the thing that is going to be the most predictive; then later on we go back and think about how to understand what's going on in that algorithm, either from a mathematical perspective or by connecting it to neuroscience.
So I think it's also impossible to tease them apart, but there is this nice back and forth between the two.

Great. I think we have a question from the audience, so in the interest of making sure the audience gets priority, let's go ahead.

Awesome. Sorry, I have to leave in about 10 minutes, so I want to ask it now. It seems very much, through this whole talk, that we are talking a lot about the connection between neuroscience and AI, but in a sense we're not making so much a human as we're making the perfect engineering student: we have this problem, we have to solve it, whether it's in computer science, neuroscience, or engineering. But there's a whole side of the brain that people have which I would describe as the liberal arts student, where the goal is not so well defined, where you don't necessarily have an end goal, where you're not necessarily doing something for a purpose. So is there any research currently into that side of the brain, and into whether that side can help strengthen this engineering side? Some of the great minds of all time, say Leonardo da Vinci, were excellent engineers but also excellent liberal arts students. So is there anything currently going on, or is everyone focusing on the engineering side of the mind?

Sounds like a question for Jim.

There are different levels at which to take your question. There's a version that's just about what we all do: many of us are goal-driven. I presented our stuff, and it's very goal-driven: let's build a simulated copy of the system, that's our goal. And you might say, well, that's the wrong goal, you should have a different goal; there are versions of the question like that, and that's the engineering side of me. But I think you're asking something more meta, about where new ideas come from. There's a sort of hypothesis-generation phase, and then, once you have a space of hypotheses, like the families of deep neural network models, you can say okay, now we can engineer and optimize against it, and then ML effectively takes over. But I think you're asking more deeply where hypotheses come from. I can tell you about some neuroscience work on elements of that, but it's more in small species, like how birds learn to sing. One of my colleagues, Michale Fee at MIT, works on that: the bird has to generate variability in its song in order to then shape it, so you could call that variability generation a form of hypothesis generation. But there are much more cognitive-level versions of the same kind of question: where do new ideas even come from? I don't yet know the answers to that. Again, there's a hint of neuroscience there, but those are really hard intelligence questions. I'm sure Jen would have more to say about this than I would from the neuroscience side; other than that, it's not a question our vision lab is working on. We're much more goal-directed.

Yeah, from the AI perspective, I don't know if this exactly answers your question, but there are people working on creativity in art and dance and music and humor, and those are some of the much harder questions: to get an algorithm that can generate things that are objectively beautiful to people, or new and interesting.
And certainly there's a lot of work on neural networks producing new art that looks exactly as if it was generated by Van Gogh, but is not actually a Van Gogh that was painted. Whether that's truly creative, or just another objective function the model is applying, I think is an open question. One interesting tidbit is that in these chatbot systems that try to talk and interact with people, they found that some people just want to keep the conversation going; they don't have a goal like "I need to book a flight" or "I need to add this meeting to my schedule." So some of the researchers have been working with improv actors to actually try to get data about how you are creative and how you improvise, and the expectation is that if we get enough data from that, we're going to be able to learn how to do it with a model, as an optimization procedure. But maybe that's a very engineering way to think about it, and it's still an open question whether we'll be successful or not.

Thank you. There was one more question.

My question is for anyone. We have some algorithms that are able to perform very well at very specific tasks, like chess and Go with AlphaGo, but those algorithms are geared toward performing those tasks only, and don't really generalize the way a human brain does, in terms of telling us what a game even is, or doing any of the things that a normal adult or even a child can do. So I guess my question is: how long do you imagine it will take to achieve the goal of reverse engineering the brain, or any brain, a human brain, a child's brain? In terms of years, do you think this is something that can reasonably be achieved in our lifetime, or over a span of many years? How do you envision that taking place?

Scientists are supposed to say it's 10 years out, right? But it's always 10 years out. No, it's going to be at least 10 years out. But you're right that the current systems are narrow, though they're certainly way broader than they were 10 years ago. Maybe we should make a plot of the progression of narrowness; it's even hard to define the space of problems. I like to step back and say: if you came down from another planet and looked at these organisms here, all of us, we're narrow too, with respect to some framing. I think that's a lost point; we think we're generalists who can do everything. But I could show you white-noise patterns and say, can you separate this one from that one? You couldn't; you're really optimized in some narrow space that evolution has built. That's me speaking from the vision side, but it also holds for your motor mechanics, everything you do, your ways of thinking, how you use language. We are narrow, if you step back and imagine going to another planet, or maybe even new physics. There's a narrowness still; it's just a question of scale. So you're right that these systems are narrow, but they're getting wider, and you're asking when the width essentially captures ours. The engineer in me would say: let's make some plots of width, operationalize it, and then maybe we'd make an extrapolation. I've never done that; maybe you guys have thought about it.
But I certainly think we're going to need a lot of advances on the hardware side, even on the bio-embodied system side. If you really think about agents in the world, it's much more than just sensory processing, which is what we work on. It feels like there's a whole slew of engineering, beyond just information processing, that's going to be needed to even imagine such a world; and that's more in the things that Kaushik and Anand think about. But again, that's my perspective.

No, I agree. It depends on how you define the brain: is it a human brain, or another, artificial brain that we are actually building? Today we can do a whole lot of things, and, again going back to what is narrow and what is broad, it keeps changing, and we keep doing new things. There are advancements in algorithms, advancements in circuits and architecture, in fact even in devices today, which are helping us develop better, broader systems. I don't know; I can't predict.

I think one other relevant thought here is that you don't need to fully understand the brain in order to have some practical utility in any of those application areas Jim talked about. As we find out more and more, there's an incremental path forward; we can already see utility from that, whether it be for health and medicine, or brain-machine interfaces, or AI. So, there's a question; actually, there's a mic back there, and then we'll take one more.

After all these nice large-scale questions, here's a smaller-scale one. When you're talking about the models, either neural or computational, we're mostly talking about cell-body firing. What would be the model, or circuit, equivalent of dendritic computations?

Well, that's a good question. Some people would like to think of a whole neuron, with all its dendrites, as being maybe multiple nodes with separate compute compartments. There are certain people who think that's how it might be, but I would say that's not yet clear, and I'm not going to speculate much more on that. It's one of those many things; I had a slide I pulled out on what needs to change in our current neural network models, and "should we start building fancier dendrites?" was one of the points on that slide. There are many things that differ architecturally between the current models and the brain, and it's really educated guesses: are those more of the feathers, or more of the wings? It also depends on your objective function a bit: is that going to serve an energy-savings goal, or is it going to make the system perform better or be more robust? We don't really know the answers to those questions, so we're going to take incremental guesses, try to implement various ones (that's at least our approach), and see where they show improvements. People still speculate about this. Of course there are measurements of individual cells and what they do, but how that translates from the cell all the way up to full network performance is completely unclear, and I don't know how to figure that out without actually trying it. Maybe someone more theoretically savvy than me will figure it out, but my version would be: build it and see if it works better. That's tough to do at scale. So I don't think that's an answer, but at least it gives you where we are on this.
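[Editor's note: a toy sketch of the "separate compute compartments" idea mentioned above: a unit whose dendritic branches each apply their own nonlinearity before the soma combines them, contrasted with a standard point neuron. The structure, sizes, and weights are illustrative assumptions, not a model endorsed by the panel.]

```python
import numpy as np

# Toy contrast between a standard "point neuron" and a unit with
# nonlinear dendritic compartments: each dendritic branch pools a subset
# of inputs through its own nonlinearity before the soma combines them.

rng = np.random.default_rng(0)
x = rng.normal(size=8)                 # one input vector with 8 afferents

def relu(z):
    return np.maximum(z, 0.0)

# Point neuron: a single weighted sum over all inputs, one nonlinearity.
w = rng.normal(size=8)
point_out = relu(w @ x)

# Two-compartment neuron: inputs 0-3 go to branch A, inputs 4-7 to branch B.
w_a, w_b = rng.normal(size=4), rng.normal(size=4)
branch_a = relu(w_a @ x[:4])           # each branch acts as its own sub-unit
branch_b = relu(w_b @ x[4:])
w_soma = np.array([1.0, 1.0])          # soma combines branch outputs
dendritic_out = relu(w_soma @ np.array([branch_a, branch_b]))

print(point_out, dendritic_out)
```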
So the question is: we know that the brain has different structure, different architecture in different regions, and even the neurons come in several types. Why don't we use that information in building the models? Why do we try to build a very simplistic model where the units are homogeneous, when we know it's definitely not like that in the brain? For example, you were trying to emulate area V4, but we know that up to area V4 there are different types of neurons and different types of structure, yet we are using just homogeneous layers. Do you think there is some room for improvement there, using more of that information, and maybe a different approach than just fitting a simple model?

Right, so that's a great question. If I handed you a tissue slide of V4, you wouldn't be able to tell the difference: the connectivity, the circuits, actually look very similar at the anatomical level. It has long been known that cortical structures (there are some exceptions) are very similar anatomically. But your point that even within V4 there are many cell types is right, and most of these models (this is related to the dendrite question) just assume one cell type, which is just an integrate-and-fire kind of thing. And wait, what about the inhibitory neurons, and these particular cell types? There's a whole parts list there that isn't put in. Again, this is back to the feathers: should we just implement all of that, put it together, and hope it works? That's sort of the dendrite version of the question, and that's in effect what the Blue Brain project in Europe was: put it all in. That is very hard to do at scale. So where we're really at is taking guesses as to whether we should try this or that slight move in one direction; for us, local recurrence was a slight move in that direction. The parts list from neuroscience is huge, and you just can't imagine implementing it all right now, getting all the parameters right so that it just stands up and walks out of the room; that's too big of a leap. So you have to take these steps: we have some traction now with these sorts of networks, and, as you said, there are other things, and we can add some of them, but we need to be smart about how we add them, and about what improvements they give, rather than trying to throw it all in at once, which feels impossible to me. But it's a good source of inspiration that you're pointing out.

From the circuits and architecture point of view, it does make sense in some cases to use different kinds of neurons, and people have looked at some of those. The problem comes in different ways: can you effectively train some of those large-scale networks in interesting ways? We can certainly distribute different kinds of neurons, but training becomes an issue, and then of course you have to think about inference. But just to give you an idea: people have looked at combinations of, if you want to call ReLU a, you know, standard neuron, ReLU units along with some radial units, to look at different kinds of robustness questions: can I get a more robust circuit with those? People have looked at some of that, but not really from the neuroscience perspective; I would say it's more from the mathematical side of things.
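[Editor's note: a small sketch of the kind of heterogeneous layer described above, mixing standard ReLU units with Gaussian radial-basis units. The sizes and the 50/50 split are arbitrary choices for illustration, not from any specific work cited in the talk.]

```python
import numpy as np

# A heterogeneous layer mixing two unit types: standard ReLU units and
# Gaussian radial-basis (RBF) units. Illustrative sketch only.

rng = np.random.default_rng(1)
IN, N_RELU, N_RBF = 16, 8, 8

W = rng.normal(scale=0.5, size=(N_RELU, IN))      # weights for ReLU units
C = rng.normal(size=(N_RBF, IN))                  # centers for RBF units
GAMMA = 0.5                                       # RBF width parameter

def hetero_layer(x):
    relu_part = np.maximum(W @ x, 0.0)            # unbounded projection units
    dists = np.linalg.norm(C - x, axis=1)         # distance to each center
    rbf_part = np.exp(-GAMMA * dists**2)          # local, bounded responses
    return np.concatenate([relu_part, rbf_part])

x = rng.normal(size=IN)
h = hetero_layer(x)
print(h.shape)  # (16,): 8 ReLU features + 8 RBF features
```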
Great. I know that there are a couple of other questions, and I hate to break this up, but I've been given a hard stop, because our friends who are helping broadcast this event actually have other duties; there are other events taking place. But, if the panelists don't mind, I'll volunteer them to stay back a little longer to help answer some of your questions, so feel free to chat with us after this is over. Maybe I'll invite each of you to make a very brief closing statement on any aspect of this field, or where you see it headed, and then we can wrap up.

I'll just reiterate what I tried to say in my talk: the intersection between neural network engineering and wanting to understand the brain is very exciting right now, and I think that flow is going to be a big, booming area, certainly for neuroscience. It will be a bet whether those payoffs occur on the AI side, but it's going to be required for us to actually say we understand ourselves: how to better educate our kids, how to fix brain disorders. The engineering applied to neuroscience is just going to get bigger and bigger with regard to understanding the brain, and there may be payoffs on AI that we can look forward to. So if you like both of those questions, it's an exciting time to be working at that intersection.

Thank you. I think I forgot to say this in my opening remarks: I think what we can take from neuroscience is the inductive bias that we need to put into our models to help specialize them, in ways that will allow them to scale to larger, more complex environments. I think what Jim said earlier was a great example: we are actually specialized, narrow systems in some ways, and if we can learn from what we see in the brain, whether different types of neurons or different ensembles (I don't know what dendrites are, but I'll look it up), and learn to incorporate that as specialization into our models, that might be the thing that pushes us toward much more complex models, with more flexibility to reason about more complex environments. So I think that's really the exciting thing: to look at the connection between the two.

Well, I certainly think it's a very, very exciting area of research, especially looking at neuroscience and engineering together. The way we are looking at things is across the stack, and I'm speaking from the electrical engineering point of view: going from algorithms (we don't know the right algorithms; coming up with the right learning and inference techniques), further down to a proper hardware architecture suitable for them, and at the end of it, really thinking about devices: moving away from the standard way of looking at transistors only, into a domain where you can think about directly building neurons and synapses of differing bio-fidelity, which can potentially lead to very exciting results. So for the students here, I think it's a very, very exciting time and an exciting area of research.

Great. I just want to conclude by saying that my takeaway from listening to all three panelists is that one common factor was their excitement for this area.
So I hope that at least those of you, the students, who are looking for problems or areas to work on will take away that message: the interface between neuroscience and engineering at large, or more narrowly neuroscience and computing, or even more narrowly neuroscience and AI, is an exciting one, and hopefully some of you will be the next generation of leaders driving the advances to come. So with that, let's thank all the panelists once more, and thank you all for your participation.