So, in other words, you have this vast associative web inside your head that's embodying information about all your previous experiences and knowledge. And information that you learn newly will have to make sense and be referenced or grounded by this associational web. That's how it enters it. So, again, this is just an entirely different paradigm of how we learn, right, and how we think than how a computer does it. Boom! What's up, everyone? Welcome to Simulation. I'm your host, Allen Saakyan. We are at the Brain and Cognitive Sciences Building at the Massachusetts Institute of Technology in beautiful Cambridge, Massachusetts. We are now going to be talking about the differences between brains and computers. We have Dr. Robert Ajemian joining us on the show. Hello. How are you doing? Thank you so much for coming on the show. My pleasure. I'm super excited. Robert's background is epic. He's a research scientist at MIT's McGovern Institute for Brain Research, a professor of brain and cognitive sciences at MIT, and he teaches a computational neuroscience class titled Emerging Computation in Distributed Neural Circuits and studies the science of mnemonic techniques. And you can find his link below. He's also a fellow Armenian. Check out his link, super interesting stuff up there. Let's jump into this question that we love starting with. We find ourselves as stewards of Earth. What is your current take on the state of our world? Well, obviously, I think the state of our world is failing in numerous respects. And there's a geopolitical answer and then a more personal answer. So the geopolitical answer is that humans are transforming the planet — the recent UN report on all the species vanishing, et cetera — and we're not necessarily taking action to counter that. So that's a big problem, but it's one that I can't do too much about. I am concerned that the scientists know that what we're doing is going too far, pushing the planet to its limits, but the policymakers aren't necessarily listening properly and taking corrective actions. So that's on the big geopolitical level. But again, that doesn't necessarily affect me on the personal level. What concerns me more on the personal level is that as our society has become more technologically driven, with social media mediating the entire planet, I'm concerned that the way humans think has been transformed in a not propitious way. In particular, it strikes me that with all of these electronic devices and contraptions, a lot of our thinking and a lot of our daily behavior is stimulus-driven. We hear a beep, we read a text, we see these things, and we respond. It's all short bursts of attention and quick thinking. And I don't think that's necessarily conducive to, say, academic scientific problem solving. Some of my colleagues ridicule me for being what they call a technophobe because I don't have a smartphone. And I'm not a technophobe. I think technology has a lot of good uses. But I think one of the downsides of technology is that it interferes with people's ability to sustain their attention for a long time on hard problems, because you're always being perturbed by the electronic influx of information from a variety of sources. And you kind of see this in what's happened to the current scientists. The scientist nowadays is not sitting at his desk thinking for four or five hours a day. I try to do that. I don't always attain that ideal, but I manage a reasonable amount of time. Most are constantly managing.
They become big project managers, as if they're in industry, because you've got lots of grants, you've got to write these reports, you've got lots of students, you've got to handle this administrative aspect and that administrative aspect. You're always on your smartphone answering email, texting with students. I don't think that's the way thoughtful science can be done, which is you really need to maintain a lot of internal focus to solve hard problems. And God knows the brain presents lots of hard problems, being one of the last undiscovered regions of scientific inquiry. We know very little about the brain relative to almost every other natural phenomenon. Michelangelo probably would not have been able to spend over three years sculpting the statue of David if there had been social media. Oh, I completely agree. But even go further — I mean, think about Einstein. How would he survive in this kind of an environment, with all of these burdens and responsibilities and electronic buzzing back and forth? I mean, you just need to sit and think. And if that's done nowadays, that's considered a sign of unproductivity, right? Because everything is stimulus driven and response driven. And there's a sense the time scale has shrunk for everything, right? Everything is rapid. Everything is instant gratification. There's no sense of delayed gratification. Now what people want to do when they write a paper is get it out on the worldwide web, get lots of hits, get lots of immediate citations. They're not writing the paper for what people used to write the paper for, which is, will it be around in 50 years? What's the indication of whether it's good work? It's not how many hits you get through some kind of advertising or branding. It's about how enduring that work is over time. Because history will get it right in terms of science. There's right and there's wrong, unlike in, say, art, where it's a matter of taste and people pass through different phases. So work that's enduring will be known. There's an interesting article by two guys, Geman and Geman — I think I sent it to you — about, what's it called, Science in the Age of Selfies. Yes. And it talks about some of these trends where scientists are basically interested in photographing themselves, right? Instead of, to use your example, slowly painting a landscape of a faraway horizon. And that's unfortunate. Another famous one is: we wanted flying cars and we got 140 characters. That's absolutely great. I mean, I don't see how this can help students write, read, or think when they're operating on Twitter. I just don't understand it. So just one of the quick arguments on its behalf is that you can get a piece of news from around the world disseminated effectively to lots of people. I agree. And that's one of the core arguments for it. And we both agree. But at the same time, a lot of the profound thinkers and executors — scientists, artists — of the days before us were able to be at complete silence and peace by themselves. And you also highlighted: can you create something that has a Lindy effect? Can it withstand the test of time? And I do think that Satoshi Nakamoto, whoever that is, with blockchain, whatever it is, ended up potentially making something that will last several decades. And so I do think that creating things like that, and focusing on creating things like that, is way more important than what some of the perverse scientific incentives are in our systems today.
I agree completely. And again, all technology has advantages. For example, the internet — I use the internet. I mean, think about it: I can get a paper from any journal, as long as we have subscriptions at the university, that I'd otherwise have to go to the library for. Maybe my library wouldn't even have it. So information flow is a good thing. The problem is, do people have the discipline to police themselves internally? Or can this thing kind of escalate out of control? And from my point of view, it's kind of escalated out of control. Some people have the discipline to police themselves, some don't. But the younger generation being thrust into it is not going to know the difference and won't even be able to police themselves. Especially because the news feed has become a slot machine. And that's one of the addictive strategies: a variable reward system. Yeah. And one of the ironies of the fact that information is so accessible is that now no one reads. I mean, you talk about scientists today versus 50 years ago. My boss, Emilio Bizzi, tremendous individual, he still reads to this day. Most people don't read journal articles. Why? There's no time. You've got to get out there. You've got to go on Twitter. You've got to promote yourself. You've got to promote your brand. You've got to do all these things. And just like people say Facebook has, ironically, made people lonelier — same thing. Great access to information, which, if used properly, is undeniably a great thing, has somehow led to people reading less. Yeah. So that's not a good outcome. And we've got to be very careful. And I think one of the phrases that can help us get there is signal to noise. We have to really be able to identify what is true knowledge. How can we get that into these feeds more effectively? All right, let's do the journey. Sure. How did you, as a child, get inspired by neuroscience and computation and then find yourself here? Well, actually, I did not intend ever on being a neuroscientist. What I intended on being was a mathematician-physicist. And in fact, when I went to college, I studied math and physics. And I enjoyed math and physics, but I realized that my area of facility and expertise is in theory. And in physics in particular, the number of jobs, or even the need for theorists, is relatively small. There aren't that many theorists necessary. Whereas when I looked towards various medical problems, including neuroscience, I thought, boy, we know very little about this. So maybe a theorist can make a big difference there. As a result, a few years after I graduated, I enrolled in an excellent program at Boston University called the Department of Cognitive and Neural Systems, which was basically a theoretical neuroscience program. So the idea was, you're not going to do experiments. You're going to figure out how to come up with frameworks that will allow us to interpret the results of these experiments, maybe codify all these experiments, to develop theories of various motor, sensory, and cognitive behaviors and circuits in the brain. And that's what I became impassioned about. And ever since, I have not regretted that decision. And then one more bit: even before physics and math, how did you get hooked? Did you have mentors, or any family influence? What was the big thing for you that sparked the interest? I don't know. I always liked math and physics ever since I was a kid, because I was amazed at the ability to predict and control how things happen in nature.
It's almost like a magical ability, to make some calculations on a piece of paper and, wow, that's what happens. That's true understanding of the system, right? And so when I realized how far we are from that with the brain — we aren't actually making any predictions about this stuff — I thought, wouldn't it be great if brain science could just be nudged a step towards these remarkable predictions you can make in physics? Yeah, with a few strokes on a piece of paper, you can say what's going to happen with great certainty. That just always amazed me. That's beautiful. OK, and then the PhD itself, the analysis of movement representation in motor cortical cells at Boston University. Sure. So what I got very interested in at the time was motor control, in part because I am a fairly avid tennis player. I've played tennis, and so: how do you learn a skill? How do you maintain a skill? Why is it that you never forget how to ride a bike? A person comes back to tennis after not playing for five years, and they seem to be able to hit the strokes pretty well. These were problems that all fascinated me in systems neuroscience and motor control. So what I specifically studied for my dissertation was something called the encoding problem. And the encoding problem is: how is information about upcoming movements represented as a series of commands in the brain? In other words, Roger Federer wants to go hit a forehand. That must trigger a cascade of events from intention at the higher levels of the frontal cortex to the motor cortical output to the spinal cord, thereby telling his muscles what steps to take to hit his forehand cross-court. And that's kind of a fascinating problem. I focused on the motor cortex, which is the region that projects to the spinal cord. And that got me interested in the differences between brains and computers, which is something that, as you know, I've been pursuing since. Because it turns out that I set out to come up with this beautiful mathematical formalism to explain: if information is represented in this nice coordinate system, this is how it's encoded in the brain; if it's represented like that, this is how it's encoded in the brain. At the end of the day, I realized early during my postdoc that that's great, but that's kind of a computer perspective. Unfortunately, the brain doesn't obey any of these rules. The question is, do these concepts that we impose from an anthropomorphic perspective even make sense? So my thesis, which I'm still very proud of, kind of changed my direction in the sense that, if the assumptions were correct, the work was nice and beautiful. The question is, are those assumptions even correct for the brain? And if not, where do you go next? And so I've been trying to follow those secondary directions: maybe it's not such a nice system, like a computer program we would write to implement these things. It's a different type of system obeying a different set of rules. And we need to focus on those. Even the example of being able to take a thought from an executive function state down to the motor cortex, down to the spinal cord, down to the muscle to hit back the ball is a very profound way of analyzing something that seems, at the surface level, to be just a simple tennis stroke. Oh, people think it's simple. The complexities are remarkable. And this is why, while there have been successes in AI, there is no AI device even close to Roger Federer in his ability to play tennis. And we can get into maybe some of that later.
But the point is, motor control is an incredibly complicated problem of unappreciated difficulty. Compared to something like image recognition. Object recognition. Those problems are simpler and lower-dimensional. And it's very interesting that the time scale to do something like object recognition seems to almost be here. It is. We are ready to do that. There are other problems we are not ready for. And God knows when we'll be ready for them. Yeah, yeah. Quickly on that point: in those milliseconds of calculating where the ball got hit back, where does the thought originate in executive functioning? Well, that's part of what makes the brain difficult to study. Even that question is, to some extent, ill-posed, because it's a highly distributed representation that's represented in parallel in multiple places. So I could give you some canonical answer where the frontal cortex has the goals, and then it streams back from there. But it's really everywhere. It's also in the cerebellum. It's also in the basal ganglia. These circuits are all interacting in parallel. So our whole serial understanding, based on the computer metaphor for how these things work, is inadequate for understanding how the brain works. It's just more complicated than that. And we don't know all the answers. Yeah, yeah. This is going to be a recurring theme. Unfortunately, regarding the brain, not knowing the answers is more the norm than the exception. It's extremely important to be humble and admit that that is the case. So let's talk about these differences between computers and brains. You really gave us a good one right there, which is that there's just so much parallel circuitry involved in understanding the stimuli of the environment and making decisions in return. So keep going on these things. You gave us transmission speed, clock rate, parallelism, all these different things. So brains are composed of neurons and synapses, right? Those are the fundamental units. And computers are, loosely speaking, constructed from chips. So let's just take the simple operating characteristics of these smaller units, neurons and chips, before we even get into higher-level systems differences, like parallelism, et cetera. What is the clock rate for a computer? That's how many gigahertz you have, right? That's how many computations can be performed per second. Roughly on the order of, say, 10 to the ninth. It's probably higher for some of the faster computers, and that just keeps going up. That's 10 to the ninth. What's the speed with which, say, neurons can compute things? Well, a neuron can only spike. That's the way information is transmitted from one neuron to another, with an electrical signal sent down its axon, where it stimulates the dendrite of the neighboring neuron. I would say about 200 spikes per second would be really high. So let's say, best-case scenario, 10 to the third is this clock rate — 10 to the third being on the order of one spike per millisecond, which is probably faster than we can do. So you have 10 to the ninth for the computer, 10 to the third for a neuron. What about signal transmission? How fast does that signal move along the axon? Well, in computers, electric signals travel at the speed of light, which is, if I recall correctly, roughly 10 to the 8 meters per second. How fast do signals move along axons? Well, roughly 10 to the second meters per second.
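To put those two gaps side by side, here is a trivial sketch using the round, order-of-magnitude numbers quoted above (illustrative figures only, not precise measurements):

```python
# Order-of-magnitude comparison from the round numbers quoted in the
# conversation (illustrative, not precise measurements).
specs = {
    "clock rate (ops or spikes per second)": {"computer": 1e9, "neuron": 1e3},
    "signal speed (meters per second)": {"computer": 1e8, "neuron": 1e2},
}

for quantity, v in specs.items():
    print(f"{quantity}: computer/neuron = {v['computer'] / v['neuron']:.0e}")
# Both gaps come out to about six orders of magnitude.
```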
So: 10 to the ninth versus 10 to the third for clock rate, 10 to the eighth versus 10 to the second for signal transmission speed. What about noise? Signal-to-noise ratio. In computers, the problem of noise is basically solved. If you run a program the same way, you always get the same answer, right? There's very little noise in these circuits; with some thermal noise, the signal-to-noise ratio is estimated at around 10 to the sixth. In neurons, it's hard to estimate the signal-to-noise ratio because we don't even really know what the signal is. But it's in the ballpark of 1, charitably 10. So the noise is very, very significant. So if you look at just these three operating characteristics — and we can discuss some of the systems-level differences as well — you have 10 to the sixth. That's the order-of-magnitude difference in these operating characteristics. So then why is the computer used as a metaphor? And there's a good reason why the computer is used as a metaphor, right? I mean, there have been all kinds of metaphors for the brain throughout history. Hydraulic engines, watches, right? Electrical telephone signals, right? The prevailing technology of the time — and many articles and books have been written about this — is generally the leading metaphor for the brain, because the brain obviously is capable of the most remarkable functions possible, so people want to match it with whatever the most capable technology is. And this dates back to the time of ancient Greece, basically. These different metaphors have evolved over time. So now we are able to get computers to do very, very remarkable things. Because we can do remarkable things with computers, and because we can do remarkable things with our brains, there's a natural tendency to say brains and computers are alike. They're not. How we're doing those things is entirely different. And then you mentioned the others — parallelism, right? Computers do not work in parallel. There have been some efforts at parallel processing, neuromorphic engineering; none of those have panned out. Standard computers operate basically in series, and very rapidly. So the motif for computing in a computer is lots of serial computations. In the brain, you've got slow, sloppy, and imprecise computing, but it's highly parallelized. And that's because each neuron in the brain connects to 10 to the fourth other neurons — roughly 10,000 other neurons. That's a rule of thumb; some have more and some have less. But the whole architecture is different. Maybe you've heard of that game, Six Degrees of Kevin Bacon. Six Degrees of Separation. Six Degrees of Separation, right, with Kevin Bacon. And so you start with one name, and then you make these associative jumps. Well, something like that actually exists in the brain. It's called a small-world architecture. And the idea is that no synapse anywhere in the brain is more than three or four synapses removed from affecting some other synapse. So these are just testaments to the degree to which there's this parallel architecture where everything is talking to everything simultaneously. But somehow, the system is organized in a way that it's able to make sense of it all and perform these remarkable functions. And the fact that a computer can also do remarkable things doesn't mean it's doing them anything like the way the brain does. In fact, again, the differences are striking.
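That small-world property is easy to demonstrate in code. Below is a minimal sketch using the networkx library's Watts-Strogatz generator — toy sizes chosen for illustration, not anatomical estimates:

```python
# Small-world connectivity (Watts-Strogatz model): even when each node
# touches only about 1% of the network, typical distances stay tiny.
# Toy parameters for illustration, not anatomical numbers.
import networkx as nx

n, k, p = 2000, 20, 0.1  # nodes, neighbors per node, rewiring probability
g = nx.watts_strogatz_graph(n, k, p, seed=0)

# Any two nodes end up only a few hops apart, despite the sparse wiring.
print(nx.average_shortest_path_length(g))  # around 3 hops on average here
```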
How did these object recognition algorithms manage to all of a sudden perform at this human level of performance? Was it because of architectural changes? No, the architecture has been the same since the early 80s. Was it because of algorithmic changes? No, the algorithms have been the same since the 70s. What's changed is processing speed and the amount of data. In particular, you've heard of Moore's Law — processing speed doubling, what is it, every two years or something? That has been maintained for a long time. This is what's going on. We are good at this: we have developed a technology to enslave electronic circuits into performing trillions and trillions of calculations in series, to perform tasks amenable to those operations. And that's why object recognition has been a problem where we've made a lot of progress. It's not going to be the same for all problems. As I said, the problem of motor control, with Roger Federer — for a variety of reasons, you have all these interaction torques between all the degrees of freedom of your body. It's a continuous problem as opposed to a discrete categorization problem. Those are hard problems. When there's a computer or a robot that can play tennis like Roger Federer, then we've solved those problems. It's going to be a long time before we ever get to that stage, because it's just a different type of intelligence. Artificial intelligence will work better for emulating some types of natural intelligence than others. Now, you talked about differences between computers and brains. Let me give you another strong one, which also harkens back to an area where I do research, as I was discussing: mnemonic techniques. And so the question is, how is information stored and processed? In a computer, information is stored as bits in these registers. I have my one, zero, one, zero, one, zero, one, zero — everything's reduced to a binary code. And I have an address by which I can connect to that register. So if I want information somewhere, I need the address, okay, and then I go to the information. So one point about that information is that it's segregated, it's localized, it's only there, okay? And the address is what connects the user to that information. The brain is entirely different, okay? All information is stored relationally, associationally, in terms of its content, and it's all in a distributed fashion, meaning that information is encoded in distributed activation patterns that share neurons across different pieces of information. It's not segregated in one register. It's spread everywhere, okay? And I don't have someone reading it with an address. It's organized by content. So in other words, you have this vast associative web inside your head that's embodying information about all your previous experiences and knowledge. And information that you learn newly will have to make sense and be referenced or grounded by this associational web. That's how it enters it. So again, this is just an entirely different paradigm of how we learn, right, and how we think, than how a computer does it. And one way that you can see this is in the use of mnemonic techniques to improve people's memory. And I don't know how much your audience is familiar with them — lots of people aren't — but these are ancient techniques dating back to Greek civilization, to a gentleman named Simonides, and actually even further back than that. And if you apply these techniques, which I'll describe very cursorily, you can remember huge pieces of information.
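The contrast with addressable registers can be made concrete in code. Here is a minimal sketch of content-addressable memory in the spirit of a Hopfield network — not a claim about how the brain or these techniques actually work, just an illustration of storage that is distributed and cued by content rather than by address:

```python
# Content-addressable (associative) memory sketch, Hopfield-style.
# All patterns are superimposed on one shared weight matrix (no separate
# registers, no addresses), and a noisy partial cue settles onto the
# closest stored pattern. Toy dimensions, illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, n_patterns = 200, 10
patterns = rng.choice([-1, 1], size=(n_patterns, n))

# Hebbian storage: every pattern is written into the same connections.
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0)

# Cue with a corrupted version of pattern 0 (30% of its bits flipped).
cue = patterns[0].copy()
cue[rng.choice(n, size=60, replace=False)] *= -1

# Recall: the network settles toward a stored attractor.
state = cue
for _ in range(10):
    state = np.sign(W @ state)
    state[state == 0] = 1

print(np.mean(state == patterns[0]))  # close to 1.0: full pattern recovered
```

A partial cue retrieving a whole stored pattern is, loosely, the same move a mnemonic makes: one fragment of an association pulls up the rest.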
Now, why was this important? Well, in the old days, papyrus was expensive. They didn't have computers. There was no printing press. So you'd want to learn vast pieces of information. You'd want to memorize your speeches, et cetera. People practiced at this, okay? And all of these techniques were based on association. You would take some new information and you'd relate it to some old information. And then you'd concoct some kind of story. And voila — the magic of these techniques was that it became really easy to remember lots of things. This is now a lost art. We completely ignore memory because we have these electronic devices. Kids are not even learning the multiplication tables in school systems. Or handwriting. Or handwriting, right? But think about what an incredible mistake this could be if these memory techniques embody a more general associative computing faculty that is being ignored. In other words, if association is a fundamental paradigm of how we process information — i.e., relating new information to old information and experience and putting it in context — then it helps us not only memorize things, but it helps us with pattern formation, concept formation, even creativity, linking different things to each other, right? It's all part of general neural computing, which we don't really understand. And if we all of a sudden omit one major component of it, memory, because the way memory is done in computers is so easy that we have an infinite amount of memory, it would be like not running on the treadmill, not staying in shape, because I can take a car or a train to wherever I wanna go, right? That wouldn't make sense. You gotta exercise the fundamental faculties of your brain. And if association is an important faculty, and memory is only one instantiation of it, and you're ignoring it, that's a problem. But you only realize that if you try to think carefully, in a nuanced fashion, about what the differences are between how the brain works and how a computer works. And they are profound. The differences are far greater than the similarities. The only similarity is that they're both capable of remarkable feats. Yeah, that was very eloquently explained. And I think there are potentially some ways that we can break this down to continue to make it really relatable. So when a child is first recognizing and being taught what an object is and what we do with that object — like these chairs: what we do is we sit in these chairs, and what they are called is a chair — then that practice happens, and then it's said in different languages as well. Sure. Okay, so we're building an associative web. Exactly. Yes, of what we do with the object, what the object's called. What it means to us, how it feels to us, the purpose it serves for us. This is basically the grounding problem, right? I mean, the object doesn't exist in a vacuum; it exists relative to our lives and how we use it, right? And that grounding problem has to be solved for biological organisms, but you don't need to ground anything in a computer. It's not grounded. So could it potentially be that the amount of time it takes a child to learn the chair — the associative web of information about the chair — is just a lot longer than it would be for a computer? And is there a way to ground the computer in meaningful reality? For various technical reasons, I don't think it's possible. It's a little bit of a theoretical discussion whether it's possible to ground the computer, but this grounding happens naturally in humans, or any biological organism, for the variety of reasons I said.
One is you've got this small-world architecture in your brain. So the sensory areas are speaking to the goal areas, and they're speaking to the motor areas. They are all in communication through these cyclic, redundant loops. So everything is seeing everything, on multiple levels and scales. As a result of that, you are naturally learning the object at multiple levels. Now, a computer — how are you gonna give it drive? How are you gonna give it motivation? Well, you could program it to do this, et cetera. That's fine, but it's not really solving the grounding problem, in the sense that it doesn't have an innate, intrinsic, complicated purpose in the world to which it's able to naturally relate all phenomena on multiple levels and scales. And that's the unique feature of animal intelligence, in particular human intelligence. You do these things naturally. Yeah. And to be able to build an associative web for a robot that is trying to compete against the world's best tennis players is much harder right now than it is to build the perception systems of an autonomous vehicle, which already has so many cameras that we don't have. We only get to look forward, and we have to turn around — versus the slurping up of data that it is able to do. So we're already seeing sophistication there. Yeah, absolutely. Yeah, speak about this for us. Well, the case of object recognition — and again, I don't wanna get too technical on it — this is a problem that essentially can be overcome with big data and Moore's Law. Now, I don't think self-driving cars are as near in our future as people say, because I just think they don't really generalize. So all of a sudden it's a cloudy, rainy day, and something bad's gonna happen, all right? You can train it on all the data you want; when you put it out in the real world, something different's gonna happen, and that's gonna cause consequences. But that being said, certainly the performance of object recognition systems clearly rivals human performance, and that just happens to be a problem where big data and computing power can overcome it. And for a variety of reasons, motor control is a harder problem, in part because I have many degrees of freedom in my body, and they all interact in highly complex, non-linear ways. And so if you're gonna hit a ground stroke and move your arm, the degrees of freedom and the continuous signal necessary are in such a high-dimensional space that I can't really overcome it with a lot of training data and Moore's Law — that's computing power. I can't throw training data and computing power at every problem. Some problems are more amenable to that than others, and this unfortunately is an aspect of artificial intelligence that's completely neglected by the AI community: they just wanna keep throwing the algorithms behind current success stories at future problems without thinking, well, maybe I need to match the problem to the architecture.
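A back-of-the-envelope way to see why the dimensionality argument bites — sketched here with made-up numbers: discretize each degree of freedom into just ten settings and count the states a brute-force learner would have to cover.

```python
# Toy illustration of the curse of dimensionality in motor control:
# with only 10 discrete settings per joint, the state count explodes
# as degrees of freedom grow. The joint counts here are illustrative.
settings_per_joint = 10
for dof in [1, 3, 7, 30]:
    print(f"{dof:2d} degrees of freedom -> {settings_per_joint ** dof:.1e} states")
# A whole-body movement with ~30 degrees of freedom dwarfs any training
# set, and real control is continuous, with interaction torques coupling
# the joints, so the discrete count understates the difficulty.
```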
I wanna see what your thoughts are on this, to see if we can continue with this relatable example. When a child's born, we're working on effectively helping that child build a mental map, an associative web of its reality, that allows it to find meaning or purpose in the world and have its basic physiological needs met. Absolutely. So that is kind of the general way that we're trying to help with this mental model, okay. All right, and it seems like everything right now with the artificial intelligences are very narrow intelligences. Highly constrained. Highly constrained intelligences. And I really enjoyed all of the ways that you're framing this: the associative web versus these narrow intelligences, and also just neurons versus computer chips. There's gonna be some sort of weird Venn diagram, plus new things that we have yet to really think about. Yeah. But no one's in that space right now, because everyone's dominated by what's called deep nets, which is the current architecture that's successful on a few problems. And no one's actually thinking logically about, well, how far can this go? As you probably know, this is the third AI revolution. The first one started in '57, '58 with a guy named Rosenblatt, who did excellent work; then Minsky and Papert, both of them at MIT, showed that these networks could only solve certain kinds of problems. The next revolution came when Rumelhart, Hinton, McClelland, Williams, and a bunch of people at Stanford came up with parallel distributed processing. They had backprop — the algorithm and architecture behind today's AI. That work went fairly far, and then, boom, support vector machines, which was a different, simpler algorithm, overcame them. So everyone forgot about them. But Moore's Law kept cranking along, and eventually the guy you mentioned, the Russian guy in Hinton's lab, I can't remember his name, came out with that paper in 2012. Oh, Ilya. Yeah. Eventually the computing power and the data got big enough that all of a sudden, wow, you can do object recognition at a level that rivals humans, and support vector machines fell by the wayside. What's going to happen now, right? I mean, these cycles repeat. If you don't understand history, you're doomed to repeat it. There are gonna be some limitations to these problems that are discovered. It's better that we discover them sooner rather than later. So great — in some domains they work, we use them, we enrich our society and our community through their use. And you were literally just teaching a class where I got the privilege to sit down and see your students debating what the limitations of deep nets are. And that was so interesting, that you're already trying to push the edge with different tools. Yeah. Because what I want is for people to think about different paradigms for understanding the brain. This is still the unexplored scientific country. We do not know how it works. If we just relax our guard and say, well, deep nets do certain things well, we're not gonna explore the sloppy, slow modes of parallel computing that somehow, miraculously, work — our brains work, animal brains work. We don't know why they work so well, but there's something about them that is truly exceptional. That's what we need to discover. And to just take the algorithm and architecture that's most relatable to our current computing device, the computer, and say, well, maybe that's how the brain works — that's naive and ignorant, and it's not true. Coming full circle back to the constant age of stimulus versus the longer, slower, focused periods of creative endeavoring, creative exploration: what is that process that's occurring in the mind? It feels like there's a hijacking of the short term. I agree.
So essentially, if you buy into this framework of association being an important part of our computing, anything we can do to enhance our associative infrastructure will help our brains do their thing, right? As I said, with mnemonic techniques, people put a lot of their old information in order, and then when they learn new information, they mesh the old information and the new information, so they kind of expand the web. And sleep is so good for that. Well, sleep is very important for everything, right? Without sleep, you're just not gonna have healthy brain function. Absolutely, it helps consolidate, it helps everything. Now, if someone is just basically stimulus-driven, they're not gonna be forming certain high-order associations. They're forming low-level, peripheral associations — kind of stimulus-response type behavior. We wanna go deeper than that. We wanna create associative infrastructure across radically different parts of our brain. There's this one mnemonic expert, this guy Jim Karol, who is a good friend of mine — the man who has the greatest long-term memory in the world, as far as we know — and he just knows a ridiculous amount of information. They did an fMRI on his brain, and he's got these really thick tracts communicating information from one part to the other. I mean, it's not a surprise, right? If you spend a lot of time building associations, you're gonna build your associative infrastructure. It's hard to do that if you're basically doing stimulus-response — this text, this thing, this button, right? I mean, you gotta think deeply to create new, novel, meaningful associations. Yes. Yes, and so these meaningful associations, I think, are potentially some of the most important ways to push the edge of knowledge in different fields and connect different fields, but also to figure out the associations between the existing fields and how to make them more relatable for children, to rocket them out faster to the edge of neuroscience or to the edge of a different field. Because it's this interplay between pushing the edge further and connecting different vertices on the edge — and a really strong synthesizer is a person that knows metaphors and analogies. Absolutely. And the more information you put in your brain, the more likely you are to create the right connections for new creative insights and discoveries, right? This is why you can't defer all of your memory to Google and to electronic devices. You need to have ideas in your head so you're constantly hypothesis testing, you're comparing things. If you wanna be a scientist, you shouldn't just know your science, you should know the related sciences, so that you can then go to the interdisciplinary boundaries and make connections: what principles carry over from here to there? One of the best aspects of my training for computational neuroscience was the fact that I took classes not just in physics and math, but in biology and chemistry. So I had a broad scientific background that prepared me to think about the brain in scientific terms, as opposed to a lot of people who just think of it in engineering and computer science terms, which is, in my opinion, obviously not the apt metaphor. And now I wonder about these techniques that are really old, that help with mnemonics. What I wanna know is where you sit on memory palaces, on meditation, on long periods of focus. So teach us about where you sit on these ancient techniques and how they can help us.
So the way the ancient memory techniques work is very simple. It's the method of loci, the memory palace, et cetera, and I'll just describe it very quickly. You have some part of your personal experience — for example, a set of locations on your route to school, your journey to school, or the rooms of the house you grew up in, or anything like that — where you have a serial ordering of information in the form of a spatial map. That's kind of your old skeleton structure of information. Now I wanna learn, say, all the capitals of the world. Then what you do is you take the first capital, you put it in the first room, you tell some story that helps you remember it — that's an associative technique — and then you go to the second one, the third one, et cetera. The techniques are obviously much more elaborate than that, and they involve all kinds of different types of association, but that's the basic structure. You have old information, you have new information, and you merge the two using the faculty of association. And what's remarkable about this is that your memory capacity goes up a ridiculous amount. There's this old paper about the magic number seven — you may have heard of it, it's one of the most cited papers in all of psychology. It was written a little bit tongue-in-cheek by a Harvard psychologist. What? The working memory one. Yeah, working memory: seven, right? Seven plus or minus two. Actually, if you do these mnemonic techniques, you'll be up to 50 in three months with 20 minutes a day. And this is documented. This is why these techniques were so popular in ancient Greece and ancient Rome, in societies before there was a means to record information. So one perspective would be, well, now we can record information, so I don't need to do this stuff anymore. And that's true — in a certain sense, why do I go to the gym if I've got a bike or a car? But the way that I look at it is, if forming these associations exercises a more general associative faculty that's useful for pattern recognition, concept formation, creativity, et cetera, then it's good exercise for your mind. It's exercise — not that there aren't other types; it's not the only type of exercise. I think this is one of multiple types of exercise. And another thing it does, which I think is also important: focus, obviously, is a big problem. We live in an ADD society, everything stimulus-driven, you're responding. The use of these mnemonic techniques requires internal introspection to an extent that other forms of memorization, or just accessing things online, don't. And it's kind of important, at least I believe, for proper intellectual development that you sometimes turn your focus inward and concentrate. And I think meditation does the same thing. I'm in no way an expert on meditation at all, but my understanding is that it allows you to focus longer and to be in your mind, and not of the external sensory rush that's constantly fluctuating. And I think that's good for thinking and creating and ideating in general. Yeah, a really relatable example, I think, is when we prepare for a presentation. Because we so often take our computer or our phone, or we write notes on our hand, or whatever we do. Versus — I remember, for the TEDx San Francisco talk that I gave last year, building out one of the most beautiful imaginative structures for me to be able to go and remember the pillars of the talk, which would then cascade all of the sub-points that I would already know.
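As a toy illustration of the skeleton just described — a familiar ordered route as the old structure, new items bound to each stop by a vivid story — here is a sketch in code; the route, items, and stories are all invented for the example:

```python
# Memory palace (method of loci) as a data structure: a well-known serial
# route provides the old skeleton; each new item is attached to a stop
# with a vivid linking image. Route, items, and stories are invented.
route = ["front door", "hallway", "kitchen", "living room"]
capitals = ["Paris", "Nairobi", "Lima", "Ottawa"]

palace = {
    stop: f"Picture {item} bursting out of the {stop}."
    for stop, item in zip(route, capitals)
}

# Recall: walk the familiar route in order; each stop cues its story,
# and the story cues the item.
for stop in route:
    print(f"{stop}: {palace[stop]}")
```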
And so being able to associate senses is one of the big things. And I like how you gave the example of childhood, because those are such early-formed, vivid memories — if you put the points on your walk from the bedroom to the living room. Exactly. And when you add in these massive exaggerations and senses — that stop is where the smell of the apple pie comes in, and that's where the massive purple elephants were. And so these are the ways to retain. And I just love helping with memory. I myself want to, and I'm going to, continue working on it at a deeper level, because it just seems to be so critical. But I wonder — Oh, by the way. Yeah, yeah. This is not new. I mean, Frances Yates in the '60s wrote the book The Art of Memory, and you've heard of Moonwalking with Einstein by Joshua Foer. I mean, this stuff is literally 2,500 years old at least, and it probably dates back to the dawn of civilization. There's nothing new about doing this. Especially when the resources that we wanted to put the information on were so limited. Exactly. You had to etch it into stone — like, holy cow, now it's just pressing keys on a keyboard or speaking into a device or recording it. Exactly, yeah. Completely, hugely different. I'm curious as to the trade-off that occurs, the opportunity cost. It depends on the weight — what is the weight of the information that you want to store? Because sometimes, given the weight of the information, maybe I want to put it into the appendage of the technological device. And maybe that's one of those reasons — because it would probably be more difficult, for every single one of these shows, if I spent my time building a memory palace for every single point that we were gonna hit, versus having it on a whiteboard or a computer. I got you. I mean, I think the point is you want the most relevant key points to be put in memory. You can't do it all. I'm not saying we shouldn't have books or diaries or pieces of paper, because there's so much information you can't retain virtually everything. So this is not a diatribe against all storage media. In fact, I gave this interview for, what was it, the German Der Spiegel, I think. Der Spiegel. So the whole article was in German. And at the end, I wasn't sure what it said. So I typed it into Google, and it said something like, I am fighting against two of the greatest inventions of all time, the computer and the printing press. And I'm not fighting against the computer or the printing press. I'm acknowledging that they're good and they're important. I'm just saying, just as you go to the gym in order to exercise, there are certain mental exercises involving memory that are worthwhile. That's it. Exactly. That's it. So you can't put everything in your head, and you shouldn't even try. But you shouldn't put nothing in your head either. So the question is, where is the balance, and how far has society tipped? Not for us, our generation — I'm older than you — but I'm really concerned for the younger generation that doesn't have anything to memorize when they go to school. The ones that are growing up with the smartphones. It's just like the example of giving a presentation, or telling a story — let's say you literally just want to tell one person a story. Build a memory palace of that story, and then you'll see how much better you can retain it and speak about it. Similar with meditation: just go and try it, right? Just go and try it.
Just try it. And those are old things too, right? Thousands of years old. These are not novel, risky techniques of unknown benefit. What's happening to the neural architecture of the children that are born with smartphones? I mean, that's a complicated question. I don't know. I just think, behaviorally, there's a bad trend towards attention deficits when everything is stimulus-driven, right? When kids don't have time to think, to create by themselves, and everything is a video game or a television show or a text or whatever. At some level, your associative infrastructure isn't as deep. It's more peripheral. It's at the periphery where it's being built, but you're not making all the kinds of deep cross-connections that you want to make. And I'm not going to pretend I understand how this all works at the systems neuroscience level, because nobody does, but it does seem intuitive to me that the denser you can build that web in a meaningful way, the better off you'll be. And I guess I'd put it like this: is a child gonna become a great mathematician if they didn't memorize their times tables when they were six? I would say no. I'd rather have the kid learn the times tables at the age of six. I think he's more likely to go on to be a great mathematician because that's in his head, in memory. And then he can put other things in memory. And eventually there's enough stuff in memory that he can start doing hypothesis testing. You gotta put a certain amount of stuff in there to get good results out. And my concern is we're not putting enough in. And an educator's response might be, well, we need to teach critical thinking skills instead. Well, first of all, how well is that working? But second of all, that assumes that association is not an important part of critical thinking, which of course it is. What is critical thinking, right? How would they define it? That idea, remember, all comes from the computer architecture: computing and memory are separate. They're not in our brains, okay? The brain is not a von Neumann architecture. Computing and memory are not separate. They're integrated, they're enmeshed. Memory is a compute, I like to tell my class. And that perplexes them a little, because they're not used to thinking of memory as a dynamical entity, because they're not used to thinking of content-addressable, associational memory, even though we kinda know that's the way it is from a theoretical standpoint. So if memory is a compute, and you want a lot of associations, you don't want your kid growing up without any of them. And what would you say right now, from the conversation you were having with your students — what are people trying to figure out that is not deep nets? Where can we take our flashlight and look? What does it seem like is gonna push the bounds of AI computation? So there are a lot of areas. One is unsupervised learning. The idea is that today's systems are very unrealistic in that you have a very specific error signal that's completely biologically implausible. But we manage to self-organize information kinda naturally. A kid doesn't necessarily need a teacher. He figures things out by touching, right? By going around feeling, looking at things from different perspectives, sitting on things, all these different things. So unsupervised learning — there are a variety of techniques out there, and they're starting to come more into prominence: self-organizing feature maps, et cetera.
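For a sense of what learning without a teacher can look like, here is a minimal sketch of a one-dimensional Kohonen self-organizing feature map — toy sizes and schedules, purely illustrative. There are no labels and no externally supplied error signal; units compete for inputs and drag their neighbors along, and a topographic map of the input space emerges on its own:

```python
# Minimal 1-D Kohonen self-organizing feature map: unsupervised,
# self-organized learning. No labels, no teacher-provided error signal.
# Toy sizes and schedules, purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_units, dim, steps = 20, 2, 2000
weights = rng.random((n_units, dim))  # random initial "map"

for t in range(steps):
    x = rng.random(dim)  # an unlabeled input
    winner = np.argmin(np.sum((weights - x) ** 2, axis=1))
    lr = 0.5 * (1 - t / steps)  # decaying learning rate
    sigma = max(3.0 * (1 - t / steps), 0.5)  # shrinking neighborhood
    dist = np.abs(np.arange(n_units) - winner)
    h = np.exp(-(dist ** 2) / (2 * sigma ** 2))
    weights += lr * h[:, None] * (x - weights)  # winner and neighbors move

# Neighboring units now respond to neighboring regions of input space:
# the map's coordinates vary smoothly from one unit to the next.
print(weights.round(2))
```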
There's an old literature on this that's very important and that people need to pay attention to, because ultimately we can't have this teacher — this tight, supervised learning system — be the model for how we learn, because kids don't need that tight supervision in order to learn. They learn on their own from exploration, and that's a self-organized type of learning. Also, I think the dynamical systems approach, which is ignored. The dynamical systems approach is essentially more in line with other types of sciences, less along the lines of Turing computation, where a whole bunch of serial computations are made. Instead, let's think about a dynamical system that's largely connected in parallel but does something useful in a short time, even under noisy conditions. So there are a couple of other areas, all of which are challenging. And also complex systems theory, right? The brain is the ultimate complex system — a complex system being defined as one where the whole is more than the sum of its parts, existing at multiple spatial and temporal scales, et cetera, et cetera. It's hard math to figure out how these things work, but that's another area of math that I encourage my class to look into. The whole point, though, is that these are all unexplored areas, and none of them has as of yet proven that useful technologically. So what dominates is deep nets, because right now hundreds of billions of dollars are being spent on them for a few technological applications. So that's a great imbalance. That will ultimately be corrected — again, these boom and bust cycles, and it will self-correct — but the question is how much financial and human capital will be misspent in the process. And it also helps so much that when you are with your students, you enable this nuanced discourse to actually occur. You even said there's a lack of comity, with people sometimes feeling challenged. We have a sage on the stage, and we don't feel comfortable raising our hand and interjecting, giving a little bit of pushback, arguing a little bit in a friendly, nuanced way, with love and compassion, but wanting to get to the best ideas. And so I want you to speak about what that's been like as a research scientist and professor in academia. How do you feel about that? Well, I feel science has changed for the worse along those lines, but for understandable reasons: there's a lot more funding pressure on science. I mean, you're probably well aware of this. I'm sure all the scientists you speak to tell you about this. So when funding gets tight, obviously that can lead to certain negative tendencies, including the fact that scientists then have a stronger drive to brand themselves in order to stand out above the fray. So I mentioned that article, Science in the Age of Selfies, and that's really what goes on a little bit. Again, for understandable reasons. You gotta survive. You gotta make a living. And so there's a lot less, again, interconnection — reaching out to others, questioning others. I mean, just take — and this is not a good example in the sense of how extreme it is — but in the old days, Bohr and Einstein had these debates on quantum mechanics, and they had them all the time. They fought, and they were great friends. They fought, they fought, they fought, they discussed. Some people could say of those debates, you know, what's the productivity? Nowadays it'd be like, well, how many papers are resulting from the debates? But of course, those debates were great science.
They didn't need to worry about that because they already had plenty of papers. But the point is, intellectual rumination, ideation — there's not enough time for it, because that will by definition cut your short-term productivity as far as publishable items. And if it's a dog-eat-dog world where you're competing for grants that have a success rate of five to ten percent, what are you gonna do? So scientists are in a very difficult situation these days, because the incentive structure is completely out of whack. Yeah, that's so critical — every time we try to have this intellectual nuance, rumination, reflection, discourse, we're not being productive during that time, not building our brand, and not competing in the attention economy at that moment. Exactly. And this is one of the benefits of being able to do things like this, I think: we get to have this nuanced discourse and potentially inspire and engage other people to care about it, and that kind of brings those two things a little bit together. So hopefully there's maybe more of these — here we are, right, eye to eye, human to human, while other people can watch from around the world who maybe don't get the opportunity to sit down with an MIT research scientist. I encourage my class — I mean, I deliberately tell my class that I want to do only 10% of the speaking. And they're shocked by this, because it's just not the way the system works. They're supposed to go, retain information, do things — no, I don't want them to do that. I want them to think. I want them to challenge me. I want them to argue with me. Yeah, yeah, exactly. I love that. And that way you can get the young 20- and 30-year-old minds, even younger, 15 sometimes, to throw you a good, interesting curveball, and they can help teach you. Exactly. If they really want to think about these things, I'm sure they have plenty to offer. I have been taught things by my students, and I will continue to be taught things by the students, and I hope the more they teach me, the better off we both are. And on the way out, this question about geopolitics, which you mentioned at the beginning in your state-of-the-world answer. It seems as though there's a period of deep spiritual actualization and transcendence that we need to go through to understand more of the sheer miracle it is that we all share this rock orbiting a star together, and that we are so privileged that 100 billion humans before us got us all of these ubiquities and the basic physiological needs that we have met. So on that geopolitical level, is there a thought that comes to mind to assist with that spiritual growth that we all need? I think the hard part is this concept, which is very non-intuitive, that in molding our environment to serve, say, our short-term needs, we can actually do long-term damage to the whole planet. And it's understandable why we don't think this way, because for the entirety of human civilization, we couldn't affect our planet adversely, right? When we were just hunters and gatherers, there was nothing we could really do to adversely affect our planet. So that encourages unbridled ingenuity, where you do whatever it takes to make the quality of your life better. And that's a great system, and that's worked. But now there are these externalities arising. Even economists acknowledge these externalities: pollution, global warming, all these things. These are hard to wrap our heads around unless we understand them.
I am hopeful that the next generation, the younger generation, will understand them much better, because it's being inculcated in the school curricula. My only concern is, will it be too late — or how late, how much damage will be done? But ultimately we have to get our heads wrapped around the idea that we can actually adversely affect the planet by doing something, trying to be ingenious at a local scale, that never caused any harm before. But it does now. That's just the way it's worked. We have to adjust, we have to adapt. And if we don't adapt, we're gonna pay a big price. But that is a very, very difficult concept. Yeah. And on the way out, the last two questions that we like asking our guests. First question is, are we in a simulation? No, to me that's kind of a ridiculous idea. It's one of the neuro-myths that I like to dispel — I have a little neuro-myths lecture in my class, and this is one of them, like the idea that you can download your brain to a computer. I don't know where this stuff comes from, but it's really hard to understand why people would say these things. As for downloading the brain to a computer — surprisingly, I actually gave that question to my class last year. I haven't yet asked this year's class. And more than half of them thought that they could download it in the future, that it's in theory possible to do this. And this is just a fundamental misunderstanding of, kind of, the grounding problem, right? I mean, I can do a computer simulation of a lot of different things, right? And what I'm doing is focusing on one or two features of that thing. It doesn't make it isomorphic to the thing. If I simulate, right, a water molecule on my computer, is it wet? Am I gonna drink it? No. What you're doing is simulating various features of it, various aspects of it, right? But it's funny, you know — people don't ask that question. They're like, maybe we can put our brain on a computer. Well, you can't even put a molecule of water onto a computer, right? And then there are all kinds of grounding issues, and parallel systems versus discrete systems, and Turing machines, and stuff like that. So I mean, it's wrong on multiple levels. But I guess the problem there is that sci-fi culture is so prevalent that people don't necessarily go to the scientists and ask them to think about these things carefully. They just say, oh, isn't this a cool movie? And it's the same with chips in the brain, right? Everyone's putting a chip in the brain. Chips in the brain don't really work very well. It's gonna be a long time, if ever, before chips in the brain work. So you gotta think about these things carefully, right? And not just jump to conclusions because it's cool, and it's in that science fiction movie that I enjoyed. Yeah, yeah. And really think deeply about ethical considerations, the democratization of the technology, the accessibility of it for many other people. Absolutely, all that's important. But that, people do know about. They think about those things, actually, which is good. Well, the big imagination's very important too, because maybe there are potentially ways, a very long time from now, to be able to do this. Oh, I'm all for that. But people need to think about these things carefully, instead of just kind of going off the handle and saying, wow, this is possible, I'm sure I can do it. No — this is a very deep, very nuanced, very subtle question. You need to think carefully about it.
Yeah — so if you were to simulate a water molecule in a world, how could you make that molecule feel wet to the animals in that world? So that's a great question. And how about the last question: what is the most beautiful thing in the world? Well, I think there's a certain elegance and symmetry in the laws that govern the universe, and at multiple scales — not just the scale of physics, but chemistry and biology. Somehow, elegant, codifying principles filter down at almost all levels and scales that allow us to comprehend the world. You know the old quote, the most incomprehensible thing about the world is that it's so comprehensible — I believe it's attributed to Einstein. I mean, it's an amazing aspect of existence that there's so much of it that we can understand. Even though there's lots we don't understand, we certainly understand more than we almost have a right to think we could ever understand. So that's always boggled my mind and probably will continue to boggle it. Yeah, this has been such an honor. Thank you, Robert, for coming on the show and teaching us. Thank you very much. We really appreciate it as well. I want everyone to share more ideas around the differences between computers and brains, around neuroscience and consciousness. Definitely get talking more about the nuance of these concepts with your friends, your family, your coworkers, and online. Get chatting about it and sharing about it. Check out Robert's link below in the bio as well. And also support the artists, the entrepreneurs, and the organizations around the world that you believe in. Simulation's links are below. Support us, help us continue doing cool things like coming to MIT and doing interviews. And go and build the future, everyone. Manifest your dreams into the world. Thank you so much for tuning in, and we will see you soon. Peace.