We focused on making neural networks more biologically accurate. One of the specific projects I worked on was implementing latency between neurons and implementing neuronal spiking, because brain neurons really don't have a gradient of values between, say, 0 and 1. They're either on or off. The research to make these kinds of neural networks, where neurons are either on or off, is very new, and I was very happy to be at the cutting edge of this kind of research. My project right now is making these kinds of sensors for oxytocin. We have very good sensors for vasopressin, and now we want to make the same kind of sensors for oxytocin. This will be very useful for seeing where these neurotransmitters actually move in the brain, and it will allow us to study the brain more in depth. You know, someone's opinion may contradict yours. Where's my friend Allen? It's all about your perspective. Who are we, and what is the nature of this reality? One, two, three, four, three, two, one. Ni hao, everybody. Welcome to Simulation. I'm your host, Allen Saakyan. We're on site in beautiful Beijing, China, at the Peking University School of Life Sciences, and we're going to be talking with Anthony Simonoff. Hello. Nice to meet you again, I guess. Thank you very much for having me on your show. Very excited to be here. You are so welcome, brother. I'm so pumped for this. Absolutely. It's going to be great. It's great to meet you. It's so cool knowing that you came here from Chicago over the summer to do research at PKU, and it was great to meet you. So I'm pumped for this convo. For those that don't know, here's his background: he's a researcher in the Li Lab at the Peking University School of Life Sciences, focused on engineering fluorescent sensors for neurotransmitters. You can find links in the bio below to yulonglilab.org as well as his LinkedIn profile. All right, let's start things off by asking you one of our favorite questions.
What are your thoughts on the direction of our world? This is, as I'm sure you know, a very broad topic, so I'll touch on a few things I think about. For me, the world's direction is the direction that people are giving to it. Excuse me. What that means is: all of us collectively, where are we going? Well, I don't know if anyone really knows. There's the scientific work we're doing, especially for me, because I'm a researcher, and there are also the cultural and musical and all the other kinds of experiences that people are living and constructing on a daily basis, which I hope are put down for a good cause and will allow people to live happier, healthier, better lives in the future. So, long story short, I think the direction of the world is toward a more prosperous, more peaceful kind of society, where people just live happier, more fulfilling lives. What do you think is the core skill we should embody to make sure it's peaceful and harmonious and we move forward? I don't think there's one specific skill where we say, okay, everyone, you need to learn how to be a farmer, and then everything will be good. There's not one specific skill, right? At least be nice, be kind. Maybe be loving. Yeah, absolutely. Of course, disagreements have always existed in human history, but it's how we overcome those disagreements, and what we do to peacefully establish some kind of resolution, that will really make a difference, I feel, in the future. Let's hit your journey. So you were born in Chicago. Yes, sir. And you picked up piano when you were like five, is that right? Yeah, right around then. When I was four, technically. I've been playing ever since. It's been an amazing journey for the past, what, 14 years now? Yeah, 14 years of piano playing since you were four years old. Okay.
And then, yeah, how does this all end up happening, that you pick up music when you're young, then you pick up your interest in science and computer science? So tell us about who you were when you were young and how you picked those things up. Well, for piano it was mostly my parents, because no little toddler wants to sit and play piano for hours a day. So I have to credit my parents, and thank them for introducing me to this world of music. As for neuroscience and machine learning and the other kinds of stuff I've done, I've always been interested in the brain. I read a great book called Gödel, Escher, Bach by Douglas Hofstadter. It really opened my eyes to the possibilities of exploring the brain and exploring the computer science aspects of it. So over the course of my childhood I would read science fiction novels to see where artificial intelligence and similar technologies could go. And then once I got to college, I was able to actually work in a lab making models of the human brain, which was very exciting. So then when you got to college, you picked neuroscience? Yeah, the University of Chicago unfortunately does not have an artificial intelligence major, so I chose the next best thing, which was to be a neuroscience major while focusing on computer science in my free time. Mostly that involved working in a lab. I worked in David Freedman's computational neuroscience lab, where we focused on making neural networks more biologically accurate. One of the specific projects I worked on was implementing latency between neurons and implementing neuronal spiking, which is really what happens in the brain: brain neurons don't have a gradient of values between, say, 0 and 1. They're either on or off. The research to make these kinds of neural networks, where neurons are either on or off, is very new, and I was very happy to be at the cutting edge of this kind of research.
All right, let's unpack this more. This is an interesting aspect of our conversation. Let's start by explaining: what do you mean, neural networks and biology? How do you see those things? What is their relationship? Well, they're called neural networks because they're based on human brains. Their purpose is to learn. They have singular units called neurons. And they're very similar to the brain in a lot of ways, and very different in even more. They're similar because they have neurons, the neurons are connected, and we can model some areas of the brain relatively well. Unfortunately, we don't understand the brain well enough to model all of it yet. And that's about where the similarities to the biological side end, I feel. Neural networks can process way more data than a brain can, because they're built in computers, and as computing power grows, they will only grow in size, strength, and complexity. This is where you get the fears of AI taking over the world, from The Matrix to Elon Musk's recent tweets. Once you have a superintelligence that can computationally process way more data than a human can, it could take over the world. We are nowhere near there yet, but it's a possibility. All right, so let's revisit the structure of this. So we artificially create a neural network in a computer? Yes. Okay. So these are models whose backbone is a lot of linear algebra; it's just pure math. The most stereotypical example is that you feed it a handwritten number, and the network outputs whether it thinks the number is zero through nine, which is actually way harder to do programmatically than you'd expect, which is why it's only been a recent development. Okay, so walk us through that example.
So when I write the number two on a little sheet of paper, and I take an image of it and feed it through the neural network, the first thing it's doing is scanning it pixel by pixel? Like, what is it doing? And how is it creating an idea of what the answer is? Yeah, so in essence, that's what's happening. You're taking this little picture, I think it's 16 by 16 pixels, I could be off, and the neural network has what are called hidden layers. They're layers where you don't really know what they're doing. You might hypothetically think that, oh, if we have three of these hidden layers, the first one will look at where the pixels are arranged on the screen, the second one will look for maybe a curve at the top or a straight line at the bottom, and the third layer will combine all of this information together and output what it sees. But we don't actually know if this is what's happening. Obviously you can analyze it, but just by looking at the neural network itself, we have no idea how it makes these calculations. So, yes, to answer your question, it scans it pixel by pixel, or, sorry, it stores all of these pixels in some kind of memory format, and then it does a bunch of calculations on those pixels with certain weights and biases within these hidden layers, and then outputs a probability for the answer. So it will say, okay, I think it's a one with maybe 1% probability, and a zero with 2% probability, but I think it's a two with 97% probability. There's a 97% chance this is a two. And that's how you train the neural network to give you a good answer. Hmm. So the hidden layers are doing some sort of analysis on the image. Yes. And then you have to train those hidden layers to do these specific aspects of the analysis. Yeah. So is it like, if curve, then this? Stuff like that? What is it?
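The picture-to-probabilities pipeline described here can be sketched in a few lines of NumPy. This is an illustrative toy, not any lab's actual code: the 28-by-28 input size is the standard MNIST digit format (the 16-by-16 figure above is a from-memory guess), and the layer sizes and random weights are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # turn raw scores into probabilities; subtract max for numerical stability
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, params):
    """Feed a flattened image through the hidden layers, return class probabilities."""
    h = x
    for W, b in params[:-1]:
        h = np.tanh(W @ h + b)      # hidden layer: weighted sum + bias, then nonlinearity
    W, b = params[-1]
    return softmax(W @ h + b)       # output layer: probabilities over digits 0-9

# toy sizes: 28*28 input pixels, two hidden layers, 10 digit classes
sizes = [28 * 28, 64, 32, 10]
params = [(rng.normal(0, 0.1, (n_out, n_in)), np.zeros(n_out))
          for n_in, n_out in zip(sizes[:-1], sizes[1:])]

probs = forward(rng.random(28 * 28), params)   # one probability per digit, summing to 1
```

With trained rather than random weights, the largest entry of `probs` would be the network's best guess, e.g. a value like 0.97 at index 2 for a handwritten two.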
What is it like, then? Not really. It's not "if curve, then this." The hidden layers are called hidden because they do whatever they want. And the way you train these hidden layers is, okay, you do it at the end. Because those if-then statements are hard. Hard-coded. Yeah, hard-coded. And you have to be really flexible with these hidden layers, because there are going to be tons of examples of what that curve can look like. So it's not going to be just an if-then. Yeah. So the way it works is an algorithm called backpropagation, where, say you write a two and you feed it to the network, and the network says it's a five. And you're like, well, no, this is a two, and I want you to lower this error. The way this error lowering works is there's a gradient; it does what's called gradient descent. It goes down this gradient of the error, and eventually you feed it enough numbers and it goes low enough, like down a parabola to the low point, where the probabilities have little error. And that's where you get the accurate answers. Okay. So backpropagation is telling the neural network that it was wrong, and then we need to tweak certain hidden layers. Yes. In order to reduce the error. Yeah, pretty much. Okay. And then when you do the tweaking, what does the tweaking of a hidden layer look like? Is it changing the linear algebra, the math? Is that what you're saying? The math is essentially the same; it's changing the values. Changing the values in the math. In the math, yeah. So each neuron has what's called a weight and a bias. A weight is a number you multiply the input by. So for example, if my input is five and my weight is two, then I multiply five by two and I get a 10. And a bias is what you add on to that value. So five times two is 10, and then say my bias is three, right?
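The "going down the parabola" idea can be shown with the smallest possible example: one weight, one training example, and a squared error whose graph really is a parabola. This is a hand-rolled sketch of gradient descent, not the full backpropagation through many layers; the numbers are made up for illustration.

```python
# One-parameter gradient descent on a squared-error "parabola":
# we want w * x to match y, and walk downhill on the error surface.
x, y = 2.0, 10.0          # training example: input 2, target 10 (the true w is 5)
w = 0.0                   # start with a bad guess
lr = 0.05                 # learning rate: size of each downhill step

for _ in range(200):
    pred = w * x
    error = pred - y              # how far off we are
    grad = 2 * error * x          # d(error^2)/dw: the slope of the parabola at w
    w -= lr * grad                # step against the slope, i.e. downhill

# after enough steps, w has descended to the bottom of the parabola, near 5.0
```

Full backpropagation does the same thing, just using the chain rule to compute a gradient for every weight and bias in every hidden layer at once.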
So then 10 plus three, so my value for this neuron, after whatever input goes into it, is 13. Which came from five. So five times two, plus three. Yeah. So your input was five and your output was 13. Yes. And so what is the significance of weights and biases? Well, I used large numbers as an easy example; all of these numbers would actually be between zero and one. Yeah. So the purpose of these. Meaning zero percent to 100 percent? That's only for the last layer, the answer. For the hidden layers up until the last layer, it's not really a percentage, it's just a value between zero and one. Yeah. So with each input and weight, what it means is that this specific neuron, out of however many hundreds you have, can change the data flowing through it. It changes it either up or down. And by the time the data flows through all of these hundreds of neurons to the final layer, that's where you get the probability scale: it's not this number, not this number, it's this one. Okay. So the values in the math are where I change weights and biases. Yes. On all the neurons. Yeah. And then, for backpropagation, there's also this thing called a cost function, or loss function, which measures how far you are from the actual answer. Backpropagation is a tool that lowers the value of this loss function, so your error is less, and it does this by going back up the hidden layers and tweaking the weights and biases in such a way as to lower the loss function's value, pretty much. So I can make an algorithm that will go and tweak the math so that my answers are more accurate. Yes. And then you feed it hundreds or thousands of examples, and by the end it becomes very good at analyzing: oh, this is a two, or a five, or a cat, or a dog. Or whatever Google is doing nowadays.
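The arithmetic in this exchange, and the loss function mentioned alongside it, fit in a few lines. This is a toy sketch; real networks also pass the weighted sum through a nonlinearity and, as noted above, keep the values much smaller.

```python
# One artificial neuron, matching the worked example:
# output = weight * input + bias, so 5 * 2 + 3 = 13.
def neuron(x, weight, bias):
    return weight * x + bias

out = neuron(5, weight=2, bias=3)   # -> 13

# A squared-error loss function: how far a prediction is from the target.
def loss(pred, target):
    return (pred - target) ** 2

perfect = loss(out, 13)    # right answer, zero error
off_by_3 = loss(10, 13)    # wrong answer; the error grows quadratically
```

Backpropagation's job is then to nudge `weight` and `bias` in whichever direction makes this loss smaller.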
So the first example we gave was like MNIST, where it's numbers in pictures. Yes. Yeah. And now that's already classifying at unprecedented rates, obviously, with greater efficiency than humans, since we sleep eight hours, we get hungry, we get emotional, we can only do one at a time. Exactly. Yeah. And there are also different kinds of technologies. For example, speech translation, or the speech recognition for Alexa and Siri, are also built on similar kinds of networks. Not exactly the same; they're obviously more complex. So natural language processing also takes something like the word "mom," and then it feeds the audio that I emitted, that sound wave, into a neural network with hidden layers. Yeah. So if you say, for example, "Hey Siri, call mom." And I'm sorry to any viewers whose device actually just called mom. That's funny. Yeah, the device at home actually calls mom. Calling mom. Yeah. Or even better, maybe you should call your mom right now. Tell her you love her. That would be a great idea. But yeah, when I say, "Hey Alexa," "Hey Siri," "call mom," what it does is take the sound wave, with all its peaks and valleys, and analyze all of this data. And by the end, after all of this math has been applied to it, and I don't know the details, I don't think Amazon or Apple will ever let us know the exact details of what happens, it sends the command to your phone to call your mom. Right. So there's obviously still a long way to go; Alexa and Siri obviously aren't perfect. But it's come a really long way, especially in the past few years. Yeah. So first it would take this "call," or "caall," or "call," all these different ways to say it, and that first command is there. It's like, okay, with 90% accuracy, we know he said "call." And then the second word was "mom." Yeah. Okay.
And then, so, this is tough. So it's feeding the data from the peaks and valleys of the sound wave through the neural network, and the neural network is analyzing, with weights and biases on different neurons, going through the hidden layers, that this is this. It might be a different style of neural network, instead of just having layers with weights and biases. What are the different styles of neural networks? So I'll just list a few. The one most people think about, the one you hear about in an intro Neural Networks 101 class, is a plain feedforward neural network. You have an input layer of however many neurons, you have hidden layers that each have however many neurons, and then you have an output layer. So they're stacked one on top of another. The kind that goes through linearly. Yes. Yeah. It just goes forward, and then the backpropagation goes backwards. There's no communication between neurons in the same layer. So in a feedforward network, neurons don't talk to each other within a layer. Yes. Okay. They just move the data from the first layer to the second, through those hidden layers, and out. So there's no communication vertically. Vertically, yeah. One style of neural network that does allow for this communication is what's called a recurrent neural network. You have your input layer, and then you feed your data into, a good way to describe it is, imagine a box where all of the neurons are connected to each other and they all talk to each other. You iterate on this box a few times, and then you output it. It's called recurrent because all of the neurons keep talking to each other inside this box. So it would be like if the feedforward neural network hit one of the hidden layers and then said, hey, hold on, slow down.
Let's go back to the second hidden layer, and that would be more like what a recurrent neural network would be doing: having those neurons talk to each other to produce the best answer, instead of going through it linearly. Actually, very close. Yeah. So this is basically taking a feedforward neural network and, say I have two hidden layers, not going backwards, but having one layer talk within itself. Only within one layer, and then, okay. Interesting. So the box is one layer. One layer, and you can have multiple of these boxes, but that's usually not necessary, depending on what kind of task you want. Each layer could be its own recurrent box. Yes. Which is unnecessary, or which. It just requires a lot more computing power, which increases with each box you have. Yeah. And so GPUs, or now TPUs, are the best for this: graphics processing units, or tensor processing units, which are specifically designed for the kind of large-scale matrix multiplication used in machine learning. In David Freedman's lab, we used, I think, eight 2080 Tis in one computer to run our networks. Pretty much top-of-the-line graphics cards. They weren't TPUs, but they were very good. Okay. So now, what did you do with these recurrent and feedforward neural networks? Which one did you use, and what were you doing in the lab at UChicago? And then we'll talk about what you're doing here. Yeah, of course. So most of my project during the school year focused on implementing latency between neurons in the models we were building. In a stereotypical neural network, each neuron communicates to the neurons down the line as fast as the computer allows, essentially instantly. In the brain, however, this isn't the case: it takes about 10 to 40 milliseconds for a signal to travel from one neuron to another.
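The "box" picture of a recurrent layer can be sketched as follows. This is an illustrative toy with made-up sizes and untrained random weights, not a model from the lab: one layer of neurons that all talk to each other for a few iterations before the result is read out.

```python
import numpy as np

rng = np.random.default_rng(1)

# A minimal recurrent "box": one layer whose neurons all hear from each
# other on every iteration, unlike a feedforward layer.
n = 8                                   # neurons in the box
W_in  = rng.normal(0, 0.5, (n, 3))      # input -> box
W_rec = rng.normal(0, 0.3, (n, n))      # box -> itself (the within-layer chatter)
W_out = rng.normal(0, 0.5, (2, n))      # box -> output

def recurrent_box(x, steps=5):
    h = np.zeros(n)
    for _ in range(steps):
        # each step, every neuron combines the input with every other
        # neuron's current state
        h = np.tanh(W_in @ x + W_rec @ h)
    return W_out @ h

y = recurrent_box(np.array([1.0, -0.5, 0.2]))
```

A feedforward layer is the special case where `W_rec` is absent and the loop runs exactly once.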
So one of the projects I worked on with the other members of the lab was implementing this kind of random-ish latency between each pair of neurons. And in the end we implemented it, and it led to better results on the specific tasks we were going for. Another project I worked on was implementing complex numbers in one of our neural networks. Unfortunately, that didn't work out in our implementation, for whatever reason. So these are the kinds of experiments we try in order to make these networks more biologically accurate. So there's a 10 to 40 millisecond latency between neurons communicating in our brain? Yes. Okay. And in a neural network, it's even less than that? If you don't implement this kind of latency, each neuron just communicates as fast as possible, for whatever your time constant is. Oh, and then you can add a latency paradigm to a layer and say, hey, you have to wait this long. Yes, you can do that. We specifically made neurons hit pause and be like, okay, you have to wait 35 milliseconds, or 20 milliseconds, whatever the value was for that specific connection. So it just makes it slower. And is that good for recurrent neural networks, because it gives them time to figure out the right answer? Not necessarily. This isn't necessarily good if you want fast neural network outputs. We were focusing on making these networks more biologically accurate, and on having that tell us things about the brain.
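The "hit pause" latency idea can be sketched with a small event buffer. This is a hedged illustration of the concept, not the lab's implementation: each connection gets a fixed random delay in the 10 to 40 millisecond range, and a signal is only delivered once its travel time has elapsed.

```python
import random
from collections import deque

random.seed(0)

# Per-connection transmission delays of 10-40 ms, modeled by buffering
# each signal until its randomly drawn travel time has passed.
delays = {conn: random.randint(10, 40) for conn in range(5)}   # ms per connection
in_flight = deque()   # pending signals: (arrival_time_ms, connection, value)

def send(t_ms, conn, value):
    """A presynaptic neuron fires at t_ms; the signal arrives only later."""
    in_flight.append((t_ms + delays[conn], conn, value))

def deliver(t_ms):
    """Return the signals whose travel time has elapsed by t_ms."""
    arrived = [s for s in in_flight if s[0] <= t_ms]
    for s in arrived:
        in_flight.remove(s)
    return arrived

send(0, conn=0, value=1.0)
early = deliver(5)     # too soon: every delay is at least 10 ms, nothing arrives
late = deliver(40)     # all delays are at most 40 ms, so the signal is here
```

In a stereotypical network, by contrast, every `send` would be delivered on the very next step.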
For example, one of the recent papers out of the Freedman lab, which was before my time, but it was published in June in Nature Neuroscience, found that, by implementing a different kind of biological process, memory is actually stored in two different places in the brain. And one of those places is very hard to measure in an actual human brain, because it's the synaptic strengths between neurons. That kind of strength is difficult if not impossible to measure, because you can't really take a brain apart and look at each of the connection strengths. So making these kinds of biological testing grounds, in labs like David Freedman's, allows us to actually learn more about the human brain as well. So what aspects of neural network engineering would give you these insights into how we store memory in our biology? How did those discoveries work? So the researchers, Nicholas Masse was the first author of the paper, had a hypothesis that working memory, basically short-term memory, is stored both in neuronal activity, which is how strongly and how often a neuron fires, and in synaptic efficacies, which are the strengths of the connections between neurons. Through a lot of math, which I don't really know off the top of my head, they managed to implement both of these processes in neural networks and train them on specific tasks. And what they found was that in order to just memorize information in short-term memory, the synaptic strengths are used: the strengths between neurons, not the neurons themselves. But in order to manipulate that information, to see something and then think about it, or transform it, that involves the neurons themselves, which is kind of understandable. Cool, cool. Okay.
And now the idea is that we can do other forms of neural network engineering to get insights about how our biological brains work. Yeah, absolutely. This is just one example of many others currently out there. And what are you interested in doing with neural network engineering? You gave another example of what you guys were doing; you were trying to implement complex numbers, yeah. So a complex number has a real part and an imaginary part. The real part is just a number we're used to thinking about, so, like, zero through one. Zero through one, and all the decimals. Yes, the reals. That's the reals. And then the imaginary part is another value from zero to one, times i. And i is the square root of negative one. That's what makes it imaginary. In some implementations in the literature that we found, complex numbers made for better and faster neural network processing. However, in the implementations I worked on, this didn't really work, and there's a whole slew of reasons why it might have failed. We're not really sure why, in our specific implementation, these complex numbers did not work. Okay. And I remember there was something crazy I was learning about. Yeah, what is it, the square root of negative one? Yeah, that's just i. The square root of negative one is i. Yeah. And then I think there was something else after that, the quaternions and octonions or something. I was learning about that. It was so crazy. Yeah. There are a bunch of these weird little mathematical idiosyncrasies that computer scientists and mathematicians and physicists like to uncover. Yeah. And hopefully they give us insight into the world. So, okay, let's talk about neural network engineering for biological purposes. What do you want to see happen in that field?
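Python's built-in complex numbers make the real-plus-imaginary idea above concrete. A toy sketch, not the lab's complex-valued network: a single neuron whose input, weight, and bias all carry an imaginary part.

```python
# Python writes i as 1j, and (1j)**2 really is -1, i.e. i = sqrt(-1).
i_squared = (1j) ** 2

# A neuron with complex-valued weight and bias works the same way as a
# real one; the output just has a real part and an imaginary part.
def complex_neuron(x, weight, bias):
    return weight * x + bias

z = complex_neuron(0.5 + 0.5j, weight=0.8 + 0.2j, bias=0.1)
magnitude = abs(z)   # one common way to read a single real value back out
```

Reading a real value back out at the end, here via the magnitude `abs(z)`, is one of the design choices complex-valued networks have to make.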
Where do you see it going right now? What do you want to do in it? I think the be-all and end-all would be to actually simulate a human brain within a computer. However, the human brain has many, many neurons, on the order of a hundred billion, with on the order of hundreds of trillions of connections between them. And we simply do not have the computing power, first of all, to even scan a human brain at that granularity, and second of all, to simulate that brain, with all of its connections, once we've scanned it. Plus neurotransmitters. Yeah. And especially all of the other cells in the brain that help the neurons do what they do: some of them help create neurotransmitters, others deliver blood, and if you have a stroke, the neurons near there are impaired. So I hope that in the near future this will be possible, and it will allow us to really understand the human brain to a much better extent, once we can simulate it and play around with the simulated brain. But we're still very far from that. There have been recent papers, whose first authors I unfortunately forget, that have simulated a lot of neurons; however, I think it was still only on the scale of 10% of an actual human brain. So we're coming closer, but we're still very far away. And I hope that I'll be able to take part in all of this. How did you decide to come out to Peking University this summer? So the University of Chicago has a great program called the Metcalf program, which, among its other benefits for University of Chicago students, will pay students to take unpaid internships. And I was interested in going to China, in traveling and studying abroad.
And I applied through Handshake, which is the job-search platform the University of Chicago uses, as do some other universities. I applied to Dr. Li's lab, and after an interview process with one of the students here, I was able to come out. The Metcalf program sponsors my trip here, which is really what makes it all possible. Interesting. So with the Metcalf program, you can potentially have wealthy donors or patrons from around the world sponsor young people to go out over the summers to different parts of the world, meet people, do research there, and make cultural ties happen. Yeah, absolutely. And it's for any kind of internship. Whatever you're interested in, the Metcalf program, within certain criteria, obviously, will sponsor your trip overseas, or, if you want to stay in Chicago and work in one of the labs there, any kind of work experience. And then how about engineering fluorescent sensors for neurotransmitters? This is what you're doing right now in the lab. What does that mean? So, like, when the cell receives the neurotransmitter, it fluoresces. Exactly. Yeah. We have certain proteins called GPCRs, G-protein-coupled receptors, coupled to GFPs, green fluorescent proteins, which light up when certain neurotransmitters are in the vicinity of, or inside, the cell. My project right now is making these kinds of sensors for oxytocin. We have very good sensors for vasopressin, and now we want to make the same kind of sensors for oxytocin. After this R&D stage, they will be very useful for seeing where these neurotransmitters actually move in the brain, and that will let us study the brain more in depth. Currently I work with E. coli; after a certain testing stage, once we have good enough sensors, we inject them into mice, and then we study those mice in vivo and see how their brains respond to these kinds of chemicals.
So you end up using a decent amount of your background. Okay, I see. So it's the neuroscience background for you here, because it's not so much neural networks on the computational side right now. You're actually working on engineering the sensors for oxytocin, so when a neuron receives oxytocin, the GFP is going to light it up. Yeah. And you're engineering that right now in E. coli. Yes, yeah. We make the proteins, introduce them on plasmids, have the E. coli express them, do a bunch of imaging tests, see which are the best candidates, and repeat and go from there. Okay, and then that gets injected into mice. Eventually, yeah. We package them into viruses, and these viruses deliver them into mice, into mouse neurons specifically. Yeah. And then you have to image the mouse's brain to see the fluorescence? Yes. Specifically for oxytocin, so then something has to happen, like a female mouse has to come up, or something like that? Something like that. Well, we haven't gotten past the E. coli stage for oxytocin, but for other kinds of sensors, you'd either stimulate the mouse with something, or you simply inject the drug into the brain and see if the sensor works. And then, so, you gave us this example earlier about the strength of simulation software, being able to make a human brain, with some teams already doing 10% of a human brain. Does it feel like we're already in a simulation? So this question caught me a little off guard. I don't know. I don't know if we are in a simulation. I don't know if it's possible to know. And even if we are in a simulation, who's to say our so-called creators aren't in a simulation themselves? Sure. Yes. Hypothetically we could be, and we could never know it. But I don't think this really affects my specific worldview, because we just don't know.
And it's better to just not assume we are in a simulation, to take this as the real thing, and to experience life from the perspective that this is the only chance we get, that this is a real experience, as opposed to something fake, created for somebody's research project. It seems like computational capacity is increasing, and superintelligence and quantum computing are going to enable us to run our own simulations of brains, and of civilizations, and of all different types of things, and that's going to give us deeper insight into the source code of this reality. And that'll be a very fun awakening moment for many of us. And I don't think those things have to be exclusive: living life with a deep amount of meaning and passion, and whether or not this is a simulation. Yeah. You can always live life with deep meaning and passion and purpose, regardless of whether it is or is not a simulation. What would be the one skill young people should have going into the exponential technology age? A singular skill. I think privacy. It's not specifically a skill, but it's a really important mindset to have, especially with nothing that you put on the internet ever really being deleted, and especially for my generation and the generation right after me, where everything is recorded and everything is always on your phone. I think privacy is a really important concern. We've had the Cambridge Analytica scandal recently, as well as the antitrust focus on Google and other big corporations that now hold a lot of data about us and what we do. So I think it's important to be aware that what you post on the internet may come back to haunt you later.
But on the other hand, it's also important now to have a healthy kind of social life on these social networks, because if anyone, for example an interviewer, is looking for you and tries to do some kind of background research, then having these experiences on these social networks, and posting the good moments from your life, is also really important to show what you've done and what kind of person you are. Yeah, it was good seeing your piano videos on the internet, and your profile on LinkedIn at least gave me a little bit more of a background on you. At the same time, I like seeing people post themselves completely transparently, not just the good but also the struggles and the challenges and things like that, and I myself do that. Yeah, absolutely. That helps a lot too. It shows real humanity, it shows vulnerability. It's good to talk about triumphs over those challenges, because they can inspire other people to do the same. And it's interesting thinking about going into a completely transparent world, and it's also interesting thinking about quantum encryption and a privacy-safe world at the same time. So it's kind of like, hmm, can we build that love, trust, and affinity for each other across the world? Can we not need such military budgets and regimes? Can we ascend our consciousness to harmonize more together around the planet? On that question, how do you think we can inspire people around the world to work together?
I think you can do this differently for different goals. I think science is a really good example of this, where you have scientists from all over the world writing papers and reading the papers of their peers, who may not even be on the same continent or in the same hemisphere. So you have this huge collection of people that are indirectly working together, and I think if we have similar kinds of things for other fields of study, or walks of life I guess, this could lead to very good outcomes. So take the hard case, let's say politicians, right? You have a lot of countries with competing interests, and I think if there was more transparency between the issues of one country and another, or even the wants and desires of one country versus another, you could reach more peaceful resolutions, going back to what we talked about at the start of this interview: more communication between parties. Yeah. So I think it all boils down to a healthy amount of communication and transparency in certain areas. What do you think is the overall meaning of this human experiment? Well, what do you mean by experiment? The human experiment, the human experience, both of them. What is the meaning of it? I think, on an individual level, the meaning is what you give it, really. For example, I'm very happy studying and conducting research, so the meaning for me might be something along the lines of: I want to create, or I want to do more research, or I want to find the answers to this list of questions. For someone else who finds satisfaction and purpose in doing other things, that may be their meaning of life. I don't know if there is a meaning of life that is necessarily the same for all of us, but I think on an individual level you can definitely find something that makes you happy and content.
Do you think consciousness is a biological phenomenon? I think, well, yes, the short answer is yes. The long answer is we don't know what exactly consciousness is. It's very hard to define. We know we have it, we know rocks don't have it, probably. That's what we've got. Does it come from the brain? Where in the brain? Great open-ended questions in neuroscience right now. Do you think you have free will? I'd like to live my life believing I do, because at least in my personal experience, I choose to live as if my choices really do matter, and that I am the one making these choices, and not some deterministic path that I've just been set down. What do you think is the role of love in life? Well, there are many different kinds of love, right? There is parental love, there is kind of fraternal or brotherly, sibling love, there is obviously amorous love. So I think different kinds of love have different kinds of roles. Parental love, or love in family dynamics, is very important to culture and well-being. Obviously, more personal, amorous relationships are also very important. The role of love, I don't know. I don't think I can classify it into one thing, like, oh, this is the role of love: to be happy, I guess, maybe. I don't have the answer to this question, I'm sorry. This is why we ask. Yeah. What do you think is the most beautiful thing in the world? Most beautiful thing. Very similar answer: there are many different kinds of beauty, I feel. I'm a pianist. I love listening to other performers and musicians, especially those that are at the peaks of their careers and who are the best at what they do. They create beauty at their instrument. There's beauty in arts, there's beauty in culture, there's beauty in just people. I don't know if I can say something along the lines of, this is the most beautiful thing I've ever seen. Yeah, yeah, yeah. You being a
pianist is also very interesting, because in many ways that has a lot of computational properties itself as well. There are specific notes that are played, there's specific space between the notes, there's the length of the note, there's where the note is being played on the scale. There are so many computational aspects of music, and that's very similar with computer science or with neuroscience. So it seems like those things really mesh together well for building out a worldview, music and science. Yeah, we've known this for a while actually, that the same area of the brain that is activated when you play or listen to music is the same area of the brain which is activated when you do math. So doing one makes you more precise and better at the other as well. They're very intimately tied, and a lot of, or at least in my experience, a lot of doctors that I've known play an instrument of some sort, and they're good at it too. Yeah, yeah. Thanks for coming on to our show. Yeah, absolutely. Thank you so much for having me. It was super fun, I really appreciate it. Thank you. Thank you. Thanks, everyone, for tuning in. We greatly appreciate it. We'd love to hear your thoughts in the comments below on that episode, let us know what you're thinking. Also check out the links in the bio below: yulonglilab.org, as well as Anthony's LinkedIn profile. Also, have more conversations with your friends, families, coworkers, and people online about neural networks, about neuroscience, and about the future of all these fields. Have more conversations about it. And also support the artists, the entrepreneurs, the leaders around the world that you believe in. Support Simulation. Our links are below. You can contribute to us via Patreon, PayPal, cryptocurrency. All those links are below. And also go and build the future, everyone. Manifest your dreams into the world. We love you very much. Thank you for tuning in, and we'll see you soon. Peace. It's a wrap, my man. Good job. All right. Thank you.
Dude, that was great. Yeah, it was a lot of fun. Yeah.