Tim is wearing sensors. It's recording from my ear, brain activity, and we're recording heart rate and we're recording movement from the chest. We're streaming this to the cloud, where we're translating this into measures of my relaxation level, my heart rate, we can get heart rate variability, we can get a variety of states that are related to the types of signals that I can get here from this in-ear sensor and from my heart rate monitor. And all of that is happening in the cloud and coming back to my mobile device. So we've got our EEG, our ECG, our heart activity, my relaxation score, actigraphy. So if I move around a little bit, it's going to go up, it gets all red. Now I'll be still. And then my heart rate. And so we can take all of this and we can do something kind of cool like this, because all my data is going through Neuroscale. Hey Alexa, ask Neuroscale for my current relaxation level. Your current relaxation level is 20%. You're pretty pumped up. If you feel like relaxing a bit, we could do some meditation exercises together. And that's an example of how we can integrate something like Amazon Alexa through the Neuroscale platform, connect that to my brain and my heart, my body, and do something useful like asking Alexa what's my state, how am I feeling right now. But it could also be your light, it could be your car, it could be any other internet-connected device. And you can build that kind of application on the Neuroscale platform. Boom, what's up everyone? Welcome to Simulation. I'm your host Allen Saakyan and we are at the Transformative Technologies Conference. It is super fun here. And we have an awesome guest joining us for this episode. I'm really excited. We have the Tim Mullen, the Dr. Tim Mullen, joining us. Hey man. Great to be here. Thanks for coming on the show. Yes sir. Really appreciate it. And the CEO of Intheon, neurotechnology anytime, anywhere. And we'll unpack how you're doing that and I will get there.
I want to know who you are first and how you got to where you're at. How did that even happen? How did you become passionate about this? Well, I was born and then I ended up here. So I wasn't just born yesterday, but I am a computational neuroscientist by training. My academic training was first at Berkeley. I was a computer scientist back then. My interest was in artificial intelligence. And I wanted to really build computers that could understand us. That was my goal. And through that process I started to get really interested in neuroscience. Because it seemed to me like this is the best example we have of a thinking thing that can understand and adapt to the world, and so if we can understand this thing a little better, maybe we can build those thinking things a little better. And so I came through that process into computational neuroscience, which is what my PhD was in at UC San Diego at the Institute for Neural Computation. And out of that came Intheon, the company that I'm currently leading, and the amazing team that I have the fortune of working with there, helping build technology for everybody, neurotechnology for everybody. Yes, yes. Okay, so you give this really interesting thought, which is that you like the brain, you like AI, you see AI, you want AI to be better, and a good way for it to be better is to understand the brain. Yes, so that's one, but it's kind of a means to an end. You know, my goal and our goal at Intheon is to increase human potential, what humans are capable of. I see artificial intelligence as an extension of ourselves, of our biology really, if you think about it; the computing technologies we create are just an extension of ourselves. And its purpose, I hope, will be to increase our capabilities as humans, to allow us to do things that we would not have been able to do before. Neurotechnology serves a similar purpose.
I think its ultimate purpose is to enhance our capabilities and take us to a new level and a new dimension of what humans are capable of. And that and AI together can play in a very synergistic way. Yeah, I love thinking about it as an extension of our biology. And then there is this element to it, the way you speak about it super calmly, I think it's because you studied it so deeply. Maybe, who knows? But it teaches a little bit about the computation and the neuroscience at the same time. Neuroscience is a lot of data. There's a lot of data. There's a lot of math. There's a lot of everything, from brain waves to how often a neuron is firing to what neurotransmitter is being sent. The brain is very, very complicated. As a neuroscientist, I can tell you confidently that I don't understand how it works. So none of us understand really how it works, but we understand some things about how it works. We have models of the brain. The statistician George Box famously wrote, all models are wrong, but some are useful. And the practical question is how wrong do they have to be to not be useful? So I like to think about it that way. Fundamentally, everything we think we know about the brain is probably wrong. Our models are probably wrong, but some of these models are actually very useful. And so one of the challenges that we face in computational neuroscience and neurocomputation in general is how do we take these signals, as you mentioned, a lot of data that we can record from the brain, from the body, also from movement, from the eyes, and then translate that into something that's meaningful and useful that tells us something about your state. And there are a lot of steps in that, which involve everything from taking those signals and removing all the noise.
There's so much noise in the signal that's not related to those neurons firing in your brain, which by the way we can't see from outside the head, but we can get kind of a noisy chorus of activity. And then we've got to filter that activity and extract out from there the teeny little needle in the haystack that is the signal related to a particular state you're interested in measuring, like your emotion or frustration or something like that. That's essentially what we do as computational neuroscientists, as machine learning enthusiasts and experts: try to bring those things together and do that decoding. Yeah, the decoding. And then, gosh, as you're talking about so much noise, so little signal, I'm thinking about Twitter. Somewhere in the Twittersphere there's a useful comment. Where is it? Yeah, where is it? Same thing up here. So this is really tough. Okay, now there are so many different methods to get the data that then has to be decoded into these emotions, feelings. Now, the way that we get the data right now is mostly EEG, fMRI. Yeah, so EEG and fMRI are two tried and true methods for measuring brain activity, but very different methodologies. Is that where most of yours comes from? EEG, far more than fMRI. And the main reason is that an fMRI machine costs anywhere from $1 million to $2 million to purchase, so I don't have one. Although we have access to them as scientists, and we can use them for our research, it's not something where you're going to just have an fMRI machine lying around, or MRI machine. It's an MRI machine and we do functional MRI. But the other factor is it's also very expensive to maintain. And thirdly, it is not a mobile imaging technology. Totally. You can't walk. You're lying down. You're inside a massive magnet with a superconducting coil around it, and it's not something that's going to be useful in the workplace or in your car or anything like that.
EEG, on the other hand, even though it's an older technology, it's over 100 years old. No. It is, yeah. Well, almost 100 years old, I should say. Soon it will be 100 years old. And this is just 40, 50, something like that. No, 1930s, actually. Damn. And so it goes all the way back, and actually it goes back further than that. That's when Hans Berger first really popularized the notion of the electroencephalograph. Yes. And some of the signals that he observed way back then, like the alpha rhythm, we're still looking at today and they're still proving useful. We're still understanding more about them. That's how complicated the brain is. You can spend many, many decades looking at a signal and still see new dimensions of how that signal relates to behavior and to states. But what's happened over the period of time between then and now is the technology has miniaturized, miniaturized, miniaturized, and the computing capability has gone up. What would have occupied an entire room, with a lot of analog circuits and a lot of, you know, hardware, can now all be miniaturized into a little chip that I can pop in my ear and measure my brain activity there, for instance, or put on my forehead, like InteraXon today was talking about their new device, which is just great for measuring brain activity, and all these kinds of wearable EEG systems. So that's really the big distinguishing factor between fMRI and EEG in terms of utility. Can it get out there into the world, into mobile contexts? And be cheap, cost effective, accessible and scalable. Most of these devices are like $300. An EEG device can range anywhere from $50 to $15,000 or more. And it all depends on the quality of the signal and how many sensors. And all of that relates to how precisely you can image the brain with EEG, because you can actually image the brain with EEG like you can with, say, fMRI.
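As a concrete illustration of the kind of signal Berger described, here is a minimal sketch of estimating alpha-band (8 to 12 Hz) power from a single EEG channel. The data is synthetic, and the sampling rate and band edges are conventional textbook choices, not Intheon's actual pipeline.

```python
import numpy as np
from scipy.signal import welch

np.random.seed(0)
fs = 256                      # sampling rate in Hz, typical of wearable EEG
t = np.arange(0, 10, 1 / fs)  # ten seconds of data
# synthetic channel: a 10 Hz alpha rhythm buried in broadband noise
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)

# Welch's method gives a smoothed power spectral density estimate
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
band = (freqs >= 8) & (freqs <= 12)
relative_alpha = psd[band].sum() / psd.sum()  # fraction of power in the alpha band
print(f"relative alpha power: {relative_alpha:.2f}")
```

With the strong synthetic rhythm above, most of the spectral power lands in the alpha band; on a real recording, eyes-closed rest typically raises this fraction and mental effort lowers it.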
Those are also differences between the two technologies: how well you can see into the brain, but also the kind of signal-to-noise characteristics. In other words, how well can I extract that needle in the haystack? fMRI makes it easier. It makes it easier to extract certain kinds of things. So if I want to look at a specific area of your brain and say, is that area of the brain activated when you're doing task X? With fMRI, I can have good spatial resolution. I can look at maybe about one millimeter of your brain. With EEG, it's a fuzzy view. Maybe the best I can get is a centimeter. And that's where sophisticated signal processing methods come in, sophisticated mathematics that lets us try to look deeply inside the brain and try to see where the signal is coming from. fMRI is looking at blood flow. Well, it's not really blood flow. It's deoxygenated hemoglobin and other stuff. But basically what it's looking at, we can think of it as looking at resource consumption rather than the electrical activity directly. So when neurons fire and they're active, they consume more resources. In this case, yes. And there's oxygen. So the ratio of oxygenated hemoglobin to deoxygenated hemoglobin is different when there's more activity in a particular part of the brain. That's called the BOLD signal, or it's an extraction of that. But to be honest, and I'm not an expert with fMRI in the same way that I know EEG, I can say pretty confidently that there's also a lot we don't quite understand about exactly what the mechanism is that drives that particular signal. So now, with this huge revolution, just following Moore's law, we're getting a significant amount of computational power for lower amounts of cost. We're able to wear it. We're able to walk around with it. We're able to sleep with it. Amazing. And so now you're getting tons and tons and tons of data. And so what percentage, is most of your data from EEG?
So a lot of our data is from EEG, but we also process data from all kinds of other sensors. So heart rate monitors. Oh, you do? Yeah, eye tracking devices. If you have a sensor on your muscle recording your muscle activity, we can make sense of that data too. Motion capture, accelerometers. So we think of this as a whole-systems approach to understanding the state of a person so that we can optimize that and improve it. And so the brain is a big part of that, right? But the brain is actually part of the body, right? We often think of the brain and the body somehow like they're two separate things, but it's all the body, you know? The brain is a part of it. And you need to look at it from this whole-systems approach to really understand what's going on, you know? Are there different algorithms for making sense of the signal from EEG than from muscle or heart rate? Yeah, so there is some portion of the mathematics and the signal processing that translates across these, and there's some that doesn't. And so, you know, if I'm looking at your movement activity, the patterns of movement activity and how I process that data are going to be different than if I'm trying to image activity inside your brain. But upstream of all that, let's say I have a signal that is telling me how activated, how much activity there was in a particular part of your brain. It's a time series, okay? Some squiggly thing. And then I've got some signal telling me how much you moved or what your posture was at every moment in time, two of these signals. At that point it becomes a machine learning problem to say how are these related and how do they predict your state. And that part can be very similar for all these different domains. The input can be similar, it's just patterns. So I can take my favorite machine learning approach. It could be anything from, you know, your sparse logistic regression to your deep neural network to whatever you want, right?
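Once each stream has been reduced to a feature time series, the machine learning step can be sketched generically. Everything below is synthetic and hypothetical: two made-up feature series (an EEG-derived band power and an accelerometer-derived movement score) and an L1-penalized logistic regression standing in for the "sparse logistic regression" mentioned above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400
alpha_power = rng.normal(0, 1, n)   # hypothetical EEG-derived feature series
movement = rng.normal(0, 1, n)      # hypothetical accelerometer-derived series
# simulated ground truth: "relaxed" when alpha is high and movement is low
relaxed = (alpha_power - movement + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([alpha_power, movement])
# the L1 penalty yields the sparse variant of logistic regression
clf = LogisticRegression(penalty="l1", solver="liblinear").fit(X, relaxed)
print("training accuracy:", round(clf.score(X, relaxed), 2))
```

Swapping in a deep network would change only the estimator line; the framing, features in, state label out, stays the same.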
And I can take these machine learning techniques, AI, as we all now seem to be colloquially calling it. In other words, regression on a lot of data. But anyway, I'm a big AI fan. We're taking this stuff and applying it to these kinds of signals, and we can take the types of algorithms Netflix uses to figure out, you know, what type of movie you like, and we can apply those kinds of algorithms to your biosignals to figure out how you're feeling, you know. So at that point it doesn't relate so much to the sensor, the signal type, anymore, although there is some relationship. But it's upstream that it really matters. And that's a lot of what we do at Intheon: we figure out how to take that stream and turn it into something useful, and then we apply machine learning on it and we give you back something very simple, like what is the state of the person wearing the sensor in real time. Through the cloud. Nice. Okay, so as long as the sensors are connected to the cloud and I have an account with Intheon, then my data can be processed by Intheon in real time, giving me very close to real-time results about my current physiology. Exactly. So my goal is to make it possible, with as little as five lines of code, for you to access a lab's worth of state-of-the-art analytics and signal processing to obtain a meaningful state, like for instance what's your frustration state or what's your emotional state or your attentional state. And to be able to integrate that signal in real time into any internet-connected device or application.
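The "five lines of code" idea might look something like the sketch below. The endpoint URL, payload shape, and state name are all invented for illustration; they are not Neuroscale's actual API.

```python
import json

def build_state_query(session_id: str, state: str, samples: list) -> str:
    """Package a window of raw sensor samples plus the state we want decoded."""
    return json.dumps({"session": session_id, "state": state, "samples": samples})

# A hypothetical five-line client, with the network calls shown as comments
# so this sketch stays runnable without a live service:
#   import requests
#   window = read_sensor_window()            # hypothetical device driver call
#   body = build_state_query("s1", "frustration", window)
#   resp = requests.post("https://api.example.com/decode", data=body)
#   print(resp.json()["level"])              # hypothetical decoded score
payload = build_state_query("s1", "frustration", [0.12, 0.08, 0.15])
print(payload)
```

The point of the sketch is the division of labor: the client only packages samples and names the state it wants; all decoding intelligence lives behind the endpoint.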
So I could have a little wearable on my wrist and a sensor on my head, or someday in my brain, as we get to that point. We can talk about implants if you want later. But I could have that little wearable, which has almost no computational capability at all, tapping into almost limitless computational capability in the cloud and telling me in real time what that state is, and that's the power of cloud computation. It's a sensor going to cloud computation and coming back to a device that allows me to act meaningfully on that state. Or my car, for instance. Another good example is that my car could be asking our cloud service, which is called Neuroscale, what's the driver's mind-wandering state right now? Is the driver focused or is the driver starting to drift away? And it's just asking once a second, or ten times a second, what's that state. And if I'm wearing a sensor streaming that data up, then we can do all the sophisticated computation and my car can now be aware of my state, and then it can act proactively to increase safety by maybe taking over if it knows that I'm not capable of stopping in time if something happens. It can basically amp up its own automated AI system to say, hey, I've got to take the wheel for a bit here because Tim's not in a good state. Yeah. And then this is also very helpful for emotion regulation. It could be. It could be. Where is it most helpful right now? So there's a whole emerging space of what used to be called neurofeedback but is now, you know, part of the broader space of neuromodulation, closed-loop neuromodulation, and the basic principle is that the brain is a plastic system, which means that it is adaptable.
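The car example reduces to a polling loop: ask for a score at some rate, compare it to a threshold, and decide whether to assist. The stub below fakes the cloud call with canned scores, and the 0.7 threshold is an arbitrary assumption for illustration.

```python
def query_mind_wandering(t: int) -> float:
    """Stub for the once-per-second cloud query; returns a canned
    mind-wandering score in [0, 1] instead of calling a real service."""
    canned = [0.2, 0.3, 0.8, 0.9, 0.4]
    return canned[t % len(canned)]

ASSIST_THRESHOLD = 0.7  # assumed level at which the car should step in

def should_assist(score: float) -> bool:
    return score >= ASSIST_THRESHOLD

# poll once per simulated second for five seconds
decisions = [(t, query_mind_wandering(t), should_assist(query_mind_wandering(t)))
             for t in range(5)]
for t, score, assist in decisions:
    print(f"t={t}s score={score:.1f} assist={assist}")
```

A production system would obviously smooth over several seconds of scores before handing control to the car, but the loop structure is the same.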
It's changing constantly based on inputs and internal goals, and there's a reward system in the brain that's constantly trying to shift and change how different areas of the brain wire up and talk to each other to optimize a desired outcome, and that's evolutionarily what the brain is essentially designed to do, or has evolved to do. And so you can construct a system that can measure activity from a part of the brain. Let's say it's an area of the brain that might be related to mood, and I can record activity from that area, and let's say increasing activity in that area of the brain is shown scientifically to increase my sense of well-being or my mood. Now if I can measure the signal related to activity in that area and then feed that back, maybe let's put it into a game where the only way I win the game is by increasing activity in that part of the brain. The brain will learn to increase activity in that part of the brain, and we believe if you design this closed-loop system in the right way you can get that to be a permanent effect, or at least a long-lasting effect. Even Adam Gazzaley has been working on this for a long time. We love it. The neurotherapeutic games, right? Exactly. Akili Interactive. Exactly. So Akili is trying to do that with behavior, right? Yes, yes. But it can also extend into the neuro space, where at some point you're now directly reading from the brain and then you're adapting that circuit to improve performance or achieve a desired outcome. Yeah. Another possibility is neurostimulation. So you could also stimulate the brain after or while you're measuring it to try to change those circuits. And as you're saying that, I'm thinking about the absolutely incredible benefits, and I'm also thinking about what we've seen in the last 10 years with addiction to social media. Totally. Yeah.
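The game mechanic described above can be sketched in a few lines: the player scores only when measured activity in the target region beats a baseline, so up-regulating that activity is the only way to win. The activity values are simulated and the scoring rule is an illustrative assumption, not how any particular neurofeedback product works.

```python
def game_round(activity: float, baseline: float) -> int:
    """Award a point only when the measured region activity beats baseline."""
    return 1 if activity > baseline else 0

baseline = 1.0                               # calibrated resting level (assumed)
measurements = [0.9, 1.1, 1.3, 1.0, 1.5]     # simulated activity, one per round
score = sum(game_round(a, baseline) for a in measurements)
print("score:", score)  # prints: score: 3
```

Closing the loop means the reward signal depends directly on the measured brain activity, which is what lets the brain's own reinforcement machinery learn to drive the score up.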
And we have to be very proactive in thinking about the future use cases of these technologies, and we shouldn't revert to Luddism, which is to say let's ban the technology, but we also have to not throw ourselves headlong and thoughtlessly into those domains without really thinking, okay, what are we doing? Why are we building this? Are we building the right thing? How are we using it? And that's a social discourse, a societal discourse, and it's a global discourse that has to be carried out. Yes. And it is happening, but it is very important, of course. Yes, yes. Okay. So let's talk about the data that's coming in and making sense of the data that's coming in, as much as we can talk about it, because I know at some point you're going way over my head with actual computational neuroscience analysis. So okay, you're getting this information that's coming in from, let's say, an EEG, and, I mean, it depends on where the EEG is located on the person's head in the first place, and then it depends on how frequently they're sending you the readout of information, and then how many electrodes there are. All factors, absolutely, in how well we can decode a state. So, you know, well, there wasn't exactly a question there, but the answer, if I was to pose an answer to the unstated question. You were going to, yes, yes. Would be that you're pointing out one of the big challenges of neurotechnology, which is that the design of the sensors and the hardware, and where you put it, and how you measure that signal, and then what you do with it, all allow us to achieve different levels of performance in decoding brain activity. And when you're trying to, say, build a device that addresses a particular, you know, problem, you need to know where do I put that sensor, and how many sensors do I need, and what type of algorithm do I use to do the decoding.
All of that can be a massive research project that's very difficult to undertake, taking a few months or longer, years sometimes, and a lot of expertise and resources. Now at Intheon, we've been working in this space for a long time, our team, so we have a pretty good understanding of what kinds of sensors will be useful for decoding a certain kind of state and where you should put those kinds of sensors. And part of what we're doing, by interfacing with many different sensor types and then having many different types of pipelines into those, is allowing you to iterate quickly: taking a sensor off the shelf, plugging it into this algorithm, and testing it within a week, being able to say, oh, hey, did that work for me? Did that achieve the outcome I wanted for my product or my device, you know? Being able to measure that impact, you can very quickly iterate through that space to find what the right combination is of device and algorithm or pipeline for your specific application, and so that's one of the things that's great. We're training models on, yeah, we are training models, we have models, we're training models with lots of different kinds of data and trying to understand how that data specifically relates to a state of interest. For instance, let's say we're just talking about emotion. You know, one of those questions might be, do I need four sensors? Do I need 20 sensors? Do I need 64 sensors? And where do I put those? Some of these are unanswered questions. We're answering some of those questions. Well, there again, it depends on the type of application. Within emotion, yeah. So I can tell you, for instance, if you have one sensor or two sensors, well, two sensors is the minimum, you're not going to get a reliable measure. If you have 64 sensors, we can decode your emotional state in certain contexts with, you know, about 76% accuracy or so. And then in between, there's a gradient.
And so, in that gradient, as part of what we help with when we work with companies more closely, we help them understand, okay, for this sensor, this type of algorithm or pipeline that we have is going to be useful for you. But as our ecosystem continues to grow, we think we can automate all of this, where you say, okay, I have a sensor of this type. Well, we know that a sensor of that type, typically with this pipeline, will produce a good result for this particular application, like let's say for emotion. Or: the sensor that you have is not going to be useful for decoding emotion. You need at least this many sensors in this location. And with this pipeline and this algorithm, we can now reliably decode that emotional state. That comes out of having access to more and more data. Yeah. And what does it look like when you partner with neurotech companies and their hardware is getting the data? How does that work? So the way that our platform works, you can kind of think of it as, I don't know if you're familiar with Nuance. They, you know, Dragon NaturallySpeaking became Nuance, but basically it was a speech decoding company that, you know, through the 90s and into the aughts became a platform in the cloud where you basically use their API and you send it waveforms of your speech, and they run algorithms that they've trained on tons of data, and they provide back to you the decoded speech, the text, right? And you basically pay for the use of that. Our model is similar but for biosensors. So if you're a company and you want to work with us, you send it biosensor data. You don't even have to partner with us. You could just use our API.
You literally just connect your sensor to our API, send us data, stream us data, choose what kind of state you want decoded, and, not all state decoding is possible with all types of sensors, but if you presumably have the right type of sensor, then you can apply the appropriate pipeline and you get back, again through the API, the interpreted state. So if I want to measure, let's say again, something like, let's just pick attention, your attentional state. If we have an attention pipeline here on the cloud, then you get a sensor, you plug it in, you say I want to measure the person's attention once per second, and simply, with a few lines of code in your app, you just ask the server, what's the attention state of the person right now? That's it, and you just get that back as a number. All the rest of the machinery happens on our end, and you don't need to do anything more than just send data through that API. So as a developer you can make the link to Intheon, and then the consumer makes the decision as to when to actually make the calls. So the developer would build, let's say that you're a developer building an app to measure attention, right? You would build an app for your consumer, and you as the developer, for a company, you know, have that consumer-provider relationship. The consumer never talks to us. The consumer goes through you. Yeah, so the consumer queries the developer. Yeah, the consumer basically just wears the device, right? They use the app. As far as they know, they're using an app from, let's call them, you know, SuperCharged. Interesting. The consumer will never know about it. The consumer will never know that it's Intheon behind the scenes powering it. Like Amazon Web Services. Yeah, exactly. 90% of the time if you're using anything on the cloud, or not 90%, but you know, it's high, 40% of the time probably, you're using Amazon Web Services. You might be using Azure. You might be using Google Cloud.
Either way, you know that you're using this app. But behind the scenes, all the heavy lifting, in terms of not necessarily the computation but the computing infrastructure, is being done there. For us, it's all the intelligence of the decoding and making sense of those signals. That's all done by us behind the scenes. I still want to be walked through, if possible on your end, what it would be like to be wearing one of these, using a service, and measuring my emotional state or my attention state. We were saying attention state. So say I'm measuring my attention state, and my request would be to get a ping when my mind has wandered for like 5 or 10 minutes or something when I'm trying to focus, or even 5 seconds when I'm trying to focus. And so then what am I wearing, probably a full cap or something close, like I'm wearing a lot? Maybe, but not necessarily. So certain things are measurable with increasingly more miniaturized devices. So for instance, measuring a rough analog of someone's attention is possible with as little as 1 or 2 sensors. Because you're only on the prefrontal cortex? Well, where it's placed is one aspect of it, but also because the type of signal that's related to attention is a pretty large signal that's pretty well measurable through EEG. And of course I have to sort of qualify that by saying that we don't have a definition of attention that we're all agreeing on here. So I'm using attention very loosely. There are forms of attention, like spatial attention, are you attending to that point in space or that point, that I'm not going to be able to measure with that one sensor. But in terms of, am I cognitively focused or not, that's something that you can measure using a pretty low number of sensors, if you deal with the noise appropriately, to some degree of reliability that is useful. And so you don't necessarily need a large number of sensors to get something useful out.
And even better, if you can probe the brain, like I can have you do a five-minute task where I'm probing the brain and measuring the response of the brain in that task, I can extract even more information with an even lower channel count, or lower dimensionality we call it, a lower number of sensors, to get a highly meaningful result out. Nice. That makes sense. And then if you also have the task be a measurement of maybe their baseline, you can then work with the company to do an activity to increase the baseline. So that's one possibility: you can have an intervention, or a boost or something, and that boost could be stimulating your brain, doing an activity, meditating if you're trying to, say, reduce your stress. One of many different kinds of interventions. But the key is you want to then measure what the effect of the intervention was. So let's say I'm trying to do something here where my goal is to improve your attention while you're focused on a particular type of task, and I do that by stimulating a particular part of the prefrontal cortex. So you put on a device, you stimulate the brain, and then you do a quick five-minute task, and now I measure the response of your brain in that task. Now I want to know, when I stimulated your brain, did I improve the response in your brain? Did I change brain activity in a way that indicates that I'm improving brain function? And if I didn't, then I want to modify my stimulator to target it in a different way to create the desired effect that actually has an impact on the brain. And so that cycle of measuring, predicting and intervening, changing the state, and then measuring the effect of the intervention, that whole cycle is something that we help to make, well, I wouldn't say very easy, but much easier to do than if you were to try to build that whole analysis pipeline and framework for both the prediction and the assessment yourself. Oh yeah.
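The measure, intervene, measure-again cycle amounts to a before/after comparison. Here is a minimal sketch with synthetic response amplitudes, using a paired t-test as one reasonable (but by no means the only) way to ask whether the intervention changed the brain response.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
# simulated task-response amplitudes across 30 probe trials
pre = rng.normal(1.0, 0.2, 30)           # before the intervention
post = pre + rng.normal(0.3, 0.2, 30)    # after, with a built-in improvement

t_stat, p_value = ttest_rel(post, pre)   # paired test: same trials, two sessions
improved = post.mean() > pre.mean() and p_value < 0.05
print(f"mean change: {post.mean() - pre.mean():+.2f}, p = {p_value:.2g}")
print("intervention effective?", improved)
```

If `improved` came back False, the next iteration of the cycle would retarget the stimulation and run the probe task again, which is exactly the rapid loop described above.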
So now companies can come in with just the intention to have activities and then use your infrastructure to process the data. Exactly. I'd like to figure out, did that activity have an effect on the brain in these particular ways that we're interested in looking at? Did it change activity in parts of the brain that are related to attention, let's say? Yes? No? Yes, good: for that person, we have the right intervention. No: then let's try a different intervention. The key is being able to iterate rapidly in that space. The key to all innovation is failing many times. We forget that failure is the path to success, and we have to fail early and fail quick, and in order to do that we need to not expend a ton of resources on a problem only to find that it was the wrong question we were asking, right? And how many of us have embarked on a great ambitious project and dedicated a year of our lives, or months or weeks, only to find, ah, there's nothing there. As scientists we encounter that all the time, but I think all of us in some sense have encountered that in every creative project we do. Any product you're building, you think you have the right market fit, well, you want to find out really early, is my product going to have the desired effect that I want, as early as possible, so that you can iterate and iterate and iterate until you go out with your product and hit the market. So again, that's an area that we feel is really important, and it's one where we're really committed to helping shorten that cycle and the cost of all that signal analysis and processing, so you can get an answer to your question really quickly: is my device having the impact I want? Am I measuring something useful? In neuroscience, if you make that mistake, it's a really complicated place to make that mistake. It can be, because it's hard to see where the mistake was in that pipeline. That's a lot of time that you were taking on that. So then, yeah.
I want to ask this as well: within your data or your machine learning algorithms, where is the biggest hole in the data set right now? Where is the least amount of data about the brain, and where is the most, as much as you can share? That's a good question. From non-invasive sensors like EEG we actually have quite a lot of laboratory-grade data: data collected from people sitting down in a controlled environment where you have them do a particular task, constrained and isolated. So we know a lot about, for instance, how the visual system processes targets. When you're looking for something, like in a Where's Waldo puzzle, and you find Waldo, we can predict when your brain finds Waldo as fast as the brain itself is finding him: there's a neural signature of it even before conscious awareness happens, or as it's happening, even if you're not yet aware of having found Waldo. So we can decode that very quickly. It's crazy, within 250 milliseconds or so. As you find Waldo, I see that you found Waldo, and then you become aware. Another example is making errors, making mistakes. Say you're doing a simple task where you have to press a left button or a right button at a particular point in time, something like a flanker task; it's a little more complicated than that, but essentially it comes down to choosing between two buttons. Press the launch button or the not-launch button, whatever. When you press the wrong button, some percentage of the time you will become consciously aware of having made that mistake. 
But well before that happens, and even if you never become consciously aware, the brain knows you pressed the wrong button, and when it knows, it generates a signal to the rest of the brain, we think a sort of reinforcement learning signal, saying: hey, whatever you did there was the wrong thing, so don't do it again. Now, we can measure that signal faster than you can correct your mistake. Which means that if you're touch typing and you fat-finger a key, which I do all the time, then in theory, actually in practice, with one of these sensors, you could auto-correct that mistake. So that's an example of an area where we have a lot of knowledge and a lot of data around these kinds of paradigms and systems, and we can do pretty well at decoding those kinds of states. The areas where we need a lot more data are multi-fold, but there are a few key ones. One is getting data outside the lab. When I sit you down in a room and probe your brain, I'm seeing your brain in that context. That is not your brain when you're out with your friends, when you're at home, driving your car, exercising, or having a eureka moment, a genius moment of some sort. The brain is adaptive. The context of your environment changes your brain and how it behaves, and even the signatures that indicate you found Waldo will differ from the signatures that indicate you found the dress you were shopping for while walking through Macy's, or that you spotted your friend across the room. There will be similarities, but there are differences, so we need more and more data out there in the wild, as I call it, the wild of the world, which is not the lab. 
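As a cartoon of that autocorrect idea, here is a minimal sketch of threshold-based error detection suppressing a bad keystroke. The epochs, the error window, the amplitudes, and the threshold are all made up for illustration; a real error-potential classifier would be trained on actual EEG, not a fixed threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def epoch_for(is_error: bool) -> np.ndarray:
    """Toy 64-sample 'EEG epoch' per keystroke; wrong keystrokes carry an
    extra negative deflection standing in for the brain's error signal."""
    epoch = rng.normal(0.0, 1.0, size=64)
    if is_error:
        epoch[20:30] -= 4.0  # injected "error potential" (illustrative)
    return epoch

def error_detected(epoch: np.ndarray, threshold: float = -2.0) -> bool:
    """Flag an error when the mean in the error window dips below threshold."""
    return float(epoch[20:30].mean()) < threshold

intended, actual = "brain", "brsin"  # the 'a' was fat-fingered as 's'
typed = []
for want, got in zip(intended, actual):
    ep = epoch_for(is_error=(want != got))
    if error_detected(ep):
        continue  # suppress the keystroke before the user even notices
    typed.append(got)

result = "".join(typed)  # the mistyped 's' is dropped automatically
```

The interesting property, as described in the conversation, is timing: because the error signal precedes conscious awareness, the suppression step can in principle run before the user would have hit backspace.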
So one of the key things is making the world the laboratory: you put biosensors on people, they just go about their daily lives, and we record data and try to understand how their brain relates to the things they're looking at, engaging with, and seeing, and then build machine learning models that can ultimately make sense of that kind of data. So that's another challenge. That's why I want to put EEGs on the heads of the six artists that go with you, Sokka, to experience the overview effect. That would be fantastic. It would be great. You know, we're collaborating with the human spaceflight laboratory in North Dakota right now, hopefully helping build sensing technology that could support long-duration deep space exploration for NASA, extending the duration of these missions to Mars and the Moon. I'm a huge proponent of neurotech in space and of how important it is that we also measure how the brain is changing when you're up there. 
You're talking about something a little different, which is the experience the person is feeling; the artist will convey that, but what if we could see inside their minds? Now, we're not there yet. EEG won't give you a crystal-clear window into a person's mind, that's never going to happen with EEG, but we might be able to see something cool, and maybe decades from now, in the data collected then, we'll find the needle in the haystack we can't find now. So what's going on with all the giants, Kernel and Neuralink? There seem to be a lot of AI safety and security researchers, but geopolitical pressures say: be the first, collect all the data, make the biggest moves, which makes it difficult to have those hard conversations. Yeah. So the future of neurotechnology, I think, is a story of exponential growth, and we are at the very beginning of it, the super beginning. It's important to understand that first, because an exponential trend starts out looking very flat, and even early on it looks linear. But the thing about exponential trends is that they change very, very quickly, and we're entering that upswing right now; it's been building for a few decades. On the other hand, the fact that we can see what the trend might look like means we often have a tendency to prognosticate: we think that in 10 years we'll all be walking around with implants in our brains, that in 15 years we'll all communicate through telepathy. We have to be a little conservative when we think about where the technology is and how quickly it can be adopted out there in the world. What Kernel and Neuralink are doing is working primarily in the space of invasive technology, implants, which ideally would not require surgery. There are different ways of recording signals 
inside the brain now. For instance, you can snake a very, very thin wire up through a vein and record activity inside your brain; that's called a Stentrode. Stentrode, because it's like a stent, right, the kind used in cardiovascular procedures. Exactly, plus the trode. Another technology that's made the rounds is called Neural Dust, developed by Michel Maharbiz and Jose Carmena at UC Berkeley. Very cool technology. Well, it isn't quite MEMS yet, though of course it uses MEMS technology, but these are very small devices that can record brain activity and be wirelessly powered, actually powered through ultrasound, which is pretty cool, and they can even potentially stimulate. So you can record and stimulate from these teeny little sensor-stimulators that can be sprinkled, if you will, throughout the brain or across its surface. That technology is also still in its infancy; it needs to be miniaturized significantly from where it is to be truly dust-like, just invisible stuff sprinkled through your brain. But someday that kind of technology might be the future of neural interfaces, and that future may well be within our lifetimes. You may have this in your head in your lifetime; that is a possibility, and when it happens is still unknown. Those are the kinds of things that, say, Kernel and Neuralink are looking at, along with others in that space. Others are looking elsewhere: Mary Lou Jepsen with Openwater is pursuing optical tomographic imaging approaches, using light, basically, to image the brain. That's a potentially non-invasive way to image the brain at higher resolutions and see what's going on in there. Again, we still need to see how that will be proven out, especially when you try to make a consumer device out of it. It's one thing to do it in a lab with a million 
dollar device, and another thing to make it something that's a little band-aid, but that's the goal of, say, Openwater, and there are others pioneering in this space too. It's very exciting and fertile ground with a lot of activity, but it is still early days. We at Intheon see many of these technologies as the future of sensing, and we are laying the groundwork for them in our platform. We actually just finished a larger project to integrate a large number of signal processing algorithms for processing implant data, so now we can process over 15 different implantable file types and pull out spikes; spikes are the little discharges of a neuron. So we can process that kind of data too, and we think that in 20 years, when those devices hit the market, we'll be ready to decode the activity from those sensors as well, in the cloud, and connect your brain directly to whatever device or application you wish. Decoding neural dust. Decoding neural dust, and that's going to require a lot of heavy lifting and processing, because if we think the data from EEG is a lot, implants produce many, many orders of magnitude more, both in how frequently you're sampling and in how many sensors you have, how many different neurons you're recording from. I mean, there are almost 100 billion of them in there. That's a lot. It's all a lot. Now, where does one take this, what are you guys up to next, and how many people do you have? Right now we're a pretty small team; we've always been on the order of about 10 people full time, but we hope to be growing a lot soon. We're doing some exciting things I hope to be announcing soon, but our focus right now is on this platform: to empower and enable the community of transformative technology folks, of people wanting to do more with their biosensor data, to really empower them to find 
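Pulling spikes out of an implant recording often starts with something like simple threshold crossing on the raw trace. Here is a minimal sketch on synthetic data; the sampling rate, spike shapes, and threshold multiplier are all invented for illustration, not tied to any particular implantable file format.

```python
import numpy as np

rng = np.random.default_rng(7)

# One second of a toy extracellular trace at 30 kHz: Gaussian noise plus
# a few injected spikes (brief downward deflections).
fs = 30_000
trace = rng.normal(0.0, 1.0, size=fs)
true_spikes = [3_000, 11_500, 22_000]
for t in true_spikes:
    trace[t:t + 30] -= np.linspace(8.0, 0.0, 30)

def detect_spikes(x: np.ndarray, k: float = 5.0, refractory: int = 30) -> list[int]:
    """Negative threshold crossing at -k times a robust noise estimate
    (median(|x|)/0.6745), with a refractory window so each spike is
    counted only once."""
    sigma = np.median(np.abs(x)) / 0.6745
    crossings = np.flatnonzero(x < -k * sigma)
    spikes, last = [], -refractory
    for idx in crossings:
        if idx - last >= refractory:
            spikes.append(int(idx))
        last = int(idx)
    return spikes

detected = detect_spikes(trace)
```

The robust noise estimate matters here: a plain standard deviation would be inflated by the spikes themselves, whereas the median-based estimate stays close to the noise floor, which is why this style of thresholding is common in spike-sorting front ends.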
those killer apps, and to get very quickly to the point where they can have a useful, impactful neurotechnology application that really has meaning. We want to do all the heavy lifting for you through our platform, all that decoding and sensing. So our future at Intheon is very much about growing this ecosystem and this platform. Just today, actually, I gave a little preview of our new Neuroscale Insights service, which will fully launch in a couple of months but is now accepting early previews. This is a whole system for simply recording data, uploading it, applying a pipeline, and getting back all your processed results: statistical models, BCI models, and interactive graphical reports you can explore. It shows you all your data features and rich information about what was in your data: what's the quality of your data, what do your neural signals look like, what does your statistical model look like. And the cool thing is, if I were collecting data from you right now and wanted to know whether the data set I just got was good quality, whether I got the effect I wanted, whether I'm seeing the thing in the brain I'm looking for, I can know literally while you're still sitting there, within like 5 to 10 minutes of recording. What would take your research associate months, coding up all the algorithms, running the statistical model, making the figures, all of that is at your fingertips while you're still sitting there. Then I could say, oh, maybe I need to change my experiment; hey, let me get another hour with you. But this kind of insights application can also be really exciting and useful for product-focused applications. For instance, if I have a company trying to optimize, say, human performance, I want to know: did my intervention, my brain stimulation or whatever it is 
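The record, upload, apply-pipeline, get-report loop described here can be sketched in miniature. To be clear, the class and stage names below are hypothetical stand-ins invented for this sketch, not the actual Neuroscale Insights API, and the "recording" is just synthetic noise.

```python
import numpy as np

rng = np.random.default_rng(1)

class Pipeline:
    """Hypothetical stand-in for a cloud processing pipeline: an ordered
    list of named stages applied to uploaded data, accumulating a report."""
    def __init__(self, stages):
        self.stages = stages  # list of (name, callable) pairs

    def run(self, data):
        report = {"n_samples": int(np.size(data))}
        for name, fn in self.stages:
            data = fn(data)
            report[name] = data  # keep each stage's output for the report
        return report

# Toy "recording": 10 seconds of one channel at 250 Hz.
recording = rng.normal(0.0, 1.0, size=2_500)

pipeline = Pipeline([
    ("demean", lambda x: x - x.mean()),
    ("rms", lambda x: float(np.sqrt(np.mean(np.square(x))))),  # quality metric
])

report = pipeline.run(recording)
```

The design point is that the experimenter only declares the pipeline; the processing, statistics, and report generation happen on the service side, which is what shrinks the feedback loop from months of hand-written analysis code to minutes.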
that I'm applying, did I get the effect I wanted? Well, our reporting system can tell you that very, very quickly: did the change in the brain I was looking for happen when I stimulated the brain, or when I did that thing? Furthermore, it can be useful for your own kind of quantified-self report that says, oh, here's my brain activity. So all of these are applications of this new service we're launching. I love that synthesis; just the landing page makes me feel like I am deeply in tune and in touch with what exists up here. Yeah, that's a big part of it, absolutely, that we're properly crunching everything. So interesting. Now I want to ask you a question, and you're probably a good one to ask: is consciousness localized in the body? Hmm. That is a big question. So I don't know, and I don't think anybody knows, but there are a lot of theories about where consciousness arises. Some have argued there are places like the claustrum; it was a popular spot where people thought maybe this is the nexus, maybe some kind of interesting integration happens there that is necessary for consciousness. Descartes thought it was the pineal gland. The reality is, we don't know where the seat of consciousness is, or whether there even is a single seat. My personal theory, the one I subscribe to, is that consciousness doesn't arise in any one place. It is a distributed phenomenon: it arises through the interaction of many different subsystems of the brain communicating with each other and modeling each other, the brain modeling itself. It's like Indra's net, you know, the mirrors that all infinitely reflect each other. The brain, when you look into its architecture, has some of these properties: every part of the brain, connected to many other parts, is in a sense creating its own model of what it 
thinks that part of the brain's input is, and what it thinks that part of the brain is doing. It's kind of like the auditory cortex having its own version of what it thinks the visual cortex is, from the auditory cortex's perspective, and vice versa. Okay, now out of this nexus may arise conscious experience, the self-awareness that we experience. And yes, that requires certain areas: there are areas of the brain necessary for consciousness, such that if you remove them you are not conscious, but that doesn't mean those are the seats of consciousness, right? So I think the big question for consciousness research, both up until now and for the next hundred years, is to look deeper and deeper into the brain and find the areas that are not only necessary but also sufficient, both sufficient and necessary, for consciousness, and from that build a clear theory of where consciousness truly arises. What about outside the vehicle, as a soul? That's very interesting. Then we border on the metaphysical at some point, potentially, and I think that's also where one must depart from science, because science is only a tool for explaining things using observation. The soul, if I cannot observe it, then I cannot use scientific methods to test for its existence, for now, unless you find a way to observe the soul. So who knows; I don't know. But I personally believe that, scientifically, we should focus our attention on the questions around which we can develop a falsifiable hypothesis, in other words, a hypothesis whose truth or falsity I can test because I have the tools to do it. If we focus our scientific attention there, we'll make a lot of progress. The other areas are ones we absolutely should continue to explore, discuss, and debate. So, for instance, the question of where is the soul, is 
there a soul, where does it reside? That's a question absolutely deserving of discussion and discourse, but science isn't really the right tool to answer it, because we don't have the observational tools that would allow us, I think, to gain that particular insight. That said, what you raised that was interesting is the notion that consciousness may live outside the body or outside the brain, and I want to posit this other idea: that consciousness does not actually exist solely in your brain. Consciousness is a distributed phenomenon that also exists in the minds of others and in the environment around us, and we constantly offload our consciousness and cognition into the devices we interact with and into other people by interacting with them. You have a memory of me, for instance; your memory of me, that interaction between us, means that to some extent the information that is part of my conscious self-awareness is also imputed into your system, and it then feeds back into me when we interact. So there are these interesting patterns of loops: inside my brain I have areas talking to each other, but we're all networked together as a society as well, through our interactions with each other and with the world around us, and some of our consciousness and cognition may actually reside there to some extent. I don't really mean this metaphysically, although it could be; if you're a property dualist, you may actually think consciousness is in everything. I mean it more in a kind of complex-systems way: consciousness may extend outside ourselves. So to understand consciousness, we need to understand not just the brain but the context in which somebody is acting, their relationships with the people around them, and the devices and things they interact with, because that's what actually 
ends up sculpting their architecture. Exactly, and their thinking. This is so awesome, and so interestingly deep about the brain. I'm really happy to know someone taking the computational neuroscience approach, someone building out the AWS of neural processing. That's fascinating; I love that. Thank you. Yeah, well, we just want to help grow the field. I think we all have these visions of what neurotechnology can do, and our goal is to help you realize yours. We want to make your vision of what neurotechnology can do for humanity possible, and to get there as fast as possible, with as reliable and scientifically validated an approach as possible, because that's key: the right way to start is with good tools. Yes, yeah. This has been a huge pleasure, thank you so much. Thank you, a real pleasure to be here. I'm looking forward to some of your comments about this episode. Thanks for tuning in. This has been, again, a computational neuroscience marketplace and tool set, building out the future of what we need to actually figure out what's going on in here, on the software side of things. And that's always refreshing: hearing about things that can really scale, that can have people from around the world sending in data, learning about different cultures, different emotions and feelings. There's so much to unpack, and it's just part one; there's so much more left. We've got to get you into the brain-mind ecosystem; there are a lot of really fun ecosystems. Do you know Vivienne Ming over at Berkeley? I know of her, and we've been connected, but we have not spent time together. Yeah, she's the one who taught me, like four years ago, about maximizing human potential, and she got me hooked. There are so many opportunities for taking us from where we are now to the next level. If civilizations have a potential scale of 10 
levels, for civilization we're on level 0, and I want to see us at level 1. Exactly, I know. What's that going to look like? A bunch of 5- and 8-year-olds running around Earth right now, because we don't have the proper stewardship yet. We're going to develop that. Well, it's important to build things like stewardship into our technological growth system: being conscious about that as we're developing the technology, being aware of what its implications are, and building principles of stewardship into the technological growth process so that the technology is inherently for good. It will grow into a beautiful tree that will be good for everybody. Yeah, yes. Everyone, please go check out the link in the bio, and also, with what you've learned, go and build, go and manifest your dreams into the world, go and execute. Much love, everyone. We will see you soon. Peace.