All right, how's everybody doing? It's an honor to be here. In the next 10 minutes, Joe and I are gonna show you what the future of computing looks like. But first, I want everybody to take a second and think about a computer and what comes to mind. I bet many of you were thinking of something like this, or maybe this, or maybe even this. Maybe some of the elder members of the audience were thinking of an abacus, or potentially a TI-83. Either way, all of these computers are different, but they're similar in some ways: they all have the same basic components. The basic components of a computer are inputs, processing, storage, outputs, and the human. It's important for us to remember that the computer is worthless unless there's somebody to deliver the inputs and receive the outputs, right? So we have to remember that the human is in the loop with the computer. Let's keep that in mind for the rest of the presentation. Joe, over to you.

Thanks. Yeah, so humans are already part of a closed loop with our devices, and today the ways that inputs and feedback come back to us are limited by the technology that we have. We have a very conscious relationship with our computers, where through keyboards and mice we tell them exactly what we want, and they give us back information. More recently, a subconscious loop has developed, where based on our habits, our dwell time, and what we interact with, the information we see is subtly changed through news feeds and other notifications. But still, what computers have access to today is just the tip of the iceberg. The biosensing technology that OpenBCI has been developing for the last 10 years is going to fundamentally change what our computers have access to in order to optimize the loop that we're a part of. Conor's gonna talk about how that all started.

Yeah, so back in around 2012, I became obsessed with this mission of decoding consciousness.
I had suffered some concussions playing rugby with Joe in college. Members of my family and friends were suffering from severe mental health issues or spinal cord issues, and I felt that the way to solve these problems was to take the information from the brain, AKA brainwaves, and put that information into a computer. I thought that the brain was the seat of the soul, the key to consciousness. And so we set out on this journey to build a wearable that could cheaply get brainwaves into a computer, so they could be expanded into other applications: robotics, art, entertainment, gaming.

And this is what it turned out like. We ended up building the Ultracortex headset and the Cyton biosensing amplifier. For the past 10 years, these have been OpenBCI's flagship products. They've been used in over 100 countries around the world. They've enabled low-cost education, research and development, and even deep-tech innovation in the laboratories of major consumer technology companies.

But there's been one shortcoming, and we've learned this over the past 10 years: we built a one-directional brain-computer interface. We didn't build a practical brain-computer interface, because we didn't close the loop. We only gave the system the ability to send information into the computer. What we learned is that to have a true brain-computer interface, you need to close the loop: just like the computer, the brain needs both inputs and outputs. In addition, we learned that brain data by itself is actually quite boring. It needs context. The brain is a very important part of the mind, but to truly understand the mind, you have to listen to the body too: things like your heart, your respiration, your movement, the sweat on your skin. Right now I'm sweating, and it's because I'm a little nervous. All of these things matter, and they add context. Over the past 10 years, OpenBCI has been innovating nonstop.
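The "context" Conor describes has a simple software shape: samples from many sensor modalities stamped onto one shared clock, so brain data never arrives alone. A minimal sketch in Python with invented field names (this is not Galea's actual data format):

```python
import time
from dataclasses import dataclass

@dataclass
class MultimodalFrame:
    """One time-stamped snapshot across modalities (fields are illustrative)."""
    timestamp: float     # shared clock, in seconds
    eeg_uv: list         # brain: microvolts per electrode
    emg_uv: list         # muscles: electromyography
    eog_uv: list         # eyes: electrooculography
    ppg: float           # heart: photoplethysmography reading
    eda_us: float        # skin: electrodermal activity, microsiemens

def make_frame(readings):
    """Stamp one set of modality readings with a single shared timestamp,
    so downstream models see them as one synchronized observation."""
    return MultimodalFrame(timestamp=time.monotonic(), **readings)
```

The point of the single `timestamp` field is exactly the "context" argument: an EEG spike means something different when the heart rate and skin conductance captured at the same instant are attached to it.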
We've been working with people inside and outside the company, leveraging the power of the open-source movement to maximize innovation and access to the technology. We used what we had learned, that it needed to be a closed-loop system and it needed to be multimodal, meaning lots of different types of sensors, to work on a new type of product. It's called Galea.

Galea is a sensor-fusion super tool. It's packed with sensors that can measure the user's heart, skin, muscles, eyes, and brain, and it couples that capability with the ability to close the loop through AR/VR headsets and high-resolution audio and visual feedback. One really cool thing about Galea is that it is not just a concept; we've actually been working with the system for the past few years. Our alpha partners, and there are only a few of them because these things are difficult to build, have been helping us do things like create hands-free control schemes, replacing the mouse and keyboard in the context of AR, VR, or even 2D screens. But on top of interaction, we're also interested in a more passive form of brain-computer interfacing, which is quantifying the qualitative: how do we take these squiggly lines coming out of the computer and turn them into things like emotions, intentions, your digital twin?

And so with that, I'm really excited that we're about to unveil what we've been working on for the last five years. This is the product, the Galea beta system, that we're gonna be shipping to customers in Q2 of next year. So Joe, take it away.

Yeah, here it is. This is the latest form of the headset that we've been building, and it's what's gonna go out to customers starting in Q2 2024. We're very excited by what these early adopters are looking to accomplish with this system.
These are enterprise teams focused on entertainment, gaming, and training who are looking to build adaptive experiences that can change based on the real-time reactions of the user's brain and body. Product development and human factors research teams in the automotive and aviation industries who are looking to do research on products that haven't even been built yet; they're using VR, and they want to augment their surveys with quantitative data from sensors. And in the healthcare space, there are people looking to develop new biomarkers for disease and analyze the effectiveness of treatments. Inside OpenBCI, we've been using Galea for our own assistive technology projects, which Conor can talk a little more about.

In 2019, a legendary neurohacker by the name of Christian Bayerlein reached out to me, and he asked if we could help him realize a vision of flying a drone. This is something he'd never been able to do, and at first I was very skeptical; I didn't know if we'd be able to do it. But we started listening to his body, and we realized that he had residual intention: these little buttons riddled around his body that he didn't even know he had access to. So we connected electrodes to them, we amplified them, and we built a control scheme that allowed him to learn how to use his muscles for the first time. We translated those signals into some square bar graphs and circle visualizers, then mapped that system onto a two-dimensional joystick. At that point we multiplexed it, essentially building a system of two joysticks that he was able to use to fly a drone on the TED stage earlier this year. This was one of the most meaningful things I've ever been a part of in my entire career. But it took $100,000 of equipment and six months of focused R&D and system integration, and it doesn't need to cost that much.
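A control scheme like the one built for Christian can be sketched in a few lines: rectify and smooth each residual-muscle channel into an envelope, normalize against a calibration maximum, apply a dead zone, and subtract opposing channels to form a joystick axis. This is a minimal illustration under assumed window sizes and thresholds, not the actual system:

```python
from collections import deque

class EnvelopeChannel:
    """Turns a raw residual-muscle signal into a 0..1 activation level."""
    def __init__(self, calibration_max, window=50, deadzone=0.1):
        self.calibration_max = calibration_max  # peak seen during calibration
        self.window = deque(maxlen=window)      # moving-average smoothing buffer
        self.deadzone = deadzone                # ignore noise-level activity

    def update(self, sample):
        self.window.append(abs(sample))                     # rectify
        envelope = sum(self.window) / len(self.window)      # smooth
        level = min(envelope / self.calibration_max, 1.0)   # normalize
        return 0.0 if level < self.deadzone else level

def joystick_axis(positive_level, negative_level):
    """Map an opposing pair of channel activations to one axis in -1..1."""
    return positive_level - negative_level
```

Two such axes give one 2D joystick; repeating the arrangement over a second set of channels, as described above, gives the multiplexed pair of joysticks used for the drone.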
And that's really our goal for the next 10 years: to take everything that we did with Christian and everything that we're building with Galea and bring it into one system. That's our other announcement. We are going to be working on Galea Unlimited for however long it takes to make it. Galea Unlimited is everything that you see there, put onto the body. Still gives me goosebumps, and I've watched that video about 200 times.

So Galea Unlimited, let's tear it down. This was one of the first drawings that we did to try and concept it. I'm going to start with layer seven, which we're dubbing the neck tech at this point: it's where a lot of the hot and heavy parts of the PC are going to be moving onto the body. The top layers are an exploded version of the headset, starting with the layer in the front, which is an aesthetic visor layer. Then there's a modular optics layer where you can swap in different optical systems for different scenarios. Layer three is an inside-out tracking and eye tracking system for both environmental and eye information, and layer four holds physio sensors for the face. So yeah, this is a cool ghost-shell wireframe of the concept system at this point. And now I'll just jump through these real quick. Wait, let's go back to this one. This one's cool. Everybody loves this one, right? All right, jumping through. These are those layers that I was describing, but in higher resolution. I'm just going to click through these for time. Some ear sensors, and here's an exploded view.

So Joe, what does this all accomplish? Like I said, let's go back to that loop. We know from our customers today that the latency and the synchronization of all of these different sensors is a major challenge. By bringing this all into one system and putting it on the body, we are reducing the latency and speeding up this feedback loop.
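The synchronization challenge can be made concrete: each sensor samples at its own rate, so before any closed-loop processing, the streams have to be aligned on a common timeline. A naive software-side alignment picks, for each sample of a reference stream, the nearest-in-time sample of the other stream; a wearable that drives every sensor from one shared clock avoids most of the jitter this introduces. A minimal sketch:

```python
import bisect

def align_streams(reference, other):
    """For each (t, value) sample in `reference`, pick the sample in `other`
    whose timestamp is closest. Both streams must be sorted by time.
    Returns a list of (t, ref_value, other_value) tuples."""
    other_times = [t for t, _ in other]
    fused = []
    for t, ref_value in reference:
        i = bisect.bisect_left(other_times, t)
        # candidate neighbors: the samples just before and just after t
        candidates = [j for j in (i - 1, i) if 0 <= j < len(other)]
        j = min(candidates, key=lambda k: abs(other_times[k] - t))
        fused.append((t, ref_value, other[j][1]))
    return fused
```

For example, aligning a 250 Hz stream against a much slower one simply repeats the slow stream's latest nearby value, which is exactly the kind of cross-sensor bookkeeping that gets cheaper and tighter when all the sensors live in one integrated system.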
When that loop reaches the point where it's happening faster than our ability to perceive it, where we can't necessarily keep track of the ways that all of these sensors and inputs are adjusting in real time, it's going to unlock an entirely new form of human-computer interaction that feels like a natural extension of our bodies. This is gonna have a lot of unforeseen potential, but it's also gonna have a lot of risks.

In order to realize the immense positive potential of neurotechnology, we need to change the current unhealthy status quo that we have with our personal data and our personal devices. We're not in control of that today. We believe that users need to be firmly in control of all of their data and given the keys to a mental vault. OpenBCI is committed to prioritizing users as the number one stakeholder above all others, including ourselves as the manufacturer. We think this is a key design constraint for any neurotechnology company trying to put these devices into the world in a positive way. And to make sure that our devices are really our trusted confidants instead of digital saboteurs, we need to change the status quo.

At OpenBCI, we have had a vision that has not changed, but only become clearer: to design the next great technological turning point in human evolution, and finally merge with our greatest creation. I don't believe that the 21st century will be remembered as the century of artificial intelligence; rather, I believe it's going to be immortalized as the golden era of human intelligence. At OpenBCI, we are building the first computer that leverages all of the vast and ever-expanding capabilities of AI with a contained and controlled objective: to put you in control of your mental vault, and to protect your mental health, safety, and freedom to flourish. The computer will be a network of discrete wearables and peripherals, managed by this mental vault to preserve your cognitive liberty.
And in every way, this computer will behave like it's part of your body, brain, and mind. So, with a final announcement: as of today, December 1st, we are launching our Series A. If you are interested in helping us accomplish this mission to build the operating system of the mind, managed by you and your mental vault, please come find us after the talk. We would love to chat with you. Thank you, thank you.