So my name is Graham. I am the Chief Scientist at Interaxon. We make electroencephalography (EEG) systems for consumers and for researchers: very low-cost, high-volume systems, primarily oriented toward consumers and consumer use for neurofeedback and neurofeedback-assisted learning. We got our start making experiences for audiences. This is 20 people hooked up to different EEG systems, each of them driving a different component of a symphony, or a cacophony, depending on your perspective. The origins of how we got here lie in trying to create a BCI, a brain-computer interface, to drive control of a mouse cursor and interactions in augmented reality back in 2000, 2001, in Steve Mann's wearable computing lab at the University of Toronto. So this is Chris, one of our founders, wearing what they called the EyeTap, a 640x480 thing that required you to carry a giant backpack. We got into brain-computer interfaces in an effort to make interaction with this interface more intuitive and easier. It turned out that that didn't work at all: it's very, very difficult with EEG, especially sparse EEG, to get enough degrees of freedom to control an interface. But in the process of building this thing, and we took the whole thing to the Vancouver Olympics and drove the lights on the Parliament buildings in Canada, on the CN Tower, and at Niagara Falls, what Chris learned was that you had to learn how to control your attention in order to push the signals around for this BCI we were trying to build, and he accidentally taught himself meditation. So the tool became not a BCI for the control of augmented reality but a meditation learning tool: neurofeedback-assisted meditation. And there seems to be some evidence, pretty strong evidence now I think, or strengthening in any case, that this actually works to teach meditation.
We're getting some interesting outcomes from this: people acquire the skill a little faster than they otherwise would, and they stick with it, so adherence to a learning program is a little better. The way this works is very straightforward. You pop in your headphones, put on this EEG system, connect it via Bluetooth to your smartphone, and it puts you in a virtual auditory environment where the weather changes with shifts in attention. If your attention stays stable and focused, the weather is calm. As your attention shifts, we pick up those signals; we interpret the EEG in a way that allows us to change the weather. So it's a dynamic environment where you get a thunderstorm when your mind is wandering, and your goal is to use those cues to focus your attention. Okay, so there's a whole bunch of gamification built into this. People get really into brain stuff, as I'm sure you know. So you give them numbers, and they get really hooked on those numbers, and then you have a problem, because people are hooked on numbers that are a little bit made up; they're derived from EEG metrics. That's an interesting thing we learned, and it's still a problem we're trying to solve: how to make the feedback you give people as veridical as possible. We see some interesting neuroplastic effects in longitudinal studies of this. After about four weeks of use, you see augmentations of the P3 response in things like the Stroop task, you see reductions in reaction time, or in reaction time variance, on the Stroop task, and you see reductions in things like perceived stress and improvements on the Brief Symptom Inventory. Okay, so we made this cool thing. It sort of works for most people. And that made us think: how can we put this into a form factor that is more engaging and easier to use? One of those form factors was eyeglasses. We can make a four-channel system built into sunglasses or eyeglasses.
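The feedback loop described above can be sketched in a few lines. This is a toy illustration only: the band-power ratio and the thresholds below are assumptions I've invented for the example, not Interaxon's actual (proprietary) attention metric.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate in Hz (an assumption, not a Muse spec)

def focus_score(eeg, fs=FS):
    """Toy attention metric: beta / (alpha + beta) band-power ratio.

    Illustrative only -- the real product's metric is not public.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=fs)
    alpha = psd[(freqs >= 8) & (freqs < 12)].mean()
    beta = psd[(freqs >= 13) & (freqs < 30)].mean()
    return beta / (alpha + beta)

def weather_for(score):
    """Map the metric onto the auditory feedback environment."""
    if score > 0.6:
        return "calm"          # stable, focused attention
    if score > 0.4:
        return "windy"         # attention drifting
    return "thunderstorm"      # mind wandering

# Simulated "mind-wandering" signal: strong 10 Hz alpha plus noise.
rng = np.random.default_rng(0)
t = np.arange(FS * 4) / FS
wandering = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
print(weather_for(focus_score(wandering)))  # strong alpha -> "thunderstorm"
```

In the product this loop would run continuously on streamed Bluetooth data; here a single 4-second window stands in for one update of the weather.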
You can put prescription lenses in these; they're very easy for people to wear. Another is neuroadaptive virtual reality, because a virtual reality headset gives you surface area all over the scalp. You can put electrodes on it and capture data from a variety of locations on the scalp. You can do visual tasks, auditory tasks, cognitive tasks, and you've got nice whole-head coverage for doing some interesting things, even source localization. The big advantage is not that the signal quality is as good as a laboratory system; it's not, because it's a dry electrode system. It's that the cost is so low that you can use it on a lot of people, and the setup time is very short. So you can get comparable signal quality between an actiCHamp and a Muse, but the cost, on the bottom right here, is pretty significantly different, and the setup time is shorter. At the level of populations: we collect data and save it to a cloud database to drive part of the experience for our users, and this allows us to dig into the data. We do appropriate consent; we ask people to share their data with us. This is some of that user data, analyzed in Allison Sekuler's lab at McMaster, looking at the change in alpha peak frequency with age. There was some indication that alpha peak frequency changes with age as a result of cortical slowing, and it turns out that this is actually a pretty linear phenomenon when we look at it from about 20 years of age up to 80. This is about 6,000 subjects; we could run it on 100,000 now if we wanted to. And we do, and we find very similar, robust results: there is a very linear shift in the alpha peak frequency. So this is a kind of neuroinformatics analysis that we've dug into, and the plots on the left here are the distributions by age, by decade.
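The alpha-peak-frequency analysis can be illustrated with a minimal pipeline: estimate each subject's spectral peak in the alpha band, then regress peak frequency on age. Everything numeric below (sampling rate, slowing slope, noise level) is invented for the example; only the general method, a PSD peak in the alpha band followed by a linear fit, comes from the talk.

```python
import numpy as np
from scipy.signal import welch

def alpha_peak_frequency(eeg, fs=256.0):
    """Frequency of the largest PSD peak in the 7-13 Hz alpha band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2))  # 0.5 Hz resolution
    band = (freqs >= 7) & (freqs <= 13)
    return freqs[band][np.argmax(psd[band])]

# Synthetic cohort in which peak alpha slows linearly with age.
# The -0.02 Hz/year slope is a made-up number, not the McMaster result.
rng = np.random.default_rng(1)
fs = 256.0
t = np.arange(int(fs) * 8) / fs
ages = rng.uniform(20, 80, size=200)
peaks = []
for age in ages:
    f_alpha = 10.5 - 0.02 * (age - 20)  # assumed linear cortical slowing
    sig = np.sin(2 * np.pi * f_alpha * t) + 0.5 * rng.standard_normal(t.size)
    peaks.append(alpha_peak_frequency(sig, fs))

slope, intercept = np.polyfit(ages, peaks, 1)
print(f"slope: {slope:.3f} Hz/year")  # recovers roughly the assumed -0.02
```

At consumer scale the same per-subject function would run over hundreds of thousands of cloud-stored sessions; the linear fit is what makes the "very linear shift" claim testable.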
So that's a sample-size effect, and it also seems that we are selecting for people who are particularly concerned about their brain health: 80-year-olds who are using devices to learn how to meditate are probably healthier than 80-year-olds who are not. But the interesting thing is that this potentially allows us to detect when an individual goes off the rails, or to look at population-level differences associated with health conditions. This is some of Randy's data from Baycrest, some interesting effects on speed of learning in neurofeedback. You see these speed-of-learning effects, where some people pick up this skill within a minute, only in very large samples. This was 600 subjects collected over a 12-hour period, and the effect, Max-P, only pops out above about 250 subjects when you sub-sample the data and simulate the statistical model. So the things we think about in neurodata, as a company that collects a lot of it: it's sparse neurodata, and it's noisy, but we get insights you otherwise wouldn't because we have such a large population, and we get a lot of repeated measures. We think about tools for really big data; the tools we need to do the kind of neurodata analysis we want to do don't necessarily exist. So we work with the team at MNE. Nicole from our team, who's here, has been to some of the MNE code sprints, learned how that works, and brought it into our pipeline. Who sets data standards when neurotech companies generate much more neurodata than academic labs do? This is something we should all be thinking about as a community. If you've got consumer neurotechnology out there, companies will be collecting data at a faster pace. If we can collect tens of thousands of EEG sessions per day, that's probably more than most academic EEG labs in North America combined. How can the private sector contribute to standards and to tool building?
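The sub-sampling exercise mentioned above, re-running the statistical test on random subsets to see at what sample size an effect becomes reliably detectable, can be sketched like this. The effect size, the choice of a two-sample t-test, and the group structure are all assumptions for illustration, not the actual Baycrest analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Toy stand-in for the dataset: 600 subjects, a small group difference
# (e.g. fast vs. slow learners). EFFECT is an assumed effect size in SD units.
N_TOTAL, EFFECT = 600, 0.3
group = rng.integers(0, 2, N_TOTAL)
score = rng.standard_normal(N_TOTAL) + EFFECT * group

def power_at(n, n_sims=200):
    """Fraction of random size-n subsamples where a t-test gives p < .05."""
    hits = 0
    for _ in range(n_sims):
        idx = rng.choice(N_TOTAL, size=n, replace=False)
        g, s = group[idx], score[idx]
        _, p = stats.ttest_ind(s[g == 0], s[g == 1])
        hits += p < 0.05
    return hits / n_sims

for n in (50, 100, 250, 500):
    print(n, power_at(n))  # detection rate climbs with subsample size
```

The point the simulation makes is the one from the talk: with a small effect, the detection rate stays near chance at lab-scale samples and only becomes reliable somewhere in the hundreds of subjects.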
And this is, I think, important, because whoever's collecting the most data is potentially going to set the standards and create the tools, and it's very, very important that there's a lot of interaction between academic science and industry science. How do we roll out rigorous experimental protocols at scale to non-experts? How do we make sure that science is reproducible when the people putting these devices on their heads, or using consumer neurotechnology, are not necessarily trained to the same degree as people doing this in laboratories? And how are we going to trust that the quality of the data is what we want it to be? Another thing I think about is how to get science trainees like you, some of the graduate students and post-docs here, to think about private-sector opportunities: how do we get you more interested in bringing the skills you've acquired in this community out into the private sector, really contributing and bringing best practices from the neuroinformatics community into companies like ours? And finally, and I'll leave this for Richard to talk about after this, how do we protect IP in ways that support science rather than locking it down? I'll save the rest for, I guess, a little bit later and hand it over to our next speaker.