Well, hello everybody, I hope you all had a very good lunch. I'm going to be talking about open source in neuroimaging, and I'm going to start off with a bit of a disclaimer and a warning. Firstly, I am not a medical professional, so please don't take my talk as medical advice; research anything for yourself. I'm also not an expert in all of the areas I'm going to cover, not everyone will be familiar with all of them, and there are actually restrictions on some of the things we'll be talking about.

So, a brief introduction: I'm a senior engineer and open source consultant at Codethink, based in Manchester, and I've also been a Linux kernel contributor, so I have some experience in open source.

So, why did I choose this area, and a few of the caveats. Brains: what does that 1.2 kilograms of stuff in your head actually do? I mean, it's not just there for cannibals. What I find interesting is that open hardware and software are now letting us find out how the brain works for ourselves. This is stuff that used to be purely medical research, and it's now getting to the point where hobbyists can do it. I'll end up talking a bit about a project that Codethink was involved in, producing such a device. The caveats are as in the previous warning: the talk isn't going to be able to go totally in depth into some of the topics, and in some places I've probably just taken the top Google example, so there are other things out there.

So, a brief definition: neuroimaging is a direct or indirect set of techniques for looking at either the structure, the function or the pharmacology of the brain. This all comes under the heading of encephalography, which means basically the same thing, and it's generally of interest to fields such as medicine, where people often need to diagnose a problem, and to psychologists, who like to try and work out how your brain is working. The topic most people here would probably be most interested in is the brain-machine interface. While researching this, I've seen a few actual products where people have been trying to get your brain to control some device; I believe there's even a Star Wars toy where you try to use the Force to influence a computer.

While researching this, I also came across the "10% of your brain" claim - you occasionally hear that you only use 10% of your brain. In fact, you use about 12% of your brain just to run the functions you need to stay alive; basically, about 10% is active at any one time.

So, I'm going to talk a little bit about the structure, but not much, because the structure is difficult to actually view, especially if you haven't got a lot of money - these scanning techniques are generally quite expensive. I mean, who around here has a functional magnetic resonance imaging machine lying around? If so, you've got too much money. Techniques such as fMRI use standard MRI imaging, which you'd normally use to look at the structure of the body, to look at how the brain is using blood: as you think about things, it requires energy, and therefore you can see the blood flow. There are also other things like PET, which is positron emission tomography.
Again, we're probably not going to see this outside a hospital: it involves an injection of a radioactive tracer, usually a glucose analogue I believe, and you scan for the usage of the glucose, which causes a small radioactive emission. X-rays will also give you a bit of information. And, just as a notable mention, if you've got a dead brain around you could do a little bit of an invasive biopsy, but those aren't very easy to find - people don't tend to leave brains lying about. Also, these days people prefer it if you're not invasive when you're doing this sort of stuff, because this sort of stuff can hurt.

However, interestingly, because most of these studies are done at universities or with other public funding, there is quite a lot of open data available. When I was looking at Wikipedia, which is where I got quite a lot of my information from, there were about 3,000 scans listed. In fact, there are now whole databases of scans because there's that much available. As one example, there's the OpenfMRI site, which has collected data from, I think, about 3,000 subjects so far and publishes it under a Creative Commons licence. I believe it's all anonymised - people obviously don't want their data being distributed with easily traceable information such as their names. There are tools out there to analyse that data; for instance, there's a tool called FreeSurfer which is designed to process this. I'm not going to go into the tooling here, just to say that this sort of thing is standardised - there are standards for the data and the processing. Unfortunately, whilst the data is open, some of the rest of the processing stack isn't: I know MATLAB is a very popular tool for data analysis, and that isn't open.

So, a little bit more about the structure, and this is what we're generally interested in if we're going to measure something for ourselves. You probably know that the brain is made up of neurons - this is the sort of thing that gets talked about with neural networks. These are, what would you call them, the hardware building blocks of the brain, and there are about 80 billion of them in your brain, so there are quite a lot of them. There are generally four or five different types: you have your motor neurons, which output data to your muscles; the standard processing neurons, which do the actual data processing and movement; and then the sensory neurons, which take data coming from things like your eyes and bring it into the brain.

What we can measure is how they communicate. Generally, one neuron does one small bit of processing, and they're linked to other neurons by synapses - I've got a picture of a basic neuron there. The basic communication methods are chemical, so those are the neurotransmitters you'll have heard about, such as serotonin. I'm not going to talk much about those, because trying to alter them is generally either a medical professional's job or illegal - and I'd say that some of the other things people do under the heading of brain training can also be quite dangerous. What we can measure is the electrical effect of the neural communication, and those signals, inside the neurons, are about a tenth of a volt.
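Before moving on to the measurement techniques, a small practical illustration of the open scan data mentioned above. Scans from open datasets are typically distributed as NIfTI files and can be read with open Python tools; the sketch below assumes the nibabel package (not mentioned in the talk, just a commonly used open library) and a hypothetical anonymised file name.

```python
import numpy as np
import nibabel as nib  # common open library for reading NIfTI neuroimaging files

# Hypothetical file name - substitute any anonymised scan from an open dataset.
img = nib.load("sub-01_T1w.nii.gz")
volume = img.get_fdata()  # 3-D array of voxel intensities

print("volume shape:", volume.shape)
print("voxel size (mm):", img.header.get_zooms())
print("mean intensity:", float(np.mean(volume)))
```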
So, the first technique we're going to talk about is EEG, which is electroencephalography - these are not easy things to say. This is measuring the electrical activity when it gets to the edge of the brain, i.e. your scalp. Generally this is done by attaching electrodes and measuring a voltage against a reference point, so you put electrodes around the head and stick on a reference electrode. By the time the neural activity has permeated through to the scalp it's about 10 to 100 microvolts, so it's not beyond the means of your own hardware to measure this. The brain's not that fast either, so we're looking at frequencies of roughly 1 to 100 hertz. The other thing to say is that the signal from a single neuron, by the time it gets to the scalp, is too small to measure, so you're actually measuring a group, which could be 10,000 to 50,000 neurons firing together.

The advantages of EEG are quite simple. Most people here with some electronics skill could put together a device that would measure that sort of voltage - you don't need to be a medical professional to do it. It doesn't require any invasive tracers or actually stitching things into people. And the good thing for, say, hospital usage is that there aren't many conditions that would limit its use: for instance, you can't go into an MRI machine if you've got a metal implant, whereas that's not a problem for EEG.

So, I've taken a quick example here. I haven't tried this myself, but this is actual open hardware and open software: this is openbci.com. They have a store where you can buy the kit, and they ran a Kickstarter back in 2013 to produce this; what's shown is a newer version of it. They're not only doing EEG: they're also adding things like EMG, looking at the muscles, and EKG, looking at the heart, so you can bring multiple measurements into one system. The example shown here is what they call the Ultracortex Mark IV. You can buy this pre-assembled from their shop, or you can buy some of the bits, and as it's open hardware and software you could use a 3D printer to print some of the parts yourself. You can have the control and sampling boards made, and the rest is pretty much DIY, and you're probably looking at about a $200 investment to get something like that. That device takes about 16 channels as standard, and you can actually upgrade it.

I believe this has been used in a number of projects. Unfortunately I couldn't find the reference to the one I was really interested in, where an artist who was struggling with muscular problems used this, plus some eye-tracking software, to continue producing art. That's an example of somebody using open systems to produce something genuinely useful.

I'll talk quickly about the hardware. The original board was based on an Arduino, and the newer one uses similar technology - it's a PIC32 - with SD card storage, and you can add things like Bluetooth. So you can store quite a bit of data if, say, you want to wander around wearing one of these, though you might get some funny looks.

Moving on, this is an example of the software. On the left is a trace of the electrical activity from each electrode, top right is a frequency analysis of that data, and then there's a plot of where it's coming from on the head, so you can roughly correlate where on the head the activity is coming from.
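To give a flavour of the frequency analysis that such software performs, here is a minimal NumPy-only sketch (my own illustration, not OpenBCI's code), assuming raw samples in microvolts at a known sample rate; the 250 Hz rate and the synthetic 10 Hz test signal are just example values.

```python
import numpy as np

def power_spectrum(samples_uv, sample_rate_hz):
    """Return (frequencies, power) for a single EEG channel."""
    samples = samples_uv - np.mean(samples_uv)      # remove the DC offset
    window = np.hanning(len(samples))               # reduce spectral leakage
    spectrum = np.fft.rfft(samples * window)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
    return freqs, np.abs(spectrum) ** 2

# Synthetic example: 4 seconds of a 10 Hz "alpha-like" wave plus noise.
rate = 250.0
t = np.arange(0, 4, 1.0 / rate)
eeg = 20.0 * np.sin(2 * np.pi * 10 * t) + 5.0 * np.random.randn(len(t))

freqs, power = power_spectrum(eeg, rate)
print("strongest frequency: %.1f Hz" % freqs[np.argmax(power)])
```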
So, you could do experiments like looking at a box of kittens, or a nice little duck, to see whether the brain is being stimulated by cuteness, and then have a look at, say, some of the politicians around instead. There's other software like this available, and generally they have interchangeable data formats.

So, these are what you could call brain waves. I can't actually remember what activity this particular trace is monitoring, but there are about seven different types of brain wave, and they're named with Greek letters. For example alpha, which is in the 8 to 15 hertz range, is associated with relaxation, and there are others - delta, which is under 4 hertz and can be quite a large wave, is often found in sleep. No, I'm not going to attempt a live demo of this; I'm quite a fan of downloading things and showing them on slides instead.

I've been showing the EEG stuff because it is actually quite popular, and with the advent of open hardware - I'm going to talk a little bit about these projects - it's becoming genuinely accessible to a lot of people. The OpenEEG project is quite an early one, designed for a few channels connected to a PC serial port, so it's not going to be hugely fast, but it was enough to get some basic information; I believe Olimex were selling, or maybe still are selling, a very similar solution to that. The Brainstorm project is not hardware as such - it's more a software suite for processing data from a number of sources, so it will take MRI and EEG data and bring it all together, providing a way of analysing that data so you can try and match the signal positions to, say, a structural scan; we'll talk a little bit more about that idea later. The last one, HackEEG, is a bit newer than these, and it's interesting: it's an Arduino Due shield, so you can stack these up to get more channels, and each board is an 8-channel board. I find it interesting in another respect too, which is that it's based around a chip by Texas Instruments that is pretty much an EEG on a chip: it has 8 differential input channels, differential inputs for the reference, and 24-bit low-noise analogue-to-digital converters built in, which can go up to about 16 thousand samples a second, and I think they're available for about $25 a chip in small quantities. So that's a few example projects for EEG.
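Before moving on, one more sketch tying this back to the brain-wave bands mentioned above: the power spectrum can be summed over a band such as alpha (roughly 8 to 15 Hz) or delta (under 4 Hz) to get a simple per-band activity measure. Again this is my own NumPy-only illustration, not code from any of the projects listed; the band boundaries are the approximate figures from the talk.

```python
import numpy as np

# Approximate band limits in Hz, as quoted in the talk.
BANDS = {"delta": (0.5, 4.0), "alpha": (8.0, 15.0)}

def band_powers(samples_uv, sample_rate_hz):
    """Summed spectral power in each named band for one channel."""
    samples = samples_uv - np.mean(samples_uv)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
    power = np.abs(np.fft.rfft(samples * np.hanning(len(samples)))) ** 2
    return {name: float(power[(freqs >= lo) & (freqs < hi)].sum())
            for name, (lo, hi) in BANDS.items()}

# A relaxed subject should show relatively strong alpha activity.
rate = 250.0
t = np.arange(0, 8, 1.0 / rate)
eeg = 20.0 * np.sin(2 * np.pi * 10 * t) + 5.0 * np.random.randn(len(t))
print(band_powers(eeg, rate))
```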
So, I'm now going to move on to a rather less hobbyist-scale method, which is MEG. This is magnetoencephalography - I didn't half pick a difficult subject to pronounce. This is magnetic sensing of the brain's activity: if you think about it, if you've got an electrical current flowing, you also get a small magnetic field, which you can hopefully detect. The fields from the brain are in the region of tens of femtotesla, which is really quite small, so you need very sensitive detectors. The advantage, for, say, analysis of the brain, is that it's slightly more accurate: with EEG you have currents radiating out through the tissue, and the brain is not a perfect conductor - and the skin and skull certainly aren't - so the signals interfere and get smeared.

Then again, if you're using EEG you're probably not looking for really detailed analysis; you might just be looking to see whether the person is having a seizure, or whether there's still any brain activity there at all. MEG has similar response times to EEG, but it's a bit more accurate. Now, the issue with this is that background magnetic noise is in the region of thousands of femtotesla, so as you can see, we have a problem: if we want to detect the activity, the background noise is in the way, which means that if you're going to build a machine with current techniques, you need to put it in an expensive magnetically shielded room. That is actually a large part of the rather large cost of an MEG system. I'm talking about MEG because this is the project I've been involved in, and we'll look at that on the next slide. The other slight issue is that not all neural activity produces a detectable magnetic field: some neurons run against each other, which means the fields they produce largely cancel each other out. So, like I said, it doesn't come cheap.

So, I'm going to talk a little bit about something I've been involved in: the MEG project I've been working on is now actually going through FDA approval. Why did we do this? Well, when we came to this, the latest technology - what was called state of the art - was from the late 1970s, which meant it was difficult to repair. I mean, who around here has a spare SPARCstation? I don't. So our goals were not only to improve it and make it repairable, but also to use as much open source technology as possible, and we're hoping that at least some of this work can itself be open sourced.

Why did we choose open source? Well, this actually started because I knew somebody at a university who was doing a research project; they obviously wanted to be able to do MEG, and their system was basically crumbling around them. This has now moved on to a commercial company called York Instruments. We wanted to make a product that had good longevity - we're not talking about an Android phone that you throw away after a couple of years; I think people were spending about $500K on one of these machines, so you can't just throw it away. As I said, we were having problems: the original manufacturer had gone out of business, and we'd like to avoid similar problems in future, where a manufacturer goes out of business and you can no longer fix a machine when it goes wrong. The other thing is that these are often used in research projects, so it's a very good feature to have a system that can be peer reviewed. We also had a hard problem in front of us, so we needed to actually produce something that worked: our original plan was to make something that would fit into the original systems, and that has now evolved into producing something better. We were also worried about security - obviously people don't want their data being leaked, or the default password being "password" - so you need to be able to actually review that.

I did put this quote on the slide, from our customer: "It is also a matter of confidence. Very few professionals in this area are confident in a product that's not based on a peer-reviewed platform. Again, to pick on Microsoft, technicians and clinicians simply don't trust Windows to be reliable, quick or useful when controlling industrial hardware."

So, a quick overview of what we put together.
So, we had a scanner that we needed to capture data from - that's the MEG down here. We built a system to capture the data from it, which is then transported over a network. We have multiple nodes: a data aggregator for storage, control and localisation nodes, and then anybody involved in the experiment can collect the data. We needed to be able to store the data, and we needed live viewing so that if anything was going wrong we could see it. Part of this being a research system is that we also needed to be able to add arbitrary experiments - for instance, you might want to show the subject some images on a screen and explore the responses - and you need a reasonably reliable way of synchronising that with the rest of the data. The experiment machines can be disjoint from the rest: your experiment isn't necessarily easy to get into the same place as the scanner. As I said earlier, there's a magnetically shielded room, so you can't really put a monitor in the same room as something that's trying to detect fields that small.

A quick run through the hardware - of course, I can't show a picture of this; our customer is very busy dealing with the FDA and getting things approved for actual use. One of the not-open improvements we made was that the original sensors for the magnetic field needed to be cooled by liquid helium. Now, liquid helium is not a nice substance to have around, it isn't contained by anything other than expensive metal containers, and I think it costs a whole lot as well. The design we're using is still cooled, but it's cooled by refrigeration instead.

On the size of the problem: I mentioned this was started some time ago - I believe our initial design was done in about 2008 or 2009 - and we're looking at about 300 channels of data recording, because that's how many magnetic sensors we have positioned around somebody's head. We wanted to get 24-bit data out of the system, where the original system was only something like 14-bit. And, for some reason, even though I've said the brain tends to operate in about the 100 Hz maximum region, we have a 118 kHz maximum sample rate. That produces quite a lot of data. We split the channels up across nodes, which works out at about 3,400 kilobytes a second per node if you're operating the system at the full rate. Generally you don't actually need to sample at that sort of rate - a few kilohertz is usually enough for data analysis. At the full rate, across the whole system, we're looking at about 250 megabytes a second. Back in 2009 we thought that was a lot of data; now it's not that much for your average PC - well, not quite, but something you could just buy off the shelf is going to handle that sort of data.
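To make that bandwidth arithmetic concrete, here's a rough back-of-the-envelope sketch using the figures quoted above. It only counts raw sample payload - the per-node and whole-system figures quoted in the talk will also include reference channels, timestamps and framing overhead, so they don't match this exactly - and the 16-channel and 4 kHz values are just illustrative.

```python
def raw_rate_mb_per_s(channels, bits_per_sample, sample_rate_hz):
    """Raw payload bandwidth, in megabytes per second, for sampled channels."""
    return channels * (bits_per_sample / 8.0) * sample_rate_hz / 1e6

# Figures from the talk: ~300 sensor channels, 24-bit samples, 118 kHz max rate.
print("one 16-channel node, full rate: %.1f MB/s" % raw_rate_mb_per_s(16, 24, 118_000))
print("300 channels, full rate:        %.1f MB/s" % raw_rate_mb_per_s(300, 24, 118_000))
print("300 channels at a few kHz:      %.1f MB/s" % raw_rate_mb_per_s(300, 24, 4_000))
```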
Now, I'll talk a little bit about the actual capture nodes. We chose an ARM-based system with Ethernet and an FPGA. As I said, we designed it with 16 channels per node, so we were going to have a number of nodes, and we didn't want them to be expensive. We couldn't actively cool them in position, because that would have added electrical noise to the system, and ARM is an excellent choice for low power, so we chose an SoC that was low power and had reasonable networking. For the FPGA we went with a fairly low-end part in the end, but that was there because we're trying to do something that is real-time in a way that Linux really can't manage, or at least can't manage easily - we'll talk a little bit about that next.

We went for Debian as a base because it supported ARM, x86 and probably anything else we were going to be dealing with. We already had a requirement to be maintainable, and Debian is both stable and maintained. There were things like Buildroot available at the time, but we didn't really go down that route: we didn't want a system that had to be manually maintained, and we were happy that we could just pull in updates and that would be fine. It was also reasonably customisable for our needs. All the ARM nodes were netbooted with a minimal Debian install, because we couldn't have hard disks - they produce magnetic noise - and we didn't have enough flash to put the entire system into each unit. Another big plus: we had at least two Debian developers on hand.

When we looked at data recording, there was a format called HDF5 that was already well used in other systems, so it was well understood, and everybody who had similar machines already knew how to process it. It's designed for large time-series datasets and has a directory structure built in, so it's easily searchable.

The second slide on the software: not surprisingly, we chose Qt and OpenGL for the front-end work. It works and it's cross-platform - whilst we're trying to do something that's open source, there are still people using closed-source systems such as Windows, so being portable reduces the time spent porting to other systems. What I found quite interesting is that a lot of this field, in the actual analysis of the data, is now moving to Python, so we were also quite happy that there were a lot of Python packages available: things like NumPy, which deals with numeric processing; Arrow, I think, is the date and time library; and then there's h5py, which is a package that lets you deal with HDF5 files.
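As a small illustration of the HDF5 format just mentioned, here is a minimal sketch of writing and reading a block of channel data with h5py and NumPy. This is my own example of the general approach, not the project's actual file layout - the dataset name, attributes and sizes are made up.

```python
import numpy as np
import h5py

samples, channels, rate_hz = 10_000, 16, 4_000
data = np.random.randn(samples, channels).astype(np.float32)

# Write one chunked, compressed dataset plus some describing attributes.
with h5py.File("capture.h5", "w") as f:
    dset = f.create_dataset("raw/meg", data=data,
                            chunks=(1024, channels), compression="gzip")
    dset.attrs["sample_rate_hz"] = rate_hz
    dset.attrs["channel_count"] = channels

# Read it back; HDF5 lets you slice without loading the whole file.
with h5py.File("capture.h5", "r") as f:
    dset = f["raw/meg"]
    first_second = dset[:rate_hz, :]
    print(dset.shape, dict(dset.attrs), first_second.mean())
```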
We also moved to U-Boot. This was our first vendor problem: when we started, one of our vendors was all "yeah, we'll have all this great stuff", and then they went out of business. So before we'd actually got to market we already had a problem with longevity. We moved to U-Boot, so all the nodes used in the project now boot with U-Boot. It was very nice: it took about a week to do, and then that problem was solved.

I'm not going to talk much about the custom control software - there's a fair bit of software to actually control the experiments. This is the sort of thing we're doing with Qt. Again, this is like the EEG display we saw earlier, showing activity, except now we have a lot more sensors. You probably can't see it, but this system has 320 active channels and 32 reference channels, and there are also 16 external digital inputs being recorded from whatever experiment is being run here.

A quick note on the FPGA side of things. We chose the FPGA basically because we're having to deal with real-time sampling: we have to keep the sampling aligned to within about 3 nanoseconds between all the nodes in the system, and that obviously isn't something you can really do in a microcontroller. But again, we could use open source here too. Whilst the PCIe and DMA cores were closed ones that we got from the vendor, we could use the Wishbone bus specification to write our own logic against, and we could use the open SPI implementation. What was very odd is that once we'd got the PCIe vendor core, the user manual said "if you want to do something with this, go and have a look at the example project" - and looking at the example project, it turned out they'd taken the open source PCIe-to-Wishbone example. It's like, why did we pay money for this?

There's a quick diagram showing what the FPGA is doing - it's doing most of the work. There's an external ADC, and we have external, changeable front-ends for doing the various measurements: either the MEG front-end measuring the magnetic sensors, or experiment front-ends taking the stimulus signals from your experiment and capturing those against the same reference.

We obviously used Linux for the kernel. That was very easy: both the SoC and vendor kernels we were supplied were actually quite close to upstream, which was pretty important to us for maintainability. We started, I think, at 3.16 and we've upgraded to 4.9; tracking mainline has actually been very easy, and we haven't had to do much for each update. The only custom code we've had to write was something that detected our PCIe device and split it up into the various bits the system required, and we could do most of that through existing kernel infrastructure, so there isn't much to say about it - it was actually quite easy.

So, we're coming towards the end of the talk. From an actual project point of view, we've found Debian to be a very good base: we've upgraded from Debian 6 to Debian 9, and that's been quite painless, even with the move to systemd. The FPGA side has a problem that we've not been able to solve, and that people are still working on: there's no open toolchain and no open IP cores, which makes reproducible builds - which we'd like to do, because we basically provide all our software as Debian packages - very difficult. The PCIe core was possibly the worst bit to try and debug, because it's a black box and you can't really see into it. We had some problems there: it turned out that although we'd spent the money on a licence for the PCIe core, the tools hadn't been given the licence, so it would work for four hours and then the evaluation kill-timer would kick in. Working out why it stopped working after four hours was quite difficult.

I haven't said anything about PCB design. Projects like KiCad are coming on and making life very much easier for people who want to do open hardware. At the moment we're still using closed solutions, because that's what everybody involved in the project is used to, and when the project started open tools wouldn't have been an option - we wouldn't have been able to do it. The improvement in that software has been really good. I'd also say that at the start of the project the data rates seemed awfully large; technology in the last five to seven years has improved a lot. And we'd hope that the FPGA tools will improve as we give feedback to the manufacturers that Windows-only tools are not good for people who actually want to do this sort of thing.

So, I'm going to bring the talk to an end - I'm going to overrun very slightly. The thing I've taken away from this is that the open source community is helping the field improve: we have people being able to do their own experiments.
We've been able to produce new hardware, and hopefully the project I'm involved in will enable people to move forward, and hopefully it will spur on some competition. There are still areas that are unfortunately closed. We'd love it if we could get down to the point of having something like an MEG machine being affordable to - not an average person, but maybe an average doctor. We'd love to see these sorts of technologies come down in price, and I'm hoping that, as more people look at this, more of these areas can actually become open source. That's the end. Thank you very much for attending.