So it is now 10:40, which means it's the start of the next awesome presentation in here. So we have David Rowe. He seems pretty cool. He's presenting about radio, and it's like radio, Rowe, how to remember who the presenter is. So let's give a warm welcome to him.

Thank you very much all for coming. It's a real honour to speak to you. My name is David Rowe, and I'm pretty excited about what's happening to radio. For the first time we, the open source community, are starting to get control of the whole stack, right down from applications to the physical layer. And in particular today I'd like to talk about that transition from where we are now with radio and software radio, right down into the physical layer. I'm going to take a few case studies from my own personal journey through open source. I've been working on open software and hardware projects, mainly in the physical layer DSP type area, for the last eight or nine years. And I've learned a few things and experienced a bit of frustration, which is sort of pushing me in the direction that I'm in now. I'd like to talk about why we want open source in the physical layer; talk a little bit about chipsets and how we currently do software defined radio (I'm not a big fan); talk about spectrum and pirates, people illegally using spectrum, and regulation, and some of the challenges it might be facing in the next few years as software radio becomes more predominant; and then some trends, and then some challenges for you guys, where we actually need help to make this transition at the moment. So I'll start off just talking about what is a radio. This is just a very general model that applies to both the transmitter and the receiver. You'll find all these building blocks in any radio, whether it be an AM broadcast radio or a car radio or a Wi-Fi modem. First of all, we have some sort of filter, because radio spectrum is pretty broad.
That limits us to just the sort of frequencies that we're interested in at the moment. There's always some sort of amplifier: signals we receive off the air are very weak and need to be brought up to levels that we can process comfortably. And when we want to send a signal in a transmitter, we take signals that are fairly weak and usually pump them up to quite a high signal level to transmit them. The next two circles there are a mixer and a local oscillator. What they do is they're your tuner. They let you take that particular part of the radio spectrum that you're interested in and just isolate that and work on that down at baseband. So they just do a frequency changing step. They might take your 2.4 gigahertz Wi-Fi and shift it down close to zero hertz, so it's easy to process by your baseband processing software such as your modem, which is the next step. That usually takes analog samples, the signal off the air, and converts that into bits and bytes. And in the opposite direction you'll take your bits and bytes and convert them into some sort of analog signal that can be sent over the air. How well you implement the modem often defines how high performance your radio system is. That's a pretty critical part. These days pretty much all digital systems will have some sort of FEC, or forward error correction. That's where we add some redundant bits to the signal we transmit, and we use those redundant bits to help us correct any errors that might be received at the other end. There's often some sort of codec that will compress the source information that we want to send over the air into something that's a little bit more manageable for the radio bandwidths available. Spectrum's fairly limited, so if we can squash the speech or video or even text signal into a narrow bandwidth, then that's a good idea. So we often have some sort of codec. And just aside from that there's things like the protocol and the application. I don't really play there.
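The mixer and local oscillator step described above, shifting a signal from its carrier frequency down near zero hertz, can be sketched in a few lines. This is a toy example with made-up numbers (a 10 kHz "carrier" at a 48 kHz sample rate standing in for 2.4 GHz Wi-Fi), not any particular radio's design:

```python
import numpy as np

fs = 48000          # sample rate (Hz), an assumption for this toy example
fc = 10000          # carrier frequency we want to tune to (Hz)
t = np.arange(fs) / fs

# A carrier with a slow 100 Hz amplitude envelope as our "signal of interest".
envelope = 1.0 + 0.5 * np.cos(2 * np.pi * 100 * t)
rx = envelope * np.cos(2 * np.pi * fc * t)

# The mixer: multiply by the local oscillator e^{-j 2 pi fc t}.
# This shifts the spectrum so the content at fc lands near 0 Hz.
lo = np.exp(-2j * np.pi * fc * t)
baseband = rx * lo

# Crude low-pass filter (64-tap moving average) removes the 2*fc mixing image.
kernel = np.ones(64) / 64
filtered = np.convolve(baseband, kernel, mode="same")

# The magnitude of the baseband signal now tracks the original envelope
# (ignoring the edges, where the filter hasn't settled).
recovered = 2 * np.abs(filtered)
err = np.max(np.abs(recovered[2000:-2000] - envelope[2000:-2000]))
```

The same multiply-by-complex-exponential idea is what the mixer/LO hardware does with analog components; here it's just arithmetic on samples.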
One of the reasons is it's been done really well by you guys. We have some really good open protocols and applications. But it's all these lower layers that are still stuck in closed source land, and that's where I like to work. So that model was fairly symmetrical for transmit and receive. Some of the blocks will be in different orders and things, but you'll find the same sort of blocks. Same sort of design for pretty much all frequencies, from broadcast radio right up to optical. In the case of optical it might be a laser as your transmitter and a photodiode as your antenna for the receiver, but the same principles will apply. Similar design no matter what your payload data is; you can even use that similar model for analog radio. And as we transition from analog to digital radio it's kind of useful to make comparisons. For example, when we listen to a radio or a GSM phone call and part of the phone call gets chopped out, or a few words, we tend to put it back together in our own head. That can be viewed as a form of analog forward error correction, as distinct from what a digital system does, which would automatically put it all back together for you. As I said, the protocol and application are already pretty well done by us, but what I'm really interested in is how much further towards the antenna we can push open software in particular. This is kind of where we are today. Those of you who were lucky enough to attend the Open Radio Miniconf on Monday will have built pretty much what I've got in that red box there for the hardware. So that sort of stuff, when you buy a software-defined radio today or build one, you'll find that's where the hardware will lie. Everything outside the red box tends to be software. So in the case of what we built on Monday, the software was running on the laptop for the modem, the forward error correction and perhaps the codec, whereas the stuff inside the red box, all those building blocks, was in hardware.
This is where I'm interested in going. Can the only thing that's left be the antenna, the only part that's actually hardware? Now that's a real aggressive goal and a simplification. What might actually happen is that things like the filter and amplifier might be a combination of open source software and some very simple hardware, but I want to keep pushing towards that vision. I'd like now to talk through a few case studies from projects I've worked on, which have revealed to me I guess some of the issues with the current way we build radios, and how they can be addressed by the open source community. In 2008 a group of open source developers set out to build a mesh VoIP router called the Mesh Potato. I did speak about that a couple of years ago here, so there's videos available if you're interested. But it was a router that you could plug an analog telephone into. You could put this thing up on your roof, TV antenna height, and create a mesh telephony network in a village, for example. The application was in the developing world, where telephony, in particular mobile phone calls, was very expensive, and this was a way for communities to build telephone networks and talk to each other over moderate ranges. So the application wasn't particularly commercial. It wasn't like a start-up in the sense of we were all trying to get rich. We were trying to help people. It was non-commercial. We needed to use commodity Wi-Fi chipsets, and we chose one from Atheros, and we soon discovered it was very, very difficult to get any sort of data. When we came to manufacture the thing it had to go down a production line that had expensive test equipment with proprietary hardware and software to calibrate the thing so that it would work properly and within the regulatory frameworks. This is what the device ended up looking like with a typical end user, although it would normally be installed at some sort of height, like on the roof of a hut or something like that.
So we had pretty close to zero support from the SoC vendor. Pain, frustration and delay in the project. And that was because we were constrained by this proprietary hardware. They had this nice little chip that was the central processor, a system on a chip that also had the Wi-Fi radio, but we just couldn't get into the thing or get any data for it. In contrast, we had some far more complex parts of the system. If you think of what goes on in an operating system: we were using Asterisk, the IP PBX, and Speex speech compression. All those things just worked. We hit make and they were running, and yet we were really just held back by this closed hardware radio device. So some very frustrating closed source bottlenecks. And just as a disclaimer, the latest Mesh Potato apparently has had much better support from the vendor, so it's good to see some things are changing. But what worked in that project? The open source software really worked well. A small team came up with an innovative design and put the thing into production, our own custom hardware. So that was great. What didn't? As I said, the closed radio hardware. The other thing we found out was that the Wi-Fi protocols were really bad for voice over IP, very inefficient, and I'll talk a bit about that later on. So it got me thinking about the issues of standards compliance for novel radio applications, and the issues around closed hardware for radio. Another project I worked on around about the same time was an echo canceller. Now these are used any time a four-wire telephone network hits a two-wire telephone network. And a good example is when your mobile phone call hits the exchange and goes back into a landline. And you'll often hear this on mobile phone calls, where you'll hear a bit of echo at the end of the line that comes and goes.
What's happening there is that when we need to convert a transmit-receive signal, such as you get from a GSM phone or a voice over IP system, to a two-wire telephone line that goes to a landline, you have a thing called a hybrid. That's a little electronic building block; the idea is it separates the transmit and receive signals and combines them. Unfortunately they're not perfect. So when the far-end speaker is talking, let's see how I can get that mouse across, so they're talking to you from the other end saying one, two, three, and you're listening to them on your handset here. A little bit of that signal gets sent back up the line towards the far-end speaker, and he hears his own voice coming back to him: echo, which puts a lot of people off. What the echo canceller does is it measures what's coming off the line here and what's going back down the line, and it tries to make a model of that echo and subtract it, so that the person on the other end of the phone doesn't hear the echo. And that's called an echo canceller. They're actually really hard to write. I tried it off and on for around 15 years. Couldn't get one working in a proprietary closed environment. It was a huge problem for Asterisk; any of you who were involved in Asterisk for 10 years will know what I'm talking about. Many people had a go and couldn't get it working, and even all the Asterisk guys were saying, oh, you have to have a DSP chip. A DSP chip is a small CPU that's optimized for signal processing. Usually it has proprietary software mask-programmed on it. They were screaming: it's got to be hardware. You can't do it in software. You need to use hardware because it's so hard. It's all covered by patents. I experienced a lot of past failure myself. Open source gave us a bit of an edge. Things like crowdsourcing the testing. When I came up with a candidate echo canceller design I just threw it out there. People would start testing it. That helped us discover some really weird corner cases.
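The core of the echo canceller described above, model the echo path, predict the echo, subtract it, is commonly built around an NLMS adaptive filter. Here's a minimal sketch with a made-up echo path and white noise standing in for far-end speech; it illustrates the technique, not the actual production design from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Far-end speech surrogate: white noise "going down the line".
n = 20000
far_end = rng.standard_normal(n)

# The hybrid leaks some signal back: model the echo path as a short FIR filter.
echo_path = np.array([0.0, 0.4, -0.25, 0.1, 0.05])
echo = np.convolve(far_end, echo_path)[:n]

# NLMS adaptive filter: estimate the echo path and subtract its output.
taps = 8                     # a few more taps than the true path
w = np.zeros(taps)           # adaptive filter weights
mu, eps = 0.5, 1e-6          # step size and regulariser
out = np.zeros(n)
for i in range(taps, n):
    x = far_end[i - taps:i][::-1]        # most recent far-end samples first
    y = w @ x                            # our estimate of the echo
    e = echo[i] - y                      # residual sent back up the line
    w += mu * e * x / (x @ x + eps)      # NLMS weight update
    out[i] = e

# Echo return loss enhancement (dB) over the tail, after convergence:
# how much of the echo power we removed.
erle = 10 * np.log10(np.mean(echo[-5000:]**2) /
                     (np.mean(out[-5000:]**2) + 1e-12))
```

Real telephone echo paths are longer and speech is far from white, which is exactly why the weird corner cases the crowd found matter, but the subtract-a-model structure is the same.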
For example, what happens when a microphone like this, which captures quite a wide range of audio, hits the telephone network, which is designed for very narrow bandwidths. So we got some really weird corner cases that usually break echo cancellers, and we discovered them just by crowdsourcing it. There was no need for painful formal speech testing. Normally when something like this involving speech needs to be tested formally, you get hundreds of speakers, statisticians involved, hundreds of utterances, and you spend a lot of money. We just threw it out there and got people to use it in the real world. We didn't care that it wasn't perfect or complete. We added self-instrumentation. When people would complain of an issue I'd tell them to run a certain command line. That would dump the states of the echo canceller and the input and output audio files. They'd email them to me and I could reproduce exactly what they were experiencing and debug it. All these sorts of things are very difficult inside closed-source developments. So what we saw here was open source really delivering a better solution. The thing worked very quickly at minimal effort. Another case study was FreeDV. That's I guess my current area of work, which is open source digital voice for HF (high frequency) radio. Now HF radio is sort of the last area where analog actually outperforms digital, and there is no real incumbent digital standard. It's a really tough technical problem due to the nature of the HF radio channel. It tends to wipe out modem signals very effectively. And it's currently littered with proprietary modems, codecs and very expensive radio hardware. It's quite easy to pay $8,000 or $10,000 for a commercial quality HF radio. How people typically make digital voice systems for HF radio is you license a speech codec from one person, a modem from another, and you might try and put it together with some FEC and some protocols. And I was told that you can't do layer violations.
So you have to take these things as separate black boxes and just butt them up against each other without really considering what's inside. I didn't like that. I said, oh yes, you can. So I started mixing up modem and codec and FEC layers and breaking a lot of conventional rules. And we're coming up with a better HF digital voice system, basically. I did things like design the modem to suit the speech codec. So rather than accepting what was out there as a standard or a black box, I said, well, no. I need something a little bit different. I know how to write a modem. I'm an open source developer. This is how I'm going to do it. The initial codec I released had a bitstream format that was particularly susceptible to a certain sort of error that you get on HF radio. So I hacked it. I changed it. I removed that part of the bitstream. It made it sound a little bit worse with no errors, but only a tiny bit, and a lot better when there were errors. So I designed the codec to suit the HF radio channel. You can't do that if it's all licensed proprietary code. In fact, it's specifically illegal to touch it or understand what's going on inside. You'll get in trouble. I've also done things like use mixed analog-digital techniques. One of the advantages that analog HF radio has over digital is that more power is allocated to the more important parts of speech, say when I speak louder, or to the parts of my speech spectrum that carry the important speech information. In analog speech they tend to be louder, so the power amplifier in the transmitter allocates more power to them. So I changed my modem to allocate more power to the important parts of speech. So it's this mixture of analog and digital techniques. Once again, simply not possible with closed source. And once again, we're finding open source developing a better solution. These guys, however, have done it all before. This is the SIGSALY secure HF voice system from World War II, 70 years ago.
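The "allocate more power to the important bits" idea above can be sketched for a multi-tone modem: boost the tones carrying perceptually important codec bits and cut the rest, keeping total transmit power constant. The bit labels and the 3 dB figure here are illustrative assumptions, not the actual codec's bitstream layout or the talk's exact design:

```python
import numpy as np

# One hypothetical codec frame: bits 0-3 "important" (e.g. pitch/energy),
# bits 4-7 less so. Purely illustrative labels.
bits = np.array([1, 0, 1, 1, 0, 1, 0, 0])
important = np.array([True, True, True, True, False, False, False, False])

# One BPSK symbol per tone. Boost important tones by 3 dB, cut the others,
# then renormalise so total transmit power is unchanged.
boost = np.where(important, 10 ** (3 / 20), 10 ** (-3 / 20))
boost *= np.sqrt(len(bits) / np.sum(boost ** 2))
symbols = boost * (2 * bits - 1)          # BPSK mapping: 0 -> -1, 1 -> +1

# On a noisy channel the boosted tones have more SNR margin,
# so the important bits are the last to start taking errors.
rng = np.random.default_rng(1)
noisy = symbols + 0.8 * rng.standard_normal(len(symbols))
decoded = (noisy > 0).astype(int)
```

This is a form of unequal error protection done with power rather than with extra FEC bits, which is what makes it feel like an analog technique applied to a digital modem.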
It's got basically all the block diagrams of what I'm doing now. They've got a speech coder that uses a very similar speech model to what I use. It's got a modem, a multi-tone HF modem, similar to what I use. And it even has encryption, which I'm not allowed to use, so it's a little bit more advanced technically. And like a lot of things in signal processing, it's all been done before. And if we look at a few trends, this is how it tends to work. The example I'm using is real-time speech compression, because that's what I do, so I have a bit of knowledge in that area. In the 1940s, we had custom vacuum tube hardware. And then by about the 1970s, the same sort of thing, speech compression, this is several thousand bits per second speech compression, was possible with custom hardware. So you had to solder up the boards at a chip level. It wasn't reprogrammable, just gates and things like that. Then in the 1980s, the first programmable DSP chips came along. And that's about when I started my engineering career. And we were very excited that we could have these little processors that were optimized for doing signal processing. And the key thing there was they could do a multiply-accumulate in one cycle. So we had fairly fast processors for the day as well. So suddenly, it actually became possible to do speech compression in software in real time. And that was a real breakthrough, because for a long time, speech compression algorithms were so computationally complex, we simply couldn't do them in real time in software. But then, all of a sudden, we could. The next step was running on a general-purpose computer. Suddenly we could run speech compression on something like an x86 of the day, using mid-90s sort of processing power. By the 2000s, we were running speech compression without having to run the processor flat out. It could use just a fraction of a general-purpose CPU. So now we're talking about running, say, eight channels or four channels in real time.
And maybe we're running a voice over IP client or a gateway at the same time, or some protocol work. And now, we're running the same sort of speech compression technology on a fraction of a microcontroller. So we're now talking a $2 or $3 chip, or the $10 chip I'm using in one of my projects, that's running speech compression on a fraction of a microcontroller. Now, these trends are the same for all sorts of other signal processing. I've also seen things like forward error correction units. They used to be hardware we had to solder together, and it was only just possible. Then they went to FPGAs, then software, et cetera, et cetera. And now they're a trivial complexity on current CPUs. So this trend is very similar in a lot of signal processing. So hardware migrates to software. It always does. It's going to keep going. What's hardware today and only just possible is going to be software running on trivial CPUs tomorrow. Software can be free. That's something we've all discovered. That means radio communications can be free. And that means free in all senses. It'll be essentially zero cost, and it will be free in terms of speech. So we can modify it, examine it, play with it, and come up with cool new things. I would say that communications must be free. I'd like to talk a little bit about chipsets now. Recently, there's been a wealth of new chipsets for radio. Some of these are tiny little ones that will do an RF modem and the whole thing, right up to, you know, UHF, on a single chip for a few milliwatts, and you can put them in a balloon or on a satellite. They're wonderful little chips. There's also a lot of up- and down-converters that are being used for the current crop of software-defined radios that have just come out. However, I don't get excited about chipsets, because I've been through this before, and what we're essentially doing is giving hardware vendors control.
They always have some area where they won't let you in, won't let you play. Sometimes that's for commercial reasons; other times it's because there are bugs or problems they don't want you to know about, and they let you discover them a little bit further down the production line. It can cause huge pain. It means lack of control, and it often means lack of the data you need to build whatever you're working on. You're limited in the support. If you can't look inside the thing, you're dependent on them, and they might not feel like supporting you on that day, especially if you're an individual or a small team working on things, and in open source, that's often what we are. There are bugs we can't fix. End-of-life issues: that wonderful chip will disappear one day. You can't move your functionality across if it's tied to some guy's proprietary hardware. You get to sign awkward legal agreements, or you're forced into a position where you may have to: NDAs. Then there's the portability issue. You can't move it to the next chip along. It's in their interest for you not to move it to the next chip as it comes along. I'd rather move our radios around like software. A better approach, the one I really like, is general-purpose CPUs. Even design the system to minimise the hardware. What I mean by that is that when someone tells you to build a radio, or you want to build one for your open source project, you've got a lot of choices. Take the choice that minimises the radio hardware, rather than just taking what's the most convenient chip in 2015. Actually design the system for minimal hardware. You could be designing the modem waveform, and even a standard, to minimise the hardware, using the principle that open source is best. Everything should be open source: modems, codecs, protocols, even the hardware. Don't rely on black boxes or binary blobs for any layer. Don't stop at the protocol or chipsets, which is where we are today.
You'll find some really interesting things. There are so many advantages in development effort. If you're bringing a product to market and it's software, it's so much faster than depending on some sort of proprietary hardware chip. You can learn more if you can see inside the thing. Education is a wonderful thing about open source. Learning about radio is really cool. Innovation: if you can get in there, modify it and tweak it, you can do some really interesting things. I've shown you some previous examples of how we're outperforming closed source systems in all sorts of areas of signal processing. Cost: essentially, open software means your bill of materials, your hardware costs, drop towards zero. Portability: you can move it across from my PC to an embedded machine and back again. Security: that's a huge one. If you understand and can see every single bit of hardware, no special chips with unknown firmware, no software you can't examine, then it can be far, far more secure. And ultimately, the performance. I'm an engineer. I want to make things that work well. I don't want to be constrained. So I can make things perform better with open source. Spectrum safari. A friend of mine called Steve Song has contributed to these couple of slides. He's been looking into how much radio spectrum is actually used. Now, spectrum is interesting in that it's a fairly scarce resource. There's only so much available in the universe. There are more and more people clamouring to use it. It tends to be government regulated in most countries worldwide. Most of it's tied up or reserved, and this has been going on for a long time. You know, getting on 100 years now, these sorts of laws have been in place. And it's starting to run into issues of technology, which we'll talk about in a moment. So Steve and some of his colleagues have been researching spectrum usage, and basically most of it's not being used. There's an average of 15% occupancy.
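An occupancy figure like that is typically estimated by scanning power across frequency bins and counting how many sit above the noise floor. A toy sketch of that calculation, with simulated scan data rather than a real SDR capture (the 15% and the 10 dB threshold here are assumptions chosen to mirror the talk's number):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated scan: power (dBm) in 1000 frequency bins. Mostly noise floor
# around -100 dBm, with ~15% of bins carrying strong transmissions.
power_dbm = -100 + 3 * rng.standard_normal(1000)
active = rng.choice(1000, size=150, replace=False)
power_dbm[active] += 40

# Occupancy estimate: fraction of bins more than 10 dB above the
# median power, using the median as a robust noise-floor estimate.
threshold = np.median(power_dbm) + 10
occupancy = np.mean(power_dbm > threshold)
```

With real dongle data the same threshold-above-median trick works per sweep; the hard parts in practice are calibration and averaging over time, which this sketch ignores.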
There are great chunks that are tied up that simply aren't being used. The way they measure that is you can use commercial high-end gear, or you can get the little $20 SDR dongles, run around with antennas on your car or put them on a mountain or something, and just measure what frequencies are being used. This is sort of an example of them doing that. People have done this. They've done audits of spectrum in various countries to work out what's really being used out of all this stuff that the government says you shall or shall not use. Now, here's the problem. Spectrum regulation is going to get really hard to police as the cost of access comes down. And there are already some noted failures. If you try and use an HF radio, like a shortwave radio, in urban areas, you're surrounded by noise from switch mode power supplies and other high-speed digital kit. Technically, those products all get approved, but we can't use our HF radios in urban areas anymore in a lot of parts of the world. And then there are also some wonderful things that have happened when spectrum regulation isn't so tight. One example is the public park model of how Wi-Fi is regulated. One of the best examples I can think of is some friends of mine in India called AirJaldi, and they've been doing this for, I think, over 10 years now. They started off with sort of first generation Wi-Fi APs and put their own dishes on them and things. But what they've done is made a rural internet network in rural India, and the last time I visited they had brought internet to over 30,000 poor and disadvantaged people, which is a wonderful thing that they've done with essentially unlicensed spectrum, once that spectrum was sort of loosened up so you can do what you want with it. And I encourage you to check them out on the internet and read up. The innovation that people have come up with once they're given a little bit of freedom: people playing with OpenWrt routers, putting antennas on them.
However, there are some limits. We still can't play with the radios. Big business still controls all those chipsets and tells us what we can and can't do. Another way to use spectrum is pirating. There have always been people who will illegally, or with some sort of dubious legality, use radio spectrum for transmission or reception. Sometimes they do that just to annoy you, but other times it's just people who want to use the spectrum, and they want to use it in some way that's outside the current rules. As we've seen, most of the spectrum is unused anyway. Soon the cost of accessing that spectrum is going to be close to zero. We're all going to have SDRs in our laptops or something. And already there are people who monitor these sorts of old 20-year-old satellite services using SDR radios and little antennas. Some of them aren't even encrypted, so it's already happening. To me there are some analogies with movie downloads. The technology has extended to a point where the previous legal models are starting to fracture and break and not make particular sense. A lot of people are scrambling to keep all those rules in place, but I kind of think the direction is more towards that public park Wi-Fi model, which is where we need to be. And to me there are some ethical questions around pirating. What if you aren't bothering anyone? And in particular there are some applications in countries where the government may decide to restrict your communication for their own purposes and not for your benefit. The other thing is, if we wanted to stop pirating, can we? It's very hard to detect, especially if they're not bothering anyone and not doing any harm. One of the things that came out last year and got me thinking about this was this AirChat system. It uses little commodity walkie-talkies that you can get for $40 or $50, plus some laptop software, and it's fast enough to send things like emails and chat data across ranges of tens of kilometers with zero infrastructure.
So it does serve quite a purpose. All this sort of stuff is possible with other services like ham radio or commercial services. It's basically quite illegal in most places, but how can it be stopped, and do we want it to be stopped? This sort of thing, in particular if they're not bothering anyone. So some ethical questions there, I think. I've also had some thoughts about standards. As I said, with the Village Telco Mesh Potato system, one thing we thought was a real positive was standards compliance. It was all built around Wi-Fi standards, but I found once we started testing that it didn't work very well, this idea of VoIP over Wi-Fi, with this horribly inefficient 802.11 protocol for IP packets. The technology of the day was 54 megabits per second, 802.11g I think, and we were lucky to get one megabit per second of voice data through the system, because it was all optimized for great big web downloads and things like that. Also the paradigm of mesh Wi-Fi with omnidirectional antennas didn't work very well, and a lot of that was due to the protocol as well. So we were kind of limited at the time, because the chip you bought came with a binary blob and you had to run 802.11. But when I started thinking theoretically, it was something like a thousand times less efficient than what we could do, with similar sort of voice quality, if we had our own sort of protocols. So you can get some huge gains if you take a step back from the standards and think, what do I really want to do? Standards have their place, obviously; you want interoperability, but they shouldn't necessarily be a constraint, and don't confuse standards with the laws of physics. Just like I've been told you can't do layer violations, they also tell you you must stick to a certain standard. A lot of these standards, thank you Timothy for pointing this out, are just there to support someone's patent pool. So there are other interests involved, and not necessarily yours.
So ask yourself very carefully whether the standard makes sense. I've had a lot of fun breaking them and made things work a lot better. I like to be constrained by what the laws of the universe, of physics, tell me, in particular for radio communication, that's the fundamental limit, and by my imagination, not big business. Now with open source culture, it's generally individuals and small teams, and we're not content to be told what to do. To me, chipsets and standards that promote patents in particular are people telling us we must do it their way, and they try to push us in that direction whether we want to go or not. We have other motives, like profit and fun, or helping people. And to me it's all underpinned by this concept of GCC-portable software. It could be your language of choice, but the idea is that you can take this software and in a moment move it across to another device, not being constrained by the hardware or chipset. So to me the future is pure software radios, as hardware agnostic as possible. You know, I have my chipsets and microcontrollers I like to play with. They'll be gone in two or three years' time, just like the last ones, but the software will still be there. And when hackers get control, wonderful things happen. Now in the last few years there's been an explosion of SDRs. I encourage you to have a look at all of those; they're all doing very cool things. They tend to be based around the paradigm of being digital up- and down-converters, and they interface to your host PC somehow. What they do lack is the power amplifiers and filters that commercial radios have, that make them a complete radio. That makes them more like a piece of test equipment, but quite useful for hacking. Very useful, very cool, and extremely cheap for the amazing capabilities that they offer. Now, having a purely open software radio compared to other ways: there's portability, free as in beer and speech, secure as in Snowden. Very difficult for anyone to listen in if you've got complete
control over everything, right down to the hardware, or if you've even assembled it yourself like we did last Monday. It's really easy to get these things going. We got a software defined radio going in a few hours on Monday; try assembling an old school pure electronics radio that way. And fast: if you're a business, time to market. I've seen people struggling for years with these other sorts of radio architectures, but we can get things going in a much shorter time. And one of the greatest things, I like accessibility and learning: the ability for all of us to learn and understand this stuff, which has traditionally been seen as too hard. I've had the geeks tell me, the Asterisk guys say, oh, you can't go there, and suddenly they run away. It's not that hard. We can get into it, we can play with it, and a lot of the work I like to do is talking about it, putting my simulations and code up there so others can play. So I guess to summarize: we want open software in our radios; that's the only thing that will work. We want as much open software as possible, that's the priority, and the smallest possible amount of hardware, and that hardware must be open. Some challenges. As I said, the whole theme here is pushing open software; it's got to be open towards the antenna. One of the challenges at the moment, and these are things that you guys might have some ideas on how to work on, is just getting data off and on the CPU. This is a problem with narrow bandwidth data and with wide bandwidth data. We experienced it, for example, the other day at the Open Radio Miniconf: we needed two channels, stereo left and right Sound Blaster inputs, and it was very hard to do, very hard to find the software for that. So we need a way to get these samples from our radio into our processors, and I'm not sure what that is. Maybe it's not a couple of cables, maybe it's something else, but that's a problem that needs to be solved. On the other end, if you're grabbing several hundred megahertz of samples,
how do you get that into your PC? Then you're starting to hit the limits of USB and Ethernet bandwidths, and also driver issues. You might be okay over Ethernet, but the drivers aren't used to running flat out, and they're not used to getting the data into a big array where you can start playing with it inside your CPU.

One big challenge is filters and power amplifiers. Even today's most advanced software defined radios still have a bunch of clunky relays switching filters in and out, and big, hard-to-get, expensive transistors for power amplifiers. What can be done to make those open hardware and software, to make them much simpler? Now, that's a real challenge, because a lot of these things are hard by definition, a 100 watt digital-to-analog converter, for example. But maybe it's not impossible; maybe there's a way to do it, or ways that we can combine software and hardware in clever ways, and I'm starting to see a few people work in those sorts of areas. Another thing that I'm playing with at the moment is good quality open source modems that are fully explained, that anyone can use in their applications. Sometimes getting the modem right or wrong can mean a 10-to-1 difference in performance, in how much power you need. Another issue is power consumption; that's one area where chipsets do win today. However, there are a lot of really low power microcontrollers and DSP chips coming out that we can program ourselves while keeping the device open.

Okay, that's it, and I'd like to throw it open for questions. Thank you.

How long do you think it will take, having in mind that open software now runs on proprietary hardware like Intel processors and ARM processors, and you don't have the sources for it or access to it? When you look at this it makes perfect sense, but in terms of time, how long do you think this will take?

Yeah, I don't know. I think there'll be some steps in that direction; for example, someone will come out with a radio chip that's completely open rather than proprietary, and that'd be a nice step in the right direction. Yeah, I really don't know, but I don't think it's less than a decade.

I've built a pirate radio, as in Pi, as in Raspberry Pi. It's not exactly open hardware as such. Can you recommend a platform that we could be using which is open hardware these days for the signal processing work?

Yeah, the Raspberry Pi's a great one; once again there are challenges there on IO, getting things into an RF system and out again. I've been playing with some little microcontrollers, the STM32F4; there's an $18 development board that's fun for the baseband processing. And take a look at our OpenRadio platform as well.

You seem to jump from chips that do protocol-level or encoding-level work right through to GPIO into an antenna. There are a bunch of chips that are a DAC straight into a mixer with a straight LO. They obviously do have limitations, but much less, and several of them are coming from vendors who actually do know how to document, and do indeed document. Are you working towards that as the end goal, because that is the end goal, or...?

Yeah, I kind of like that; that's fine if they're fully documented and open, and to support that sort of thing I think is wonderful. Any more questions?

Yeah, you mentioned the cost comes down to zero and it's a lot easier to get software defined radios and do things like create pirate radios. Does that also mean it becomes a lot easier to detect when others are using licensed spectrum?

Yeah, good point, that's a very good point, and it might become easier to surveil what's going on.

I know this is a bit of a loaded question, but in the OpenRadio we did the first building block; what's the next building block that you have in mind?

The next building block: I'd like to play around with things like the power amplifiers and filters, that area, as that's the next tough problem.

There was a lightning talk yesterday that talked about communication issues on islands that have restricted access to all sorts of communication infrastructure. Would there be a possibility to have low powered
radios that transmit just enough data to have a basic channel of communication that is hard to moderate, let's put it that way?

Yes, I believe so. That's probably more at the protocol level, that sort of thing.

So would that Air Talk project be one of them, or are there others?

Hard to say, yeah, I'd have to look into it a bit more. I haven't studied it in depth, but it is designed to be difficult to intercept and to be secure as well.

Do you ever see the potential of CPU dies becoming open source any time in the future?

Yeah, good question. It's possible; you can synthesise CPU cores in FPGAs and things like that. I don't know if it's necessary. I just like the idea of general purpose MIPS being out there and using that. It might be running on my phone as an app, for example, with some minimal hardware next to it. So I just see them as the engines; just don't get hung up on the particular one you're using, or its advantages or its integrated peripherals. It's got to be something you can move around portably. Thank you.
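The "digital up and down converter" paradigm the talk describes, where the mixer, local oscillator, and filtering from the opening block diagram are done in software rather than analog circuitry, can be sketched in a few lines of NumPy. This is an illustrative toy, not the code of any particular SDR: the sample rates, tuning frequency, and `ddc` helper are made up for the example, and a real receiver would use a proper FIR or CIC filter rather than a moving average.

```python
import numpy as np

def ddc(samples, fs, f_center, decim):
    """Toy digital down-converter: mix to baseband, low-pass filter, decimate."""
    n = np.arange(len(samples))
    # Mixer + local oscillator in software: shift the band of
    # interest at f_center down to 0 Hz.
    baseband = samples * np.exp(-2j * np.pi * f_center / fs * n)
    # Crude low-pass filter: a length-`decim` moving average
    # (a real DDC would use a designed FIR or CIC filter).
    taps = np.ones(decim) / decim
    filtered = np.convolve(baseband, taps, mode="same")
    # Decimate: keep every decim-th sample, reducing the rate to fs/decim.
    return filtered[::decim]

fs = 1_000_000                    # pretend 1 MS/s stream "off the air"
f_sig = 250_500.0                 # a tone 500 Hz above our tuning point
t = np.arange(100_000) / fs
rf = np.cos(2 * np.pi * f_sig * t)

bb = ddc(rf, fs, f_center=250_000.0, decim=10)

# The tone should now appear near 500 Hz in the 100 kS/s baseband signal.
spec = np.abs(np.fft.fft(bb))
freqs = np.fft.fftfreq(len(bb), d=10 / fs)
peak = abs(freqs[np.argmax(spec)])
```

After the `ddc` call, `peak` lands close to 500 Hz, showing the tuner stage has isolated the wanted signal at baseband, which is exactly the point where a software modem would take over.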