Okay. Welcome to Virtual Copenhagen. I'm actually in Berkeley; I took ill and was unable to travel, so I'm giving this talk by video thanks to the assistance of Julian and others. I'm here today to tell you about work that I did about a year ago that's going to continue soon. This is work that was done in conjunction with people at the university in Berkeley and also the Intel Research lab there, to do long-distance wireless using commodity wireless parts. Anyway, the background behind this talk is that in emerging regions, installing infrastructure for internet access and other uses is actually quite expensive and in some cases totally infeasible. You can't imagine trying to drive a backhoe in some of these places; the cost would be prohibitive, and in some cases there's simply no funding to do it. So, given the availability of commodity wireless parts, people have been looking for a long time for ways to leverage that technology to provide IT infrastructure in emerging regions. This application is actually quite enabling; it can significantly change people's lives in ways that you may not consider. Some of the cases that have been studied and actually made real involve things like providing people with voice-over-IP telephony between villages in Africa, where they've never been able to communicate with their siblings except by physically visiting. In other places, such as India, test beds have been installed and they're doing medical diagnosis over wireless links between places people simply aren't able to travel between. So the ability to do long-distance wireless is actually turning out to be a really important piece of the development work showing up in emerging regions.
And the work that I was doing with the people at Berkeley, which I'm going to talk about today, was to try to take some of the research work done by the UC Berkeley people and turn it into more production-quality devices. To tell you about the work that I did, you first have to understand something about the TIER project. TIER stands for Technology and Infrastructure for Emerging Regions, and it's, as I said, a joint project between UC Berkeley and the Intel Research lab. It's actually been going on for several years. It's a multi-organizational group that involves not only computer science people but also people in other disciplines. The TIER project has a specific goal, which they state as addressing the challenges of deploying IT infrastructure in emerging regions. There's an awful lot of information about the TIER project itself and about their results that you can get on the web, and I've provided you with the URL so you can locate it. You should definitely do that, because it has a lot of papers, both related to the wireless work and related to the deployments and the experience that's been gained from doing the tests. Anyway, as part of TIER, the wireless infrastructure work came about, and they coined the term WiLDNet, which stands for WiFi-based Long Distance networking. That project, as you can see, is focused on taking the commodity 802.11 parts that are now as cheap as a dollar or less per part, and using them to build systems that tie things together and provide an IT infrastructure in these regions where it's not been feasible in the past.
So the two key developments that have driven this work are the availability of commodity parts and the fact that these commodity parts work in the unlicensed spectrum. One thing that you can't lose sight of is that while 802.11 infrastructure may be possible, it also has to be deployable, and for that to happen the people who are doing it must not have to pay licensing fees to run their wireless networks. So the first question, of course, that you're going to ask is: what is long distance? When I talk about long distance, first you have to understand that 802.11 as a specification was designed mostly for indoor use. There have been some applications in the outdoor arena, but for the most part the specification is designed for use with access points and stations, laptops, that are separated by probably at most 100 feet; typically it's even less. But in our deployments, what we're looking at doing is using the same technology to set up mostly point-to-point links that typically run 30 kilometers or more, sometimes 100 kilometers or more. And as you'll see, we've actually been able to do some pretty astounding tests. So, given this requirement that we be able to run 802.11 networks over very long distances, what are the challenges? Well, the first key challenge (and I'm not sure how many people are familiar enough with the 802.11 protocols and how the MAC layer that controls access to the medium works) is that the MAC itself is not well designed for running at long distances. Wireless networks based on 802.11 use a mechanism which is very similar to wired Ethernet: CSMA, carrier sense multiple access.
This allows multiple stations to contend for access to the medium: devices that want to get on the network listen, waiting for a lull in the signal energy, and use that as an indication that the medium is idle, potentially free, so they can get in and start transmitting. When you have stations separated by huge distances, the propagation delay between those stations may be so large that you can't deal with the time required to listen for an idle network. Basically, it takes too long, and you may have to sit there for a very long time waiting for things to appear idle. So the 802.11 MAC layer itself really isn't suitable. You can tweak various parameters such as timeouts and so on to make it usable up to a particular distance, but at some point the fact that stations take a very long time to propagate data from one point to another, just the physical transfer of that energy, dominates the process, and you're not able to make effective use of the medium if you continue to use the 802.11 MAC layer design. This problem is commonly termed the hidden node problem: when nodes are very distant you can't hear them, so it appears as if they're hidden, like behind a wall or something. People who have looked at this problem have looked at various solutions, and one of the common ones is the one I'm going to talk about today, which is TDMA, an alternative to the standard MAC layer. Now, another thing to understand is that at large distance separation, other factors come into play that don't normally show up in a close setting.
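To put a rough number on the propagation problem just described, here's a back-of-the-envelope sketch. The 9 µs slot time is the standard 802.11a/g value; the distances are illustrative, not figures from the talk:

```python
# Back-of-the-envelope: why CSMA timing assumptions break down at long range.
# The 802.11 MAC's timing (slot time, ACK timeout) assumes stations are at
# most a few hundred metres apart.

SPEED_OF_LIGHT_M_S = 3.0e8

def one_way_delay_us(distance_km: float) -> float:
    """One-way propagation delay in microseconds."""
    return distance_km * 1000 / SPEED_OF_LIGHT_M_S * 1e6

# Standard 802.11a/g slot time is 9 us (802.11b uses 20 us).
SLOT_TIME_US = 9

for km in (0.1, 30, 100):
    d = one_way_delay_us(km)
    print(f"{km:6.1f} km: {d:8.1f} us one-way "
          f"({d / SLOT_TIME_US:6.1f}x the 9 us slot time)")
```

At 100 km the one-way delay alone is over 300 µs, dozens of slot times, which is why tweaking timeouts only stretches CSMA so far.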
So when you have a laptop and an AP close by, you may have interference from sources like microwave ovens, and you may even have other difficulties such as multipath, where the signal from a transmitter to a receiver bounces off objects that distort and delay the signal. But when you're talking about very long distances, other interference sources appear, and you also have to worry about things like the curvature of the earth, and in particular about environmental factors such as rain; depending on the frequency at which you're operating, there are other factors as well. So anyway, the challenges for these long-distance networks are great. In fact, there may be a few others that you may not notice or be aware of. Here's an example of one which was quite surprising. This is a deployment at Aravind in India, where the TIER people set up a telemedicine network. So they have radios, and there should be a picture of a monkey clawing, or a "money clawing", oh, sorry, thanks, I'll have to fix that. They have their long-distance wireless parts mounted on these huge towers, and the gardener told them that every morning this monkey would climb the tower, get up to the radio, and pull on the connectors; he would jiggle the radio and the antennas. So when you're deploying these things in way-out-of-the-way locations, there are unexpected factors that you have to take into account, and we'll talk a little bit about that when we talk about the system that we built. So anyway, now that I've told you a little bit about what it's like trying to do this long-distance networking, I'm going to talk about some of the approaches that were taken, both by the TIER people and in our group, to develop solutions and workarounds for these problems.
As I said, the 802.11 MAC layer is not suitable for use at long distances. Instead, what we've done is replace the MAC with a TDMA implementation, TDMA meaning time division multiple access. Basically, it's just a time-sliced, fixed-scheduling mechanism for gaining access to the medium: if you have two stations that are transmitting, each one of them gets a slot, that slot is fixed relative to the other, and at the point in time when its slot begins, it's allowed to start transmitting. [Audience] How wide are the slots, and how do you keep them synchronized over a hundred kilometers? Right, so the hard part about TDMA is that you have to keep the slots synchronized, and I'll talk a little bit about that, although that's part of the trickiness. [Audience] Synchronized from whose point of view? Right, so the slots have to be synchronized so that you don't have overlap, because when you have overlap you have collisions and packet loss. The other issue Julian brought up was this question of how wide the slots are, because if the slots are very long and one side isn't transmitting, then that slot is basically idle. So in order to effectively use the medium for bidirectional communication in a TDMA network, you really want the slots as short as possible, so that you have low latency and effective use of the channel. These are trade-offs that you have to make in designing the TDMA implementation, and I'll talk a little bit about that. Given TDMA, the other aspect of the 802.11 networking protocol that we've discarded by not using the MAC is acknowledgment: each frame in an 802.11 network is actually acknowledged, so the receiver acknowledges each transmission, and if the transmitter doesn't get an ACK in a sufficient amount of time, it retransmits the packet, up to a certain number of times.
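The fixed-schedule slot mechanism described above can be sketched in a few lines. This is a hypothetical illustration, not the actual RCP driver code (which does this in hardware): each station owns one slot per round, derived from a shared clock, and may transmit only inside its own slot.

```python
# Hypothetical sketch of fixed-schedule TDMA slot ownership: slot ownership
# is a pure function of a shared clock, so no carrier sensing is needed.

def slot_owner(now_us: int, slot_us: int, num_stations: int) -> int:
    """Return the index of the station that owns the current slot."""
    return (now_us // slot_us) % num_stations

def may_transmit(station: int, now_us: int, slot_us: int, n: int) -> bool:
    return slot_owner(now_us, slot_us, n) == station

# Two-station point-to-point link with 10 ms slots:
SLOT_US = 10_000
assert slot_owner(0, SLOT_US, 2) == 0        # first slot belongs to station 0
assert slot_owner(10_000, SLOT_US, 2) == 1   # then station 1
assert slot_owner(25_000, SLOT_US, 2) == 0   # alternating round robin
```

The whole scheme stands or falls on how well the stations' clocks agree, which is exactly the synchronization question raised above.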
We've actually dropped that mechanism, since you would have to schedule the ACKs in the next slot; it wouldn't be feasible, and you don't want to slow the network down to a synchronous operating mechanism. Instead, what the TIER people have done is add a mechanism for doing bulk acknowledgment of packets: you can send multiple packets in a TDMA slot and then acknowledge all of them at once in the next slot, and the transmitter can use those bulk ACKs to decide whether it wants to retransmit some or all of its packets. It turns out, and we can talk about this later if there's time, that you can effectively just ignore ACKs and allow higher-level protocols like TCP to do the acknowledgments for you. That works up to a point: if you start having high packet loss, TCP needs to be aware of that and needs to use particular algorithms, different from those normally used, in order to effectively recover from lost packets in this sort of configuration. Another thing the TIER people have done, which we have not, but which I'll just mention, is that they've added error correction using a forward error correction technique. You'll read about these if you go to the TIER website and look at some of the papers; it's all discussed there. We're still looking at whether we need to add forward error correction to our system. Aside from the protocol, as I said, part of the hard part of building these sorts of systems is the system design. Given a long distance to transport packets from one end to the other, you actually need quite a bit of energy to push those packets across, and one of the things that has enabled the low-cost deployment of these systems is the fact that you can now buy radios that are very high power. A high transmit power means that you can actually communicate from one end to the other over a great distance. There are alternatives: you can always use external power amps and high-gain antennas, and in fact high-gain antennas are an important thing to use regardless of whether you're using an external power amp or a high-power transmitter, but the advent of high-power radio cards has again lowered the cost of entry and made these sorts of systems feasible in places where they weren't before. Now, one thing that you need to understand is that a lot of people say "I just want high transmit power", but you really need to be able to hear at the other end. So even though you have these high-transmit-power cards, receive sensitivity, which is the other end of the equation, is really critical, and as we'll see, we chose cards almost entirely based on their receive sensitivity, because the high transmit power, as I said, is really an effective way simply to lower the cost of the system by eliminating the external power amps. The last thing to understand is that the signaling technique you use to do long-distance wireless matters quite a bit. This is related both to the frequency at which you're working and also to the environmental aspects. It turns out that OFDM and CCK, which are the two main signaling techniques, have different characteristics, and you have different success rates depending on the environment and the setup. OFDM is higher rate, a higher transfer rate, and so more efficient, but in certain cases CCK is more useful. [Audience] Is it possible to dynamically select between them?
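As a quick aside on the transmit-power and receive-sensitivity point above, here's a rough link-budget sketch. All the numbers (a 26 dBm card, 24 dBi dishes, a -90 dBm sensitivity figure) are illustrative assumptions, not figures from the talk:

```python
# Rough link-budget sketch: received power is TX power plus both antenna
# gains minus free-space path loss; the link closes only if that clears the
# receiving card's sensitivity with some margin.
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (standard formula for km/MHz units)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def rx_power_dbm(tx_dbm, tx_gain_dbi, rx_gain_dbi, distance_km, freq_mhz):
    return tx_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db(distance_km, freq_mhz)

# A 400 mW card (~26 dBm) with 24 dBi dishes on both ends, 100 km at 2437 MHz:
rx = rx_power_dbm(26, 24, 24, 100, 2437)
print(f"received: {rx:.1f} dBm")
# A card with, say, -90 dBm sensitivity at a low rate would still hear this;
# better receive sensitivity buys margin the same way more TX power does.
```

This is why receive sensitivity matters as much as transmit power: every dB of sensitivity is a dB of margin, identical in effect to a dB of TX power.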
You can; in fact most of the radios can switch back and forth, but that actually significantly complicates the design of the TDMA implementation. In practice, what you do with TDMA is use a fixed transmit rate, because, and this is one of the interesting results, if you actually try to do variable transmit rate control over a TDMA slotted network you get into some very bad behaviors. The ability to schedule the packets requires that you be able to tell ahead of time what the transmit rate is, so you can calculate the time it's going to take over the air, and if you start varying the transmit rate you can actually get yourself into trouble; it makes things much more difficult. So we go with a fixed transmit rate, and typically, once you've got things set up and working effectively, you really don't need to vary the parameters significantly unless something dramatic changes in your environment. That's based on our experience. Another aspect of the system design is that you need to pick a frequency. The 2.4 GHz spectrum is unlicensed, and that's effectively where a lot of people want to operate, so of course it's very crowded now. In the environments that we're talking about, places out in the middle of nowhere with nothing there, that's really not an issue. However, if you're in locales where the spectrum is regulated and people are very sensitive to usage, you may be forced off 2.4 into other areas, and in fact in some cases you want to move off of 2.4 because you get different operating characteristics. For example, if you have obstacles and so on, 900 MHz is much more effective, but if you operate in the 900 MHz range in some places where you have GSM for cell phone use, people get very, very upset. So what we've done in terms of our system design is we've tried to build systems that are very flexible, able to operate in many different parts
of the spectrum, and you can pick and choose according to the requirements of the setup. So, I've talked a lot about TIER; I'm now going to start talking about the project that I worked on, which is an offshoot of TIER. The TIER project was a research project. As I said, the UC Berkeley and Intel Research people wanted to do a more production-oriented version of this system and make it available in real deployments that they were going to set up, and so I became involved, and I also worked with the TIER people as well, so we shared a lot of research results. The RCP project is called the Rural Connectivity Platform, and it's an offshoot of TIER, as I said, funded by the Intel Research people. Its goals are shown here, the main ones being that it's production quality and that it be self-configuring and automatically configuring. When you're trying to set up these systems out in the middle of nowhere, you really want to take a box that you can strap on a mast, stick it up 100 feet, turn it on, and not have to do anything whatsoever. I mean, if you have to sit around and fiddle with stuff you're just going to go nuts, and in some cases it's going to be impossible to get things to work right. So we designed a system that was intended to be very hands-off and auto-configuring; the results are sort of mixed, but that was the main goal. This is a picture of an RCP box sitting on a mast, connected to the back of the antenna, and hopefully you can see, when the slides are shown, that it's got directional or omni antennas sitting off the bottom, which are used for a local access point, which we'll talk about, so you can set it up remotely, that is, standing on the ground without being connected to the top of the mast, and configure the thing wirelessly from a laptop. But then it has multiple ports, so you can connect the box to the directional antennas, which are used for the long-distance wireless. The prototype system is based on off-the-shelf hardware: we used the Gateworks
Avila board. It's an XScale processor; it has memory, flash, all the things you'd expect. It has two power-over-Ethernet ports for wired Ethernet. This is a special build that we did so that both ports are PoE, and that was actually very critical. One of the things that we learned, the TIER people mostly worked with Soekris boards, and one of the things they learned was that not having all the ports PoE-powered was a way to shoot yourself in the foot. If you're out in the middle of nowhere and you plug a power cable into the wrong port and blow your board up, you may not have spare parts. Also, if you're climbing a ladder, and you're at the top of a tower 100 feet up, and you're trying to figure out in bright sunshine which port is the powered one as you're sticking a cable in, is it this one or that one, you know you're going to get it wrong. So one of the small things that we did was to make both ports PoE-powered. The other thing we did is that the boards are built with extreme-temperature parts so that they can survive in the environments we're talking about putting them in. This adds cost to the platform; as I said, these are just test builds that we've done for trials, and if we were to go to a production build we would cost-reduce all this stuff. The boards that we have have four mini-PCI slots in them; you can get them with as few as one or as many as four. We used Atheros radios exclusively for the wireless, and in particular we used mostly high-power Atheros cards. You're hard pressed, basically, to find high-power radios with any other parts in them; you can get the old Prism cards with 200 mW, but these are 400 mW or sometimes more. The main card we use is a Wistron card, the DCMA-82, which is actually a dual-band card. There are trade-offs in terms of how much transmit power you can get and the effective quality of the radio; if you go with a dual-band solution you're actually forced into a lower-power solution. Ubiquiti's original
line of cards has since been replaced by their XR cards; they tend to be higher power because they're single-band solutions. We also sometimes use low-power cards, mostly for local access point use, but for the most part what you want is a box with replaceable cards, so that if one fails you can swap over to another. This also ties into a very little-thought-of issue, which is that the connectors on the cards have to be compatible, because typically you want these boxes set up ahead of time; you just hand someone a box and tell them to mount it on a pole, or put up a mast, or run it up a tower, or something like that. But inevitably these things get jiggled and bumped, and you have to open up the box and reseat the cards or reseat the connectors on the cables, and you need these things to be compatible so that you can swap cables around. There are some other aspects I'm not going to get into; we did some physical design and made some decisions so that you can do things like know exactly what radio goes with what connector, by using conventions and so on. But the thing we learned from this is that you really want cards with MMCX connectors, not only for the lower insertion loss on the pigtail cables but also for being more robust. The other thing I wanted to mention is that initially we used a different card which was supposed to be rated as a high-power card, and it turned out not to pan out. So you really have to be skeptical of vendor claims about high-power radios, and in fact we've done bench testing of all the cards; before you build a system you need to build one in the lab, test it with a power meter or a spectrum analyzer, and see exactly what you're getting. These systems are often very finicky; you can have significant loss just in the construction, from the cards, the connectors, the antennas, everything, so it's
sometimes very tricky. So, I'm probably running short on time, is that true? No. OK, so, the software in the system: the systems we built use Linux. We have a custom distribution; it's very stripped down, specific to our needs. The wireless support was all done custom, all leveraging open source software, and we did the auto-configuration from scratch, so the RC scripts and all the other things are set up specific to our installations. We have a web GUI; most of the configuration work is done over the web, usually wirelessly through an access point card which is in the box, though it could be over a wired connection. We make it as painless as possible, because we know that once people are left with the boxes, they're going to be technical at some level but not highly technical, so they need a web-based or GUI-based interface in order to maintain and configure the systems. But the goal on most of these systems is really to be self-maintaining, so there's a lot of logging and remote access mechanisms. One of the important things we did was to spend a lot of time making sure that the field upgrade mechanism was seamless and reliable, both in terms of upgrading forward and rolling back in case of problems. When you have these boxes mounted 100 feet up a tower, you don't want to have to go up the tower to swap CompactFlash cards or something similar, so these systems really have to work; they have to be reliable and they have to be maintainable remotely. The network design is not real interesting: it's a backbone which is bridged at layer 2, and then overlaid on top of that is a routed network. You can think of it this way: the long-distance point-to-point links, which are wireless, are all bridged, and then in the locale, like in a village, through a variety of means, either through access points or wired connections, the people get IP addresses through DHCP, they get a DNS name dynamically, it all gets pushed around over the network as a whole, and traffic is routed. Early on,
we had a purely bridged network, and the feedback we got from the trials was that routing was just critical, so we switched to a bridged backbone with a routed overlay, and that's turned out to be pretty good so far. The most important thing is that it allows intranet traffic to operate well without having the backbone up. Other issues that we had to consider in our design: you have to have quality of service, so that things like voice-over-IP phones, video conferencing, and so on can operate well even when people are surfing the web. Just to go back again to this issue of Internet versus intranet: people seem to lose sight of this when they talk about deploying infrastructure in emerging regions. They seem to think in terms of "we're going to give these people the Internet", but it turns out that most of the people have no interest whatsoever in going out to the Internet. What they really want to do is talk to their friends in neighboring villages. So intranet traffic, that is, traffic within a deployment in a geographical area, is far more critical, and far more important to keep reliable, than being able to get out through the one satellite connection to the Internet. So, in terms of the wireless design for the system, and this is RCP now, not TIER: we have a forked version of MadWifi. MadWifi is the Linux support for Atheros cards that I worked on a while back. We started working with that several years ago, and at some point, with all the changes that we had for doing TDMA and fixing other problems, we ended up diverging significantly from the public codebase, so we forked. It's been too difficult to merge the changes back due to the evolution of the code at madwifi.org, so a lot of these changes haven't gone back; at some point it may be feasible to try to return some of these fixes, but they're probably pretty meaningless at this point in time. The software that we ship, that
we deploy in the field, uses the public HAL for Atheros cards. We do not have any private changes anywhere except in the driver; the TDMA is done entirely with the public HAL, and I put a lot of effort into the HAL to make sure that all the hardware mechanisms that were needed were exposed, so anything that I've done, anyone else can do. There's nothing magic here in terms of the wireless stuff. A lot of work was done to add support for high-power cards, and also for cards that operate in different spectrums, different frequencies; I made changes to the HAL, for example, for the 900 MHz cards that everyone now has. The TDMA stuff we've talked about, and I'm going to talk about it some more. The other thing that you have to do for systems like this is, if you have a system with multiple radios in it and it's acting as a relay, you have to be able to scan over multiple radios and make decisions based on information you get back from all the radios. So rather than scanning for neighbors over one radio, you scan over multiple radios, because you may only be able to hear or see a neighbor when you scan over both of your radios, and you want to be able to pick the radio which has the strongest signal. Being able to scan and correlate results over multiple radios is really important, and I did that, and it all operates in user mode; there's no kernel involvement whatsoever, and again, these are things that you can do yourself. The other aspect is the tie-in to the auto-configuration system, which is not very interesting. So, TDMA: I'm going to talk a little bit about what we do and how we do it. Once again, TDMA is time division multiple access, so what we're doing is dividing up access based on time: every station in your TDMA network gets a slot of time, and you're allowed to transmit only in that time; at all other times you're listening for and receiving traffic from other stations
in the network. The TIER people implemented TDMA using Click, which is a modular system layered on top of Linux and other systems. Click is a really great research tool; it allows you to build a lot of interesting things, a network built out of C++ elements, but Click places some restrictions on what you can do in terms of TDMA. For example, in order to do scheduling properly, they can't queue multiple packets to the driver simultaneously; it's basically a synchronous API. So Click wasn't considered when we did our stuff. Instead, what I've done is gone directly down into the hardware, and I'm using all the hardware mechanisms to implement TDMA directly. That's very efficient, and it turns out we can do some really interesting things with it. It also means that you have zero overhead in the host for doing TDMA, and that you can get very accurate timing and results. One of the fringe benefits of having the hardware do all this is that the driver requires very few changes, and the TDMA operation over the air is transparent to all the software above; so, for example, all of the QoS scheduling that's done in hardware and all the other mechanisms work on top of TDMA. One of the nice results from using the hardware is that we're able to get very high channel utilization; that is, the amount of time that you spend potentially transmitting on the channel is very good, and we don't have a lot of idle time where we need to wait for things to happen or to synchronize. That's one of the difficulties that the TIER people have with Click: they need to wait for events to take place, and things are more synchronous. Because we use the hardware directly, I'm able to get 70% or greater channel utilization in a two-slot network. The slot configuration is dependent on the hardware that you have; not all Atheros MACs are the same, and you can't use all Atheros MACs to do this. You have to find ones that are a certain revision or later; I don't recall exactly
which rev it is, but you're also limited by the hardware capabilities. So when you're calculating time slots, the granularity of the hardware timers comes into play, and you're limited in terms of how accurate you can be and how much slack you have to throw in as a guard interval to ensure that the slots don't slide and collide with each other. [Audience] What do you mean by two times 10 ms? The typical configuration is a two-station, point-to-point network. Two times 10 ms means that in the typical configuration we use for a point-to-point network, you have two slots in alternating use, so each station gets to transmit round robin, and those slots are 10 ms apiece. A station gets to transmit for 10 ms, then it listens for 10 ms, transmits for 10 ms, listens for 10 ms. We can go down as low as 1 ms, which reduces the latency, because if you have packets to transmit and they don't fit in the slot, you have to wait for the next slot to come around. However, due to the granularity of the hardware timers, the slots actually have to grow a little bit in order to deal with round-off in the calculations, and you also have to take into account the propagation delay between the stations, so the effective slot time grows. What you want to do is find a sweet spot between the slot length and the effective use that you can get, and what we found is that 10 ms slots work pretty well: you don't get extreme latency on traffic, so for example TCP is not significantly affected by this scheduling, but you can get very effective channel utilization. As I said, most of the tests I've done are on a wired test bed using a link emulator that simulates noise, multipath, and other environmental effects, and I use a two-slot, 10 ms configuration, typically with 24 megabit OFDM, because that's the highest-power transmit capability for the radio, and we can get channel throughput of about 17 megabits per second or more, so that's an
That's an effective 70% utilization of the channel, with both sides transmitting as fast as they can. The other important issue when you're running a TDMA network, of course, is that you need the slots synchronized, that is, the clocks scheduled so that you don't transmit while somebody else is transmitting, because otherwise you get collisions. We are not listening for carrier; we simply jump on the air and transmit when we're scheduled. So if you transmit while somebody else is transmitting, you will collide, and since the radios are half duplex you can't hear the other side, so you'll get packet loss. You need the slots scheduled, and we're able to do that using some techniques that I'm not allowed to give out just yet, but using the hardware we're able to get very accurate synchronization.

The system itself is self-configuring. I've made 802.11 modifications so there are actual protocol messages, information elements in the beacons that are transmitted, so you can identify TDMA networks. The TDMA network can coexist with 802.11 networks in the sense of recognizing overlap with them, moving off channel, coordinating with other 802.11 networks, and sort of cloaking itself so that regular 802.11 stations don't try to join it. However, you can't have a TDMA network coexist with a regular 802.11 network on the same channel, because the TDMA network will not honor CCA; it will not avoid transmitting over stations already on the air, so you'll get high collision rates and basically destroy the effectiveness of the other networks. For our purposes that's not really important, but for operation in a crowded environment, or where things are highly regulated, it means the TDMA implementation is not really an option; it really has to be out in the middle of nowhere. Part of the work we did makes sure the system is aware of other networks and gets off the channel when it sees overlap.
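As a rough illustration of how a TDMA network could advertise itself in beacons, here is a sketch of packing a vendor-specific information element (element ID 221, the standard 802.11 vendor-specific element). The OUI, subtype, and field layout are entirely my invention for illustration; the actual element format used by the project was not published.

```python
import struct

ELEMENT_ID_VENDOR = 221          # 802.11 vendor-specific information element
EXAMPLE_OUI = b"\x00\x11\x22"    # placeholder OUI, illustrative only

def tdma_beacon_ie(slot_ms, num_slots, my_slot):
    """Build a hypothetical beacon IE announcing TDMA slot parameters."""
    # Payload: OUI, then subtype byte, slot length in ms (16-bit),
    # total slot count, and this station's assigned slot.
    payload = EXAMPLE_OUI + struct.pack("<BHBB", 1, slot_ms, num_slots, my_slot)
    # Standard IE framing: element ID, length byte, then the payload.
    return struct.pack("BB", ELEMENT_ID_VENDOR, len(payload)) + payload

# A two-slot, 10 ms network as described in the talk, station in slot 0.
ie = tdma_beacon_ie(slot_ms=10, num_slots=2, my_slot=0)
```

Ordinary 802.11 stations skip vendor elements they don't understand, which is consistent with the "cloaking" behavior described above: the TDMA stations can recognize each other from the element while regular stations see nothing joinable.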
That's really all I have to say about what we've done. What I want to do now is talk a little bit about the test deployments. There have been several, some successful, some not so successful. The most recent one, in Venezuela, was actually highly successful, and that's what I'm going to talk about. I'll give you a brief list of deployments that we did (this is RCP, not TIER), and then I'm going to show you a slideshow of some of the TIER work.

We've done some deployments in the Bay Area, most of them between hilltops, although there have been some up-down kinds of links. You have to remember that wireless doesn't go through obstructions like mountains, so when you want to do multi-point you have to go to the top of the mountain and then down again. We did some test installations in Ghana, in the refugee camps, where they had to go over hills, so they had multi-hop, multi-point installations; those had mixed results. We have some stations in South Africa, which I don't know a whole lot about, but it's multi-station, again with a relay involved. In Panama there's a pretty short point-to-point connection. The most recent tests were actually super long distance, down in Venezuela, where we were able to run some really impressive tests over 279 kilometers with full bandwidth of 6 megabits bidirectional, 3 to 4 megabits in each direction. That's one of the ones I'm going to show pictures of, so I'm going to switch to the slideshow.

All right, these are slides from trials that the TIER people did. The first slides are from installations in the Bay Area. For those who are familiar, there is a naval station at Point Richmond; UC Berkeley and Intel arranged access, so some of the very first towers they raised and installations they did were at Point Richmond. These are the TIER guys, all graduate students at UC Berkeley. It actually took them three tries to get the towers up and working properly, and of course
this is all experience they used to do deployments in other places, like overseas. As you can see, this is all very hands-on; a lot of stuff was built in the machine shops at the Intel lab, things like cable connectors that had to be handmade. I wasn't involved in any of this; this was the part of the group that was doing things before I showed up at the lab. And of course here is everyone proudly standing in front of their tower after it was erected, after the third try. Some of the guys I'm going to talk about later are in this picture. There are two people from TIER who were significantly involved in the RCP project: Matt Podolski did all the web GUI development and has been a mainstay in our project, and another person from TIER, Rabin Patra, was a big help as well in some of the protocol design.

This is actually on top of the Intel building in downtown Berkeley. Intel has a suite on the top level, in the penthouse of the PowerBar building, and you can see there are sweeping vistas of the Bay Area; the Golden Gate Bridge is off to the right, in the cloud cover. This is a view of Mount Tam from the top of the building. They set up antennas on top of the building to communicate with Point Richmond, and here are some of the grad students setting things up. I'm afraid of heights, so I'd never go up there or anywhere near any of this stuff, but as you'll see later on, there are pictures of some pretty impressive stunts, people climbing things. Again, these are high-gain directional antennas; the boxes are mounted directly behind the antennas, because you want the cables as short as possible, not only to minimize loss on the cable but also because of lightning strikes. You have to take a lot of environmental conditions into account when you put these things up outside. You can see the cinder blocks there. Here are some slides of the top of my
colleague Kevin Fall's roof; he's got quite a few antennas up there. There are Rob Bean and Michael Rosenblum, sitting on top of the roof, training antennas on the Point Richmond station. I think the distance to there is probably less than 30 kilometers, so that's not too far. Here are some slides from the top of Mount Tamalpais over in Marin. They have a lockbox at the top of the tower at the top of the mountain where they can set up antennas; this one is just sitting on a post, and I'm not sure exactly what they have for a complete installation, so this might have been set up just for a trial. There are also some people who live in Marin, with houses up on the hills, who have installations.

This is a set of trials that were done in Guinea-Bissau, in Africa. This area is once again an example of an emerging region; there's not a lot of money there for putting in infrastructure, so they're looking for low-cost ways to deploy networking. This gives you a flavor of what it's like in that area. When you're out there doing installations you're really on your own; you might have a handheld spectrum analyzer like this Anritsu, looking at the spectrum for noise and so on, but you really don't have a whole lot of tools. This is a shot from the tower looking down. Remember, I've mentioned several times that once you put these boxes up, you don't want to have to go up there to work on them again. This is the control-room area; the guy working has a soldering iron. It turns out the room is shared with the local radio station. Another issue you have in these deployments is power: these guys are running the laptop off car batteries. Here's another picture from the top of the tower, looking down on the village, so you can see what the environment is like. Once again, this is in Guinea-Bissau, I think mid-to-late 2006, and there's the radio station where they left the controls for the wireless setup that was up the tower. I think
this stuff is still in use, but I don't know. This is the last set of trials, in Venezuela, where four test installations were done. This is the first site, erecting the mast with the high-gain antenna. Rabin Patra was the TIER person involved, and there's a group of people in South America who sponsored the work; Rabin's on the left. They use whatever tools they have to get the antennas up. I don't recall exactly which site this was, but this is the view over the valley to the far side, where they're going to put the other tower. The first test link that was put up was 279 kilometers, and later on they put up some others. As you can see, the conditions are not the best: there they are huddled over a spectrum analyzer, it's pretty wet out there, and it got even wetter. This is them running link tests over 279 kilometers using a laptop in the rain; it must be foggy. I don't remember exactly what they used. This is the next morning, when it started to get cold. As I said, they were able to get 3 to 4 megabits a second; they ran 802.11b, so it had to be 2.4 GHz. You can see the antennas set up.

The second installation they did was 382 kilometers, which you'll see coming up next. When they set that up they were up higher, and the conditions got even colder. This is Rabin posing with the equipment; as you can see, it's pretty spectacular up there on top of the mountains, and the weather is kind of inhospitable. Here he is on the 382-kilometer trial; Rabin said it was 0 degrees C right there when he ran the test. This is the morning after, with the sun coming up over the hills. At 382 kilometers they were able to get nearly identical results to the 279-kilometer link, but the RCP box apparently didn't work while the TIER box did, so we need to look at the results to understand why. These are just more photos. The last thing they did: this is Rabin carrying
the antenna up the hillside. They decided they were going to try 400 kilometers, and carrying this stuff up at that altitude is non-trivial. They set it up at the top of the mountain and tried the 400-kilometer link, but apparently it didn't work; we still need to figure out what the problem was. Here they are standing proud with their installation at 400 kilometers, and I think these results are really quite spectacular. This is them packing up to go. By comparison, the official long-distance record, I think, was set at DEF CON a couple of years ago, and it was over a significantly shorter distance; I don't remember the exact figure.

Okay, I'm going to go back real quick to finish the slides; there are just a couple more about what the project was like, what the difficulties were, and what we did. I'm actually going to go back to doing more work on this project in the fall. One of the things we're going to do: there's a person at the Intel Research Lab, Alan Mainwaring, who has a project going on with steerable antennas, and we're going to integrate his steerable-antenna work with the RCP. You can find information about his work on the web; if you go to the Intel Research Lab site, I'm sure there are papers that describe it. One of the things we learned from the trials (a lot of this stuff is work in progress) is that setting up relay installations is really kind of hard. When you have multiple radios and you have to figure out which antenna goes in which direction, get things lined up and trained, plus get traffic routed through, it turns out to be quite complicated. So we're going to work on the systems that are acting as relays, that is, the ones with multiple point-to-point radios acting as part of the backbone. In the current RCP box, as I said, the focus has been production quality for more or less foolproof deployment, so we haven't incorporated all of the work
from TIER, but we're going to look at incorporating some of it, the bulk ACKs in particular. The TIER people have also looked at multi-radio scheduling, synchronizing the TDMA slots so that if you want to operate multiple radios in the 2.4 GHz band (something I haven't discussed) you can ensure you don't have collisions between radios running simultaneously, because otherwise you can't run them at the same time. Another thing I want to look at is the new Atheros parts that support 802.11n. They have more hardware capabilities, including finer-grained, higher-resolution timers, which will allow us to do better TDMA slot calculations; we might also be able to leverage the block-ACK support in hardware for acknowledging the TDMA transmits. One of the key things we came up with, which we sort of knew all along but which became all the more clear from experience, is that when you're setting up these systems, a really critical and really hard task is getting the directional antennas optimally pointed where you want them. We have some ideas about how to build systems using sound that can help us do that; it's really not a hard problem, but it's something that's lacking from our system, and we think it will really help with deployment. And last, one of the things we've talked about doing is integrating our system with existing mesh implementations, like the stuff for OLPC and the Meraki networks; we can act as a backbone and tie into existing mesh networks.

A lot of this work is obviously not my work; the RCP work is mine, but a lot of people have contributed significantly, and they're listed here: the people at the Intel lab that I've worked with; Kevin Fall and Eric Brewer, who are in charge; the UC Berkeley TIER project; Gateworks, where the prototype boards came from (Ron Ellsworth was a big help); and Jim Thompson and NetGate, who helped with other things. He's
been very kind in donating cards and providing samples for test and evaluation.

So, the last question, which I'm sure everyone wants to ask: can you get this stuff? (And no, that's not why I didn't come to Copenhagen.) First of all, this is not a product. I have to say it multiple times: this is not a product. It's a research prototype, and its availability is unknown. The TDMA implementation I've described uses all publicly available components, so you can do it yourself with a little thought, and if someone wants to try, I'm more than happy to help them. I can't give out the code that I have, but I can certainly help people try to do it themselves. The other question, beyond "this is kind of neat", is: what does it have to do with a BSD conference? Well, all I can say is that if we move forward, there's a very good chance that instead of using Linux, the RCP platforms may be BSD based. There's no deep reason for Linux; it was simply that, at the time, FreeBSD and the other BSD systems didn't work on the hardware, and there are some advantages to Linux for our needs in terms of the students' familiarity with it and so on. But in the future this may turn out to be a BSD-based system as well. Anyway, thanks for listening, and I hope you enjoyed it.