My favorite time with Jeff was in Australia, just before he accepted the research position in Buffalo. He and his wife spent three months with us. Three months, but it felt like half a year. North Sydney Beach is a really beautiful place; they had a house overlooking the ocean. He did nothing but cook for three months in a row, sort of recovering from the hard work he had done. I've never actually recovered myself.

Anyway, back to the research stuff. I'm going to be talking today mostly to systems people, but I made a mistake and advertised this talk to a more general audience as well, and they all came. So I'll spend the first half of the talk on them, covering the applications that motivate all our development of platforms and system services. The second half of the talk will be more systems oriented, on one of the new platforms we developed at CSIRO and the mechanisms we developed to improve communication and computation on sensor network platforms.

Jeff gave you a bit of an overview of where I'm located at the moment. I work for CSIRO, which stands for Commonwealth Scientific and Industrial Research Organisation. It's the largest government-funded research organization in Australia, and it's been around for quite a long time. They started out working in astronomy, building radio dishes for astronomical observations. There's a movie about it called The Dish; if you're interested in seeing it, it's quite a funny movie. At the moment we have around 6,000 employees, a lot of people with PhDs, organized into different scientific divisions: many people working on mining and minerals, many studying oceans, climate, and ecology. Our division, Information and Communication Technology, is relatively small within CSIRO, only around 300 people. Our lab in particular is one of the four labs in this division. It's located in Brisbane, which is roughly halfway between south and north in Australia. Really nice weather, it never snows; today is the coldest I've felt in the last two years. And it's just a short drive from the Great Barrier Reef, so there's great stuff to see on the water there as well.

The lab I work at is called Autonomous Systems, and it merges two related areas. One of them is field robotics: we have people working with autonomous vehicles, from ground to marine to UAVs. It's quite impressive, actually; some of those are quite big. We have this huge Hot Metal Carrier that just autonomously drives around the campus, working on different tasks. That's about two-thirds of the lab, and the other third works in pervasive computing, which was formerly called sensor networks. The area of interest of our team within the lab is autonomously operating networked systems that span spatially large areas with no infrastructure, so they have to self-organize over multiple hops and deliver information to the user in a useful form.

CSIRO is actually quite well known in the sensor networks community for a number of deployments. Historically there was a lot of interest in microclimate-type networks. We have one deployment, which will come up in later slides, that we've been operating now for almost three years: 250 sensors deployed in the rainforest.
It's been delivering data to biologists who study the rainforest for over three years now. We also moved from purely stationary deployments into cattle tracking and other agricultural applications. This was actually quite exciting compared to the stationary networks, because it requires new protocols to handle mobility, and it still operates in environments with no infrastructure, so you need to provide your own networking and your own charging capabilities. We have sensors that we put on collars of the cattle that can track the animals' locations. We also have a way of controlling the movement of the cattle through small electrical stimulus devices that can turn the cattle around, allowing farmers to draw a virtual line instead of putting up a physical fence to constrain where the cattle graze. We also put sensors inside animals: we designed nodes that the cattle swallow, measuring CO2- and methane-type greenhouse gases, so that farmers can correlate the food the animals eat with the amount of greenhouse gas emissions they produce and develop better techniques for the agriculture industry. More recently we also moved into urban spaces. We've been looking at energy efficiency in buildings, at usage patterns of how people use appliances, and at ways of controlling HVAC systems intelligently based on the occupancy of the building or the thermal comfort of the individuals present.

OK, so what I'll do now is give you an overview of a couple of these projects. One is the Springbrook project, to give you an idea of the scope and scale of this work, and then I'll follow up with two projects that we started about three months ago, new and exciting.

Back to the Springbrook project; this is a video that was prepared to present it. We started looking into this area in southern Queensland, a piece of land which had been used for a hundred years for agriculture. There used to be a rainforest there, which was ecologically important, and an ecological movement grew around it; the government is providing a lot of funding to buy out the agricultural land and restore the environment to its original state. So we worked with ecologists who got this land from the government, close to the coast. It's an interesting area spatially. This is a deployment of around 250 nodes in an area of about three by a few kilometers. It's interesting because the elevation map is quite varied, so within small spatial distances there is large variation in the climate: there are areas which are sunny and warm, and a hundred meters away it can be really cold and rainy. Originally the biologists were using satellite data with coarse resolution. With sensors, we let them see this kind of data, where you see in a very detailed way the temperature, humidity, or soil moisture gradients, and they can plug that into their models of how the plants are growing. They have different areas in this space where they are trying different strategies to bring back the original native plants, and they are studying which methods work better than others. These are just examples of some of the sensors that they deploy.
It's all autonomously powered by solar panels, and it's all waterproof. What's actually quite interesting is that we stopped all maintenance about a year ago, and I think about 95 percent of the nodes are still working, with no maintenance at all over the past year.

There wasn't really any commercial application here; it was more of a technology demonstration, and we helped ecologists study some of their techniques. But what you can do with this technology, and what we started applying it to this year, is mine rehabilitation. Some mining people saw presentations of our system working, and it turned out they had this huge problem. The way mining works in Australia is that mining companies lease the land where the ores or other resources they're mining are located. Part of the agreement is that they pay for the lease, but also that at the end of the mining operation they need to restore the environment to its original state. No one defines how this process is supposed to work; they just need to convince one of the government officials that the environment is healthy and is going to be able to keep growing in the years to come. Until they convince them, they have to keep paying the lease for the land. At the moment they use biologists to estimate how the environment is going to develop, but they wanted sensor networks allowing them to quantify a bit more what the outcome of rehabilitation will be. Potentially, if they can project the growth of the forest with small error over a number of years, it will let them finish rehabilitation better and faster and save resources. Also, if you have more detailed data from the environment, you can proactively discover problems: if part of the soil is not suitable for a given plant species, you can address the problem before it's too late and the whole forest dies off.

Here is the process we are following. There is a system that was developed by the Australian Department of Energy Management, which used satellite weather data and a simulator to predict the rehabilitation outcome and produce a report card that mining companies submit to the government. The accuracy of this satellite weather data was not very high, so there was a huge variance on the input data, which resulted in a large variance in the outcomes. The hope is that when we replace the coarse-grained satellite data with fine-grained data, just as I showed you for Springbrook, you'll be able to predict with much better accuracy and confidence what will happen to your ecosystem in the future. And here is the status: we have around 25 nodes deployed, so the biologists can already start downloading the data and looking at it. So this is an example of how a technology demonstration project can sometimes lead to an industrial application that brings funding from industry.

Another example, which we coincidentally also started three months ago, is long-term tracking of flying foxes. Flying foxes are these large fruit bats. They are nocturnal animals: they are active during the night and sleep throughout the day.
They are huge, beautiful animals, and, as I was telling Murad, my wife and I often have dinner on the balcony, just looking at the Brisbane city skyline, and you see this huge stream of these vampires descending on the city, spreading around and eating mangoes. Luckily, mangoes. Now, it turns out there are a lot of problems in Australia associated with flying foxes. They are a native species, and all native species, both animal and plant, are protected in Australia: you can't kill them, you can't remove them, and you need a special permit to change their natural distribution. The special problem with flying foxes is that people don't really know much about them.

One reason to track them is that they spread diseases; I think they can spread diseases such as Ebola. The one that was actually in the news in Australia recently is the Hendra virus, which spreads from flying foxes to horses through feces: the flying foxes eat fruit and defecate under the trees, horses come, eat the grass, and get sick. Then the horses die, which means a lot of damage to the horse industry, but the disease can also spread to humans. There have been a couple of deaths in Australia, so it became a politically important project, and there was also some government funding to study flying foxes.

The problem with flying foxes is that they cover huge areas in Australia; they live almost everywhere. This is the distribution of four different species of flying foxes in Australia, and it continues all the way to Asia. They cover large distances every night, and they live this nomadic lifestyle where they stay in one camp during the day for about a week and then move somewhere else. So it often happens that people see flying foxes in their garden for a week, and then they all disappear, and people start complaining that all the flying foxes are dying off, whereas the farmer next door suddenly gets all the flying foxes, so he's complaining that there are all these flying foxes and someone should be killing them. No one really understands whether they are endangered or whether they should be culled. So the ecologists proposed this national flying fox tracking program where we deploy a number of GPS tracking devices on them that will let us study in a better way how they use the habitat, what the interactions between individual flying foxes are, and how they interact with farm animals. They managed to get a lot of funding from the government for the deployment, and then we managed to get a lot of funding to develop the technology.

There are a number of challenges. Obviously, even though a flying fox has a one-meter wingspan, its weight is limited, which limits the weight of the device we can put on them to 30 to 50 grams. I already mentioned the mobility: they travel large distances during the night, which we ought to be able to handle. And it's a truly remote deployment: they expect some of the animals to cross to Indonesia or Thailand and never come back, and potentially some of them will come back, but not all. So we want to exploit the fact that they meet somewhere outside our coverage zone, so they might be able to bring back data from other flying foxes as well.
Also, because of the mobility, we are constrained by the size of the device: the battery we can put on these collars is quite limited. So we are thinking about how to harvest some energy along the way, to enable perpetual operation of these devices.

When we decided on the platform, we decided to put other sensors besides GPS on it, mostly because the additional sensors cost us little in terms of weight, and we chose sensors that are relatively cheap to run in terms of energy. What these sensors let us do is detect the activity of the flying foxes and then find the locations of activities, as opposed to just periodic locations over their lifetime. For connectivity, we decided to use a low-power 900 MHz radio, which is customary for sensor network deployments. This is as opposed to 3G chips, which are heavy and still require a lot of energy, or satellite chips, which have nice coverage but are expensive in terms of bandwidth and energy. And, as I mentioned, we are aiming for perpetual operation using solar energy harvesting.

The research questions we are looking at from an ICT perspective are, first of all, how does mobility influence the basic modalities we rely on, such as GPS localization, radio communication, and solar harvesting? Then, stepping up from these basic modalities, how do we combine inputs from multiple different sensors, for example GPS, inertial, and audio, to detect activity type and location? And finally, by observing the interaction patterns of the different flying foxes, can we learn something about their social dynamics and then apply this knowledge to improve information return from these devices: how do we set up our routing protocols, data muling protocols, and compression protocols so that we maximize information return?

The project is at an early stage so far, with very basic testing. We purchased a bunch of UAVs from Germany that let us do controlled experiments with 3D mobility. One result, which we expected, was that the speed of the device, or the relative speed of two communicating devices, doesn't influence the radios much. Here on the x-axis I'm showing the range between the base station and the mobile node, on the y-axis the strength of the radio signal, and the different colors correspond to different speeds: blue is 0 to 2 meters per second, then 2 to 4, and then above 4. We couldn't really see any dependence on speed, which was a good thing for us. We also looked at the GPS lock time: how long does it take for the GPS to get a lock? On the x-axis is how long the GPS was turned off; depending on that, the GPS has to do either a cold start or a warm start. The good news was that we could get a GPS lock in about 5 to 15 seconds no matter how long it had been off; there was a growing tendency, but it was still acceptable. Then we also looked at how much energy we can harvest on a node attached to a flying fox, as opposed to a node placed stationary on the ground. For the node on the ground you see these spikes: this was in a cage, and as the sun moved, the cage bars cast shadows on the solar panel. It looks like we can harvest very little energy, but we calculated that even with this little energy we can afford a periodic GPS fix throughout the whole day. So again, encouraging results.
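To make that last claim concrete, here is a minimal back-of-the-envelope sketch of the energy budget. Apart from the 5 to 15 second lock time quoted above, every number here is an illustrative assumption, not a measurement from the project:

```python
# Hypothetical energy-budget check: can a small solar harvest fund periodic GPS fixes?
# All constants below are assumed for illustration only.

HARVEST_MW = 5.0           # assumed average harvested power on the animal (mW)
DAYLIGHT_HOURS = 6.0       # assumed effective harvesting hours per day
GPS_FIX_CURRENT_MA = 25.0  # assumed GPS receiver current during a fix (mA)
SUPPLY_V = 3.0             # assumed supply voltage (V)
FIX_TIME_S = 15.0          # warm-start lock time, per the 5-15 s result above

harvested_j = HARVEST_MW / 1000.0 * DAYLIGHT_HOURS * 3600.0   # joules per day
fix_cost_j = GPS_FIX_CURRENT_MA / 1000.0 * SUPPLY_V * FIX_TIME_S

print(f"harvested per day: {harvested_j:.1f} J, cost per fix: {fix_cost_j:.2f} J")
print(f"fixes affordable per day: {harvested_j / fix_cost_j:.0f}")
```

Even with these deliberately pessimistic numbers the harvest covers on the order of a hundred fixes per day, which is the spirit of the calculation mentioned in the talk.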
I also have early data-processing results, where we were able to detect some activities from inertial data. Here the x-axis is basically time, the y-axis shows the values coming from the accelerometer, and the activities we were able to detect are really interesting, although expected: most of the time the flying foxes are hanging upside down in the camp, but when they want to urinate or defecate, obviously they turn around, and so it's really easy to detect that with a simple algorithm. Fighting we can also distinguish quite well from resting periods, and we expect we'll be able to detect mating once we have video to validate against. We also looked at the microphone track: we have a microphone on the tag that we can sample at up to 16 kHz, and using simple features such as zero-crossing rate, which is basically a proxy for frequency, as well as mean sound level and duration, we were able to distinguish the calls that the flying foxes make.
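As a rough illustration of how simple these two detectors can be, here is a minimal sketch. The function names, the sign convention, and the feature extraction details are my assumptions, not the project's actual code; only the 16 kHz sampling rate and the feature choices (zero-crossing rate, mean level, duration) come from the talk:

```python
import numpy as np

def is_upside_down(acc_z):
    """Flying foxes rest hanging upside down; a flip (e.g. to defecate) shows up
    as a sign change on the gravity axis of the accelerometer.
    The sign convention here is an assumption."""
    return np.median(acc_z) < 0.0

def call_features(samples, rate=16000):
    """Audio features mentioned in the talk: zero-crossing rate (a proxy for
    dominant frequency), mean sound level, and call duration."""
    samples = np.asarray(samples, dtype=float)
    # Count sign changes between consecutive samples, normalized per second.
    flips = np.count_nonzero(np.signbit(samples[:-1]) != np.signbit(samples[1:]))
    zcr = flips * rate / len(samples)          # sign changes per second
    level = float(np.mean(np.abs(samples)))    # crude mean amplitude
    return zcr, level, len(samples) / rate
```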
OK, so that's it for the cool slides; non-systems people can go back to sleep, or go to sleep, because I'm switching to the systems part. Hopefully I've convinced you that we've done a lot of deployments in the real world, and making these systems work for long periods of time, unattended, is a challenging problem, most of all because there is a complex and time-varying relationship between the environment and system performance. The lessons learned from these deployments all trickle down over time into our platform design.

When we look at the status quo in existing platforms, we see different approaches to how people tackle this complexity in the environment. One good example is communication. In particular, there are protocols out there now that let you collect data with high reliability. A good example is the Collection Tree Protocol, CTP, which is quite famous in sensor network deployments; it was tested on a number of testbeds many times, and there was a comprehensive paper evaluating this protocol which showed that you can receive all the data except for 0.1% across the whole network. However, the reason this works is that a lot of effort is spent making this reliability happen: estimating link qualities through control packets, using acknowledgments, and a lot of retransmissions if links fail. So we do get reliability, but at a high cost in communication.

If you look at computation in sensor networks, the problem is even bigger. If you just survey sensor network applications, you'll find this basic dichotomy: people believe you can have either a low-power deployment or high performance, but not both at the same time. The reason is that the main focus has been saving energy on the individual devices, and that's why people have stuck for so long with low-power MCUs; just look at the examples from Atmel and TI that limit the memory and clock frequency of the MCU. The consequence is that most sensor network applications out there are limited to sense-and-send or sense-store-and-send applications, where you sense something from the environment, don't do any complicated processing, and just forward it to the base station, which does all the processing. So these are the two basic problems we were looking at.

Can we improve reliability without requiring the additional overhead, and can we break the dichotomy of low power versus high performance in a new sensor network platform? Now, if you look at the development cycle of a typical sensor network platform, whether it's from UC Berkeley, which was behind most of the development of sensor network platforms, or from elsewhere, you will find that the improvement in platforms was mostly manufacturer driven. As manufacturers, for example Atmel, came out with a better, faster MCU, more memory, or a 16-bit MCU, and similarly for radios, as they came up with higher throughput and longer links, the technology trickled down to the platforms and was progressively added. What we looked at in this work was whether you can break this incremental approach to improving existing platforms, whether you can make more of a step change. We looked at two basic areas. One of them was data processing, where we wanted to develop an energy-efficient platform that would be low power but would also allow higher-end processing, so that you can step beyond sense-store-and-send applications. In terms of data collection, we wanted to improve the reliability of radio links and develop protocols that are inherently more reliable than the single-chip solutions that are customary in sensor networks.

I'll talk about both of these areas briefly, starting with data processing. There is a law that everyone in this room knows, which is Moore's law, which people mostly understand in terms of computation: how many cycles per second, how many computations per second you can do. What's actually behind this law is that the density at which we can produce transistors at the most cost-effective rate is doubling roughly every two years. What that means is that the distances between components are getting shorter, so latencies drop, frequencies rise, and computational power grows. The other aspect of this is something called Koomey's law, by a Stanford professor, who noticed that by increasing the density of the transistors we are shrinking the area of the MCU, and since the power dissipated is limited by the area of the die, we are also improving energy efficiency. When Koomey looked at energy efficiency, he noticed that at a fixed computing load, the amount of battery you need falls by a factor of two every one and a half years, so it's actually a bit faster than Moore's law, and it's what led to the explosion of mobile devices in recent years. What was even more interesting is that Richard Feynman actually looked at the theoretical limits in '85, and he found that an improvement of 100 billion times in the energy efficiency of computation is possible. What that means is that we still have a long way to go, because since then we've only improved by a factor of about 40,000.
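Written out, the two scaling claims above look like this; a sketch, using only the 1.5-year halving time and the Feynman and Koomey figures as quoted in the talk:

```latex
% At a fixed computing load, the required battery energy halves every ~1.5 years:
E(t) = E_0 \cdot 2^{-t/1.5}
% Headroom implied by Feynman's 1985 estimate: of a possible 10^{11}x improvement
% in the energy efficiency of computation, only ~4 x 10^4 has been realized:
\frac{10^{11}}{4 \times 10^{4}} = 2.5 \times 10^{6} \approx 2^{21.3}
% i.e. roughly 21 more halvings, or ~32 years at the historical rate.
```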
Anyway, what I'm trying to get at is that if you look at this curve, from early MCUs through laptops and mobile computers today, you see this trend of improving efficiency. So if you are not increasing performance, if you are not after more computation, you can actually get much better energy efficiency with today's devices than what we used to be able to get a few years ago. What this necessarily leads to is that modern 32-bit MCUs will eventually catch up with low-power MCUs in terms of energy efficiency, and that's what we looked at. The problem is that for the low-power MCUs traditionally used in sensor networks there isn't a whole lot of investment, so they are not making as much progress as mainstream MCUs, and there are now 32-bit MCUs based on the ARM architecture that are almost beating the old-style MCUs in terms of energy efficiency.

So for Opal we took one of these chips, the most energy-efficient one we could find, and compared its performance to old-style sensor network platforms. Here is what we arrived at. The MSP430 is an old-style 16-bit processor used in a platform called TelosB. When you just look at the datasheet numbers, you'll find that for basic operations such as ADC sensor reading or logging sensor data to flash, the TelosB performs much better than the new MCU: it uses 15 times less energy. However, if you look at other figures, such as the sleep current it achieves, the ratio is more favorable: the new MCU is only 4 times as bad. And when you include the radio in the picture, which consumes basically the same amount of energy on the two platforms, it's only around 1.5 times as bad. So if you look at the current of the whole platform, which consists of measuring sensor data from time to time, sleeping most of the time, and checking the radio occasionally, you'll find that the two platforms are not far from each other.

So here we look at the ratio of energy used by the new platform versus the old platform, for different parameters of how often you sample the sensors and how often you sample the radio: sensor sampling is on the x-axis, with further right meaning less frequent sampling, and the radio sampling rates are the different curves, with lower curves sampling the radio more often. For typical radio sampling values, which are these two curves, you see that as soon as you require sensor samples around every half minute, which most sensor network applications do, you quickly converge to something like 2 times the old platform, which is not too bad, and in some configurations the overhead comes down to only around 5 percent, which was a good result.
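The convergence argument can be sketched as a simple duty-cycle model: the average current is a weighted sum of sleep, sensing, and radio-check currents, and once the (identical) radio term dominates, the MCU difference washes out. All current and timing values below are illustrative assumptions, not the datasheet numbers from the talk:

```python
# A minimal sketch of the duty-cycle comparison behind the new-vs-old platform curves.

def avg_current_ma(sleep_ma, sense_ma, sense_s, sense_period_s,
                   radio_ma, radio_s, radio_period_s):
    """Average current of a node that mostly sleeps, samples a sensor every
    sense_period_s seconds, and checks the radio every radio_period_s seconds."""
    duty_sense = sense_s / sense_period_s
    duty_radio = radio_s / radio_period_s
    duty_sleep = 1.0 - duty_sense - duty_radio
    return sleep_ma * duty_sleep + sense_ma * duty_sense + radio_ma * duty_radio

# Hypothetical "old" 16-bit platform vs "new" 32-bit platform; the radio draw is
# the same on both, and it dominates, so the ratio lands near 1 despite the new
# MCU sleeping 4x worse and sensing faster but hotter.
old = avg_current_ma(sleep_ma=0.005, sense_ma=2.0, sense_s=0.01, sense_period_s=30,
                     radio_ma=20.0, radio_s=0.005, radio_period_s=0.5)
new = avg_current_ma(sleep_ma=0.020, sense_ma=30.0, sense_s=0.001, sense_period_s=30,
                     radio_ma=20.0, radio_s=0.005, radio_period_s=0.5)
print(f"new/old average current ratio: {new / old:.2f}")   # ~1.07 with these numbers
```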
What you get in return is energy efficiency at signal processing and at computationally intensive applications. The reason is that these are 32-bit processors implementing single-cycle instructions, for example single-cycle multiplication, where a 16-bit processor will spend more time crunching through the data. So in this experiment we looked at typical processing kernels that you would use in signal processing code if you wanted to turn raw sensor data into something more sophisticated: Huffman encoding, FFT, linear regression, DCT, operations like that. You can group these operations based on whether they are dominated by 32-bit operations, use floating-point data, or are dominated by 8-bit operations, and they correspond to these classes: the Huffman encoding we did on an 8-bit data stream, so it was heavily 8-bit dominated, then we had the floating-point operations, and then the 32-bit operations. We found that as we tried to do more and more computation, the new platform starts outperforming the old platform by a number of orders of magnitude. And you not only get more energy-efficient computation with the new platform, you also get a speedup in how long the processing takes, so you can do more in a given amount of time. OK, so I'm going to stay at this high level so that you guys don't fall asleep, and I'm going to switch back to communication, the second problem we tried to address.

Can you just go back to the previous slide? There's clearly some interaction between this and the stuff we talked about before as far as sampling rates. I'm going to have this huge amount of data to process if I'm sampling once every 30 seconds, so the higher-power platforms are winning on the processing task but losing on the sampling task. Where's the crossover point?

Right, so one way to see it is that you could collect data in the flash and do the processing once every 5 minutes or so, or once every hour.

But then, even if you have a high factor, the absolute amount of power you spend is still small.

Yeah, it's a good question. The sampling frequency dictates how much of a saving you're going to get, based on what the computation is; it just takes time to gather the data.

I guess the other thing I'm wondering: there have been people in the architecture community looking at accelerators for some of these operations, essentially augmenting a more general-purpose core. There's something nice about using these general-purpose CPUs, but none of them are really optimized for these types of operations.

You mean you could use FPGAs or something like that to process it?

Well, no; if you used FPGAs you'd just buy a chip that does that. I mean something baked into the silicon. I don't know if Mark wants to comment on that, he's been working on this for a long time: what accelerators would you want on your sensor node? DFTs, for example, a very common operation.

Well, then the question is whether these chips are available commercially, off the shelf.
Yeah, of course; it's going to be really cheap for you to put them together. One of the reasons is there is almost no cost difference between getting a Cortex chip and an old-style MSP430 chip.

Right, right, yeah. So the question we are trying to ask is: does it make sense to switch now, or is there still a lot of benefit in the low power of the MSP430 chip? And it looks like you might be able to get the best of both worlds.

OK, so now I'll have a few more slides on how we addressed the communication reliability problem. Again, the problem is that development in the platform space is mostly driven by manufacturers, who look at improving data rates, receiver sensitivity, and communication range, but rarely look beyond the physical link layer. We focused on the layers above the link layer to improve the reliability of the communication. In particular, what we looked at was including multiple radios per sensor node, operating in different frequency bands, and we tried to quantify what we gain by using, let's say, 400 MHz, 900 MHz, and 2.4 GHz radios, as opposed to just a single 2.4 GHz chip. We also used spatial diversity, spatially separated antennas for the different radios, and tried to study how that helps.

So, to give an overview of what radio diversity is and how it helps: I mentioned spatial and frequency diversity. With multiple antennas you see this more and more; Jeff has a three-antenna router at home, and you basically can't buy a Wi-Fi router with a single antenna anymore. It's a commercial technology that is taking over the old single-antenna designs, and oftentimes even small, cheap, low-cost sensor-type chips already provide receiver support for automatic antenna selection based on signal quality. Similarly for frequency diversity: we now get Wi-Fi routers that work on both 2.4 and 5 GHz and can automatically select the better band, though you see less of that trickling down to the low-power world. Time diversity is actually part of most of the standards, for example 802.15.4.

So why does diversity help? There is a simple conceptual idea. Suppose your transmitter can select between two links to the receiver, one with reliability p1 and the other with reliability p2, and the losses are uncorrelated. If you send the packet on both links at the same time, the probability that at least one copy arrives is 1 - (1 - p1)(1 - p2). So if each link has 90% reliability, the joint probability of success is 99%. That is the basic conceptual idea of why diversity helps improve the overall reliability of the radio.
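Here is that 90%-to-99% argument as a one-function sketch, assuming, as in the talk, that losses on the links are uncorrelated:

```python
# Diversity argument: with independent losses, a packet sent on all links
# is lost only if *every* copy is lost.

def joint_reliability(p_links):
    """Probability that at least one copy arrives, assuming uncorrelated losses."""
    p_all_lost = 1.0
    for p in p_links:
        p_all_lost *= (1.0 - p)
    return 1.0 - p_all_lost

print(joint_reliability([0.9, 0.9]))   # 0.99, the 90% -> 99% example from the talk
```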
Now we tried this. I mentioned Opal; the Opal platform has multiple radio chips that operate on different frequencies, with spatially separated antennas, and we ran basic experiments to see how the reliability behaves. Here we sampled packet loss rates at different locations in the lab, and on this graph we plot in blue the locations where the 2.4 GHz band was performing at 100%, so all packets got through, while the 900 MHz band was performing at 0%; the red is the other way around, and the green is where the two bands performed similarly. You see there are a lot of interesting areas: there are areas where the 2.4 GHz band performs much better than the 900 MHz band, and a point right next to it where it flips. That gives an intuition that over small spatial distances there is a significant difference in how the individual bands perform. So that hinted to us that both frequency and spatial diversity are important.

We also repeated a similar experiment in an open space. We were driving a car over one and a half kilometers with a node on the roof, and from the GPS locations of the car we put together this graph. It's really hard to see, but the green curve shows the reliability of the 900 MHz band, the blue curve 2.4 GHz, and the red one is the difference, which is the important one: if the red curve is above 0, 900 MHz is performing better; if it's below, 2.4 GHz is performing better. You see there is a lot of variation depending on your distance from the receiver. This only matters in a certain region, at distances where the signal strength is low enough to be influenced by environmental interference, but not so low that you lose connectivity completely. But this region was fairly large, around 400 meters, and in many of these places you could improve reliability by almost 100% by selecting the right link.

So those were motivational one-hop scenarios where we were able to demonstrate that both frequency and spatial diversity matter. What we then implemented was the Collection Tree Protocol that I mentioned, which implements multi-hop data collection from a static sensor network to a single root: each node continuously estimates the best parent through which to send data toward the base station, and each node forwards on behalf of the nodes behind it. We implemented this protocol using multiple radios, so the nodes were selecting not only the most suitable parent but also the most suitable radio for each transmission. We deployed this on our campus; this is the floor plan of our campus, with multiple buildings, a hill with a radar tower, and the Hot Metal Carrier moving around, as I mentioned, so it's a challenging environment with a lot of areas where you don't have line of sight between two nodes. For the same deployment we tested the network operating only with 900 MHz radios, only with 2.4 GHz radios, and with the dual band, and we got quite a big improvement: the 900 MHz links lost about 30% of all packets, 2.4 GHz was somewhat better, but the dual band was able to get almost all the packets through, at a lower total cost. So this was an exciting result, published in ITSM.
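A minimal sketch of the routing decision just described: standard CTP picks the parent minimizing the path ETX (expected number of transmissions), and the dual-radio variant picks over (parent, radio) pairs instead. The data structures and numbers here are illustrative, not the actual implementation:

```python
# Dual-radio CTP-style route selection: choose the (parent, radio) pair that
# minimizes link ETX plus the parent's advertised cost to the root.

def best_route(link_etx, parent_cost):
    """link_etx: {(parent, radio): ETX of that link}
    parent_cost: {parent: parent's advertised ETX cost to the root}."""
    return min(link_etx, key=lambda pr: link_etx[pr] + parent_cost[pr[0]])

# Toy example: parent B is nearer the root, but its 2.4 GHz link is lossy;
# the 900 MHz link to B still wins overall.
link_etx = {("A", "900MHz"): 1.2, ("A", "2.4GHz"): 1.1,
            ("B", "900MHz"): 1.3, ("B", "2.4GHz"): 3.0}
parent_cost = {"A": 2.0, "B": 1.0}
print(best_route(link_etx, parent_cost))   # -> ("B", "900MHz")
```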
What we looked at after this was how to improve the energy efficiency of this protocol. The disadvantage is that while we can improve the reliability of data collection quite a lot, we now need to run two radios, so if you implement it in a naive way, you pay double the energy for running both radios at the same time. So we had follow-up work that looked at whether we can improve the energy consumption of dual-radio versus single-radio systems, and we looked at so-called low-power listening in the protocol.

The way you typically address energy usage in sensor networks and similar systems is this. For two nodes to communicate, both need to have their radios on, one in transmission mode, the other in reception mode, and that is very energy consuming, because the receiver needs to keep its radio on at all times. In so-called low-power listening, the receiver turns its radio on only for a very short period of time: the node sleeps, not consuming any energy, then wakes up for a little bit, turns its radio on in receive mode, and if it doesn't receive any packet it goes back to sleep; if it receives a packet, it takes it and then goes back to sleep. This lets you save a lot of energy on the receiver, at the cost that the transmitter needs to transmit for at least as long as the interval between two radio checks on the receiver. So we pay more for transmission but save a lot of energy on the receiver.

OK, so when we enabled this low-power listening in our dual-radio implementation, we observed the following. This table shows the overhead of dual radio versus single radio: if low-power listening was not enabled, it was 50 percent and more; with it enabled, it was around 33%, and it decreased with an increasing number of packets to almost zero. But there was still some overhead: for one-packet-per-second duty cycles, around 20 to 25% for typical low-power listening settings. So we then tried to find where the inefficiency of the low-power listening protocol is, where this overhead comes from, and what we found is that the receiver node keeps checking both radios at the same time. What we observed in the field, though, was that most of the time only one radio was actually being used, the one with the better link, and that changed only very rarely; if there was external interference or something else changed, the preferred link sometimes flipped, but when we looked at the distribution of when radio one versus radio two was used, it was pretty much binary, either 2.4 GHz or 900 MHz. In effect, one of the radios was almost never used, and yet the node kept checking whether there was a transmission on that radio.

So the improvement we made was to adaptively increase the radio check interval on the unused radio, along with some other optimizations where we let the transmitter estimate when the receiver will wake up and delay the transmission until that time. With this we were able to bring down the energy consumption, and when we compared single-radio operation against the dual-radio protocol we developed, you can see that the dual radio was actually able to improve on the energy consumption of a single radio by some amount, both in mean energy and in maximum energy; maximum energy typically defines your lifetime.
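A small sketch of the tradeoff behind those overhead numbers, using a simple model in which the receiver pays a fixed cost per channel check and the transmitter sends, on average, for half a check interval per packet. All constants are illustrative assumptions; the adaptive fix described above amounts to stretching the check interval on the radio that never receives anything, which shrinks its checking term:

```python
# Hypothetical low-power-listening cost model: receiver checking cost falls with
# a longer check interval, but the transmitter must send longer per packet.

def lpl_energy_mj_per_s(check_interval_s, pkts_per_s, check_cost_mj=0.02,
                        tx_ma=20.0, volts=3.0):
    rx = check_cost_mj / check_interval_s   # mJ/s spent on periodic channel checks
    # Sender transmits ~half an interval on average before the receiver's check.
    per_packet_tx_mj = tx_ma / 1000.0 * volts * (check_interval_s / 2.0) * 1000.0
    return rx + pkts_per_s * per_packet_tx_mj

for interval in (0.25, 0.5, 1.0, 4.0):          # candidate check intervals (s)
    cost = lpl_energy_mj_per_s(interval, pkts_per_s=1 / 60.0)
    print(f"{interval:4.2f} s check interval -> {cost:.3f} mJ/s")
```

With these numbers, a rarely-used radio (pkts_per_s near zero) is cheapest with a very long check interval, while a busy radio prefers a short one, which is exactly the asymmetry the adaptive scheme exploits.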
OK, so I've talked for 50 minutes; time to conclude. Initially I gave you an overview of a couple of recent projects that motivate some of our platform work at CSIRO. I presented a couple of conclusions on modern MCUs, which are almost as energy efficient as the old low-power MCUs but let you do signal processing at much higher energy efficiency. And in communication, I have shown that it is possible to improve reliability by implementing radio link diversity in a sensor network platform, and that improved reliability does not necessarily require higher energy usage. The takeaway is that the overhead of multiple radios is the cost of the hardware, not the energy cost of running the additional device. That concludes the talk. I have a couple of publications that I can share with you, and there were a lot of contributors, other labs that contributed some of the software for the Cortex platform support and for our low-power listening protocols. That leaves about 15 minutes for questions.

How do you decide which node is the receiver and which node is the transmitter? Is that decided statically, or dynamically?

So typically you have a large network of nodes deployed, and the base station is predefined: it's a stationary node sitting somewhere in the field. All the rest of the nodes, imagine it's a climate monitoring network, periodically measure some data and transmit it toward the base station. So each of the nodes is a transmitter at some point in time. If it can talk to the base station directly, the node is the transmitter and the base station is the receiver; if it requires another node to forward the packet, then that other node becomes a receiver first and, when forwarding, becomes a transmitter. So it's not statically predefined, because the data collection network, the graph through which you send data to the root, can change over time, but each node at some point becomes a transmitter, and most of the time it also needs to forward some data for other nodes, so it also becomes a receiver.

I'm asking about the bats. I was just curious if you'd looked at having the nodes coordinate their receive periods as well; presumably, if you have three nodes sitting around the base station, you could give each of them a third of the time.

You mean TDMA, time-division multiple access? So people have tried this approach; it's hard to coordinate in sensor networks. Typically the energy spent on coordination, at the low rates at which you are sending data, outweighs the benefit you gain. Typically channel capacity is not the problem, we are sending way less data than the channel capacity, so the way collisions are handled...

I'm not talking about collisions. One of the problems you mentioned is having the nodes active and in receive mode when one of the other nodes is trying to send to the base station. I was wondering whether you considered having the nodes coordinate to figure out when other nodes will be trying to receive.

Right, so that was this part, and I went through it a bit too quickly; it's actually been looked at by other people as well. We can go back here. What happens is that if node one is the transmitter and it has no idea when the receiver will wake up, it will just start transmitting when the packet becomes available, and it will transmit until it's acknowledged by the receiver.

Yes, but now let's say both node two and node three are closer to the base station than node one, or at least have a higher chance of contacting the base station. Would node two try to coordinate with node three?

Not in the basic implementation. But there was an extension, actually published last year in rbsn, where the first node that wakes up will acknowledge the packet, as long as it makes at least some progress toward the base station. Each node has an estimate of how much effort it takes to get packets from its location to the base station, how many hops on average it takes for a packet to get there. So of two nodes, one can actually be further away from the base station than the other; normally that's why you always select the best one to route through, and you have to wait until it wakes up.
What this guy did was: whoever wakes up first acknowledges the packet, as long as it makes some progress toward the base station, and they showed some improvement over the baseline protocols. But what I was trying to get at before jumping to this: there is another protocol, called WiseMAC, that's a bit smarter. The first time, when you start transmitting, you have no idea when the receiving node wakes up, so you start transmitting when the packet becomes available. But then you can record the time when the node acknowledged, and from the known payload length you can estimate when it woke up, and you can remember that time. The check period is typically static, say 512 milliseconds, so you can estimate when the node will wake up next, and the next time you have a packet, you wake up just before the receiver does and transmit then. That's WiseMAC, and it's probably one of the most efficient protocols in sensor networks. The problem is they never made the implementation public, so no one could use it or test against it; it's proprietary. But yes, that's what we actually implemented in our dual-radio protocol, and that's what helped us.

Thanks.
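A minimal sketch of that WiseMAC-style wake-up prediction, assuming a static check period; the 512 ms figure is from the talk, while the guard time and the function shape are my assumptions:

```python
# WiseMAC-style prediction: after one successful exchange the sender knows when
# the receiver last woke up, and with a fixed check period it can predict future
# wake-ups and start transmitting just before one, instead of sending a long preamble.

CHECK_PERIOD = 0.512   # receiver's radio-check period in seconds (from the talk)

def next_wakeup(last_known_wakeup, now, period=CHECK_PERIOD, guard=0.005):
    """Earliest predicted wake-up after `now`, minus a small guard time that
    absorbs clock drift (the guard value is an illustrative assumption)."""
    elapsed = now - last_known_wakeup
    periods_ahead = int(elapsed // period) + 1
    return last_known_wakeup + periods_ahead * period - guard

# e.g. receiver known awake at t=100.000 s; a packet becomes ready at t=103.3 s:
print(next_wakeup(100.0, 103.3))   # -> ~103.579, so transmit shortly before 103.584
```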
I wonder about the flying foxes: do they, like some birds, use magnetism for navigation, and was that a consideration in designing the sensors?

Good question. I don't think they do; I think they mostly use visual cues, though I don't think the ecologists understand them well enough to say for sure. But I don't think the sensors create high enough magnetic fields to disturb them. I know they were deploying GPS tags with satellite transmitters on pigeons, and pigeons are birds that are known to use magnetic fields for navigation, and I think they haven't found that the pigeons had any bigger problem with navigation.

The thing with pigeons is that allegedly it only makes a difference when it's cloudy, because if it's not cloudy they'll use the sun; if it's cloudy, you can try putting a magnet on their back or whatever, and it bothers them only when it's cloudy. And GPS doesn't work well when it's cloudy.

It doesn't?

I think it does, but not always. That's another cool thing to look at; I never assumed there would be a problem with clouds and GPS.

Can I go back to the processor stuff? You can't give it up, right? Maybe this is a false choice, that's my point: why only the 32-bit Cortex?

Yeah, I mean, one or both. I think this is always an interesting question, because throughout my experience there's always been this problem where people have a difficult time putting a monetary price on reliability or maintenance. For example, here you have this power consumption issue: you're talking about a factor of 2, and that's actually a lot; that's twice as often you're going to change batteries or do maintenance, so there are real maintenance costs associated with battery replacement. So why not say: we're going to use COTS components, but we're going to increase the price of each node by 50%? Put in something like an M3, and then also an ISA-compatible part; they have another one in the same line that uses a subset of the ARM ISA and runs at even lower power. Essentially what these numbers show is that the higher-power processors are better at processing, and the lower-power processors are better at not processing: they're better at doing stupid little things like servicing hardware, grabbing a sample, and going back to sleep. So why not use them both for what they're good at?

You would always have to support both; it's probably much more difficult to implement support for a dual CPU than for dual radios.

Yeah, thank you. But I feel like it's interesting, because sensor networks are trying to build up into these higher-power processors, and mobile devices are also going up, but they'd also like to go down, because the sleep states for phone processors aren't as low as you want. So phones might want a lower-power CPU next to the higher-power one, and then you have a common set of challenges, which is how you manage this potentially slightly heterogeneous set of devices. Then you enter all these discussions of what you are actually going to be running on the ARM CPU. Would you do much better running Linux, so you wouldn't need to compile stuff for some weird environment like the one TinyOS uses? Then there's OpenCV and the like, and you could use your node as a low-power wakeup device.

The problem with Linux is that it uses a lot of RAM, and you need to keep refreshing that, so you get either a latency problem buffering things from flash, or high energy consumption. That was the argument for using TinyOS. But yeah, it's hard to defeat.

One issue is that when you look at the platform as such, it's rarely limited by the MCU; you have the voltage regulator and other hardware around it, so when you look at the platform's sleep power, the difference there will be much smaller than when you're just looking at the MCU.

No, sure, but you usually don't care that much about the raw sleep power; you care about how much power it takes to get up, grab a sample, and go back to sleep.

There are refreshments upstairs, I think in 310. Does anyone else have any other questions?

I was just wondering if you consider the availability of energy in each node, say if a node is about to die, for routing, so that you can make some decisions dynamically, or something like that?

That's what Jeff looked at in his PhD.

Yeah, we did some work looking at trying to route around energy holes and things like that, and it turns out that there are some wins there that you can get. I still think it's a clever idea; I gave up on trying to convince other people about it. There's a lot of other energy-aware work in general. It can be difficult; I think there's a lot of extra forecasting and prediction
you have to be able to do before you can really decide. Essentially, the argument I was trying to make for a while is that it's better to spend more power, but to spend it at the devices that have more power. The problem is, my intuition is that that's not true very often. In sensor networks in general, when you're power constrained at this level, spending as little power as possible is almost always the right thing to do. Sometimes you might start to worry about where the power is in the network, because power is not transferable, but in general the shorter, better-quality routing path wins almost every time; you have to see some pretty significant energy disparity for anything else to win.

It also probably gets more complicated once you start harvesting, and different nodes harvest at different rates.

Harvesting makes it in some ways a bit more interesting, because you can get into a steady state. If you imagine a network where every node is harvesting at a different rate, and you overlay some sort of reliability topology on top of it, then, depending on your traffic pattern, there's probably a closed-form solution for an optimal routing tree that respects both the charging rates and the data traffic rates and essentially optimizes some metric, whether it's the overall weighted uptime of the network, the delivery ratios, or something else. In our work we were always trying to make the argument that you want to do solar charging, because if you don't, the network is going to run out of power eventually.

Can we thank our speaker again?

I think it's just my personal experience: when I start the GPS after driving, it takes much longer... I mean, I'm still parking.