Good afternoon. We're very pleased to have our first colloquium speaker this semester. Our speaker is Dr. Archan Misra. He's now at the School of Information Systems at Singapore Management University, which I understand is a relatively new university in Singapore. Archan worked at Bellcore, and then for many years at IBM Research. He's currently an associate professor in Singapore. His research interests are in the area of pervasive computing and mobile systems, with a specific focus on energy-efficient streaming and mobile data mining. Over the past 12 years he has worked extensively in the areas of wireless networks, pervasive computing, and mobile data management. His papers have received best paper awards at several conferences, including EUC 2008, ACM WoWMoM 2002, and MILCOM 2001. Speaking of WoWMoM, I actually had the pleasure of working with Archan many years ago, I think it was 2006, when I was the TPC chair of the conference, held at Niagara Falls, New York. He's also an editor of the IEEE Transactions on Mobile Computing and of Pervasive and Mobile Computing. He chaired the IEEE Computer Society's Technical Committee on Computer Communications from 2005 to 2007. He received his PhD from the ECE department at the University of Maryland, College Park. He knows our president, Satish Tripathi, who was there. He graduated from IIT Kharagpur, India, in electronics and communication engineering. Welcome, Archan. Thank you. That is the longest introduction I've ever had. Very comprehensive. Thank you; I didn't even realize I had told you all those things. Yes, it's a pleasure to be here.
Speaking of your president: I remember he was, obviously, a full professor, and I was the youngest student at his first Diwali party at his home. I looked at that and I said, boy, it's good to be a professor; that's what I should become. It's really an amazing coincidence to be back in Buffalo and see that, after so many years, he's now the president. So, once again, thank you all very much for coming. I see it's a full house. I know that for some of you grad students attendance is mandatory, so I can sense your lack of enthusiasm. But I promise I'll let you out of here by when? By 4:30. I have about 50 slides; if I went through all of them, you'd be here until 9:30. I'm not going to do that. I will touch on some of the highlights. I've tried to skip most of the math in the talk, not because you wouldn't understand it; it's just less boring this way. If you want more details, you're always welcome to contact me afterwards and talk to me about it. One thing that wasn't mentioned in the introduction: recently I've had introductions where people say, oh, I was trained as a computer scientist and now I'm working in transportation engineering and so on. I am the reverse example: the untrained computer scientist. I had no training in computer science; I did communication engineering. Of course, I took all the courses. But what I do now is basically all computer science, and coming to a department and school that's in computer science, it's a pleasure to be here. The talk is divided into three parts, or actually four parts. About 15 minutes of it is going to be a non-technical description of a project called LiveLabs. Along with my colleague, Assistant Professor Rajesh Balan, the two of us direct this project.
It's a very large mobile sensing and lifestyle experimentation project, and I'll give you a high-level overview of it. There's a PhoneLab project here that has started; some of you will see the synergies, and I'll also point out what I perceive to be the differences between the projects. So, I'll spend about 15 to 20 minutes on that, motivating why we chose to do this project and who our end customers are. Part of the message I want to give you is that, over the course of the last 12 years, I've found that a lot of faculty and students tend to get constrained in their thinking by assuming that computer science can only apply to computer science companies, that it can only help Microsoft or Google and so on. What I hope to illustrate is that there's a world of possibility out there: the value of computer science, and of some of the technologies we're developing, translates to companies you don't think of every day. It's the Starbucks, the Visas, the Procter & Gambles. They all derive value from some of the underlying technologies we create, and I just wanted to make you aware of that. Then I'll talk about three problems in greater detail. So, what is LiveLabs? The way we like to think of it, it's providing you a participant base. We are handing out, not phones, but software on people's phones. There are going to be about 30,000 opt-in consumers in three public spaces in Singapore. One is our SMU campus; that's about 7,000 to 8,000 students. Two, at least two big malls. I don't call them shopping malls because in places in Asia, especially in Singapore, they're not just there for shopping. You catch the train there, you get a haircut there, you go on a date there, you get divorced there.
You pretty much do everything inside these malls because it's hot and humid outside. So they're what we call lifestyle malls: you go there to eat, to catch the movies. They're about a million square feet, and they get about 60,000 to 70,000 visitors a day. So they're incredibly dense, incredibly packed urban spaces. And Sentosa, for those of you who don't know it, think of it as the Disneyland of Singapore. It's an island with a lot of outdoor attractions and some indoor attractions, a combination of the two, and it gets a lot of tourists. The key distinction between the malls and Sentosa, and I'll come back to this a little but not too much, is that about 80% of Sentosa's visitors are foreign tourists. That's important because these people, even though they have phones, don't have data plans, because it's very expensive to roam. So we have to figure out ways to interact with them through other technologies, such as Wi-Fi, and that throws up some interesting challenges. At its heart, LiveLabs stands on two legs; it's a combination of two things. The networking people in this room can think of it as an advanced wireless broadband network. We are going to have our own LTE network on our campus with our own dedicated band, so all our participants will get really high access bandwidth. That provides localized, high bandwidth for applications, because we get incredibly high densities of users. Whenever there's free pizza during lunch, all our students and all our faculty show up, so we get densities of about one person per square foot. If you think that's not dense: this room is what, about 250 or 300 square feet? Imagine 300 people in here. That creates localized hotspots of very high demand; people are playing games, downloading YouTube videos, and so on. So, how do we handle that?
The other part of the challenge, and we have a major telco as a partner, is how do we collect real-time context? Your phone, as you all know, I don't need to preach to you, is a very sensor-rich device. It can sense a lot of things about you. There have been applications, which I won't get into now, that sense your emotional well-being based on the manner in which you're talking with people. Are you being aggressive? Are you being passive? It's getting really fascinating out there. It's not just the ability to sense whether you're walking or sitting; it's a holistic picture of your everyday lifestyle. So we want to do things such as fine-grained indoor location and monitoring of events, such as which application you're using, and we want to do this continuously. So energy is our big challenge. That's the technology part. But even if you abstract out the technology, the bigger value here is the experimentation service, which I'll explain later. It lets lifestyle companies, companies that don't care about mobile technology, don't understand Android, don't understand programming, and just want to test how people behave under different contexts, do exactly that: we automate participant selection, we automate the delivery of content. So we allow you to run controlled behavioral experiments. This is what psychologists want to do; this is what marketers want to do. They want to understand how people behave when you give them different incentives in different contexts. They want to automate the deployment of apps and so on. So, I talked about the testbed being in three places. There is the testbed around campus; this is where the LTE network, the advanced network, goes.
There are some femtocells being put in place, and a directional antenna system, to provide high bandwidth in this quasi-indoor environment. Here the sector we're focusing on is telecom and digital media. This is where you want to test out the next video distribution service or the next multimedia chat service. What bandwidth do you need for this? How do you download mobile games? There are a lot of 3D and high-definition mobile games coming out; what bandwidth do they need? How do we offer low latency? How do we do data offloading between femtocells and local networks and broadband 3G networks? So there are a bunch of partners, like Microsoft Research and Qualcomm, who play in that space. Then there is the tourism and hospitality sector; that's the Sentosa piece. The focus there is on using people's context to improve their experience in these theme-park environments. So we do a lot of crowd coordination: we figure out where queues are building up, we give you better itineraries and plan them dynamically. Okay, it's going to rain in about five minutes; can I redirect you to an F&B outlet? You're there with your child, who is maybe getting a little tired, so I suggest a different attraction, a different itinerary. The last one is perhaps, in some sense, the most exciting; this is where the most monetary value is attached. This is LiveLabs in the malls. The focus here is on retail and consumption lifestyle: the ability to get real-time insight into what shoppers and visitors are doing in malls. Which stores are you visiting? What are you doing inside the store? Who are you with? How long did you sit down for coffee? Did you take the stairs? Which stores did you visit? And this can be used for a whole variety of applications. So, our partners include CapitaMalls Asia, which is actually a behemoth.
It's the largest mall operator in Asia; they own a ton of malls. DBS is a major bank. We have Visa Worldwide as one of the partners. So, having set that up, let me give you one example. This is not a representative example, but at least it illustrates, at a high level, what you can do with LiveLabs if you're a lifestyle company that doesn't understand technology. Your lifestyle company could be a movie theater. We have a cloud service, and the movie theater tells that service: I want to try out this experiment. Say there's a cafe, and the movie theater is on the opposite side. If a group of four or more people sit down in the cafe and have a meal together for ten minutes or more, then when they leave the cafe, I want to send them an SMS with a discount for the movie. Because you think, okay, they've had a cup of coffee, they might be more interested in watching a movie. That's what this third-party company wants to do. It goes to a website and says: this is the experiment I want to run, these are the control group and the test group, these are the triggers, this is the condition that must be met, this is the intervention I want you to make. That's all they tell us. Now, what I'm illustrating here, as part of the animation, is that one person comes in and sits down. Five minutes later, three of his or her buddies show up, and they all sit down. All this LiveLabs software running on the phones is continuously monitoring these people. It's figuring out that you walked into the cafe, you sat down, you sat down together, and it's reporting all of that to a back-end service. That service is figuring out: oh, there are four or more people sitting down in the cafe, so the condition is met. Then, 10 minutes later, as part of the over-animated slide, they walk out.
As soon as they walk out, the back end realizes: these people have left the cafe after 10 minutes. So I've measured their real-time context, and I've determined that they satisfy the condition. Once that happens, the cloud service delivers the intervention. It could use SMS, it could use mobile advertising, it could use HTML5; there are various ways to touch base with the consumer. It pushes a notification for 20% off movies. So suddenly you get this pop-up on your phone that says: there's 20% off movies in the theater if you come in the next 30 minutes. This illustrates the whole life cycle of LiveLabs: the ability to do real-time, in-situ experimentation using the context of one or more users. That's fundamentally what makes it so attractive to people, because now you can do real experiments with mobile services and mobile applications. Okay? In terms of the ecosystem, I said it's a large-scale research testbed, and what's important to point out is that what makes it attractive is its scale. Now, there are 5 million people in Singapore. We're not putting this software on 5 million phones; we just don't have the money and the capacity to do that. We're going up to 30,000 users. That's large enough that the people who run consumer experiments will say they're getting statistically significant and valid readings; they can do control groups, market segmentation, demographic segmentation, and so on. Yet it's small enough that we can hopefully manage it, because when you get into the practicalities of building these systems, they're a nightmare. Why? Because people have different devices, different OS versions, different firmware releases. When you start building the real system, it becomes a scalability nightmare to manage, so we can't grow too big. So, it's a flagship project.
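To make the trigger-and-intervention life cycle described above concrete, here is a minimal Python sketch. The class, field names, and values are illustrative assumptions for exposition, not the actual LiveLabs API.

```python
from dataclasses import dataclass

# Hypothetical experiment specification a lifestyle company might submit.
@dataclass
class Experiment:
    venue: str
    min_group_size: int
    min_dwell_minutes: int
    intervention: str  # e.g. an SMS template

def should_trigger(exp, group_size, dwell_minutes, has_left_venue):
    """Fire the intervention only once the group has left the venue
    after satisfying the group-size and dwell-time conditions."""
    return (has_left_venue
            and group_size >= exp.min_group_size
            and dwell_minutes >= exp.min_dwell_minutes)

movie_promo = Experiment(venue="cafe", min_group_size=4,
                         min_dwell_minutes=10,
                         intervention="20% off a movie in the next 30 minutes")

# Four friends sat in the cafe for 12 minutes and have just walked out:
print(should_trigger(movie_promo, group_size=4, dwell_minutes=12,
                     has_left_venue=True))   # -> True
```

In the real system the inputs to such a check would come from the continuous context stream (location, activity, group detection), and the intervention would be delivered over SMS, push notification, or similar.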
The project is funded at about a $10 million level, and I illustrated some of our partners. The technology partners are the usual suspects like Microsoft, plus StarHub, which is the number two telco there, kind of like a Verizon or AT&T of the U.S. Then there are the venue owners: CapitaMalls Asia, Sentosa, and the Changi Airport. Changi Airport, as many of you know, is one of the busiest hub airports. You don't tend to think of an airport this way, but Changi Airport is also, by dollar value, the largest mall in Singapore, because people do a tremendous amount of duty-free shopping. A lot of its revenue is actually derived not from planes landing and taking off, but from people shopping at the airport. And then you have the companies providing services: DBS, Visa, and InMobi, which is a mobile advertising company. So, to put this in perspective, and this lays the groundwork for the technical portion of my talk, I wanted to illustrate that LiveLabs itself has four real technology pillars. The first pillar is what goes on the phone. We have software on the phone that does mobile sensing and localized analytics. It can use your accelerometer to detect activities such as sitting and standing. It can tap into your Android notifications to figure out what you're browsing and which URLs you're visiting. It can use Wi-Fi scanning and inertial navigation to figure out your indoor location. So it gets all this information; hold on to that for a second. In the example I gave, there's a lifestyle company. It could be Starbucks or the movie theater company. It pushes an instruction to the intervention engine: deliver this movie discount when four people leave the coffee shop.
This piece in turn tells our real-time analytics engine: I want to be notified when four people leave the coffee shop after 10 minutes. Okay? In the meantime, the software that's collecting context sends the data to data storage, a server that takes the data and stores it. But storing the raw data is only the easy part; you have to decide the schemas and so on, but that's not so hard. What comes after that is the harder part. Now, this is not what the query will actually look like; I wanted to simplify it, because most people find it easier to understand SQL than, say, an R script. So I'm telling this engine: select the users whose location is the coffee shop and whose activity has been sitting for more than 10 minutes. And you have to do this at scale. Scalability of the analytics engine is a real challenge, because it's 30,000 users in real time, with different experiments going on for different people. Some are sitting in the coffee shop; some are walking by a store; somebody else is playing a certain video game with selected friends. All of this we want to detect in real time; this is where the analytics comes in. Once that is done, the analytics engine notifies the intervention engine: I've met this trigger, this matched my query subscription. And then you deliver. This delivery mechanism goes back through the same software; there's a server-side control channel that delivers it. It could be a pop-up notification, it could be an SMS, it could be an email sent to your email account. All of these are possible. Okay? I also wanted to spend a little time, and this is more administrative, illustrating where the LiveLabs project sits. As I said, it's about $10 million.
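The SQL-style standing query described above can be sketched as a simple filter over the context stream. The record schema and field names below are assumptions for illustration, not the actual LiveLabs data model.

```python
# A snapshot of per-user context records, as the analytics engine might see them.
records = [
    {"user": "u1", "location": "coffee_shop", "activity": "sitting", "minutes": 12},
    {"user": "u2", "location": "coffee_shop", "activity": "sitting", "minutes": 11},
    {"user": "u3", "location": "atrium",      "activity": "walking", "minutes": 3},
]

# Roughly: SELECT user WHERE location = 'coffee_shop'
#          AND activity = 'sitting' AND minutes >= 10
matches = [r["user"] for r in records
           if r["location"] == "coffee_shop"
           and r["activity"] == "sitting"
           and r["minutes"] >= 10]
print(matches)  # -> ['u1', 'u2']
```

The scalability challenge is doing this continuously, for 30,000 users and many concurrent query subscriptions, rather than over a static snapshot like this one.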
It's actually affiliated with an independent project called LARC, the Living Analytics Research Center, which the university is jointly running with Carnegie Mellon, between the CS and business schools. The focus there is on large-scale societal analytics. The thing I wanted to mention is that the data we collect from LiveLabs comes from a very deeply instrumented, medium-scale testbed, and it feeds into other sources of data. We have projects that are mining Twitter data and Facebook data, for example, and projects mining the transit records of people as they use the MRT, the subway, and pay tolls at different places. That's societal-scale big data. So I wanted to distinguish between the big-data piece and what we call the deep-data piece. Deep data is very fine-grained data, but not from millions of users. Big data is online interaction data or visit data from millions of users, but not necessarily in real time, because collecting that data in real time is a challenge. On top of that come all the analytics services that LARC is working on. These could be things like business analytics, or figuring out where you can better optimize pricing strategies; maybe for MRT rides you suggest a 15% discount at certain times of the day, and so on. There's a whole bunch of other faculty working in that space. So LARC helps LiveLabs by providing analytics support in some of the key areas; some of the group analytics algorithms I'll talk about technically sit in the LARC project, with which I'm affiliated as well. And then people build a lot of apps and deliver them. Now, I said there are third-party apps. Some apps are from LiveLabs and some are from LARC, but that's still the university ecosystem; the vast majority of apps come from external companies.
They come from game developers, they come from Procter & Gamble, they come from Starbucks. And some of them need not even be apps; they could just be experiments they've asked us to test out. Okay? So, let's get to the technology piece of the talk now. What are some of the key R&D challenges? I will only talk about a few of them today. One: we have to do this deep, continuous context collection. Forget about year one, two, and three; that's a slide we use for internal scheduling when we report to the government. Fundamentally, what do we do? We have to continuously collect context from people and transmit it. So energy is a bottleneck, and privacy is a bottleneck. I will not talk about privacy today, but it's a challenge. I broke out one of these contexts: indoor localization. We have to get indoor location because most of our environments are indoors, and we need to track people because location is still the most important context; that's why I separated it out. There's been a lot of work in indoor localization, and we are very respectful of that work, but I'll make this broad claim on camera: very little of it actually works in real environments. I'll illustrate that we haven't cracked this problem, but we've at least encountered some of the challenges that explain why some of these techniques don't work. Then there's the challenge of deriving the analytics; this is the group analytics piece I was talking about, and I'll allude to that. Then there's the ability to run these automated social experiments: we have to find the right interventions, we have to allow third-party verification of these applications, and so on. This I will not talk about today. And I will very briefly talk about the ability to handle transient network traffic loads. On both of these, frankly, we haven't done much; it's very much early work and I don't have anything significant to report. Okay, so let me pause here for a second.
Does anybody have any questions about the fundamentals of LiveLabs? Anything you want me to clarify? Please feel free. Yes. [A question from the audience about using R as a query language.] I might get to this in a bit. R is one of the possibilities we're exploring, because some of the queries are specified not as "give this discount to person X" but more as, say, "if the person is talking to the three most popular people he communicates with, then trigger something." So there are statistical, correlation-based predicates, and for those we're exploring the use of R. Are there other languages, other representations? There are many others, of course; we just haven't got to that point yet. Right, thank you; that part comes with the experimentation framework, which we have done very little work on. So, as I said, LiveLabs stands on four pillars. Today I will basically talk about these two, because we have some work to report on them. This one I will only allude to, because we've just started working on it. And this one, frankly, we haven't done much work on at all, so I will leave it out today. Let's start with the deep-context collection problem. The problem here is the energy overhead: we want to collect all this context data. What I've illustrated here, as the no-LiveLabs baseline, is the power consumed on a Galaxy S3: about 250 milliwatts, just running regularly. If we run the lightweight collection of things such as which application you're using and which URLs you're visiting, we incur almost no additional power consumption.
If you run activity recognition, which uses the accelerometer and so on to figure out whether you're sitting or walking, you begin to see about a 30 to 40 percent rise in energy consumption. And as you accumulate sensors, adding GPS and the magnetometer and so on, you see the energy shoot up. Now, this is a somewhat surprising result, because traditionally we're all told GPS is the most expensive sensor: it takes about 350 milliwatts, more than anything else. But it turns out that even if the other sensors themselves don't consume much, just keeping the CPU on consumes energy, so the energy overheads get pretty high even for the other sensors. Even for the compass, the energy consumption is not negligible. You'd only conclude that the compass is cheap if you tested the sensor in isolation; when you run queries on it continuously, all the time, it gets pretty high. Fundamentally, the challenge is this: we went from about 200 to about 450 milliwatts, and at a very crude level that halves the operational lifetime of the device. If the person was charging at 8 a.m. and would next charge at 11 at night, now you're saying the phone will run out of battery at, say, 3 p.m. And that's just fundamentally unacceptable. Nobody is going to sign up for our service if we tell them they have to recharge in the middle of the day; they'll say, okay, thank you. Right? So we have to figure this out. There's about a 20% to 30% increase in all cases. The point is that a lot of the activity recognition literature, which many of you have read and I've worked on, is fine for intermittent sensing. If you suddenly want to figure out where a person is and what they're doing, it's okay.
But continuously sensing in the background is still an open research problem. It's running all the time; it has to be subliminal, with essentially no energy impact at all. To keep this simple, especially for the first-year grad students: I want to reduce the energy of activity recognition, and I'll use as the exemplar the use of the accelerometer to detect whether you're sitting or standing, your posture, these things. Typically, the pipeline on the phone has a few steps. First, you do the sensing. From the sensed data you extract features: you extract Fourier coefficients, you compute the entropy, et cetera. Once you have the features, you do classification: you run them through some kind of classifier. It could be Bayesian, it could be a neural network, it doesn't matter; you give it labeled data, you train it, and it outputs something. I want to separate the classification of low-level activity from something I call context, which is the higher-level activity. Both queuing and exercising might involve a few steps of walking, but they're fundamentally, logically different activities, so I wanted to separate them out at this level, the context. Then comes the high-level query. If I want to figure out whether Archan is forlorn, standing by himself in a queue, or with a bunch of friends and really happy, that involves not just the context from one phone, but the context from multiple phones. That's the point of departure I want to illustrate: in many cases of practical interest, you're not just interested in one phone's context; you're actually interested in the collective context of multiple people.
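The sensing-then-feature-extraction step described above can be sketched in a few lines. The accelerometer window below is a made-up trace, and the feature set (zero crossings, a naive DFT magnitude spectrum, spectral entropy) is a simplified stand-in for what a real pipeline would compute.

```python
import math

# A tiny, invented accelerometer window (one axis, arbitrary units).
window = [0.1, -0.2, 0.3, -0.1, 0.2, -0.3, 0.1, -0.2]

# Time-domain features: cheap to compute.
mean = sum(window) / len(window)
zero_crossings = sum(1 for a, b in zip(window, window[1:]) if a * b < 0)

# Frequency-domain features: a naive O(N^2) DFT magnitude spectrum here;
# a real implementation would use an O(N log N) FFT.
n = len(window)
mags = []
for k in range(n // 2):
    re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(window))
    im = -sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(window))
    mags.append(math.hypot(re, im))

# Spectral entropy over the normalized magnitude spectrum.
total = sum(mags) or 1.0
probs = [m / total for m in mags]
entropy = -sum(p * math.log(p) for p in probs if p > 0)

print(zero_crossings)  # -> 7 (this trace changes sign at every sample)
```

A feature vector like `[mean, zero_crossings, *mags, entropy]` would then be fed to whatever classifier you trained; the point later in the talk is that the frequency-domain half of this vector is the expensive part.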
So, breaking that down, what I'm going to talk about in 15 minutes or less is three different pieces of work, again at a very high level. The first piece, which I call A3R, focuses only on adapting the sensing and feature-extraction part; it's trying to make some optimizations there. The second piece I will skip over, because it's subsumed in the next one; it focuses on one phone and tries to optimize the path from the high-level query down to context detection. And the last one, an ongoing piece of work on cloud query optimization, tries to do the optimization across multiple phones. This is frankly the most interesting and exciting part, because this is where you'll be able to realize the greatest savings. Okay, so let's get into A3R. I actually never remember the acronym; I give it different names each time I talk. It's adaptive accelerometer-based activity recognition. The main idea of this work is very simple, and I'm just using it as an illustration of many other approaches in this field we've been working on: can I adjust the sensing and the feature extraction, the only two pieces I'm looking at, based on the current activity of the individual? What does that mean? There are two parameters for sensing and feature extraction. One is how frequently you sample, the sampling frequency of the sensor. The other is which features you use to classify. These are the two things I'm going to play around with, and the goal is to reduce the energy overhead without sacrificing accuracy, because obviously there's always a trade-off you can make there. Again, the work is very empirically motivated: we do a lot of experiments and find out how commercial devices behave.
What this graph illustrates: the x-axis is sampling frequency, what happens if we sample at 5 Hz, 16 Hz, 50 Hz, or 100 Hz. The y-axis is the energy consumed, with two lines. In one case I use only time-domain features, like zero crossings, the amplitude, et cetera. In the other case I also use frequency-domain features: the Fourier coefficients, the entropy, et cetera. Two things you see. Obviously, as you increase the sampling frequency, the energy consumption goes up. Not surprising, right? You're sampling more and using the sensor more. The other is that time plus frequency domain costs more than time domain alone. The interesting points are these two: using time-domain features at 100 Hz is actually less expensive than using time plus frequency domain at 16 Hz. And in the frequency domain, the cost goes up nonlinearly with sampling rate, because the Fourier transform is N log N, if I remember correctly, so there's a nonlinear slope as you have more sample points to compute on. The point is that it's not immediately clear that sampling at a higher frequency must take more energy; it all depends on which features you use at which rate. After that, what I wanted to show you here I will actually skip in the interest of time. So I asked: what about different combinations of features and sampling frequency? What this graph illustrates is that for different activities, going downstairs, taking an escalator down, walking, sitting, or standing, I sample at different frequencies and see what accuracy I get. Clearly, the lower I sample, the more the accuracy typically drops off. But the thing you see here is that for some activities the drop-off is much sharper.
Going up the escalator, for instance — if you sample at 5 Hz, you're going to lose a lot of accuracy. For activities such as sitting and standing, there really isn't much difference at any sampling frequency; these are pretty sedentary activities, so you can sample at a lower frequency. So once I have this characterization — again, a very complex figure, don't pay attention to it — let me tell you what my algorithm does. It starts at this level: highest frequency, using time plus frequency domain features. This is the most energy-intensive setting. Suppose it figures out that you match the activity of sitting. At that point it says: I'm going to drop down to 5 Hz and use time-domain features. As long as you continue sitting, I'll keep using the lower frequency and the smaller feature set. At some point you will stop sitting — I don't know when — and go into some other activity. The moment my confidence in "sitting" drops below a threshold, I go back to my highest sampling level and try to classify what you're doing next. When I find out, I drop into that activity's frequency and features. Very simple. I'm taking advantage of the fact that for different activities you don't need all the features and you don't need sampling at the highest frequency. Why would this help? We did a study — again, forget the details, but the point is that collecting the data is really an effort in itself. We had six people; this was done in collaboration with EPFL, in Switzerland. We had them carry these phones around for two to six weeks, collecting data. And what I'm illustrating here — first, this point: this is what they did on a daily basis. Not surprisingly, you find that most users spend a lot of the day just sitting. They spend a fair bit of time sitting.
They spend a modest amount of time walking fast, and then we had this thing called slow walk — you kind of amble along. Some people stand, and people rarely take the stairs. The whole point is that some large fraction of the day — 60% of it — you're sitting. Not surprising, right? That's why we're all getting fatter: we're all sitting around; we're computer scientists, after all. So let's take advantage of that fact and lower the sampling frequency at those times. What this graph shows — again, I'm not getting into the details; you will have the slides, and you're more than welcome to look at the papers — is the quality comparison. If you always sampled at 5 Hz, you would save more energy, but you would miss certain critical markers, like when you went up the stairs or down the escalator. If you sampled at 100 Hz all the time, you wouldn't miss anything, but you would waste a lot more energy. By doing this adaptive thing, you're getting the best of both worlds. It's important for me to point out one of the fundamental assumptions: this is all about detecting activities that persist — whether you stand up, whether you sit down — because there's a hysteresis effect here; I'm looking for a certain duration. Short, transient motions are not what we're after, so this is not to be used for gesture recognition. It relies on the fact that if you sit down, unless you're crazy, you don't usually get up in the next one second. If you start walking, you tend to walk for about 30 seconds to wherever you're going. Even if you're pacing up and down in your room, you'll be doing that for more than 30 seconds straight. So we built an application, gave it out to people, and — to skip ahead — we illustrate the point for two users. The green line is with no activity recognition. The red line is activity recognition at a fixed 50 Hz. The blue line is the adaptive scheme.
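The adaptive policy just described — lock onto an activity, drop to its cheap (rate, feature-set) profile, and jump back to full power when classifier confidence falls — can be sketched as a small controller. The per-activity profile table and the 0.6 confidence threshold below are made-up placeholders for illustration, not A3R's tuned values.

```python
# Illustrative sketch of an A3R-style controller. The profile table and
# the confidence threshold are invented placeholders, not the values
# from the actual system.
HIGHEST = (100, "time+freq")           # fallback: max rate, all features
PROFILE = {                            # cheap settings once an activity is locked in
    "sitting":   (5,   "time"),
    "standing":  (5,   "time"),
    "walking":   (50,  "time"),
    "escalator": (100, "time+freq"),
}
CONF_THRESHOLD = 0.6

class A3RController:
    def __init__(self):
        self.activity = None
        self.rate, self.features = HIGHEST

    def step(self, predicted, confidence):
        """Feed in the classifier's output for the latest window;
        returns the (sampling rate, feature set) to use next."""
        if self.activity is None:
            # High-power state: lock onto an activity once confident.
            if confidence >= CONF_THRESHOLD:
                self.activity = predicted
                self.rate, self.features = PROFILE.get(predicted, HIGHEST)
        elif predicted != self.activity or confidence < CONF_THRESHOLD:
            # Confidence in the current activity dropped: go back to the
            # highest rate and full feature set to re-classify.
            self.activity = None
            self.rate, self.features = HIGHEST
        return self.rate, self.features

ctl = A3RController()
print(ctl.step("sitting", 0.9))   # locks onto sitting -> (5, 'time')
print(ctl.step("sitting", 0.8))   # stays in the cheap state -> (5, 'time')
print(ctl.step("sitting", 0.3))   # confidence dropped -> (100, 'time+freq')
```

The savings come entirely from how long the controller stays in the cheap branch, which is why the sitting-dominated day described above is the favorable case.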
Now, this is not an apples-to-apples comparison — these are three different days for the user, because we didn't want to take the same trace and play it back. Why not? Because we found that one morning the guy decided to watch a lot of YouTube videos on the train, so his battery drained really fast anyway. But our point is, across users — and this is a snapshot of two different days — we end up saving about 30% of the energy through this whole process. Not a tremendous amount, but nothing to be ashamed of. So that's the first piece of the story. The next thing I wanted to talk about — I thought I would skip this part, because it's all on one phone. This work was born out of some healthcare and wellness applications I was doing in my previous job, where you had body-worn sensors and the phone was the gateway collecting the data from all of them. Typically you would have queries like this running on the phone: alert somebody if the heart rate was above 100 — and I'm using very simple, non-representative numbers here — and the acceleration was less than two. What did that mean? It meant you were not doing vigorous activity — you were just sitting down — yet your heart rate was very elevated, and the ambient temperature outside was more than 80 degrees. That's a condition where you might have stress on your heart, so you would want to SMS an alert to the caregiver: let's find out what's going on. So: can we save energy by not taking data from all the sensors all the time? The idea is simple. Let me explain it without even looking at the slide. Suppose I want to find out whether everybody in this room is sitting. That's my query, right? The naive way, which is what people would do, is: each of you is a sensor.
I would tell each of you: report to me whether you're sitting or not, and I'll make the determination. The smarter way would be to go around asking each person in turn — are you sitting? are you sitting? — and the first person who says "no, I'm not sitting, I'm standing," I'm done with the query, because this was an all-or-nothing query: everybody had to be sitting. If I'd started with you and been lucky, I would have saved asking the other 50 people in the room. So consider a query: if my average heart rate is over 100 over 5 seconds and my pulse oxygen saturation drops below 90, do something. Which sensor do I query first, the SpO2 or the heart rate? The reality is that different events have different likelihoods, and different sensors need different amounts of data. Without getting into the details of the table, there are some obvious strategies. The first strategy: evaluate the predicate with the lowest energy consumption first. There's a cheaper sensor — say SpO2 — so verify that predicate first. This is what compilers do all the time; it's compiler optimization 101, right? Evaluate the cheaper predicate, and since this is an AND query, only if it's true do you read the other sensor. The other strategy: select the predicate with the higher false probability — the more selective predicate — because if it turns out to be false, you can abort the query. In this kind of setting — again, I'm not getting into the math — the right approach seems to be some combination of the two: a balancing factor between selectivity and cost. And that's precisely what we do. Now, that was a very simple example. Where we're going is running queries on all of you at the same time, continuously, with multiple queries. One says: are you standing and are you near the door?
Another says: are you on your phone, are you sitting down, et cetera. So there is a lot of sensing, computation, and communication cost to be traded off here. Think of these as a complex set of queries: how do you do the optimization? That's what we had a couple of papers on — basically a more complex query-optimization logic. What I wanted to illustrate: when we ran these simulations with the health sensors, the results are shown for Bluetooth and 802.11, but let's just pick one. This shows the energy spent. This is the naive way, where everybody reports. This is the static way, where I figure out your predicate selectivity first and then decide a sequence. This one I didn't have time to get into — it basically says that at each instant of query evaluation I change the order, because some of the data may already be in my buffer, so the cost metric changes. If you do that, you go from 2,500 to about 800 joules — about a factor-of-3 reduction. But in the bytes transmitted you get a much steeper reduction. So we saved a lot in data but not that much in energy, which was really our key metric. Why is that? Because these interfaces aren't linear. They have a startup cost and a shutdown cost, and those costs are amortized over the data. And you see the behavior is different for Bluetooth and different for 802.11. The high-level takeaway: do not just look at the bytes saved and assume you're going to save energy, because the way the interfaces work — with their thresholds to power on, transmit the data, and power down — can give you very different real-life savings numbers. With that in mind, let me get to the point of illustrating where we're going with this: our cloud context query service.
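As an aside, the cost-versus-selectivity predicate ordering from the single-phone discussion can be made concrete. The rank rule below — sort by energy cost divided by the probability of short-circuiting the AND — is a standard heuristic assumed for illustration, and the costs and probabilities are invented numbers, not measurements from the papers.

```python
# Sketch of energy-aware predicate ordering for a conjunctive (AND) query.
# Rank rule (illustrative): evaluate the predicate with the lowest cost
# per unit of "chance to short-circuit" first, i.e. sort ascending by
# cost / P(false). Assumes every p_true < 1.

def order_predicates(preds):
    # preds: list of (name, energy_cost, p_true)
    return sorted(preds, key=lambda p: p[1] / (1.0 - p[2]))

def expected_cost(order):
    # Expected energy: you pay for predicate i only if all earlier
    # predicates in the order evaluated to true.
    cost, p_reach = 0.0, 1.0
    for _name, c, p_true in order:
        cost += p_reach * c
        p_reach *= p_true
    return cost

preds = [
    ("hr_avg_over_100", 5.0, 0.10),   # heart rate: mid-cost, rarely true
    ("spo2_below_90",   2.0, 0.05),   # SpO2: cheapest, very selective
    ("accel_below_2",   8.0, 0.60),   # accelerometer: costly, often true
]

naive = expected_cost(preds)                      # fixed order as written
smart = expected_cost(order_predicates(preds))    # cost/selectivity order
print(round(naive, 2), round(smart, 2))           # -> 5.24 2.29
```

The balancing factor the talk alludes to is exactly that ratio: a very cheap predicate can be worth evaluating first even if it is rarely false, and vice versa.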
So what we say is: I want to sense a lot of things about all of you, but the reality is that different queries sense different things about different people. We're saying there's going to be a cloud service — some server all the phones are connected to. All the queries get dumped on this server, which does the query optimization, answering all the queries but doing it in an intelligent manner so that it reduces the total energy drain across all the phones. Maybe it's a bad example, but one of the queries could be: tell me when three students of the class of 2012 taking ISM, which is a course, are co-located, so I can design my assignment. The point of the example is that this is a high-level query: you're not interested in one person, you're interested in a combination of context for multiple people — you want to know when certain conditions are satisfied. Somebody else has a query: let me know when one of these three persons A, B, C is back in the office and not using their cell phone, so I can discuss my assignment with them. The research question is: can we save energy by better coordinating the queries across a large number of phones? To illustrate the process: there is this service, and two different queries come in. M is the membership set — which phones you are interested in. One query says "I'm interested in these phones," another says "I'm interested in those phones," and they can overlap. Q is the query itself. In this process, the system will figure out: I'll take the data from two phones first, and with only that data I'll answer query one. Then it figures out that to answer query two it needs data from a third phone, and then it answers application two. You'll see where the query optimization comes in. This was from something we were presenting to Changi Airport — a little airport scenario. Suppose there are two queries. One says: inform me when A is standing and is near the security gate — two conditions, both on A. The other query says: inform me
when one of A or B is standing. Typically, how would we evaluate this? In the naive case, you would use the accelerometer from A to see if A is standing, the Wi-Fi from A to get the location, and for B you only need standing, so you use the accelerometer from B as well — and you'd be pulling this data all the time. The first approach, the one I described earlier, optimizes the queries individually. You look at A's query and say: I want location (Wi-Fi) and standing (accelerometer) — which is cheaper? Let me get that first. So I might end up getting the Wi-Fi location first for A, and the standing predicate first for B. It turns out that if you do that — again, very simple query optimization and the laws of combinatorics — if one of these is true, you will always need the other one. The smart choice is to optimize them jointly; in the individual case you will sometimes have to read all three sensors. What I wanted to illustrate is just the thought process: if I first find out whether A is standing — say I use the accelerometer on A — and suppose that's true, then I will never need to know whether B is standing, because of the OR condition. So you will either need this sensor or that sensor; you will never need both. I'm already saving one third of the sensor data through this simple example. Now we're extrapolating this to real-world scenarios and real-world testing, and you'll see that you can begin to save a fair amount of energy. The initial lab experiments — this is by no means conclusive — suggest that with individual optimization, versus asking all of you to send the data to me, you save about 48 percent, and with the joint optimization we save an additional 27 percent. So collectively, if we can indeed save 70 percent and still satisfy all these continuous queries, that would be a huge win for us. Because remember, we were about 40 percent higher than the base rate — we were at about 140. If we were able to save about 70
percent of that 40 percent — the additional overhead — then we're only 10 percent higher than the base, and that's acceptable. We're back at a point where we can do continuous query evaluation without killing the battery of individual phones, and that's what we're after.

Okay, time for me to switch. I'll talk about indoor localization. It's going to be much briefer — it's a much-researched topic — and I'll just explain some of the things we've been working on. The first point I wanted to make, and I've made it to some of you: a lot of the literature on indoor localization says "we are this accurate — we can get to within 2 meters, we can get to within 3 meters," et cetera. My claim is that in a practical system it's not just accuracy that's the important metric; it's the combination of accuracy and energy, because we're doing this continuously — what is the energy cost to actually get that accuracy? — and then there's the infrastructure cost, or scalability. The second point: if you look at the vast majority of the papers, they'll say "we did indoor localization, we tested it with some users on an Android device." Why Android? Because Wi-Fi scanning and fingerprinting is an open API there; it allows you to do that. But in Singapore, as in many other parts of Asia, 80% of the people still use iPhones, not Android. So in a practical system — we've actually built a localization system and deployed it — we found only 10 people signed up for the service. The other 200 people said, "oh, do you have an Android version?" No. "Do you have an iPhone version?" No. Why not?
Because on the iPhone the API is locked — it doesn't report the signal strength to you when you do Wi-Fi scanning. So what do we do in this case? The researcher's approach says: okay, we publish the paper with Android. Practical system building says: we have to have a solution for iPhone. The operator basically told us, "if you don't have a solution for iPhone, go away, don't talk to us — we're not interested. You computer scientists, great, publish your papers; I don't want to hear from you." So we have used a combination of techniques. Wi-Fi fingerprinting-based triangulation — all well-known techniques. Movement-based continuous tracking, where we do dead reckoning — how far has the person moved — with the accelerometer and compass. Here is our basic strategy; it's a combination of three things. One: periodically, at certain times, we do the Wi-Fi scan against fingerprints. This can either be done on the phone, for Android, or, for iPhone, via the controller — every Wi-Fi infrastructure has controllers that report the signal strengths the APs see from the phone. One big difference, for those of you working on indoor localization, when you look at commercial systems: on Android you do a scan and you might get 15 APs. Actually, when we did it initially in our lab, we got 240 access points at one point — many of them were virtual access points, the same AP with different SSIDs; we didn't realize that. Even after you filter, you have many, many points, so you can do very accurate, multi-dimensional fingerprinting. But when you go to the controller-based iPhone system, the controller will only report the signal strength of the AP to which you're associated. You suddenly drop from 30 dimensions to 1 dimension, and that reduces the accuracy of your system a lot. So we do this periodically; in the meanwhile, we're using the accelerometer and compass on the phone to do the inertial tracking — the dead reckoning, how much you have moved in each direction, a very standard technique. And then we run this Viterbi computation — for those of you who know it, it's like computing the most probable path through a trellis; I won't get into the details.

Now, some of the key practical challenges. Again, I'll skip the details; some of the techniques are well known. The first thing we wanted to find out is how accurately we can even distinguish indoor locations if we do fingerprinting. What this shows is two sites: SIS, our campus building, and Plaza Singapura, a mall. This is the Earth Mover's Distance, which is a measure of the divergence between two distributions — here, distributions of signal strength across different areas. One thing you observe: in SIS, the Earth Mover's Distances start moving up really fast. What does that mean? It means that if I'm standing here, the statistical histogram of my signal strength is different from the signal strength three meters away, because the distributions are different. That suggests that if I fingerprint at that granularity, I can start localizing you at three-meter accuracy. Why is that? Because our campus building has many nooks and crannies, many doors and obstacles. You go around an obstacle and see a very different radio environment than on the other side of the obstacle, two meters away. Then you go to Plaza Singapura, which is a mall — and if you know the malls in Asia, they have a big hollow atrium in the middle; you can see all the floors, and all the stores are along the sides. There you begin to see that the distributions of the radio signal strength — as measured on your phone, remember, not with custom measurement devices with very high sensitivity — show that even up to 16 meters apart, you don't really see any statistically different
distributions, because it's all more or less line-of-sight, open propagation. This immediately suggests the key takeaway: you can take your algorithm, and when somebody asks "how accurate is your algorithm, what is your error rate," you cannot give an answer — the answer is building-specific. That's unfortunate, because it means there is no technique out there where you can immediately say "I want two-meter accuracy, I'll use that algorithm." It depends very much on the layout of your environment — the building, its obstacles, et cetera. That's point one. The second point I wanted to mention is the visitors. First let's look at Plaza Singapura and RSSI. What we show here: when we do a scan on Android, this is the RSSI range in dBm (with the negative sign), and this is the number of access points heard at that signal strength. What you see: this is low density, when very few people are around. When a lot of students and their phones come in, you can clearly see the distribution's peak shift to the right. What is happening? There are some APs that you hear very weakly, with very poor signal quality; as more people come in, those remain indistinguishable — their quality stays poor. And there are some APs with very strong signals that suddenly become moderate-signal APs — their signal becomes weaker. Why is this important? It suggests that if I fingerprinted during a time of low user density and used that fingerprint for localization later, I would get errors. Why? Because human beings absorb in the 2.4 GHz band — we block signals, and our phones cause interference — so as density increases, the signal strengths tend to move to the left.

So what have we got so far? Right now, at the 80th
you are getting about a 50% divergence in the point where you hand off just by changing your movement speed. Next, look at the throughput. This is for a TCP connection; forget the red line — the blue line shows the throughput. Look at this axis: when you are moving slowly — remember, there is nobody else on the network, no interference — we got about 4 or 5 megabits per second. When we started walking fast, it dropped to 600 kilobits per second — roughly an 8x drop. So while the qualitative results are all expected — we all know that as you move faster, you hand off at a longer distance and get poorer signal — in the femto environment, where cells are really small and indoors, this difference is accentuated: pretty dramatic differences. Finally, we see that the signal strengths themselves show as much as 10 to 15 dB of variation at a specific location on different days. That means even if we did fingerprint, the fingerprints would vary from day to day. So where we are going with this is the problem of real-time RF map creation. The logic is simple: the phones, in a crowdsourced manner, report the signal strengths they see at their current locations to a server. From that, I try to create a real-time prediction model for the building, given the context, and predict what the signal strength is going to be at other points. To do that we use two parts. One: we cluster the data — I am skipping a lot of details. On the clustering — I don't have the graph here — you will see that if you leave a door open, for example, the propagation behaves almost identically inside and outside; if you close it, the outside is very different from the inside. So you will see that our clustering algorithm, running continuously, creates one cluster for the room and a different cluster outside; if you leave the door open, it will put them all in the same cluster, because they
have the same propagation effect. So first we create the clusters, then we estimate alpha and r — this is a log-distance path-loss model — parameterizing each cluster differently, and we predict the values for each cluster. The end result: previously you had about 25 dB of error if you relied only on static measurements — that was the standard deviation — but now we are down to a mean error of about 3.3 dB, with a 5th-to-95th-percentile spread of 12 dB. It's not perfect by any means, but it suggests there are opportunities here, because indoor environments are notoriously difficult — people move things around, doors open and close, the characteristics change. This axis shows the fraction of points used for training, and what it suggests is that if I get readings from 50% of the points, I can predict the RF behavior of the other 50% with high accuracy. Why is this useful? The accuracy obviously degrades when we have fewer observations. So if it's nighttime and there are only 10 of you in this building, I'll predict the other locations much less accurately. But that's okay, because fundamentally, when do we need high accuracy? When the network is congested, when there are a lot more issues in this building. So when it's congested, I will get better results and make better predictions — which is exactly what we want. When it's not congested, who cares — I'll get it all wrong, but you have the network to yourself.

So I'm going to sort of end here by pointing out, for the group analytics, one of the things we are doing right now: something called queue detection. Imagine there's a Starbucks, and suddenly a bunch of people queue up while other people are around just sipping their lattes or waiting with their friends. What we're trying to do is use the accelerometer to figure out your activity — standing, walking, swaying — and the
compass and the Wi-Fi to get your location and heading — if you're changing your direction of movement all the time, that suggests you might not be queuing — and from all of this we're building up a sequential logic to determine whether you are queuing. Now, that's never going to be perfectly accurate. This is an actual accelerometer trace of a person queuing: every now and then they take a few steps forward. This is what you expect in queuing; it's a little different from walking. Some final details: people typically use frame sizes of 5 seconds, but to get the distinct behavior of queuing you have to use smaller frames, because you move and stop — you don't keep walking. Right now, for one person in real life, we get 90% accuracy — and this, I have to say, is with the phone in the pocket. These are real people standing in queues; we have clinic queues in Singapore with a lot of people, and we get 90% accuracy with the phone in the pocket. Once you take the phone out and toggle it around, the accuracy drops. So what are we trying to do next? That was the accuracy for one user, but queuing is fundamentally a collective activity. Can we figure out how many people are queuing, in different groups? Can we figure out how many queues there are — one queue or two? To answer the question that I think you raised — why do I care about the number of queues — imagine there are shops right next to one another. It's hard, from location alone, to distinguish whether you are in this queue or the other, so you do need to know there are actually two different queues for two different shops, because you might want to give the incentive only for one shop — the other shop is not participating. And then I want to find out your position in the queue. Why am I doing all this? Because Starbucks is the canonical example: what it wants is to automatically detect when people queue up and say, okay, there are 25 people in the queue now, it's taking long to serve them, so all the
people beyond position 20 — give them a 5% discount on their phone and tell them: if you hang around, you get 5% off coffee. This way they are incentivized to stay. This is an example of a context-aware service. So, I covered the group analytics very briefly, as I promised; the experimentation service — maybe the next time I'm here we'll have something to talk about. I'll stop here. I just want to acknowledge a lot of people, including my colleague at SMU, several faculty members, collaborators at CMU as I mentioned, and a bunch of PhD students and research engineers. With that, I'm done.

In the scenario you described initially, about people sitting in a cafe being given a discount for four people — you said you achieve localization of 10 to 15 meters. Do you achieve the same localization even if users keep their phone inside a pocket or a bag?

Actually, I'll answer that in two parts. Right now this is one phase — it's a 5-year project. To detect whether people are sitting together in a cafe, 8-meter or 10-meter localization is totally useless, because they might be sitting in different parts. Our end goal is to get to 1-meter localization. How?
We don't even know right now — maybe some combination of infrastructure and peer-to-peer sensing will be involved. That's an ongoing process; I just want to be very upfront about that. Now, to your question: if you look at the RF algorithms, et cetera, it doesn't really matter much whether the phone is in your pocket or your bag. But if you take the phone out and put it on the table, can I detect whether you're sitting or standing? Absolutely not. So I think the real challenge — I didn't show this slide — is that I cannot get this right 100% of the time; it's just not going to be possible. The question is whether I can detect it statistically. Say, historically, you always come in with a group of four friends; three of them are sitting, and your phone is in the same location. I'll make the assumption that all four of you are sitting together. So there's a statistical likelihood on the basis of which we'll do the interventions, but we'll never be 100% correct.

I've just got one more question: when you're tracking the user with the accelerometer, how often does the user actually convey their location to the hotspot or the access points? Doesn't that consume a lot of battery?

Okay — actually I skipped that; great question. What we have tried so far: remember I said there's the Viterbi algorithm that runs while you're moving. We do the fingerprint-based location computation every 3, 6, or 12 seconds. The reason we try 3 seconds is that, roughly, if you measure your gait, in 1 second of walking you move about 1.2 meters on average, so every 3 seconds you move about 4 meters. As it turns out — and that's why I said those metrics are important — while Viterbi-based location with the accelerometer gives you dead reckoning all the time, it consumes way more energy. If you just do the Wi-Fi scanning and reporting, it actually doesn't consume as much energy if you do it every 6 seconds. But
that's not the whole story. Again, we're not the only group doing this; there are well-known approaches that say: periodically sample the accelerometer, and if you find the user is sitting down, don't even do the Wi-Fi scanning; go into a quiescent state. That goes back to the A3R-type logic: you adapt the sampling and the feature and reporting frequencies to the activity, and you try to be conservative. Eventually, our goal is to reduce the energy consumption, and frankly, at this point our evidence suggests that all of that VRB-based fine-grained dead reckoning is not worth it when you consider the energy cost. Better to ditch the accelerometer, ditch the compass, and just stick with the Wi-Fi fingerprint for now: a little less accuracy, but it lasts much longer.

You said you planned for about 30,000 people on the program. Are you planning on getting more?

We started only about a month ahead of you guys, so right now we have about 50 active users on the program. We'll get to 30,000 using the support from our mall partners, Sentosa, and so on. There will be incentives to sign people up; it's not just phones. They get rebates and vouchers from the mall, and special discounts and freebies for signing people up.

What are the chances of all four people having that app on their phones, and active, for them to actually get that query?

That's a great question, a beautiful question. I don't know the full answer, but I'll give you parts of it. Deliberately, 5,000 of our participants are our students, and as you know, birds of a feather flock together, so the student body is hyper-sampled. If the students go out for coffee, there's a very good chance that the four buddies going out from the cohort all have the same app running. Now, for members of the general public, what we're trying to do is incentivize people: when you sign up, try to get your buddies and your family members to sign up, and they get additional discounts. So we're
trying to leverage a little bit of a weak network effect: if you sign up, chances are people in your circle will also sign up. How successful that's going to be, I really don't know at this point, because we're nowhere close to that.

Besides the phones and the APs, do you want to deploy other sensors and infrastructure, to move some of the energy consumption from the mobile users to that fixed infrastructure?

It's certainly possible; right now, that's not the plan. On the SMU campus it's possible; there is an ongoing piece of work which I didn't allude to here. We already have basic things like motion sensors, so you can already think of combining these for some sensor fusion. The challenge is that motion sensors give you only aggregate information: they say somebody passed through, not who passed through. In large public spaces that quickly becomes a problem; the data doesn't become really useful, because dozens of people are passing through and it's hard to disambiguate who actually passed through, which is what would help with the localization.

There is some RFID-based indoor localization, of course, so you could combine that with identification through the phone, so that you know these people are passing through and that this particular person is around.

Yes. I'm not ruling anything out right now, but look at the environments. One is the malls, and the malls are very loath to put any additional infrastructure out there, for many reasons; one of them is actually cosmetic: they don't want any new equipment sticking out anywhere. And then, and this is a philosophical answer, what do we want to shift? Shifting the energy burden, of course, is good. But if you want more accuracy for location, it's not clear to me, after all this work, that there really is a market, a demand, for finer-grained location. You have these things, RFID tags on store shelves: you walk right next to one and it tells you that you need cornflakes. This has been done for like 10 years now. Have these
really been deployed? No, because you go pretty much where your spouse tells you to go and pick up your cornflakes; you don't need an RFID tag for that. So part of it is the philosophy that, because we're working with commercial players, that may be great technology, but we don't need it. On our campus, which is more of an experimental setup, sure, we might go with RFID, we might go with other sensors. UWB gives even better accuracy if you have a UWB radio in your phone, but how many phones have UWB radios? Zero, right now. So right now our philosophy is not to try to augment the infrastructure at all. Will it change over five years? There is a very good chance it might, once we figure out what works and what doesn't.

We've hit the magic number of three questions, so let's thank the speaker again. That was an action-packed, high-speed-movie kind of talk, very exciting.

I'm the action hero of this. Thank you very much again.

Thank you.
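The activity-adaptive duty cycling discussed in the answers above (sample the accelerometer cheaply, skip Wi-Fi scans while the user is stationary, and back off the scan interval) can be sketched roughly like this. The 3/6/12-second intervals mirror the values mentioned in the talk, but the variance threshold and the function names are illustrative assumptions, not the actual A3R or LiveLabs implementation:

```python
# Sketch of activity-adaptive scan scheduling. The accelerometer-variance
# test is an illustrative stand-in for real activity recognition.

BASE_INTERVAL = 3.0     # seconds: ~1.2 m/s gait -> roughly 4 m between scans
MAX_INTERVAL = 12.0     # longest back-off while stationary
MOTION_THRESHOLD = 0.5  # accel variance above this ~= user is moving

def next_action(accel_variance, interval):
    """One duty cycle: return (do_wifi_scan, next_interval_seconds)."""
    if accel_variance < MOTION_THRESHOLD:
        # Likely sitting still: skip the expensive Wi-Fi scan and back off.
        return False, min(interval * 2, MAX_INTERVAL)
    # Moving: scan at the base rate so successive fixes stay ~4 m apart.
    return True, BASE_INTERVAL
```

When the user sits down, the schedule decays 3 → 6 → 12 seconds with no scans; the first burst of motion snaps it back to a scan every 3 seconds. This captures the tradeoff from the talk: Wi-Fi fingerprinting alone, duty-cycled by activity, rather than continuous accelerometer-based dead reckoning.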