Thank you for the introduction. I'm going to talk about the ultrasound tracking ecosystem, which is an exotic and little-known ecosystem. I would like to start with a story about a product, which is also our motivation for this work. Once upon a time, there was a product in the ultrasound tracking ecosystem that could not be perceived by humans. The product was actually an interesting idea. It was very promising and everything, but it also had a fatal flaw. So now that I've done this introduction, I would like to tell you more about the story of the product, how it came to be, and what its life cycle was. So in 2012, a company called SilverPush was a startup in India. It was founded there, and they had this ultrasound cross-device tracking product. I'll go more into the technical details later. For a couple of years they were working on that product, and it wasn't until 2014 that they got some serious funding from venture capital firms and other angel investors, a few millions. A few months after they got funded in 2014, they also got some press coverage about their product, and they got some pretty good reviews in newspapers and articles about what the product could do. At the same time they were doing what most companies do, like publishing patents about their technology and everything. Then, a year or a year and a half later, things started to go not so well for them. The security community noticed, and there was some press coverage about the product that was not so positive anymore. So this is one of the very first emails that appeared on the web regarding the product. It's from a W3C working group. A researcher there is basically notifying the other members of the group that, okay, there is this product, maybe there are transparency issues, and certainly the users are not aware of what exactly is going on there, so let's keep an eye on it.
So this was one of the very first things published about the product from the privacy and security perspective. What happened then was the press took notice, and we got all those headlines urging users to be very careful: oh, this is evil, take care, people are eavesdropping on you, and so on. Of course, this also led the FTC to take action. They organized a workshop on cross-device tracking in general, and they made specific mentions of ultrasound cross-device tracking. Don't worry if you're not familiar with these terms, I'm going to define everything later. What they were basically saying is: there are transparency issues, how do we protect ourselves, how does that thing work? Then the users of course started to react, and many people were unhappy. They were complaining: what is this, I don't want that thing. Some people were actually suggesting solutions, and the solutions were making sense. And of course, you always have the users that are completely immune to whatever you have there. What happened then, about five months later, is that the FTC took much more serious action regarding this specific product. It sent a letter to all the developers, and the letter was essentially saying: you know, you're using this framework in your app, we've seen it in the Google Play Store. It's not enough that you are asking for the microphone permission. You should let the users know that you are tracking them, if you are doing so. Otherwise you are violating rule X, Y, Z and you are not allowed to do that. So this was pretty serious, I would say. What happened next is that the company withdrew from the US market and said: you know, we have nothing to do with the US market, this product is not active there, you shouldn't be concerned. So, end of story; the product is not out there anymore, in the US at least. Are we safe? It seems it was assumed that this was an isolated security incident.
And to be fair, very little became known about the technology at this point. The press moved on to other hot topics at the time. People went quiet; if people are not using it, it's fine. So everyone seemed happy. But we are curious people, so we had lots of questions that were not answered. Our main question was why they were using ultrasounds. We'll see that what they are doing you can also do with other technologies. How would such frameworks work? We had no idea; there was no technical coverage of it out there. Are there other such products out there? Because we were aware of one. All the articles were referring to that one product, but we were not sure if there were others doing the same thing. And of course we were looking for a report about the whole ecosystem and how it works, and there was nothing. So what do you do then, if there are no technical resources? We decided to do our own research and come up with the report that we were lacking. So we're done with the motivation. We were pretty pumped up about looking into it: okay, what's there? The rest of the presentation will go as follows. First I'm going to introduce ultrasound tracking and other terminology. Then I'll go on with the attack details, and indeed we have an attack against the Tor browser. Then we do a formal security analysis of the ecosystem and try to pinpoint the things that went wrong. And then we introduce our countermeasures and advocate for proper practices. To begin with, I'm Vasilis. I've done this work with other curious people: Shuang Hao, Yanick Fratantonio, Christopher Kruegel and Giovanni Vigna from UCSB, and also Federico Maggi from Politecnico di Milano. Let's now start with the ecosystem. Apparently ultrasounds are used in lots of places and they can be utilized for different purposes. Some of them are cross-device tracking, which I already referred to,
Audience analytics, synchronized content, proximity marketing and device pairing. You can do some other things as well, but we'll see them later. To begin with what cross-device tracking is: cross-device tracking is basically the holy grail for marketers right now, because you're using multiple devices, a smartphone, a laptop, a computer, maybe your TV, and to them you appear as different people. They want to be able to link those devices, to know that you're the same person, so that they can build your profile more accurately. For instance, if you're watching an ad on the TV, they want to be able to know that it's you, so that they can push relevant ads or follow-up ads to your smartphone. This is employed by major advertising networks, and there are two ways to do it: either deterministically or probabilistically. The deterministic approach is much more reliable, you get 100% accuracy, and it works as follows. If you're Facebook, the users are heavily incentivized to log in from all their devices. So you immediately know that, okay, this user has these three devices, and I can push relevant content to all of them. However, if you're not Facebook or Google, it's much more unlikely that the users would want to log into your platform from their different devices, so you have to look for alternatives. And one tool to build those alternatives with is ultrasound beacons. All ultrasound tracking products are using ultrasound beacons. They may sound exotic, but what they do is encode a sequence of symbols at a very high frequency that is inaudible to humans. That's their first key feature. The second one is that they can be emitted by most commercial speakers and they can be captured by most commercial microphones, for instance the one found on your smartphone. So the technical details are the following. I know there are lots of experts in these kinds of things here.
So I'm averaging out how the companies are doing it right now. I'm not saying that this is the best way to do it, but this is more or less what they're doing. Of course they have patents, so each one of them is doing a slightly different thing so they don't overlap. They're using the near-ultrasound spectrum between 18 kHz and 20 kHz, which is usually inaudible to adults. They divide it into smaller chunks. If you divide it into chunks with a size of 75 Hz, you get about 26 chunks, and then you can assign a letter of the alphabet to each one of them. Then, usually within 4 to 5 seconds, they emit sequences of characters, usually containing 4 to 6 characters, and they use them to encode a unique ID corresponding to the resource they attach the beacon to. There is no ultrasound beacon standard, as I said previously, but there are lots of patents, so each one of them is doing a slightly different thing. But this is the basic principle. We did some experiments, and we found out that within 7 meters you get pretty good accuracy and a low error rate. Of course this depends on exactly how you encode things, but with applications found on Google Play, this worked up to 7 meters. We couldn't find computer speakers that were not able to emit near-ultrasound frequencies and work with this technology. It is well known that these kinds of frequencies cannot penetrate through physical objects, but this is not a problem for their purposes. And we did some experiments with our research assistant, and we can say that they are audible to animals. So if you combine cross-device tracking and ultrasound beacons, you get ultrasound cross-device tracking. This is a pretty good idea actually, because it offers high accuracy and you don't ask the users to log in, which is a very demanding thing to ask for.
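To make the encoding numbers above concrete, here is a minimal sketch in Python. This is not taken from any real SDK; the parameters (18 kHz base, 75 Hz bins, one bin per letter, a fraction of a second per symbol) simply follow the figures just mentioned, and the Goertzel-based decoder stands in for whatever detection the real frameworks use:

```python
import math

BASE_HZ = 18000      # bottom of the near-ultrasound band
STEP_HZ = 75         # bin width: (20000 - 18000) / 75 gives ~26 usable bins
FS = 48000           # sample rate
ALPHABET = 'abcdefghijklmnopqrstuvwxyz'

def symbol_to_freq(c):
    """Map a lowercase letter to its carrier frequency."""
    return BASE_HZ + ALPHABET.index(c) * STEP_HZ

def synth(beacon, symbol_secs=0.2):
    """Render a beacon string as one pure tone per symbol."""
    n = int(symbol_secs * FS)
    return [[math.sin(2 * math.pi * symbol_to_freq(c) * i / FS) for i in range(n)]
            for c in beacon]

def goertzel_mag(window, freq):
    """Signal energy of `window` at `freq` (Goertzel algorithm)."""
    coeff = 2 * math.cos(2 * math.pi * freq / FS)
    s1 = s2 = 0.0
    for x in window:
        s1, s2 = x + coeff * s1 - s2, s1
    return math.sqrt(s1 * s1 + s2 * s2 - coeff * s1 * s2)

def decode(windows):
    """Pick the loudest symbol frequency in each window."""
    return ''.join(max(ALPHABET, key=lambda c: goertzel_mag(w, symbol_to_freq(c)))
                   for w in windows)

msg = decode(synth('adxq'))
print(msg)  # prints: adxq
```

In a real deployment the receiver of course works on noisy microphone samples rather than clean synthetic tones, which is why the companies keep the alphabet small and the symbols long.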
You can embed those beacons in websites or TV ads. This technology, however, requires some sort of sophisticated backend infrastructure, and we're going to see more about it later. You also need a network of publishers who are willing to incorporate beacon-enabled content, whatever this content is. Then of course you need an ultrasound cross-device tracking framework that is going to run on the user's smartphone. These frameworks are essentially an advertising SDK that developers can use to display ads in their free apps. So it's not that the developer deliberately incorporates the ultrasound framework; they incorporate an advertising SDK, with varying degrees of understanding of what it does. Here is how ultrasound cross-device tracking works. In step one we have the advertising client; he just wants to advertise his products. He goes to the ultrasound cross-device tracking provider, who has the infrastructure, and sets up a campaign. The provider associates a unique ultrasound beacon with this campaign, and then pushes this beacon to content publishers to incorporate into their content, depending on what the advertising client is trying to achieve. This is step three. In step four, a user accesses one of those content publishers, either a TV ad or a website on the internet, and once this content is loaded or displayed by your TV, at the same time the device's speakers are emitting the ultrasounds. If you have the ultrasound cross-device tracking framework on your phone, which is usually listening in the background, then it picks up the ultrasound, and in step six it submits it back to the service provider, which now knows that, okay, this guy has watched this TV ad or whatever it is: I'm going to add this to his profile and push relevant targeted ads back to his device. Of course, by doing this, they're just trying to improve their conversion rate and get more customers.
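As a rough illustration of steps one through six, here is a toy sketch of the provider-side bookkeeping in Python. All names and identifiers are invented, and real backends are of course far more involved:

```python
class Provider:
    """Toy model of an ultrasound cross-device tracking backend."""

    def __init__(self):
        self.campaigns = {}   # beacon id -> campaign name
        self.profiles = {}    # device id -> set of campaigns seen

    def create_campaign(self, name):
        # Steps 1-2: a client sets up a campaign and the provider
        # associates a unique beacon with it.
        beacon_id = 'bcn-%04d' % len(self.campaigns)
        self.campaigns[beacon_id] = name
        return beacon_id      # step 3: handed to content publishers

    def report(self, device_id, beacon_id):
        # Step 6: the SDK on the phone heard the beacon (steps 4-5)
        # and phones home. Note that any reported beacon is trusted,
        # which is exactly what the injection and replay attacks
        # described later exploit.
        name = self.campaigns.get(beacon_id)
        if name is not None:
            self.profiles.setdefault(device_id, set()).add(name)

provider = Provider()
beacon = provider.create_campaign('tv-ad-sneakers')
provider.report('device-123', beacon)
print(provider.profiles['device-123'])  # {'tv-ad-sneakers'}
```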
Another use of ultrasounds currently in practice is proximity marketing. Venues set up multiple ultrasound emitters; that's a kind of fancy name for speakers, and this is the nice thing about ultrasounds, you just need speakers. They put these in multiple locations in their venue, either a supermarket or a stadium, for instance, and then there is a companion mobile app: if you're a supermarket, there is a supermarket app; if you're an NBA team, which we're going to see later, you have this fan application that the fans of your team can download and install on their smartphones. Once this app is listening in the background, it picks up the ultrasounds and submits them back to the company. The main purpose of using this is to study in-store user behavior, to provide real-time notifications, like, okay, you're in this aisle in the supermarket, but if you just walk two meters down, you're going to see this product on discount, or, the third point, which incentivizes the users more, to offer reward points to users visiting your store. And there actually is a product on the market doing exactly that. Some other uses are device pairing, and this relies on the fact that ultrasounds do not penetrate through objects. So if you have a smart TV, say, or a Chromecast, for instance, it can emit a random PIN through ultrasounds; your device picks it up and submits it back to the device through the internet, and now you've proved that you're in the same physical location as the Chromecast or whatever your TV is. Also, Google recently acquired SlickLogin; they are also using ultrasounds for authentication. It's not entirely clear what their product is about, though. And then you have audience measurement and analytics.
What they're doing is basically, if you incorporate multiple beacons in an ad, then you can track the reactions and the behavior of the audience, in the sense that, first, you know how many people have watched your ad, and second, you know what happened. If they switch channels in between and submit only the first beacon of the two, if you have two, then you also track their behavior. Okay, so we've seen all these technologies, and then we started wondering: how secure is that thing? Like, okay, what security measures are applied by the companies and everything? So I'm going to immediately start with the exploitation of the technology. To do that, we just need a computer with speakers and the Tor browser, a smartphone with an ultrasound-enabled app, and a state-level adversary. I'm going to say more about the state-level adversary later, but just keep in mind that it's in the Tor threat model. I have a video of the attack, and I'm going to pause it in different places to explain some more stuff. Okay, so I'm going to set up the scene before that. Let's make the assumption that we have a whistleblower who wants to leak some documents to a journalist, but he doesn't know that the journalist is working with the government, and the journalist's main intent is basically to de-anonymize him. The journalist does the following: he asks the whistleblower to upload the documents to a Tor hidden service or a website that he owns, and the whistleblower, thinking that he's safe to do that through Tor, loads the page. So now I have the demo, which implements exactly that scenario. The whistleblower opens the Tor browser. The setup is the following: we have the phone next to the computer. This can be up to seven meters away, but for practical purposes, it had to be next to the computer. So we have the Tor browser.
What we're going to do first: for the purposes of the demo, we use a smartphone listening framework that's visible to the user. This is basically a demo app; real ultrasound cross-device tracking apps run in the background. Now we're setting it to listening mode so that it starts listening. Of course, with normal frameworks, the user doesn't have to do that part, but we want to show what's happening. Now the whistleblower is going to load the innocuous webpage suggested by the journalist, and we'll see what happens. Okay, now we've loaded the page and the phone is listening; in reality, in the background. Let's see what happens. Okay, this looks pretty bad. We have lots of information about the user visiting our service. I assume you already have some clues about how this happened. The information that we have is the following. First of all, we have his IP address, his phone number (don't call this phone number), his Android ID, his email and his Google account email, and his location, of course. This is enough to say that we essentially de-anonymized him; even if we had only the IP address, that would have been enough. Before I explain exactly how the attack worked, I'm going to introduce some tools that the attackers have at their disposal. The first one is beacon injection. What you can essentially do is craft your own ultrasound beacons and push them to devices listening for beacons. Their devices are going to treat them like valid beacons and submit them back to the company's backend. Similarly, you can also replay ultrasound beacons, meaning that you can capture them at various locations. This is actually happening in the wild, at a large scale, for a specific application. Once you capture those beacons, you can replay them back to the company's backend through the users' devices.
To give you a clue, there is a company that incentivizes users to visit stores by providing them offers and points when they visit. And people are capturing the beacons and replaying them back to their devices from home. They are sharing the beacons over the internet so that they don't have to go to the actual stores. The problem here is that the framework handles every beacon; it doesn't have a way to distinguish between valid and maliciously crafted beacons. And my favorite tool for the attackers is the beacon trap, which is a code snippet that, once you load it, reproduces one or more inaudible beacons that the attacker chose. This can happen in lots of ways. In the demo, I used the first one: you build a website and you have some JavaScript there, just playing the ultrasound in the background. What else you can do is exploit a stored cross-site scripting vulnerability on any random website and then inject beacons to the visitors of this website, or man-in-the-middle attacks, just adding a short JavaScript snippet to the user's traffic, or just send an audio message to the victim. So the Tor de-anonymization attack works as follows. First the adversary needs to set up a campaign, and once he obtains the beacon associated with that campaign, he builds a beacon trap, and in step three he lures the user to visit it. This is what the journalist did to the whistleblower in our scenario. Then the user loads the resource, he has no idea this is possible, and it silently emits the ultrasound beacon. If your smartphone has such a framework, it's going to pick it up and submit it back to the provider. And I don't know about you, but when I'm using Tor, I'm not connecting my phone to the internet through the Tor network. My phone is connected through my normal Wi-Fi.
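As an aside on the beacon trap: the audio payload itself is trivial to produce. The sketch below is my own illustration, not code from any real trap; the 18375 Hz symbol frequency is arbitrary. It writes a two-second near-ultrasound tone to a WAV file that a trap page could autoplay, or synthesize directly with the Web Audio API:

```python
import math
import struct
import wave

FS = 48000        # sample rate
FREQ = 18375      # arbitrary symbol frequency inside the 18-20 kHz band
SECS = 2

# 16-bit mono PCM samples of a pure near-ultrasound tone
frames = b''.join(
    struct.pack('<h', int(32767 * 0.8 * math.sin(2 * math.pi * FREQ * i / FS)))
    for i in range(FS * SECS))

with wave.open('trap.wav', 'wb') as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(FS)
    w.writeframes(frames)
```

Note that nothing here requires special hardware: any commodity speaker that reaches 18-20 kHz, which is almost all of them, can play this file.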
So now the ultrasound service provider knows that this smartphone emitted that specific beacon. Then, in step seven, the state-level adversary can simply subpoena the provider for the IP address or other identifiers, of which, from what we've seen, they collect plenty. Okay, so the first two elements we have already: the Tor browser and a computer with speakers, fine. A smartphone with an ultrasound cross-device tracking framework enabled, fine. What about the state-level adversary? We didn't have a state-level adversary handy, so what we did is redirect the traffic from step six to the adversary's backend. And I want to stress a point here: this is not a long-shot assumption. What we've seen in October is the following. I don't know how many of you realize it, but AT&T was running a spy program, I think it was called Hemisphere, and it was providing paid access to governments with only an administrative subpoena, which doesn't even need to be approved by a judge. So it's pretty easy for them to get access to this kind of data, especially when we're talking about an IP address; it's very easy for them to get it. We also came up with some more attacks. The first one is profile corruption. Advertisers really like to build profiles about you, your interests and your behavior. What you can do is inject beacons into other people's phones, or even into your own phone, and thereby malform their profile. The exact impact of this attack depends on how the backend of the advertising company and the infrastructure work, but the attack is definitely possible. And then there is the information leakage attack, which works under a similar assumption: you can eavesdrop on beacons and replay them to your own phone to make your profile similar to that of the victim, and then, based on how recommendation systems work, you're very likely to get ads and content similar to the victim's.
Of course this also depends on exactly how the recommendation system is implemented, but it's definitely possible. Okay, so we've seen certain things that make us think that the ecosystem is not very secure. We tried to find out exactly why this happened. We did a security evaluation and we came up with four points. The first one is that we realized their threat model is inaccurate. Second, in none of the implementations we've seen did the ultrasound beacons have any security features. They also violated a fundamental security principle, and they lacked transparency when it came to user notification. Let's go through them one by one. Inaccurate threat model: they rely on the fact that ultrasounds cannot penetrate walls and travel only up to seven meters reliably. As a matter of fact, they assume that, because of that, you cannot capture and replay beacons. What happens in practice is that you can get virtually close using beacon traps. So their assumption is not that accurate. Also, the security capabilities of beacons are heavily constrained by the low bandwidth of the channel and the limited time you have to reach the users; if someone is in a supermarket, he's not going to stand in one spot for very long. So you have limited time and a noisy environment, and you want a very low error rate, so adding crypto to the beacons may not be a good idea; but this also results in replay and injection attacks being possible. We also have a violation of the principle of least privilege. What happens is that all these apps need full access to the microphone. Based on the way the technology works, it's completely unnecessary for them to gain access to the audible frequencies. However, even if they wanted to, there is no way to gain access only to the ultrasound spectrum: both on Android and iOS you either gain access to the whole spectrum or no access at all.
This of course results, first, in malicious developers being able to start using their access to the microphone at any time. And of course all the benign ultrasound-enabled apps are perceived as malicious by the users; I'll actually show more about this later. The lack of transparency is a bad combination with exactly what we've seen previously, because we observed large discrepancies between apps when it comes to informing the users, and also lots of discrepancies when it comes to providing opt-out options. And there is a conflict of interest there, because if you're a framework developer, you want to advise your customers on proper practices, but you're not going to enforce them, or give them an ultimatum: either you do it properly or you're not using my framework. So there is a conflict of interest there. Here is what happened because of the lack of transparency. Signal360 is one of those frameworks, and an NBA team started using it in May. A few months later there is a lawsuit, and someone claims that the thing is listening in the background. What's interesting in the claim is what they're saying: okay, I gave permission through the Android permission system for them to access the microphone, but it was not explained to me exactly what they were doing; and this is closely tied to what the FTC was saying in their letter a few months earlier. And again, the same story: a football team starts using such a framework, and a few months later people are complaining that they are being eavesdropped on. I think what happened here is that when the team was playing a match, the application started listening for ultrasounds; but not all your fans are going to be in the stadium, so you end up listening for ultrasounds in a church and other places. So yeah, people were pissed. Okay, just to put it into perspective, how prevalent are these technologies?
The ecosystem is growing; even though that one company withdrew, there are other companies in the ecosystem coming up with new products as well. The number of users is relatively low, but it's also very hard to estimate. Right now we could find around 10 companies offering ultrasound-related products, and the majority of them are gathered around proximity marketing. There was only one company doing ultrasound cross-device tracking, at least we found only one, and this is mainly due to infrastructure complexity: it's not easy to do all those things. Secondly, I also believe that the whole backlash from the security community disincentivized other companies from joining, because they don't want to tarnish their reputation. Okay, so we have this situation right now: companies are using ultrasounds, what are we going to do? This was our initial idea; this is what we thought first, but we want to fix things. So we tried to come up with certain steps that we need to take to actually fix that thing and make it usable but not dangerous. We listed what's wrong with it, which we did already; we developed some quick fixes that I'm going to present, and medium-term solutions as well; and then we started advocating for long-term changes that are going to make the ecosystem reliable, and we definitely need involvement from the community there. So, the short- and medium-term solutions we developed. The first one is a browser extension. Our browser extension does the following: it's based on the HTML5 Web Audio API; it filters all audio sources, placing a filter between the audio source and the destination on the web page, and filters out ultrasounds. To do that we use a high-shelf filter that attenuates all frequencies above 18 kHz. It works pretty reliably, and we leave all audible frequencies intact; but it's not going to work with obsolete legacy technologies such as Flash.
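Our extension does this with the Web Audio API inside the page. As a rough offline stand-in for the idea, the Python sketch below (mine, not the extension's code) zeroes everything above 18 kHz in the spectrum of a short block while leaving the audible content intact; the two test tones are chosen to fall exactly on FFT bins:

```python
import cmath
import math

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT (len(x) must be a power of two)."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * math.pi * k / n) * odd[k]
        out[k], out[k + n // 2] = even[k] + t, even[k] - t
    return out

def ifft(X):
    n = len(X)
    return [z.conjugate() / n for z in fft([z.conjugate() for z in X])]

FS, N = 48000, 1024
LOW, HIGH = 21, 405   # bins at ~984 Hz (audible) and ~18984 Hz (near-ultrasound)
signal = [math.sin(2 * math.pi * LOW * i / N) + math.sin(2 * math.pi * HIGH * i / N)
          for i in range(N)]

# zero every spectral bin whose frequency lies above the 18 kHz cutoff
spectrum = fft([complex(s) for s in signal])
for k in range(N):
    if min(k, N - k) * FS / N > 18000:
        spectrum[k] = 0j
filtered = [z.real for z in ifft(spectrum)]

# amplitude of each tone after filtering: audible kept, ultrasound gone
check = fft([complex(s) for s in filtered])
low_amp = abs(check[LOW]) * 2 / N
high_amp = abs(check[HIGH]) * 2 / N
print(round(low_amp, 3), round(high_amp, 3))  # 1.0 0.0
```

A real-time implementation would use a proper shelf or low-pass filter rather than block FFTs, which is exactly what the Web Audio BiquadFilterNode gives you for free.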
Okay, we also have an Android permission; I think this is a more medium-term solution. What we did, what Yanick developed, is a patch for the Android permission system. This allows finer-grained control over the audio channel: it separates the permission needed for listening to audible sound from the permission needed for listening to the ultrasound spectrum. So at least we force the applications to specifically declare that they are going to listen for ultrasounds. And of course, on the latest Android versions, users can also disable this permission, so it can act as an opt-out option if the app is not providing one. We also initiated a discussion on the Tor bug tracker. But we are advocating for some long-term solutions too. We really need some standardization here: let's agree on an ultrasound beacon format and decide what security features can be there; I mean, we need to figure out what's technically possible, because it's not clear. And once we have a standard, we can start building APIs. The APIs are a very nice idea because they will work the way the Bluetooth APIs work, meaning that they will provide methods to discover, process, generate and emit ultrasound beacons, guarded by a new API-related permission. This means that we will stop having overprivileged apps; we won't need access to the microphone anymore, which is a huge problem right now, and of course the applications will not be considered spyware anymore. There is also another problem that we found while we were playing with those apps: if you have a framework listening through the microphone, other apps cannot access it. We were trying to open the camera app to record a video, and the camera app was crashing because the framework was locking access to the microphone. Now, we may have some framework developers saying: you know, I'm not going to use your API, I'm going to keep asking for access to the microphone. But we can force them to use this API if we somehow
by default filter out the ultrasonic frequencies from the microphone and provide a way for the user to enable them on a per-application basis from his phone. Okay, so here is what we did: we analyzed multiple ultrasound tracking technologies, we saw what's out there in the real world and reverse engineered such frameworks, we identified quite a few security shortcomings, we introduced our attacks, and we proposed some usable countermeasures. Hopefully we also initiated the discussion about standardizing ultrasound beacons. But there are still things left to do. For the application developers: please explicitly notify the users about what your app is doing; many of them would appreciate knowing. We also need improved transparency in the data collection process, because they were collecting lots of data and very little information was available about what kind of data the frameworks collect. We also think it's a good idea to have an opt-in option, if that's not too much to ask, or at least an opt-out; and standard security practices, as always. Framework providers need to make sure that the developers inform the users, and also make sure that the users consent regularly to listening for beacons. It's not enough if you consent once and then, a month later, the app is still listening for ultrasound beacons; you have to periodically ask the user if it's still okay to do that, ideally every time you're going to listen. And then of course we need to work on standardizing ultrasound beacons, which is going to be a long process, and then on building the specialized API, which hopefully is going to be easier once we have a standard, and on seeing what kinds of authentication mechanisms we can have in this kind of constrained transmission channel. Thank you.

Thank you, Vasilios. If you have any questions, please line up at the four microphones here in the walkways. The first question will be from the front microphone here.

Hello, thank you for your presentation. I have a couple of questions to ask that
are technical and very related. First of all, do you think that blocking the high frequencies at the system level, for either the microphone or the speakers, is something that is technically feasible and will not put very high latency into the processing?

We did that through the permission. You're talking about the smartphone, right?

Yes, basically, because you have to have real-time sound and microphone feedback.

We did that with the permission, and I think it's not too resource-demanding, if that's your question. So it's definitely possible to do that.

And the second part: so there is a new market, maybe, for companies producing microphones and speakers that explicitly block out ultrasounds, right?

Possibly, possibly. I'm not sure if you can do this from the application level. We developed a patch for the Android permission system; our first approach back then was to try to build an app to do that, from userland basically. I'm not sure if you can; I actually doubt that on Android you can filter out ultrasounds from an app. But in a browser we have our extension. It works on Chrome, and you can easily use our code to do the same thing on Firefox.

Thanks.

The next question is from the front right microphone.

Thank you for your talk. I have a question about the attack requirements against the whistleblower using Tor. I'm curious: the attacker has access to the app on the smartphone, and also access to the smartphone's microphone. Wouldn't the attacker then be able to just listen in on the conversations of the whistleblower and thereby identify him?
Yeah, absolutely. This is a major problem. The problem is that they have access to the microphone, so this is very real, and it's not going to be resolved even if we had access only to the ultrasound spectrum. But what we're saying is basically: even if we only had access to the ultrasound spectrum, you're still vulnerable to these attacks, unless you incorporate some crypto mechanisms that prevent these things from happening. Is this your question?

Well, I can still pull off the same attack if I don't use ultrasound, right? Through the audible spectrum.

Yes, you absolutely can. There is one company doing tracking in the audible spectrum. This is much harder to mitigate; we're looking into ways to do it, but there are so many ways to incorporate beacons into the audible spectrum. The thing is that there is not much of an ecosystem in this area right now, that's all. You don't have lots of frameworks out there, not as many as you have for the ultrasounds.

Thank you.

Our next question will be from the internet, via our signal angel.

MEST is asking: have you heard about exploiting parasitic ultrasound emitters, like IC components?

Can you please repeat the question?
Yes, sure. The question is: can you use other components on the mainboard, or maybe the hard disk, to emit ultrasounds, and then broadcast the beacon via those?

So that's a very good question. The answer is: I don't know, possibly, and it's very scary. Hopefully not. I think there could be a way to do it, but maybe the problem is that you cannot do it in a completely inaudible way. You may be able to emit ultrasounds, but you also emit some sort of sound in the audible spectrum, so the user will know that something is going on. But yeah.

The next question from the front left microphone.

Hi, thank you for your talk, and especially thanks for the research. Do you know of any frameworks or SDKs that cache the beacons they find? Because for my use case my phone is mostly offline; I just go online when I have to check something, so I'm not that concerned. But do you know if they cache the beacons and submit them later, something like this?

Of course they do.

I'm not surprised, unfortunately. Thanks.

Next question from the rear right microphone.

What is the data rate you can send in the ultrasound?

Very good question, and it's totally relevant to the cryptographic mechanisms we want to incorporate. From our experiments, in four seconds you can basically send five to six alphabet characters. If you're willing to reduce the range to less than seven meters, you may be able to send more, but it's not very robust in that case. These experiments were done with the kind of naive encoding that most of the companies are using, so if you do the encoding in a very smart way, you can possibly increase that.

And the small second part: what's the energy consumption on the phone? If that is running all the time, wouldn't I detect that?
So it's not good. We saw that it was draining the battery, and actually in the comments, I don't know if I had that comment here, some people were complaining: I tried it and it was draining my battery. So there is an impact, absolutely.

Amazon Echo and Google Home and all the other products, aren't you more worried about that? You know, the always-listening thing from Google and Amazon, and everyone is coming up with something like that, that's always on.

So it's kind of strange, because the users consent, but at the same time they don't completely understand, so there is a gray area there. You can say that the user, okay, you consented to that app starting on your phone and listening in the background, but at the same time the users don't always have the best understanding.

Thank you. Next question from the front left microphone.

First, thank you for the talk. I would be interested in how you selected your real-world applications, and how many you found that already used such a framework.

So what was the first part of the question?

How you selected your real-world applications from the Google Play Store, if you had any.

So we were trying to do a systematic scan of the whole market, but it's not easy, so we were not able to do that. There are resources on the internet; luckily the companies need to advertise their product, so they basically publish press releases saying, you know, this NBA team started using our product. We did some sort of scanning through alternative datasets, but we definitely don't have an exhaustive list of applications. What I can say, though, is that there are applications using such frameworks with, if I remember correctly, up to one million installations. One notable example, okay, I'm not entirely sure, so I won't say it, but up to one million we definitely saw.

Okay, thanks. Do we have more questions from the internet?
Yes, EF is asking: is he aware of, or are you aware, sorry, of any framework available by Google or Apple? In other words, how do we know that it's not, for instance, Siri self-snitching on us?

How do we know that it's not, sorry?

That it's not Siri, or maybe Alexa, snitching on us.

We don't. I think that's a very large discussion, right? So it's the same problem that these companies are having, because if I go back... [a stretch of the recording is unintelligible here; the recoverable fragments say that it is very hard to verify what these closed components are actually listening for through the microphone, and that it is the same problem Amazon has with Alexa]. So they publish what's needed for marketing purposes, like accuracy, sometimes range, very limited technical details. But apart from this, you have to get your hands on the framework somehow and analyze it yourself. So for this kind of overview we did of the ecosystem, we had to do everything by ourselves. There were no resources out there; we were very limited.

Or recording it and playing it back and transposing it yourself, if you know where there's a beacon?

Possibly, I'm not entirely sure.

Another question from our signal angel.
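The idea just raised, recording audio and checking it yourself for beacons, can be illustrated with a few lines of DSP. This is a minimal sketch, not anything from the talk's actual tooling: it assumes the commonly cited near-ultrasonic band of roughly 18-20 kHz, and the `has_ultrasonic_energy` helper, its threshold, and the synthetic test tones are all made up for illustration.

```python
import numpy as np

def has_ultrasonic_energy(samples, fs, band=(18_000, 20_000), ratio=0.1):
    """Crude heuristic: True if the near-ultrasonic band carries a
    noticeable share of the recording's total spectral energy."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)     # bin frequencies
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = spectrum.sum()
    return bool(total > 0 and spectrum[in_band].sum() / total > ratio)

# Example: a quiet 19 kHz "beacon" hidden under an audible 1 kHz tone
fs = 48_000                                  # sample rate with headroom above 20 kHz
t = np.arange(fs) / fs                       # one second of audio
audible = np.sin(2 * np.pi * 1_000 * t)
beacon = 0.5 * np.sin(2 * np.pi * 19_000 * t)

print(has_ultrasonic_energy(audible + beacon, fs))  # True
print(has_ultrasonic_energy(audible, fs))           # False
```

A real detector would work on short sliding windows and need a smarter threshold to cope with noise, but the principle, looking for unexpected energy above the audible range, is the same.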
MEST is again asking: would it be possible, even if you have a low-pass filter, to use, for instance, a Nyquist effect to transmit the beacon via ultrasound but in a regime which is free for the app? So it's basically the question: can I somehow, via aliasing, use an ultrasound signal to make a normal signal out of it?

Possibly, I don't know. I think you are much more creative than I am. So maybe I should add more bullet points to this beacon-trap list here; apparently there are many more ways to do this. Possibly, like the hardware emissions, this one sounds like a good idea too.

Next question from the rear right microphone.

I apologize if you explained this and I didn't understand, but is drowning out the signals, like jamming by just broadcasting white noise in that spectrum, an effective countermeasure? And as a follow-up, if it is, would it terrorize my dog?

So absolutely, it's effective. I mean, it works up to seven meters, but we're not saying it's not fragile. So you can do that, but it's noise pollution. And my dog, I don't think she was happy. I did it for a very limited time; I could see her ears moving, but I don't think she would appreciate it if I had a device at home doing this all the time.

Do we have any more questions from the internet?

Yes, Ulex is asking: to what extent could we use these for our own needs? For example, people in repressive situations, for example activists, could use it to transmit secret encrypted messages. Are there any efforts in this area?

Yes, there are people developing ultrasound modems; I think there is even a TCP stack on top of it. So I would say yes. I'm not entirely sure about the capabilities of this channel in terms of bandwidth, but this is why we're not advocating killing the technology, just making it secure and knowing its limitations. So you can do good stuff with it, and this is what we want to do.

Next question from the rear right microphone.
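The aliasing question above can be illustrated numerically. This is just a textbook demonstration of the folding effect the questioner describes, not a claim about how any tracking framework works: a tone above the Nyquist frequency, once sampled, becomes indistinguishable from a lower-frequency tone, so a naive sample-rate cutoff alone does not make ultrasonic content disappear; it relocates it.

```python
import numpy as np

fs = 16_000                       # sample rate; Nyquist frequency is 8 kHz
n = np.arange(fs)                 # one second worth of sample indices

# A 21 kHz "ultrasonic" tone sampled at 16 kHz...
ultrasonic = np.sin(2 * np.pi * 21_000 * n / fs)
# ...folds down to 21 kHz - 16 kHz = 5 kHz, well inside the audible band.
alias = np.sin(2 * np.pi * 5_000 * n / fs)

# The two sampled sequences are numerically indistinguishable:
print(np.max(np.abs(ultrasonic - alias)))  # ~0, only floating-point noise
```

In practice this is why proper ultrasound filtering happens before or at capture (an anti-aliasing low-pass filter in the analog or high-rate domain), not by merely handing the app a low sample rate.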
Yes, I'm wondering if you could transfer that technique from the ultrasound range to the audible range, for example by using audio watermarks. Then your permission thingy with the ultrasound permissions would be ineffective and you could still track the user. How about this, is it possible? Audio watermarks in the audible spectrum.

It's absolutely possible, and our countermeasures are not effective against this. It's just that, from our research, it's just one company doing this, so just one. I think it's technically a bit more challenging to do that rather than just emitting ultrasound, so they are doing it in a very basic way. So hopefully, if there is a clean way to do it through ultrasounds, they are not going to resort to the audible spectrum. But our countermeasures are not effective against audible watermarks.

Yeah, thanks.

Next question from the front left microphone.

I've heard, I don't think it's very credible, but I've heard that with the subsonic spectrum there were some experiments showing that it can influence our mood, the mood of humans. Is there any relevant information about how ultrasounds could affect us?

So, without being an expert in this particular area, I've read similar articles when I was looking into it. I can tell you it's very annoying, especially if you're listening to it through speakers or headphones. You cannot really hear the sound, but if you're using headphones you can feel the pressure. So I don't know what kind of medical condition you may develop, but you won't be very sane after a while.

Do we have any more questions? Yes, one further question.

Would it be possible to use a jamming solution to get rid of these signals?

Yes, but it's going to result in noise pollution. But if you're being paranoid about it, yes, and I think it's a straightforward thing to do.

Any more questions? One more at the front left microphone.

You said that physical objects will block the ultrasound.
How solid do the physical objects need to be? So, for example, does my pocket block the ultrasound and prevent my phone from hearing the environment, and vice versa?

Okay, that's a good question. I don't think that clothes can actually do that unless they're very thick. Thin walls definitely block it. Thick glass, I would say, reduces the signal-to-noise ratio by a lot, but it could go through it. So you need something quite solid. Metal, I don't think it goes through it.

So, are there any more? Doesn't look like it. Maybe one more, sorry. Goodby Kitty is asking: could you name or compile a list of tracking programs and apps?

So that's a good question. We're trying to make an exhaustive list and to research this in a systematic way. I've already listed two frameworks, three actually. One is the SilverPush one. There is another one developed by Signal360. And then there is the Lisnr one. These are very popular, and each developer incorporates them into their applications in different ways, offering varying levels of transparency to the users. So it's better if you start by knowing what the frameworks are and then try to find the applications using them, because then you know what you're looking for in the code and you can develop queries enabling you to track which applications are using them. What we observed for SilverPush is basically that after the company announced they were moving out of the US, and because of the whole backlash, maybe even before that, companies started to drop the framework. So older versions had the framework, but they are not using it anymore.

I think that's it. Thank you very much.