In the first talk of today we're going to visualize electromagnetic radiation in the range of the Wi-Fi frequencies. We have Friedemann Reinhard from the Technical University of Munich. He has a junior research group, and with one of his students he pursued this project. The student is Philipp Holl, and the stage is yours. Thanks. So good morning, and thanks for having me here at this congress, which I have enjoyed very much so far. In the talk that I would like to give, I would like to look at wireless signals from a very different perspective. When we think of wireless signals, we usually think of data packets, messages, runtimes, protocols and so on. But we can look at them from a very different point of view, from the point of view that it's actually just light. This is kind of a bold statement, but it's actually a fairly exact statement. From a physicist's perspective, a wireless signal is virtually the same as the beam of a laser or the light coming from the sun. It's electromagnetic radiation, which is described by Maxwell's equations and so on. So it's an oscillating electric and magnetic field. The only difference to visible light that we can see with our eyes is that the wavelength of these signals is much longer. So while the visible light that makes up the visible world has a wavelength in the range of a micrometer, the wireless signals that we typically employ in our devices have a wavelength in the range of a few centimeters to a few tens of centimeters. And as a second very important difference, we cannot see them. But from a physicist's perspective, this is very interesting, because it has a somewhat surprising implication. It has the implication that our modern world is actually brightly lit by all the wireless devices that we use every day. So this whole room, as we sit here, is actually glowing in various colors. Only we can't see it, because it's glowing somewhere in the microwave radio domain.
And this also implies that there is kind of a big potential security leak associated with these wireless devices, in the sense that any device that you use, no matter how well you encrypt your data, is going to transmit a full three-dimensional picture of the world to your environment, no matter what you do. And when we realized that, we got intrigued, and we wondered whether there would be a way to actually record that radiation and to reveal that picture, in a way to build a camera that would be sensitive to this kind of radiation and that would enable us to see the world the way we would see it if our eyes were sensitive in this frequency domain. To cut a long story short, we managed to do that, and this is the result that we got. This is a picture of our lab, and the colorful thing that you see is wireless radiation coming from a router at the back of the room, which we visualize with our method. We can actually see it propagate in space, and we can even see that objects that we place in the lab cast a shadow in that beam. In what follows, I would like to walk you through how we did that and what it might be used for, and I also would like to discuss whether it's actually a serious security concern. But before I do that, I would like to start from a somewhat broader overview and tell you a bit more about what has happened in the past few years in this field. So it turns out that hacking wireless signals, in the sense of thinking about other useful data that we can extract from the signals that are omnipresent in our modern world, has become quite a vibrant area of research in the past five years. And we haven't been the only ones who realized that this is actually a very interesting thing to look at. There have been many other labs that have pursued many different directions to look at Wi-Fi radiation from a different point of view, and maybe the most famous result is this.
This is a result from a group at MIT in Boston, Massachusetts, and they actually hack Wi-Fi routers by exploiting the fact that modern routers have an array of antennas, which they use to send beams in different directions and to focus the beams on the devices that they want to interact with. So modern routers can actually send two different signals to cell phones at the right and at the left end of the room. And you can hack that: you can use it to scan a beam in two dimensions and look at the reflections, in a way to build a phased-array radar, if you wish, and this is the kind of picture that you get with that. So you can scan a beam through the room, you can look at the reflections, and you get a fairly detailed picture of the world. You can even vaguely resolve human beings. You can also pursue a simpler approach and only map attenuation. So you take a Wi-Fi router and a Wi-Fi receiver, you map the attenuation of the signal between the two of them, and you can then mount them on a movable platform, as a fancy example, for instance, a drone. So you can fly these two emitter and receiver drones around a building, map the attenuation on every line of sight that you covered during that flight, do tomography on that data, and by doing that you can actually extract three-dimensional views of what has been in between these drones. So there has been a lot of activity in that field, but still this previous work did not yet fully answer the question that we got interested in: what would the world look like with Wi-Fi eyes? And if you look at that question, the state of the art had pretty much been something like the compound eye of an insect.
There have been many ways to tamper with wireless radiation in the sense that we look at different directions, either by directing a beam into some direction or by looking at attenuation in a certain direction, and do that for multiple angles to get a coarse picture of the world. And so we got interested in whether we could actually get a full three-dimensional view of the full wavefront and all the light propagating in space, pretty much in the same way as we can do it with visible light. So we thought about that, and it turned out that you can do it, and you can do it in a surprisingly simple way. There is a technique that essentially solves this problem, and it's holography. Holography, holograms, is something that you probably all have seen: these amazing pictures that appear three-dimensional, that you can tilt to actually look at objects from different angles. They do not look like a photograph; they look much more like a window into a virtual world that has been frozen by taking this hologram. And you can make them in a fairly simple way. You don't even need a lens. All you need to do to record such a hologram is illuminate your object with what we call coherent light, I'll get to that in a second, and record it on a phase-sensitive camera. So these are very technical terms; let me try to explain that in a more visual way. In a way, a hologram is a photograph 2.0. It captures more information than a two-dimensional photograph, which we usually take, where information is restricted in the sense that a usual photograph only captures the intensity of light. So it can record, in a two-dimensional plane, where there was much light and where there was very little light, and the resulting photo will be very bright and very dark, but it's going to be restricted to a two-dimensional plane. It will appear as a flat piece of paper if you look at it. A hologram can do more.
So a hologram is a phase-coherent recording, and in a visual way that means that it not only records the brightness of the light, it also records the direction of the light beams hitting this hologram during exposure. So in a hologram you in a way freeze all the light rays that enter this specific plane, and if you develop that hologram, you in a way revive all these light beams again. So when we look at the hologram, we really look at the very same light rays, at the very same light field, that came from the object during exposure, and that in particular implies that we can look at it from various angles and we can have a three-dimensional view of it. So how is that done? This is done by a phase-coherent recording, and this technical term essentially captures this idea of recording direction along with intensity. To see how that works, I have to get back to the point that light actually is an electromagnetic wave, and waves, pretty much like waves on a lake, have valleys and hills: areas where the electric field is very strong and areas where the electric field is very weak, and these valleys and peaks propagate in space. So if you throw a stone into a lake and you watch the resulting waves, you will notice that these waves always move perpendicular to both the peaks and the valleys. So the direction of the wave is in a way encoded in the difference between these peaks and valleys. If you can take a photograph where you also register whether the peak came first or the valley came first, you also register direction, and that's what it takes. So it takes a light source where these peaks and valleys are very well defined, which is what we call a coherent light source, and it takes a phase-sensitive camera, a camera that can record whether the peaks or the valleys came first. Now it turns out that this is a fairly difficult thing to do in the optical domain.
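This idea, that direction is stored in the phase, can be made concrete with a tiny sketch. The numbers here are purely illustrative, not values from the experiment: a plane wave arriving at an angle leaves a linear phase ramp across the recording plane, and from that ramp the arrival angle can be read back out, which an intensity-only photograph cannot do.

```python
import numpy as np

c, f = 3e8, 2.4e9                    # illustrative Wi-Fi-band values
lam = c / f                           # wavelength, about 12.5 cm

# A plane wave arriving at angle theta leaves a phase ramp across the
# recording plane: phi(x) = (2*pi/lam) * sin(theta) * x.
theta_true = np.radians(25.0)         # made-up arrival angle
x = np.linspace(0.0, 1.0, 101)        # 1 m of recording plane
phase = 2 * np.pi / lam * np.sin(theta_true) * x

# A phase-sensitive recording lets us recover the direction from the
# phase gradient d(phi)/dx.
slope = np.polyfit(x, phase, 1)[0]
theta_rec = np.arcsin(slope * lam / (2 * np.pi))
```

In a real recording the phase would of course be noisy and wrapped, but the principle is the same: the spatial phase gradient is the direction of the incoming wave.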
People managed to do it, but it was such a big breakthrough that it actually got awarded a Nobel Prize 40 years ago. Fortunately, it's actually much simpler in the radio frequency domain, because the frequency here is much lower than for visible light, and so we can actually record every peak and every valley simply by registering the wave with a good oscilloscope. And that in particular means that it should be possible to do holography of this wireless radiation: to come up with an experimental setup where we take a picture in some two-dimensional plane of space, register the brightness and phase of the microwave light at every point in that plane, and then later on use that to reconstruct a virtual view of the world. And this is pretty much how we did it. So this is our experimental setup. We explicitly wanted to capture light from arbitrary devices, so the light sources that we used were always commercial off-the-shelf Wi-Fi routers that we asked to download a big video from YouTube, to generate a lot of traffic and data that we could image, but we make no assumption whatsoever about the signal that they actually emit. It could be anything; it could even be encrypted. We sometimes place some objects in the beam to get a more interesting picture, and we finally record this beam of microwave light in a scanning-aperture approach. Unfortunately it's very expensive to build microwave cameras, because you would need a huge array of antennas, so in a first step it's easier to just take one antenna and scan it across every pixel that you want to image. This is what we did. So we have an antenna that we mounted on a scanning platform, and we reference that signal to the signal of a stationary antenna. This is what it looked like in real life.
So you see it's not very fancy; it's probably the cheapest experiment I've ever been involved in. We used plywood and Fischertechnik and tools like that to do it, essentially to solve this problem of scanning the antenna through our lab, but at some point we made it work quite well, and so we could record these pictures. The signal that we get from this setup is a wave, a wave that we can record with an oscilloscope and that we actually record twice: once coming from the scanning antenna, which is what is going to make up the image data later on, and once from this reference antenna. We make no assumption whatsoever about the signal that is transmitted, so we cannot rely on knowing what kind of bits and bytes are transmitted here. But it turns out that if you Fourier transform these signals and you divide them by each other, you can in a way normalize the signal to the reference and get a virtual second wave where the bit pattern actually cancels, and you only record the phase delay and the attenuation of the wave as it travels to the scanning antenna. So for every pixel we record a set of data like on the upper left, we do some processing to remove the bit pattern and only keep the coherent wave, and then we end up with something like the lower part of this plot. We do that for every pixel, and we get the brightness, the amplitude, and the phase, the direction of the beam. We actually get that for a wide range of frequencies, because Wi-Fi is a multi-frequency scheme, so you can have different channels, and we end up with a data set like below. So this is a hologram. It's unfortunately not something that you would easily be able to make sense of just by looking at it. Since we don't have eyes for this radiation, we cannot see the radiation by itself, and even if we record it like that, the information is not very meaningful. But fortunately there exists a number of reconstruction algorithms.
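The normalization step described above can be sketched in a few lines of NumPy. This is a toy model, not the actual analysis code: the bit pattern, the attenuation, and the phase delay are all made-up values, but it shows how dividing the spectrum of the scanning antenna by the spectrum of the reference antenna cancels the unknown modulation, leaving only the complex transfer function of the path.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up "Wi-Fi-like" signal: an unknown random bit pattern.
n = 4096
bits = rng.choice([-1.0, 1.0], size=n)
spectrum = np.fft.fft(bits)

# Made-up propagation path to the scanning antenna: some attenuation
# and some phase delay, both unknown in the real experiment.
attenuation = 0.3
phase_delay = 0.8  # radians

ref_spectrum = spectrum                                   # reference antenna
scan_spectrum = attenuation * np.exp(1j * phase_delay) * spectrum

# Dividing the two spectra cancels the bit pattern; what is left is the
# complex transfer function of the path (amplitude and phase).
mask = np.abs(ref_spectrum) > 1e-6                        # skip empty bins
transfer = scan_spectrum[mask] / ref_spectrum[mask]

amp = np.abs(transfer).mean()
phase = np.angle(transfer).mean()
```

In the real setup the transfer function is additionally frequency dependent, and one such complex value per frequency is recorded at every pixel of the scan.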
This problem has been solved in coherent optics, where people can actually render three-dimensional views of what a hologram would look like simply from knowing the pattern it produces on a photographic plate. So by scanning a photographic plate and feeding that into the right algorithm, you can get a three-dimensional view of the hologram without ever looking at it in real light. And this you can also do with microwave radiation. So from that hologram we can reconstruct three-dimensional views. We did that, and this is the test setup that in the end worked best. We have a commercial Wi-Fi router sitting at the back of the lab; it's labeled "emitter" in the upper picture. We place some absorbing objects in the beam, which we made from aluminum foil, and then we record a hologram in a plane that you cannot see here, which is essentially the front plane of that image. We feed the data to a reconstruction algorithm, and in that way we can now render a three-dimensional view as it would appear if we had eyes for Wi-Fi radiation and looked through that plane of the above picture. And since it's a three-dimensional approach, we can actually not only reconstruct this view, we can focus on different planes. This is what we do here. Obviously a first interesting thing to do is to focus back into the emitter plane. If you focus into the plane of the Wi-Fi router, you actually nicely see a bright spot at one point in the image, which is the image of this glowing light bulb in the microwave domain. Interestingly, it turns out that the picture is not very pretty if you just do it like that, because if you only do it with one frequency, with one wavelength of light, you become very sensitive to something that is known as speckle. This microwave light bounces back and forth between all the walls in the lab, and these waves interfere with each other in a very erratic way, so we end up with something that looks like clouds.
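One standard reconstruction method from coherent optics is angular-spectrum propagation, and the refocusing described above can be sketched with it. The following is a minimal toy version, not the actual reconstruction code: it fabricates the hologram of a single point emitter (a spherical wave recorded on a 64 by 64 grid, all parameters invented) and then numerically back-propagates the complex field, which refocuses the emitter into a bright spot.

```python
import numpy as np

c, f = 3e8, 2.4e9
lam = c / f                      # wavelength, about 12.5 cm

# Hypothetical recording plane: 64 x 64 pixels at 5 cm spacing, with a
# point emitter sitting z0 = 2 m behind the plane.
n, dx, z0 = 64, 0.05, 2.0
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
r = np.sqrt(X**2 + Y**2 + z0**2)
hologram = np.exp(2j * np.pi * r / lam) / r   # spherical wave: amp + phase

def propagate(field, z):
    """Angular-spectrum propagation of a complex field by distance z."""
    fx = np.fft.fftfreq(n, dx)
    FX, FY = np.meshgrid(fx, fx)
    kz2 = (1 / lam) ** 2 - FX**2 - FY**2
    kz = np.sqrt(np.maximum(kz2, 0.0))        # drop evanescent components
    H = np.exp(2j * np.pi * kz * z)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Back-propagating by z0 refocuses the emitter into a bright central spot.
focused = np.abs(propagate(hologram, -z0))
```

Focusing into any other plane is the same call with a different `z`, which is exactly the "focus on different planes" knob used in the reconstructions shown in the talk.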
Interestingly, we can enhance that quite a bit if we repeat the experiment for different frequencies, for different frequency bands within a Wi-Fi channel, and superimpose the images. We then get a much clearer picture, where it is very clear that the router is the brightest spot. This is actually something that is very difficult to do with real light; people struggle to make that work. For microwave light it's essentially straightforward. And we can then go on and focus in different planes, for instance in the plane of this absorbing object in the beam. If we do that, we nicely see, as we move from the emitter to the object, how the beam expands, and that once we get to the object plane there is a big shadow appearing in the light wave. Nicely, you can actually see that if you focus slightly below or slightly above that plane, the image blurs. So it really is a three-dimensional image; it's like focusing with a photo camera, you can blur it if you defocus into the wrong plane. So that's essentially the data that we took.
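The speckle-suppression effect of averaging over several frequency bands can be illustrated with a toy statistic. The sketch below does not simulate the lab; it just generates fully developed speckle patterns from the standard complex circular-Gaussian model and shows that averaging N independent intensity patterns reduces the speckle contrast from about 1 to about 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(1)

def speckle_intensity(n: int) -> np.ndarray:
    """One fully developed speckle pattern: intensity of a complex
    circular-Gaussian field (the standard statistical speckle model)."""
    field = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.abs(field) ** 2

def contrast(intensity: np.ndarray) -> float:
    """Speckle contrast: standard deviation over mean of the intensity."""
    return intensity.std() / intensity.mean()

n, n_bands = 256, 16
single = speckle_intensity(n)
averaged = np.mean([speckle_intensity(n) for _ in range(n_bands)], axis=0)

# A single band has contrast ~1 (pure "clouds"); averaging 16 independent
# bands reduces the contrast to ~1/4, giving a much cleaner image.
```

This assumes the bands are far enough apart that their speckle patterns are statistically independent, which is the reason using many frequency bands within a channel cleans up the picture.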
There is potential for a lot more data processing. It turns out that there are numerous very powerful imaging schemes that work with coherent visible light. In coherent visible light you can take just plain, dumb photographs, as we do most of the time, but you can also play clever tricks: you can enhance, for instance, absorbing objects in the light, which is called dark-field microscopy and is usually done in microscopy; you can image the polarization of light; or you can enhance only weakly refractive objects. And all these schemes we could actually emulate by just numerically post-processing the hologram data. So there are a lot of things to be looked at, like what does the world look like if we enhance absorbing objects, or do chairs have an index of refraction that is different from air, and questions like that. So, to summarize, we managed to record pictures, and we actually managed to establish kind of an analogy between wireless communication and coherent optics, so I made up a little dictionary of how the terms translate into each other. Wireless signals, which we usually think of as packets of data, are actually light. They can have different colors; this is a degree of freedom that we haven't used yet, but we could do it, for instance, by splitting signals according to the SSID that they transmit. So, once we look at the bit pattern, signals with different bit patterns in a way correspond to different colors, so if we had multiple emitters in a picture we could actually tell them apart and would get a multi-colored picture where every router in our room illuminates the room with a different color. This would be a very interesting thing to do in the future. It turns out that the pictures can be a lot better if you do white-light holography, if you use more than one frequency band; this is something that has an analogy in the wireless domain, where increasing the bandwidth usually improves the performance of radar. If you want to learn something about the environment
rather than the signal itself, in the classical wireless domain you usually do time-domain ranging, but in optics it's just imaging: you form an image and you look at it, and what used to be the runtime delay of a signal is now the phase and direction of a light field. So that was the work that we did. We were very happy that we made it work, and we actually got it accepted in this journal, and that was kind of interesting. It's a fairly prestigious journal in our field, and when you submit a paper you have to fill out a questionnaire on why you believe this paper is worthy of being published there. There are categories like "it's a big advance of an established technique", "it's an unexpected breakthrough", this and that, and we decided to opt for category number four, which is "it is of singular appeal to all physicists". By that argument we actually got it accepted there, and it was met with a lot of interest, not only in science but even more so in the real world of media and of companies. And that's what I would like to look at in the second part of my talk: what can we actually do with that, will it ever be useful? If you think about that question, what could you use it for, there is a very obvious answer, and if you don't have it right now, you can actually read it on Russia Today. They picked up our story, and the headline was that this could be used to map your home. Now, I know some people say this is only fake news, but I can tell you that at least this story is not fake news; it's based on a real thing. So it's this idea: can we use it to spy on other people, or, in a more optimistic fashion, can we use it for security enforcement and security applications? This is something that many media jumped on, so many of the headlines that we got were related to that: can my neighbors see me in my bathroom when I have my cell phone lying there, and things like that. So it's a question to which we somehow need to respond: would that be feasible?
And at least in principle it looks like it. If you have a Wi-Fi router in your room and you walk around, then your body will scatter some reflections, and you can pick up these reflections, run them through our algorithm, and get an image of the world. I'm somewhat skeptical whether this is a really serious use case, though. First of all, remember what it looked like: it's a huge device, and it's always going to remain a huge device, because you need to have information on a very large area; this is what the whole scheme is based on. So even if you mounted it on a drone, you still would need to fly around many points around a building, which would probably not go unnoticed. And maybe even more importantly, it's something that is actually already being done right now. For many of these applications you don't need imaging; imaging is kind of a bonus, but it's a complicated and expensive bonus, and you can learn a lot about the world just by looking at the signal at one point in space. And this is something that is going commercial. There are companies that sell systems where you essentially plug something like a second router into your room. This router is not emitting signals that are used for communication; it's actually looking at signals that are scattered from the environment and analyzing them in a very detailed way, and by doing that you can get a fairly good idea of whether there are people moving in the room, how many of them there are, whether they are in the bathroom or in the kitchen or wherever else. So it's something where our scheme probably is too complicated, which is good news, because it's not a security concern. The bad news, if you wish, is that Wi-Fi is a very leaky thing, and this general idea, that we can use the stray radiation to spy on other people, is a valid security concern. Okay, if we wanted to do it, if we wanted to sell that as a security application, I think we would run into a second very important problem, and the important
problem here is that if you have an application where people actually are willing to buy expensive equipment to get a lot of information, then it's going to be some kind of dedicated device, and then it's not so much more expensive to combine that with a dedicated, tailor-made emitter that is emitting a specific signal. And once you can do that, you can play a trick which is called ultra-wideband, and this is a very powerful trick that is spoiling the game for many of these applications for us, so I'm going to explain it in a little more detail. The idea is that if you want to build something like an X-ray for rooms, you emit some signal from a dedicated emitter, you detect the reflection in a dedicated detector, you measure the runtime delay, and from that you infer whether there are people moving around and at which distance they are. If you do that, the resolution that you get is related to the bit rate of your signal; it's essentially the inverse of the bandwidth. So if these data packets are very broad, the resolution will drop, and from the point of view of radar, Wi-Fi actually is not a very good signal, because it has a very narrow bandwidth. For 2.4 GHz Wi-Fi it's actually only 20 MHz, and this corresponds to a resolution of about 10 meters. If you build a dedicated emitter, you can obviously drop that restriction; you can build an emitter with a much higher bit rate, and this is what people do. You can push that up to a bandwidth of gigahertz to tens of gigahertz, an emitter that is sending some garbage signal that is spread across all the frequencies up to 10 GHz, and by doing that you actually get down to centimeter resolution in radar. So if you invest in a dedicated device, this is a very powerful trick to play, and it is actually being done. I did some research on the internet, and there are devices for security forces that look a bit like the devices that you use to search for pipes before drilling into a wall: you place them on a
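The bandwidth argument can be written down directly: the range resolution of a radar is delta_r = c / (2 B), where the factor of two accounts for the round trip of the echo. A quick sketch with the figures from the talk:

```python
C = 3e8  # speed of light in m/s

def range_resolution(bandwidth_hz: float) -> float:
    """Radar range resolution: delta_r = c / (2 * B).
    The factor 2 accounts for the round trip of the echo."""
    return C / (2.0 * bandwidth_hz)

# A 20 MHz Wi-Fi channel gives 7.5 m, the roughly 10 m quoted above;
# an ultra-wideband sweep of 10 GHz gets down to 1.5 cm.
wifi = range_resolution(20e6)
uwb = range_resolution(10e9)
```

This is why widening the band by a factor of 500 turns a useless 10-meter blur into centimeter-scale radar.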
wall, it takes a few seconds, and it's going to tell you whether there are people in that room that you can't see and at which distance they are. So it gives you an image on the display which looks pretty much like this. To sum that up: security applications are an interesting thing to think about, but it's probably not a really realistic use case for this scheme. Let's look at something else: let's look at civil engineering. I already mentioned these devices that you use before making a drill, and these devices, well, they work, but I did it myself, and it's not a very reliable thing to work with. So there is room for improvement, and it's interesting to ask whether we could fill a gap in that market. The way that would work is that you rely on the radiation that is emitted from wireless devices in other rooms, across the wall that you want to drill, and that will illuminate this wall with microwave light, so that pipes and power lines in the wall would cast a shadow. Then you would record a hologram across the wall that you want to drill, reconstruct that image, and see every single pipe. Still, we have this challenge of technical complexity, but I think in this case it would be acceptable, because we could actually scale up this system from a single-antenna device that is scanning the plane to a one-dimensional array of antennas, like a magic wand that you would wave across the wall and that would give you a picture on your smartphone of what's inside. So this is a fairly realistic thing. Unfortunately, here too you have to compete with ultra-wideband. If you build a complicated magic wand, it's not so much more expensive to include an ultra-wideband emitter and do very precise radar, and there are companies doing that. You may have read about the Walabot: it's a kind of radar that you can plug on the back of your smartphone, and you can X-ray your wall, and it's going to do some signal processing and actually give you a picture of where the pipe is. So there would be
strong competition. Still, our scheme would have some advantages: we could use radiation that is actually coming from things behind the wall, which could be an advantage, and so in principle that could be something to think about. So it could work. There's one thing that we initially thought would be a good idea, and that is tracking emitters inside buildings. This is actually a very important thing that is becoming more and more important as we move towards the internet of things: to actually tell where in a building, in three-dimensional space, some RF tag is, and our scheme in principle could do that. We could think of installing a huge two-dimensional array of many, many antennas in the ceiling of a building, doing holography, and then actually seeing the position of each and every smartphone and Wi-Fi router moving around in real time. We didn't do the experiment, but we did a numerical simulation to see whether this could work, and this is what you see here. We built a 3D model of a virtual storage hall with steel bars in the floors and with some steel shelves on one of the floors, and the image that we get is encouraging. With that scheme we probably would be able to track down emitters with centimeter-scale precision at video rates, and what is more, we probably would even be able to get coarse information about the objects in that building. So this is a movie of the simulation data; it's essentially the hologram reconstruction as we focus onto successively lower planes, moving all the way from the top of the building to the ground floor. And this is what we get: we nicely see these shelves, and we nicely see these bars casting shadows in the beam, before we converge on the emitter, which appears as a spot with centimeter-scale resolution. So this could work, but obviously it would be a very expensive thing to do, because you would need a large array of antennas. It's probably not something that you could do right away, but when we talked to companies it turned out that there
is a solution that is somewhere in between, where it could get really interesting, and it's in a way a reduced implementation of that scheme. Rather than going to a full two-dimensional array of antennas, we could go to a one-dimensional array, like this magic wand, which still would speed up acquisition by a factor of 100 to 1000 compared to our single-antenna proof-of-principle implementation. So you could actually take pictures of large-scale, 10-meter-scale structures, probably on a minute-to-hour scale, if you scanned this device around the structure by, say, a drone or a car. And once you have that data, you could actually look into the building with Wi-Fi eyes and understand how radiation propagates in there, and this could actually be relevant for this whole field of indoor tracking. Maybe to be a bit more specific on that: this is actually a very important unsolved challenge at the moment. Many companies are getting more and more interested in locating RF tags with centimeter-scale precision, to track inventories or devices, and it's a market that is actually predicted to grow to a billion-dollar size in a few years. However, at the press...
So one very straightforward application where we could enter here is that we could do site surveys for existing solutions. If you bought some system for indoor tracking and you found that for some reason it didn't work in your factory hall, then you could come with this one-dimensional array of antennas, get the full picture, and you could see that there is actually a nasty reflection on this metallic wall, and if you replace that by some concrete coating it will make things much better, this kind of thing. So this could actually be interesting, to get the full picture only for that reason, but it could also be interesting as an R&D tool, to understand better what these signals look like and how they propagate, in order to make these schemes better. And there's work to do, because at the moment these schemes are very successful but still not fully satisfactory. People have tried many things. You can move around with a camera and scan barcodes. You can use RFIDs, but then you can only read them from a very close distance, so it's not very convenient for large-scale factories. You can use all kinds of beacons, you can tag labels with ultrasound, Bluetooth, Wi-Fi, whatever, but with that you run into this bandwidth issue and you only get meter-scale resolution. Here again you can play the ultra-wideband trick: you can have dedicated emitters with a very wide frequency band, and that will give you centimeter-scale resolution, and this already is incredibly successful today. But it comes at a large price, and it's a physical price: these chips are expensive, so the tags will cost you something like 10 euros or more, and they are very hungry in terms of power, so you need to ship them with a battery, and you need to replace that battery every few months or years, and this is not very convenient. So there is a quest out there to make these schemes work with passive tags, where, rather than having an active emitter on a label, we just have some absorbing little thing and we illuminate it by
some other source, and we image the shadow that it casts in the beam. At the moment this is not doable. You could in principle solve that problem in many ways: you could buy more receivers, you could do more signal processing, whatever, but it's not clear which route would be the most successful. And so rather than trying them by trial and error, one by one, one interesting idea could be to actually record this entire wavefront, look at the picture, and understand how radiation propagates in typical buildings and what part of the signal you actually need to see this shadow best. And here, I believe, our scheme could come in: we could record this full wavefront, and with our algorithms we could simulate any kind of reduced scheme, like a scheme with only a few antennas, or one looking at only part of the frequency band. So, summing that up: it's an interesting technique. I don't think it's a serious security concern, but reduced implementations actually might be one. Civil engineering might be an application where we could move in. Indoor tracking probably is too difficult, but R&D for simpler indoor tracking schemes could be a very viable way to go. With that I'm at the end. I would like to acknowledge the person who did all the work, Philipp Holl. He actually was brave enough to start this project as a bachelor thesis, at a time when everyone else was frowning upon the idea, and he was very successful in making all this work. So that's the end of my talk. I would like to draw your attention to the fact that you can look at wireless signals from a very different perspective: it's just light, and you can hack them by just using this stray radiation that you get, and if you do it right, you can actually take real pictures. This could be useful for maybe tracking, maybe civil engineering, probably most of all as an R&D tool to develop future wireless applications. So thank you very much. We have 15 minutes for Q&A. Hello, thank you for your talk, it was really interesting. I was wondering if you could build
Wi-Fi lenses?

That's a good question. It would probably be difficult if it had to be a physical lens: you not only would need to find the right material, which you probably could, you also would need to make it large, because the wavefronts are large. But again, somewhat paradoxically, there are actually much simpler schemes to do that. If you take a train somewhere in the country, you may see that nowadays these satellite receivers are no longer like a parabolic mirror; sometimes they tend to be flat, just a flat plate, and this is in a way a virtual lens for radiation in that frequency band. It's a phased array, where you have many receivers and you delay the phase electronically, which is essentially what a lens does in the optical domain, and by doing that you can simulate any kind of lens, including a parabolic reflector, looking at the antenna. So for our scheme it's actually even better: we don't need a lens, because we can simulate any kind of lens in the computer. And this is something that we actually thought about doing, building different kinds of virtual objective lenses to look at different parts of the radiation, but I think electronic solutions will probably be the easier way to go than a physical lens.

We have one question from the internet: What is your prognosis? When can we expect a practical camera for different spectral ranges like TV or radio?

So technically I don't see an obstacle to transferring this technique to other frequency domains, like radio. I don't think we will see it in the near future as a commercial device, because applications are so limited, but maybe some lab will pick up the idea and do it, so it might be something that we read about in a few years.

Microphone 1: Thank you for the talk. Do you know how much of this technology is present in a typical router? So, could a hacked router gain some physical information about its surroundings?
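The electronically delayed "virtual lens" described a moment ago can be sketched numerically. This is a hypothetical illustration, not code from the talk: the frequency, element count, and focal distance are assumed round numbers, chosen only to show how per-element phase delays make all contributions add up coherently at a focal point, exactly as a lens would.

```python
import numpy as np

# Assumed illustrative values: a small flat phased array at a
# satellite-TV-like frequency, focused 1 m in front of its centre.
c = 3e8                      # speed of light, m/s
f = 12e9                     # assumed carrier frequency, Hz
wavelength = c / f
k = 2 * np.pi / wavelength   # wavenumber

n = 8                        # 8x8 elements
spacing = wavelength / 2     # half-wavelength element spacing
coords = (np.arange(n) - (n - 1) / 2) * spacing
x, y = np.meshgrid(coords, coords)

# Distance from every element to the focal point (0, 0, 1 m).
dist = np.sqrt(x**2 + y**2 + 1.0**2)

# Electronic "lens": delay each element's phase by exactly the extra
# path length it has relative to the closest element.
applied_phase = k * (dist - dist.min())

# With the delays, every element's signal arrives at the focus with
# the same total phase, so all 64 contributions add coherently.
field_at_focus = np.sum(np.exp(1j * (applied_phase - k * dist)))

# Without the delays, the phases disagree and the sum is weaker.
field_no_lens = np.sum(np.exp(-1j * k * dist))
print(np.abs(field_at_focus), np.abs(field_no_lens))
```

The coherent sum has magnitude exactly 64 (one per element); steering or refocusing the "lens" is just a different set of `applied_phase` values, which is why this can all be done in software.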
This is an interesting question. Obviously you couldn't use it for our scheme, because you would need to scan it. You could probably, to some extent, use it for these other schemes of the other groups, by using this MIMO capability of scanning the beam. I don't know how technically easy it is to do that. You probably could do it in some way, but the other groups actually use development kits for routers, so it doesn't seem to be very straightforward to do. At least, if you think about coming up with some project to do it, it's probably easier to buy a development kit for a router, which has somewhat more dedicated hardware, than the router itself.

Microphone 4: Yes, the technology that you showed looks a bit like a bistatic radar system, but the power levels are much lower here. What about the ratio of power between a commercial radar system and plain Wi-Fi?

Oh, I probably should be frank in saying that I don't know. My layman's take on it is that at least military radar employs huge powers, like kilowatts maybe, and we are definitely way below that, because we work with commercial devices that are in the watt range at most. So that would be a factor of 1000, but there might be advanced radars which actually use much less power.

Microphone 3: So how long did it take to take one of the hologram pictures, and how large is the picture?
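The factor-of-1000 power gap estimated in the answer above can be put in the decibel terms radio engineers usually use. The wattages here are the talk's rough guesses, not measured figures:

```python
import math

# Assumed round numbers from the answer above: a high-power radar
# transmitter versus a consumer Wi-Fi device.
radar_watts = 1000.0   # "kilowatts maybe"
wifi_watts = 1.0       # "the watt range at most"

ratio = radar_watts / wifi_watts
ratio_db = 10 * math.log10(ratio)  # three orders of magnitude = 30 dB
print(f"radar is ~{ratio:.0f}x stronger ({ratio_db:.0f} dB)")
```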
So it took us a night. We typically programmed the device when we left the lab, and when we came in the morning, if we were lucky, the picture was there. It took something like seconds of acquisition time per pixel, and the pictures were something like a few hundred by a few hundred pixels in size, so that took a night. This of course is a big drawback as it stands right now, but as I said, I think it would be very easy to get better in that respect. Even if we only scale it in one dimension, so you take an array of antennas in one dimension, like a magic wand with many, many antennas, and you scan it only in the second and potentially the third, then you could speed up acquisition by a factor of 100 to 1000, so it would be in something like the minute-to-hour range to get a full picture of a building.

Microphone 1: So in that case, wouldn't beamforming help? Because it works both ways, and this way you could also scan the signal you illuminate the space with. Wouldn't that decrease the critical dimensions you need on the acquisition antenna array a bit?

Yeah, this is a very good point, and we actually thought about that. You could come up with kind of a hybrid scheme, where you do scanning to cover some space, but you take more than one antenna. Say you take a set of antennas that is like the inverse of the MIMO array in a commercial router, so that it could record where the light is coming from along a whole two-dimensional set of directions, and then you might be able to accelerate that scheme quite a bit. And I think that if we wanted to go for passive localization and all these things, the solution in the end would probably look like that: having a set of some enhanced antennas with a good direction sensitivity, and having a discrete set of them at strategically placed points in a room. That might be sufficient to get all the information you need to at least see the emitters.

Microphone 1: Having a smaller array with a lot of antennas, like 8x8. But this would basically end up being a phased-
array radar, but on steroids.

Right, right. So this is a very realistic thing to do, I believe. Still, if you ran holographic processing on that data, it would be a very coarse-resolution image, because it's only 8x8 pixels, but if you combine that with scanning it to only some strategic points, then that might be sufficient to actually get good pictures, even without scanning a whole two-dimensional plane.

We have another question from the internet: Aside from the paper, did you publish source code or hardware schematics to make it easier for others to reproduce your results?

We didn't publish it, but we wrote a very extensive supplementary material where we describe every detail of how we actually did the acquisition, including references to all the hardware components and plans. But we already got requests from hackers and students that wanted to recreate it, and of course, if people just write us an email, we would be very willing to give away all this software and information.

Microphone 3: Hello, you had the simulation of the antenna array in the ceiling of a building, but you said it would be too expensive to actually refurbish a building that way. What happens if you construct a new building and plan to build this in from the get-go? Would that be more reasonable?
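Emulating such a coarse 8x8 array from a fully scanned wavefront, as described earlier for simulating reduced schemes, amounts to subsampling the recorded data. A toy sketch on synthetic data (the array sizes and the random wavefront are made up purely for illustration):

```python
import numpy as np

# Synthetic stand-in for a densely scanned complex wavefront:
# one complex sample (amplitude and phase) per antenna position.
rng = np.random.default_rng(42)
dense = rng.standard_normal((256, 256)) + 1j * rng.standard_normal((256, 256))

# A coarse 8x8 antenna array would see only every 32nd position
# along each axis; holographic processing then runs on this subset.
step = dense.shape[0] // 8
coarse = dense[::step, ::step]
print(coarse.shape)  # (8, 8)
```

The same slicing idea extends to simulating a reduced frequency band: keep only a subset of the recorded frequency channels and rerun the reconstruction.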
Yeah, so it's probably not so much refurbishing the building that would be so expensive, it's the physical array of antennas itself. Unfortunately, since the number of antennas grows with the square of the pixel count per side, it quickly gets very expensive. We did some estimates of it: even if each of these antennas cost you only 10 cents, and you wanted to have a thousand-by-thousand-pixel array, then you would end up with 100,000 euros of investment only for the electronics to acquire the signals. So it would be a very high barrier, I think, to practical implementation.

Microphone 4: Yeah, given that it's relatively easy to put a wireless device into continuous-wave mode, do you think this would improve the signal quality, and is it something you've tried?

So, setting the router to some continuous mode rather than downloading a video: we tried both, and for our scheme it doesn't matter, because we throw away all this bit-pattern information anyway. We started with that, then we bought a model where we couldn't do it for some reason, and then we switched to downloading a big file, but for us it actually doesn't matter. For future experiments, one very, very powerful thing would obviously be to move to dedicated emitters: if we could have an ultra-wideband emitter, these pictures would look way better right from the start, and this follow-up project would definitely be a thing to do.

Hi, sorry, one quick follow-up question: couldn't you just maybe print antennas into a cable and then just lay cables on top of a big warehouse or something like this?
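The antenna-array cost estimate quoted above works out as follows; the 10-cent element price is the optimistic figure assumed in the talk, and the arithmetic is done in integer cents to avoid floating-point noise:

```python
# Cost estimate for the receiver electronics of a full 2D antenna array,
# using the talk's assumed per-element price.
cost_per_antenna_cents = 10          # optimistic 10 cents per element
n_antennas = 1000 * 1000             # a 1000 x 1000 pixel array
total_eur = cost_per_antenna_cents * n_antennas // 100
print(total_eur)  # 100000 euros, electronics alone
```

This quadratic scaling is why one-dimensional "wand" arrays combined with mechanical scanning look more practical: 1000 elements instead of a million.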
Yeah, you might be able to find some way to make that work, but as soon as you need to at least switch the antennas, to select a particular one, you end up with a switch, and that will be a semiconductor device, and that will be expensive. I'm not an electrical engineer; I don't see a straightforward way to do it in a completely passive scheme where you only have antennas and nothing else. It might be doable, and then that array could be a viable thing to do, but as soon as you need even a little switch at every point, it's probably too expensive.

Microphone 3: Yes, do you think it's feasible to turn this around and use a transmitter array? Like, I build a wand with a hundred ESPs which rapidly send a frame in succession, and receive it with one receiver, to get the hologram from there.

So, to invert it, to have something like a holographic array that could create any wireless waveform that you want? A transmitter array?

Like a transmitter array, okay.

Well, yes, I think that in principle should be doable, since, as you say, it's just the inverse of our approach. If you could switch them one by one, I think that should work, but of course it would be way more expensive, because emitters are more expensive than receivers. I don't know; at least in principle, I think it should be feasible.

An ESP you get for three euros; put a hundred of them on the wand and you have an array with a hundred pixels.

Right, yeah, that's true. Maybe in a way this is what our colleagues did in the other schemes, where they actually had arrays of emitters and they could phase them in a way that they could scan the beam; they looked at reflections rather than receiving the signal at one point. But this is something that you probably could do the other way around, too.

Okay. Would you have to modify your setup for different Wi-Fi standards, like IEEE 802.11ac or n or any of the others?

No, we wouldn't, and we actually did these experiments both with 2.4 GHz Wi-Fi and 5 GHz
Wi-Fi. The 5 GHz Wi-Fi looks a bit nicer in the resulting images, because the wavelength is shorter, so you can see more details, but the scheme doesn't make any assumption about the standard employed, and we can transfer it to any standard we like. And this is actually an interesting prospect for the future: both the bandwidth and the frequency of these wireless communication systems are probably going to be increased in future implementations. People talk about even moving up to 60 GHz or so, and with that you would be able to see pictures of nearly optical quality, with millimeter-scale resolution, and, if the bandwidth grows along with that, with much fewer speckles than we had. So then the whole security thing might be worth another thought, too.

Could you, for example, place a small array of antennas at every corner of a building, record the different frequencies or waves which enter each array, and calculate the position of a source in the building from the different signal strengths at the different array receivers?

So, without arrays, this is actually being done; this is how many of these indoor positioning systems work: you measure signal strength to specific routers in the building. So we could do that, but I think that actually blowing up these antennas into antenna arrays is a very interesting thing to do. As I said, kind of a hybrid scheme, where you have some strategically placed points, but at each of them you have a small antenna array, that could be the way to go to make it a really viable scheme for commercial applications.

Microphone 1, this is going to be the last one.

Sorry, just one quick question: what you do sounds very similar to what radio telescopes do. So did you actually talk to people from LOFAR or SKA or something like that?

Not with respect to that project. It's true, it's very similar; they essentially do the same thing. They actually do it on a global scale: they link together telescopes on different continents to get a very sharp picture, and
they think about building kilometer-scale arrays of little antennas to do that. We haven't been talking with these people yet, but they probably will be doing precisely that, so in a way they will use it in very much the same way, maybe in a slightly more restricted way, because they look at very focused sources and in a way end up with beam scanning. But it's indeed a similar approach.

We're now, with SKA, moving into tomography and 21 cm signals and so on, and it sounds very similar to what you were trying, and I guess they have very advanced algorithms, because their signal-to-noise ratio is abysmal, so you might actually take advantage of that.

This is definitely a good idea. There is one important difference, though: they only look at signals that are infinitely far from the receiver. So it's a good idea, and probably we should look at that a second time. When we did it, it looked like the coherent-optics papers would be more useful once it gets to reconstructing three-dimensional things that are very close to the receiver, but they are probably very good in terms of sparse sampling and estimation schemes if you have very noisy data. So yes, definitely.

Alright, so the next talk will be here in 15 minutes. It's about machine-checked proofs in everyday software and hardware development. Please give him a hand.