All right, so we're good. Thank you everyone for coming. Some of you don't know me: my name is Cédric Honnet, and I'm a research engineer in embedded systems applied to human-computer interaction. I work at this university over there. I work on this project called Hive Tracker, which is a miniaturization of this big thing that lets people position objects in 3D for virtual reality. The project is in partnership with a neuroscience lab at UCL in London, another lab in Spain that is also on the neuroscience side, and I'm more on the robotics side.

So the context is virtual reality. I think all of you know the HTC Vive. Ah, we have a latecomer. All right, let's keep going. Does anyone here not know what is special about the HTC Vive? You're all good with it. The headset and controllers are positioned in 3D, so you know exactly where the user's head is and where their hands are, and that lets you do very surprising things in 3D. If you've never tried it, it's really worth it; I recommend it. The problem is that if you want to localize something like a hand in 3D, you can see the size of the tracker: it's about the size of an orange. So you cannot track everything you want. I'm going to explain a bit what we want to do with it and how we tried to miniaturize it, but I think the most interesting part I can show is how this thing works, because it's pretty smart, and how we went from the big object we just saw to a much smaller one.

Basically, a bit like the GPS system, you have lighthouses that emit infrared laser sweeps, and you can localize yourself from their signals. The way it works is that the infrared laser goes through a lens that spreads it into a nice line, and that line scans either horizontally or vertically. To simplify, imagine you are somewhere on this line and you want to localize yourself along it. The lighthouse sends a double flash to say "I'm going to start counting". Then the scan comes, and when it hits you, you can compute the angle between, let's say, the zero at my left and where you are on this line. So if you know exactly where you are on one line because you counted one, two, three, you have an angle, maybe 30 degrees. If you do that twice, a horizontal scan and a vertical scan, you get two angles, and with two angles you can describe your position as a line, a kind of ray coming from a base that could be, as you can see in the animation, in the corner of a room. That ray points at you. But in three dimensions one ray is not enough; you need another one from a second base, and you are at the intersection of the two rays. What we see here is one of them, but you need two.

So the idea of this project was to convert time into angle, as I was explaining: three microseconds could be, for example, 30 degrees. You have this flash I was talking about, a double flash to say "start", then you count one, two, three, and then there's another flash. Did I touch something? No? Okay. Is it fine, Alex? All right, very good. So we get one angle from one timing, and another angle from the other timing. It looks like there are three bases here, but it's one base.
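To make the time-to-angle idea concrete, here is a minimal sketch of how a pair of sweep timestamps can be turned into two angles and then a ray leaving the base. The rotor rate, the timer tick rate, and the ray convention are assumptions for illustration, not the exact Lighthouse constants or the project's actual firmware.

```cpp
// Hedged sketch: sweep timestamp -> angle, two angles -> a ray from the base.
// The 60 Hz rotor rate, 16 MHz tick rate and the ray convention are assumptions.
#include <stdint.h>
#include <math.h>

const float ROTOR_HZ    = 60.0f;        // assumed sweep rate of one rotor
const float TICK_HZ     = 16000000.0f;  // assumed timer tick rate (16 MHz)
const float TWO_PI_RAD  = 6.2831853f;

// Elapsed ticks between the sync flash and the moment the sweep hits the
// photodiode, converted into an angle in radians.
float sweepAngle(uint32_t syncTicks, uint32_t sweepTicks) {
  float seconds = (sweepTicks - syncTicks) / TICK_HZ;
  return seconds * ROTOR_HZ * TWO_PI_RAD;   // fraction of a turn -> radians
}

// Two angles (horizontal and vertical sweep) define a ray leaving the base.
struct Ray { float x, y, z; };

Ray rayFromAngles(float azimuth, float elevation) {
  // One possible convention: deflect the base's forward axis by both angles.
  Ray r;
  r.x = tanf(azimuth);     // horizontal deflection
  r.y = tanf(elevation);   // vertical deflection
  r.z = 1.0f;              // forward
  return r;                // the position is the intersection of two such rays
}
```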
And so one base gives you one ray in 3D and the other base gives you the second one, so you can take their intersection. We can count the time; I'm not going to go through all the timings, but they are very precise and very fast, because the idea is to localize something in 3D with a quarter of a millimeter of accuracy five meters away from the base.

The first prototype used a single photosensor receiving the laser signal I was just talking about, and a fairly fast microcontroller board that some of you might know, the Teensy. Its microcontroller runs at 120 MHz, so 120 million instructions per second, which means we can be fairly accurate. The problem is that with only one photosensor, when the object is not facing the base, you don't know where you are anymore. So you need at least four photosensors in a sort of tetrahedral arrangement, so that in any possible orientation at least one photosensor can see the base. But then you have to measure the timings for all four photosensors, and normally you would do that with some fairly complicated electronics that I'm going to talk about. I don't know why this slide is here, I think I should move it, but basically what the first prototype did was a kind of 3D graffiti.

Some of the applications my lab wants to work on are documenting unique and maybe dying handcraft traditions, in something like the maker space of the future. The lab in England I was talking about wants to track rats with electrodes around their brains so they can understand what's happening when the rats change their path. Many other applications are possible; there's this list, whatever.

If we go deep into the details of the timings, which I'm not going to do right now, we can see how accurate we need to be and why we need rather particular processors. Is anyone here not very familiar with the concept of an FPGA? Some of you are not. So, a microcontroller is a small processor to which you give a set of instructions, and it has to execute them sequentially; the code you put in it is a bit like a script you would give to a theater actor. When you give code to an FPGA, it's more like the DNA of the electronic circuit you want to give birth to: you describe what it has to be, not what it has to do. It's a really different way of approaching the problem. Those components are usually pretty big, but if you dig around you can find some pretty small ones. There's a project you might have heard of on Hackaday that made a really tiny FPGA development board, so we could have put that on this board. It's not impossible, but if we can find a way to avoid it, even better.

Digging into the datasheet of this magic microcontroller, the nRF52 by Nordic Semiconductor, I found that there is a kind of FPGA-like feature that lets you connect peripherals in hardware and do real parallel processing. Since we have four photodiodes listening to all the lasers coming from everywhere, we want to be able to process these signals in parallel and give each of them a timestamp.
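To make that concrete, here is a minimal sketch of what that hardware wiring can look like on the nRF52: GPIOTE events from the photodiode pins routed through PPI channels to timer capture tasks, so the timestamping happens without the CPU. The pin numbers and the choice of TIMER2 are assumptions for illustration, and this assumes the Nordic register definitions (nrf.h) are available; the actual Hive Tracker firmware may wire this differently.

```cpp
// Hedged sketch: four photodiode pins timestamped in hardware on the nRF52,
// using GPIOTE events -> PPI channels -> TIMER capture tasks.
// Pin numbers and the use of TIMER2 are assumptions for illustration.
#include <nrf.h>

static const uint8_t PHOTODIODE_PINS[4] = {3, 4, 28, 29};  // hypothetical pins

void setupHardwareTimestamps() {
  // Free-running 16 MHz, 32-bit timer used only as a timestamp source.
  NRF_TIMER2->MODE        = TIMER_MODE_MODE_Timer;
  NRF_TIMER2->BITMODE     = TIMER_BITMODE_BITMODE_32Bit;
  NRF_TIMER2->PRESCALER   = 0;                 // 16 MHz -> 62.5 ns per tick
  NRF_TIMER2->TASKS_START = 1;

  for (int i = 0; i < 4; i++) {
    // Generate a GPIOTE event on each rising edge of the photodiode output.
    NRF_GPIOTE->CONFIG[i] =
        (GPIOTE_CONFIG_MODE_Event       << GPIOTE_CONFIG_MODE_Pos) |
        (PHOTODIODE_PINS[i]             << GPIOTE_CONFIG_PSEL_Pos) |
        (GPIOTE_CONFIG_POLARITY_LoToHi  << GPIOTE_CONFIG_POLARITY_Pos);

    // PPI: when the GPIOTE event fires, trigger a timer capture with no CPU
    // involved, so all four channels are timestamped truly in parallel.
    NRF_PPI->CH[i].EEP = (uint32_t)&NRF_GPIOTE->EVENTS_IN[i];
    NRF_PPI->CH[i].TEP = (uint32_t)&NRF_TIMER2->TASKS_CAPTURE[i];
    NRF_PPI->CHENSET   = (1UL << i);
  }
}

// Later, an interrupt or a polling loop only has to read the already-captured
// timestamps from NRF_TIMER2->CC[0..3]; the capture itself never waits on code.
```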
And if we don't do that truly in parallel, and instead use the common approach of taking an interrupt, timestamping the first signal, taking another interrupt, timestamping the second, and so on, we lose accuracy: the time spent on the first timestamp delays the second, the third, the fourth, and you no longer know which of the later ones actually came first. So this magic thing is called PPI, for peripheral... I forgot what... interconnection, I think it's peripheral-to-peripheral interconnection. Programmable peripheral? Thank you: Programmable Peripheral Interconnect. You can connect a digital input, a GPIO, a timer, and many other peripherals to each other. The idea here is simply to connect the photodiodes to a timer and measure the timings in hardware.

So I used this magic little chip that we can see here. All the details of the board are on the website, but basically this module is pretty smart because it has the microprocessor, the Bluetooth communication module, the antenna, and all the components you need to make it work, like the capacitors and the two crystals: one low-frequency crystal for when it wants to sleep and stay low power, and one high-speed crystal for the radio communication. This other component here is a 9-degrees-of-freedom motion sensor, so it can measure its orientation and its accelerations, and it does the sensor fusion inside; it's a Bosch IMU that has a processor inside. Hi! And then the rest is not so important: the power supply, an LED, a button. On the other side we have the connectors for the photosensors; we'll see that on the next slide.

So that's the board once it's done, and it's about the size of a one Singapore dollar coin, or a one pound coin, or... I don't know for US dollars, probably smaller than a quarter, I would say. To give another sense of scale, after assembly it's about the size of one of my fingers. And the photosensors I was talking about: each one is actually a small circuit with the photosensor itself and a bit of electronics, made by a company called Triad Semiconductor, that does the filtering, the amplification and the digital communication. This is just a first step that makes it easy to develop, so I only had to add connectors. It's not very obvious on this slide, but those things are flexible, so you can put them in your 3D object if you 3D print it, or put them anywhere, like on a candle or I don't know; it's flexible, it takes the shape you want, and it can be shorter or longer. The next step that I think would be interesting is to have almost the same object, but instead of connectors like these we could put the photosensors directly on the board and give them exactly the orientation we want. These parts are through-hole, which means we can solder them through the board or on top of it; and once they are soldered on top we can bend them and they keep their orientation. That's the interesting part.

I think I can stop here, and if you have more questions I have a lot more things I can show. If any of you have questions about how to integrate this into virtual reality, or how to make boards like this, because I understand some of you do, you can contact me on Twitter. Otherwise, the website here has all of the hardware and the firmware, which is in progress. And I think that's it, thank you for your attention. When you say it's in progress, how far along is it? That's always the hardest question, right?
So, the way you analyze these timings from the lasers is a bit tricky. You need a kind of state machine using interrupts and this PPI business, the special hardware you only find in this processor. All of the fundamental blocks are pretty much done: the state machine is done, and I had to modify the Arduino environment so that it works with this processor, because the idea is to make it as simple as possible so that anyone can use it. That Arduino environment modification is done too. All the blocks are made, so technically it should go fast; in reality it's probably going to be a bit painful. I would say a couple of months at least.

One of the interesting next steps is to use some smart filtering, like a Kalman filter or an improved version of it, to integrate the acceleration measured by this sensor and speed up the position estimation. Right now we get about a 30 Hz refresh rate from the lasers; even though they spin very fast, you can't do much better than that. 30 Hz is usually fine, but during a fast motion the accelerometer could do a good extrapolation of the position in between those 30 Hz optical updates. That part is a bit tricky. I have a few students working on it right now; I'm lucky, they are impressively fast and it's quite cool to have them. And I'm probably going to have an intern for six months working on this as well. So, fingers crossed, this should be done within about six months. You want one? Any other questions?

Can you tell us a bit more about the applications? All right, the question is whether I can say a bit more about the applications. As you can see, this is probably small enough to put on a rat; it depends on how big a rat you have. Exactly, you're right. I think you have big rats in Singapore. We do. Well, lab rats should be okay carrying this; maybe Singaporean rats could carry bigger ones, with bigger batteries. Yes, there's the battery that comes with it and the photosensors that we saw, but it should not be much bigger than that.

From what I understood, and I'm not a neuroscientist, those guys at the Kampff lab want to understand what's happening in the brain when you are in front of, for example, food or a predator, this kind of thing. The orders your brain gives to your body, to your limbs, to move around as a reaction to what happens around you: it's not completely clear how that works, and the animals that are easy to analyze are pretty rare. Are you familiar with the cuttlefish? For those who don't know, it's a very surprising animal, a bit like the chameleon: it can change its appearance. To me it looks a bit like an e-ink display, it has a lot of pixels on its back, and it's really surprising: when you put food in front of it, it goes into a kind of warrior mode and changes its color, like "I'm going to fight to get that food". If it's trying to catch the food and gets frustrated because it doesn't come, you can see the frustration on its back, frustration patterns on its back; it's really funny. And then when it gets the food, it's happy, and you see that on its back too. This cannot be done with rats, not as easily, although they do change color sometimes, like you get red when you're angry or green when you're cold, this kind of thing. So that's one of the experiments done by the lab that started this project.
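Coming back to the sensor-fusion step mentioned a moment ago, extrapolating between the 30 Hz optical fixes with the IMU: here is a minimal sketch of that idea as a simple dead-reckoning predictor that is corrected at every lighthouse fix, standing in for the Kalman-style filter the talk mentions. All names, rates and the reset strategy are assumptions for illustration, not the project's actual fusion code.

```cpp
// Hedged sketch: dead-reckoning between lighthouse fixes, a simpler stand-in
// for the Kalman-style fusion mentioned above. Names and rates are assumptions.
struct Vec3 { float x, y, z; };

struct FusedPosition {
  Vec3 pos{0, 0, 0};   // last best position estimate (meters)
  Vec3 vel{0, 0, 0};   // velocity estimated between optical fixes (m/s)

  // Called at ~30 Hz whenever a fresh lighthouse triangulation arrives:
  // snap to the optical fix and update the velocity from the previous one.
  void onOpticalFix(const Vec3& fix, float dtSinceLastFix) {
    vel = { (fix.x - pos.x) / dtSinceLastFix,
            (fix.y - pos.y) / dtSinceLastFix,
            (fix.z - pos.z) / dtSinceLastFix };
    pos = fix;
  }

  // Called at IMU rate (e.g. a few hundred Hz) in between optical fixes:
  // integrate the world-frame acceleration to extrapolate the position.
  void onImuSample(const Vec3& accelWorld, float dt) {
    vel.x += accelWorld.x * dt;
    vel.y += accelWorld.y * dt;
    vel.z += accelWorld.z * dt;
    pos.x += vel.x * dt;
    pos.y += vel.y * dt;
    pos.z += vel.z * dt;
  }
};
```

A real implementation would weight the optical and inertial estimates instead of snapping to the fix, which is exactly what the Kalman filter mentioned in the talk would do.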
The micro-manipulation part, the third point here, is about how to grab things. It doesn't have to be surgery or watchmaking; it can be something like grabbing an atom and feeling it through haptic feedback, a kind of tactile resistance. There's this tele-tweezer project at the lab where I work at Sorbonne: they localize the position of a tweezer in 3D with big sensors using cameras, and it gives you feedback on what you're grabbing. It knows exactly where the tweezer is, but it's really big, so it's not very practical. On the other side it remotely controls a robot that grabs things, or a laser that actually controls the position of an atom, weird things like that. With this device, you would have just a tweezer with a sensor at the end, so knowing your orientation and position would be a bit simpler.

The 3D haptic texture mapping is more for virtual reality, when you're trying to touch things that don't exist: how does it feel, maybe you're in front of a wall and it's made of bricks, or carpet, or foam, maybe you can go through it, this kind of thing. We can create haptic illusions and make you feel like you're touching things that you're not, using special vibration patterns. And the last one is performances: how to visualize or sonify movement using this kind of sensor, for example with dancers, but it could be any movement, like a graffiti artist whose motion could create sound on top of the visualization that is already created. And if you have other ideas we can talk about them. It's still in progress, but when all of these things are working they will be published, either on my account or on this website. Any other questions?

How does the sensor know when the laser starts to scan? The timing when the laser starts and the timing when it hits you? So it's a pretty dumb sensor, as you can see; the question is how the sensor knows when the scan starts. I didn't explain it so well here, but this is the superposition of several timing traces coming from several photosensors. You can see there are four photosensors connected to this board, which is the official development board by Triad Semiconductor.
If you graph the signals you get from them, you can see the colors for the different sensors, and they basically all start at the same moment. That's because the two bases synchronize with each other: one of them is the master, basically, and the other is the follower. So the first one does a flash and then the second does another flash. No, sorry, that's not right: the first one does a double flash. This is, for example, one scan, say the vertical scan from one of the bases. It does a double flash saying "I'm going to start now, so you should start counting", and from the length of that flash you can decode whether it's going to be a vertical or a horizontal scan, and whether it comes from base A or base B. Once you have that, the scan comes, and depending on the sensor, because they are not placed at the same spot, you get a pulse at a different position, a different position in time. Then the same thing happens again, but instead of being vertical it's horizontal, and the start signal looks almost the same except it says "now it's not vertical, it's horizontal". Then the second base talks and does almost the same. I would have to double-check, I don't remember off the top of my head, but I think this is base A and this is base B; you can differentiate them, and this whole sequence is something like 400 microseconds. So basically you have to build a state machine that looks for a double pulse, and because, if you look at only one sensor, for example only the red one, this here is not a double pulse with the correct length, you have to look for these exact patterns with a variable double-pulse length. It's a bit of a pain; I can show you more after if you want. Is there any other question? Maybe not. So, would you like to show your project a bit? I think other people would be interested. Everyone, what do you think, can we just put everything on the table? Sounds good to me.
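As a rough illustration of the pulse-length decoding just described, here is a minimal sketch of how a captured pulse might be classified as either a sweep hit or a sync flash from base A or B, for the horizontal or vertical axis. The width thresholds and bin layout are assumptions made up for this sketch; the real Lighthouse sync-pulse widths should be taken from the protocol documentation rather than from these numbers.

```cpp
// Hedged sketch: turning a captured pulse width into "which base, which axis,
// or a sweep hit". Thresholds and bins are assumptions for illustration only.
#include <stdint.h>

enum PulseKind { SWEEP_HIT, SYNC_BASE_A_HORIZONTAL, SYNC_BASE_A_VERTICAL,
                 SYNC_BASE_B_HORIZONTAL, SYNC_BASE_B_VERTICAL, UNKNOWN };

// widthTicks: pulse width measured by the hardware timer
// (here assumed to run at 16 MHz, i.e. 16 ticks per microsecond).
PulseKind classifyPulse(uint32_t widthTicks) {
  uint32_t us = widthTicks / 16;

  if (us < 50)  return SWEEP_HIT;    // short pulse: the sweeping laser line
  if (us > 150) return UNKNOWN;      // too long to be a sync flash

  // Hypothetical decoding: the sync width is binned, and the bin index encodes
  // the axis and which base is about to sweep (the "variable double length").
  uint32_t bin = (us - 50) / 25;     // four assumed bins of ~25 us each
  switch (bin) {
    case 0:  return SYNC_BASE_A_HORIZONTAL;
    case 1:  return SYNC_BASE_A_VERTICAL;
    case 2:  return SYNC_BASE_B_HORIZONTAL;
    case 3:  return SYNC_BASE_B_VERTICAL;
    default: return UNKNOWN;
  }
}

// A state machine built on top of this waits for a sync pulse, remembers its
// timestamp and meaning, then converts the next SWEEP_HIT timestamp into an angle.
```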