Good evening everyone. I am Ashish Raste. I work at the Acoustic Research Lab at NUS, and first of all I am thankful to Chinmay and the Hacker Space guys for organizing this wonderful event. It's the 12th meeting, held once a month, so this has been running for close to a year, which is pretty awesome. I have only been here a couple of times, but I wanted to work on something so that I could share with the other guys here what I am interested in. That's enough of an intro, I guess. Today's paper is "Robust Range-Only Beacon Localization". It was written by people at CSAIL at MIT, most of whom are pioneers in the marine robotics field, and I'll get started with what they have done in this paper. The research work here is applied robotics, that is, field robotics: you apply concepts from science and mathematics to the robotics arena to improve how robots behave in an underwater scenario. To put it simply, let me first ask: if you were given a skateboard, how would you navigate your office space blindfolded? You cannot see anything, you are on top of a skateboard, and you want to go from point A to point B. How would you navigate? You don't have any sonar, you are just a human. At most you can shout, and if the walls echo back and give you feedback, you can localize where you are. The concept is that you have walls around your office space, and if you know them well, then by touching and sensing them ("okay, this wall bulges out here") you can, based on your past memory, localize where you are, and slowly move from point A to point B.
So a similar concept is applied to robots in the underwater environment, where a robot doesn't know its position because you don't get GPS signals underwater; the skateboard task is an analogy for the underwater setting. An underwater vehicle has a couple or more sensors, and these are the primary ones. The Doppler Velocity Log (DVL) is used primarily for bottom tracking: it tells you your speed over the ground, that is, over the bottom surface of the water body your vehicle is moving in. The Inertial Measurement Unit (IMU) has one or more gyroscopes and magnetometers, so it gives you roll, pitch and yaw, similar to an aeroplane's sensors; it tells you the orientation of your vehicle. And a compass, of course, gives you the true-north heading. Based on these sensors, the robot has to localize itself in a given environment and navigate around a region; this is the task discussed in this paper. Please interrupt me if you have any doubts anywhere along the slides; I would be glad to be interrupted and will try my best to resume from there. I think the next slide is better explained by drawing something, and before that I would like to tell you something I haven't yet: what beacons are.
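The dead-reckoning picture from these sensors (DVL speed over ground plus compass heading) can be sketched roughly as a toy 2D example. The function name and the east/north convention here are my own illustration, not anything from the paper:

```python
import math

def dead_reckon(x, y, heading_deg, speed_mps, dt):
    """One dead-reckoning step: advance the position estimate using
    the compass heading and the DVL speed over ground."""
    heading = math.radians(heading_deg)
    # North-referenced heading convention: x = east, y = north.
    x += speed_mps * dt * math.sin(heading)
    y += speed_mps * dt * math.cos(heading)
    return x, y

# Example: 2 m/s due east (heading 90 degrees) for 10 seconds.
x, y = dead_reckon(0.0, 0.0, 90.0, 2.0, 10.0)
# x ≈ 20.0, y ≈ 0.0
```

The catch, and the reason the beacons matter, is that each step's small sensor error accumulates, so the position estimate drifts without bound unless some external reference corrects it.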
So here we are talking about long-baseline (LBL) transponders, which are acoustic-modem-like devices: you have a transducer on the vehicle and transponders in the water. The transducer sends acoustic pulses, and from the reply you get information about where a particular transponder is. Each beacon has its own ID, so you can differentiate one beacon from another, and each gives you range information. Range, in a terrestrial setting, is what a laser range finder gives you; you might have heard of drones using laser range finders or camera tracking systems to know how far they are from each sensor. These beacons are the analogue for the underwater environment: each reply carries a unique ID as well as the range, how far your vehicle is from that beacon. "And those beacons are static?" They should be static, yes. "Okay, so based on the ID you know exactly which beacon it is, and if you know the beacon's exact position and how far away you are, then you can localize yourself, I guess. You can triangulate, right?"
Yes, that's the primary task, but here they have complicated the problem: you don't know the beacons' positions, and they might be deployed on demand, dropped by drones or ships, in an immediate deploy-and-act setting. This kind of task is useful in rescue operations where underwater vehicles go and localize things. Even for aircraft recovery you have a transponder, which emits pulses at about 37.5 kHz for roughly a month; I believe they have since extended that to around two months. Within that period you have to acoustically hear where the transponder you are talking to is. These beacons are used primarily for that. They are also used by oil and gas and other subsea companies to give their remotely operated vehicles an accurate position, so that the vehicle knows where it is and can carry out its task. So I'm just drawing the beacons; the more beacons, the better your vehicle's navigation will be. The beacons form a kind of region wherever you deploy them, and your vehicle will be somewhere inside it. This is the autonomous underwater vehicle (AUV). It sends a query ping to each of those beacons and in response, as I said, it gets an ID as well as a range, some distance. It keeps doing this continuously until it localizes itself within that region, and later it localizes the beacons. These are just relative coordinates: if the vehicle starts at, say, (0, 0) for convenience, you can plot an x-y map. So for the 2D region the AUV localizes first itself and then the beacons; it then knows its region and boundary, and it can navigate and do its tasks within that region. "Could the beacons localize each other?" Not these, because they are just used for responding; they are transponders, so they receive an acoustic signal and reply with a response. But you could make the beacons smarter, in which case they would act like underwater acoustic modems, with both modulation and demodulation taking place, and then they could localize themselves as a network. In fact that kind of technology is being developed by our lab, the Acoustic Research Lab. You might also have heard of Subnero, a startup company where Shan (Shanmugam) works. They have underwater modems that can be deployed over a large area such that they localize themselves by talking to each other and can even relay packets from one point to another, a kind of underwater Wi-Fi network, which is pretty cool. "Wouldn't that be an issue for energy usage? Now you're doing a lot more than just responding to pings." Definitely, there is a cost for that, which is why they are limited to certain durations. Something simple like this works because the beacons can just go to sleep whenever they are not being pinged, and wake up to respond; they usually do that, and that's why the transponders can last for something like two months. "An optimized duty cycle would be the way, I guess." Yes, I haven't studied that in detail. "Interesting thing: the driverless trains here use beacons on the track as well. They are not active; they are energized only when the train passes over them." Right, the word for those, I guess, is passive
beacons: they respond only when interrogated, and since they're on the track, the train has to go right above them. For our case that isn't possible; that's the difference. Also, the train case is essentially one-dimensional along the track, while here the region is 2D, and the AUV actually has to localize itself in three dimensions; that's why it carries those other sensors. "Where do the beacons sit?" They'll be on the floor of the sea, and they should be static. Sometimes, though, they might move if there are strong currents; that's why you need robust localization techniques, which is what this paper talks about. So you assume the beacon is always static, but you keep re-localizing it as well. "And the vehicle's altitude?" Altitude definitely changes a lot, so a beacon may not always be able to send a direct signal. In many cases the signal is really 3D: the beacon transmits a roughly spherical wavefront, not a two-dimensional circle, but here we explain the concepts in 2D. So that was the basics of the sensor range measurements; here the sensors are nothing but the beacons. You can see three different times here, T1, T2 and T3, and at each time the vehicle gets a range measurement from the beacons. The circles have radii equal to those ranges: say the larger one is five metres, the next three and the next four. So the vehicle knows that each beacon is on the circumference of its circle, so-and-so metres (or kilometres) away. The main problem is that whenever acoustic concepts are applied underwater you have primarily two kinds of noise: additive and multiplicative. Multiplicative noise arises along the sound's travel path: the signal may get refracted or scattered by many other
in-between particles underwater, and it effectively multiplies into the range you measure. Additive noise is things like multipath: you get multiple reflections, say from the water surface, or interference from organisms. Snapping shrimp and animals like that have their own acoustic pulses, and sometimes their frequencies are quite close to the beacon frequencies, so you can confuse those noises with real replies. On top of this, the vehicles we are talking about may have sonars, as the person said earlier, and these sonars sometimes operate around the frequencies of the beacon communications; the frequencies we are talking about here are roughly in the range of 20 kHz to 130 kHz, and some sonars operate in this range as well. So, in the third time instance here we might not be sure whether a reply is an actual beacon or not, because of the noise we just discussed. To identify which measurements are true beacon replies and which are noise, we have to do some kind of outlier rejection. It's the same plain idea we apply in image processing, signal processing, or even database mining: whenever you have a set of data you definitely have some noise, and you have to remove it to get a good set of data you can then process. This outlier rejection technique is, I would say, the main component of this paper; the other component, navigating around the space, is already well researched. I was impressed by this technique because it uses a graph-cut algorithm, and I'll quickly go through it. Let's assume you have a beacon, and at time T1 you get a measurement of how far you are from that beacon, so your vehicle imaginarily draws a
circle of that radius, with the vehicle's position at the centre of the circle. At time T2 you move to another point, query again and get another range, and the intersection of these circles should be the point where your beacon lies. Here we have two intersection points, so both are good candidate positions for the beacon. But since we will surely have noisy data, we have to separate the noise from the actual beacon replies. For that we form a graph over the set of measurements taken consecutively. For their work the authors took about ten minutes of continuous measurements with a five-second query interval: every five seconds the vehicle queries, sees which beacons have responded, takes those measurements and adds them to the graph. The nodes of this graph are the measurements, and you connect two nodes if you see the measurements to be consistent. The meaning the authors give to "consistent" is measurements that are mutually valid, like M1 and M2 shown here, whose circles intersect. This is for one beacon, so they take that pair of measurements as consistent; they do this for a few minutes and build up the graph. "When you say measurements, are these actual distance measurements?" Yes, they are all formed from the ranges. So say you have only one beacon that you want to localize, and you want to make sure it is an actual beacon and not noise. You take measurement one at a given time, with its range; based on that you go to some other point and take another. And usually, for the triangulation that
our friend just mentioned, you should not travel in a collinear path, a straight line. You should move around randomly, but make sure your positions are not collinear, because otherwise it won't be possible to localize where the beacons are later. "Can I just check: you're saying that you connect two measurements when they make sense. So in this scenario M1 and M2 make sense because they intersect, but if M1 and M2 were not intersecting, that would mean one or both of them are noise, and you won't join them in the graph?" Yes. For a given beacon, you connect the measurements, that is, call them consistent, only when they intersect. Here you can have multiple beacons, but we are only talking about starting from one beacon; later you can localize the other beacons as well. "Can't you filter a packet out before decoding it?" I think this is a good question; I hadn't thought about it before. Before decoding the packet, the vehicle at the lower level senses that some packet or signal has been received, so you could eliminate bad receptions then and there at the DSP level, before decoding the information inside the packet. "Does the vehicle know all the beacon IDs that exist? If it gets a wrong ID, does it just assume it's another beacon?" It should primarily know the beacon IDs, but if you drop new beacons then there is no way to tell. "But it's like a whole spec, right? You can have error-detecting coding of the IDs; at the lowest level you have some coding, and then on top of that you have the data." And then you take
measurements. But at the same time, one of the other noise sources is reflection: you can receive a valid beacon ID with a wrong distance, because the signal arrived via a reflected rather than a direct path. The reflections are the primary problem here. "It's not like the beacons know the distance, right? You figure out the distance from the signal you get back. I agree the ID could be a certain pattern, and if the pattern is wrong it's not a valid ID. But even if the whole packet arrives correctly, it's not two fields, one with an ID and one with a distance; the distance is figured out from the signal itself. And the interesting thing is that, because of the way this works, you get the distance measurement very quickly, since it's just the time of arrival, the difference between when you send and when you get the reply back, whereas actually decoding the packet is a lot more computation, especially for embedded systems like these tiny little robots." Yes, and I should note that the beacons themselves don't explicitly tell you what the range is; the vehicle calculates it from the time of flight. So there is a high chance that, because of reflections, noisy echoes arrive two, three or more times later, and the vehicle wrongly assumes the range to be something else. These systems are used by autonomous underwater vehicles, sometimes by submarines, and even large container ships use them. There are roughly three classes of baseline arrays: ultra-short-baseline, short-baseline and long-baseline arrays. The long-baseline arrays are
used more for accurate positioning, for tasks like digging nodules underwater; you get manganese nodules at great depths, and if an underwater vehicle navigates in such an area it needs to know accurately where it is. For those purposes LBLs are used. Ships use another class, the short-baseline arrays, where the transducers are attached to the bottom of the ship's hull; you might have two or more of them, and the roaming vehicle doesn't need an accurate location of itself, just an approximate one. In such scenarios you attach them to the ship's hull, and you can also reverse the process, so that it is your AUV that replies to the ship; it depends on the task you are looking at. Now, looking at the graph here, say we have eight measurements of a single beacon. As you can see, the well-connected nodes are one to five, and the nodes later in time are not connected that much. They have posed this as a graph-cut problem: if you find a minimal cut, that means you keep the maximum information. I think this theory is also applied in network flows, in social networks, and in other kinds of networks: a minimal cut gives you a maximum gain of information. "How do you define a minimal cut?" You partition the graph into two sets of nodes, and you make a cut that completely segregates the two sets while crossing, I think, the minimal number of edges. Continuing with that: any graph can be written as an adjacency
matrix. Here the diagonal elements, the ones with equal indices, are initialized to zero, and the ones are placed at the indices of consistent measurement pairs. Say in the previous graph one-two and one-three are connected; then the (1, 2) and (1, 3) entries should be one. I think I have written the matrix wrong on the slide; I'll correct it when I submit the slides. So the entry A(i, j) takes the value one if measurement i and measurement j are consistent. Any doubt in that? I'll explain how it forms. Here one, two and four are connected and three is kind of an outlier among this set of nodes, so you form a four-by-four matrix. The diagonal elements are zero, because you don't consider a node to be consistent with itself. One and two are consistent, so those entries are one; the transpose of the matrix is the same, that is, it's a symmetric matrix (these are terms I recently brushed up on while going through linear algebra concepts, in fact while reading this paper). You can check one-four as well: since one, two and four are well connected, you have ones at the respective indices and zeros elsewhere, and only three is connected to four, so in the third row, fourth column we have a one. That is the adjacency matrix for this set of nodes. A good cut of this graph gives you the inlier measurements: for this four-node example you can decide, with some statistic, that the third node is an outlier or a noisy measurement, so you concentrate only on the first, second and fourth measurements, and whatever ranges they gave your vehicle can be taken as valid. One more thing
I think I missed: the indicator vector. Here we take one more vector, an indicator vector u. It is a binary vector of ones and zeros whose element is one if that particular measurement is taken to be consistent, after cutting the graph. It's a column vector; for our example we assume one, two and four to be consistent and three to be an outlier, so you have (1, 1, 0, 1). This vector plays an important role in validating that you have removed your outliers: the authors provide a statistic, the quality of a cut, calculated from R(u); the formula is at the bottom. The thing to be noted is that you might have taken only four measurements here, or eight in the slides, but as you keep travelling and hearing ranges you can't keep updating the matrix, re-checking which measurements are consistent and re-forming the matrices again and again; your efficiency drops and the vehicle won't be able to keep up. So they take this statistic and calculate its derivative, to find where among those measurements the maximum change is happening, and it boils down to an eigenvector problem. As an analogy from image processing: if you want to segregate a proper foreground from the background, say for some segmentation technique, you can compute eigenvectors to see where the maximum change is happening, and based on that you know how to separate the pixels. This task uses the same kind of technique: they calculate this
statistic R(u) and later differentiate it with respect to u, equating the numerator to zero so that you get the extrema of R(u). That reduces to A u = λ u, which is nothing but an eigenvector problem, the standard A x = λ x. "I have a question: going back to the previous slide, is u guessed? How do you start, how do you get u?" You don't need to guess u, and you don't need to keep re-solving the optimization problem for every new measurement. To make sure you have a proper u, they don't calculate it directly at first, because measurements keep arriving over time. We are discretizing the space here, down to ones and zeros, so you need to make sure that whatever you have assumed is correct. Since they have reduced it to an eigenvector problem, they just calculate the eigenvector corresponding to the maximum eigenvalue; that vector captures the maximum change, and that is nothing but u. If you are familiar with power iteration: multiplying the adjacency matrix repeatedly with a unit vector, say a hundred or two hundred times, gives a close approximation to the eigenvector with the maximum eigenvalue. For this example you can have up to eight eigenvalues; one of them will be the maximum, and the corresponding eigenvector is nothing but u. For the example in the slides, I calculated it assuming the first five measurements are inliers, so u is a vector with the first five entries one and the next three zero;
but instead of guessing that u is the inlier set of measurements, what we do, as I said, is multiply the adjacency matrix multiple times with a vector of unit length, and we get values like these. R is the statistic, here the maximum eigenvalue obtained, with its corresponding eigenvector. These values live in a continuous space, so to discretize them we just threshold, very similar to image processing and other signal processing techniques: you apply a threshold, and the values above it are taken into consideration. So here is the thresholded u, u(t): since we are talking in terms of discrete values, ones and zeros, we want to convert u into a binary vector, and for that we apply a threshold t and compute t_opt, the optimal value, found via dot products with u. You take each element of the u vector in turn as a candidate t, let's say 0.4082, and whatever entries are greater than or equal to 0.4082 you set to one; that is how you initialize the vector v(t). So for this example, taking the threshold as 0.4082, v(t) has ones where u is at least 0.4082 and zeros elsewhere; this forms exactly the u pattern we were talking about. Then you take the dot product of each v(t) with u, and the candidate t whose dot product score is maximal is t_opt, the optimal threshold you want to apply. Because you pick the candidate whose overlap with u is largest, you won't miss
the dominant set. So at last you get a series of five ones, and now we might ask ourselves: where are we, and what is the use of all of this? The point is that you find where the maximum change is happening, which is exactly the dominant eigenvector, and at the end you get a clean vector v(t) telling you which measurements are inliers; the zeros are the outliers. Here the first five measurements are inliers and the next three are outliers, so you just drop them from the graph. Cut A is taken to be a good cut: as you can see, it crosses the minimal number of edges, yet it separates the consistent set from the inconsistent set; that's why it is a minimal cut. So, after getting the measurements, as we discussed initially, you now know which measurements are consistent. A correction to what I said: at this stage you only know which of those measurements are inliers, that is, good measurements; you still don't know where the beacons are. Based on those measurements you assume a 2D grid, and if you are familiar with the Hough transform, that is what comes in here. The Hough transform was originally used for finding particle tracks in bubble-chamber pictures: electrons are made to pass through liquid hydrogen at certain temperatures, surrounded by magnets, so you can see the paths they travel, and cameras continuously take snapshots, which are later filtered to recover the exact tracks. That work in fact fetched a Nobel Prize for the inventor of the bubble chamber; the chamber's inventor, mind you, not the inventor of the Hough transform.
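The outlier-rejection pipeline just described, building the consistency adjacency matrix, getting the dominant eigenvector u by power iteration, then choosing the threshold whose binary vector best matches u, can be sketched roughly as follows. This is a toy sketch: the exact scoring rule for picking the threshold is my assumption of one reasonable choice, not necessarily the paper's formula.

```python
import numpy as np

def spectral_inliers(A, iters=200):
    """Spectral outlier rejection over a consistency graph.

    A is the symmetric 0/1 adjacency matrix: A[i, j] = 1 iff range
    measurements i and j are pairwise consistent.  Returns a boolean
    inlier mask."""
    n = A.shape[0]
    # Power iteration: repeated multiplication by A converges to the
    # eigenvector of the largest eigenvalue.
    u = np.ones(n) / np.sqrt(n)
    for _ in range(iters):
        u = A @ u
        u /= np.linalg.norm(u)
    # Try each entry of u as the threshold; keep the binary vector
    # that best aligns with u (normalized overlap score).
    best_v, best_score = None, -np.inf
    for t in u:
        v = (u >= t).astype(float)
        score = v @ u / np.sqrt(v.sum())
        if score > best_score:
            best_v, best_score = v, score
    return best_v.astype(bool)

# Five mutually consistent measurements plus three stragglers.
A = np.zeros((8, 8))
A[:5, :5] = 1 - np.eye(5)    # clique of inliers
A[5, 6] = A[6, 5] = 1        # weakly connected pair of outliers
mask = spectral_inliers(A)
# mask → first five True, last three False
```

The well-connected clique dominates the eigenvector, so its measurements end up above the chosen threshold while the weakly connected nodes fall below it.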
Yeah, so here the grid is just a set of 2D cells, starting from (0, 0), with some cell width of five or ten metres. As we saw, a beacon from a pair of range circles can lie at either of two intersection points, and you want to find accurately which of these positions is true. So as you travel and query the beacons, you take pairs of measurements and vote: each pair gives candidate beacon positions, and you increment the vote count of the cells where those candidate positions fall. In the end, the cells having the maximum number of votes are where your beacons are; in that manner you localize the x-y coordinates of the beacons. It might happen that two cells have an equal number of votes; in that case you just continue taking more measurements until you can settle the beacon locations. The authors recommend, based on their experiments, a vote ratio of the highest cell to the next-highest cell of at least two as a good statistic. And if for a given beacon no cell gets a high enough vote, you just discard that beacon and pretend you know nothing about it. "Can it happen that all the cells have roughly uniform votes?" Well, as I said, a good threshold is when the highest cell for a given beacon has at least twice as many votes as the next; if you don't get that ratio, they recommend continuing to take measurements, because it means you still haven't localized the beacon exactly. So, after you have localized the beacons, let's
So after you have localized the beacons, say four beacons in that region, and your robot knows their x-y coordinates, you want to navigate in that region. This task is nothing but what is most popularly called simultaneous localization and mapping, and it's used in pretty much all the robots we see today: drones, terrestrial robots, and underwater vehicles. You build a map of where your robot is travelling and, at the same time, simultaneously, you localize yourself within it. In such maps, as I said in the beginning, you need some kind of sensor; the human analogy would be walls that you can touch and sense, so that based on your memory you can think about where you are and where those walls are, in relative coordinates, such that you can navigate from one point to another. EKF SLAM, the localization-and-mapping technique I just talked about, is a whole concept by itself and would take more time to explain. Because it's well known within the community, the authors just made sure that whatever formulas they applied were stated there. But what I felt was that I should diagrammatically explain what happens in EKF SLAM, so we can all have an idea of what happens in the vehicle when it tries to navigate within that region. So initially I'm drawing the vehicle as a triangle with a circle on its nose; that shows nothing but its bearing, where it's pointed towards. And here I've taken three beacons. Initially the vehicle localizes the beacons as explained in the previous part, after removing the outliers and then using that grid-voting technique. Then it travels, and based on its odometry data, the DVL and IMU, it guesses an estimate of where it would be and in what orientation. This is nothing but the state of the robot.
I'll use robot and vehicle interchangeably here. In this problem, let's say we have the state of the robot as a vector. Here rx, ry are the location of the vehicle: based on the odometry data, if you assume you are travelling from, let's say, the origin, for a couple of seconds at some velocity, then you can definitely know which point you are at and the orientation you are in. r_theta is nothing but the bearing, the theta from the compass, and bx, by are the beacon locations. This vector is the state vector, and the vehicle will be using it to estimate its own position and the beacons' positions. EKF, which I haven't expanded yet, is nothing but the Extended Kalman Filter; the basic Kalman filter you might have heard of, and I will provide good references for it so you can go through it later in a non-mathematical, diagrammatic manner that is easy to pick up. It's very popular and used in almost all aeroplane tracking systems, missile guidance systems, all such things. So here at time t, let's say t = 2, the vehicle has moved some distance in that beacon field and thinks it is at a given location. But what happens is that when it next queries the beacons and senses them, it notices that it is not at the location it expected itself to be; it's slightly off from its current notion of its location. So there is an error in its estimate, and to correct this error we just apply the basic Kalman filter, which I will explain very shortly. X is the state, and you initially have a probability distribution: after moving for a certain distance, based on odometry, the vehicle assumes that it is at some location.
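A minimal sketch of that state vector and the dead-reckoning prediction step, assuming a simple constant-velocity motion model (the speed, duration, and beacon position here are arbitrary, not from the paper):

```python
import numpy as np

# State: vehicle pose (rx, ry, r_theta) plus one beacon (bx, by).
x = np.array([0.0, 0.0, np.pi / 2, 12.0, 7.0])

def predict(x, v, dt):
    """Dead-reckoning prediction: move at speed v along the current
    heading; the beacon entries stay fixed (beacons don't move)."""
    rx, ry, th, bx, by = x
    return np.array([rx + v * dt * np.cos(th),
                     ry + v * dt * np.sin(th),
                     th, bx, by])

x = predict(x, v=1.5, dt=2.0)  # 2 s at 1.5 m/s heading north: ~3 m along +y
print(np.round(x, 3))
```

In the full EKF this prediction also propagates a covariance matrix; I've left that out to keep the state-vector idea visible.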
Here I'm representing the state of the vehicle as a Gaussian curve: the notion that it thinks it is at that position, with a peak value there. So at time t, based on its velocity and such things, it thinks its state is x1, but after the sensor measurement, that is the range measurement, it notices that its state is x2. The simple update step would be to combine these two by just multiplying them, and so you will have a new Gaussian with a higher peak, and this will be your new estimate, let's say x-hat 3. The Gaussian at the centre closely, approximately, tells you what your current state is. It is like: the more you sense, the better you can estimate yourself, similar to how we touch the walls and sense where we are. So the more range measurements the vehicle has, the better it can sense itself. In the figure, the dotted triangle is the pose based on odometry and the dashed triangle is based on the sensor measurements; correcting both of these, like the central Gaussian over there, you get a new estimate of the vehicle state, which the dashed triangle shows. The solid triangle is the actual state of the vehicle, which the vehicle itself doesn't know, but the estimate closely approximates that actual state. So the filter is nothing but this loop that keeps on going within the vehicle continuously: it moves, it thinks its state to be something, then it senses and updates the state, and this keeps on going so that it at least gets to know a close value of its coordinates. And yeah, in this way the vehicle localizes itself, localizes the beacons, and navigates within the region of the beacons. Fortunately I got hold of a video showing how this works. It was made with MOOS; MOOS is kind of an equivalent of ROS, developed at MIT.
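The update step described a moment ago, multiplying the odometry Gaussian by the measurement Gaussian, works out to the familiar one-dimensional Kalman update. A toy sketch, with means and variances made up for illustration:

```python
def fuse(mu1, var1, mu2, var2):
    """Multiply two Gaussians (prediction x measurement): the result is a
    narrower Gaussian between the two means, i.e. the 1-D Kalman update."""
    k = var1 / (var1 + var2)       # Kalman gain: trust the sharper Gaussian more
    mu = mu1 + k * (mu2 - mu1)
    var = (1 - k) * var1
    return mu, var

# Odometry says x1 = 10 m (variance 4); the range sensor says x2 = 12 m
# (variance 1).  The fused estimate sits nearer the sharper measurement,
# and its variance is smaller than either input: sensing more = knowing more.
mu, var = fuse(10.0, 4.0, 12.0, 1.0)
print(mu, var)
```

Note how the fused variance (0.8) beats both inputs, which is exactly the "taller central Gaussian" in the slide.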
They developed it primarily for underwater and surface vehicles, and it's widely used in many other organizations and agencies in Europe as well. It's a distributed framework, and of course you can build a simulator on top of it. As you might know from ROS, the distributed term is used there because you have different nodes talking to each other, communicating message packets and so on, and all of that happens at the same time with good efficiency, which makes it useful in robots. I will just run it. So, let's stop it here: the beacon is located here, and the vehicle is given a lawnmower kind of path. Lawnmower as in the garden lawnmower: you tow it in a lawnmower pattern in your garden to cut the grass. The vehicle is given that path; this is nothing but what you can call a mission. And here the beacon does nothing but respond: the vehicle will keep on querying it, and the beacon will respond with the range measurement, and at the end of the mission the vehicle will just home in on the beacon's location. So it's kind of like deploying a beacon such that it acts as a home station the vehicle needs to return to. The endpoint is here; the users don't tell it where it has to return to, but based on range measurements, after localizing the beacon, it eventually just goes there. It will just take some time. So first the vehicle queries; I think it's not visible, but these messages are just the encoded version they have used in their message packets, they don't show the range values. Yeah, so when the vehicle gets a response, and then at the next measurement when it queries again, if those two range circles intersect, then you have a consistent measurement of the beacon.
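The lawnmower survey pattern mentioned above is easy to generate as a list of corner waypoints; this is just a toy generator I'm sketching, not the MOOS mission format:

```python
def lawnmower(x0, y0, width, length, spacing):
    """Generate the corner waypoints of a boustrophedon (lawnmower) survey:
    straight legs of `length` metres, stepped `spacing` metres apart,
    alternating direction on each leg like mowing a garden."""
    wps, y, flip = [], y0, False
    while y <= y0 + width:
        leg = [(x0, y), (x0 + length, y)]
        wps.extend(reversed(leg) if flip else leg)
        y += spacing
        flip = not flip
    return wps

print(lawnmower(0, 0, 20, 100, 10))
# → [(0, 0), (100, 0), (100, 10), (0, 10), (0, 20), (100, 20)]
```

A pattern like this sweeps the whole region, so the vehicle keeps crossing range circles at different geometries rather than hearing the beacon from one line only.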
We could visualize it better by showing the circles growing here; no, you should have more and more of them. So yeah, it just homed in on it. More complicated applications are kind of expanded from this. In fact I wouldn't say they home in on the exact location, but within a few metres, let's say within tens of metres, 10 or 20 metres; with that kind of accuracy you can call the algorithm, or approach, a good one. Yeah, so when I started reading this paper I in fact wanted to try it out on the vehicles that we have in our lab, and I might hopefully try it with our distributed software system and probably demo it in the future, but I could get hold of the concepts of whatever is being discussed in this paper. I even mailed the author asking for the data set. Oh yeah, this work was done around 2007, off the Italian coast I guess; it was a joint mission by the MIT guys and some others. That's it; I will update the references with other good resources where, if you are interested, you can get to learn about Kalman filters. They are kind of cool; even recently on Hacker News I think there was a post related to Kalman filtering using only images, only pictures. Yeah, you can also check that out. I have a question: you said the waves are 20 kilohertz acoustic; is that the same as audio waves? I don't think so, it's different. What are acoustic waves? These are just the sound waves. Sound waves, right, and so you can produce sound waves underwater having the same frequencies. You know, my question is like this: this is at the riverbed, sorry, seabed. Do they at all talk about the effect on deep-sea marine life because of these kinds of beacons? It kind of raises issues, but as far as I know I haven't read much about cases where it affects marine life, because I think those transmissions eventually are just noise to them, but they don't affect
them directly. I mean, biologically, those marine organisms are living things underwater, and in fact there are a few organisms, or living beings, underwater like snapping shrimp, which snap very frequently. Even off the Singapore coast our lab has in fact recorded some data; they are very loud and kind of provide a noisy environment everywhere, so that you won't be able to hear when a ship is coming into the dock, and you'd also like to track when a ship is coming in and going out when you have a busy port. So when the snapping shrimp are around it's hard to tackle, and they have high frequencies, in those same ranges as well. And of course they are marine organisms themselves, so I don't think these signals affect them. I think the lawnmower in the video is not concerned with the beacon localization; it's just a mission prepared by the mission planner, or whoever the operator is. So how should the vehicle drive to do beacon localization, because you said you shouldn't travel in a straight line? Yeah, yeah: in the last part they talk about optimal exploration; I think I missed covering that part. Optimal exploration says that you should move such that the maximum gradient of those measurements happens, I mean where you can see the maximum change in the measurement. Let's say at some point you hear a beacon at some range, five metres, and at another point you don't hear it at all. Of course your vehicle will know, based on dead reckoning and the compass sensors, whether it is moving in a straight line. So you just make sure you are moving in a direction where, let's say, initially you are not hearing anything and suddenly you just get a range measurement: you travel in that direction for some time, and okay, now you're sure that you have heard something, so then you just take a turn and move such that you see whether you are still hearing the same range, let's say
whether it's still the same five metres. If you are just rotating within the same space and constantly getting a five-metre range for a particular beacon, with its beacon ID of course, then you can be sure those measurements won't be of any value. I think this would be best explained by a vector field. The beacon is somewhere here; I'll just draw some vectors, it will be easier to explain. The length of the vector is nothing but the change that is happening as you get close to the beacon. Essentially you start with some measurement, or you can start with no measurements, and the more and more you approach, the more your circle radius should shrink, telling you that you are getting close to the beacon; that tells you a good gradient is happening in that direction. So if that is not happening, then you can be sure your vehicle is not travelling along a good path, and you should get closer to the beacon. Of course you can even cross it: here the vector size might increase, and later it grows smaller and smaller and diminishes as you go. The more information gain you have, the better. These things, in fact, I'm talking out of what I think myself; I wouldn't say the explanation is perfect, because only after experimenting and actually implementing it in a vehicle do you get to know what the actual scenario is like. And I think that is the beauty of field robotics, where you apply algorithms and you see in real time what happens exactly; otherwise, theoretically, we can keep on speaking. Yeah, I feel so. How frequently do they poll this beacon? Here, at five-second intervals. Doesn't it depend on the speed? No; at maximum, your underwater vehicle travels at about three to four knots, so five seconds is good enough for all underwater vehicles, because they behave like a dynamic system.
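That turn-when-the-gradient-dies idea could be caricatured as a tiny steering policy. This is only my reading of the heuristic discussed above, not the paper's optimal-exploration algorithm, and the 0.5 m change threshold and 60-degree turn are arbitrary:

```python
import math
import random

def explore_step(heading, prev_range, curr_range, turn=math.radians(60)):
    """Keep the current heading while the range to the beacon is changing
    (good information gain); if the range stalls, e.g. circling the beacon
    at a constant radius, take a turn to pick up a gradient again."""
    if prev_range is None or abs(curr_range - prev_range) > 0.5:
        return heading                        # gradient is informative: go straight
    return heading + random.choice([-1, 1]) * turn  # range stalled: turn

print(explore_step(0.0, 10.0, 5.0))  # range dropped 5 m, keep heading → 0.0
h = explore_step(0.0, 5.0, 5.1)      # range barely changed, so turn
print(h != 0.0)                       # → True
```

The real criterion in the paper is an information-gain argument over the measurement gradient; this just captures the "don't circle at constant range" behaviour.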
If you are not moving at a good enough speed, you will eventually sink, because the pitch will keep on increasing; like an aeroplane, yeah, it's very similar to that. Okay, so you have to maintain a minimum speed, and with that minimum speed in mind you can of course tell what the time interval between queries should be. So that's it, guys; anything else you'd like to discuss, I would love to discuss.