Good morning, everyone. I'm Naman Kumar, a robotics lead at TartanSense, an agriculture robotics company based out of Bangalore. Today, as was mentioned, I'll be talking about something that is used everywhere, from your mobile phones to airplanes. It is used in software, robotics, finance, marketing, literally any field you can think of, and guess what? It's just a bunch of mathematical equations. It's called the Kalman filter. But I'm getting ahead of myself here, so let's go through a few examples to build some intuition.

Let's start with an example from autonomous driving. Autonomous driving has been a hot field for more than ten years, and today it's getting more eyeballs than ever, and rightly so. Not only is it an uber-cool technology, but more importantly, it's about safety: a small error in the car's decision-making can lead to a loss of life. To avoid such accidents, it is of utmost importance to know the position of the car at all times with high precision, for example knowing if there's a pedestrian in front of the car, or if the car is at a red light. For the car to know its state with high precision, we can use a GPS. But the problem is that it's never good enough to rely on just one sensor. For example, GPS doesn't give really good data if there are a lot of tall buildings around, or in harsh weather conditions. Also, cheap GPS units are not accurate enough and have a lot of inherent errors. So a better and more intuitive way is to use a bunch of sensors and fuse the inputs coming from them intelligently. Let's talk about a specific case where we have two sensors.
An IMU, which is an inertial measurement unit, and a GPS. We take data from both of them and fuse it. Now consider a case where the car is driving in a neighborhood with a lot of tall buildings, or the GPS data simply stops coming. In that case nothing bad happens: the self-driving car keeps driving itself using just the IMU, and once we get GPS data again, we start incorporating that as well. So how does this work? That's where the Kalman filter comes into the picture.

Another example of this in self-driving is obstacle avoidance. When I was working at Faraday Future, I worked on that specific problem. We had to predict the state of the people and vehicles in our self-driving car's vicinity into the future, for example knowing where a person will be, say, in ten seconds. So we came up with a model. We know that a person follows certain physics, a person walks in a certain way, so using the physics of how a person walks, we came up with a model that helped us predict the state of pedestrians as well as cars into the future. At every moment we had a physics model that helped us predict accurately using past behavior and historical data. Now let's consider a case where we have a bunch of sensors, because again, relying on just one of them is not a wise choice.
We had LiDAR, cameras, radars, and ultrasound sensors, because if you have just a camera, for example, the camera doesn't give you the position or velocity of the object, and radar has a really low resolution. So what we did was use a bunch of sensors, fuse the data smartly, and use the output to track the pedestrians and cars in the vicinity of the self-driving car. Now we have a model and we can use it to predict the state. Once we get the measurements from these sensors telling us the exact position of the pedestrian and the position of the car, we fuse those measurements with our prediction to get a much better estimate of where the car or the pedestrian is. That's how we were using the Kalman filter in our own application at Faraday Future. Well, we didn't use a vanilla Kalman filter in its most basic form, but the underlying concept remains the same.

Before I move on, I would like to point out that there are two steps to the Kalman filter. The first one is predict, where we use the system's model and perform a prediction into the future. The second one is update, where we take measurements coming in from the sensors and use them to update our state.

Now let's consider a simplified example. Say we have a robot with a camera C, moving in a one-dimensional space, and there are three doors: door one, two, and three. Let's simplify the problem further and say that we know the initial state of the robot. Our task is to keep track of the position of the robot, similar to how we had to track the position of the self-driving car in the previous example. Before we move ahead, I would like to tell you about something known as belief. What does belief mean in this context?
Simply put, belief means what the algorithm, or the robot, thinks of its own state. If you look at the first plot, figure A, you can see that it's a simple Gaussian; that means it has a mean and a covariance. Now the robot starts moving and is in front of door one. Before the robot starts, it's actually pretty confident of its own state; as you can see, the Gaussian is really narrow, which means the covariance is small and it's more certain of its own state. Now the robot starts moving again and is in front of door two, and as you can see in figure B, the black Gaussian is wide, which means it's more uncertain of its own state. Why? Because if you think about it, all motion has some uncertainty: there can be motor bias in the robot, there can be wheel slippage because of the surface, and that induces noise in the system. Now, in figure C, the camera on the robot gives a measurement that the robot is in front of door two, shown with the red Gaussian in figure C. What we do is take this red Gaussian in figure C and the black Gaussian in figure B, which is the prediction, and fuse these two Gaussians to come up with a much better estimate of where the robot is, shown in the black Gaussian in figure C. If you look at these three Gaussians (the figure B black Gaussian, the figure C red Gaussian, and the final black Gaussian in figure C), you can tell that with the figure C black Gaussian we are more certain of the state of the robot, because the Gaussian is really narrow: the covariance is small. Again, once the robot starts moving, it becomes more uncertain of its own state because of the motion, until it gets a new measurement. So to reiterate, similar to the self-driving case, here we have two steps. The first one is predict: assuming, say, the robot is moving at a constant speed, we can predict where the robot will be after five seconds. Then, when we get a measurement from the camera on the robot, we use that measurement and fuse it with the prediction to come up with a much better estimate of where the robot is.

Now let's talk about a non-robotics example: a project management example, which I'm sure most of you are familiar with. Let's say you're the project lead and you've been assigned a very critical project with a hard deadline, and since you have experience with agile, you decide to set up bi-weekly sprints. Before the project starts, you have some idea of how the project is going to go; you can predict when you'll be able to finish it depending on the team size, the skills they have specific to the project, and so on. Now the sprint has started, and remember, at every point in time you know how the project is going; you have some estimate depending on the skills each team member has, the number of hours they are putting in, and so on, and you can roughly predict how much of the project you'll be able to finish by the end of sprint one. Now sprint one has ended and you get a bunch of data: which Jira tasks you finished, how many unit tests passed, what the integration testing status and the code coverage are.
These measurements are exactly analogous to the measurements in the previous cases, like the presence of a door or the reading we get from the GPS sensor. Now you have these measurements, and you already have a prediction of how much of the project you thought you would finish by the end of sprint one. You fuse both of them, the sprint data with the prediction you had, and this helps you come up with a much more precise estimate of the project. So the Kalman filter, in the project management case, gives us the ability to predict accurately if and when we will meet the milestone. That's how you can use a Kalman filter in a project management setting. But I would like to clarify a couple of things here. This is a very minimalistic example, and I brought it up just to build some intuition. The problem here is the inability to come up with a good model for such a short project, and the sprints are just not enough: we have ten sprints, which means we only have ten measurement updates, and that is just not enough. Ideally we would like a longer project or more measurement updates. But I hope these three examples build some intuition, so let's move on.

Before we do, I would like to talk about what we are doing at TartanSense and how the Kalman filter is really useful for our own application. TartanSense is an agriculture robotics company based out of Indiranagar in Bangalore, and we are trying to solve the weeding problem in cotton farms. Our job is to kill all the weeds in a cotton farm automatically, without killing any cotton, and for that, state estimation, the Kalman filter for navigation, becomes all the more important, because we need to know the position of the robot with high precision at all times in the field. Why? Because if we know the position of the robot, we know when the robot is on top of a weed, and then we can switch on the sprayer, spray the chemical, and kill the weed. That's why the Kalman filter, or state estimation, is really important for our application, and any minor error in tracking the position of the robot might result in not killing a weed, or worse, killing cotton. To tackle this problem we again have a bunch of sensors, like an IMU, wheel encoders, and so on, because relying on just the IMU is not a wise choice: it accumulates a lot of drift as you go. Likewise, wheel encoders give bad data if there's a lot of slippage. So we take data from both the IMU and the wheel encoders and fuse it, again using a Kalman filter, to come up with a much better estimate of where the robot is. How this works is: the robot is going around the field killing weeds, and we have a Kalman filter running that tracks the position of the robot by predicting the state into the future and then updating that state using the measurements coming from the IMU and encoders. Once we know the state of the robot, we use the camera, which you can see on the right side at the boom, to detect weeds using a deep learning technique, and that algorithm sends the position of the weed to the state estimation algorithm. Once the state estimation algorithm has the position of the weed, we keep tracking the position of the robot, and once the robot is on top of a weed, we switch on the sprayer, spray the chemical, and kill the weed. That's how we are using the Kalman filter for our own application. What I just discussed is shown here in this diagram: the camera detects the weed using our deep learning algorithm, and then we have a Kalman filter that tracks the position of the robot and tells us when the sprayer is on top of a weed, at which point we switch on the sprayer, spray the chemical, and kill the weed. But first, a detour.
Let's talk about the Bayes filter, continuing with the previous example. Say a TartanSense weeding robot is in the field and its initial state, its initial position (x, y), is x0. Assume the robot is moving at a constant speed; call that u1. Now, we know where the robot is right now and we know its average speed, so it's easy to predict where the robot will be in, say, five or ten seconds. Call that x1', and that's our prediction. Then we get a new set of measurements from the IMU and encoders; call that z1. Now we have a prediction and we have our measurements, and we fuse both of them using some math, which we'll get to soon, to get an updated state estimate x1, which tells us the state of the robot with higher precision and accuracy. This x1 becomes the input for the next iteration. For example, your prediction might be that the robot is at (10, 2) at the end of 30 seconds, and after you incorporate the measurements you get a more accurate state estimate, say x1 = (9.8, 2.1) at the end of 30 seconds. This x1 becomes the input for the next iteration, and the process repeats for as long as the robot is in the field. So basically, what the Bayes filter does is help us estimate a probability density function, using a series of measurements and some math, recursively over time.

Before we go into the equations, let's go through this block diagram. To reiterate, the Bayes filter, like the Kalman filter, has two main steps: predict and update. One of the inputs to predict is the control, which in the previous case was the average speed of the robot, and the other input is the belief from the previous time step, bel(x_{t-1}). The prediction step, using the dynamic model of the system, predicts the state into the future; call that bel'(x_t). That bel'(x_t) becomes the input to the update function, and along with it, update also takes in the measurements, which were the IMU and encoders in our previous example. We fuse the prediction with the measurement in the update step, and finally output a more accurate state estimate. That becomes the input to the predict step, and the cycle repeats.

Sorry, just to be clear: the robot is never actually traveling at a constant speed. It's physically not possible for a robot to travel at a constant speed, especially in farm conditions, because the surface is really rough. Even if you command a constant velocity, say 0.3 meters per second, it never actually goes at exactly 0.3 meters per second. That's why we have encoders and other sensors that tell us what speed the robot is really going at. As I mentioned earlier, that can be because of wheel slippage, or motor bias, or issues like that.

Now, coming to these equations: bel'(x_t) is exactly analogous to x1' in this example, and bel(x_t) is exactly analogous to x1. This is just a mathematical representation of what we have been discussing; everything else remains the same.
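As a concrete sketch of this predict/update recursion, here is a minimal one-dimensional histogram Bayes filter over corridor cells. This is my own toy code, not from the talk: the motion model (the robot moves roughly one cell per step, with some noise) and the door likelihood values are made-up assumptions for illustration.

```python
import numpy as np

def predict(belief, move_noise=(0.1, 0.8, 0.1)):
    """Predict step: shift the belief ~1 cell forward, blurred by motion noise.
    move_noise = P(moved 0, 1, or 2 cells), a toy motion model (wraps at the ends)."""
    new = np.zeros(len(belief))
    for shift, p in enumerate(move_noise):
        new += p * np.roll(belief, shift)
    return new

def update(belief, likelihood):
    """Update step: multiply the prior belief by the measurement likelihood, then normalize."""
    post = belief * likelihood
    return post / post.sum()

# Corridor with 10 cells; doors at cells 1, 4, 7, like the three-door example.
# Likelihood of the camera reporting "door" from each cell (assumed values).
doors = np.array([1.0 if i in (1, 4, 7) else 0.2 for i in range(10)])
belief = np.full(10, 0.1)          # uniform belief to start

belief = predict(belief)           # robot moves about one cell
belief = update(belief, doors)     # camera reports: "I see a door"
```

After one cycle the belief concentrates on the door cells; with more motion and more measurements it sharpens further, which is exactly the narrowing-Gaussian behavior described above, just with a histogram instead of a Gaussian.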
If you look at the first equation, what we're trying to do is incorporate the control u_t, which in our case is the average speed of the robot, into the current state of the robot, x_{t-1}. We find the probability that the robot, whose current state is x_{t-1}, reaches state x_t when it moves at the average speed u_t. So basically: what is the probability of reaching x_t, given your control and your current state? We multiply that with the belief output from the previous step (summed over all possible previous states), which was (10, 2) at the end of 30 seconds in our previous example. Then you take this prediction and multiply it with the measurement you should get at state x_t. The first term in the second equation is the probability that the robot observes a certain measurement when it is at state x_t, and you multiply the two to get a more accurate state estimate, which in our previous case was (9.8, 2.1). This becomes your input for the next step, and the loop continues.

So we have been talking about the Bayes filter and we went through a few examples, but what exactly is a Kalman filter? If you know what a Bayes filter is, a Kalman filter is exactly like a Bayes filter with just two conditions. The first is that all the variables we have discussed have to be normally distributed, and the noise has to be Gaussian. The second is that it only works for linear systems; for example, it won't work for cases where the variables have sine or cosine relations among themselves. We'll get to the nonlinear case later, but the Kalman filter only considers linear systems. And again, to reiterate, the Kalman filter has two main steps. The first is predict, where you use the system model and predict the state into the future, along with some uncertainty. The second is the update step: once you have the prediction and you get the measurement, you fuse both of them to get a much better estimate of the state of the robot.

One important thing to notice here is that once we have the measurement, the Kalman filter figures out, using a bunch of math, which estimate should be given more importance: should we trust our prediction more, or our measurement? So this is what a Kalman filter is, and as mentioned, it has two main steps, predict and update. We start the process with some initialization: we initialize the system state and its corresponding uncertainty. To initialize, we can use our best educated guess, some historical data, or past behavior; it doesn't have to be super accurate. Since we haven't got any measurement yet, there is no update step and no output, so in the first iteration the initialization goes directly to the prediction step. In the prediction step, we use the dynamic model of the system and predict the state into the future, along with its estimated uncertainty. This becomes the input to the update step. Once we get the measurement, meaning we have the measurement data and some uncertainty associated with it (we can find the measurement uncertainty from data, or it's generally provided by the equipment manufacturer), we use the measurement uncertainty along with the state estimate uncertainty from the prediction step to find the Kalman gain. The Kalman gain is the key that tells us whether to give more importance to the measurement or to the prediction. Once we have the Kalman gain, we use it to estimate our current state and its corresponding uncertainty. This is the output of the Kalman filter, which you can take and use, and it is then also fed back to the prediction step, and the cycle repeats. Now let's dig deeper.
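Before the matrix form, the initialize, predict, update cycle just described can be sketched as a one-dimensional (scalar) Kalman filter. This is my own minimal illustration, with a constant-velocity model and made-up noise values and readings, not TartanSense's actual implementation.

```python
def kf_predict(x, P, u, q):
    """Predict: constant-velocity model x <- x + u; uncertainty grows by process noise q."""
    return x + u, P + q

def kf_update(x, P, z, r):
    """Update: fuse the prediction (x, P) with a measurement z of variance r."""
    K = P / (P + r)              # Kalman gain: prediction error / total error
    x_new = x + K * (z - x)      # pull the estimate toward the measurement
    P_new = (1 - K) * P          # fused uncertainty is always smaller than P
    return x_new, P_new

# Initialization: a rough guess is fine, as long as the uncertainty is large.
x, P = 0.0, 100.0
for z in [1.1, 2.0, 2.9, 4.2]:   # noisy position readings; robot moves ~1 m per step
    x, P = kf_predict(x, P, u=1.0, q=0.05)
    x, P = kf_update(x, P, z, r=1.0)
```

Note how P shrinks with every update: the filter starts out trusting the measurements almost completely and gradually balances them against its own predictions.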
I know there's a lot going on in this slide, but let's walk through an example. Say we have a TartanSense robot and its current state, its position (x, y), is x_{t-1}. We know it's a linear system, because the Kalman filter only works for linear systems, so let's assume for now that the TartanSense robot is only moving linearly; we'll get to the nonlinear case later. What the Kalman filter does is help us estimate the state of the robot: it helps us model the linear system along with the noise in the system, combines the two, and keeps track of the noise in each of the input parameters. Since it only works for linear systems, our predicted state x_t is a linear combination of our current state x_{t-1} and the control u_t. Basically, when we apply, say, the average speed to the robot at x_{t-1}, it reaches x_t, and since all systems, all processes, have some noise, some Gaussian noise is added. A and B depend on the system: for example, if you are assuming a constant-velocity model, that the robot is always going at constant velocity, you can use the equations of motion to come up with A and B. The same goes for the measurement: z_t is the measurement the robot should ideally observe when it is at state x_t, again with some Gaussian noise added. C_t also depends on the system; it's the matrix that transforms the state space x_t into the measurement space z_t. x_t is the state of the robot, z_t is the measurement we are getting, and again this is only true for linear systems.

We know that these variables are represented by Gaussians, meaning each has a mean and a covariance. So to find x_t we have to find a mean and a covariance, and that's what we are doing in the first two equations of the Kalman filter algorithm, shown in red. We split equation one, x_t = A_t x_{t-1} + B_t u_t, into two equations: in one we find mu, and in the other we find sigma. The equation for mu is exactly the same as the one for x_t, with x_t replaced by mu; this is our predicted mean. The predicted covariance is a function of the covariance from the previous time step, with some Gaussian noise added. This mu and sigma, shown in red, is the predicted state.

Before we move ahead, let's go back to our robot. We know its state is x_{t-1}, shown with the red Gaussian in figure one. Now we get a set of measurements from the IMU and encoders, shown by the blue Gaussian in the second plot. Our task is to fuse these two Gaussians, the prediction shown in red with the measurement shown in blue. How do we do that? That's where the Kalman gain comes into the picture. What exactly is the Kalman gain? Simply put, the Kalman gain is the error in the prediction divided by the error in the prediction plus the error in the measurement. It's quite intuitive if you think about it: if the error in the measurement is really high, our predicted state estimate is more reliable than the measurement, meaning K will be close to zero and we'll mostly use the prediction as our current state. On the other hand, if the error in the measurement is really low, the measurements are more reliable than our prediction, and K will be close to one. Once we have the Kalman gain, we use it to find the current state, mu_t and sigma_t. So we use the Kalman gain and come up with a new, updated state, shown with the orange Gaussian in figure three. As you can see, the orange Gaussian is much narrower; it has less covariance, which means we are more certain of the state of the robot after the fusion. Once we have that, the process repeats itself.

So this is what a Kalman filter looks like. Before we move ahead, let's talk about a very specific example. Say the state estimate uncertainty, shown here in red, is 400, the measurement uncertainty of the IMU is 144, and the measurement uncertainty of the encoders is 36. In reality each of these is a matrix, but let's use simple numbers just for discussion. Now the state estimate, the prediction, says that the robot has moved two meters; then the IMU comes in and says the robot has actually moved 1.2 meters. Whom do we trust? How should we fuse this? That's where the uncertainty comes into the picture. We know the IMU uncertainty is 144, which is much less than the state estimate uncertainty of 400, so we give more importance to the IMU data, and our updated state estimate lands at, say, around 1.3 meters. The same goes when we add the encoders: we place the most trust in the encoders, because they have the least uncertainty, 36. One thing to keep in mind is that after the Kalman gain calculation we update all these uncertainties and use the updated values in the next iteration. So this is it; this is what a Kalman filter looks like. As I'm sure you can already tell, there are a few shortcomings of this Kalman filter, which we touched on earlier.
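Plugging the numbers from that specific example into the scalar gain formula shows the mechanics. The 2 m prediction, the 1.2 m IMU reading, and the three uncertainties are from the slide; the 1.3 m encoder reading is a value I've made up to complete the illustration.

```python
def fuse(x, P, z, r):
    """One scalar Kalman update: gain = prediction error / (prediction + measurement error)."""
    K = P / (P + r)
    return x + K * (z - x), (1 - K) * P

x0, P0 = 2.0, 400.0                      # prediction: moved 2 m, uncertainty 400
x1, P1 = fuse(x0, P0, z=1.2, r=144.0)    # IMU says 1.2 m, uncertainty 144
x2, P2 = fuse(x1, P1, z=1.3, r=36.0)     # encoders (hypothetical 1.3 m), uncertainty 36
```

The IMU update lands near 1.41 m, close to the rough figure quoted above, and each fusion also shrinks the uncertainty, which is exactly the "update all these uncertainties for the next iteration" point.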
It only works if the variables are normally distributed and the noise is Gaussian. That might be okay for a lot of cases, but there are still many where it fails, and we'll get to that soon. A bigger assumption, though, is linearity, and linearity rarely exists in real life. Take a simple example: you're driving home, and you keep driving at a constant velocity until you see a car in front of you. You get this measurement and use it to update your velocity to avoid a collision; say you decelerate and reduce your speed. It's this deceleration which induces nonlinearity into the system, and you can no longer use the Kalman filter. So what do we do? That's where we have the extended Kalman filter. If you remember, in the Kalman filter the predicted state is a linear combination of the current state and the control; in the extended Kalman filter, the predicted state becomes a nonlinear function of the current state and the control. Let's call that function g, and similarly, let's call the measurement function h. That's good: now we have two nonlinear functions. But this actually creates a few problems. The first is that the belief is no longer Gaussian, because if you pass a Gaussian through a nonlinear function, the output doesn't remain a Gaussian, and that can be a huge problem for us: all the equations we had were based on the assumption that the belief is a Gaussian, and those equations won't be valid anymore. So what's the solution? Well, as simple (or stupid) as it may seem, we just linearize the function using a Taylor expansion: we linearize the nonlinear function around the mean. I know there are a few issues with the Taylor expansion, and we'll get to them, but at least for now we have a linearized function for further processing, and everything else remains the same. Don't worry too much about the algorithm here.
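For the curious, though, here is what one scalar EKF cycle looks like in code. The motion model and all the numbers are my own toy assumptions for illustration, not the system on the slide; the point is only where the derivatives of g and h enter.

```python
import math

def ekf_step(x, P, z, q, r, g, g_prime, h, h_prime):
    """One scalar extended-Kalman-filter cycle: linearize g and h around the mean."""
    x_pred = g(x)                          # nonlinear prediction of the state
    G = g_prime(x)                         # Jacobian of g (a plain derivative in 1-D)
    P_pred = G * P * G + q                 # propagate uncertainty through the linearization
    H = h_prime(x_pred)                    # Jacobian of the measurement model
    K = P_pred * H / (H * P_pred * H + r)  # Kalman gain with the linearized h
    x_new = x_pred + K * (z - h(x_pred))   # correct using the measurement residual
    P_new = (1 - K * H) * P_pred
    return x_new, P_new

# Toy nonlinear motion model (made up): x <- x + 0.5*sin(x), direct position measurement.
x, P = ekf_step(
    x=1.0, P=0.5, z=1.45, q=0.01, r=0.1,
    g=lambda s: s + 0.5 * math.sin(s),
    g_prime=lambda s: 1 + 0.5 * math.cos(s),
    h=lambda s: s,
    h_prime=lambda s: 1.0,
)
```

Structurally it is the same predict/update loop as before; only the model functions and their derivatives have changed.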
The algorithm on the slide is exactly the same as what we discussed earlier; the linear pieces have just been replaced by their nonlinear counterparts. Okay, that's good. Now we've solved the nonlinearity problem and our system will work for nonlinear cases. But can you think of a case where even this will fail?

To talk about where the extended Kalman filter can fail, let's go back to our original example. If you remember, in that example we knew the initial state of the robot, and the robot was moving in a one-dimensional corridor with three doors, marked one, two, and three. But let's talk about a more real-life scenario: say we don't know the initial state of the robot, the doors are not marked, and we have the same task of keeping track of the state of the robot. The robot starts moving and is in front of a door, and the camera sends a measurement that there's actually a door here. But since there are no markings on the doors, we are not exactly sure which door it is, so the probability of the measurement is shown by three Gaussians next to the positions of the three doors; it can basically be any of them. The same goes for the belief about the robot's state: the robot can be in front of any of those three doors. Now the robot starts moving again, and again it becomes more uncertain of its own state because of the motion, as you can see with the Gaussian. Then it gets a new measurement that there's a door present; again, the measurement probability says it can be in front of any of the three doors. What we do now is fuse this measurement probability with the state estimate from the previous plot to figure out where exactly the robot is, and as you can see in the second-to-last plot, there are actually five Gaussians, each with some probability of the robot being there. But clearly one of them is much more probable than the others, because that Gaussian is really narrow: it has less covariance, so we are more certain of the state of the robot there. So, without knowing the initial state of the robot, we are able to localize it. Now the robot starts moving again and becomes more and more uncertain of its own state, until it gets a new measurement. Clearly the extended Kalman filter won't work here, because as we discussed, it only works for Gaussians, and unimodal Gaussians at that, whereas here we have a multimodal distribution. There are other algorithms which can handle this, which we will touch on in the coming slides, like particle filters, histogram filters, and sums of Gaussians, but the extended Kalman filter will not work here at all.

Where else will the extended Kalman filter fail? If you are a little familiar with the Taylor expansion, you will know that it does a really bad job of linearizing a highly nonlinear function, so our estimates will be pretty bad. The same goes if the Gaussian is really wide, meaning the degree of uncertainty is really high. Is there a better way to linearize? There is; that's where the unscented Kalman filter comes in. Simply put, in one line: instead of taking just one point, as in the extended Kalman filter, where we took the mean and linearized the function around it, the unscented Kalman filter takes a bunch of points, called sigma points, passes those points through the nonlinear function to get a non-Gaussian output, and then simply finds the best approximate Gaussian fit. Since our belief is then a Gaussian again, everything more or less remains the same. One rule of thumb to decide which Kalman filter to use in which situation: if it's a linear system, use the Kalman filter; if there's minor nonlinearity in the system, use the extended Kalman filter; if it's a highly nonlinear system, I'd suggest you give the unscented Kalman filter a shot.

I would like to end by giving you a glimpse of an alternative family of algorithms called non-parametric filters.
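Before that, here is the unscented transform from a moment ago in code form. This is a one-dimensional toy of my own (the sigma-point weights follow the standard symmetric scheme; the sine measurement model and all numbers are assumptions for illustration).

```python
import math

def unscented_transform_1d(m, P, f, kappa=2.0):
    """Pass a 1-D Gaussian N(m, P) through a nonlinear f and refit a Gaussian."""
    n = 1
    spread = math.sqrt((n + kappa) * P)
    pts = [m, m + spread, m - spread]                  # the sigma points
    w = [kappa / (n + kappa)] + [1 / (2 * (n + kappa))] * 2
    ys = [f(p) for p in pts]                           # push each point through f
    mean = sum(wi * y for wi, y in zip(w, ys))         # refit: weighted mean...
    var = sum(wi * (y - mean) ** 2 for wi, y in zip(w, ys))  # ...and variance
    return mean, var

# Example: a sine measurement model, which a plain Kalman filter cannot handle.
m, P = unscented_transform_1d(0.5, 0.04, math.sin)
```

Compare this to the EKF, which would evaluate the derivative of sin only at the mean; here three points sample the function's curvature, so the refit Gaussian captures it better when the nonlinearity is strong or the input Gaussian is wide.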
Until now we were discussing Gaussian, or parametric, filters, because they carried the Gaussian assumption. These non-parametric filters have no such assumption, and two of the most common ones are histogram filters and particle filters. Both of them work really well in the robot scenario where the EKF failed: in histogram filters, the belief is represented by a histogram, and in particle filters we use a bunch of weighted particles. If you would like to discuss any of these algorithms, please let me know; I'll be happy to do that. I hope this was at least a little useful and you got something out of it, and if you would like to discuss how you can use any of these algorithms in your own project, please let me know. Thank you.

Audience: [A question about applying the Kalman filter to the project management example.]

Naman: Yes, so basically the problem there is similar to deep learning: if you have very little data, you cannot do much. What we want is either a really long project, where you have a lot of sprints going on and you get a lot of measurement updates, or a hierarchical project, where you can take inputs from different sub-projects and come up with a better estimate. For a very small project there are almost no measurement updates, so you're just predicting and predicting, but you're not able to improve your state estimate or reduce your uncertainty. So ideally, the project management use case is better suited to a longer project, or one with more measurement updates. We can discuss this in detail after the presentation as well.

Audience: For your robotics case, one approach is to look at it very mathematically, the Kalman filter and everything; the other is to look at the physics of motion.
Yes, the physics of motion. So you can have a physics model, or you can have a Kalman model where you forget about the physics and just put in those matrices; for a longer-range prediction, which one did you find working really well?

Yeah, so basically, as I discussed, the Kalman filter actually uses a physics model inside it. For example, if the robot is going at a constant velocity, then you have to use some basic equations of motion to predict where the robot will be, say, after 10 seconds. So that's where the physics comes into the Kalman filter itself, and I think that's the right combination: to model any system, like the motion of a vehicle, you need physics, and you use that inside a Kalman filter to better predict and estimate the state. The ideal is a combination of both.

Hi there, really cool talk. A quick question about the mapping of your farm, I guess: do you do this in a one-shot kind of scheme, where you assume no map to begin with, or do you map it once and then run the robot in it?

So we have no map, and we don't actually need a map either, because everything is happening in relative terms, locally. We keep track of the state of the robot, and when our deep learning algorithm tells us the position of the weed, we know when the robot is on top of a weed, switch on the sprayer, and kill the weed. Since everything is happening in relative terms, we don't need an absolute map for this application.

Hi, so this is regarding the application of the Kalman filter in agriculture.
This is more around the uncertainty of the agricultural field: if at the last moment a bird, let's say, flies down, the camera might not detect it. So either the camera won't detect the weed, or there will be a problem with the state estimation, and the robot might collide with the bird or any uncertain object that comes into contact with it.

We haven't faced that problem so far, one of the reasons being that the cameras are pointing downwards, about a meter from the surface, and it's a moving machine, so birds and other animals will generally stay away from it.

There is a margin of error you have, right, and that is what you're handling. At what point does this break? For example, in a non-agricultural use case, suppose the vehicle is moving at, say, 300 kilometers per hour, so there is a registration error. At what point does the Kalman filter fail?

So if a vehicle is moving at such a fast speed, the processing has to be really, really fast: we need to get data from all the sensors at a really high FPS and do the fusion at a really high FPS. That's one of the challenges when the vehicle is moving really fast. If I take the example of self-driving, let's say the car is moving at 80 kilometers per hour, which is very different from our agriculture robot's few meters per second. In that case we need cameras with long vision; we should be able to see far ahead in front of the car or the robot, so we can use that information to come up with a better estimate. For example, as I mentioned, if we detect a pedestrian far away but the car is moving at 100 kilometers per hour, we need to know the pedestrian's state and be able to predict where the pedestrian will be in, say, 10 seconds, so that we can maneuver the car accordingly. If we only get that information at the last second, we can't do anything. So for high-speed applications I think it's all about being able to see far into the future and predict more accurately.

Just a follow-up question: what kind of computation power are we talking about here?

Right now we are simply testing, and this in particular doesn't require a lot of computation; the Kalman filter inherently doesn't require much. We are actually bottlenecked by the deep learning application, which requires a lot of computation. Running the Kalman filter's equations doesn't take much compute; the computation comes in how you get data from the camera, LiDAR, and GPS and feed it to the Kalman filter, not in the implementation of the Kalman filter itself.

My question is: there are two applications that I see. One is when your object is actually moving, like when a human being is moving, and another is when the weed is stationary. So when should you apply it, when the object is moving or when it's stationary? I don't know much about it, but what I feel is that if it is dynamic, then you should be more keen on applying something like a Kalman filter, rather than when it is stationary.

In the case of the weed, yes, the weed is not moving, but our robot is moving, and that's why we are using the Kalman filter: to track the position of the robot, which is continuously moving. The weed being stationary is secondary; it could be a cotton plant as well. The primary thing is to be able to track and know where the robot is with high precision, so that we can tell when the robot is on top of a weed. And in both cases, even for the robot: let's say we get data from the IMU and encoders at 10 hertz, or take the worst case, 1 hertz. We still have to predict how the robot behaves in between measurements. That's where we need the Kalman filter even for the robot, because the frequency at which we get data is not a thousand hertz or a megahertz; especially for IMU and encoders we get data at a lower rate, like 10 or 100 hertz. We still need the ability, using the physics of the robot, to say where the robot will be after some time. The same goes for the pedestrian case: for a person we have some physics, we know a person's average speed, and if we are continuously tracking a person, we also use how that person has moved in the last five seconds or five minutes; that information helps us know where the person will be in, say, 10 or 20 seconds. If you have a car going at, say, 100 kilometers per hour, ideally we would include that information and try to maneuver around the person. So I think it's applicable in both cases to improve the state estimate.

Yes, so that's one of the challenges we are also trying to figure out right now. The problem is not with the state estimation; the state estimation can actually be decently accurate, up to a few centimeters. A bigger problem for us is detecting the weed when it's near a cotton plant: if we have a top-facing camera and the weed is actually under the cotton plant, you cannot see the weed, and that's a bigger problem for us right now than state estimation. There are other options, like detecting the weed from the side, because after a few days, when the cotton plant grows tall, you cannot see the weed from the top view. Since we are doing everything relatively, locally, state estimation is much less of a problem for us than actually detecting the weed using deep learning. When that happens, we don't spray, because our highest priority is not killing the cotton; the second priority is killing the weed. So if there's a weed very close to the cotton, we simply don't kill the weed, because we don't want to damage the crop.
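The pattern discussed in the Q&A, predicting with the physics model at a high rate and folding in a measurement whenever one arrives, can be sketched in 1-D as follows. The speeds, rates, and noise variances here are made-up illustrative values, and the scalar predict/update equations are a deliberately simplified stand-in for the full matrix form.

```python
def kf_predict(x, P, v, dt, q):
    """Predict step: dead-reckon with the constant-velocity physics model
    and grow the uncertainty P by the process noise q per second."""
    return x + v * dt, P + q * dt

def kf_update(x, P, z, r):
    """Update step: blend in a position measurement z (e.g. a GPS fix)
    with variance r, which shrinks the uncertainty."""
    K = P / (P + r)                      # Kalman gain
    return x + K * (z - x), (1.0 - K) * P

# Robot driving at 1 m/s: predict at 10 Hz, but a GPS-like fix only at 1 Hz
x, P = 0.0, 1.0
for step in range(1, 21):                # two seconds of driving
    x, P = kf_predict(x, P, v=1.0, dt=0.1, q=0.05)
    if step % 10 == 0:                   # measurement arrives once per second
        x, P = kf_update(x, P, z=step * 0.1, r=0.25)
```

Between fixes the variance P only grows, which is exactly the "more and more uncertain until a new measurement" behavior described earlier; each update then pulls it back down, and the same loop keeps working if the GPS drops out entirely, with the filter coasting on prediction alone.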