Hi. So, first, a bit about myself. I have two sides: on one side I'm an entrepreneur and CEO, and on my other side, during the nights, I really enjoy participating in deep learning competitions. This has been going on since 2017, and typically I spend my time in a place called Kaggle, which is a website where people from all over the world compete. Typically companies have a problem and they want to crowdsource, if you will, the solution to different teams with different perspectives, and it has really been gaining momentum in the past few years.

So you can see why I got really excited when I was browsing through Twitter and I saw this guy who I follow, a Chinese guy who happens to be doing his PhD in the US, and he said something along the lines of... well, the way I read it was: OK, so you guys do these fancy competitions in the West on Kaggle, but in China we have these places. And he listed different websites which I had no idea existed. So I went into one of these websites, and actually I wanted to do a few of them; they were really, really challenging problems. In China there's a whole world of deep learning competitions that, at least in the West, I didn't know anything about.

So I went to one of these places, and of course everything is in Chinese, but I hit translate (I don't speak Chinese), and I get the sense that this is a competition about LiDAR, and it sounded pretty decent. So I decided to join. I found the join button, and then the registration... basically, in China everything happens on the internet and you have to have a cell phone; China is really mobile-first.
So the authentication required you to have a Chinese mobile phone number, which I didn't have, so I was a bit frustrated. But I am a persistent person, so I sent an email to the organization: hey guys, I would really like to join this competition, can I join? No response. I got a bit frustrated. Then I also read in the MIT Technology Review that China has this great master plan to become an AI powerhouse by 2030, and they said: we are a bit isolated, we really want to be more collaborative with the world, etc. And I'm thinking to myself, what hypocrites these guys are, boasting about how open they are going to be while I cannot join the competition. So I write again. No response to the first email, so I write again: hey guys, it's me again, the guy from Spain, I really want to join this competition. No response. I got really frustrated, and I almost wanted to tweet at the Chinese prime minister: what the fuck is going on, guys, with your openness, collaboration and deep learning? I cannot join! And then I'm thinking: OK, Andres, hold on, because if you ever want to go to China and you do this, you may not be able to come back, or you may be sent to a re-education camp or something, I don't know.

So I compose the tweet, and I was about to send it, but then I go to the website again and, oh my god, they added Spain. So there's China, Hong Kong, Taiwan and Spain. Hopefully we don't become a province of China. So I joined, and when you join this competition you have to pick a name for your team. Now, I have this friend... I live in Alicante, and I have this friend from Madrid.
I'm not from Madrid, and I have no idea about almost anything in Madrid, and he tells me he lives in a neighborhood called Sanchinarro. And I'm thinking to myself: these guys in Madrid are crazy. Do they also have a San Coreano, or some San Japonés too? Because for an English speaker, Sanchinarro could almost be translated as a saint ("San") plus a big Chinese guy ("chinarro"). So that's why it was all Sanchinarro in my head; I don't know why, but I spent the whole day thinking about Sanchinarro, and when I joined this competition I put that in my team name. Not that we knew then that eventually we were going to win.

So of course I got asked about it: the Chinese guys really wanted to know what Sanchinarro meant, and it was very difficult to explain. So what I did is this. "San" is like a saint, right? And in China, in the same way that in the West we have Aristotle, from the Greeks, they have Confucius, a very wise person, and they have an honorific for naming wise people. So I said: these first two characters are like the equivalent of a wise person. And "china" in Spanish can also mean a stone, by a long stretch. And then the other part... do you know jade, the precious green stone?
So I tell the guys that Sanchinarro means "wise jade stone", and they said: oh my god. So that's just the name. That's kind of my bio, and I joined this competition with a friend of mine who I had never met in person; I met him collaborating in competitions. So we embarked on this challenge.

Now, going back to the competition: what this is really about is that they give you LiDAR point data, what is called a point cloud. If you look at self-driving cars, they have this device on top of them, except for Tesla, which for cost reasons cannot add a LiDAR. Almost all of the others, especially self-driving fleets like robotaxis, do have one: Uber has this ugly thing on top, Waymo has it, etc. So everybody uses LiDAR. There are many reasons why you want to use LiDAR; essentially, it works during the night. The way it works is that it has multiple laser beams that spin in cycles, turning very quickly; they project a laser beam, it reflects, and they measure two things: the distance and the amount of reflected intensity. They are not able to see colors; they actually work in infrared, but different materials have different reflectivities. Another good characteristic of LiDAR is that it's an active sensor, not a passive one, in the sense that it does not depend on the Sun, so it works during the night, and it also natively gives you a 3D representation of points. What I'm showing in this video is the top view of the LiDAR, but in reality you have 3D points: you have the x, y and z coordinates of each and every point you get, plus the intensity. So you have the position and the intensity; that's what you have. Now, about this competition: I don't speak Chinese, but luckily there are formulas, which are universal.
So they ask you to classify each and every point. For those of you who know about deep learning, this is called a segmentation problem: for every point, they ask you to classify it into one of seven categories, like pedestrians, cars, motorcycles, crowds, vans, etc. Seven categories, and then they rank you with an objective score. That's what I like about competitions: it is not about how elegant the solution is according to some subjective judge; there is a metric, and if you do well, you win. That's it. Obviously, Alibaba is interested in this for commercial reasons.

So in this competition you have to get the best score you can, plus you need to be able to process the data in real time, and by real time they mean you have to process each frame, each cycle of the LiDAR, in less than 100 milliseconds on a GTX 1080 GPU and an i7 CPU. That's what they ask of you. Also, the training data they give you is heavily imbalanced. What this means is that you may have lots of some classes and few of others; actually, the most abundant class is the floor and the environment, just background, and then there are less abundant classes, like vans for example. In China there are lots of motorcycles, so those were abundant. But the classes are heavily imbalanced, which poses some challenges for deep learning. So that's what we were asked to do. Now, the first thing, if you've ever faced a deep learning problem, or even a machine learning problem (even more so than a deep learning one), is to know what type of inputs you get.
The challenge is that the LiDAR data... well, what you see here is not really it; this is a 2D projection. We are projecting a 3D point cloud into 2D, viewed from the outside. Now, the beauty of LiDAR is that you can project into 2D in many different ways: this is a perspective from the outside, but you could also project it as seen from the sensor, or from the top, or from the side. The important thing is that each LiDAR frame (a frame means one cycle of the lasers) has about 57,000 points, and each point is a reflection.

Now, something that is challenging, if you've ever faced a deep learning problem: the native format of LiDAR is just to give you points, the x, y and z coordinates plus the intensity, and every frame has a different number of points. Not every frame has 57,000; some frames have more points, other frames have fewer. Also, the order of the points doesn't really matter. In an image, if you think about it, the order of the pixels is really important: if you shuffle the pixels, you get garbage. In this case, because you're given the x, y and z coordinates, the position is encoded in the point itself, so the sequence in which the points are given is irrelevant in that sense. It's not really a sequence; it's really a set, a set of points. So that's the first issue. OK, so how do we deal with LiDAR point clouds?
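To make the "set, not a sequence" point concrete, here is a tiny NumPy sketch with made-up numbers (not competition data):

```python
import numpy as np

# A frame is an (N, 4) array of (x, y, z, intensity); N varies per frame.
frame = np.array([
    [12.3, -4.1, 0.2, 0.55],
    [ 8.7,  2.9, 1.4, 0.12],
    [30.0,  0.5, 2.1, 0.80],
])

# The frame is a *set*: shuffling the rows changes nothing semantically,
# because each point carries its own coordinates.
rng = np.random.default_rng(0)
shuffled = rng.permutation(frame, axis=0)

# Same multiset of points, so order-insensitive statistics agree exactly.
assert np.allclose(np.sort(frame, axis=0), np.sort(shuffled, axis=0))
```

Any model that consumes the raw points directly has to respect this permutation invariance, which is exactly what makes the problem awkward for standard image architectures.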
In this competition, and not only in China, here in Kaggle as well, to win these competitions, or even to score in the top 1%, you need to do two things. The first one is obvious, but it's difficult: you have to do everything right. Whatever you do, you have to do it right; there's no room for mistakes in any of these competitions. And the second thing, if you really want to be on top, and by top I mean top 10: you have to do something different. So that's the deal: you have to do everything right, and it has to be different.

When I embark on this kind of challenge, I always try to approach it using my weaknesses as a strength, and ask: what is everybody else doing? What is the state of the art in LiDAR recognition? If you scan through publications, you will see that what people do is take the point cloud and build projections from the top and from the sides, so they build images. And then, because image recognition is really the most advanced application of deep learning (image classification is very mature, and image segmentation is also very mature), if you are able to move your problem from a 3D point set, which is difficult, to an image problem, you are at least getting closer to something workable. But I said: OK, I cannot do the same as other people who've been working on this problem for years, because they are going to really beat me there. So instead I started exploring what the sensor is really computing, thinking about what the sensor actually does. This is kind of how it works.
In this case there are 32 lasers. Each laser is fixed, one pointing at this angle, another pointing at that one, and these fixed lasers are spinning; they spin very quickly to capture the environment. The azimuth goes through a full cycle, from 0 to 360 degrees, or from -180 to +180, and there is one laser for each zenith angle, the elevation angle, if you like. That's how it works.

Now, if we take a frame... in this competition we were given frames as CSV files, so each line of a frame has the three coordinates, x, y and z, in meters, plus the intensity. If we convert the x, y, z coordinates into polar coordinates in 3D, we get a distance and two angles. And if we plot the azimuth angle, we see something we expect: it's spinning, so in this case it goes from zero all the way down to -180, then it flips to +180 and comes back, as you would really expect. But you will see that sometimes it drops to zero. That's a weird thing: for the most part it is linear, but sometimes it goes to zero. We'll see what's going on there later. The important thing is that, at least, it looks like we are given the data ordered in the same way the sensor captured it; that's what is of interest to us at this point.

Now, if we take just the azimuth angle, in the same order as it is read from the sensor, and plot it into a matrix, an image, where each horizontal line is one laser (the laser that points to the sky is the top line and the laser that points to the floor is the bottom line), and we plot the azimuth angle in color, we can visualize the whole frame at once.
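The Cartesian-to-polar conversion just described can be sketched as follows; the angle conventions (azimuth from `arctan2`, zenith from `arcsin`) are my assumptions, not necessarily the sensor's exact definitions:

```python
import numpy as np

def to_polar(points):
    """Convert (N, 3) Cartesian LiDAR points to (distance, azimuth, zenith)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    dist = np.sqrt(x**2 + y**2 + z**2)               # range to the reflection
    azimuth = np.arctan2(y, x)                       # horizontal angle, -pi..pi
    zenith = np.arcsin(z / np.maximum(dist, 1e-9))   # elevation above horizon
    return dist, azimuth, zenith

# Three toy points: straight ahead, to the left, straight up.
pts = np.array([[1.0, 0.0, 0.0],
                [0.0, 2.0, 0.0],
                [0.0, 0.0, 3.0]])
d, az, ze = to_polar(pts)
# d -> [1, 2, 3]; az -> [0, pi/2, 0]; ze -> [0, 0, pi/2]
```

Plotting `az` in reading order is what reveals the sawtooth sweep (and its glitches) described above.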
In that plot, you see that it goes from a purplish color to dark blue, and then to yellow, which is 180, and then it wraps back to the same color. This is just visualizing the 1D representation in a different way: we stack it into a 2D matrix and visualize that matrix. And we see something. Most of it is as expected, but we see something weird in the center: the lines look kind of shuffled. What could this be? If we zoom in and do the 1D representation again, we see this pattern: each sensor's angle goes down, but the lines are shuffled among each other.

I don't know how many of you remember this: in Spain, many years ago, maybe in the 90s, we had pay TV, Canal+, and it was scrambled, so you had to have a decoder to be able to watch it. Back then I was at university, and with a few friends we built a decoder for it; we kind of cracked the algorithm. And the way we cracked it was not with cryptography, although that was one of my passions back then, but by exploiting the correlation between the lines of the image: because a natural image has strong line-to-line correlation, if someone is just reshuffling the lines, you can find the reordering that maximizes the correlation between lines. It's very simple.

So that's what we did here: we found the shuffling of the angles. I don't think this was intentional, and it's not really encryption, but the same principle applies. I think it's just that the lasers are mounted shifted, because they have to be physically close to each other.
They may need some arrangement so that they fit in a small space. So you do a few correlations and you can find the reordering that maximizes the correlation between lines, then some fancy NumPy shuffling, which is very simple, and you can rearrange the rows. Now, if I shift every line in the right way, we get this: it's very gradual, and the azimuth angle from one laser to the next is almost the same, so we've arranged it in a nice way.

Now, to compare: this is the original, as we are given it from the sensor in the competition, and this is the rearranged one, and here I'm plotting the distance. So I have it realigned, and what I'm plotting is the distance, which you can see in the color: the yellow is closer to you (of course the floor is closer to you), and as you measure distances up into the sky you see darker blue, farther away. And there are some blobs you can see, at least three, one of them partially occluded on the side, that are closer or farther. Below it is the ground truth. So this is now a much, much easier way of dealing with the problem: in the top image you have the rearranged, reshuffled LiDAR measurements in an almost dense matrix (there are holes, and we'll see what those holes really mean), and on the bottom you see the ground truth, and you can almost tell visually that there is a high, even obvious, correlation, which is great. It's much easier to see LiDAR this way than as those really, really sparse point clouds. So this is a super dense matrix.
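The correlation idea can be sketched in a few lines of NumPy. This is a toy reconstruction (synthetic smooth rows, a greedy ordering, and a known shuffle), not the actual competition code:

```python
import numpy as np

def unshuffle_rows(mat):
    """Greedy reordering: start at row 0, repeatedly append the most
    correlated remaining row, like descrambling scrambled TV lines."""
    corr = np.corrcoef(mat)                  # row-by-row correlation matrix
    order = [0]
    remaining = set(range(1, mat.shape[0]))
    while remaining:
        nxt = max(remaining, key=lambda r: corr[order[-1], r])
        order.append(nxt)
        remaining.remove(nxt)
    return np.array(order)

# Synthetic smooth rows where adjacent rows correlate most, then shuffle.
x = np.linspace(0, 2 * np.pi, 200, endpoint=False)
true_rows = np.stack([np.sin(x + 0.4 * i) for i in range(8)])
perm = np.array([0, 5, 2, 7, 1, 6, 3, 4])    # row 0 kept first as the anchor
shuffled = true_rows[perm]

order = unshuffle_rows(shuffled)
restored = shuffled[order]
assert np.allclose(restored, true_rows)      # original order recovered
```

Starting the greedy walk from an extreme row keeps the ordering from zig-zagging; in practice you would also have to decide which end of the recovered chain corresponds to the top laser.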
Now, what are those black points? The black points are where the sensor measured almost zero. It may be that you are spinning the LiDAR and there's nothing out there, so the sensor times out and spits out a zero; that's what we think. So basically we took every point that had a very small distance and marked it as a dark point, and we'll see how we handle this in the deep learning model, because it's equivalent to having an image with noise, with points that you don't really know what they are. But there's a solution you can come up with.

Now, if instead of plotting the azimuth, which we already did, we plot the zenith angle, there's something weird going on. We would expect discrete values, because we have a discrete number of lasers, 32 of them. There are more lasers pointing close to zero, because typically that's where the objects in your line of view are, and that looks fine, but then why do we see these spikes, these almost noisy zenith measurements? And if you look at this image, there's something funny going on: you can see a discontinuity right there. What could this be? Obviously it's not a glitch in the Matrix, like in the movie; there must be a physical reason for the discontinuity. And the reason is that the car is moving. If you pretend that you slow down time and watch the car moving, the rotational speed of the LiDAR is comparable to the speed of the moving car.
So when you start measuring distances you are moving, and by the time the sweep has almost come back around, the car is in a different place. The frame of reference is changing, and the LiDAR is not accounting for that. And actually, the only thing in the scene that is static relative to the car is the top of the car itself, so the motion of the car versus the environment is encoded there; with this LiDAR, at least, we can derive the movement of the car using the top of the car, and then re-reference the whole frame to the single point where we started taking measurements. When you change your reference system to that one point, boom, you get the discrete angles we were looking for in the first place.

So this looks pretty good. At this point, what do we have? If we reshuffle the data, we have the distance, and we can already see stuff. You really have to think of this as going around in a circle: we are seeing just the front here, from minus pi to pi, from -180 to 180. You also have the intensity, and then, optionally, the azimuth angle, if we think there could be some correlation there. Let's say that typically cars are in your line of sight; remember, these things work on correlations, and we can assume there are more cars at these angles than out to the sides. Maybe at crossroads you'll see a car from the side, but not so often. Same with pedestrians: pedestrians should be in these areas, and hopefully there's no pedestrian right in front of you that you are about to run over.
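Assembling the per-frame input just described, rows for lasers, columns for azimuth steps, and one channel per quantity, might look like this; the resolution, the channel order and the 0.5 m cutoff for fake points are illustrative assumptions, and the data here is synthetic:

```python
import numpy as np

H, W = 32, 1024                            # 32 lasers, azimuth bins per sweep
rng = np.random.default_rng(0)
dist = rng.uniform(0.0, 80.0, (H, W))      # stand-in for real ranges (meters)
intensity = rng.uniform(0.0, 1.0, (H, W))  # stand-in for reflected intensity
azimuth = np.tile(np.linspace(-np.pi, np.pi, W), (H, 1))

valid = dist > 0.5                         # near-zero ranges = sensor timeouts
dist = np.where(valid, dist, 0.0)          # zero out the fake points

# (32, 1024, 4) tensor: distance, intensity, azimuth, validity mask.
frame_tensor = np.stack(
    [dist, intensity, azimuth, valid.astype(np.float32)], axis=-1)
```

The validity channel is what lets the model (and later the loss) know which pixels are real measurements and which are the timed-out fakes.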
But, you know, we'll see what we do. At least what we've done so far is come up with a completely different representation of the data, and if you've worked with deep learning models this now looks like a very, very easy problem. But hold on, because you have to enter all the things we've mentioned: we feed in the four inputs, the distance, the intensity, the azimuth, and (we'll see why) the ring coordinate, plus the fake points at the end, and you have to produce a semantic segmentation label for every valid 3D point. So this looks very, very similar to a standard problem, and we used the workhorse of segmentation problems in deep learning, which is called U-Net. U-Net is a very popular architecture for segmentation problems.

Now, there are a few issues with the canonical U-Net. One is the max pooling: if you are familiar with U-Net, you see that it is made up of convolutions, and you gradually reduce the resolution until you get a latent space at the bottom, and then you reconstruct again with upscaling (you can use transposed convolutions or interpolation; it doesn't really matter too much in this case). But if you think about it, in a regular U-Net you do this on both axes, X and Y. In our case this is tricky, because yes, the X axis we can squeeze, but the Y axis is really the laser angles. In the same way that an image has translational invariance, here we have translational invariance only in one direction, the X direction, the azimuth direction.
We do not have translational invariance in the zenith direction, because each row is a different laser, and a car doesn't look the same close by as very far away: not only does it look smaller when it's far away, it is also typically seen by different lasers. Anyway, the architecture is based on a U-Net; it has roughly 10 million parameters, with spatial dropout to increase generalization. We did some ablation analysis; ablation is the very boring task of running the same experiment while taking stuff out, measuring the relative improvement or worsening of your performance as you give or take some feature. We found that adding or removing the azimuth angle didn't really change much, which is not surprising.

Anyway, we also tried many different things. In these competitions, of course you have to have a good idea and be able to implement it, but it is critical that you can try experiments really, really fast. Those of you who've done deep learning should know that deep learning is a very empirical discipline. Although you see equations and everything looks set in stone, that's really not the case: you need to run experiments and see. You may have an intuition of what will work better than other things, but it's really about having a pipeline that you can test very, very quickly, and speed is critical in these competitions. Models take long to train, so you have to balance the complexity of your architecture against the number of experiments you can run, etc. I cannot emphasize this enough.
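Since each row of the matrix is a different laser, and the shared weights of a plain convolution have no idea which row they are looking at, one cheap fix (in the spirit of the CoordConv idea) is to append a channel encoding the row position, normalized to [-1, 1]. A sketch, assuming the input is an H x W x C NumPy array:

```python
import numpy as np

def add_row_channel(frame):
    """frame: (H, W, C) input tensor -> (H, W, C+1) with a row coordinate.

    Each laser row gets one constant value in [-1, 1], so downstream
    convolutions can learn position-dependent behavior per laser.
    """
    H, W, _ = frame.shape
    rows = np.linspace(-1.0, 1.0, H)                    # one value per laser
    row_channel = np.repeat(rows[:, None], W, axis=1)[..., None]
    return np.concatenate([frame, row_channel], axis=-1)

frame = np.zeros((32, 1024, 4), dtype=np.float32)
out = add_row_channel(frame)
# out.shape == (32, 1024, 5); the top row's extra channel is all -1,
# the bottom row's is all +1.
```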
Now, there are a few important things. The most important is what I already mentioned: there is translational invariance in one direction but not in the other. There are two ways you can deal with this. The most elegant way, which we didn't do, because again this is about finding a quick shortcut, is to build a type of convolution that has shared weights in one direction but not in the other. In PyTorch this is doable relatively easily, because they give you low-level primitives to build your own convolutions; in TensorFlow it's a bit more convoluted, and back then, in this competition, we used TensorFlow and Keras. So we did a shortcut, which is super quick: just add an extra channel that tells a regular convolutional architecture where it is in the image (which is really a dense matrix). If you think about a convolution, a neuron at the end doesn't really know where it is in the image, because the weights are shared all across the image. But in this case it's critical to know where it is, at least which laser it is. The way we do it is we add a new channel that in every line contains a number between minus one and one, something that basically tells the convolution weights where they are, and the network can learn a combination of this number plus the actual contents of the image. So it can learn from the positional location. That's one trick; actually, Uber did a pretty good paper on this trick.

Then we also had the issue of frames with different numbers of points, so the images have different widths. There are two ways you can handle it: you can do one net that in one shot gives you all the predictions, or, as we'll see
why later, you can split the whole frame into small segments with some overlap, run inference on all of them, and then reconstruct. With this you can do TTA, test-time augmentation, a kind of runtime ensemble, and incredibly this is still super fast: we were able to do the augmentation in real time and still stay within the limits of the competition requirements.

How did we optimize the model? Nothing really fancy, I think. We did a validation split of 10% of the frames, we regularized with dropout, and for augmentation we did flips, some small alignment perturbation, and some additive Gaussian noise. For the optimizer we didn't do anything fancy, just Adam. Maybe today we would use one-cycle, which is newer, the cool new kid on the block. The important thing here is the loss function. Remember, the intersection-over-union metric we are scored on is not differentiable, so we cannot use it directly as the loss function. We tried a very new loss function called the Lovász loss, which is a very sophisticated mathematical construction that is a good proxy for IoU, but surprisingly it didn't really give better performance and it was much slower. So we used a mix of cross entropy plus a softened version of the IoU (it's very easy to make a soft version of the hard IoU), and the same with a soft Dice coefficient. There's actually a loss called the Tversky loss, named after what sounds like a Russian guy, so the math is guaranteed to be good, right? So this is the Tversky loss.
It's a generalization of the F1, F2, IoU and Dice scores, and with this formula you can soften it and do gradient descent with it. And for those fake points that we saw, we basically void them in the loss function; that's how we take care of those fake points, which are kind of a nightmare. Then we trained for 100 epochs, but not much more. It sounds like a lot, but it was relatively quick to train: maybe it took one day for the whole thing, which, given where we are today with deep learning, where some models take weeks, is pretty good.

In terms of performance: this competition was crazy, because we didn't even really know the rules, they were in Chinese and they kept changing everything, but at every stage of the competition, which we really didn't understand, we scored on top, significantly better than the rest of the teams. I think this is why: as I said, we did something very, very different and very native to the sensor, without losing an inch of the sensor's accuracy. The winning submission had a runtime inference of 65 milliseconds, well below the limit of 100, and this is with four TTA overlaps. So, pretty good.
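Going back to the losses for a moment: the soft overlap metrics can be written directly from predicted probabilities and one-hot targets. This is a sketch of the idea only; the exact weighting and the cross-entropy mix used in the competition are not reproduced here:

```python
import numpy as np

# The Tversky index generalizes the overlap scores: alpha = beta = 1 gives
# soft IoU (Jaccard), alpha = beta = 0.5 gives soft Dice / F1. Replacing
# hard set operations with sums of products makes them differentiable.

def soft_tversky(p, t, alpha=0.5, beta=0.5, eps=1e-7):
    tp = np.sum(p * t)                 # soft true positives
    fp = np.sum(p * (1 - t))           # soft false positives
    fn = np.sum((1 - p) * t)           # soft false negatives
    return tp / (tp + alpha * fp + beta * fn + eps)

def soft_dice(p, t):
    return soft_tversky(p, t, alpha=0.5, beta=0.5)

def soft_iou(p, t):
    return soft_tversky(p, t, alpha=1.0, beta=1.0)

# With hard 0/1 predictions the soft scores reduce to the usual metrics:
t = np.array([1, 1, 0, 0], dtype=float)
p = np.array([1, 0, 1, 0], dtype=float)   # tp = 1, fp = 1, fn = 1
# soft_iou(p, t) -> 1/3, soft_dice(p, t) -> 1/2
```

The corresponding loss is simply one minus the score, and the voided fake points would just be masked out of the sums.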
We even tested with no overlaps, and we would still have won, with just 20 milliseconds. This means these guys could have better LiDARs and the deep learning model would not be the bottleneck. It was actually funny, because once we were at the actual competition, the Alibaba guys told us they had tried something similar but couldn't make it work. So again, what I said: you have to do something different, but do it right. I think they made a mistake somewhere and concluded it wouldn't work, but it does, if you do everything carefully.

Once you train a model and want to optimize it for inference (ours was already very fast), the only thing we did is a trick that is very easy. To train deep learning architectures you typically use batch normalization, to help with vanishing gradients via the statistics of the activations. Now, think about what batch normalization really is. Everything in deep learning is multiplications and additions, everything, don't let people fool you. And what is a convolution? The same thing: multiplications and additions. So once the model is trained, you can actually merge the two: get rid of the batch normalization by taking its scale and bias and folding them into the convolutional weights. Very easy, and you remove many layers, which improves performance. And that's it.
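The batch-norm folding trick in a few lines, using a toy dense layer with made-up parameters (a convolution folds the same way, per output channel):

```python
import numpy as np

# BN(x) = gamma * (x - mean) / sqrt(var + eps) + beta is affine in x, so a
# trained BN layer can be absorbed into the preceding linear/conv layer.

def fold_bn(W, b, gamma, beta, mean, var, eps=1e-5):
    """W: (out, in) weights, b: (out,) bias -> folded (W', b')."""
    scale = gamma / np.sqrt(var + eps)            # per-output-channel factor
    return W * scale[:, None], (b - mean) * scale + beta

rng = np.random.default_rng(1)
W, b = rng.normal(size=(4, 3)), rng.normal(size=4)
gamma, beta = rng.normal(size=4), rng.normal(size=4)
mean, var = rng.normal(size=4), rng.uniform(0.5, 2.0, size=4)

x = rng.normal(size=3)
y_two_layers = gamma * (W @ x + b - mean) / np.sqrt(var + 1e-5) + beta
Wf, bf = fold_bn(W, b, gamma, beta, mean, var)
y_folded = Wf @ x + bf
assert np.allclose(y_two_layers, y_folded)        # same output, one layer
```

At inference time the BN layer disappears entirely, which is where the speed-up comes from.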
That's it. I would say the summary is that we created a very novel representation of the LiDAR sensor data, and once we had it in a dense matrix we could leverage existing segmentation architectures, with a few caveats and tricks. In every part of the competition we scored much better, and to our surprise we not only won the objective prize of the competition, but also got an extra prize from the organizers. This I didn't know when I joined: I thought it was just an online competition, because everything was in Chinese, but it happened to be one of the biggest deep learning conferences in China, with lots of people physically there, in a city in China. And we got that extra prize for the most innovative solution of the whole competition, which had many different tracks for other tasks. And that's it.

Well, it's fascinating that you beat so many people without really knowing what the hell was going on. It's very, very impressive. Does anyone have any questions? Did you say what you won, what the prize was, or is it a secret?

Well, it's money.

Yeah, okay. Chinese citizenship, or...?

No, I'm not sure I would have wanted that prize. You don't have to reveal the amount. No, well, it's public in the competition rules, so it was money, in yuan.

Uh-huh. Yeah, good exchange rate. Okay, any questions out there for Andres about this experience? Incredible story. Okay, I don't see any. Oh, here we go. Is that Jesse up there? Maybe, maybe not. No, it's not. We have a question here in the middle. Fatima is heading towards you; keep your hand up.

Hello. I guess that you didn't get to this solution on the first try, so I would like to know how much time you spent to achieve all of this.

It's funny, I always get this question, like how much time it took. Actually, not much coding.
The way it typically goes for me, and I've done many competitions, is that I'm given the problem and then I don't do anything; I spend a few days just thinking about it. And I guess the moment I knew we had a great representation maybe only took one day, once I did the exploration, because I have a background in electronics. The hidden secret I was seeking, and actually found, was this: how would I send the data of the LiDAR sensor if I were a firmware engineer? I asked myself how I would do it, in packets, etc., and then tried that. It could have not worked, right? But it did, on the first try. So that's, I guess, the answer, which may be a bit surprising, but I guess we had a bit of luck in that regard.

Awesome, thank you. Any other questions out there? Here we have one right up top; I'm going to make you work, Fatima. Just keep your hand held up so Fatima can see you, down here in the front.

I would like to ask you, because I know that you have been engaged in many other competitions since this one, and I know that you have evolved a lot. Even though the way you envisioned the representation of the data to be fed to the model is fascinating, with all the experience you now have from other competitions, what would you have changed if you were to approach this problem today, in the data representation? That is one question; and the other: in the model architecture, would you have changed something?

Well, I guess the answer is that in the data representation I don't think I would change anything, and I also made a recommendation to the hardware community to send the data in this way, because there are very complex, and in my view substandard, architectures to deal with point clouds, right?
A point cloud is really a set of points in, let's say, 3D space. But a LiDAR scan, although it is a point cloud, has less entropy than an arbitrary point cloud. For example, if you're shooting a beam in one direction, it's physically impossible to have a point here and another point behind it, because the beam reflects off the first surface it hits. So there is less entropy; there are preconditions. This data representation is the native data representation. Maybe if this had been an image competition, a pure image one, I would give you a better answer: in deep learning we are used to feeding RGB channels, but you could also feed what JPEG uses, for example, luminance and chrominance with subsampling. Why don't you feed luminance and chrominance? But with this LiDAR data, I wouldn't change anything now.

What I would do differently, if I wanted to further improve this, is the validation. For validation we used a random percentage split. We did a quick test: we were given the frames shuffled, but using correlations we were able to reorder them and build videos of the cars. Of course, if you do this, you can build a better validation scheme in which the validation set doesn't share any video with the training set, right? So that would be one difference. As for the architecture, there are many new things coming in segmentation architectures, pyramid networks and so on, so I would use the coolest new thing. There are a few, and they give you tiny percentage improvements, so with that alone you may not be able to win; it would improve things a bit, but not enough to win on its own.

Okay, thank you very much. Well, thank you very much, it's a fascinating story. Thank you to Andres, and congratulations once again. Thank you.
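The correlation-based frame reordering mentioned in the Q&A, recovering temporal order from shuffled frames so that validation can be split by video, could look roughly like this. This is a hypothetical greedy NumPy sketch, not the team's actual code, and it assumes the first frame in the array is an endpoint of the sequence:

```python
import numpy as np

def reorder_frames(frames):
    """Greedily reorder shuffled frames by pairwise correlation.

    frames: array of shape (n_frames, ...). Each frame is flattened and
    normalized so a dot product acts as a correlation. Starting from
    frame 0, repeatedly append the most correlated unused frame, which
    recovers an approximate temporal ordering when consecutive frames
    are the most similar ones.
    """
    flat = frames.reshape(len(frames), -1).astype(float)
    flat -= flat.mean(axis=1, keepdims=True)
    flat /= np.linalg.norm(flat, axis=1, keepdims=True) + 1e-12
    order = [0]
    remaining = set(range(1, len(frames)))
    while remaining:
        last = flat[order[-1]]
        best = max(remaining, key=lambda i: float(last @ flat[i]))
        order.append(best)
        remaining.remove(best)
    return order
```

Once frames are grouped into sequences this way, the validation split can be made per video, so no near-duplicate frames leak between the training and validation sets.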