OK, so I should probably preface this by putting "algorithm" in quotes — is that it? So first we're going to define a zonotope. We let A be a matrix in R^(n×m); then the zonotope Z_A is defined to be the image under A of the hypercube [−1, 1]^m. OK, so it's that guy. And here we're going to have m ≥ n. If you want an example of a zonotope, try drawing a cube: take a rotation of the cube and project it onto the chalkboard, and the outline is your zonotope. So that's a zonotope. OK, and what we want to do is find the vertices of the zonotope. You all know what a vertex is. Now, maybe briefly, why do I care about finding vertices of a zonotope? Suppose we're looking at a function f — let's not do R^m, sorry, it's going to be from the hypercube into R — and I know I can approximate f(x) by something like g(Ax), where g is going to map Z_A into R; in other words, g's domain is a zonotope. So if I want to find that domain, I need to find the zonotope, and finding the zonotope is just finding the vertices. OK, so to find it, what we're going to do is look at the map which takes x in R^n and maps it to A sign(Aᵀx). Anyone see why that should be a vertex? That's fine if not — I'm just waiting to see if someone says "oh, that's clear," and it's all over. OK, so is this a vertex? Because then we're going to ask: how does this map onto the vertices? Like, what's the probability of hitting a specific vertex?
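To see the object concretely, here is a small sketch (mine, not from the talk) that enumerates the candidate vertex images A·s over all sign vectors s ∈ {−1, 1}^m, for a made-up 2×3 matrix A; every vertex of Z_A is among these points, though not every such point is a vertex:

```python
import itertools
import numpy as np

# A maps the cube [-1,1]^m into R^n; here n=2, m=3 (hypothetical example matrix).
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 0.5, 1.0]])
n, m = A.shape

# Every vertex of the zonotope Z_A = A[-1,1]^m is the image of a vertex of the
# cube, i.e. of some sign vector s in {-1,1}^m (the converse need not hold:
# some images can land strictly inside the zonotope).
candidates = {tuple(A @ np.array(s)) for s in itertools.product([-1, 1], repeat=m)}
print(len(candidates))  # 8 candidate points here (at most 2**m in general)
```

For this A the zonotope is a hexagon, so two of the eight candidates are interior points; the sign-map analysis later in the talk explains which sign patterns actually correspond to vertices.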
Oh, sign — so sign is going to produce a vector filled with ones and negative ones. If Aᵀx is some vector v = (v_1, …, v_m), then you take the component-wise sign, so this is going to be sign(v_1) stacked through sign(v_m). Yeah, sign(0) = 0, and so we have to be kind of careful about the case where you hit 0. Essentially, the set where this thing has 0 in one of its components is a set of measure zero, right? Lebesgue measure zero. So if you choose any measure absolutely continuous with respect to Lebesgue measure, you're going to be fine using the algorithm. The "algorithm" really belongs in quotes, because it's like: pick a bunch of points, plug them into the function, and see what you get. Can you even call that an algorithm? OK, solid. So there are two different ways to do it. You can fix the number of sample points — choose p points x_1, …, x_p and plug them all into this function — or you can fix k, a natural number at most the number of vertices of your zonotope Z, initialize V to be the empty set, and then, while V has fewer than k elements, draw x from some probability distribution (normal is what we're going to analyze), compute v⁺ = A sign(Aᵀx), and set v⁻ = −v⁺. The important thing about zonotopes here is that they're symmetric: you took a linear image of a symmetric object, so the output is symmetric too. [Student:] So this normal distribution — x is an n-dimensional object, so it's just normal in each coordinate? [Instructor:] Yeah, it's a multivariate Gaussian distribution.
So then, in the loop: if v⁺ is already in V, continue the while loop; if it's not, add v⁺ and v⁻ to your collection of vertices. And essentially what we want to do is analyze this and see whether the set V at the end is a good approximation of our zonotope — does the convex hull of V approximate it? That's where we ultimately want to go: once we have all these points, can we approximate our zonotope? OK, so let's do the analysis. The first thing: write A = [a_1 … a_m], where the a_i are column vectors, and assume none of the a_i is zero. Because in the case that one of them is zero — say a_m = 0 — you might as well just be mapping from R^(m−1); it doesn't affect anything. Then what we can do is write Z_A = A_1 + … + A_m, where A_i = {t a_i : t ∈ [−1, 1]} is the set of scalar multiples, with coefficients between −1 and 1, of one of these columns. And this is set-wise (Minkowski) addition: just take an element from each of those sets and add them together. The important part is that we've decomposed our zonotope into very simple polytopes that we're adding together. Now what we need is a theorem: if a polytope P equals P_1 + … + P_m, then v = v_1 + … + v_m is a vertex of P if and only if (1) each v_i is an extreme point — a vertex — of P_i, and (2) there exists c in R^n such that {v_i} = S_{P_i}(c) for every i, where S_{P_i}(c) is the set of maximizers over the polytope P_i of the linear functional generated by c — in other words, the inner product with c.
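The sampling loop just described can be sketched directly — a minimal version (names are mine, not from the talk), assuming NumPy, with the Gaussian draw, the sign map, and the v⁺/v⁻ symmetry trick:

```python
import numpy as np

def sample_zonotope_vertices(A, k, rng=None):
    """Collect up to k vertices of Z_A = A[-1,1]^m by sampling x ~ N(0, I_n)
    and mapping it through x -> A sign(A^T x).  Returns a set of point tuples."""
    rng = np.random.default_rng(rng)
    n, m = A.shape
    V = set()
    while len(V) < k:
        x = rng.standard_normal(n)
        s = np.sign(A.T @ x)
        if np.any(s == 0):       # measure-zero event: x lies on a hyperplane <a_i, x> = 0
            continue
        v_plus = A @ s
        V.add(tuple(v_plus))
        V.add(tuple(-v_plus))    # zonotopes are symmetric, so -v comes for free
    return V

# Hypothetical example matrix; Z_A is a hexagon, so it has exactly 6 vertices.
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 0.5, 1.0]])
verts = sample_zonotope_vertices(A, k=6, rng=0)
```

With this A the loop terminates because Z_A really has six vertices, each with positive Gaussian mass in its normal cone; in general you would want k no larger than the vertex count, or a cap on the number of iterations.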
So if we just draw a picture: in our case, each of these A_i's is a little segment, and if we choose a vector c, then the maximizer of the inner product with c over this segment is going to be this endpoint. So if I call it a_i, then ⟨a_i, c⟩ ≥ ⟨x, c⟩ for all other x in the segment. That's what this set is doing: it's looking in one direction and finding the maximal element. And what the theorem says is that you can do that component-wise and then look at your whole thing and put it all back together. So this is the trick for proving that the map actually goes to a vertex. OK, so I'm looking at the map, and what I want to do is show that it outputs a vertex. So I look at sign(Aᵀx); the i-th component of this is sign(⟨a_i, x⟩). If the sign is 1, that means ⟨a_i, x⟩ > 0. So for the linear functional generated by x, a_i is the maximizer on the polytope A_i — in other words, S_{A_i}(x) = {a_i}. And similarly in the other case: if it's −1, the maximizer ends up being −a_i. [Student:] You can have lots of different a_i's whose sign is equal to 1 — what's special about them that makes them the ones maximizing the inner product? [Instructor:] OK, so I'm fixing the matrix A, right? It's not changing, so the a_i's are not changing; that's how we got the a_i's in the first place. The polytopes in this case are just little line segments, and you're summing together a bunch of little line segments. It's the simplest possible case of the theorem I stated.
It's like the simplest possible case of that. And these are all fixed. Really, I probably should have used c here, because x is now taking the place of the c in the theorem, and that's what we're going to use. OK, so for each of these signs to equal 1 or −1, we needed x to not land on the hyperplanes where ⟨a_i, x⟩ = 0, and we're just going to assume we're off them. [Student:] Is x in R^n? [Instructor:] It's in R^n — Aᵀ maps R^n to R^m. [Student:] Are we not restricting x at all? [Instructor:] No, it can be anywhere; the important part is that each sign is either 1 or −1. It just can't be on one of those hyperplanes. In practice, when you're sampling, you could even draw it from a sphere or something. OK, so putting this all together: I'm going to call the maximizer v_i in either case — so if the sign is positive, it's just a_i. Then what we have is that A sign(Aᵀx) = v_1 + … + v_m — we just chose the maximal element of each of those polytopes, so v_1 is in A_1 and so forth. And so by that theorem, this output is a vertex. That shows the map goes into the set of vertices. And by the fact that that was an if-and-only-if characterization, you can show it's onto by selecting the c from the theorem: for a vertex v we have there exists c such that {v_i} = S_{A_i}(c), and if you just plug this c into the function, you'll see the output is the vertex v, if you just work through the calculations. It's not terribly complicated. So what we've found so far is that this map, excluding some hyperplanes, is both into and onto the set of vertices. [Student:] If a component of sign(Aᵀx) is 0, does it map to the edges?
I'm not going to say where it maps, because I don't know. You could end up inside the convex hull; you're not necessarily going to end up... although I would think you'd map to an edge. It depends: if all the components hit 0, you hit 0, but if just a couple hit 0, I would imagine you end up on an edge — I'm not 100% sure. And because that's a set of measure zero, if you're sampling the x's from a Gaussian distribution, you basically don't care, because you're pretty much never going to hit it. You care only in the sense that you want the map to always spit out a number, but it's not somewhere you want to be mapping; you don't care what the result is there, you just want to rule that case out. So now what we're going to ask is: given a probability distribution — a probability measure — what is the probability of the set of x in R^n such that A sign(Aᵀx) = v, for some vertex v? Anyone have an idea what the set should be? No? OK. So what we're going to show is that this set {x ∈ R^n : A sign(Aᵀx) = v} is equal to the interior of the normal cone N_Z(v) — it's the interior. I'll write the definition and then draw a picture. For a given polytope K and a vertex v of K — this definition works for non-vertices too, but I'm only going to use it for vertices — the normal cone is N_K(v) = {x ∈ R^n : ⟨x, z − v⟩ ≤ 0 for all z ∈ K}. OK, so if I draw a picture of what this set is going to be: here's my polytope K, here's the vertex v, and the normal cone is the region between these lines coming out at right angles.
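As an aside, the defining inequality ⟨x, z − v⟩ ≤ 0 only needs to be checked at the vertices z of K, since z ↦ ⟨x, z − v⟩ is linear and maximized at a vertex. A quick membership-test sketch (mine, not from the talk), assuming NumPy:

```python
import numpy as np

def in_normal_cone(x, v, vertices, tol=1e-12):
    """True iff <x, z - v> <= 0 for all z in K; by linearity it suffices to
    check the vertices of K."""
    x, v = np.asarray(x, float), np.asarray(v, float)
    return all(np.dot(x, np.asarray(z, float) - v) <= tol for z in vertices)

square = [(1, 1), (1, -1), (-1, -1), (-1, 1)]
print(in_normal_cone((1.0, 1.0), (1, 1), square))   # True: points out of the corner
print(in_normal_cone((-1.0, 0.0), (1, 1), square))  # False: points back into the hull
```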
That's going to be the normal cone N_K(v). So essentially what this is saying is: if you pick x — let's say I pick x here — and you go from v to any z in K, say this guy is z, then the angle between x and z − v has to be at least 90 degrees. Because that's what the inner product being less than or equal to zero is essentially telling you; there's a cosine factor in the inner product. So what happens here is you end up with all these elements coming out of a cone at the vertex, pointing away from the convex hull. Is that semi-clear as to what the normal cone is? And what you can actually show — I'll write this down, I'll use math — is the following. Also, I just want to clarify: it's a little misleading how I drew this, because it looks like the normal cone is centered at v, but the normal cone is actually centered at the origin; I shifted it just so you could see graphically how it sits. So: for every x in the interior of the normal cone, v is the sole maximizer over K of the linear functional generated by x. And that's easy to check just using the definition; I'm not going to do it. And so what we actually end up with is that the set {x : A sign(Aᵀx) = v} is equal to the interior of the normal cone N_Z(v) — you take the interior to get rid of those hyperplanes where you hit zeros. So if we take the probability of this — call this event E — then the probability that we land on v under this map is just the measure of the normal cone N_Z(v), where I've swept a couple of measure-zero sets under the rug. So that's a nice geometric characterization.
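So the hitting probability of a vertex is exactly the Gaussian mass of its normal cone. Here is a quick Monte Carlo check of that (a sketch, assuming NumPy; the matrix is the same made-up 2×3 example, not from the talk):

```python
from collections import Counter
import numpy as np

A = np.array([[1.0, 0.5, 0.0],
              [0.0, 0.5, 1.0]])
rng = np.random.default_rng(1)
counts = Counter()
N = 20_000
for _ in range(N):
    x = rng.standard_normal(A.shape[0])
    s = np.sign(A.T @ x)
    counts[tuple(A @ s)] += 1      # empirical mass of each vertex's normal cone

# Empirical probability of hitting vertex v ~ Gaussian measure of N_Z(v);
# by symmetry of the zonotope, v and -v should come out with roughly equal mass.
probs = {v: c / N for v, c in counts.items()}
```

For this A the normal fan is cut out by three lines through the origin, so the six cone masses can even be read off as angles; the empirical frequencies should line up with those up to sampling error.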
And now we want to start transitioning into approximating how well, or how close, we get to the zonotope with this algorithm, even if we don't get all of its vertices back. (And I'm not going to prove that part either, because it's relatively computational.) So we have to ask: what's the right notion of distance between the approximate convex hull and the convex hull of the zonotope itself? So say we have this guy, the zonotope, and say our algorithm returns just these vertices — let's get rid of these ones, so it returns four of them. So now what we have is this guy: that's our approximate convex hull of the zonotope. And we're like, OK, how should we measure the distance between these two? What there is, is the Hausdorff metric, or Hausdorff distance — I think "metric" is a little overkill; you have to do something special to verify it. The Hausdorff distance between two sets A and B is the maximum of sup over a in A of inf over b in B of the distance, and the same thing with the inf and sup swapped, taken with respect to b in B and a in A — that's what makes it symmetric. That's why you have to have the two pieces: either one by itself is not symmetric. And this thing satisfies the triangle inequality, and it's a relatively good notion of distance between sets. The other thing we're going to do is define the simplicial constant, which is an intermediate step to using the Hausdorff distance. The Hausdorff distance by itself is kind of unwieldy; it has so much going on in it. The simplicial constant is just going to be defined for a single vertex: for a polytope K it's denoted α_K(v), where v is a vertex of K.
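For finite point sets, the two-sided sup–inf formula is easy to compute directly (a sketch, assuming NumPy; the square example is mine):

```python
import numpy as np

def hausdorff(A_pts, B_pts):
    """d_H(A, B) = max( sup_a inf_b |a-b|, sup_b inf_a |a-b| ) for finite sets."""
    A_pts, B_pts = np.asarray(A_pts, float), np.asarray(B_pts, float)
    D = np.linalg.norm(A_pts[:, None, :] - B_pts[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# Dropping one corner of a square moves the Hausdorff distance off zero.
square = [(1, 1), (1, -1), (-1, -1), (-1, 1)]
print(hausdorff(square, square))        # 0.0
print(hausdorff(square, square[:3]))    # 2.0: (-1,1) is distance 2 from the kept corners
```

Note this measures the finite vertex sets rather than the hulls; for convex hulls of the same sets the Hausdorff distance can only be smaller, so it still gives an upper bound.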
This is going to be the infimum of the distance from v to the convex hull of the vertices of K without v. So essentially what we do is remove the vertex v, look at what's left, and see how close v is to what's left. So if we look at this convex hull right here and we let this guy be v, then the distance is essentially given by the projection onto the base that's closest to it, and this is going to be α_K(v). That's the simplicial constant. And in the case that your approximation is missing exactly one vertex, this α_K(v) is the same thing as the Hausdorff distance between this guy and that guy — only in the case that you're missing one vertex. Once you go further down — say your approximation was just this line right here, because you only had two points — you no longer have a nice relation between the simplicial constant and the Hausdorff distance to your convex approximation. The other thing, which I probably should have remarked on earlier: if we draw a zonotope and draw the normal cones on it, I'm essentially finding the probability mass which will map to a given vertex. So this normal cone right here has a relatively small mass compared to the rest of the normal cones, right? And the thing to note is that the corresponding simplicial constant is going to be pretty small too. So if the normal cone is smaller, it appears the simplicial constant should also be smaller. Similarly, if we look here and consider the distance from this vertex to that base, that simplicial constant is bigger, and the normal cone is also bigger. So we expect there to be some sort of analytic relation between the normal cone and the simplicial constant. And luckily, in my research I didn't have to prove that, because someone had already done it. So — questions?
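In the plane, the simplicial constant can be computed as the distance from v to the hull of the remaining vertices via point-to-segment distances (a sketch, mine, assuming NumPy; for v outside the remaining hull, the minimum over all vertex-pair segments realizes the true distance, since the nearest boundary edge is one of those segments):

```python
import itertools
import numpy as np

def dist_point_segment(p, a, b):
    """Euclidean distance from point p to the segment [a, b]."""
    p, a, b = (np.asarray(u, float) for u in (p, a, b))
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / max(np.dot(ab, ab), 1e-300), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def simplicial_constant_2d(vertices, v):
    """alpha_K(v): distance from vertex v to conv(vertices minus v), in the plane."""
    rest = [u for u in vertices if not np.allclose(u, v)]
    return min(dist_point_segment(v, a, b)
               for a, b in itertools.combinations(rest, 2))

# A square with one shallow bump on top (made-up example): the bump vertex has a
# small simplicial constant, matching the small normal cone it would have.
K = [(-1.0, -1.0), (1.0, -1.0), (1.0, 1.0), (0.0, 1.1), (-1.0, 1.0)]
print(simplicial_constant_2d(K, (0.0, 1.1)))  # ~0.1: it sits just above the top edge
```

Dropping that bump vertex barely changes the hull, which is exactly what a small simplicial constant is meant to capture.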
[Student:] So you should say the normal cones are starting from the origin? [Instructor:] Yeah, they're starting from the origin, but in my mind it's easier to picture them drawn from the vertex. They are starting from the origin, so I could have put the origin here and then just translated this cone over. [Student:] It's like a translation for each sector, right? [Instructor:] Yeah, yeah — I've shifted this thing, so what's drawn is really the normal cone plus that vertex. [Student:] So these normal cones effectively cover the whole space, except for a measure-zero set? [Instructor:] Yeah — if you don't take the interiors they cover everything; the interiors just discard the boundaries. Which is important, because if this is going to define a probability measure, it had better sum to one. There are different ways of seeing that it has to cover the whole space. I feel like to people who do a lot of analysis with polytopes, some of this stuff is obvious. For me it was a lot of trial and error figuring this out, and eventually you run into some paper that's like, "oh yeah, obviously this." Maybe. OK, so I'm not going to prove this next bit — I'm not going to prove anything — but what this next bit does is give us the number of samples we need to draw and map under our function to get essentially all vertices with a sufficiently large simplicial constant. So: fix ε and δ greater than zero, user-chosen, and let K be a finite, symmetric convex hull — so it's a polytope that's symmetric; I should have written that. And then let x_1, …, x_p be drawn from a normal distribution, with p to be determined, and define U_δ(K) to be the set of vertices of K with a sufficiently large simplicial constant, meaning α_K(v) ≥ δ.
So it's all the vertices that have — if we draw another example, and we're like, here: if we choose δ in between that small simplicial constant and this big one, we're saying we're not necessarily concerned about the points that barely affect our approximation. This only looks at the vertices which have a sufficiently large simplicial constant. And that's just it. And so we're going to define the probabilistic event A_K: the collection of sample points intersects N_K(v) ∪ N_K(−v) — the intersection is not the empty set — for all v in U_δ(K). These should maybe be the interiors of these guys, because we don't want to map to those zero points, but it's only off by a set of measure zero. So what does this mean? If this intersection is non-empty, that means for some i, A sign(Aᵀx_i) is equal to either v or −v, just by the definition of the inverse image of v under this function — we decided the normal cone maps there. So if we've intersected the normal cone, then we hit v or we hit −v, so our algorithm returns v. Let's just note that that's what A_K is saying. Then: if p is sufficiently large — p greater than some C, which depends on the diameter of your polytope K (that's really the important part), and of course also on ε and δ — then the probability that A_K happens is going to be at least 1 − ε. So the probability that we return all vertices with sufficiently large simplicial constant is at least 1 − ε: we're getting all the significant vertices that we want with this high probability — maybe not as high as we'd like, because we might want probability 1. OK, does the theorem statement make sense? OK. And now what we want to do is use this to get a result on the Hausdorff distance. I'm not going to prove this; it's not tricky once you use a result of someone else's. I guess I will say that the
important result of someone else's is something that relates the simplicial constant to the mass of the normal cone. So now the theorem: once again, fix ε > 0 and δ > 0, and choose p as in the previous theorem. Then, if we let V be the set of all images under our map of x_1, …, x_p — these are all our Gaussian samples — what we find is that the Hausdorff distance between the convex hull generated by the vertices we've returned and the zonotope Z satisfies d_H(conv(V), Z) ≤ |vert(Z) \ U_δ(Z)| · δ / 2 — the cardinality of the vertices your approximation isn't guaranteed to get, times δ, all over 2 — with probability at least 1 − 2^a ε, where a = |vert(Z) \ U_δ(Z)| / 2, and U_δ(Z) in this case is just the set of vertices of Z whose simplicial constant is at least δ. So essentially this is saying that, given some maybe crappy constants, we can get as close as we want with arbitrarily high probability. [Student:] To someone in ACO, when you see 2 to the power of something, that's always a bad sign, is it not — isn't that exponential?
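Putting the pieces together numerically — a self-contained sketch (mine, with a made-up matrix and helper names) comparing a few-sample approximation against a many-sample "ground truth" vertex set:

```python
import numpy as np

# Made-up generator matrix; Z_A is a hexagon in the plane.
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 0.5, 1.0]])
rng = np.random.default_rng(2)

# "Ground truth" vertex set via many samples of the sign map.
true_verts = {tuple(A @ np.sign(A.T @ rng.standard_normal(2)))
              for _ in range(5000)}

# Cheap approximation from only a few samples, symmetrized with -v.
approx = set()
for _ in range(3):
    v = A @ np.sign(A.T @ rng.standard_normal(2))
    approx.add(tuple(v)); approx.add(tuple(-v))

def hausdorff(P, Q):
    P, Q = np.array(list(P)), np.array(list(Q))
    D = np.linalg.norm(P[:, None] - Q[None, :], axis=-1)
    return max(D.min(1).max(), D.min(0).max())

# Distance from the approximate vertex set to the full one; it shrinks
# (with high probability) as the number of samples grows, which is what
# the theorem quantifies.
print(hausdorff(approx, true_verts))
```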
Because it's going to be an exponential-time algorithm — that's essentially what you're saying. It's a little better in practice; the analysis feels very much not tight, and I don't know how to make it tight, but that's about as good as it gets here. And from an analyst's perspective, we just let ε go to zero, so there's no problem. OK, so let's try to prove this in ten minutes. What we're going to do is just let K_1 and K_2 be symmetric convex hulls generated by subsets of the vertices of Z — of our zonotope Z. OK, so you just take some of the vertices, symmetrically, and form the convex hull; that's what K_1 and K_2 are. Then what we can do is a probability calculation. Let A_{K_i} be defined as before: A_{K_i} is the event that the algorithm returns all v among the vertices of K_i such that α_{K_i}(v) ≥ δ. OK, that's what A_{K_i} is, and by the previous theorem, the probability of this is at least 1 − ε. OK, so now we do the computation. By the law of total probability, P(A_{K_2}) = P(A_{K_2} | A_{K_1}) P(A_{K_1}) + P(A_{K_2} | A_{K_1}ᶜ) P(A_{K_1}ᶜ). And then we solve for that guy, and what you get is P(A_{K_2} | A_{K_1}) = [P(A_{K_2}) − P(A_{K_2} | A_{K_1}ᶜ) P(A_{K_1}ᶜ)] / P(A_{K_1}) — what you divide by at the end is the probability of A_{K_1}. Now P(A_{K_2}) ≥ 1 − ε by the previous theorem, and likewise P(A_{K_1}) ≥ 1 − ε, so P(A_{K_1}ᶜ) ≤ ε, and the conditional P(A_{K_2} | A_{K_1}ᶜ) is bounded by 1. So what we end up getting is that P(A_{K_2} | A_{K_1}) ≥ (1 − 2ε) / P(A_{K_1}). And then, lastly, P(A_{K_1} ∩ A_{K_2}) = P(A_{K_2} | A_{K_1}) P(A_{K_1}); plugging in this bound, the P(A_{K_1}) on the bottom cancels, so this is at least 1 − 2ε. That's what all that was about. We can extend this by induction, and what you end up with is that the probability of the intersection of these events, across some collection of convex hulls, is at least 1 minus the size of the collection times ε — this is how we get that 1 − 2^a ε. And so now what you have to do is decide which of these events A_{K_i} you need, to actually obtain some sort of bound on the Hausdorff distance. The problem is that for a given sample you only need a couple of them, but you don't know which couple you're going to need for your given sample x_1 through x_p, so you just have to be overzealous and get them all. So, for my sake, I'm going to overload the notation and let the normal cone of v be N_K(v) ∪ N_K(−v) — in other words, the set of points your algorithm needs to take in to spit out either v or −v. So now what we're going to do is let the K_i be all symmetric convex hulls of vertices such that the vertices of K_i contain U_δ(Z).
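Written out, the two-event computation on the board is (my reconstruction, using only the law of total probability and the bounds P(A_{K_i}) ≥ 1 − ε):

```latex
\begin{align*}
P(A_{K_2}\mid A_{K_1})
  &= \frac{P(A_{K_2}) - P(A_{K_2}\mid A_{K_1}^{c})\,P(A_{K_1}^{c})}{P(A_{K_1})}
   \;\ge\; \frac{(1-\varepsilon) - 1\cdot\varepsilon}{P(A_{K_1})}
   \;=\; \frac{1-2\varepsilon}{P(A_{K_1})},\\[4pt]
P(A_{K_1}\cap A_{K_2})
  &= P(A_{K_2}\mid A_{K_1})\,P(A_{K_1})\;\ge\; 1-2\varepsilon,
\end{align*}
```

and iterating the same conditioning argument gives P(∩_{i=1}^{N} A_{K_i}) ≥ 1 − Nε for a collection of N such hulls.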
Remember, U_δ(Z) is all the vertices with simplicial constant at least δ — in other words, all the points that have a significant simplicial constant — so each K_i contains all of these. And remember, with probability 1 − ε we have all these vertices, because that's essentially the event A_Z. The other thing to note is that the cardinality of {K_i} is going to be 2^(|vert(Z) \ U_δ(Z)| / 2). So now consider x_1, …, x_p such that all the events A_{K_i} hold; we see that the probability of this event is at least 1 minus this constant times ε, which is what we just computed. Now we're going to define V — how do you define V without more notation? let's see — we define V to be the vertices v of Z such that {x_j} intersects the normal cone N_Z(v); in other words, the vertices returned by the algorithm. (What I do in the paper is plus-or-minus; I'm just not going to write the plus-or-minus here.) And what we want to do is order the leftover set: vert(Z) \ V = {v_1, …, v_n} — in other words, the vertices the algorithm hasn't spit out. So we're going to order them, and the thing to remember is that because our algorithm spits out points in pairs, this set is symmetric, so we're going to make it so that v_{j+1} = −v_j for j odd — essentially, it comes in pairs. And now, with this list going up to n, we're going to define C_i to be the convex hull of the vertices of Z without {v_j : j = 1, …, 2i}. So what is this doing? Two things to notice. OK, the first thing to notice is that C_0 = Z, and C_{n/2}
is equal to the convex hull of V — in other words, it's our approximation. So what we're doing is counting from zero and decreasing down to our approximation by removing vertices in sets of two. If we just draw it: the whole shape is C_0, then we pull off one pair of vertices, and this guy is our C_1; then we pull off another pair, which in this case is already equal to conv(V). OK, does that make sense, how we're decreasing down? And the reason you have to do it one step at a time is that the simplicial constant only has a relation to the Hausdorff metric when you're missing exactly one vertex, so you just need to do this chain down. So we have these guys. The other thing that's immediate is that the collection of these C_i's is a subset of the {K_i} — in other words, A_{C_i} holds, so we get all vertices of C_i with simplicial constant at least δ. OK, and so what you then need to show is that the simplicial constant α_{C_i}(v) is less than δ for all v in vert(Z) \ V intersected with the vertices of C_i. So if it's a vertex of C_i and it's one of the points that you didn't get, then it's going to have to have simplicial constant less than δ. And this isn't immediate from A_{C_i} holding, which maybe isn't obvious. So what you do is assume, by contradiction, that α_{C_i}(v) ≥ δ. OK, so then, by the fact that A_{C_i} holds, what you're going to have is that {x_j} intersected with the normal cone of C_i at v (plus or minus) is not the empty set — and this is just by the definition of A_{C_i}: it returns all vertices with simplicial
constant greater than or equal to delta. OK, so let's do the first case, C_1; I want to show this for i = 1. What you have is that the normal cone N(C_1, v) is a subset of N(C_0, v) union N(C_0, v_1), where v_1 is the vertex we removed to create C_1. If we draw a picture, which is what we want to do: say this is v_1, the vertex I removed, and over here is v. If I look at the normal cone of v generated by the smaller figure, what I'm saying is that it's a subset of the union of the two corresponding normal cones on the original figure. Graphically, you're shrinking the set down, so the cone has to go somewhere, and this is where it goes.

So, since we have this inclusion, what we end up with is that X_i intersects N(C_0, v) union N(C_0, v_1). And remember, C_0 is just Z, so by our algorithm this means that v or v_1 is a returned point; in other words, v or v_1 is within the set V which we defined earlier. But that's a contradiction, because we chose these points specifically to be ones that are not in that set. So what we then have is that alpha_{C_1}(v) < delta for those specific vertices, and in particular alpha_{C_1}(v_2) < delta, where v_2 is the vertex we remove to decrease down to C_2. In general you do an inductive argument, and what you show is that alpha_{C_i}(v_{i+1}) < delta, which immediately implies that the Hausdorff distance d_H(C_i, C_{i+1}) is less than delta. And then, lastly, d_H(C_{n/2}, C_0) is at most the sum of the d_H(C_i, C_{i+1}), which is at most (n/2) times delta. And that's the Hausdorff bound. OK, there we go, we're done.

[Audience] You mentioned it was kind of important that you had this relation between how large the simplicial constants are and how big the cone at a particular vertex is, so I'm just wondering: what's known about the dependence between these two? I only have one of the relations done, but you can actually get two inequalities, a constant on the lower end of the inequality and one on the top end, so it's essentially like what you have when two norms are equivalent, an equivalence of sorts, except these are definitely not norms. The bound that I use is that the simplicial constant of v is at most the diameter of the base of K at v, divided by tan(arcsin(1 - (1/2) r(omega_v)^2)). Here omega_v is the probability measure of the normal cone at v, and r is just a function: r(y) = 2 (2y)^{1/(n-1)}. The way they get this bound is you look at a normal cone and you have to embed a spherical cap in it, not a polyhedral cone, because all of these cones have flat sides, and you need some cone which is round. That's why you end up popping out all these tangents and arcsines: you have to go down to the spherical cap. And the constants are not small: the number of samples you need in this analysis is not great, and to actually get the analytic bounds I think you just lose a lot of precision. But it is kind of impressive
that there are, you know, bounds at all, at least to my mind. [Audience: what is the base?] Oh, with the base: you could just go up to the diameter of the full set, but if you're looking at a particular vertex, the base is essentially the face generated by the vertices closest to that vertex. In general, you take the adjacent vertices and generate their convex hull, and that's the base.

[Audience] What's the advantage of this random algorithm versus, say, since you have this relation, just meshing the unit sphere and taking uniform steps, so it's sort of deterministic? You might be able to do something like that, something quasi-deterministic: you're just trying to represent a uniform distribution in that case, and you might be able to do a convergence analysis there too, for that uniform meshing. The deterministic algorithms I know of are much more mathematically involved to actually run. In our algorithm, all the math is behind the scenes; to actually run it, you just plug points into that function a bunch of times and take the convex hull. There's also a search algorithm which I'm not familiar with, but it's a little more complicated. Really, I think the main benefit of ours is time saving, if you're OK with an approximation to the zonotope: if you're doing an integration and you're OK just integrating over a slightly smaller domain, you lose some accuracy but you do it quicker. And then ease of implementation: an engineer would not struggle to implement it.
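To make the "plug into that function a bunch of times and take the convex hull" step concrete, here is a minimal sketch of the sampling algorithm as described earlier: draw Gaussian directions x, map each to the vertex v+ = A sign(A^T x), keep v- = -v+ as well since zonotopes are symmetric, then take the convex hull of the collected points. The generator matrix, sample count, seed, and the pure-Python hull routine are all my own illustrative choices, not from the lecture.

```python
import random

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            # pop while the turn o -> a -> p is clockwise or collinear
            while len(h) >= 2 and (
                (h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return half(pts) + half(pts[::-1])

def sample_vertex(A, x):
    """v = A sign(A^T x): the zonotope vertex hit by direction x."""
    m = len(A[0])
    s = [1.0 if sum(A[i][j] * x[i] for i in range(2)) >= 0 else -1.0
         for j in range(m)]
    return tuple(sum(A[i][j] * s[j] for j in range(m)) for i in range(2))

random.seed(1)
A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]              # columns are 3 generators in the plane
V = set()
for _ in range(200):               # k sample directions
    v = sample_vertex(A, [random.gauss(0, 1), random.gauss(0, 1)])
    V.add(v)
    V.add((-v[0], -v[1]))          # zonotopes are symmetric: keep -v too
hull = convex_hull(V)
print(len(hull))                   # this Z_A is a hexagon: 6 vertices
```

With these three generators every sampled direction lands on one of the six true vertices, so with a few hundred samples the hull recovers the exact hexagon; in higher dimensions, or with many generators, the hull is only the inner approximation conv(V) that the lecture analyzes.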
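The chain-down C_0 ⊇ C_1 ⊇ ... from the proof can also be sketched numerically: remove one antipodal vertex pair from a zonotope's vertex set and measure the Hausdorff distance between the consecutive hulls. Since the sets are nested, that distance is just how far the removed pair sticks out of the smaller hull, so it suffices to take the removed vertices' distances to the smaller polygon. The hexagon and which pair gets removed are my own toy choices, not the lecture's example.

```python
import math

def seg_dist(p, a, b):
    """Distance from point p to the segment ab."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))      # clamp the projection onto the segment
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def dist_to_polygon(p, verts):
    """Distance from an exterior point p to a polygon (CCW vertex list)."""
    return min(seg_dist(p, verts[i], verts[(i + 1) % len(verts)])
               for i in range(len(verts)))

# C_0: hexagonal zonotope for generators (1,0), (0,1), (1,1), listed CCW.
C0 = [(2, 2), (0, 2), (-2, 0), (-2, -2), (0, -2), (2, 0)]
# C_1: remove the antipodal pair +-(2, 0); the remaining four stay CCW.
C1 = [(2, 2), (0, 2), (-2, -2), (0, -2)]

# Nested hulls: d_H(C_0, C_1) is attained at one of the removed vertices.
dH = max(dist_to_polygon(v, C1) for v in [(2, 0), (-2, 0)])
print(round(dH, 3))                # → 0.894, i.e. 2/sqrt(5)
```

The proof's point is exactly this quantity: when the removed vertex has small simplicial constant, this one-step Hausdorff distance is below delta, and summing over the chain gives the (n/2)·delta bound.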
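Finally, the normal-cone measure omega_v that appears in the simplicial-constant bound is just the probability that a Gaussian direction returns the vertex v, so it can be estimated by Monte Carlo with the same sampling map. As a sanity check (my own example, not from the lecture), take A to be the 2x2 identity, so Z_A is the square [-1, 1]^2 and by symmetry each vertex has omega_v = 1/4.

```python
import random

def sampled_vertex(A, x):
    """v = A sign(A^T x), the vertex hit by direction x."""
    n, m = len(A), len(A[0])
    s = [1.0 if sum(A[i][j] * x[i] for i in range(n)) >= 0 else -1.0
         for j in range(m)]        # the tie (A^T x)_j == 0 has probability 0
    return tuple(sum(A[i][j] * s[j] for j in range(m)) for i in range(n))

random.seed(0)
A = [[1.0, 0.0], [0.0, 1.0]]      # identity: Z_A = [-1, 1]^2
target = (1.0, 1.0)               # the vertex whose cone we measure
N = 20000
hits = sum(sampled_vertex(A, [random.gauss(0, 1), random.gauss(0, 1)]) == target
           for _ in range(N))
omega_hat = hits / N              # should be close to 1/4
```

Plugging such an estimate of omega_v into r(y) = 2(2y)^{1/(n-1)} and the tangent/arcsine expression above would then give a computable (if loose) bound on the simplicial constant at that vertex.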