Hello. My name is Fernando. I'm from Pixar Animation Studios, and today I'm going to talk about the discretization of differential operators on polygonal meshes. Our goal is to make geometry processing tools originally developed for triangle meshes available for polygonal meshes. Our motivation is quite straightforward: many industries rely on polygons instead of triangles. This is true, for instance, at Pixar, where we use quad-dominant meshes to define subdivision surfaces. More broadly, polygonal meshes can be found in any area based on geometric design. Even in engineering applications, polygons are often preferred because they do a better job of conforming to complex geometries. But you might be wondering: why not triangulate the polygons? Unfortunately, splitting the polygons into triangles introduces numerical artifacts, which manifest themselves in different ways. For example, say we have this hexagonal mesh of a torus. We can triangulate the polygons by making the triangles as uniform as possible, or we can apply a recent technique that uses an intermediate triangulation, splitting the polygons with virtual nodes. Either way, computing curvature on these meshes produces drastically different results. This is because the polygons are not planar, and the choice of triangulation changes the shape inside the polygons, and thus the curvature. Now say we have a dense quad mesh generated through subdivision. Here the quads are quite uniform, so we would expect similar results for any triangulation. Instead, these are the results for a conformal parameterization, where the colors indicate the conformal distortion of the quads. In this case, the issue is that we care about the distortion of the quads, not of any individual triangles, so using a triangulation effectively over-constrains the computation.
Due to these artifacts, numerical methods on polygonal meshes have received attention from many disciplines. In graphics, several works have been dedicated to polygonal Laplacians, but they overlook all the other operators used in geometry processing, like gradients, the shape operator for curvatures, and so on. The mechanics community has also proposed several techniques. The closest to our work is the virtual element method, which extends finite elements to polygons by defining basis functions implicitly, hence the name virtual element. However, these methods focus mostly on 2D flat meshes instead of curved surfaces. So in this work, we introduce a systematic construction of differential operators that is valid on surface meshes made of arbitrary 3D polygons. In the process, we improve existing applications by making them compatible with first-order derivatives, and at the same time, we reproduce existing discretizations in the special case of triangle meshes. In practice, what we get is a way to apply geometry processing to polygonal meshes without modifying the specifics of any algorithm. The core of our approach is the discretization of the co-gradient operator. The co-gradient is simply the cross product between the surface normal and the gradient vector. An important property of the co-gradient is that it obeys Stokes' theorem. This allows us to use a boundary integral, making our construction agnostic to the shape of the polygon. By discretizing this boundary integral over a polygonal face f, we obtain an approximation of the co-gradient operator which is exact for linear functions. Since a linear function has a constant gradient, we can also relate our boundary integral to the integral of the normal vector over the polygon, even though we have no knowledge about the surface filling the interior of the polygon.
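As a minimal numpy sketch (not code from the talk; all names here are mine), the Stokes identity behind the construction can be checked directly: the average co-gradient over a face equals the boundary integral of the function along the edges, evaluated with the trapezoid rule. On a planar square with a linear function, the result matches n × ∇u exactly.

```python
import numpy as np

# Planar unit square in the z = 0 plane and a linear function u(x) = a . x
X = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
a = np.array([2.0, -3.0, 0.0])
u = X @ a
n, area = np.array([0.0, 0.0, 1.0]), 1.0

# Boundary integral of u along the edges (trapezoid rule), divided by the area
k = len(X)
cograd = sum(0.5 * (u[i] + u[(i + 1) % k]) * (X[(i + 1) % k] - X[i])
             for i in range(k)) / area

# Stokes: the average co-gradient equals n x grad(u)
assert np.allclose(cograd, np.cross(n, a))
```

The trapezoid rule is exact here because a linear function varies linearly along each edge, which is exactly why the discrete operator reproduces linear functions.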
This integrated normal is a known quantity called the vector area, and it provides a well-defined notion of area and normal for any 3D polygon. By combining the co-gradient and the vector area, we define our discrete gradient as a matrix G that maps function values at vertices to a vector per face. More specifically, the discrete gradient applied to a function is evaluated as the average of the co-gradient of this function over the polygon, rotated by the polygon normal, leading to this simple expression, where the term in highlight corresponds to the column of the matrix G associated with the vertex v. Note that, by construction, our discrete gradient is exact on linear functions. In fact, in the special case of triangles, our result is also equivalent to the gradient of linear basis functions, even though we made no use of basis functions in our derivation. There is, however, one important difference: our matrix has a null space, and this may affect any computation related to the gradient, because it may introduce spurious modes. In order to identify and quantify this null space, we relate our discrete gradient to operators acting on one-forms. A one-form is an alternative way to encode tangent vector fields, available in exterior calculus. Instead of a vector with two coordinates relative to a local frame, a one-form provides a coordinate-free representation that measures the circulation of the vector over the edges. In the case of a gradient vector, the circulation is simply the difference of vertex values, and we use the matrix D to denote the mapping from vertex values to differences of values per edge of the face f. In the continuous setting, the mapping between vectors and one-forms is defined by a pair of dual operators, the musical operators flat and sharp. These operations are inverses of each other, so no information is lost.
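To make the two ingredients concrete, here is a hedged numpy sketch (my own helper names, not the paper's code) of the vector area and the resulting per-face gradient. Even for a non-planar polygon, the gradient of a linear function u(x) = a·x comes out as the tangential part of a, which is the exactness property claimed in the talk.

```python
import numpy as np

def vector_area(X):
    # X: (k,3) polygon vertices in order. Vector area a_f = 1/2 sum_i x_i x x_{i+1};
    # |a_f| is the polygon area and a_f/|a_f| its normal, well defined in 3D.
    k = len(X)
    return 0.5 * sum(np.cross(X[i], X[(i + 1) % k]) for i in range(k))

def gradient(X, u):
    # Average gradient of vertex values u over the polygon: boundary integral
    # of the co-gradient (trapezoid rule per edge), rotated back by the normal.
    k = len(X)
    a = vector_area(X)
    A = np.linalg.norm(a)
    n = a / A
    cograd = sum(0.5 * (u[i] + u[(i + 1) % k]) * (X[(i + 1) % k] - X[i])
                 for i in range(k)) / A
    return np.cross(cograd, n)

# Non-planar quad and a linear function u(x) = a . x
X = np.array([[0, 0, 0], [1, 0, 0.2], [1, 1, 0], [0, 1, -0.1]], float)
a = np.array([0.3, -0.7, 0.5])
g = gradient(X, X @ a)

# Exactness on linear functions: g is the tangential part of a (the component
# along the polygon normal cannot be recovered from vertex values alone).
n = vector_area(X); n /= np.linalg.norm(n)
assert np.allclose(g, a - (a @ n) * n)
```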
However, discretizing vectors at faces and one-forms at edges breaks this bidirectional mapping. To better understand this mismatch, we discretize the musical operators. The discrete sharp is a matrix U that maps one-forms to a tangent vector per face. Mimicking the smooth setting, we assemble the sharp matrix by simply rearranging the terms of our gradient matrix, leading to this expression, where c is the centroid of the face f. On the other hand, the discrete flat is another matrix V that maps a vector at a face to a one-form local to that face, and each row of this matrix V returns the circulation of the vector evaluated at each edge of the face. We can compute this circulation by projecting the tangential part of the vector onto the edge vector. Notice that in one direction, our discrete musical operators don't lose any information: starting from a tangent vector, if we apply flat and then sharp, we get back the same input vector. So the mismatch shows up when we reverse the order of these operations. We quantify this difference by introducing a projection operator. The projection is a matrix P, defined per face, that maps a one-form to another one-form inside the face. This matrix indicates the information lost by a one-form when we extract the part of the one-form associated with a vector. Consequently, we can use P to quantify the part of a function that has zero gradient; in other words, P provides a closed-form expression for the null space of our gradient matrix G. Geometrically, the matrix P measures how much of the information is off the polygon's plane and how much is not linear at all. With everything together, we can finally construct our Laplacian matrix compatible with our gradient. In the definition of the Dirichlet energy, we use the squared norm of the one-form, which expands into two terms: the first one computing the squared norm of the gradient of the function, and the second one penalizing the null space.
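The "flat then sharp is lossless" claim can be sanity-checked with a short sketch (my own function names; the sharp expression is the rearrangement of the gradient terms described above, using the face centroid c):

```python
import numpy as np

def area_normal_centroid(X):
    k = len(X)
    a = 0.5 * sum(np.cross(X[i], X[(i + 1) % k]) for i in range(k))
    A = np.linalg.norm(a)
    return A, a / A, X.mean(axis=0)

def flat(X, v):
    # One-form per edge: circulation of the tangential part of v along the edge.
    k = len(X)
    _, n, _ = area_normal_centroid(X)
    vt = v - (v @ n) * n
    return np.array([(X[(i + 1) % k] - X[i]) @ vt for i in range(k)])

def sharp(X, eta):
    # Tangent vector recovered from a one-form; terms rearranged from the
    # gradient expression, with c the face centroid and m the edge midpoints.
    k = len(X)
    A, n, c = area_normal_centroid(X)
    s = sum(eta[i] * (c - 0.5 * (X[i] + X[(i + 1) % k])) for i in range(k))
    return np.cross(s / A, n)

# Planar pentagon and a tangent vector in its plane
X = np.array([[0, 0, 0], [2, 0, 0], [2, 1, 0], [1, 2, 0], [0, 1, 0]], float)
v = np.array([0.4, -1.1, 0.0])
assert np.allclose(sharp(X, flat(X, v)), v)   # flat then sharp: lossless
# sharp then flat is NOT the identity in general; the residual is what the
# projection matrix P measures.
```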
Note that we penalize these two terms equally. Rearranging these terms, we obtain our polygon Laplacian matrix, which contains in the middle an inner product matrix of this form, defined by the sharp operator U and the projection operator P. Importantly, this matrix verifies all the basic properties expected from a Laplacian, and in the special case of triangles, it reduces to the cotan Laplacian. Our result also shares some similarities with the polygon Laplacian proposed by Alexa and Wardetzky. Even though the individual matrices are different, we were able to show that their expression is equal to ours plus an additional term related to the non-planarity of the polygon. This term is already accounted for by our projection matrix P, so the matrix from Alexa and Wardetzky penalizes it twice. In practice, we noticed that our version of the Laplacian produces a slight improvement in accuracy when solving Poisson equations. Another important difference is that their inner product requires a singular value decomposition to assemble, while our inner product is given in closed form. Our discretization also applies to operators on directional fields. In this case, we care about the direction of the vectors, but not about their norm. For this reason, we need to account for the misalignment between tangent planes, and this can be done by discretizing the connection over a polygon. In our construction, we assign a tangent plane per vertex and per face of our polygonal mesh, and then we encode the discrete connection as the smallest rotation mapping normals from each vertex to each incident face. With the discrete connection, we can collect the vectors u at the vertices of a face f and bring them to a common coordinate system defined at that face, and we indicate these rotated vectors by u superscript nabla.
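As a sketch of how the pieces fit together (my own assembly, assuming an equal weight of 1 on the null-space penalty as stated above), the per-face stiffness is D-transpose times the inner product matrix times D, and on a triangle it should land on the familiar cotan values:

```python
import numpy as np

def face_laplacian(X, lam=1.0):
    # Per-face operators for a polygon with vertices X (k,3); lam weighs the
    # null-space penalty (equal weighting, lam = 1, per the talk).
    k = len(X)
    a = 0.5 * sum(np.cross(X[i], X[(i + 1) % k]) for i in range(k))
    A = np.linalg.norm(a); n = a / A; c = X.mean(axis=0)
    D = np.zeros((k, k))          # vertex values -> one-form (edge differences)
    U = np.zeros((3, k))          # one-form -> tangent vector (sharp)
    V = np.zeros((k, 3))          # tangent vector -> one-form (flat)
    for e in range(k):
        j = (e + 1) % k
        D[e, e], D[e, j] = -1.0, 1.0
        m = 0.5 * (X[e] + X[j])
        U[:, e] = np.cross(c - m, n) / A
        d = X[j] - X[e]
        V[e, :] = d - (d @ n) * n
    P = np.eye(k) - V @ U                      # null-space projection
    M = A * U.T @ U + lam * P.T @ P            # face inner product
    return D.T @ M @ D                         # face stiffness (Laplacian)

# On a right triangle, the construction reduces to the cotan Laplacian:
# off-diagonal entries are -cot(opposite angle)/2, rows sum to zero.
T = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], float)
L = face_laplacian(T)
L_cotan = np.array([[ 1.0, -0.5, -0.5],
                    [-0.5,  0.5,  0.0],
                    [-0.5,  0.0,  0.5]])
assert np.allclose(L, L_cotan)
```

On a planar triangle the projection term P D vanishes, which is why the reduction to the cotan Laplacian holds despite the extra penalty.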
We can then discretize the covariant derivative over a polygon as an operator that applies the same gradient we saw before to each coordinate of these rotated vectors, thus returning a matrix per face. Similarly, we can define a covariant projection operator that accounts for the null space introduced by the discrete covariant derivative, and with these terms combined, we obtain a Dirichlet energy for vector fields, which leads to our vector Laplacian. To compute curvatures, we look at the shape operator, which is defined as the gradient of the surface normals. In the smooth setting, the shape operator has two important properties. First, it returns tangent vectors, as any gradient does. Second, it is a self-adjoint operator, thus represented by a symmetric matrix. We enforce these properties in our discretization by defining the discrete shape operator as a matrix that maps normals at vertices to a matrix per face. We assemble this matrix using this expression, where capital N indicates a matrix with all the normals of the vertices incident to the face f. Notice how in our construction we are symmetrizing the gradient of the normals, thus making the shape operator symmetric, and with the outer terms we are making sure that the shape operator is orthogonal to the face normal. We can then extract principal curvatures and directions from the shape operator by computing its eigenvalues and eigenvectors. Now, let's see some applications. We start with shape editing. Here we have a rest mesh with points x, and our goal is to blend this shape towards another pose. To do so, we set a Jacobian matrix J per face of the polygonal mesh, and this matrix indicates the amount of transformation that each face should receive. In this specific example, we compute J using our gradient applied to the points of the target pose, but we do that only for the faces marked in red.
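Here is a hedged sketch of the shape-operator recipe (my own code; I symmetrize the gradient of the normals and project it tangentially, as the talk describes). Because vertex normals on a sphere of radius r are linear in position (n = x/r), the discrete gradient is exact there, and the eigenvalues recover the principal curvatures 1/r:

```python
import numpy as np

def grad_matrix(X):
    # 3 x k matrix G_f: vertex values -> average gradient over the polygon
    k = len(X)
    a = 0.5 * sum(np.cross(X[i], X[(i + 1) % k]) for i in range(k))
    A = np.linalg.norm(a); n = a / A; c = X.mean(axis=0)
    G = np.zeros((3, k))
    for e in range(k):
        j = (e + 1) % k
        u_e = np.cross(c - 0.5 * (X[e] + X[j]), n) / A   # sharp column
        G[:, e] -= u_e
        G[:, j] += u_e
    return G, n

def shape_operator(X, N):
    # Symmetrized, tangentially projected gradient of vertex normals N (k,3)
    G, n = grad_matrix(X)
    Pi = np.eye(3) - np.outer(n, n)
    GN = G @ N
    return Pi @ (0.5 * (GN + GN.T)) @ Pi

# Non-planar polygon on a sphere of radius 2; vertex normals are x / r,
# a linear function of position, so the construction is exact here.
r = 2.0
P = np.array([[0.3, 0.1, 1], [-0.1, 0.4, 1], [-0.3, -0.1, 1], [0.1, -0.35, 1]], float)
X = r * P / np.linalg.norm(P, axis=1, keepdims=True)
S = shape_operator(X, X / r)
kappa = np.sort(np.linalg.eigvalsh(S))
assert np.allclose(kappa, [0.0, 0.5, 0.5])   # curvatures 1/r, plus a zero along n
```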
For all the other faces, we set J to the identity, so those faces should not deform. We then compute the deformed points, called y, by minimizing a slightly modified version of the Dirichlet energy that includes the matrix J in both terms. In the first term, we penalize the tangential deformation inside each polygon, while in the second term, we penalize the deformation in the null space, that is, deformations off the plane of the polygon or nonlinear deformations. This avoids spurious modes. With our discrete operators, we can also compute nonlinear parameterizations for polygonal meshes. This was not possible before with previous polygonal methods. Here we minimize any choice of distortion function psi, and we combine it with our projection term. So here's a result using the as-rigid-as-possible distortion measure to parameterize a quad mesh, and here's another example using the symmetric Dirichlet energy to parameterize a hexagonal mesh. We point out that if we were to triangulate the polygons for these parameterizations, the evaluation of the distortion function and its derivatives would become more costly, because there are more triangles than polygons. Just for reference, in the case of the as-rigid-as-possible parameterization, the triangulated version took more than double the time required by the quad mesh version. Since we know how to compute curvatures on polygons, we can also generate suggestive contours. The idea here is to multiply the shape operator by the camera direction, which returns the so-called radial curvature. Then we can extract contours by analyzing the gradient of these values. So for this application, we are combining the shape operator with our gradient. Grooming is another great example where we combine the operators introduced in this work.
Here we compute a smooth groom, interpolating both the height and the orientation of each segment forming these handle curves, indicated in blue. Therefore, we need a scalar and a vector Laplacian combined. Here's another example showing the same interpolation used to groom feathers moving over a wing mesh, and here it is easier to see how these handles in blue are made of several segments, which allows us to create these bent shapes. We also consider other common geometry processing applications in our paper, including conformal mapping, line fields for texture synthesis, and the heat method. In particular, for the heat method, let me say that a good rule of thumb is to set the diffusion time step using the largest polygon diameter, instead of the mean edge length. This ensures that the diffusion connects any pair of vertices sharing a polygonal face. In conclusion, we showed how to make geometry processing compatible with polygonal meshes by introducing polygon-based differential operators. This was possible because our formulation is based on a new gradient, derived from the co-gradient, making our construction agnostic to the shape of any polygon, or even to any triangulation of the polygon. This gradient also comes with a projection operator that quantifies the null space and eliminates spurious modes. As future work, there are a couple of interesting directions to pursue. Since our discretization can be seen as an extension of the virtual element method to surface meshes for the case of linear functions, one possibility is to generalize this construction even further by considering higher-order virtual elements. Another direction is the discretization of differential operators on polyhedral meshes. This would allow geometry processing on volumetric domains with meshes made of hexes or even prism-like cells, basically any cell in 3D that may have non-flat sides. Thank you for your attention.
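The heat-method rule of thumb above can be sketched in a few lines (my own helper, assuming the standard heat-method choice t = h squared, with h taken as the largest polygon diameter rather than the mean edge length):

```python
import numpy as np
from itertools import combinations

def heat_time_step(points, faces):
    # t = h^2, with h the largest polygon diameter: the largest vertex-to-vertex
    # distance within any face, so diffusion couples all vertices of each face,
    # including pairs not joined by an edge (e.g. quad diagonals).
    h = 0.0
    for f in faces:
        for i, j in combinations(f, 2):
            h = max(h, np.linalg.norm(points[i] - points[j]))
    return h * h

pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                [2, 0, 0], [2, 1, 0]], float)
faces = [[0, 1, 2, 3], [1, 4, 5, 2]]
# Largest diameter is a quad diagonal of length sqrt(2), so t = 2
assert np.isclose(heat_time_step(pts, faces), 2.0)
```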