Thank you for inviting me to give a talk here. It's nice to be back after a number of years, so I'm happy about that. The title is "Discrete minimal surface algebras", and I will try to describe a little bit what we have been doing over the last ten years or more related to finding solutions to equations that somehow relate to minimal surface equations; I will tell you exactly what I mean by that. We have taken a number of different approaches in different contexts, and I will try to outline these, and hopefully you will see some equations that you have already seen during this workshop. So what do we want to do? We want to do discrete minimal surfaces, quantum minimal surfaces, noncommutative minimal surfaces, whatever you want to call them, and there are several motivations. First of all, as we have already seen today, equations appeared that you may not even have realized were minimal surface equations, and these kinds of equations appear in physics, like in the IKKT model we heard about, and also in membrane theory in the matrix regularization. But there is also a general mathematical interest, I think, in seeing to what extent we can do noncommutative minimal surfaces. Is there some kind of nice theory, is there nothing like the classical situation, or do we have some results? What I am talking about today is joint work with a number of authors: Jaigyoung Choe in Korea, Jens Hoppe here, Gerhard Huisken, and, at the end, Maxim Kontsevich. So, some outline of what I will do. First I will tell you how you can formulate Kähler geometry in terms of Poisson brackets, a little bit like the previous talk you have seen, and how that leads to equations for minimal surfaces written as Poisson bracket expressions.
Then I will go to the paper from which I borrowed the title of this talk, "Discrete minimal surface algebras", trying to solve these equations that you have in front of you. I will then do something slightly different but related: I will try to solve these equations in the Weyl algebra, so u and v are operators, or algebra elements, whose commutator is one, and the x_i's are the elements of the Weyl algebra you are trying to find to solve these equations. And then I will end by saying something about a noncommutative catenoid, solving these equations, which you now recognize from the last talk, and connect that a little bit to the other two approaches. These types of equations have also been studied earlier, in particular the top one and the bottom one, by Connes and Michel Dubois-Violette; they call them Yang-Mills algebras, this one being the Yang-Mills algebras and then some inhomogeneous Yang-Mills algebras. So they have been studied. In this talk I will not focus so much on the mathematical aspects of these algebras; I wanted to give you an overview of how we try to solve these equations. In order to motivate why these equations pop up, I would like to quickly recall how we may formulate Kähler geometry in terms of the Poisson algebra generated by isometric embedding coordinates into some ambient space. You can do this more generally; we wrote a number of papers here. If you have an arbitrary n-dimensional Riemannian manifold you can rewrite the geometry in terms of an n-bracket, a multilinear bracket, instead, but for Kähler manifolds, or para-Kähler, almost Kähler and so on, you can do it with Poisson brackets. So how do we do this? On a Kähler manifold we know that the symplectic form, or the Poisson structure, is intimately related to the metric, and this compatibility you can write in many different ways.
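As a numerical aside on the double-commutator equations mentioned in the outline above, of the form sum_j [[x_i, x_j], x_j] = mu_i x_i: the two matrix solutions that appear later in the talk, the fuzzy sphere and the fuzzy torus, can be checked directly with numpy. This is my own sanity-check sketch, not from the talk; the spin-1 normalization and the choice n = 5 are assumptions for illustration.

```python
import numpy as np

def comm(a, b):
    return a @ b - b @ a

def dmsa_eigenvalues(X, tol=1e-10):
    """For each X_i, check sum_j [[X_i, X_j], X_j] = mu_i * X_i and return the mu_i."""
    mus = []
    for Xi in X:
        D = sum(comm(comm(Xi, Xj), Xj) for Xj in X)
        # read off the proportionality constant at the largest entry of X_i
        idx = np.unravel_index(np.argmax(np.abs(Xi)), Xi.shape)
        mu = (D[idx] / Xi[idx]).real
        assert np.allclose(D, mu * Xi, atol=tol)
        mus.append(mu)
    return mus

# Fuzzy sphere: the spin-1 representation of su(2) (Hermitian spin matrices).
s = 1 / np.sqrt(2)
S = [s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex),
     s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex),
     np.diag([1.0, 0.0, -1.0]).astype(complex)]
mus_sphere = dmsa_eigenvalues(S)

# Fuzzy torus: clock and shift matrices with G H = q H G, q^n = 1.
n = 5
q = np.exp(2j * np.pi / n)
G = np.diag(q ** np.arange(n))                     # clock matrix
H = np.roll(np.eye(n, dtype=complex), 1, axis=0)   # shift matrix
assert np.allclose(G @ H, q * H @ G)
X = [(G + G.conj().T) / 2, (G - G.conj().T) / (2j),
     (H + H.conj().T) / 2, (H - H.conj().T) / (2j)]
mus_torus = dmsa_eigenvalues(X)
```

In this normalization the spin matrices give mu_i = 2, matching the eigenvalue quoted for the fuzzy sphere later in the talk, and the clock-and-shift matrices give mu_i = 2 - q - 1/q = 4 sin^2(pi/n), the q-dependent eigenvalue of the fuzzy torus.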
I have chosen to pick a particular way of writing it in local coordinates: g^ab is the inverse of the metric tensor g_ab, then you have the Poisson bivector and the metric here, and this equation holds on a Kähler manifold with gamma equal to one, but I will keep gamma here because that is a degree of freedom which is convenient to have as we continue. For the moment gamma is one; I will show you on the next slide why I put this gamma in, although for a Kähler manifold it is one. It has to do with the choice of Poisson bracket on your manifold. So the claim is that if the Poisson structure is compatible with the metric in this way, with gamma possibly being nonzero, you can reformulate all of Riemannian geometry in terms of Poisson brackets of embedding coordinates into some ambient space; that is the statement, and we do it explicitly in these papers. Now let us go to surfaces, embedded surfaces, and for simplicity we will choose the ambient space in which these surfaces are embedded to be R^m, because that is simple, and of course we know we can always do it by the Nash embedding theorem, and it will be the case in our examples. Surface means two-surface, a surface of real dimension two, which gets an induced metric from the Euclidean metric. Now, for surfaces, if we take an arbitrary density rho we can introduce a Poisson bracket on the space of functions like this.
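As a concrete illustration of such a density-weighted bracket (my own example, not from the slides): take the unit sphere in R^3 with the canonical choice rho = sqrt(g), i.e. gamma = 1. Then one can verify symbolically both the bracket itself and the double-bracket formula for the Laplace operator that comes up next, Delta(f) = sum_i {x_i, {x_i, f}}, using the fact that Delta x_k = -2 x_k on the unit sphere.

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

# unit sphere embedded in R^3: polar angle v, azimuthal angle u
x = [sp.sin(v) * sp.cos(u), sp.sin(v) * sp.sin(u), sp.cos(v)]

# induced metric: g_uu = sin(v)^2, g_uv = 0, g_vv = 1, so sqrt(det g) = sin(v)
rho = sp.sin(v)   # the canonical density rho = sqrt(g), i.e. gamma = 1

def pb(f, g):
    """Density-weighted Poisson bracket {f, g} = (f_u g_v - f_v g_u) / rho."""
    return (sp.diff(f, u) * sp.diff(g, v) - sp.diff(f, v) * sp.diff(g, u)) / rho

# Laplace-Beltrami operator as a double Poisson bracket with the embedding
# coordinates: Delta(f) = sum_i {x_i, {x_i, f}}.  On the unit sphere the
# embedding coordinates satisfy Delta(x_k) = -2 x_k.
for xk in x:
    lap = sum(pb(xi, pb(xi, xk)) for xi in x)
    assert sp.simplify(lap + 2 * xk) == 0
```

The same computation with a different rho shows where the gamma factors come from: the double bracket then reproduces the Laplacian only up to the correction terms involving gamma.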
Now there is of course a natural, canonical choice of this density, which is simply the square root of the determinant of the metric, the Kähler choice one could perhaps call it, and now we introduce this gamma, which is the ratio of the square root of the determinant of the metric and the rho you choose for the Poisson bracket, so it tells you how much your choice deviates from the canonical one. And if you have this Poisson bracket and this theta here, you realize that you have this relation again, but now gamma does not have to be one, it could be something else. For surfaces this is just another way of writing the cofactor expansion of the inverse of the metric; it is just this antisymmetric sum of products of the matrix elements, so that is just an identity. Now, to come to minimal surfaces, let us apply this to the Laplace operator on the surface. We know how to write the Laplace-Beltrami operator in local coordinates like this, and we can write it in different ways; I chose two different ways here. The first way of writing it is as a double Poisson bracket with the embedding coordinates x_i, and here you see you have to introduce these gammas if gamma is not one. You can also write it in terms of the local coordinates on your surface, u_1 and u_2, or u and v as they will also be called, and you can write the Laplace operator like this, with sums over a and b here of course. Some particular examples: when gamma is equal to one it becomes particularly simple, then it is just this double Poisson bracket equation for the Laplace operator in terms of the embedding coordinates. Now if you have a conformal metric on your surface, just proportional to delta_ab, and you choose rho to be equal to one, that is the choice where the Poisson bracket of the local coordinates u and v is equal to one, and you also get that gamma
is sqrt(g), which is equal to this conformal factor e. Then the second formula on the previous slide, the one which wrote the Laplace operator in terms of the local coordinates, simply becomes this: it is just proportional to the sum of two double Poisson brackets, with one of the coordinates here and the other coordinate there, assuming a conformal parametrization of your surface. So these are the two equations that we will use to motivate what we introduce as noncommutative minimal surfaces. Just a few remarks before we go on, to see how things look. I claimed that everything can now be written in terms of Poisson brackets of the embedding coordinates; how does it actually look in practice? Well, for instance, you can compute the Gaussian curvature of a surface in terms of the embedding coordinates, and the formula for the Gaussian curvature is this: it is just a sum of products of iterated Poisson brackets. That is how all these formulas look, just to give you an idea. And of course, one of the original motivations here was to find out how we can quantize these kinds of geometrical systems; we sort of know how to quantize them if we can write things in terms of Poisson brackets, because we can just replace the brackets by commutators, naively. So that was one of the original motivations for doing this, but of course it has an independent interest as well. One more remark, not so much related to this talk: one could of course turn the question around. When you see that you have all these formulas of Riemannian geometry written in terms of Poisson brackets of some generators x_i, some embedding coordinates, you can ask: if I start with a Poisson algebra, can I introduce Riemannian geometry in a consistent way, in a Poisson algebra, without referring to any underlying manifold? We did this, I did this with a student, and yes, you can find some simple conditions for a Poisson
algebra that allow you to do Riemannian geometry in a Poisson algebra. So that is just a remark, but you can turn the question around, which is quite nice. Now, with the help of these formulations of the Laplace operator, we would like to formulate the equations for a minimal surface. It is well known that for a minimal surface in Euclidean space the equation you have for the embedding coordinates is simply that the Laplace operator applied to them is equal to zero; it is a harmonic embedding. In terms of our Poisson bracket formulation this is simply this equation, which of course should say, for i = 1, 2, ..., n, that this is zero, but we can formulate it in this way, coming very close to the equation we saw in the previous talk. Or we can choose the other Poisson bracket, where the density was equal to one rather than sqrt(g), and then we see that the conformal factor 1/e we had in front does not matter if you want the result to be zero, so it is actually not equal here but proportional, and you can formulate the condition that the surface is minimal in terms of the parameters u and v like this. This is of course assuming that the metric is conformal, that these tangent vectors are orthogonal and have the same length. Now, we have also been interested in equations that look similar to these, describing minimal surfaces in spheres. From a physical point of view these types of equations arise if you do a regularization of membrane theory: you have time-dependent equations, you choose a particular gauge, and you end up with equations similar to this one that you have to solve. These equations, with the Laplacian of x_i equal to minus 2 x_i, are equations where you now assume that the length of the vector is one, so you are in S^(d-1) (it should be d - 1 up here as well), and if, on top of that, these equations hold, you know you have a minimal surface
embedded in S^(d-1), now also written in terms of Poisson brackets. So now we want to take these equations, replace Poisson brackets by commutators, and study whether or not we can find solutions, and by solutions I mean anything: algebras, matrices, operators, whatever; we try to find solutions. What do we expect? Well, from these equations we expect something very rich already when d is equal to four, because when d is equal to four these equations are the equations for a minimal surface in S^3. By a well-known paper by Lawson we know that minimal surfaces of arbitrary genus exist in S^3, so of course this equation should have all those surfaces as solutions, and if you do some noncommutative version of it you also expect, somehow, a rich solution space. So already d = 4 should show a rich structure, and you might even say that for d = 4 it might be impossible to solve the noncommutative equations completely, because there will be so many objects lying around. But that is good, because we want many objects, to find examples. So now, with this introduction of how you can rewrite the Laplace operator in terms of Poisson brackets and the different types of minimal surface equations, let us go to noncommutative or discrete versions, using of course this correspondence to define our objects. [Question: why minus two?] That is what you get when you derive the minimal surface equation; it is the dimension of the surface, two. You have a Lagrange multiplier, you vary the equations, and you get something like that. So this is an overview of one paper we wrote, from which I borrowed the title of this talk, "Discrete minimal surface algebras", and those equations correspond to minimal surfaces in spheres. Now we allow this eigenvalue here, or whatever you
call it, to be arbitrary (there should be an i on the mu here, sorry). It was minus two on the last slide, but just to have more room and flexibility we allow arbitrary values, and we call the collection of these mu_i the spectrum, just to have something to say. We look at this equation in different ways. Of course these equations make sense in a Lie algebra: you let the x_i be elements of a Lie algebra, and you can make sense of the equation. That is one way of looking at it: given a Lie algebra and a subset of elements, which I call curly X, you want this equation to be solved. [Question: is it a real or complex Lie algebra?] It does not matter so much now. Of course, when we later want to find representations with the x_i Hermitian matrices, it matters, but here it does not; mu_i can be complex from the start. It will of course be real if you have Hermitian matrices and take the conjugate of both sides, but for now we leave it complex. [Question: is the multiplication of elements ordinary multiplication, or can you have a star product?] Right now there is no multiplication; it is just a Lie algebra, so the only multiplication you have is the Lie bracket. However, the next thing to say is that you can of course consider an associative algebra: you take the free associative algebra on x_1, ..., x_m, you generate an ideal by this relation, you quotient out, and you get an associative algebra, which is like the enveloping algebra of your Lie algebra. That is another way of looking at these equations, and if you want to find some kind of basis and so on, to use the diamond lemma and all these things, it is quite useful to do that. [Comment from the audience about using a cube bracket.] Well, we wanted
to find operators, really, so then no; but of course you could consider these equations with any bracket, it does not even have to satisfy the Jacobi identity. Whether that is interesting or not, I do not know. So you can do several algebraic things with these objects, and Connes and Dubois-Violette, for the case where the right-hand side is zero, proved a lot of properties, like Koszulity and other homological properties of these algebras. As I said, I will focus on solutions to these equations. So what kinds of solutions do we have? Well, we have for instance Clifford algebra solutions, going away now from the Lie algebra solutions I just mentioned; you will see those later on. If I have a Clifford algebra satisfying these relations, I can easily check that if you do this sum here you end up with something proportional to e_i again. This commutator is really the commutator, of course, not the anticommutator; then you would just have zero. So representations of Clifford algebras, or just Clifford algebras as they are, solve these equations. Lie algebras are perhaps more natural: if you take an orthonormal basis with respect to the Killing form of a semisimple Lie algebra, then you can easily see that the x_i are solutions to the equations, simply because the double commutator is a product of two structure constants, and they will just give the Killing form if the basis is orthonormal. So you have a solution. Now let us look in a little more detail at some other solutions you get from Lie algebras, and let us look at sl(n), just to have something concrete. In sl(n) you have alpha_1, ..., alpha_(n-1) being the simple roots, and we have to choose some kind of basis, so we choose elements e_alpha, e_(-alpha) and h_alpha to satisfy these commutation relations, quite a normal choice of
basis, where these h_alphas are part of the Cartan subalgebra, fulfilling that this linear functional on h is just the inner product with respect to the Killing form. In sl(n) all roots have the same length, so let us call it l^2, the squared length of a root. Now, having these roots, we can take the plus and minus combinations of the negative and positive root vectors like this, and these elements will fulfill nice double commutator relations. It is maybe not so important exactly what you get here, but you see that if I take one of these e_alphas, commute it with e_beta twice, I get back something proportional to e_(alpha +- beta), where of course alpha +- beta has to be a root. What happens is that e_alpha with e_beta gives you the root vector e_(alpha + beta), then you get the term where you subtract beta again and go back to alpha, so there is no mystery here, and you can compute all of these; not so interesting. So what can we do? There are many ways of choosing subsets of these elements to get solutions to the equations. For instance, you can choose positive roots like this, and I think you can even choose the signs independently, and in this case every double commutator like this is proportional to x_i again, for all these elements; that is one way of constructing solutions. There are other ways: for instance, here you include some elements of the Cartan subalgebra and take every root with a plus and a minus like this, and now it is no longer true that every double commutator is proportional to the element again, but things cancel and work out, so you get solutions to the double commutator equation where it acts with eigenvalue zero on the Cartan subalgebra and with some other eigenvalues off the Cartan subalgebra. So, just to give some examples that there are many ways of choosing these elements to get solutions. Now let us consider the case where d is equal to four; remember that when d
equals four, you have four operators, or four algebra elements, and the equations correspond to minimal surfaces in S^3, which we expect to be rich already. Then we make some simplifications: we assume that at least two of these eigenvalues are equal, and the other two are also equal; we call them mu and rho here. And we complexify these Hermitian matrices x_i into Lambda and T. Of course you do not have to talk about matrices, you could talk about star algebras and so on, but we talk about matrices to be concrete; that was also an original motivation for doing this. Now you just rewrite the double commutator equations in terms of these new variables and you get an expression like this, and what you can note is that things drastically simplify if Lambda and T are normal operators. Normal means that Lambda commutes with Lambda-dagger and T with T-dagger; then this term goes away, these two terms here are actually equal, these two terms are also equal, and this goes away, so you just have one term on the right-hand side. That makes things simpler. So what do we do? When Lambda is normal (we do not assume T to be normal for now), it is unitarily diagonalizable, so we can always find a basis where Lambda is diagonal, and then let me show you how we think of the solutions of these equations in terms of a directed graph, which turns out to be useful. What do I mean? Well, it is like the adjacency matrix of a graph: you let G, with vertex set V and edge set E, be a directed graph whose vertices are the eigenvalues of Lambda, and there is an edge between two eigenvalues if and only if the corresponding matrix element of T is nonzero. So the graph encodes the eigenvalues of Lambda; you draw the graph in the complex plane, with the eigenvalues as the vertices, and then you look at T and draw the edges from the respective
eigenvalues to get this graph. Although the graph as depicted here does not encode the values of the matrix elements of T, it encodes the structure, which turns out to be useful; this is just an example we will see later. We have used this simple representation of solutions to matrix equations in a number of different situations where it was very useful, in some papers; for instance, in this paper here it was crucial in order to classify all possible finite-dimensional Hermitian representations of some cubic algebras, which was much trickier to do without graphs than with graphs. You can ask questions like: if I have some equations and a graph of this type, what kind of restrictions do the equations put on my graph, can I derive rules for my graph to be a solution, and so on. Now, the fuzzy sphere solves these equations. This up here is just a concrete representation of su(2), with some choice of normalization which you see here, and this is a solution where the eigenvalues are actually equal to two, with Lambda equal to this, which is already diagonal, and T equal to this matrix. If we depict it in our graphical way it will be a simple string, like this, where the eigenvalues of Lambda are just these real values on the real line and we connect them one by one. Now, these equations also have a rotational symmetry, which I have not talked much about, so if you want you can rotate the eigenvalues from the real line out into the complex plane. But you can get other solutions from su(2): for instance, for arbitrary complex numbers z and w you can construct these two matrices, which have different eigenvalues, related to these of course, and a different type of representation graph here, and these representations are not equivalent; they are different. Now, the fuzzy torus is also a solution: it is generated by matrices g and
h, which satisfy this commutation relation; these are the clock and shift matrices we know, and q is a root of unity here, so q^n = 1. You get a solution, in principle, by taking g and h to be Lambda and T: g is the diagonal clock matrix and h is the shift matrix, and this gives you a solution to the equations with this eigenvalue, depending on q, and the graph looks like this; the vertices are of course the n different roots of unity. Now you start wondering: these equations should contain solutions corresponding to surfaces of arbitrary genus in S^3. The torus here has genus one, and we saw that the fuzzy sphere has genus zero; of course you would like to find solutions, maybe explicit solutions, of arbitrary genus. In general it is a tricky problem to find explicit things of higher genus, but they should be in there somewhere, and we do not understand this completely at all; maybe it is also very difficult because there are so many solutions in there. But let us try something else: let us take sl(3), and let us take this t_1 to be an element of the Cartan subalgebra (typically we choose the ones that are diagonal matrices), and Lambda is phi(t_1), where phi is a representation of sl(3) by anti-Hermitian matrices. If you take these two elements and just compute, you get a solution of the equations with these eigenvalues here. For instance, if phi is the (n, 0) highest-weight representation, you recognize the weight diagram of this kind of representation (this is n = 3, perhaps), and then you have to look at this matrix here to see how the vertices are all connected with directed arrows; not so important. And of course you can go on: you can take the n to infinity limit and you get a sequence of matrices, somehow inside sl(3), with perhaps some limit; this I do not know. So here, to
conclude: it is not so difficult to find explicit solutions that you can actually work with for these equations, which might be useful in physics or not, I do not know, but it is surely interesting that you have this structure in mathematics. And you can actually say a lot more in general about the representation theory and the connection between graphs and representations, irreducible or not irreducible, and so on, which I have chosen not to present. [Question: you are assuming q is not equal to one, because mu is equal to zero if q is one?] No, q is not equal to one, that is right. [Comment: what you construct here is a projection of CP^2.] Okay, interesting. [Question: have you considered BPS-type solutions, meaning solutions that also satisfy the Nahm equations, with the extra terms stabilizing, that is, the commutator of x_i and x_j equal to epsilon_ijk times x_k, or something like that?] No, I have not looked at that; for the fuzzy sphere, yes, I think so, but beyond that I do not know. Okay, so that was one set of equations to solve; now let me proceed to another set of equations, which are maybe not directly relevant for physics as equations but still very interesting, and for which you can actually prove more: these are the noncommutative minimal surfaces, which we look for in the Weyl algebra, starting from the form where you write the Laplace operator in terms of local coordinates. So we return to the equations defining a minimal surface in R^m, and we think of a parametrized minimal surface in R^3, and we know that the minimal surface equations can be written like this if the metric is conformal. Recall that this choice of Poisson bracket was {u, v} = 1, and the
question we had is: what happens if we naively translate these equations into noncommutative algebras? Do we get something nontrivial, or does it become too simple; can we find solutions? So how do we think about this? Well, classically we can find a lot of explicit parametrized minimal surfaces in R^3; that is a classical subject. So we thought maybe we can find analogues of these surfaces. As I said, in terms of physics we may not be solving the most relevant equations, but maybe this will help us understand what to expect from the other equations, and it also turns out to be a nice theory in itself. So what do we do? Since the classical Poisson bracket is {u, v} = 1, we start with an algebra generated by u and v satisfying that the commutator is proportional to one; this is the Weyl algebra, and we denote it by A_hbar. For more technical reasons we also need its fraction field, where you consider also the inverses of elements, of polynomials in u and v. And now we start from an extremely naive definition, not very elegant, but very close to the classical situation. What do we do? We take a tuple of elements of the Weyl algebra, or, if you want, an element of the free module of rank n over the fraction field; the fraction field is not so important, we will later see that we can find solutions in the Weyl algebra itself, but to make the general statements we need the fraction field. We call it a noncommutative minimal surface if the elements are all Hermitian and, in some sense, satisfy the minimal surface equation; but remember that the minimal surface equation only holds if we have a conformal parametrization, so with the help of derivatives we write down these conditions very naively. What are the derivatives with respect to u and v? They are simply the commutators with v and u, which in the classical situation, where you have the Poisson bracket, are
exactly the derivatives with respect to u and v; here we have them as inner derivations of the algebra. We write down the fact that the lengths of x'_u and x'_v are the same, and the fact that they are orthogonal, the off-diagonal terms, so you have this equal to zero, somewhat symmetrized to be a real condition. [Comment: the two f's are the same?] Yes, the two f's are the same, thank you, they are just typeset differently. So these are the equations we want to solve: you want to find x_i, elements of the Weyl algebra, for which this is true. You could imagine doing a lot of fancy algebraic things here; you could define the algebra generated by these x_i, completed to include all possible derivatives of the x_i, and so on, but this is not the route I want to take; I want to show you theorems about solutions. So this is what you get: a noncommutative Weierstrass theorem. Recall, for those of you who do not remember, that the Weierstrass representation theorem tells you how to write every minimal surface in R^3. It goes two ways. First of all, you choose two elements f and g, which need to be holomorphic, and here we write that as d-bar f and d-bar g equal to zero, where d and d-bar are the usual complex combinations of d_u and d_v. So if you think about it, these are quotients of polynomials in lambda only, and not lambda-dagger. You take such elements and you write down x_1, x_2, x_3 like this. Now, this integration is just the anti-derivative of polynomials, so there is nothing fancy going on; if you can integrate these polynomials, you know that these elements satisfy the Laplace equation and also the restriction that you need, that the parametrization is conformal. So this gives a way to generate infinitely many explicit solutions to the equations in the algebra. And thank you, I did not write what lambda is: lambda is u + iv; let me just write that on the board.
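For comparison, the classical Weierstrass recipe just described can be verified symbolically. The following is my own sympy sketch (not from the talk), using the standard data f = 1, g = lambda, which generates the Enneper surface discussed next: the resulting embedding coordinates x_k are harmonic, and the parametrization is conformal.

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
z = sp.symbols('z')   # plays the role of lambda = u + i v

# Classical Weierstrass data (f, g), both holomorphic; f = 1, g = z
# gives the Enneper surface.
f, g = sp.Integer(1), z
phi = [f * (1 - g**2) / 2, sp.I * f * (1 + g**2) / 2, f * g]

# x_k = Re( anti-derivative of phi_k ), evaluated at z = u + i v
x = [sp.re(sp.integrate(p, z).subs(z, u + sp.I * v).expand(complex=True))
     for p in phi]

xu = [sp.diff(c, u) for c in x]
xv = [sp.diff(c, v) for c in x]

# harmonic: Laplace(x_k) = 0 for each embedding coordinate
for c in x:
    assert sp.simplify(sp.diff(c, u, 2) + sp.diff(c, v, 2)) == 0

# conformal: x_u . x_v = 0 and |x_u|^2 = |x_v|^2
assert sp.simplify(sum(a * b for a, b in zip(xu, xv))) == 0
assert sp.simplify(sum(a * a for a in xu) - sum(b * b for b in xv)) == 0
```

Both checks follow from the identity phi_1^2 + phi_2^2 + phi_3^2 = 0 built into the Weierstrass data; the noncommutative theorem in the talk is the statement that this construction, suitably corrected, survives when lambda lives in the Weyl algebra.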
So that should of course have been up there; thank you. Lambda is the complex combination of u and v, and we write everything in terms of lambda and the real part, so that is where you get a mix of all the lambdas and u's and v's when you write it out. So it is nice that you can generalize the Weierstrass representation, more or less without change, to the noncommutative setting. One more thing: the nontrivial part is perhaps not to show that this is a solution, but the other way around, that all solutions in the Weyl algebra can be written this way. There is another classical representation theorem which tells you how to obtain minimal surfaces, and it is the following: this representation formula depends on choosing just one polynomial, or quotient of polynomials, f in lambda, like this, and constructing this; then if you integrate these phi_1, phi_2, phi_3 into x_i you get a minimal surface. It is just another way; usually in a book about minimal surfaces you find this, and given any polynomial you find a minimal surface. Now let me go to some examples. You can find noncommutative versions of the Enneper surface; this one is constructed this way: in principle we choose f to be just lambda to some power, and then we write down what this will be. You can integrate it explicitly, and you just get the real parts of these things; as usual (I did not say this) the real part is just one half of the element plus its Hermitian conjugate. Now, n = 3 corresponds to the Enneper surface, and when you write this out you will recognize, if you know the parametrization of the Enneper surface as you find it in Wikipedia or in some book, this part here, but without the h-bars. Now, what this theorem provides you with are the sort of nontrivial correction
terms that you have to add in order to compensate for the non-commutativity. So this theorem gives you these correction terms automatically, if you want to look at it that way; unless these corrections are there, it will not satisfy the equations. Of course you can go on and produce a lot more examples; you can really produce an infinite number of explicit examples if you like. Okay, so before going to the next part, you can wonder a little bit here: if I have my non-commutative minimal surface in this setting, or if I have it in the other setting with the commutators with the x's... I mean, classically they are equivalent: you can solve the equations in local coordinates, you can write them in the embedding coordinates, and they are equivalent. And we have many, many solutions in this setting; can we use them to produce solutions to the other setting, which perhaps contains the more physically relevant equations, now that we have so many examples here? Classically it is just a change of coordinates, but in the non-commutative world it is not as simple. But we have tried, and we have succeeded to some extent, and that's this next part, together with Jens and Maxim, which we are writing up now. We did the following; let me be a little more precise about the things I just said. If we have this Poisson bracket here, with the sort of natural one over square root of g factor, we get the minimal surface equations to be this; and if we have this Poisson bracket, with just a one here, we get this. The two are related by a change of coordinates, and this comment is what I just said: we want to produce solutions to the top equation from the bottom equation in the non-commutative setting as well. So assume that we have a parametrized minimal surface, and assume that we reparametrize it in terms of u tilde and v tilde, with a Jacobian which is just equal to the square root of g.
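To make the first of these equations concrete, here is a small symbolic check (my own sketch, not code from the talk): for the classical Enneper surface, the minimal surface equations written with the one over square root of g Poissonson bracket, sum over j of {x_j, {x_j, x_i}} = 0, do hold.

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

# Classical Enneper surface in its standard conformal parametrization
x = [u - u**3/3 + u*v**2,
     -v + v**3/3 - u**2*v,
     u**2 - v**2]

xu = [sp.diff(c, u) for c in x]
xv = [sp.diff(c, v) for c in x]

# Conformality: |x_u|^2 = |x_v|^2 and x_u . x_v = 0
E = sp.expand(sum(c**2 for c in xu))
G = sp.expand(sum(c**2 for c in xv))
F = sp.expand(sum(a*b for a, b in zip(xu, xv)))
assert E == G and F == 0

# For a conformal patch sqrt(g) = E; here E factors as (u^2 + v^2 + 1)^2
sqrtg = sp.factor(E)

def pb(f, h):
    """Poisson bracket with the natural 1/sqrt(g) factor."""
    return sp.cancel((sp.diff(f, u)*sp.diff(h, v)
                      - sp.diff(f, v)*sp.diff(h, u)) / sqrtg)

# Minimal surface equations as double Poisson brackets:
# sum_j {x_j, {x_j, x_i}} = 0 for each i
for i in range(3):
    assert sp.cancel(sum(pb(x[j], pb(x[j], x[i])) for j in range(3))) == 0
```

The same check with the plain bracket (no square root of g factor) would fail in these coordinates, which is exactly why the reparametrization to unit-Jacobian variables matters.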
For a Poisson bracket where u tilde and v tilde bracket to one, you get that these x's now solve this equation. In terms of u and v they didn't solve it, they solved the other equation, but if we reparametrize this way they solve the equation we want to solve. So can we make use of this in the non-commutative setting? Let's consider the case of the catenoid embedded in R3. How can we do this? Okay, so let us recall the catenoid. We parametrize the catenoid in R3, for instance in the following way, with hyperbolic cosines and trigonometric functions, and we introduce w to be the complex combination of these, which is just equal to that. Now we can explicitly do the reparametrization we talked about on the previous slide, by setting u tilde to be u, and v tilde to be this function of v alone; you easily check that you get the Jacobian to be the correct one. And now you have w and z being functions of u tilde and v tilde, with these functions inside here, and of course one would have to invert this up here to get that. So what have we obtained? Well, what we have obtained is a parametrization in terms of the variables u tilde and v tilde which have Poisson bracket one; that's the main difference. In the other case we didn't know what the Poisson bracket of u and v was, or maybe it was one over square root of g, and we maybe don't know what to do with that in the non-commutative setting. But in this setting u tilde and v tilde have Poisson bracket one, which sort of makes them easy to quantize in some sense. So now we want to take these expressions and just replace u tilde and v tilde by operators that commute to one, more or less, in analogy with standard representations of operators which commute to one, and in this case we let them act on, sorry, on the Fourier modes on the circle, like this. This is really just a motivation
for how we now make the ansatz to solve the equations. We see that this e to the i u tilde shifts just one step, and this v tilde is diagonal. So we make the following ansatz for w and z: w is something, whatever, shifting one step, and z is a diagonal operator, corresponding to these functions; the e to the i u tilde would just shift one step, and whatever this is will be diagonal. So that's the ansatz, and this z will just be diagonal. Now remember that w is x1 plus i x2, and z is x3, so we want these three operators to satisfy the double commutator equations. Now we just plug things in, and you notice that these equations, in terms of w and z, just become these two equations here, as written. Then you plug in the ansatz, and as you expect, the equations you get will be equations for these w n and z n which define our operators. We write them in terms of the modulus squared of w n, which we call r n, and then z n, which is the z n from the previous page. So if we think of w and z as operators on the doubly infinite Fock space, with states labeled by an integer going from minus infinity to infinity, these are the recursion relations we get for the action of w and z on the different states. And we can solve these recursion relations. We immediately note here that this is just this equation shifted one step, so this combination here will be constant for all n; that's what we call c. So we see that immediately, and now you can just solve the recursion relation by picking some initial values and then doing the recursion in both directions. The only slightly non-trivial thing is that we have to keep r n positive, in order to solve for w n at the end, so we have to prove that these recursion relations preserve positivity. And they do, for a specific set of initial conditions: r 0 is positive, you choose some c,
which was this constant combination we noted on the last slide, and you choose r 1 to be between r 0 and r 0 plus something constructed out of c and r 0. So you have a range in which you are allowed to choose r 1 to get a solution, and this is to guarantee positivity of r. Then you simply do the forward recursion for n greater than or equal to 2, or the backward recursion for n less than or equal to minus 1, and you get everything in terms of r 0 and r 1; you can do the same thing for z here, and you get all the z's. So given the initial conditions you get a unique solution, and this solution satisfies that the r n's are positive, and you can define the following operators. You can always include a phase here, but it's not important; by a unitary transformation you can remove it, so there is nothing there. So in principle you get these operators now. Remember, from our general ansatz for the double commutator equations, the catenoid was just a motivation to get the ansatz; the ansatz has, from the look of it, really nothing to do with the catenoid, it's just some ansatz motivated by it, and we get the solutions we have down here. Now, how do these solutions look? Do they have anything to do with the classical functions defining the catenoid? Well, they do, somehow. You can just plot this recursion relation for some initial values: this is r n plotted here, which is more or less the square of w n, and here is z n going like this. If you plot the corresponding functions, which I didn't do here, you get a quite nice match of the behavior. So really, r n and z n seem to be more or less discretizations of the functions of the catenoid, although that was not really how they were constructed; it turns out that the solutions are exactly like that. And in the things we are writing up now there are more interesting relations between the initial conditions and the parameters when
you define the catenoid. You can shift the phase a little bit, you can multiply everything by a constant, you can do all these kinds of things, and they seem to have a corresponding notion in the non-commutative world. It is also interesting to note that there is some choice here, of r 1 between r 0 and this value, which seems to be some sort of non-commutative effect, that you are allowed to choose some non-commutativity in this thing; because in the classical case this is a symmetric function here, and here it is not symmetric, but for certain values of the initial conditions you get something symmetric. Okay, so before I summarize: what did we do? Well, we looked at a catenoid in parameters which did not really correspond to the equations we want to solve; we looked at how to do a coordinate transformation in the classical case, and we thought about whether we can implement this in the non-commutative case by this procedure. And the catenoid is not the only example: it seems like we can do the same kind of reasoning to obtain solutions to the double commutator equations even for other minimal surfaces. So to some extent this helps us to find solutions. Just to summarize, I have tried to give an overview of what we've done: different approaches to non-commutative, discrete, quantum minimal surfaces. These equations are motivated both from physics, as we've seen, but also from mathematics, because I think it is an interesting question to what extent these objects exist and have nice properties in the non-commutative world. In particular I presented two different ways, one in terms of the Weyl algebra and the other in terms of just the x's, to obtain equations for minimal surfaces, and one may find many solutions, and in the Weyl algebra one can even, in some cases, classify all solutions, as with the Weierstrass
representation. So you have infinitely many explicit non-commutative minimal surfaces that you can write down. And at the end I tried to present an idea of how to connect these a little bit, how to go from one to the other: is it possible in the non-commutative setting? It seems like yes, in some cases you can actually do this, to obtain non-commutative minimal surfaces which solve the more physically relevant equations. And that was it, thank you very much. Yeah, if you go back to this slide, when you choose the complex structure w... no, no, after that, after that, yeah. Is it reminiscent of self-duality as in the Yang-Mills case, I mean, can we interpret this z, w, w dagger as some self-dual equation? I'm sorry, I'm not a physicist, so I don't have a good answer. But after the choice of this w equal to x1 plus i x2, it is after the choice of the complex structure... Sorry, no, the answer is no: this is just rewriting the equation, it is not self-dual, it is not something like that. Yes, the question here: in the case of three Hermitian matrices, precisely this equation, how close are you to having a complete understanding of all solutions of these things? All solutions of these equations, or the catenoid case? With three matrices, and maybe with a mass on the right-hand side, I don't know; with three matrices, for this equation, no, not in general. I mean, for the corresponding equation in the Weyl algebra there is some sort of complete understanding, but here, no. Although, morally speaking, all these solutions, all these minimal surfaces, should be lurking around in here somehow. Yes, I have two questions. You said you were taking a field, a Weyl field; I'm not an expert, so is this just the Weyl algebra localized with respect to some multiplicative set? I mean, it is the total fraction field: for the Weyl algebra you can construct a total fraction field. Yeah, so you just have quotients of polynomials.
You can show that the Ore conditions are satisfied, so you can localize the complete algebra, and by this Ore localization you can construct the quotient field. Second question: have you thought of taking some other nice convergent subalgebra of a star product algebra? For instance on the sphere there are explicit star products, and on the upper half plane there are explicit star products which converge on some class of functions. To construct solutions, you mean? I mean, of course the Weyl algebra is the simplest choice. Well, you mean having an algebra still keeping u commutator v equal to one, but allowing for more functions in there apart from polynomials? No, the commutator would sometimes be... right, right, yeah, some other algebra; I mean, you think of the sphere, where you have three functions and relations like that, yeah. Yes, I mean, maybe you would have to go to S3; for the ordinary sphere maybe you don't get so much of interest, because it is already a two-dimensional manifold, so you would have minimal surfaces in the two-sphere, and I don't really know what that would mean. But you can imagine introducing more elements into the Weyl algebra, for instance. I know Stefan Waldmann constructed a much larger algebra, including a lot more smooth elements, not just the polynomials; of course you can work in such an algebra also in this case. If you want to define the catenoid in the Weyl algebra, of course you cannot directly do it, because you don't have access to the cosh of u, but in this much larger algebra you could imagine doing it. Yeah, for my culture: if I'm interested in the quantum string, is there a
link between quantum solutions of the string and what you would do here, in a Lorentzian manifold, for the string worldsheet? A connection with the minimal surfaces? Yes, I mean, the classical equation for the string is a minimal surface equation, right. Right, but if I'm interested in quantum strings, real quantum strings? Yeah, you should not really ask me, I'm not a physicist, but one of the points was of course to try to provide explicit solutions to these classical equations, to try to understand, perhaps, more of the quantum version of the strings and so on; but this I don't know much about. Thank you. Thank you.