All right, okay. Welcome, everyone. My name is Alejandro Cardenas Anandano, and I will be today's Latin American webinar host. I'm so pleased to be with you today and to have the chance to introduce our guest speaker, Edgardo Cheb-Terrab. Hold on a second, I was having some feedback, okay, sorry. So it's a pleasure to introduce Edgardo Cheb-Terrab from Maplesoft. He did his bachelor's in physics at the Universidade do Estado do Rio de Janeiro. Then he did a master's in theoretical physics at the Centro Brasileiro de Pesquisas Físicas, and then he got his PhD in theoretical physics at the Universidade Federal do Rio de Janeiro. Then he did a couple of postdocs at the University of Waterloo, Simon Fraser University, and the University of British Columbia. He was the head of the theoretical physics department at the State University of Rio de Janeiro until 1996, when he moved to Canada to start working for Maplesoft, and he was also a sort of adjunct professor at the University of Waterloo, the University of British Columbia, and Simon Fraser until 2005, when he became a full staff member of Maplesoft. Since then, he and his students have developed the algorithms and Maple libraries for the ODE and PDE solvers, the mathematical special functions, and specifically also the Physics package that Maple uses. Today, he will talk about applying the power of computer algebra to theoretical physics using Maple. So please join me in welcoming Dr. Cheb-Terrab. Okay, so I'm sharing the screen now. Hello, my name is Cheb-Terrab, and I do the physics, differential equations, and mathematical functions software of the Maple system. Today, I will talk about the Physics project at Maplesoft and the resulting Physics package, with some examples in classical field theory, quantum mechanics, and general relativity. Regarding the level, I will try to start with something that is simpler and then go into something a bit more complicated. So first of all: why computer algebra? The reasons are several.
First of all, we can concentrate more on the ideas instead of the algebraic manipulations. This is relevant; I remember my PhD thesis, more or less 80 pages of algebraic computations by hand on quantization in supersymmetry. We did the computation two times by hand, of course. And in fact, the ideas were just one or two, and there was a large number of computations. So with computer algebra we can move away completely from that paradigm into something else, which is: focus on the ideas and let the computer do the heavy algebra. The second thing is that we can extend results easily. For instance, during the month that the thesis committee was analyzing my thesis, I was able to extend the results, now using computer algebra, to produce two new papers with problems much more complicated than the ones I tackled for my thesis. We can explore the mathematics surrounding a problem. For instance, if it involves differential equations, we can explore different solutions for the differential equations using symmetries; if there are special functions involved, we can plot, compute series, see the properties, et cetera. And, not less important, we can share results in a way that is reproducible, so that if we have colleagues, we can do some computation, put some text, send the worksheet that allows our colleague to reproduce the results precisely, change something, send the worksheet back. So we interact with something that is a sort of live computation, instead of paper and pencil or LaTeX, which is dead computation. So if it is so good, why wasn't it used more? And the reason is that computer algebra systems were not built for the typical computation we do in theoretical physics. So there were notation and mathematical methods missing.
For example, coordinate-free representation for vectors or vector differential operators; everything about tensors, covariant distinct from contravariant, the sum rule for repeated indices, functional differentiation, covariant differential operators, not to mention Dirac notation in quantum mechanics. All of this was missing. Another thing is that the paradigm in computer algebra was always too similar to that of a calculator, but with algebra: you input a computation and you receive the result. But when we compute with paper and pencil, we frequently want to represent a computation without actually performing it. For instance, we do computations with integrals that are not computed until the end, after combining them and taking some properties of the integrand, et cetera, that only appear later in the computation. Also, we use a very flexible, hand-like style for doing computations with paper and pencil. And we use what I call the textbook-like notation; it's really close to calligraphy, what we do in theoretical physics with notation. All of this was missing. And finally, some key things of the computational domain. So, for instance, products and differentiation involving commutative, anticommutative and noncommutative operands weren't there. The ability to set custom-defined algebra rules, commutator, anticommutator or bracket rules, and the ability to distinguish and use properties of generic, unitary, or Hermitian quantum operators. Nothing of this was there. Essentially, the Physics project introduced all of these things into the computer algebra system. So I will go now with some examples. And the examples will go over quantum mechanics, classical field theory, general relativity, and something of what's next. First of all, I will do the presentation live. So what you are watching now is indeed a Maple computer algebra worksheet. And in this worksheet there is space for text that can include formulas, as you see here, for instance.
And the way I set this, in order to make it visible: I will use a prompt, and where you see a prompt like here, this triangle, I type things and I press enter and things get executed. Everything that is preceded by a percentage sign is inert. So it represents the operation, and has all of its properties under differentiation, simplification, et cetera, but the operation is not computed. For instance, here I will say that the non-computed integral of cosine of x with respect to x is equal to the computed integral. When I press enter, the output appears and I see an equation label. I can then refer to this equation label by pressing control-L and putting in the number of the equation label; the number gets inserted there, I press enter, and I retrieve the contents of that equation label. I will use this all the time during the presentation. So, starting now with some topics. First of all, in quantum mechanics, we introduced all about Dirac notation. So we have kets and bras, discrete and continuous bases, the scalar product, orthonormalization relations, closure, projectors, quantum operators, eigenvectors, eigenvalues and commutators. This is not a lecture on quantum mechanics, so I won't go into see this, see that. We have it all; I'm just saying all about Dirac notation is there, and let's see it at work directly with something that is not so usually seen in computer algebra, which is the use of this formalism to derive properties of mathematical operators. So consider unitary operators in quantum mechanics. I want to show, for instance, that the eigenvalues of a unitary operator are all on the unit circle, so their modulus is equal to one, and then show that the operator exponential of the imaginary unit times lambda times H, where lambda is a real parameter and H is Hermitian, okay, that this object is unitary. How can I formulate this kind of abstract problem in a computer? Well, first I load the package, and now I indicate to the system that U is a unitary operator.
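As a minimal sketch of the inert mechanism just described, in standard Maple syntax (the `%` prefix marks an operation as inert, and `value` executes it):

```maple
# The %-prefixed integral is inert: it has all the properties of an
# integral (under differentiation, simplification, etc.) but is not computed
eq := %int(cos(x), x) = int(cos(x), x);
# value releases the inert operation, so both sides become sin(x)
value(eq);
```

Every inert output also gets an equation label, which can be recalled later with control-L as described above.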
This Setup command is an emulation of what we do with our own brain. We start by formulating the problem: okay, I will use the letter U to represent a unitary operator. But when I say that, the whole system now knows that U represents a unitary operator. Now, if u subscript epsilon is a normalized eigenvector of U with eigenvalue epsilon, then on the left is the representation of the operation and on the right is the actual contraction, the scalar product of U times the ket. So if the ket is an eigenvector, the result is the eigenvalue times the eigenvector. So I press enter and there it appears: on the left, this is the operation I am computing, equal to, on the right, the operation being computed. Okay, I want to see that the modulus of epsilon is equal to one. So take the value of that equation number four, and now the scalar product of five with four means the scalar product of the left-hand sides of both equations is equal to the scalar product of the right-hand sides. And automatically we get the number one. How does this happen? Well, because I already said that U is unitary, and therefore the system knows how to multiply these things, knows how to take the norm of this vector, and therefore arrives at the result. Show that when H is Hermitian, U is unitary: the problem seems technical; however, for the computer it is simpler than trivial. You say, okay, H is Hermitian, and lambda is a real object. So first construct an equation saying, okay, this is the departure point; take the Dagger, and now just multiply both equations, left-hand sides multiplied equal to the product of the right-hand sides, in one way and we automatically get one; multiply the other way and we also get one; therefore U is unitary. Let's go with some properties. Consider two sets of kets A and B, each of them constituting a complete orthonormal basis of the same space.
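The formulation just described can be sketched roughly as follows, assuming the Physics package syntax (command names as in Maple's Physics; the exact forms shown on screen may differ slightly):

```maple
with(Physics):
Setup(mathematicalnotation = true):
# U is unitary, H is Hermitian, lambda is a real parameter
Setup(unitaryoperators = {U}, hermitianoperators = {H},
      realobjects = {lambda}):
# A normalized eigenket of U with eigenvalue epsilon
eq := U . Ket(u, epsilon) = epsilon * Ket(u, epsilon);
# Taking the scalar product of the equation with its Dagger gives
# |epsilon|^2 = 1, because the system knows U is unitary and the
# ket is normalized
Dagger(eq) . eq;
# exp(I*lambda*H) is unitary: its product with its Dagger gives 1
U0 := exp(I * lambda * H):
Simplify(U0 . Dagger(U0));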
Verify that the operator U constructed in this way maps one basis into the other, show that U is unitary, and show that the matrix elements of U are equal in both bases. Start with the first one: load the package. So, okay, now I set up U as a quantum operator and indicate bracket rules that tell that A conforms a complete orthonormal basis and B the same; these are discrete bases. And now I construct, this is the departure point, U is this operator. I want to check that this operator maps one basis into the other. So just take the operator and multiply by A. Here I'm using a handy trick, which consists of delaying the operation one level of evaluation by enclosing it with quotes, unevaluation quotes. So this is the operation I want to perform. Now unleash the computation. On the left-hand side, we have U applied to A. On the right-hand side, this sum gets scalar-multiplied with this ket and we get just that result; the system knows how to perform this operation. So we see that U scalar-product A maps into B, so this operator maps one basis into the other. Show that the operator is unitary. Well, now that we have some confidence in how it works, it's easy. Take the Dagger and multiply the two equations: left-hand sides multiplied equal to the product of the right-hand sides, in this order, and we get this; multiply the other way and we get this; and since these two bases are complete, this is just the identity operator, and therefore U is unitary. Show that the matrix elements in the two bases are equal. Okay, recall the definition of U; this is it. Okay, just compute the matrix element. The matrix element means bra times the operator times ket. And I use the facilities that a computer provides: I can multiply the whole equation on the left and on the right. So I am delaying the operation with these quotes; this is the operation I want to compute. Now just unleash the operation and I get that thing. Likewise, multiplying with the B basis.
This is the operation I want to perform; unleash the operation and I get that thing. Since the right-hand sides are the same, the left-hand sides are also the same, and so the matrix elements of U in the two bases are equal. Okay, the worksheet contains much more material than what I can present in 40 to 45 minutes. So I left a bunch of sections here, for instance, the Schrödinger equation and unitary transformations; it's a very nice problem. Translation operators, using Dirac notation also. But all these become more tedious at the time of presentation. So it's there; the worksheet can be downloaded after the talk, and whoever is interested can open these sections that were not presented. All of those that have an asterisk here are presented, and those which have no asterisk are there, but to be checked afterwards. So let's go with something more involved. Check that the commutation rules for the components of angular momentum satisfy this well-known relation that we study in the first undergrad quantum mechanics course. So we start; we are going to work using tensor notation. So we will use space indices as lowercase Latin letters. Now define L, R and P as 3D Euclidean space tensors: L, of course, is the angular momentum; R and P are position and momentum. And now indicate that L, R and P are also quantum operators, and indicate the commutation rules: for R and P in terms of the Kronecker delta, and for R with R equal to zero. So this is the setup of my problem. Now we have indicated algebra rules. What this means is that the system will take these algebra rules into account when it makes sense. Because these are quantum operators, they don't commute; we can still take products, but these products don't commute. So they are displayed in a different color to indicate that these are noncommutative objects. We introduce the definition of the components of the angular momentum in terms of position and momentum; this is actually the vector product of R and P.
And the rule that we want to verify is that the commutator between the components of L satisfies this 3D algebra. So, well, all we need to do is to substitute this inside this. And because the system already knows the algebra rules between R and P, now that we have everything expressed in terms of R and P, well, press the button and just tell me, show me the result. So we do the substitution. Note that when we do the substitution, the left-hand side here has the index j, but here it is k; the system takes care of all these things. And with Einstein's convention, an index cannot be repeated more than twice, so the system takes care of index collisions as well. So what we want to prove is that this is true, provided that these algebra rules are satisfied. Okay, if we want it in one go, just press the button and there we have both left-hand side and right-hand side simplified to the same thing. If we want to go in two steps, expand first the commutator, and now this product of Levi-Civita tensors is expressed into sums of products of Kronecker deltas. The Kronecker deltas contract with all these noncommutative operators. These products are noncommutative, you cannot switch the order of the operands, but then, taking into account the algebra rules, both things simplify to just the thing that was mentioned. Okay, let's go with something a bit more involved: quantum commutation rules. What we want is to derive the commutation rules, in the coordinate representation, between an arbitrary function of the coordinates and the related momentum. The arbitrary function is represented by f here. For that we depart from the differential representation of P as this object, with the imaginary unit and Planck's constant. Okay, solution: we load the package. We are going to work using vectors, so we use that.
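The angular momentum verification above can be sketched along these lines, assuming the Physics package syntax (the `SubstituteTensor` and `Setup` option names follow Maple's Physics; minor details may differ from the worksheet):

```maple
with(Physics):
Setup(mathematicalnotation = true, dimension = 3,
      spaceindices = lowercaselatin):
# L, R, P are 3D tensors and also quantum (noncommutative) operators
Define(L[j], R[j], P[j]):
Setup(quantumoperators = {L, R, P}):
# Canonical algebra: [R_j, P_k] = i*hbar*delta_jk; positions commute
# among themselves, and so do momenta
Setup(algebrarules = {
    %Commutator(R[j], P[k]) = I*hbar*KroneckerDelta[j, k],
    %Commutator(R[j], R[k]) = 0,
    %Commutator(P[j], P[k]) = 0}):
# Angular momentum as the vector product of R and P
Ldef := L[j] = LeviCivita[j, k, l] * R[k] * P[l];
# Substitute the definition into the rule to verify and simplify:
# both sides should reduce to the same expression
rule := %Commutator(L[j], L[k]) = I*hbar*LeviCivita[j, k, l]*L[l]:
Simplify(SubstituteTensor(Ldef, rule));
```

The system takes care of raising, lowering, and relabeling repeated indices during the substitution, as described above.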
Then we need to set up the problem, which means all of x, y, z and P are Hermitian operators, and all the position operators commute among themselves. And we indicate now that x, y, z are the differentiation variables of the corresponding differential operators. And we won't tell the form of the operators themselves, nor indicate the commutation rules between the P's, because that's what we want to derive. So, okay, our departure point is this. I will set an alias, which means capital X will represent the sequence x, y, z, so that it is simpler to write. And I will use compact display, which means f of X will now be displayed as just f, in order to avoid redundant display. So my departure point is: take the commutator, and I want to apply it to a ket, a three-dimensional ket. Now, although this is represented as a product, the fact is that P is a differential operator, so at some moment I may want to apply it. For that, we have a special routine that transforms the product of an operator times something into the actual application of the differential operator. So the left-hand side now becomes equal to this: P applied to the ket, minus P applied to the product of f times the ket. Now we define P to be this operator, which is consistent with what we said here. Okay, now expression 38, which is here, becomes this thing. And here it is displayed in gray because it is prefixed by the percentage sign: we use the inert form of Nabla, just to have control step by step and see how it works. So now we are going to execute these operations. We take the value of that, so now the operation is performed, not just represented. And now we want to simplify this thing, and we get this; there is a common factor here that we can factor out. So just factor, now we get this, and we can see that this is just the gradient.
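In standard notation, the steps just performed amount to the following short derivation, with $\vec p = -i\hbar\vec\nabla$ in the coordinate representation:

```latex
[f(\vec X),\,\vec p\,]\,|\psi\rangle
  = -i\hbar\, f\,\vec\nabla\psi \;+\; i\hbar\,\vec\nabla(f\,\psi)
  = i\hbar\,(\vec\nabla f)\,\psi
\quad\Longrightarrow\quad
[f(\vec X),\,\vec p\,] = i\hbar\,\vec\nabla f .
```

The common factor mentioned above is precisely the $i\hbar$ multiplying the gradient of $f$.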
We produce an equation where the left-hand side is the computed gradient, equal, on the right-hand side, to the inert form of the gradient, which is just its representation; substitute one thing into the other. And then we are almost there: just do a formal operation, multiply by the inverse of our ket, and there we are, and we get what we were looking for. I will skip all of these; just a word about this one. A much more involved problem is to show that the hydrogen atom has a hidden SO(4) symmetry. It's a very nice problem and really shows the power of all the software at work. It is rather complicated by hand. I will just formulate it here and indicate that the solution is there, but the experiment itself would take 30 minutes. Basically, you depart from the Hamiltonian, define the angular momentum and the Runge-Lenz vector. This vector is a constant in the classical theory, so in the quantum version the commutator with the Hamiltonian needs to be equal to zero. For that, you need to symmetrize in order to have a Hermitian operator. And then, departing just from the basic commutation rules of R and P, show that L is conserved, Z is conserved, I mean, it's a constant, then determine these commutation rules and show this as well. And finally, introducing a new object like this, you can show that the algebra behind this problem is this one, which is the SO(4) algebra. The whole problem, in four steps, is left there for whoever wants to give it a look. I'm moving into something else which is key in theoretical physics: functional differentiation. Starting with classical field theory, start with the simplest example, lambda phi four. We load the package, set the coordinate system to Cartesian. By default, the metric is the Minkowski metric, which is standard in special relativity; we will use compact display for phi. And now we introduce the Lagrangian density.
We see from the input that this is very similar to what we do with paper and pencil. The output is also very similar to what we see in textbooks. So now we construct the four-dimensional integral of this, which means this is the action. And what we want to compute is the functional derivative of the action with respect to the field, equated to zero, and that is the classical equation of motion for the field: the field equations. So I am delaying the operation using these quotes; now unleash the operation, and there we are with the field equations. These are the Euler-Lagrange equations, of course. Now, it's important to say that this kind of output is not just textbook-looking output; it's alive, it's there. I can show all the components, I can transform the differential equations, I can assign values, I can really work with them. Okay, let's go with something more involved: deriving Maxwell's equations, departing from the four-dimensional action for electrodynamics. So we load the package, set coordinates to Cartesian, define the electromagnetic potential A mu, use compact display for it, and define the electromagnetic field tensor using the standard formula. And again, take the action and the functional derivative of the action, which is this, with respect to the tensor field in this case. So this is in four dimensions; you see the index here is mu, here is nu, here is nu. The operation is delayed with these quotes. So now unleash the operation, and there we are with Maxwell's equations in four dimensions in tensorial notation. But we see that some simplifications can be performed here, taking into account Einstein's sum rule for repeated indices. So just simplify, put a minus sign, divide by four, and we get the standard form of Maxwell's equations in four dimensions. And just to go heavy with this, I will do a last problem in this area, which is to derive the Gross-Pitaevskii equation for the ground state of a quantum system of identical particles.
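The lambda phi four computation above can be sketched as follows, assuming the Physics package syntax (`Intc` builds the 4D integral, `Fundiff` is the functional derivative; the exact Lagrangian normalization shown in the worksheet may differ):

```maple
with(Physics):
Setup(mathematicalnotation = true, coordinates = cartesian):
CompactDisplay(phi(X)):
# Lagrangian density of lambda*phi^4 theory; the Minkowski metric
# is the default, so d_[mu] and d_[~mu] contract covariantly
L := 1/2 * d_[mu](phi(X)) * d_[~mu](phi(X))
     - 1/2 * m^2 * phi(X)^2 - lambda/4 * phi(X)^4;
# The action is the 4D integral of L; the field equation is its
# functional derivative with respect to phi, equated to zero
S := Intc(L, X):
Fundiff(S, phi) = 0;
```

The same two lines, with the electromagnetic field tensor in place of L, produce the Maxwell equations derivation described next.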
This problem, this description, is relevant for Bose-Einstein condensates. The steps are the same: construct the Lagrangian, minimize the action equating to zero. And I chose to show this one rather than the other ones because it illustrates as well another thing, which is that we can compute with vectorial differential operators, also under functional differentiation. So this is the energy for the system: psi is the field representing the problem, the quartic term is the atom-atom interaction, V is an external potential. We set the real objects of the problem, and the Lagrangian density, in terms of the energy according to the standard textbook formulas, is given by that. So in fact the whole problem here is to compute this functional derivative. How do you do it? There is the conjugate, there is the time derivative of psi here, there is the gradient of psi, the norm squared, there is a fourth-power term; how do you do it? Okay, just unleash the operation, and here is the functional derivative performed, which means this is the field equation for the problem; so it's the Gross-Pitaevskii equation. Now, here we see that there is a Laplacian in disguise, so construct an equation with the computed Laplacian equal to the representation of it. Now simplify this equation taking into account that equation. What we get is this form; essentially all of that got compacted into this, appearing here. Now just isolate the time derivative, which is here. Okay, isolated, and after simple algebraic manipulation we get the standard form of the Gross-Pitaevskii equation. I would go on, I don't have time, but it's there for whoever wants to see it: a very nice Physical Review paper on topological structures in magnetostatic traps. All the steps are there. This is also a 20-minute presentation; I will skip it, but it's there, it's a very nice problem. And now I want to switch to general relativity. First of all, I want to talk about the database of solutions to Einstein's equations.
So in the Physics package at Maplesoft, not only do we have commands for general relativity, but we added a whole database of solutions to Einstein's equations. And for that, we took the main reference for this, which is the book by Stephani and collaborators; he was the main guy of the project. The authors have read more than 4,000 papers containing solutions to Einstein's equations in the literature, organized the material into chapters, classified it according to the properties, removed the redundant solutions presented as different when they are not, resulting in a fantastic work, but for the fact that this is actual paper and ink. So you need to read and read, you need to copy by hand, and the equations are not alive. Yes, this is paper, it's actual paper. What we have done is digitize the whole book into Maple; we finished in the year 2016. So it is now possible to actually compute with these exact solutions directly. Everything is alive, the whole book. I will start with some examples, starting with very trivial things; okay, anyway, take the Schwarzschild solution. So in order to load the solution, all that you do is to give some portion of the keyword; here, Schwarzschild, okay, sc. And the system knows, okay, this is the only match that counts with sc. So it sets the coordinates to spherical coordinates, and this is the metric. And that's all we do. All the general relativity tensors are automatically computed in the background. So, for instance, this is the Christoffel symbols definition. If I raise the first index and give it the value one, this is actually a matrix, and I can see the components. This is the definition of the Riemann tensor. These are the Riemann invariants. These are the Weyl scalar definitions, expressed in terms of the Weyl tensor and the null vectors of the Newman-Penrose formalism. This is the value of the Weyl scalars. I can define a tensor and compute the Killing vectors.
So it's a tensor with one index, and it tells me that for this metric, all this has been computed live now, on the fly: the Riemann invariants, the Weyl scalars, or the Killing vectors. So, for instance, let's check. If this is a Killing vector, take for instance the second one, which means this one here; if this is true, then the symmetrized covariant derivative of the Killing vector should be equal to zero. So compute the derivative, we get that thing; compute the components of that thing, it gives this; okay, simplify, and we see: indeed, all the components of this tensorial expression are equal to zero. Compute the geodesics; it gives this, using compact notation. Okay, give me the differential equations behind this abstract notation, the actual differential equations. This is a heavily nonlinear problem. Okay, but it happens that on the equator, so at theta equal to Pi divided by two, if we evaluate this system at theta equal to Pi over two, we get a system of equations that is not so terrible. So we can ask the differential equation solver of the system to actually solve it. It takes more or less 10 seconds on this computer, and there we are. I just asked for the first two solutions. It's saying, okay, there is one solution with this value of r and this value of phi, and then t as a function of r and phi is here. This is the second solution, same thing, and t is the integral of this; this is r, this is the tau derivative of phi, and so on. One can query the database of solutions. For instance, this gives me all the solutions that involve the mathematician Levi-Civita. I just get the indices: the first number refers to the chapter in the book, then the equation, then the case. So in order to load, for instance, the first equation, I just indicate something like this and I get the equation loaded. Or I can search visually by properties; for that, I launch a metric search. For instance, I want to search for pure radiation.
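The Schwarzschild workflow described above can be sketched as follows, assuming the Physics package syntax (loading a metric from the database via an index on `g_`, as in Maple's Physics; the keyword options on the tensors are approximate):

```maple
with(Physics):
Setup(mathematicalnotation = true):
# Load the Schwarzschild solution from the database: "sc" matches the
# Schwarzschild keyword; coordinates are set to spherical and the
# spacetime metric g_ is set automatically
g_[sc];
# All general relativity tensors are then computed on the fly
Christoffel[~1, mu, nu, matrix];   # Christoffel symbols, first index raised
Riemann[mu, nu, alpha, beta];      # Riemann tensor definition and values
Weyl[scalars];                     # Weyl scalars (Newman-Penrose formalism)
# Killing vectors: define a tensor V and solve the Killing equations
KillingVectors(V);
```

A database query like the Levi-Civita search mentioned above returns triplets of indices (chapter, equation, case), and `g_[[chapter, equation, case]]` loads the corresponding solution.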
The letters, I believe, are too small here, but anyway: Petrov type D, Plebanski-Petrov type O, isometry dimension one, search, and it tells me that there is one solution, 28.74.1. Close. So I load the solution just by putting this, and there we have the solution, alive. I can compute the Petrov type of the case, it is D, and the Plebanski-Petrov (Segre) type is also O. And how many solutions are digitized? This is the whole bunch of indices; count them, there are 991. I will skip and go directly to here. So I will tackle now the computations of a Physical Review paper in the area of general relativity, from the end of the year 2013. And I will skip here the physics of the problem; I'm interested in how the computations of this paper can be performed in a computer algebra system, and whether or not this is advantageous. So the paper proposes this metric that depends on two functions, lambda and nu, and first wants to compute the trace of this expression, where phi is some function of the radial coordinate, R is the Ricci tensor, these are the covariant derivatives of general relativity, and this is the energy-momentum tensor, given by this expression that involves lambda here and nu here. The second thing is to compute the traceless part of Z. So in the first item we compute the trace, in the second item we compute the traceless part. And then we want to compute an exact solution to the system of differential equations given by the traceless part of Z. So first of all, we load the package as usual, set the coordinates to spherical, and now we introduce the line element. It's given by this; we want compact display for the functions lambda and nu, in order to avoid redundant display of their functionality. And we set the metric; and there it is, the metric is the one we wanted, the functionalities of lambda and nu are omitted in the display but are there.
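Setting a metric from a line element, as just described, can be sketched like this, assuming the Physics package syntax (the coordinate names and differentials follow the `spherical` setting; the exponents on the unknown functions are an assumption for a static, spherically symmetric ansatz):

```maple
with(Physics):
Setup(mathematicalnotation = true, coordinates = spherical):
# Two unknown functions of the radial coordinate, displayed compactly
CompactDisplay(lambda(r), nu(r)):
# Static, spherically symmetric line element
ds2 := exp(lambda(r))*dr^2 + r^2*dtheta^2
       + r^2*sin(theta)^2*dphi^2 - exp(nu(r))*dt^2;
# Make it the spacetime metric; all GR tensors now use it
Setup(metric = ds2):
g_[];
```

From here on, tensors such as the Ricci tensor and covariant derivatives automatically carry the unknown functions lambda and nu.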
Now we indicate the energy-momentum tensor as indicated in the paper. And now we define this Z to be a tensor. We want compact display for phi, and now we enter the expression we want to compute: so Z mu nu equals all this thing on the right-hand side. Okay, we define this to be a tensor, so the definition for the tensor is set, and the answer to the first item, the trace of Z, is just this. This is nice: if you want to show it in standard Maple notation you see all of this, and now we see the advantage of not showing the functionality and showing derivatives as indexed, which is much more compact; this long line becomes this, discarding in fact a lot of redundancy in the display. Okay, next item: compute the traceless part of Z. For that we define a tensor, Z double mu nu, which is the traceless part: by definition, it is the object minus the trace multiplied by the metric, in this case divided by four, which is the dimension of spacetime. So now we define it, and the components of Z double are just this, in this very compact notation; for the computer it takes zero time to do this computation. And now we want to go with something that wasn't there in the paper: the paper doesn't present the computation of the solutions of these equations. In fact, the ODE system given by the non-zero components of Z double is this. Before tackling it, since this is a heavily nonlinear system, I want to analyze the system. So I run a differential elimination process, and the computer is telling me: well, your system of equations splits into three cases, according to the values of three pivots, p1, p2, and p3. So here is p1, here is p2, and here is p3; actually there is a p4 in the story. So if p1 is equal to zero, it is case three, but if it is different from zero we are here, and then it splits according to p2.
If p2 is equal to zero it goes here; otherwise you have case one. But if it is equal to zero, then there is a solution where only p3 is different from zero and one where only p4 is different from zero, and vice versa: case two. So indeed the number of cases is just three; let me give a look at the size. The first one is big in size, the second one is not that big, and the third one is simple. Okay, take the third one. This is a nonlinear system: okay, this part is linear, but this equation here is nonlinear, as we can see. In fact, the first equation is just a constraint. So split the problem into a constraint and a subsystem: this is pure differential equations, and this is a constraint between lambda and phi. The problem has three unknowns: phi, lambda and nu. So compute a solution for the subsystem; it comes pretty fast, and it tells me that this is the solution. Okay, now we can specialize one of the integration constants that appear here by using the constraint. So evaluate the constraint at the solution, which isolates one of the constants, then substitute this value into the solution, and we have the solution to the problem, which is not shown in the paper. And indeed, verify that this solves the system, and it does. And now I am at 37 minutes, so I will start winding down. I want to tell in a couple of minutes about the Physics project and what's next. So, summarizing: Physics is a software project. It started at Maplesoft in 2006, but in truth it started in Rio de Janeiro, in Brazil, when I was the head of the theoretical physics department, in '95, before coming to Canada; this was my research project there. Then I came to Canada, started working with the differential equations, and ten years after I decided to restart this project, now directly for Maplesoft. The idea is to develop a complete, or as complete as possible, computational symbolic-numeric environment specifically for physics. Very important: we target the educational and research needs on equal footing.
Also important: we don't want Fortran-ish input and Fortran-ish output. We want a flexible style of computation, as much as possible like what we use with paper and pencil and what we see in textbooks. Second thing to mention here: the package is growing every year, some years doubling in size. It now includes a searchable database of solutions to Einstein's equations, with all the solutions alive, and, not mentioned in this presentation, there is a dedicated programming language for physics that comes with the package. It has more or less 200 programming routines, so it allows anyone to program in physics; and these are actually the routines used to construct the package. The third thing I want to mention, which is very important, is that the Physics package is updated every week, and the update is distributed on the web; not by me, actually, it is by me but working for Maplesoft, so it is distributed by Maplesoft. The updates include new developments and lots of bug fixes here and there, whatever people report, and the novelties are included. So it's not that people need to wait for one year: typically we receive people's feedback and, when possible, we implement it right away and it's there for everyone. So what is next? Next is the Standard Model package, which aims at allowing for the reproduction of the typical computations we do with the Standard Model. I put here a reference; what we have in Maple 2018 is a first version of the package. The Standard Model has SU(2) indices, SU(3) indices, the gauge tensors of all the interactions present, which essentially is everything but gravitation. There are the Gell-Mann matrices, Pauli matrices, Dirac matrices, the electroweak and strong charges, the electromagnetic field strength, the gluon field, the gluon field strength, the Higgs boson; everything is there, and it's a work in progress. We expect to have it finished by the end of this year.
Then the scattering matrix and Feynman diagrams. We do have a lot about this, but there are restrictions. The scattering matrix at this moment can be computed up to three vertices, which is good to cover, for instance, what you have in textbooks, but not the larger computations people are doing in research. The drawing of the diagrams needs to be redesigned: it works very well up to two loops, which covers, again, basically what you see in textbooks, but if you want more loops we need a more powerful diagram drawing. And the Feynman integrals themselves: at this point they are represented, but the integration is not performed, and we want to perform it; that is necessary if you want to do dimensional regularization and renormalization. The goal is to allow for computing perturbative expansions in field theory with no restrictions, including renormalization and the like. So, just as basic stuff: we set the coordinates, consider for instance this interaction Lagrangian, with a phi^4 and an eta^3 term, and we want to compute the Feynman diagrams of this up to order one, which is tree level. Okay, these are the normal products; if you want the diagrams, the two diagrams up to tree level are those, with the symmetry factors. Okay, now we want to go with two vertices, and show me the final expression, the algebraic form of the expression, and the diagrams themselves. There we are, up to two loops, with the symmetry factors, et cetera, et cetera. And this is the final form in terms of normal products and so on. So we do have this, but what we want to do this year is to extend it, remove the restrictions: no more just up to three vertices, and, more complicated, I'm not sure we will, but we are targeting being able to draw diagrams with as many loops as possible. Okay, finally, we are entering into numerical relativity. Numerical relativity: current restrictions and what is under development.
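A sketch of the kind of session described above; the Lagrangian and the coupling names are my own illustration, and the option names are as I recall them from the Physics:-FeynmanDiagrams help page, so they may vary between releases:

```maple
with(Physics):
Setup(mathematicalnotation = true, coordinates = X):
# Hypothetical interaction Lagrangian with a phi^4 and an eta^3 term
L := lambda*phi(X)^4 + kappa*eta(X)^3:
FeynmanDiagrams(L, numberofvertices = 1);            # tree level, as normal products
FeynmanDiagrams(L, numberofvertices = 2, diagrams);  # two vertices, drawing the diagrams
```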
Okay, it's actually much more than under development: we have a whole new package within Physics. Let me just show. So we have at this moment a package that essentially does this; there are two packages. One is the ThreePlusOne package, which has all the three-dimensional versions of the commands to work with the ADM formalism. This ThreePlusOne package is finished and allows one to reproduce most of the symbolic part of the computations; this is chapter three, but the current version is beyond that. This textbook is Numerical Relativity; there are a bunch of symbolic computations there, and those can be covered by this package. And then there is what people use to perform the numerical simulations, which is the Cactus software. It's software developed by the community, by universities, basically interfaced through the Einstein Toolkit. But it's a very complicated package; it's difficult to use: you need to write the C programs that are used to run the simulation. What we are doing with this Physics Cactus package is, within a Maple worksheet, it takes the differential equations, written in the Maple system, for those metrics with those energy-momentum tensors, which is a standard symbolic task, and writes for us the C programs that are used to run the simulation. And if we finish the program as we want, we will have an applet that not only writes the C programs but lets you press a button and run the simulation, because Cactus can be installed on your own computer. And that's all. So I'm closing here, passing the microphone. Okay, thank you very much for this great webinar; we're very happy to have you. Hold on, I'm getting my audio back, got it. Okay, so just to remind everybody that you can ask questions using Twitter or our YouTube channel, and then we will read the questions.
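A minimal sketch of the ThreePlusOne side of this, with command names as in the Physics:-ThreePlusOne subpackage; the choice of the Schwarzschild metric is just my example:

```maple
with(Physics): with(Physics[ThreePlusOne]):
g_[sc]:               # load the Schwarzschild metric from the database of solutions
Lapse;                # the ADM lapse alpha for this metric
Shift[];              # the shift vector components
ExtrinsicCurvature[]; # the extrinsic curvature K[i, j] of the 3+1 split
```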
Also, you will be able to download the worksheet Edgardo used during his presentation from our webpage. Okay, so shall we start with questions from the other coordinators? No? Then I have a question for you, Edgardo. I was wondering how you included, for instance, all the part related to group theory, because for quantum mechanics or particle physics it is essential to deal with groups, the standard groups, let's say SU(3) or whatever, but also with non-typical groups. Is it possible, for instance, that I can define the algebra of my group and then deal with it, without assuming that it's a special group or something like that? Suppose there were no Physics package: there is still the Maple system. The Maple system comes with differential equation solvers, a whole group theory package full of commands, integrators, simplifiers, et cetera. What the Physics package does is bring all that was missing; this is essentially the first slide of the presentation. Those things are not normally present in computer algebra systems; in fact, Maple is the only system that now includes these mathematical objects to compute with, and then it uses what is already in the system. And there is already a full-featured package for group theory, so you can define any of these groups, the properties are there, and they can be used right away. That's the answer. Yeah, and another question: you made your presentation thinking of particle physics, quantum mechanics and general relativity, but is it also in the planning, or already included in the Physics package, to have, for instance, tools for people working on solid state physics or fluid mechanics? They also have to deal with differential equations, but sometimes it's easier if the tool is already pre-cooked and they can just apply it. There are two levels. Okay, first of all, I don't know if I mentioned this.
The reference for this project is the Landau collection on theoretical physics. That of course includes statistical physics, and solid state physics is all around in Landau's work. So I can tell you there are two answers to this. The first one is: all the mathematical objects that enter these computations are there, and the operations between them. Can I perform them? The answer is yes, indeed. Now, for some things you would prefer what you could call pre-cooked formulas. For instance, in statistical physics there are many things you would prefer: there is this object that we use frequently and we want a formula for it. That thing is not there, but it can be programmed easily, either by a user or by us, extending the package. Now, what I've been doing here is extending the package in the direction of requests. So when people come and say, look, I want to do this problem: it is not in 3D, it is actually an infinite-dimensional problem; I want to work with tensors in this way because they represent many particles; this, this, that; it is a problem in solid state physics. Okay, you can set up the problem and it works. But as said, there are no specific tools for that, unless people start requesting those tools, these pre-cooked solutions. So, for example, there is no tool to compute the Lagrange equations, you know, the equations of motion? No, there is not, but you construct the Lagrangian, take the functional derivative, and get all the equations at once. Yeah. To follow up there: how can people request this type of thing? Okay, there are two ways to request. The first one is the easiest: write to physics@maplesoft.com. Okay, that's me; it will pop up here right away. That's one. And the other one is a blog, a Maple blog called MaplePrimes, where Maple users post questions and insights and requests, or, you know, point out things that are not working as you would expect, maybe because you're missing something, maybe because there is something that needs to be adjusted.
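That remark about constructing the Lagrangian and taking the functional derivative to get the equations of motion at once can be sketched like this; the scalar-field Lagrangian is my own example, and Fundiff and Intc are the Physics commands for functional differentiation and integration over all space, used here as I recall them from the help pages:

```maple
with(Physics):
Setup(mathematicalnotation = true):
# Illustrative Lagrangian density for a scalar field phi(x, t)
L := 1/2*diff(phi(x, t), t)^2 - 1/2*diff(phi(x, t), x)^2 - lambda*phi(x, t)^4:
# Functional derivative of the action with respect to phi(y, tau):
# this returns the Euler-Lagrange equation of motion in one step
EL := Fundiff(Intc(L, x, t), phi(y, tau)) = 0;
```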
So it's either writing to physics@maplesoft.com or posting on the MaplePrimes blog; anyone can post. Thank you. We have a question from our YouTube channel, from William Torres Bobadilla, and he's asking: with respect to available packages, what is the performance of your routines compared with FeynCalc, QGRAF, et cetera? Yes, that's one question; I can read it here. The answer is this. I mentioned that in the incarnation you have in Maple 2018, which is the latest version of Maple, there are restrictions: you have the scattering matrix, which you can compute with up to three vertices, which essentially means two loops and not more than that. So in that sense, FeynCalc can compute much more than that, and also at much higher orders. That's the first part of the answer. But that's what I was saying about what is next: what we intend for this year is to remove that restriction entirely, and the prototype, which is actually there in Maple 2018 but not yet exposed, is able to compute almost at the speed of a C program, which is much faster than FeynCalc at higher orders. If we finish what our plans are, this will be in the new release, Maple 2019. Great. Do we have any other question from here? Yeah, I have one more; I mean, I have lots more, but let's see for the moment. I was wondering: this Physics package is already included in Maple? In the sense that, if people or a university purchase Maple, for sure they will also get this package? Yes, let me give you a comment here. This Physics package has so much presence on the web, with this story of the updates every week, et cetera, that people tend to think it is something separate from Maple. No, it's not: I work for Maplesoft, this is a Maplesoft package, it's exactly part of the system.
And in fact, part of the package is now used by other parts of the Maple library, because of the ability to compute with noncommutative objects. So indeed, this is an official Maple package; it comes with a copy of Maple right away. Everything I showed here works without adding anything to the software. Also, in some sense, the nice thing about this package being part of Maple itself is that there is full compatibility with Maple, because I know, for instance, of other software where the packages are not part of the official software; it's just people in the community building stuff, and each time there is a new version of the software it's a nightmare for them. This is part of the Maple system, and therefore it's compatible at every single release, because it just comes, actually with much more, at every release. And it's also maintained by the company, because when you have different packages made at different universities, well, you can't ask people to keep maintaining the package when problems appear or more functionality is needed. These two things, fixing problems and adding functionality on request, according to the use people are making of the package, are key, and they only happen because it is a Maplesoft package. And it avoids the other situation, where one part was developed here, another part was developed at another university, repeating a lot of the previous one, and in a third university the guy wrote everything from scratch. No, no: it's all written in a single place, using the same routines, at a professional level, everything working in sync together; there is no part that doesn't play properly with another part. Okay. I have a question, just for our broad public; it might sound a little bit personal, but could you please tell us your story? How did you get from being the head of a physics department to working for a software company? Okay, okay.
Well, two parts to the question. First, something like this happens in unexpected ways; I wasn't planning this at all. I was in Brazil and said, well, after several years, being head of the department is a lot of headaches, because you need to deal with difficulties; now, who wants to do that? We want to do physics, it's fun. Okay. So I said to my wife, okay, how about we go spend one year at the University of Waterloo? These are the guys who created Maple, actually. I applied without expecting anything specific, and they received the application, you know, like this: come, come. And this is because we had written at that time a program to solve partial differential equations using Maple, essentially to tackle the Hamilton-Jacobi equation in classical mechanics. I was surprised that computer algebra systems didn't know about PDEs; this was weak. So we wrote that, and then the guys in Waterloo said come, and I said, okay, I'm going for one year; and then they said stay, and I stayed six months more, but Waterloo was too cold and I didn't want to stay there. And then people from mathematics at UBC in Vancouver, where I am now, said, oh, how about, before you return to Brazil, you come for a year here to UBC, here it is warmer. And then we came here, and then people from Simon Fraser said, okay, how about you stay? Fifteen years? I said, no, I need to return. But by 2007, my wife and I, we had three kids at that time, and we said, okay, you know, we are kind of staying; okay, we stay. That's how it happened, and in 2007 the company said, oh, this is okay, but we want you to work full-time for us, not just part-time with the other part-time at the university. It has advantages and also disadvantages: I miss the people, the environment, you know, universities are a very nice environment. But on the other hand, it's a lot of fun. So that's how it happened. Okay, thank you very much.
We are a little bit over time; maybe we have time for a last question, let me check the chat. Well, yeah, there's a last question; you can read it in the chat, Edgardo, from Nicholas. You said that Maple can compute two-loop matrix elements; which process in particular were you referring to? Well, because in the system you can represent everything, a scalar field, a spinor or a vector, I'm not referring to any particular system; you just have the representations for the mathematical objects. So you construct the interaction Lagrangian, and from there you just compute the Feynman diagrams. And also: what is the limitation on the number of variables Maple can deal with? Well, in the program you see at this moment, first of all, the Lagrangian involves only powers up to four, because that's what is renormalizable; that's one thing. The number of fields: there is no restriction on that. The number of vertices: you are taking the scattering matrix and expanding in powers, taking just up to three vertices, so it's up to order three. But the number of variables, and I suppose you mean the number of fields, there is no restriction on the number of fields. Okay, once more, thank you very much for this great webinar; it was awesome, and now people can refer to it. We just remind you that you can follow us on Twitter, on Facebook, and on our YouTube channel, and we will post the Maple worksheet. Thank you very much, Edgardo, for joining us in the Latin American webinars, and let us keep in touch. Okay, thank you, bye. Bye.