Sumit is currently a second-year computer science undergraduate at IIT Bombay. He's a contributor to SymPy as well as SymEngine, and he successfully completed GSoC 2015 with SymEngine under the Python Software Foundation. Let's all give him a big round of applause. Hello everybody, good morning to all of you. I hope you had a good keynote. I'm Sumit from IIT Bombay, as he introduced. I'm representing SymPy here, and the whole SymPy India team is actually here. This morning my specs broke, so I had to dig up a spare pair; things went wrong today, but we're starting on time, and we'll probably finish on time too. Over the first two days of the conference I had awesome conversations with many of you, which was really fun. Those of you I didn't talk to might at least know me as the one who has been wearing a SymPy t-shirt for the last three days. So let's begin. What is SymEngine? We had dev sprints, lightning talks, and workshops on SymPy, so you might have some idea of what SymPy is; now I'll explain what SymEngine is. If you don't understand any of these words, what a fast core is, what a computer algebra system is, we'll get into all of that. I'll also touch upon what SymPy is, how it came about, and what computer algebra systems are. So what do I have for you in this talk? I don't want to go deep into the algorithms or the theory. That's something I've noticed in talks: whenever you go deep, people drop their attention and take nothing back. What I want you to take away from this talk is a basic understanding of what SymPy and SymEngine are. SymEngine is a C++ library, so to those of you who are here asking what a C++ library is doing at a Python conference:
I will tell you why SymEngine was born, and for the pure Pythonistas who want some Python, I'll talk a bit about writing Python wrappers over C++ code and how that ties SymPy and SymEngine together. So what are computer algebra systems? Computer algebra systems basically do algebra: they manipulate mathematical equations. Say you're solving a physics problem, or doing research, and you have a big expression whose integral you want. Doing it manually is almost impossible in theoretical research. Or say you have an equation you want to solve; that too is a hard problem. Integrals are not at all trivial; even doing them by hand takes time, and you might not know how to compute a given integral at all. If you have a system that does that for you, everything is fine. Then people ask: why call it a computer algebra system, and how is it different from numeric computation? That's the usual question: how is SymPy different from NumPy? NumPy is more popular; anybody working with pandas, or with data generally, will have come across NumPy arrays. So how is symbolic computation different from numeric computation? Numeric computation is crunching numbers. Say somebody gives you an equation and you want to find a root: you run Newton-Raphson. It doesn't depend on what the equation actually is; all you have to do is substitute values and evaluate the function. As long as you can reduce the function to a number, a standard algorithm works. Symbolic computation is different: given an expression, you have to manipulate the expression itself.
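The Newton-Raphson point above can be made concrete with a tiny sketch. This is my own illustration, not from the talk; the function, starting guess, and tolerance are arbitrary choices. Note the method never inspects the formula of f, it only evaluates it at points.

```python
def newton_raphson(f, df, x0, tol=1e-12, max_iter=100):
    """Find a root of f numerically. The algorithm is the same for any
    function: it only needs to evaluate f and its derivative df."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Root of x**2 - 2 starting from x = 1: converges to sqrt(2).
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

The same loop finds roots of any equation you can evaluate, which is exactly the "it doesn't depend on the equation" property of numeric computation.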
Say I give it sin x: the program should give me its derivative, cos x. A numerical library can only give the derivative at a particular point, because it only has numerical algorithms; you have to say "differentiate sin at x equal to something". Integrals are the best example I can give. As I said, solving integrals symbolically is not trivial; you need heuristics. Take sin x. All a numerical library can do is compute the integral of sin x over an interval, and that's pretty simple: everybody knows what an integral is. You chop the interval into very small pieces (the more pieces, the more accurate the result), evaluate the function at those points, multiply by the step size, and sum. It doesn't depend on the function: as long as you can evaluate it, you're done. But a symbolic library, given x squared plus sin x, should return its integral as x cubed over 3 minus cos x. That's where it's different: you have many kinds of functions, and you have to handle the integrals of all of them. So how did all this start? It was started by scientists; it was brought to us programmers by scientists. Even the computer algebra system I'm working on was started by a physicist and has mostly been developed by professors and scientists. It began as scientific computing, and SymPy plays a huge role in Python's scientific computing community. But the algorithms became so developed, and gave so many insights, that the field is no longer just a subset of scientific computation; you no longer develop symbolic algorithms only to do scientific computation.
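The interval-chopping idea just described is a plain Riemann sum; here is a minimal sketch (the midpoint rule and the step count are my own choices for the illustration):

```python
import math

def riemann_integral(f, a, b, n=100_000):
    """Numerically integrate f over [a, b]: evaluate f at n midpoints,
    multiply each value by the step size, and sum. Works for any f you
    can evaluate; the formula of f is never inspected."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# The integral of sin over [0, pi] is exactly 2; the sum gets close.
approx = riemann_integral(math.sin, 0.0, math.pi)
```

A symbolic library does the opposite: it returns the antiderivative itself, not a number for one interval.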
People have found applications in cryptography and artificial intelligence. There are good symbolic computation groups all over the world; there's one at the University of Waterloo, and there's also strong research interest in symbolic computation in Austria, if I'm not mistaken. They're doing awesome work. So there are applications in multiple areas; it's no longer just scientific computing. Now, once you have such a symbolic library, you need it fast. Everybody wants it fast. Why fast? When you're doing research and not working by hand, you know you're doing heavy scientific computation, and symbolic algorithms are naturally heavier. You can intuitively guess why: you're manipulating expressions, so your code is full of heuristics. Say you have to compute an integral: you basically try different methods until you find one that solves it. That's how a symbolic algorithm proceeds. So symbolic algorithms are naturally heavy, and heavy scientific study requires fast algorithms. Celestial mechanics is one such area; in SymEngine we came across it while looking at the computer algebra systems competing with us. There's a system called Piranha, developed at the Max Planck Institute for Astronomy. The developer is an astronomy researcher, and all he needed a computer algebra system for was celestial mechanics: he gets complicated expressions, and all he had to do were series expansions, Poisson series and Taylor series. He developed it, and it went on to become his PhD thesis. He has single-handedly developed that computer algebra system for the last ten years, and it's now one of the fastest. SymEngine was born with this goal. So we have SymPy.
SymPy is pretty fast and pretty cool; it has also been in development for about ten years. But we needed something faster. As long as we know there's something out there better than us, and since we have a strong community, we wanted something really fast. So SymEngine was born in parallel: we said we'll keep developing SymPy, and let's also try to build something really fast. That was the goal SymEngine was born with. So what is SymPy? How many of you are aware of SymPy now? And how many of you: is that due to this conference, or from before? All of us are wearing SymPy t-shirts here and there. SymPy is a pure Python library. The other computer algebra systems are MATLAB, Mathematica, Maple, if you have used any; those are proprietary, so you have to pay for a license. SymPy is just pure Python: there is no C++ code. You can open every file and find only Python, written from scratch. It's standalone, with hardly any dependencies; you'll only need dependencies for things like plotting, where you need a plotting library. And once you know the code is in Python: say you're working on a problem and your computer algebra system can't solve it. If you're using Mathematica and it can't do it, you're stuck. But here you can always extend it; you can always build your own scripts on top. All you have to do to use SymPy is import the library in your Python code, and you can use all its functions. Say you want to evaluate the Laplace transform of x to the power a: that's just a three-line script. Execute it and you get the result; it comes out in terms of the gamma function. So I'd like to highlight the power of symbolic computation.
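The three-line script mentioned here would look roughly like this with SymPy (assuming SymPy is available; `noconds=True` just drops the convergence conditions from the result, and declaring the symbols positive keeps the answer simple):

```python
from sympy import symbols, laplace_transform, gamma, simplify

a, x, s = symbols('a x s', positive=True)

# Laplace transform of x**a: the result comes out in terms of the
# gamma function, namely gamma(a + 1) / s**(a + 1).
F = laplace_transform(x**a, x, s, noconds=True)
```

That one call is doing real symbolic work: the answer is a formula valid for every positive a, not a number for one particular a.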
Using SymPy is pretty easy; anybody can get started with it. You import the SymPy library. init_printing lets you print equations in pretty forms: you can ask SymPy to print an equation in LaTeX form so you can include it directly anywhere else, if you want to include it anywhere else. Then you have to declare symbols. Symbols are basically different from numbers: when you say x is a symbol, you're telling SymPy not to treat it as a plain Python value. Once it's a symbol, SymPy knows not to evaluate expressions numerically, and you can pass it to the functions that are there. So you can create expressions: sin x times e to the x. To find the derivative, just call diff on sin x times e to the x; here it's printed in, I think, Unicode, and that's the derivative, which you can verify. You can also integrate symbolically. If you use integrate without passing the tuple (x, minus infinity, infinity), just integrate of sin of x squared with x, you get the integral in symbolic form. If you want to evaluate it, you pass the limits. And this is more accurate: it's the exact value of the integral, so doing it symbolically is better. You can also compute limits. This is just a sliver of what SymPy is; it's not even a percent. You can use solve. I forgot to update this slide: we now have a solveset module. As you might have realized by now, solving equations is an important part of a computer algebra system. Say you have a complicated equation: you pass it to solve, give the variable you want to solve for, and it returns the values.
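A sketch of the kind of session shown on the slides, assuming SymPy is installed (the particular expressions are the ones mentioned in the talk):

```python
from sympy import symbols, sin, cos, exp, sqrt, pi, oo, diff, integrate, limit

x = symbols('x')

# Derivative of sin(x)*e**x: SymPy applies the product rule symbolically.
d = diff(sin(x) * exp(x), x)

# integrate(sin(x**2), x) alone would return the antiderivative in
# symbolic form; passing the tuple (x, -oo, oo) evaluates the definite
# integral exactly, here the Fresnel value sqrt(pi/2).
fresnel = integrate(sin(x**2), (x, -oo, oo))

# Limits are symbolic too.
l = limit(sin(x) / x, x, 0)
```

Each result is exact, which is the "more accurate than numerics" point from the talk.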
There is a new module called solveset: if you give it an equation, it gives the answer in the form of a set. How is it different from solve? With solve, as you can see here, you take an equation and solve it, and it returns values. Say you pass sin x, that is, sin x equal to 0 is the equation you want to solve. Our previous solve returns only x equal to 0. Pass it to solveset, and it returns all the possible values: you can guess that for sin x equal to 0 there are infinitely many solutions, and solveset captures that. Then dsolve is a differential equation solver: you pass it a function and it solves the differential equation for you, and you can verify whether the result is right. And we have a matrix module; this function gives the eigenvalues. The last one is a Bessel rewrite: I think it rewrites a Bessel function in spherical Bessel form. All of this is just to highlight how simple and how extensive SymPy is. So SymPy has the following modules: the core capabilities of writing equations and manipulating expressions, polynomials, calculus, combinatorics, discrete maths. There's also geometry, and physics; I find the physics module very interesting. There's also quantum physics, which is still at a very early stage, with not much implemented; people are working on that. And there is a spin-off project from SymPy called PyDy. One of its developers, Jason Moore, is a professor working on Python dynamics: you've all come across free-body diagrams and all that, and with this you can solve mechanics problems using Python; he's working on that. Then there's statistics, cryptography, parsing, and printing.
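The solve versus solveset difference can be sketched like this (assuming a reasonably recent SymPy, where solveset is available):

```python
from sympy import Symbol, sin, pi, S, solve, solveset

x = Symbol('x')

# solve returns a finite list of representative solutions.
finite = solve(sin(x), x)

# solveset returns a set object describing *all* solutions of
# sin(x) = 0: every integer multiple of pi, represented as a union
# of image sets over the integers, not a finite sample.
sols = solveset(sin(x), x, domain=S.Reals)
```

Membership tests work directly on the infinite set, so `5*pi in sols` is True even though no finite list could contain every solution.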
Printing supports Unicode, LaTeX, even MathML. So, why SymEngine? If I'm telling you that SymPy is so good, with so many modules implemented and so much time spent on it, why was SymEngine born? One reason: Python is too slow. And that can't really be blamed as such; SymPy is astonishingly fast for pure Python, and you can do most of the manipulations you need. But still, there are places where you find that if the same optimizations were done in C++ code, you would be much better off. If the same algorithms were implemented in C++, you'd be much faster. So we felt the need to speed up. SymPy is great as it is, but if somebody else is building a faster project, we should too. What we found is that the Pythonic interface is awesome; that's why all of us are here, we all love Python, so we didn't want to move away from it. The plan was this: write the core in C++ and use it from a Python interface. You write the algorithms in C++, wrap them with Python, and then you can always import them from the shell. So the idea of SymEngine is: at least to start with, rewrite the core capabilities in C++, then have Python wrappers, so you can use it from the Python shell anyway. And once you have this, you can swap it in for the SymPy core: you can pass it optionally to SymPy, and then the other modules of SymPy become pretty fast. That was the whole idea of why SymEngine was born. SymEngine has now been in development for quite a few years; over the last two years it participated in Google Summer of Code, so we got multiple modules implemented. SymEngine is the same idea as SymPy, but everything is in C++.
The internal C++ code of SymEngine looks like this: you have to declare pointers to expressions, and it's not that clean. Actually this sample is still pretty clean; the code gets uglier in the actual source files. And to add expressions and so on, the operators are not overloaded; as a practice, we don't overload operators in C++. Since you're writing a Python wrapper anyway, in the wrapper you can always say that a + b means add(a, b); internally we just use functions. And this RCP you're seeing is just a pointer; it's equivalent to declaring a plain pointer, Basic *x. But what is RCP? It's a reference-counted pointer. With raw pointers, once you have a huge C++ library you can find yourself in segfaults; debugging becomes a pain, and you might not know where you went wrong. With an RCP you can never segfault that way: every time a new copy is made, it increments the reference count, and when the count drops to zero at the end of a scope, the object deallocates itself. So RCP is pretty cool. When writing a C++ library, this is the first thing you have to take care of: follow the right practices, or you'll just be stuck. SymEngine is not as exhaustive as SymPy as of now; we have the basic capabilities. As I said, since the Python wrappers are implemented, you can use SymEngine from the Python shell. We still don't have the beautiful printing we had in SymPy; here the printing is just plain expressions. Otherwise it's the same thing; it looks like SymPy now. You can declare symbols, and once you have expressions you can expand them and so on. Say you have an expression like (2x + y) squared: a symbolic system never simplifies it on its own.
A symbolic system never simplifies on its own unless you tell it to. Otherwise, when you're running an algorithm, it would simplify at every step and the algorithm would become really slow. So once you build a symbolic expression, it stays as it is: if I just write (2x + y) squared, it is just (2x + y) squared. If I tell it to expand or simplify, it will expand. Here (2x + y) squared comes out to 4x² + 4xy + y². The next equation is the same idea, and the last one involves rationals: 1/(yz) minus yz, times yz, comes out to 1 minus y²z². That's the basic arithmetic. The modules are very few for now. Last year in GSoC, a student called Thilina implemented the number theory and matrix modules; these are by far the best modules in SymEngine as of now. Here you have basic combinatorics: binomial(5, 2) gives 10, that's 5 choose 2 if you've done combinatorics. divides tells you whether one number divides another: divides(5, 2) returns false. And these are the number theory functions: prime_factors of 100 returns 2, 2, 5, 5, and prime_factor_multiplicities gives the factorization with exponents; factorizing 90 you get 2 to the 1 times 3 squared times 5 to the 1. And what is Integer(2)? How is it different from a plain 2? Integer(2) is a symbolic 2. You have rational objects in SymPy and rational objects in Python, and they are different: in Python, 3 divided by 2 gives you 1.5, but there 3/2 is kept as the rational 3/2, and all operations are done with it exactly as 3/2. And let's see this: prime generation, which uses the sieve of Eratosthenes, I think. Then the totient function, another number theory function: the totient of n counts the numbers less than n that are coprime to it.
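I can't assume SymEngine's Python bindings are installed here, but the same behaviour can be sketched with SymPy, whose API these examples mirror:

```python
from sympy import symbols, expand, binomial, factorint, totient, Rational

x, y = symbols('x y')

# Nothing simplifies on its own: (2x + y)**2 stays exactly as written
# until you explicitly ask for the expansion.
expr = (2 * x + y) ** 2
expanded = expand(expr)      # 4*x**2 + 4*x*y + y**2

# Number theory helpers like the ones in SymEngine's ntheory module.
choose = binomial(5, 2)      # 10, i.e. 5 choose 2
factors = factorint(90)      # {2: 1, 3: 2, 5: 1}
tot = totient(10)            # 4 numbers below 10 are coprime to 10

# A symbolic rational stays exact: 3/2 is kept as 3/2, never 1.5.
half = Rational(3, 2)
```

The exactness of `Rational` is the point about Integer(2) versus a plain 2: operations never fall back to floating point.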
It basically comes from Fermat's little theorem, or rather its generalization, Euler's theorem: a to the power totient of n is congruent to 1 mod n, when a and n are coprime. Then there is the matrix module. This is just the basics: you can create matrices, you can find inverses, you can add, and there are decomposition algorithms implemented and all that. A basic introduction to matrices would be this: matrices basically come in two types. Say you have a big matrix. When you deal with real-life problems, in some cases you know beforehand that most of the entries are zero and only a very few entries are filled; these are called sparse matrices. And there are matrices where you don't know anything beforehand, or where most of the entries are full; those are called dense matrices. If you know a matrix is sparse, you have better algorithms and can do things faster. That's why there are always two different classes implemented in a computer algebra system: a dense matrix and a sparse matrix. Once you have a matrix, you can find the determinant, the inverse, and all that, and you can also solve equations: if you have an equation in matrix form, you can solve it directly, so once you implement matrices you can solve linear equations. And we have decomposition algorithms: there's QR decomposition, LU, LDU, and all that. There are also two special decompositions implemented, the fraction-free ones: fraction-free LU and fraction-free LDU. People who are interested can go check those out; there's a paper written on them from the University of Western Ontario, I think, if I'm not mistaken. Those are cool algorithms. Say you have a matrix and you find its inverse: most of the entries come out as fractions.
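A sketch of the matrix operations just described, in SymPy syntax (SymEngine's DenseMatrix mirrors this closely; the particular 2x2 matrix is my own example):

```python
from sympy import Matrix

A = Matrix([[2, 1],
            [1, 3]])

det = A.det()        # 2*3 - 1*1 = 5
Ainv = A.inv()       # exact inverse; entries are rationals, not floats

# Solving a linear system A * v = b directly from the matrix form.
b = Matrix([1, 0])
v = A.solve(b)

# LU decomposition: A == L * U (no row swaps needed for this matrix).
L, U, perm = A.LUdecomposition()
```

Because everything is exact rational arithmetic, `A * Ainv` really is the identity, with no floating-point residue.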
And if you're manipulating those fractions directly, things can get a bit slow. The fraction-free algorithm ensures that beforehand, in the factorization, you don't get fractions, so you can manipulate things faster; then you can apply the transformation back at the end. So what plans do we have for SymEngine? The plan is: write the core capabilities in C++ and have Python wrappers. Writing the wrappers is considered equally important; we're not just writing the C++ code, leaving the wrappers as they are, and then writing wrappers for multiple modules after a long time. The wrappers are developed in parallel, because they are also a high priority for us. Once we write the wrappers, SymPy can use it. And if you're aware of Sage: how many of you are aware of Sage? Yeah, so people are aware of Sage. Sage now uses SymEngine as a symbolic engine. Sage used to use GiNaC and Pynac, I don't know if you're aware of those; Sage uses SymPy for some of its symbolic manipulation, but it also had GiNaC and Pynac underneath. Once we implemented this, this year's GSoC project was to make Sage use SymEngine, and now it does. Eventually we want to implement many wrappers: C, Julia, Ruby. The whole point is: don't reinvent the wheel. While developing this, we take enough care that it's the fastest possible; if it's not the fastest possible, it's just another computer algebra system, and there are enough of those out there. And once you have SymEngine very fast, multiple computer algebra systems can use it as their core: if you develop MATLAB wrappers, you can call it from MATLAB, and so on. That's the whole idea of having this. So now, say we have the C++ code and we want to write the Python wrapper: we have a C++ source file and we want the interface.
And we want to use it from Python. There are multiple solutions; this is a good enough list that I found. There's the Python C API, which you can write by hand; if you've tried it or had a look at it, it's very messy. Even to write a small function you need PyObject and who knows what. It's really messy, so it's not even an option to be considered for a proper C++ library. There's Pyrex; Boost.Python, SWIG, and SIP are very popular; and there are the rest. f2py does Fortran-to-Python conversion, I think. And Cython is one solution, and Cython is very popularly used. Don't confuse Cython with CPython: CPython is the core Python implementation; Cython is the tool with which we develop wrappers, and it has one more use, which we'll have a look at. Cython is the option we are considering and using. So how does SymEngine write wrappers to C++ code? You have C++ code, and we don't write the wrappers by hand: we use Cython. Cython is a blend of Python and C; we'll have a look at the code, and once you see it, you'll see why. So what can Cython do? There are two things Cython is used for. First: say you have Python code. With very minimal changes, you can turn it into Cython code, and Cython code can be compiled. So you have Python code, you make minimal changes, it becomes Cython code, you compile, you execute. It is still basically Python, and it's fast, very fast. That's one use. The other is: you have C++, and you write Cython code for it. What the Cython code does is call the C++ functions, and in it you can define Python functions that call those C++ functions. That's the interface. It then creates a Python extension module, so you can just import it. Let's discuss the first use first: improving Python performance. This example is courtesy of Kurt Smith.
I took this example from a SciPy talk. So, improving Python performance: say we have to calculate Fibonacci numbers; everybody's aware of Fibonacci numbers, I'm assuming. The Python function looks like this and the C++ function looks like this; they're pretty much the same. I don't think the number of operations differs much if you check. The only difference you can see is that when you write a, b = 1, 1 in Python, you are creating Python objects, so the precision is huge: it's multi-precision. And here, when you say int in C++, you are limiting the precision. So the only place where it differs is knowing the type of a: in Python the precision is not known, so it's multi-precision. The Cython version looks just like the Python code, pretty much the same, except that beforehand you declare that a, b, and i are ints, so the overhead of being multi-precision is removed. Multiplication in multi-precision is very slow; if you have used the GMP library, I think you will have come across this: declare two GMP integers and do a times b, versus the same for two machine integers; machine integers are pretty fast. Here the only difference is that you declare the types, and that speeds things up a lot. Say the Python performance is 1x: the C++ is about 100x, and the Cython is about 80x, which is pretty cool. We're happy with that too. That's one advantage. The underlying magic is this: when you compile Cython code, what it is actually doing is checking your code and generating equivalent C code, and that C code is built into a Python extension module, so you can import the module and use its functions. So let's try it. I have the exercises here.
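The Fibonacci comparison can be sketched like this; the pure Python version below runs as-is, and the Cython variant differs only in the type declarations, shown in the comment. The 1x / ~100x / ~80x figures are from the talk's slide, not re-measured here.

```python
def fib(n):
    """Iterative Fibonacci. a and b are ordinary Python ints, so they
    carry arbitrary (multi-) precision and never overflow."""
    a, b = 1, 1
    for _ in range(n):
        a, b = a + b, a
    return a

# The Cython version is the same code with declared machine types:
#
#     def fib(int n):
#         cdef int i, a, b
#         a, b = 1, 1
#         for i in range(n):
#             a, b = a + b, a
#         return a
#
# Declaring the types removes the multi-precision overhead, which is
# where most of the speedup over plain Python comes from.
```

The trade-off is exactly the one discussed in the Q&A later: the typed version overflows for large n, while the Python version just keeps growing.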
I don't think I can try all the exercises, but here you see: I have written Cython code. This is just the Python function, except you're telling it the parameter types; the parameters are ints. So let's test whether it's faster than the Python code. This is the test file I'm writing: I'm defining the pure version; the names are a lame attempted pun. It's the same function, just without the declared types. What I'm doing is timing the pure version and timing the Cython version. So let's see. First I'll have to compile; Cython code is compiled, that's the advantage. This is the standard setup script: if there's a single Cython file, I think this is the script for it. cythonize is converting, as I said, the Cython to C code. So let's do this. This is setup with -i and -f as parameters, for in-place and force: in-place means it builds in the same directory, and force means compile it forcefully, regardless of whether it has been compiled before, without using a cache. Let's see. Now I think we can run the test module. That's an ImportError. Okay, wait. I think I figured it out. Let's see. I think I'm not importing the... let's import. What was the file name? Module. Oh, yeah. Okay, wait, I don't know what's going wrong; maybe I was careless while writing the tests. Okay, yeah, cool. The first time I wrote it, I wrote it as hello world. Yeah, so if you can see, let's do this again: the pure Python version takes 0.23 seconds and the Cython version 0.06 seconds. So it's pretty fast. That's one use.
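The standard single-file setup script looks roughly like this (the module name `module.pyx` is illustrative; it assumes Cython is installed). Running `python setup.py build_ext -i -f` builds in place and forces a rebuild, as described above.

```python
# setup.py: build a single Cython file into a Python extension module.
from distutils.core import setup
from Cython.Build import cythonize

setup(
    # cythonize() translates module.pyx into C; the generated C is then
    # compiled into an extension module you can simply `import module`.
    ext_modules=cythonize("module.pyx"),
)
```

This was the idiomatic recipe at the time of the talk; newer Cython documentation prefers setuptools over distutils, but the shape is the same.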
The second thing, as I said, is wrapping C++ code; this is of much more interest to us. What you do in your Cython code is write cdef extern from "string.h": read string.h and take in strlen as a function. Then you define a Python function get_length; what it does is take a string and compute its length with strlen. Calling it from Python: you import the module and you can get the length. I have one more exercise for that. So let's see. Here I have a math.cpp with a basic function add_one, which takes an int and adds one. And what else do I have? I have PyMath; PyMath is the Cython wrapper here. What is PyMath doing? It's taking add_one and defining a Python wrapper over it. Our setup script is pretty much the same; if you want to see how little it differs, all I changed was what gets cythonized. So let's compile. If I import PyMath and call PyMath.py_add_one, this is the function. Yeah, that's it. math.cpp was just C++ code, you have defined the interface, and you're using it from Python. That's what Cython is all about. SymEngine basically does this at a very large scale. When you have to do this for C++ classes and all that, it gets really involved; it's not as simple as it looks here. So, the contributors: I would like to thank all the contributors, still a very small community. Ondřej Čertík is the one who started SymPy back in 2006; he also started SymEngine and is its lead developer now. And we have around 18 contributors, if I'm right. In the order of their first commits, these are the guys; I'd like to thank them all for contributing to SymEngine. So what is the status? We have the basic manipulations and we have the matrix module.
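As a lighter-weight way to see the same "call C from Python" idea without compiling anything, here is the strlen example done with the stdlib ctypes module instead of Cython. This is my own illustration, not how SymEngine does it, and it assumes a Unix-like system where libc can be located.

```python
import ctypes
import ctypes.util

# Locate the C standard library. On Linux, CDLL(None) exposes libc
# symbols via the main program if find_library comes up empty.
_libc_path = ctypes.util.find_library("c")
libc = ctypes.CDLL(_libc_path) if _libc_path else ctypes.CDLL(None)

# Declare the C signature of strlen from string.h: size_t strlen(const char *).
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

def get_length(s: str) -> int:
    """Python wrapper over C strlen, mirroring the Cython get_length."""
    return libc.strlen(s.encode("utf-8"))
```

ctypes is fine for a function or two; for whole C++ class hierarchies, which is SymEngine's situation, Cython's cdef extern declarations scale much better.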
We have the functions module, which has a lot of functions, and there's a number theory module and a polynomial module, which is the crux of most computer algebra systems. I am currently working on that, and we are putting a lot of time into the design decisions and all that. We want it to be really fast: unless we're already faster than the ones out there, there's no point proceeding further. In the past two years, we had six GSoC students. When I put in the talk proposal, I had said there had been no release of SymEngine, but we had our first beta release very recently, version 0.1.0. This was done because we had to make Sage use SymEngine. So what are the future plans? We have to complete the polynomial module. Once the polynomial module is done, you can build on top of it; you can build multiple modules on top of it. Unless it's there, there is very little you can implement. Once you have the polynomial module, you can build series expansion, and you need pattern matching; pattern matching is also a very important tool. Eventually, we are planning assumptions, multiple modules, and wrappers for Julia; the work on Julia wrappers has started. And we want a huge community: the SymPy community is around 400 members, and something that huge would be cool. So, to summarize: I hope you have understood what symbolic computation is all about, what it means to symbolically manipulate expressions, and how it is different from numeric evaluation. And we spoke about SymPy and SymEngine: the ideology is different, why each was born, the whole point of having SymPy and the whole point of having SymEngine are completely different, and we saw how SymEngine can interact with SymPy using Cython. I think there's one slide that I missed. There's this slide: you can already use SymEngine from SymPy.
You can import sympy, and you can import symengine. Say you have a SymPy expression: instead of using a SymPy symbol, if you use a SymEngine symbol, you are much faster. You can already do that. So that's the plan, and that's it. Credit for the Cython example goes to Kurt Smith; I took it from his SciPy 2015 talk. That's it, I hope you enjoyed it. These are my links: GitHub, Facebook and email. As a part of the SymPy team, I would like to thank Vijay and all the organizers for having us here; he allowed the whole team to be in the workshop too. You can see all the team here. If you want to talk, you can talk to me or anyone from SymPy. There's Harsh there; Harsh is a core developer of SymPy, he has been associated with it for the past two years, and he also participated in SciPy US this year. You can talk to him about anything you want to know about the community or the code. Yeah? OK.

Q: Are you guys using anything from C++11?
A: We are, actually. The problem is we are still aiming for multiple compilers; we are aiming for Windows, Mac and Linux support, with Visual Studio and all that. Most of the time these constructs themselves are not supported. But our usage is so minimal that whatever Visual Studio supports, Cython supports too; that I'm sure of.
Q: The reason I was asking is that I felt you could use things like shared pointers or auto pointers.
A: No, we don't use RCP, and we don't have auto pointers as of now.
Q: OK, OK. I was just wondering.
A: Yeah, I'm not aware of the Cython internals, but we don't have that in our code, that I'm sure of.
Q: OK.
Q: The second question: since we're talking about dense and sparse matrices, have you done any comparison with, say, ATLAS or LAPACK, the numerical linear algebra libraries that are out there, just to see how they compare?
A: Those are linear algebra libraries, right?
Q: Yeah, because you guys are also doing decompositions, I thought, so that sort of thing.
A: No, I don't think we have benchmarked against those yet, but it would be cool to. We have just benchmarked against GiNaC, Sage and Piranha, and Piranha is pretty fast. Once we build that, we'll explore, but I would like to benchmark.
Q: Of course, go ahead, and it's very welcome that you're trying to benchmark against Fortran libraries. But talking about something that's already there, closer to Python, have you tried comparing the performance against scipy.sparse?
A: No, I don't think so.
Q: OK. And actually I had another question. Could you go back to the slide where you showed the three examples? Python, C++ and Cython, the same problem.
A: Yeah, yes. So those of you who don't have questions can leave, but I hope you had fun. So, what did you want to ask?
Q: You said something about multi-precision here.
A: Yeah. So here a is one, but you could also do a equal to 10 to the power 50 in Python; it doesn't matter. In C++ you can't do that with plain integers: you know how many bits an integer takes, and if a manipulation results in something very huge, Python can handle it but C++ won't, so you're restricted there. But since you're restricted, it's also faster: once you declare it, the compiler knows you're using just that much space and never have to grow it. In Python you might have to, so the code can get slower.
Q: OK, thank you.
A: You understand? Cool. Question here.
Q: The C++ code that you mentioned, is it single-threaded or multi-threaded?
A: No, we are still single-threaded, but Piranha is multi-threaded, so it's very hard to beat. That's what I wanted to say: Piranha is multi-threaded and we are still pretty close to it, which is pretty cool. The problem with going multi-threaded is compiler support; that's what we are worried about. Anybody else? Otherwise you can meet me outside. So, thanks.

Host: A big round of applause for our speaker.
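As a footnote to the multi-precision question in the Q&A, the point can be seen in a few lines of plain Python:

```python
# Python integers have arbitrary precision, so this "just works":
a = 10 ** 50
b = a * a  # 10**100, no overflow

# A fixed-width C++ integer cannot hold this; a 64-bit signed int
# tops out at 2**63 - 1.
cpp_int_max = 2 ** 63 - 1
print(b > cpp_int_max)  # True
```

The trade-off is exactly the one from the talk: the C++ integer's fixed size is what makes it fast, while Python's integers may grow and reallocate, which costs time.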