Hey, so our next presenter is Boris Daszuta from the University of Otago. Are you a student, a lecturer, or a cool post-grad? Brilliant. Cool. And Boris is here to talk to us about... that stuff, which is probably the most fantastic title. I'll do it, I'll do it: "A path to multi-arbitrary precision distributed computation with Python 3". Thank you very much, Boris.

Yes, so I'm here to talk about a package that I've been developing in order to facilitate the solution of some nonlinear PDEs associated with a problem in my thesis. So let me start by saying a few things. First, a little bit of background. I'm assuming that if you're here, you're probably familiar with things like NumPy, SciPy and so forth, and you've probably heard of something like mpmath and SymPy, which give you extended precision and also carry a lot of the kinds of algorithms that are in NumPy. Then I'll give some prototypes to fix a nice problem that we can attack, and then I'm going to go and look at the internals and so forth, and we'll get there.

Okay, so you've probably encountered this: "Oh hey, Python is a fantastic glue language." Ah, yes, but this is a little bit unfair, because we can do even more: we can write nice Python code that can be made to match C execution rates. This is the idea of just-in-time compilation. There's a nice library for this, I don't know if you've heard of it, called Numba, and I'll say a little bit more about it shortly. So we have this very rich numerical ecosystem that we want to leverage to solve problems quickly, but we can also make use of another nice new package.
I think this was actually developed mostly this year: a thing called Dask. This gives you the possibility of doing out-of-core calculations and parallel calculations, and if you use the right kind of scheduler, then you can do distributed calculations, and it's all basically the same piece of code. So this is good; it makes things easy for us.

Okay, so as I say, we want to solve some partial differential equations, and the particular methods I'm interested in are spectral methods. These are extremely high-order methods, but there's a bit of a downside, because they usually lead to ill-conditioned systems. Hence why I would like to do some arbitrary-precision stuff.

Okay, so first I'll just give a very simple example of what Numba is. You can imagine writing some kind of horrendous loop-heavy thing like this that would probably be quite slow to execute, but for very minimal effort you can insert this kind of decorator, and this will generate compiled code. And there are some other nice things you can do: if you pass flags like nogil, then you can avoid the global interpreter lock, which is nice because then you can also do threading very easily.

I'd also like to make some more comments about Dask, because I think it's fantastic. So let's consider constructing an array with elements like this. Schematically, you'd write something like this; notice that it looks very much like ordinary NumPy, but you're inserting this chunks parameter. In contrast, what happens when you use Dask is that you have a delayed, directed acyclic graph representation of the array, so it's not immediately constructed.
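The Numba slide itself isn't reproduced in the transcript, so here is a minimal sketch of the kind of decorated function being described (the function and its contents are my own; the snippet degrades to plain Python if Numba isn't installed):

```python
import numpy as np

try:
    from numba import jit
except ImportError:                 # degrade gracefully if Numba is absent
    def jit(**kwargs):
        def wrap(fn):
            return fn
        return wrap

# A deliberately loop-heavy function: slow in pure Python, but the
# decorator compiles it to machine code on first call (and nogil
# releases the GIL, so it can be driven from several threads).
@jit(nopython=True, nogil=True)
def row_sums(a):
    out = np.zeros(a.shape[0])
    for i in range(a.shape[0]):
        s = 0.0
        for j in range(a.shape[1]):
            s += a[i, j]
        out[i] = s
    return out

print(row_sums(np.arange(12.0).reshape(3, 4)))   # [ 6. 22. 38.]
```

The first call pays a compilation cost; subsequent calls run at compiled speed.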
You have some nested dictionary of dictionaries that tells you how the array will be constructed. So, yeah, we can then pass this to a scheduler and execute it, and you can farm it out in chunks if you want to do this in a distributed fashion.

Right, so let's set a modest goal. Let's say we'd like to write some algorithm representing some mathematical problem; we'd like to write it only once, and in particular we'd like to be able to leverage mpmath and SymPy, and we'd like to be able to do this in a simple way. So, okay, we'll introduce the notion of specifying an evaluation library. I construct a class that carries this lib_ev (evaluation library) information, and the idea is that you can subclass NumPy arrays and have bound attributes with this information stored within them. Now, why can you do this? Well, usually you think of a NumPy array as containing some kind of float data, but you can also put object types in there, so you can leverage all the underlying algebraic structure that already exists in NumPy arrays.

Okay, so what else would we like to do? Well, we'd like to have a kind of function-delegator construct, and with it, bound methods that have pretty much the same syntax as NumPy, but to which we can now pass this additional lib_ev parameter. So here this lm represents evaluation with mpmath at a precision of, this was, 30, and ls would be evaluation with SymPy.
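The lazy-construction point can be made concrete with a tiny example (the array contents are mine, not the speaker's slide; it falls back to plain NumPy if Dask is not installed):

```python
import numpy as np

try:
    import dask.array as da
    x = da.arange(100, chunks=10)      # lazy: a task graph, not data yet
    total = (x ** 2).sum()             # still lazy, still just a graph
    result = int(total.compute())      # the scheduler executes the graph
except ImportError:                    # plain NumPy fallback if Dask is absent
    result = int((np.arange(100) ** 2).sum())

print(result)   # 328350, the sum of squares 0..99
```

Swapping in a distributed scheduler changes how `compute()` runs, not the code above it, which is exactly the "same piece of code" property being praised.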
So you notice, for instance, here I pass pi over three to sin, and get an exact result, a very accurate result, and then a float result. We'd also like to be able to do something else. If you noticed, when I gave the Numba slide I very awkwardly defined some functions. The reason I did this is because, okay, if we have these delegated functions, we can pass them a parameter that allows us to return a function handle. Sorry, this is getting a little bit convoluted now, but if we have this ability to return the native NumPy functions, then we can still use the same piece of code, have this compilation, and still avoid the GIL using this nogil parameter.

Okay, so let's make a prototype problem and see how this would all work. Here I'm just defining matrix elements in this way; if you like, this is a representation of a Hilbert matrix. These things are horrible because they have an enormous condition number. Roughly, whenever you try to solve a problem and you increase the condition number by a power of 10, you will lose an order of precision in your result. Because this is a symmetric matrix, we're going to attack it with a Cholesky decomposition.

So, okay, you'd like to write a piece of code like this: you make a context with the mpmath evaluation library, and here I'm just increasing the dimension of the matrix. I have embedded a prototype representing this Hilbert matrix, I pass it to this Cholesky solver, and then I make use of the standard syntax.

Okay, but what if this calculation were slow? Well, we'd like to go back and use Dask. What we'd like is consistent interfaces, such that we take the previous code and just replace these functions with the equivalent delegated Dask functions; we'd only have to specify an additional chunks parameter.
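The talk's lib_ev machinery isn't public, so here is a stand-in sketch of why the Hilbert-matrix prototype needs extra precision, using exact stdlib rationals in an object-dtype NumPy array in place of mpmath, and plain Gaussian elimination in place of the talk's Cholesky solver (all names here are mine):

```python
from fractions import Fraction
import numpy as np

def hilbert(n, num=float):
    # H[i, j] = 1 / (i + j + 1); `num` selects the number type
    return np.array([[num(1) / num(i + j + 1) for j in range(n)]
                     for i in range(n)],
                    dtype=object if num is Fraction else float)

def solve_gauss(A, b):
    # plain Gaussian elimination; works for any type supporting + - * /
    A = [list(row) for row in A.tolist()]
    b = list(b)
    n = len(b)
    for k in range(n):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0] * n
    for i in reversed(range(n)):
        s = b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / A[i][i]
    return x

n = 8
x_exact = solve_gauss(hilbert(n, Fraction), [Fraction(1)] * n)   # exact
x_float = np.linalg.solve(hilbert(n, float), np.ones(n))         # float64
err = max(abs(float(e) - f) for e, f in zip(x_exact, x_float))
print(err)   # visibly nonzero: float64 has already lost digits at n = 8
```

The same algorithm runs unchanged on both number types, which is the "write it once, swap the evaluation library" goal in miniature.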
So, okay, we'd like to do this, and we can, and it appears to work quite nicely. If you look at this red line, that's the error if you do the computation with NumPy at float64, and the blue and green curves are the mpmath equivalents. I also do the computation with the Dask implementation, just to check that it's consistent.

Okay, so what's going on? How is this working? If we look at this lib_ev object, it must have bound to it some methods and some information. For instance: is the particular type you're specifying a native number type that can be compiled with the nopython and nogil parameters, or not? In the case of complex128 it is; in the case of something like a SymPy Rational type, it's not. Then we have various methods that allow us to extract the actual class directly, and methods that stipulate the representation of the type as it would appear in a NumPy array.

Okay, so how do we specify just a number? Well, we'd like to delegate this based on the input. If it's, for instance, a float and we require float64, we just store it; if we pass a string like this fraction and we'd like to represent it with mpmath, then we pass that input to mpmath.mpf, which allows for the construction of the thing. So we have to do this kind of business here. What's nice is that you can actually subclass numpy.ndarray and have it carry this lib_ev data, but you can also do some further trickery: you can provide hashing capability, and this allows you to cache things and speed up calculations at key points. You have to be very, very careful when you do this, because it should only be done for immutable types, but hey.
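A minimal sketch of the ndarray-subclassing trick just described, where the array carries its evaluation-library tag through arithmetic (class and attribute names are my guesses at the design; the talk's hashing/caching layer is omitted):

```python
from fractions import Fraction
import numpy as np

class LibArray(np.ndarray):
    """Object-dtype ndarray that carries an evaluation-library tag."""
    def __new__(cls, data, lib_ev=None):
        obj = np.asarray(data, dtype=object).view(cls)
        obj.lib_ev = lib_ev
        return obj

    def __array_finalize__(self, obj):
        # called on views and ufunc results: propagate the tag
        self.lib_ev = getattr(obj, 'lib_ev', None)

a = LibArray([Fraction(1, 3), Fraction(2, 3)], lib_ev='exact')
b = a + a                       # Fraction arithmetic, elementwise
print(type(b).__name__, b.lib_ev, b[0])   # metadata survives arithmetic
```

Because the dtype is `object`, NumPy delegates each elementwise operation to the stored numbers' own `__add__`, which is how one array type can host mpmath, SymPy, or float data uniformly.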
Okay, so how are we actually pushing a given lib_ev to the correct function? Here I just give a very short snippet based on some already-implemented functions. If you look at this mpmath entry, this should just be a dictionary of key-value pairs, because you want your API to essentially mirror what's in NumPy. The annoying thing is that different numerical libraries have different names for the same mathematical function, so you have to start playing these kinds of games. A further thing you'd like is NumPy-style broadcasting for whatever function you're calling; that's why I have these bracketed things. You leverage numpy.frompyfunc, and the idea is that you're writing what you can think of as a function constructor: based on some lib_ev input, it pushes you to the correct numerical implementation. (Sorry, I can't actually see these things up there. Yes, I have to do this.) And in the end, you'd like to inject all of this into a single class, so you can conveniently access every function that you either implemented or pulled from some numerical library.

So, okay, if we follow that approach and have a lot of coffee, then we can actually do this for pretty much any function. Here I have a lot of the NumPy functions that can be evaluated with whatever lib_ev you care about, and then some additional things which make use of just-in-time compilation, in this case some linear-algebraic things and some things associated with polynomial families.

Here are some cool graphs. In this case I evaluate, or rather specify, this one-half using the mpmath library approach. What's not shown at the end is that you're calling mpmath.mpf, and on that side you would have mpmath.sin. So if the particular function is implemented in
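The name-mapping-plus-frompyfunc scheme can be sketched like this (the dictionary layout and the `delegated` helper are my invention, standing in for the talk's dispatcher):

```python
import math
import numpy as np

# each backend maps the NumPy-style name to its own implementation
_BACKENDS = {
    'math':  {'sin': math.sin, 'exp': math.exp},
    'numpy': {'sin': np.sin,   'exp': np.exp},
}

def delegated(name, lib_ev='numpy'):
    fn = _BACKENDS[lib_ev][name]
    if lib_ev == 'numpy':
        return fn                      # already broadcasts over arrays
    return np.frompyfunc(fn, 1, 1)     # wrap a scalar function so it broadcasts

sin = delegated('sin', lib_ev='math')
print(sin(np.array([0.0, math.pi / 2])))   # object array: [0.0 1.0]
```

`frompyfunc` is what buys NumPy-style broadcasting for a backend that only knows about scalars; the price is that results come back as object arrays.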
that library, then this is what it looks like. If the particular function is not implemented in that library, or you write your own implementation, then you have something more like the right-hand side, where your function constructor has to go and look at the actual definition you've provided and do all the evaluations from there. On the left you're just pulling in np.arange.

Okay, so that's kind of great: we can do this local function-delegator, or dispatcher, business. So how do we actually get this to work with something like Dask? Well, you have to be sneaky, and this is very hacky. What you're actually doing, when you do this distributed thing, is spawning a whole bunch of different Python processes; okay, here these are local, so we've locally spawned a whole bunch of different Python processes. What we do is have Dask instantiate these function delegators, and then, instead of using the native implementations within Dask, you map element-wise to the functions bound within the delegators you've instantiated on each of these processes. So this is pretty horrible, but it can be done and it does work, and I don't claim that it's the best way to do it.

And then, well, we'd also like to simplify things in other ways. We'd like to be able to specify functions in the following kind of approach: you have a particular function, and you think of it as having some associated variable labels, and what you'd like is to be able to do some symbolics directly in your Python interpreter. So here I'm constructing some function containers, and I'm saying that we have our function f, which is a function of x, y, n, and what happens is that this becomes alphabetically reordered, so that you have f of n, x, y. The reason for this is that when you want to start having composition of functions,
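A toy version of such a function container with canonical (alphabetical) argument ordering might look like this (the class and its API are entirely my guess; the talk didn't show the implementation):

```python
class FuncContainer:
    """Wraps a callable plus variable labels, canonically ordered."""
    def __init__(self, fn, labels):
        self._fn = fn
        self._orig = tuple(labels)          # order the callable expects
        self.vars = tuple(sorted(labels))   # canonical, alphabetical order

    def __call__(self, **kwargs):
        # accept keywords in any order; evaluate in the callable's order
        return self._fn(*(kwargs[v] for v in self._orig))

f = FuncContainer(lambda x, y, n: x + 2 * y + 3 * n, ['x', 'y', 'n'])
print(f.vars)             # ('n', 'x', 'y'), alphabetically reordered
print(f(x=1, y=2, n=3))   # 14
```

With a fixed canonical ordering, two containers built from differently-ordered signatures still agree on which slot is which, which is what makes composition well-defined.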
so, for instance here, you need this canonicalization so that this evaluation works consistently. And if you wanted to, you could think of component containers, where you have functions with indices that represent something like a tensor.

Here, well, we'd like to continue this kind of abstract approach, so we'd like to have grid mappings embedded. Usually when you formulate some kind of PDE problem, you have it with respect to some geometry, say, while the actual numerical method you want to use is formulated with respect to some fundamental grid, which is different from that geometry. What invariably ends up happening is that you have to map a whole bunch of differential operators, and do a lot of work by hand to rewrite the system onto this adapted grid. But why would you want to do that, when you could write some operators that do it for you? So here I'm just doing a grid mapping from the unit square to a rectangle with side lengths a2 minus a1 and b2 minus b1.

Okay, you can also take these function containers, and, great, you have all this capability to do symbolic manipulations, but you also want to be able to evaluate these things numerically. I don't want to get into details here, but essentially you can construct grid objects and then feed these to the function containers, and you still have all the nice algebra associated with that. And yeah, okay, here I'm getting a little bit carried away, but the point to draw is that I now wish to introduce these quadrature and derivative operators, which represent taking a derivative analytically and integrating analytically.

Okay, so now let's change track a little bit.
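The unit-square mapping, together with the chain-rule factors you need to rewrite differential operators on the fundamental grid, is just an affine map; a sketch (function names are mine, the a1/a2/b1/b2 labels are the talk's):

```python
def unit_square_map(a1, a2, b1, b2):
    """Affine map from [0,1]^2 to [a1,a2] x [b1,b2], plus the Jacobian
    factors needed to rewrite d/dx and d/dy on the fundamental grid."""
    sx, sy = a2 - a1, b2 - b1
    fwd = lambda u, v: (a1 + sx * u, b1 + sy * v)
    # chain rule: d/dx = (1/sx) d/du and d/dy = (1/sy) d/dv
    jac = (1.0 / sx, 1.0 / sy)
    return fwd, jac

fwd, jac = unit_square_map(-1.0, 3.0, 0.0, 2.0)
print(fwd(0.5, 0.5))   # (1.0, 1.0): centre maps to centre
print(jac)             # (0.25, 0.5)
```

For curvilinear (non-affine) mappings the Jacobian becomes position-dependent, which is exactly the bookkeeping the talk's operators are meant to automate.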
I'd like to describe the pseudospectral approach for solving PDEs. This may look a little bit complicated, but the essential idea is that you have some function and you expand it in terms of a basis, so you have this weighted sum, where these phi_n are your basis functions. And why do you care about doing this pseudospectral thing? Well, you have this extremely rapid convergence property: if you look at the magnitude of a coefficient that multiplies these basis functions, it should decay very rapidly as you increase this little n. And where do these phi_n come from? They're just solutions, just some well-known classical polynomial families.

Okay, so let's use that and do some sanity checks. Supposing we took these polynomial families and looked at those which are defined with respect to bounded domains, we'd like to check that we can integrate and differentiate these numerically. Again, we do this by introducing some new test polynomials: here I'm just arbitrarily setting f_n to be this combination and g_n to be that combination, and this horrendous-looking thing is essentially just taking a polynomial, differentiating it, multiplying by the other, and differentiating it twice. The point is that this rho(x) is of fixed order: if you multiply it all out, the highest power of x is some fixed number, and we know that if this fixed number is at most two times capital N, then any scheme that we construct will be exact. Then I introduce this sum here; all this is is a representation of this rho being integrated, and you integrate it with a bunch of different schemes, where this capital X represents different lattice choices.
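That exactness statement is easy to check with NumPy's built-in Gauss-Legendre rule, which integrates polynomials of degree up to 2N - 1 exactly on [-1, 1] with N nodes (this is a generic illustration, not the talk's own scheme or lattice):

```python
import numpy as np

N = 5
x, w = np.polynomial.legendre.leggauss(N)   # nodes and weights

# integrate x^8 over [-1, 1]; degree 8 <= 2N - 1 = 9, so the rule
# is exact up to rounding. The true value is 2/9.
approx = np.sum(w * x**8)
print(abs(approx - 2.0 / 9.0))   # at machine-precision level
```

Raising the degree past 2N - 1 breaks exactness, which is precisely the sanity check the talk's epsilon diagnostic is built around.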
So: how you sample these polynomials. You can do a similar thing with unbounded domains; the details here are not so important. I just want to check that the numerical schemes actually make sense. If we've implemented things correctly, and we take the usual Chebyshev polynomials as basis functions, the Legendre polynomials, and for instance the Jacobi polynomials, then we find that, okay, we can evaluate things, and we lose precision as we increase this capital N; however, we are working consistently, because our errors, when we look at this strange epsilon function that I introduced, are reasonable, and if we increase the precision with which we do the computation, then, okay, we can just keep going. Similarly, on unbounded domains, if we look at the derivative errors, we get something that is reasonable. Here you're looking at Hermite polynomials and Laguerre polynomials; there, at Hermite functions and Laguerre functions. Because these are polynomials and you're working on unbounded domains, you're always going to have this kind of error growth, but you can still decrease it as you wish.

Okay, so let's introduce a test PDE problem. Here I just take a Dirichlet Helmholtz equation in 1D for simplicity, and I impose the boundary condition that the solution dies at the edges of the numerical domain. So we take the Legendre polynomials, or rather some linear combinations of them which satisfy these boundary conditions, and we can substitute these in, integrate against our test functions phi, and essentially end up with a linear system: this thing here. This turns out to be ill-conditioned, which is a sort of characteristic feature of these pseudospectral approaches. So, okay, we do a self-consistent convergence test. How does this work?
Well, you look at the original PDE and you say: okay, maybe I don't know how to solve that, but I could manufacture a numerical solution by sneakily first specifying a psi and plugging it into my equation. If you do that, you then have a right-hand side f, which you take as fixed by this analytically specified psi, and then you go back and try to numerically reconstruct what you originally fixed. If you do that and compare errors, then you find that, okay, this you can control. That's actually quite nice, because if you look at the decay of the tails, these are linear on this semi-log plot, so you have exponential convergence, and you can avoid a lot of issues. And this is for a Gaussian wave packet, so the actual data that you first fix in order to construct this right-hand-side inhomogeneity is non-trivial.

Okay, so where would we like to go from here? Well, I guess we've shown that you can reasonably do these kinds of things, but the point is that you'd like to actually have all this abstract operator stuff working well, and at the moment I'm still trying. Another idea is this: the mpmath package is written by Fredrik Johansson, who has also written something called Arb, which is a native C implementation of this kind of arithmetic. It turns out that someone in Julia land has written bindings to it, which is quite nice, because if you have a numerical package written in C that works at arbitrary precision and is rigorous, well, it's quite fast. We don't appear to have Python 3 bindings for that, and it would be useful to have them. Okay, I guess: questions? Thank you for listening.

Thank you, Boris. So, we're going to be handing out the test papers now; this is worth 15% of your final grade for the semester. Yeah, I'm sorry, I hope you got all of that. We've got plenty of time for questions.
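The manufactured-solution workflow just described can be shown end to end. The talk uses a spectral basis; this sketch uses a second-order finite-difference stand-in for u'' + k^2 u = f on [-1, 1] with u vanishing at the endpoints, purely to illustrate the workflow (pick psi, derive f analytically, reconstruct psi numerically, measure the error); all parameter choices are mine:

```python
import numpy as np

k = 2.0
n = 200                                  # interior grid points
x = np.linspace(-1.0, 1.0, n + 2)
h = x[1] - x[0]
xi = x[1:-1]

g = np.exp(-xi**2)
psi = (1 - xi**2) * g                    # chosen solution; vanishes at +-1
psi_xx = (-4 * xi**4 + 14 * xi**2 - 4) * g   # psi'' worked out by hand
f = psi_xx + k**2 * psi                  # manufactured right-hand side

# three-point Laplacian plus k^2 on the interior grid
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2 + k**2 * np.eye(n)
u = np.linalg.solve(A, f)

err = np.max(np.abs(u - psi))
print(err)    # O(h^2); halving h should shrink this by about 4x
```

With a spectral discretization the same comparison yields the exponential (straight-line-on-semi-log) decay shown on the slide, rather than this algebraic O(h^2) rate.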
So, anybody want to throw something out there? Cool. Tim?

I got a bit carried away, I guess. Just a real basic question: how many digits of precision did you need to go to for some of your work?

Well, okay, the actual problem that I want to solve I didn't describe at all, but I've had issues. It's complicated. Okay, let's back up; I guess I have a little bit of time, so I can talk about that.

Wait, I have a question. What is the actual problem you're trying to solve?

So the idea is that you want to construct initial data for simulations in numerical relativity, and to do this you have to solve constraint equations. To do that, we're using a funny kind of setup where you do a kind of deformation of solutions: you take some nonlinear system, you linearize it, and at each step you're solving a linear problem and then iterating. This is where the issue is, because these linear problems are ill-conditioned: you can't get a solution to the full nonlinear thing that is sufficiently accurate if you take a sufficiently high band limit, a sufficiently large N in these plots. So that's the kind of frustrating thing which this should address, and in principle, if you evaluate with something like 20 or 30 digits of working precision, in the end you'll get something that looks to be accurate to what you'd usually expect at float precision.

Anyone else? Cool. Okay, thank you again, Boris. That was heavy. That's awesome. Thank you very much. Sorry again.